Dissertations / Theses on the topic 'Cloud communication'

1

Espínola, Brítez Laura María. "Efficient communication management in cloud environments." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/666690.

Full text
Abstract:
Scientific applications with High Performance Computing (HPC) requirements are migrating to cloud environments due to the advantages these offer. Cloud computing plays a major role given the compute power it provides while avoiding the cost of maintaining a physical cluster. With features like elasticity and pay-per-use, it helps reduce the researchers' procurement risk. Most HPC applications are implemented using the Message Passing Interface (MPI) standard, a key component in distributed computing tasks. However, for this kind of application in cloud environments, the major drawback is the loss of execution performance caused by the virtualized network, which affects communication latency and bandwidth. To use a cloud environment for scientific applications of this kind, low-latency communication mechanisms are required. The network topology is not visible to users in virtualized environments, making it difficult to apply the existing optimizations, based on network topology information, that were developed for bare-metal cluster environments. In some cases, cloud providers may migrate virtual machines, which impacts the efficiency of routing optimizations and placement algorithms. Moreover, if resource isolation is not guaranteed, resource sharing can lead to variable bandwidth and unstable performance. This thesis presents Dynamic MPI Communication Balance and Management (DMCBM), a middleware that overcomes the communication challenges of HPC applications in the cloud. DMCBM sits between the users' application and the execution environment. It improves message communication latency in cloud-based systems and helps users detect mapping and parallel-implementation issues. Our solution dynamically rebalances communication flows at higher levels of the virtualized HPC stack, i.e. above the MPI communication layer, to remove communication hot-spots and congestion in the underlying layers.
DMCBM abstracts the state of communications between application processes based on latency measurements. The middleware characterizes the underlying network topology and analyzes the behavior of parallel applications in the cloud. This allows it to detect network congestion and optimize communications by either selecting alternative communication paths between processes or leveraging live migration of virtual machines in cloud environments. These options are analyzed in real time and selected according to the type of congestion (link or destination). DMCBM achieves lower application execution time under congestion, obtaining better performance in clouds. Finally, experiments verifying the functionality and improvements of DMCBM with MPI applications in public and private clouds are presented. The experiments were done by measuring execution and communication times, using the NAS Parallel Benchmarks and NBody, a real dynamic particle simulation application, obtaining an improvement of up to 10% in execution time and a communication-time reduction of about 40% in congestion scenarios.
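The latency-based rebalancing the abstract describes can be sketched roughly as follows. This is an illustrative toy, not DMCBM's actual implementation: the class name, the congestion threshold, and the two-hop relay rule are all assumptions.

```python
# Illustrative sketch of latency-based congestion detection and rerouting,
# in the spirit of DMCBM (names and thresholds are assumed, not the thesis's).

class CommBalancer:
    def __init__(self, n_procs, congestion_factor=2.0):
        self.n = n_procs
        self.factor = congestion_factor
        self.baseline = {}   # (src, dst) -> best latency ever observed
        self.current = {}    # (src, dst) -> most recent latency

    def record(self, src, dst, latency):
        key = (src, dst)
        self.current[key] = latency
        # keep the smallest latency seen as the uncongested baseline
        if key not in self.baseline or latency < self.baseline[key]:
            self.baseline[key] = latency

    def is_congested(self, src, dst):
        key = (src, dst)
        return self.current[key] > self.factor * self.baseline[key]

    def best_relay(self, src, dst):
        """Pick an intermediate process whose two-hop path beats the direct one."""
        direct = self.current[(src, dst)]
        best, best_lat = None, direct
        for relay in range(self.n):
            if relay in (src, dst):
                continue
            lat = (self.current.get((src, relay), float("inf"))
                   + self.current.get((relay, dst), float("inf")))
            if lat < best_lat:
                best, best_lat = relay, lat
        return best  # None means: keep the direct path
```

The key design point mirrored here is that congestion is inferred purely from latency measurements at the application level, so no knowledge of the virtualized network topology is required.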
APA, Harvard, Vancouver, ISO, and other styles
2

Ward, Daniel R. "Reaper – Toward Automating Mobile Cloud Communication." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1707.

Full text
Abstract:
Mobile devices connected to cloud-based services are becoming a mainstream way of delivering up-to-date, context-aware information to users. Connecting mobile applications to cloud services requires significant developer effort, yet this communication code usually follows certain patterns, varying according to the specific type of data sent to and received from the server. By analyzing the causes of these variations, we can create a system that automates the creation of the code that communicates from a mobile device to a cloud server. To automate code creation, a general pattern must be extracted. This general solution can then be applied to any database configuration. Automating this process frees up valuable development time, allowing developers to spend it making other parts of the application and/or backend service a better experience for the end user.
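The pattern-extraction idea above can be sketched as template-based code generation: one template captures the recurring communication pattern, and each schema entry is rendered into concrete client code. Everything here (the endpoint shape, the resource names) is a made-up illustration, not Reaper's actual design.

```python
# Hypothetical sketch of generating mobile-to-cloud communication code
# from a data schema; the endpoint layout is an assumption.

FETCH_TEMPLATE = '''def fetch_{resource}(client, item_id):
    """GET a {resource} record from the backend and keep only schema fields."""
    payload = client.get("/api/{resource}/" + str(item_id))
    return {{field: payload[field] for field in {fields!r}}}
'''

def generate_fetcher(resource, fields):
    """Render communication code for one resource. The same template is
    reused for every schema entry -- that reuse is the automation step."""
    return FETCH_TEMPLATE.format(resource=resource, fields=fields)
```

A generated function can then be loaded and used against any client object exposing a `get(path)` method, which is how the approach stays independent of the specific backend.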
3

Mullins, Taariq. "Participatory Cloud Computing: The Community Cloud Management Protocol." Thesis, University of Cape Town, 2014. http://pubs.cs.uct.ac.za/archive/00000999/.

Full text
Abstract:
This thesis takes an investigative approach to developing a middleware solution for managing services in a community cloud computing infrastructure predominantly made up of interconnected low-power wireless devices. The thesis extends slightly beyond this narrow framing to ensure that heterogeneity is accounted for. The developed framework, in its draft implementation, provides networks with value-added functionality in a way that minimally impacts the nodes on the network. Two sub-protocols are developed and successfully implemented to achieve efficient discovery and allocation of the community cloud's resources. First results are promising: the developed system shows not only low resource consumption but also the ability to effectively transfer services through the network while evenly distributing load amongst the computing resources on the network.
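The even load distribution mentioned above can be illustrated with a minimal greedy allocator: each service goes to the currently least-loaded node. This is a generic sketch under assumed inputs, not the thesis's actual sub-protocol.

```python
# Greedy least-loaded allocation sketch (node/service model is assumed):
# placing the costliest services first keeps the final load spread tight.

def allocate(services, nodes):
    """services: {name: cost}; nodes: list of node ids.
    Returns ({service: node}, {node: total_load})."""
    load = {n: 0 for n in nodes}
    placement = {}
    # largest-first ordering is the classic greedy balancing heuristic
    for service, cost in sorted(services.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)   # least-loaded node wins
        placement[service] = target
        load[target] += cost
    return placement, load
```

In a real community cloud the discovery sub-protocol would supply the `nodes` list and their current loads; here both are simply passed in.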
4

Sundaram, Madhu, and Kejvan Redjamand. "Strategy of Mobile Communication System Providers in Cloud (Implementation of cloud in telecom by Ericsson)." Thesis, Blekinge Tekniska Högskola, Sektionen för management, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2814.

Full text
Abstract:
Telecom operators are experiencing low revenues due to the reduction of voice calls and SMS in their networks, driven mainly by communication services such as Skype, Google Talk, MSN and other VoIP (voice over Internet Protocol) products. Instant messaging services and social networking are also taking away the operators' customers, reducing them to “dumb pipes”, with OTT (over-the-top) players like Google, Microsoft and other content providers making profits at the operators' expense. The growth of operators' revenue is not keeping pace with the growth of traffic in their networks, creating the perception that content providers and OTT players do not share the revenue generated using the operators' infrastructure. The operators are therefore increasingly reduced to acting as “dumb pipes” connecting the content generated by OTTs with the operators' subscribers. The operator's revenue stream is one-sided, coming only from the subscriber, usually as a flat data plan. Operators are looking at new revenue models, and the cloud computing market is a business opportunity that allows them to monetize their network resources, with the possibility of earning revenue from both the subscriber and the content providers. The communication system providers, who are the communication equipment vendors to the operators, are indirectly affected by the shrinking operator revenue. In this thesis, we address how telcos and system vendors can differentiate themselves in the cloud computing market against other cloud service providers and monetize the network resources they own. We discuss the roles in the cloud value network, the activities in the value chain that could be adopted, and the business opportunities they could pursue. We begin by introducing the telecom operator market and the challenges faced by the industry today. The research question we are targeting is then discussed, followed by the limitations of the thesis.
The telecommunication industry, cloud computing technology and the relevant service delivery models are discussed. A literature review is then done to formulate our theory; theory on strategy by Porter, Prahalad and other researchers who have contributed to research on cloud computing is discussed. The adopted method is then proposed. The collected data is first presented and then analyzed before the results of the analysis are discussed.
5

Łaskawiec, Sebastian. "Effective solutions for high performance communication in the cloud." Doctoral dissertation, Uniwersytet Technologiczno-Przyrodniczy w Bydgoszczy, 2020. http://dlibra.utp.edu.pl/Content/2268.

Full text
6

Sgarbi, Andrea. "Machine Cloud Connectivity: a robust communication architecture for Industrial IoT." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Industry 4.0 springs from the fourth industrial revolution, which is bringing innovation to fully automated and interconnected industrial production. The movement is composed of macro areas that expand the technological horizon beyond the tools used to date. The use of data, computing power and connectivity are the fundamental concepts on which this thesis is based; they take concrete form in big data, open data, the Internet of Things (IoT), machine-to-machine communication, and cloud computing for the centralization of information and its storage. Once data has been collected, value must be derived from it in order to gain the advantages of machine learning, i.e. machines that improve their performance by learning from the data collected and analyzed. The advent of the Internet of Things can be seen in all respects as the greatest technological revolution of recent years, and it will bring a huge amount of information into the hands of users, offering countless advantages in daily life and in the diagnostics of the production process. Industrial IoT (IIoT) enables manufacturing organizations to create a communication path through the automation pyramid, obtaining a real data stream with which to improve machine performance. From an information security point of view, the importance of the transmitted information should not be underestimated; this too is an important aspect of Industry 4.0, and protocols and authentication systems are constantly updated to ensure the privacy and security the customer needs. This thesis project addresses the implementation requirements, studies and analyzes technologies from different vendors, and constructs a cloud architecture. The focus is on cybersecurity and on avoiding information loss in order to obtain a robust transfer.
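The loss-avoidance requirement in the abstract is commonly met with a store-and-forward buffer on the machine side: readings are kept locally and only discarded after the cloud acknowledges them. The sketch below assumes a generic acknowledging transport; it is not the architecture the thesis builds.

```python
# Minimal store-and-forward sketch for loss-free machine-to-cloud transfer:
# readings are buffered locally and dropped only after an acknowledged send.
from collections import deque

class StoreAndForward:
    def __init__(self, transport):
        self.transport = transport   # callable returning True on cloud ack
        self.buffer = deque()

    def enqueue(self, reading):
        self.buffer.append(reading)

    def flush(self):
        """Send in order; stop at the first failure so nothing is lost."""
        sent = 0
        while self.buffer:
            if not self.transport(self.buffer[0]):
                break                # link down: keep the reading buffered
            self.buffer.popleft()
            sent += 1
        return sent
```

Because the reading is popped only after a successful send, a connectivity outage between `enqueue` and `flush` never drops data, at the cost of local storage.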
7

Sajjad, Ali. "A secure and scalable communication framework for inter-cloud services." Thesis, City University London, 2015. http://openaccess.city.ac.uk/14415/.

Full text
Abstract:
Many contemporary cloud computing platforms offer the Infrastructure-as-a-Service (IaaS) provisioning model, which delivers basic virtualized computing resources like storage, hardware and networking as on-demand, dynamic services. However, a single cloud service provider does not have limitless resources to offer its users, and users increasingly demand extensibility and interoperability with other cloud service providers. This has increased the complexity of the cloud ecosystem and resulted in the emergence of the Inter-Cloud environment, in which a cloud computing platform can use the infrastructure resources of other cloud computing platforms to offer greater value and flexibility to its users. However, no common models or standards exist that allow the users of cloud service providers to provision even basic services across multiple providers seamlessly, although admittedly this is not due to any inherent incompatibility or proprietary nature of the foundation technologies on which these cloud computing platforms are built. There is therefore a justified need to investigate models and frameworks that allow users of cloud computing technologies to benefit from the added value of the emerging Inter-Cloud environment. In this dissertation, we present a novel security model and protocols that aim to cover one of the most important gaps in this field: the problem of provisioning secure communication within a multi-provider Inter-Cloud environment. Our model offers a secure communication framework that enables a user of multiple cloud service providers to provision a dynamic, application-level secure virtual private network on top of the participating providers.
We accomplish this by leveraging the scalability, robustness and flexibility of peer-to-peer overlays and distributed hash tables, together with a novel use of applied cryptography techniques, to design secure and efficient admission control and resource discovery protocols. The peer-to-peer approach helps us eliminate the problems of manual configuration, key management and peer churn that are encountered when setting up secure communication channels dynamically, whereas the secure admission control and secure resource discovery protocols plug the security gaps commonly found in peer-to-peer overlays. In addition to the design and architecture of our research contributions, we present the details of a prototype implementation containing all the elements of our research, and we showcase experimental results detailing the performance, scalability and overheads of our approach, obtained on actual (as opposed to simulated) commercial and non-commercial cloud computing platforms. These results demonstrate that our architecture incurs minimal latency and throughput overheads for the Inter-Cloud VPN connections among the virtual machines of a service deployed on multiple cloud platforms: 5% and 10% respectively. Our results also show that our admission control scheme is approximately 82% more efficient, and our secure resource discovery scheme about 72% more efficient, than a standard PKI-based (Public Key Infrastructure) scheme.
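The distributed-hash-table idea behind such resource discovery can be illustrated with a consistent-hashing ring: resources and nodes are hashed onto the same ring, and the successor node owns each key, so any peer can locate a resource without central coordination. This is a generic DHT sketch under assumed node names, not the dissertation's protocol.

```python
# Sketch of DHT-style resource discovery across an Inter-Cloud overlay:
# each resource key hashes onto a ring of nodes, and the first node
# clockwise from the key owns it (node ids here are illustrative).
import hashlib
from bisect import bisect_right

def ring_hash(value):
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % (2 ** 32)

class Overlay:
    def __init__(self, node_ids):
        # each node owns the arc of the ring ending at its hashed position
        self.ring = sorted((ring_hash(n), n) for n in node_ids)

    def locate(self, resource_key):
        """Successor rule: first node clockwise from the key's position."""
        point = ring_hash(resource_key)
        idx = bisect_right(self.ring, (point, chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]
```

A useful property of the successor rule is stability: removing a node that does not own a key leaves that key's owner unchanged, which limits churn-induced reconfiguration.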
8

Pourghomi, Pardis. "Managing near field communication (NFC) payment applications through cloud computing." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/8538.

Full text
Abstract:
Near Field Communication (NFC) is a short-range radio communication technology which enables users to exchange data between devices. NFC provides contactless data transmission between smartphones, personal computers (PCs), personal digital assistants (PDAs) and similar devices, and enables a mobile phone to act as identification and as a credit card for its owner. The NFC chip can act as a reader as well as a card, and can also be used to design symmetric protocols. Having several parties involved in the NFC ecosystem, without a common standard, affects the security of this technology, as all the parties claim access to the client's information (e.g. bank account details). The dynamic relationships between the parties in an NFC transaction make them partners in a way that sometimes leads them to share their access permissions on the applications running in the service environment. Each party can only access its own part of the transaction, as the parties are not fully aware of each other's rights and access permissions. This lack of mutual knowledge makes the management and ownership of the NFC ecosystem very puzzling. To address this issue, a security module called the Secure Element (SE) was designed as the basis of NFC security. However, there remain security issues with SE personalization, management, ownership and architecture that can be exploited by attackers and delay the adoption of NFC payment technology. Reorganizing and describing what is required for the success of this technology motivated us to extend the current NFC ecosystem models to accelerate the development of this business area. One technology that can be used to ensure secure NFC transactions is cloud computing, which offers a wide range of advantages compared to the use of the SE as a single entity in an NFC-enabled mobile phone.
We believe cloud computing can solve many issues regarding NFC application management. Therefore, in the first contribution of this thesis we propose a new payment model called the “NFC Cloud Wallet”. This model demonstrates a reliable structure for an NFC ecosystem which satisfies the requirements of an NFC payment during the development process in a systematic, manageable, and effective way.
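One way to picture moving the security anchor from the handset's SE to the cloud is a cloud-side message-authentication check: the cloud holds the client's key and verifies each transaction request. The sketch below is purely illustrative (the key provisioning, client ids and message format are all assumptions, not the thesis's protocol).

```python
# Hedged sketch of a cloud-side check in an "NFC Cloud Wallet"-style model:
# the cloud, not the handset's Secure Element, holds the client key and
# verifies each transaction (key handling here is purely illustrative).
import hmac
import hashlib

SHARED_KEYS = {"client-42": b"demo-key"}   # provisioned out of band (assumed)

def sign(client_id, message):
    """MAC the transaction bytes with the client's cloud-held key."""
    return hmac.new(SHARED_KEYS[client_id], message, hashlib.sha256).hexdigest()

def cloud_verify(client_id, message, tag):
    """Accept the NFC transaction only if the MAC matches the stored key."""
    expected = sign(client_id, message)
    return hmac.compare_digest(expected, tag)
```

`hmac.compare_digest` is used rather than `==` so the comparison is constant-time, a standard precaution when verifying authentication tags.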
9

Sathyamoorthy, Peramanathan. "Enabling Energy-Efficient Data Communication with Participatory Sensing and Mobile Cloud : Cloud-assisted crowd-sourced data-driven optimization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-274875.

Full text
Abstract:
This thesis proposes a novel power management solution for resource-constrained devices in the context of the Internet of Things (IoT). We focus on smartphones in the IoT, as they are increasingly popular and equipped with strong sensing capabilities. Smartphones have complex, chaotic, asynchronous power consumption incurred by heterogeneous components, including their onboard sensors, and their interaction with the cloud can support computation offloading and remote data access via the network. In this work, we aim to monitor the power consumption behaviour of smartphones and profile individual applications and the platform in order to make better power management decisions. Our solution is a cloud orchestration architecture that predicts the behaviour of smart devices with respect to time, location, and context. We design and implement this architecture to provide an integrated cloud-based energy monitoring service, which enables the monitoring of power consumption on smartphones and supports data analysis on the massive data logs collected by a large number of users.
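The cloud-side analysis step can be sketched as a simple aggregation: crowd-sourced power samples are grouped per application into energy totals that profiles can be built from. The log format here is an assumption for illustration, not the thesis's actual schema.

```python
# Sketch of per-application energy aggregation over crowd-sourced logs
# (the (app, milliwatts, seconds) sample format is assumed).
from collections import defaultdict

def profile_energy(logs):
    """logs: iterable of (app, milliwatts, seconds) samples.
    Returns joules consumed per application: J = W * s."""
    joules = defaultdict(float)
    for app, milliwatts, seconds in logs:
        joules[app] += milliwatts / 1000.0 * seconds
    return dict(joules)
```

Running this server-side over logs from many users is what makes the monitoring "cloud-assisted": the handsets only upload samples, while the heavy analysis happens off-device.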
10

Eldh, Erik. "Cloud connectivity for embedded systems." Thesis, KTH, Kommunikationssystem, CoS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-118746.

Full text
Abstract:
Deploying an embedded system to act as a controller for electronics is not new. Today these kinds of systems are all around us and are used for a multitude of purposes. In contrast, cloud computing is a relatively new approach to computing as a whole. This thesis project explores these two technologies in order to create a bridge between two wildly different platforms. Such a bridge should enable new ways of exposing features and doing maintenance on embedded devices. This could save companies time and money on maintenance tasks for embedded systems, and it would also avoid the need to host this maintenance software on dedicated servers; instead these tasks could use cloud resources only when needed. This thesis explores such a bridge and presents techniques suitable for joining the two computing paradigms. Exploring what cloud computing includes, by examining the technologies available for deployment, is important in order to get a picture of what the market has to offer. More important is how such a deployment can be done and what the benefits are. How technologies such as databases, load balancers and computing environments have been adapted to a cloud environment, what drawbacks and new features are available there, and how a solution can exploit these features in a real-world scenario are all of interest. Three different cloud providers and their products are presented to give an overview of the current offerings. To realize a solution, a way of communicating and exchanging data is presented and discussed, again with a real-world scenario in mind. This thesis presents the concept of cloud connectivity for embedded systems, describes a prototype of how such a solution could be realized and utilized, and evaluates current cloud providers against the prototype's requirements.
A middleware solution is proposed that draws its strength from the services cloud vendors offer for deployment. This middleware acts in a stateless manner to provide communication and a bridging of functionality between two parties with different capabilities. This approach creates a flexible common ground for end-user clients and relieves the embedded systems of the burden of processing and distributing information to the clients themselves. The solution also provides an abstraction of the embedded systems, further securing communication with them, since it is enabled only for valid middleware services.
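The stateless bridging idea can be illustrated as a pure translation function: every request carries all the context the middleware needs, so any instance can turn a raw embedded-device reading into a client-friendly record without keeping session state. The field names and unit convention below are invented for the example.

```python
# Sketch of stateless middleware bridging: a pure function of its input,
# so any instance can serve any request (field names are assumptions).

def bridge(request):
    """Translate a raw embedded-device reading into a client-facing record.
    No state is kept between calls -- the hallmark of the stateless design."""
    device, raw = request["device"], request["raw"]
    return {
        "device": device,
        "temperature_c": raw["t_decicelsius"] / 10.0,  # device sends tenths
        "online": raw["status"] == 1,
    }
```

Because `bridge` has no state of its own, instances can be scaled out or replaced freely in the cloud, which is exactly the elasticity benefit the thesis attributes to the middleware layer.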
11

Alawadi, Tareq A. "Investigation of the effects of cloud attenuation on satellite communication systems." Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7991.

Full text
Abstract:
The aim of this project is to investigate the attenuation due to clouds at 20-50 GHz, to develop an accurate long-term prediction model of cloud attenuation applicable to slant-path links, and to evaluate the impact of cloud attenuation dynamics on the design of future portable EHF earth-space systems. Higher frequencies offer several advantages, for example greater bandwidth and immunity to ionospheric effects. The EHF band is being targeted for the launch of earth-space communication systems that provide global delivery of bandwidth-intensive services (e.g. interactive HDTV, broadband Internet access, multimedia services, television receive-only, etc.) to portable terminal units. Since spectrum shortage and terminal bulk currently preclude the realization of these breakthrough broadband wireless communication services at lower frequencies, a better understanding is needed in order to optimize their usage. One major obstacle in the design of EHF earth-space communication systems is the large and variable signal attenuation in the lower atmosphere, due to a range of mechanisms including attenuation (and scattering) by clouds and rain, tropospheric scintillation caused by atmospheric turbulence, and variable attenuation by atmospheric gases. In particular, cloud attenuation becomes very significant at EHF. This thesis starts with a literature review (chapter 1), followed by the theory and description of the accepted, up-to-date cloud attenuation models in the field (chapter 2), and then a description of the pre-processing, validations, sources and assumptions underlying the analysis of cloud attenuation in this work (chapter 3). A comprehensive analysis of meteorological and local tropospheric degradation is then carried out (chapter 4), followed by an overview of cloud fade statistics and suggested methods to counter their effects (chapter 5).
Finally, the improved cloud attenuation model is presented, enhancing the currently accepted model (ITU-R 840.4) by validating the effective-temperature concept and ways of acquiring it (chapter 6).
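For context, the slant-path form used by the ITU-R P.840 family of models computes attenuation from the columnar cloud liquid water content and a frequency- and temperature-dependent specific attenuation coefficient. The sketch below implements only that outer form; the coefficient value in the test is illustrative, not a tabulated ITU-R figure.

```python
# Hedged numeric sketch of the ITU-R P.840-style slant-path cloud
# attenuation form: A = L * Kl / sin(elevation), with L the columnar
# liquid water content (kg/m^2) and Kl the specific attenuation
# coefficient ((dB/km)/(g/m^3)); Kl values used here are illustrative.
import math

def cloud_attenuation_db(liquid_water_kg_m2, kl, elevation_deg):
    """Cloud attenuation in dB; the form is stated for elevations of
    roughly 5 to 90 degrees, so reject angles outside that range."""
    if not 5.0 <= elevation_deg <= 90.0:
        raise ValueError("model form assumes 5-90 degree elevation")
    return liquid_water_kg_m2 * kl / math.sin(math.radians(elevation_deg))
```

The `1/sin(elevation)` factor captures the longer path through the cloud layer at low elevation angles, which is why attenuation at 30° is exactly twice the zenith value.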
12

Teyeb, Hana. "Optimisation intégrée dans un environnement cloud." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL010/document.

Full text
Abstract:
Dans les systèmes cloud géographiquement distribués, un défi majeur auquel sont confrontés les fournisseurs de cloud consiste à optimiser et à configurer leurs infrastructures. En particulier, cela consiste à trouver un emplacement optimal pour les machines virtuelles (VMs) afin de minimiser les coûts tout en garantissant une bonne performance du système. De plus, en raison des fluctuations de la demande et des modèles de trafic, il est essentiel d'ajuster dynamiquement le schéma de placement des VMs en utilisant les techniques de migration des VMs. Cependant, malgré ses avantages apportés, dans le contexte du Cloud géo-distribué, la migration des VMs génère un trafic supplémentaire dans le réseau backbone ce qui engendre la dégradation des performances des applications dans les centres de données (DCs) source et destination. Par conséquent, les décisions de migration doivent être bien étudiés et basées sur des paramètres précis. Dans ce manuscrit, nous étudions les problèmes d'optimisation liés au placement, à la migration et à l'ordonnancement des VMs qui hébergent des applications hautement corrélées et qui peuvent être placés dans des DCs géo-distribués. Dans ce contexte, nous proposons un outil de gestion de DC autonome basé sur des modèles d'optimisation en ligne et hors ligne pour gérer l'infrastructure distribuée du Cloud. Notre objectif est de minimiser le volume du trafic global circulant entre les différents DCs du système.Nous proposons également des modèles d'optimisation stochastiques et déterministes pour traiter les différents modèles de trafic de communication. En outre, nous fournissons des algorithmes quasi-optimaux qui permettent d'avoir la meilleure séquence de migration inter-DC des machines virtuelles inter-communicantes. En plus, nous étudions l'impact de la durée de vie des VMs sur les décisions de migration afin de maintenir la stabilité du Cloud. 
Enfin, nous utilisons des environnements de simulation pour évaluer et valider notre approche. Les résultats des expériences menées montrent l'efficacité de notre approche.
In geo-distributed cloud systems, a key challenge faced by cloud providers is to optimally tune and configure their underlying cloud infrastructure. An important problem in this context deals with finding an optimal virtual machine (VM) placement, minimizing costs while at the same time ensuring good system performance. Moreover, due to the fluctuations of demand and traffic patterns, it is crucial to dynamically adjust the VM placement scheme over time. Hence, VM migration is used as a tool to cope with this problem. However, despite the benefits brought by VM migration, in the geo-distributed cloud context, it generates additional traffic in the backbone links which may affect the application performance in both source and destination DCs. Hence, migration decisions need to be effective and based on accurate parameters. In this work, we study optimization problems related to the placement, migration and scheduling of VMs hosting highly correlated and distributed applications within geo-distributed DCs. In this context, we propose an autonomic DC management tool based on both online and offline optimization models to manage the distributed cloud infrastructure. Our objective is to minimize the overall expected traffic volume circulating between the different DCs of the system. To deal with different types of communication traffic patterns, we propose both deterministic and stochastic optimization models to solve the VM placement and migration problem and to cope with the uncertainty of inter-VM traffic. Furthermore, we propose near-optimal algorithms that provide the best inter-DC migration sequence of inter-communicating VMs. Along with that, we study the impact of the VMs' lifetime on the migration decisions in order to maintain the stability of the cloud system. Finally, to evaluate and validate our approach, we use experimental tests as well as simulation environments. The results of the conducted experiments show the effectiveness of our proposals.
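The objective these optimization models share, minimizing the traffic volume circulating between DCs, can be illustrated with a minimal sketch (the VM names, DC names, placement and traffic values below are invented for illustration and are not taken from the thesis):

```python
# Hypothetical illustration of the inter-DC traffic objective.
# traffic[(i, j)] is the traffic rate between VMs i and j;
# placement maps each VM to the data center (DC) hosting it.
# Traffic between co-located VMs stays inside one DC and is not counted.

def inter_dc_traffic(traffic, placement):
    vms = list(placement)
    total = 0
    for a in range(len(vms)):
        for b in range(a + 1, len(vms)):
            i, j = vms[a], vms[b]
            if placement[i] != placement[j]:
                total += traffic.get((i, j), 0) + traffic.get((j, i), 0)
    return total

traffic = {("vm1", "vm2"): 10, ("vm2", "vm3"): 4, ("vm1", "vm3"): 1}
placement = {"vm1": "DC-A", "vm2": "DC-B", "vm3": "DC-B"}
print(inter_dc_traffic(traffic, placement))  # vm1's links cross DCs: 11

# Migrating vm1 to DC-B co-locates all three correlated VMs,
# eliminating the inter-DC traffic (at the one-off cost of the migration).
placement["vm1"] = "DC-B"
print(inter_dc_traffic(traffic, placement))  # 0
```

A migration scheduler in this spirit would weigh such gains against the backbone traffic generated by the migration itself, which is what the thesis's online/offline models and migration-sequencing algorithms capture.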
APA, Harvard, Vancouver, ISO, and other styles
13

Sant'Ana, Fábio Sousa de. "Avaliação de desempenho de mecanismos de segurança em ambientes PACS (Picture Archiving and Communication System) baseados em computação em nuvem." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/17/17157/tde-29032017-162812/.

Full text
Abstract:
Introdução: A adoção de um Sistema de Arquivamento e Distribuição de Imagens (PACS, do inglês Picture Archiving and Communication System) é condição fundamental para a estruturação de um ambiente radiológico sem filme. Um PACS é composto basicamente por equipamentos e sistemas informatizados interconectados em rede, direcionados à aquisição, armazenamento (ou arquivamento), recuperação e apresentação de imagens médicas aos especialistas responsáveis por avaliá-las e laudá-las. A computação em nuvem vem ao encontro dos PACS e surge como uma maneira de simplificar o compartilhamento de imagens entre organizações de saúde, promover a virtualização de espaços físicos e garantir o seu funcionamento ininterrupto. Objetivo: Este estudo teve como objetivo implementar um PACS simplificado em ambiente cloud computing privado, com foco nas funcionalidades de arquivamento e disponibilização de imagens médicas, e avaliar questões de segurança e performance. Metodologia: As imagens que compuseram o PACS do ambiente cloud foram obtidas através do PACS físico atualmente em uso no Centro de Ciência das Imagens e Física Médica do Hospital das Clinicas da Faculdade de Medicina de Ribeirão Preto - CCIFM/HCFMRP. Para os procedimentos da avaliação de segurança foram construídos cenários que possibilitavam a: 1) anonimização de dados de identificação dos pacientes através de criptografia computacional em base de dados utilizando o algoritmo de criptografia Advanced Encryption Standard - AES, 2) transferência de imagens médicas seguras através de conexão com a Internet utilizando Virtual Private Network - VPN sobre o protocolo Internet Protocol Security - IPSec (VPN/IPSec) e 3) envio seguro através de tunelamento baseado em Secure Shell - SSH.
Resultados: Foi identificada uma queda de performance no envio de informações para a nuvem quando submetido aos níveis de segurança propostos, sugerindo a relação entre aumento de segurança e perda de performance e apontando para a necessidade de estudos de desempenho quando da condução de projetos que envolvam a adoção, em ambientes clínicos, de soluções PACS baseadas em cloud computing.
Introduction: the adoption of a PACS (Picture Archiving and Communication System) is fundamental for the structuring of a radiological environment without film. A PACS comprises, essentially, hardware and information systems interconnected in a network, oriented towards the acquisition, storage (or archiving), retrieval and presentation of medical images to the specialists entrusted with analyzing and assessing them. Cloud computing comes to the support of PACS, simplifying medical image sharing between health care organizations and promoting the virtualization of physical infrastructure to assure uninterrupted availability of the PACS. Goal: This study aimed to implement a simplified PACS in a private cloud computing environment, and subsequently to evaluate its security and performance. Methodology: The images that formed the new PACS were obtained from the existing PACS of Centro de Ciência das Imagens e Física Médica of Hospital das Clinicas da Faculdade de Medicina de Ribeirão Preto - CCIFM/HCFMRP. To evaluate its security, scenarios were built within the following framework: 1) patient identification data anonymization through computational database cryptography, using the AES (Advanced Encryption Standard) algorithm; 2) transfer of encrypted medical images over the Internet using VPN (Virtual Private Network) over IPSec (Internet Protocol Security); and 3) secure traffic through Secure Shell (SSH) tunneling. Results: There was a performance drop in the transfer of information to the cloud under the proposed security levels, suggesting a relationship between increased security and loss of performance, and pointing to the need for performance studies when conducting projects that involve the adoption of cloud-based PACS solutions in clinical environments.
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Yu. "A Data-centric Internet of Things Framework Based on Public Cloud." Licentiate thesis, Linköpings universitet, Fysik, elektroteknik och matematik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159770.

Full text
Abstract:
The pervasive application of the Internet of Things (IoT) can be seen in many aspects of human daily life and industrial production. The concept of IoT originates from traditional machine-to-machine (M2M) communications, which aimed at solving domain-specific and application-specific problems. Today, the rapid progress of communication technologies, the maturation of Internet infrastructures, the continuously reduced cost of sensors, and the emergence of more open standards have heralded the approach of the expected IoT era, which envisions full connectivity between the physical world and the digital world via the Internet protocol. The popularity of cloud computing technology has accelerated this IoT transformation, benefiting from superior computing capability and flexible data storage, not to mention the security, reliability and scalability advantages. However, a series of obstacles still confront the industry in the deployment of IoT services. First, due to the heterogeneity of hardware devices and application scenarios, the interoperability and compatibility between link-layer protocols, sub-systems and back-end services are significantly challenging. Second, device management requires a uniform scheme to implement the commissioning, communication, authorization and identity management needed to guarantee security. Last, the heterogeneity of data format, speed and storage mechanism for different services poses a challenge to further data mining. This thesis aims to address these challenges by proposing a data-centric IoT framework based on public cloud platforms. It aims to provide a universal architecture to facilitate the deployment of IoT services in the massive IoT and broadband IoT categories. The framework involves three representative communication protocols, namely WiFi, Thread and LoRaWAN, to enable support for local, personal, and wide area networks.
A security assessment taxonomy for wireless communications in building automation networks is proposed as a tool to evaluate the security performance of the adopted protocols, so as to mitigate potential network flaws and guarantee security. The Azure cloud platform is adopted in the framework to provide device management, data processing and storage, visualization, and intelligent services, thanks to its mature cloud infrastructure and uniform device and data models. We also exhibit the value of the study by applying the framework to the digitalization of the green plant wall industry. Based on the framework, a remote monitoring and management system for green plant walls is developed as a showcase to validate the feasibility. Furthermore, three specialized visualization methods are proposed and a neural network-based anomaly detection method is deployed in the project, showing the potential of the framework in terms of data analytics and intelligence.
APA, Harvard, Vancouver, ISO, and other styles
15

Moiz, A. (Abdul). "Design, implementation and testing of a mobile cloud." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201512082287.

Full text
Abstract:
The telecommunication industry has experienced major breakthroughs in the last few decades due to the immense development of information technology. Ubiquitous connectivity, the rapid increase in the number of low-cost yet powerful smart devices, and the quantum leap in social networking are posing new challenges to cope with current as well as future requirements. While a substantial amount of work has been done in this context, particularly on cooperative and cognitive networks, that approach has certain limitations and shortcomings. The three characteristic challenges are enhancing system throughput, adapting to dynamic environments, and productively utilizing the available resources. Here we present the mobile cloud, a novel yet simple system model that employs cognitive and cooperative strategies to address all three of these challenges. The system exploits short-range links to establish a small social network among nearby devices, adapts to its environment, and uses various cooperation strategies to achieve efficient utilization of resources. Lastly, we implemented an experimental mobile cloud and carefully assessed its performance under varying parameters and against the legacy approach used in similar contexts. The analysis provided a sound understanding of the system model's compliance with the primary objectives as well as with future networks.
APA, Harvard, Vancouver, ISO, and other styles
16

Persson, Mathias. "Communication Protocol for a Cyber-Physical System : Using Bluetooth, NFC and cloud." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175159.

Full text
Abstract:
The focus of this thesis is to utilize many of today's technologies to design a communication protocol that allows different devices to be incorporated into a system that can facilitate the flow of information between a user and a world of digital data. The protocol takes advantage of the individual benefits of NFC, Bluetooth and cloud computing in its design to make the underlying complexity as transparent to the user as possible. Some of the main problems, such as security and reliability, are discussed, along with how they are incorporated into the core design of the protocol. The protocol is then applied to a case study to see how it can be utilized to create an integrity-preserving system for managing medical records in a healthcare environment. The results from the case study give merit to the guidelines provided by the protocol specifications, making a system implementation based on the protocol theoretically possible. A real system implementation is required to verify the results extracted from the case study.
Denna uppsats fokuserar på att använda många av dagens teknologier för att konstruera ett kommunikationsprotokoll som möjliggör för olika enheter att inkorporeras i ett system som underlättar informationsflödet mellan en användare och en värld av digital data. Protokollet utnyttjar olika individuella fördelar hos NFC, Bluetooth och molntjänster i dess design för att göra den underliggande komplexiteten så transparent som möjligt för användaren. Några av de främsta problemen, såsom säkerhet och tillförlitlighet, diskuteras, liksom hur de inkorporeras i hjärtat av protokollet. Protokollet appliceras sedan i en fallstudie för att se hur det kan användas för att skapa ett system för sjukjournaler som bevarar integriteten hos patienter. Resultatet från fallstudien pekar mot att de riktlinjer som gavs av protokollspecifikationerna fungerar för att göra en systemimplementation möjlig på en teoretisk nivå. En verklig systemimplementation skulle behövas för att verifiera de resultat som framgår ur fallstudien.
APA, Harvard, Vancouver, ISO, and other styles
17

Chilwan, Ameen. "Dependability Differentiation in Cloud Services." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for telematikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13120.

Full text
Abstract:
As cloud computing is becoming more mature and pervasive, almost all types of services are being deployed in clouds. This has also widened the spectrum of cloud users, which extends from domestic users to large companies. One of the main concerns of large companies outsourcing their IT functions to clouds is the availability of those functions. On the other hand, availability requirements for domestic users are not very strict. This requires cloud service providers to guarantee different dependability levels for different users and services. This thesis is based upon this requirement of dependability differentiation of cloud services depending upon the nature of services and target users. In this thesis, different types of services are identified and grouped together both according to their deployment nature and their target users. Also, a range of techniques for guaranteeing dependability in the cloud environment are identified and classified. In order to quantify the dependability provided by different techniques, a cloud system is modeled. Two different levels of dependability differentiation are considered, namely: differentiation depending upon the state of the standby replica, and differentiation depending upon the spatial separation of active and standby replicas. These two levels are separately modeled by using Markov state diagrams and reliability block diagrams, respectively. Due to the limitations imposed by Markov models, the former differentiation level is also studied by using a simulation. Finally, numerical analysis is conducted and the different techniques are compared. Also, the best technique for each user and service class is identified depending upon the results obtained. The most crucial components for guaranteeing dependability in the cloud environment are also identified.
This will direct the future prospects of study and also provide an idea to cloud service providers about the cloud components that are worth investing in, for enhancing service availability.
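As a rough, self-contained illustration of the kind of steady-state availability comparison such Markov models enable, consider one repairable service instance versus a replicated pair (the failure and repair rates below are invented for illustration, not figures from the thesis):

```python
# Illustrative two-state Markov availability model (hypothetical rates).
# A single repairable unit alternates between "up" (fails at rate lam)
# and "down" (repaired at rate mu); its steady-state availability is
# A = mu / (lam + mu).
def availability(lam, mu):
    return mu / (lam + mu)

# With n independently failing and independently repaired replicas,
# the service is unavailable only when all replicas are down at once.
def availability_replicated(lam, mu, n=2):
    return 1 - (1 - availability(lam, mu)) ** n

lam = 1 / 1000  # assumed: one failure per 1000 hours
mu = 1 / 4      # assumed: 4-hour mean repair time
print(f"single instance: {availability(lam, mu):.6f}")
print(f"1+1 replicated:  {availability_replicated(lam, mu):.6f}")
```

The thesis's models are richer than this sketch: the state of the standby replica (hot versus cold) and the spatial separation of active and standby replicas both change the transition structure, which is precisely what differentiates the dependability classes.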
APA, Harvard, Vancouver, ISO, and other styles
18

Moyer, Daniel William. "Punching Holes in the Cloud: Direct Communication between Serverless Functions Using NAT Traversal." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103627.

Full text
Abstract:
A growing use for serverless computing is large parallel data processing applications that take advantage of its on-demand scalability. Because individual serverless compute nodes, which are called functions, run in isolated containers, a major challenge with this paradigm is transferring temporary computation data between functions. Previous works have performed inter-function communication using object storage, which is slow, or in-memory databases, which are expensive. We evaluate the use of direct network connections between functions to overcome these limitations. Although function containers block incoming connections, we are able to bypass this restriction using standard NAT traversal techniques. By using an external server, we implement TCP hole punching to establish direct TCP connections between functions. In addition, we develop a communications framework to manage NAT traversal and data flow for applications using direct network connections. We evaluate this framework with a reduce-by-key application compared to an equivalent version that uses object storage for communication. For a job with 100+ functions, our TCP implementation runs 4.7 times faster at almost half the cost.
Master of Science
Serverless computing is a branch of cloud computing where users can remotely run small programs, called "functions," and pay only based on how long they run. A growing use for serverless computing is running large data processing applications that use many of these serverless functions at once, taking advantage of the fact that serverless programs can be started quickly and on-demand. Because serverless functions run on isolated networks from each other and can only make outbound connections to the public internet, a major challenge with this paradigm is transferring temporary computation data between functions. Previous works have used separate types of cloud storage services in combination with serverless computing to allow functions to exchange data. However, hard-drive-based storage is slow and memory-based storage is expensive. We evaluate the use of direct network connections between functions to overcome these limitations. Although functions cannot receive incoming network connections, we are able to bypass this restriction by using a standard networking technique called Network Address Translation (NAT) traversal. We use an external server as an initial relay to setup a network connection between two functions such that once the connection is established, the functions can communicate directly with each other without using the server anymore. In addition, we develop a communications framework to manage NAT traversal and data flow for applications using direct network connections. We evaluate this framework with an application for combining matching data entries and compare it to an equivalent version that uses storage based on hard drives for communication. For a job with over 100 functions, our implementation using direct network connections runs 4.7 times faster at almost half the cost.
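The coordination step behind TCP hole punching, an external rendezvous server handing each function the public endpoint observed for its peer, can be sketched in a simplified, socket-free form (the class, method and field names below are invented for illustration; this is not the thesis's actual framework):

```python
# Simplified rendezvous bookkeeping for TCP hole punching (illustrative).
# Each function registers under a shared pairing key along with the public
# (IP, port) the server observes for it. The second arrival is handed the
# first peer's endpoint; in a real system the server would also notify the
# first peer, and both sides would then attempt simultaneous outbound
# connects so that their NAT mappings line up and a direct TCP connection
# forms without further help from the server.

class RendezvousServer:
    def __init__(self):
        self.pending = {}  # pairing key -> (peer name, observed endpoint)

    def register(self, key, name, endpoint):
        """Return the other peer's (name, endpoint) once both have registered."""
        if key in self.pending:
            return self.pending.pop(key)
        self.pending[key] = (name, endpoint)
        return None  # first arrival: wait (poll or block) for the peer

server = RendezvousServer()
assert server.register("job42", "fn-A", ("203.0.113.5", 40001)) is None
peer = server.register("job42", "fn-B", ("198.51.100.9", 40002))
print(peer)  # fn-B now knows fn-A's public endpoint and can connect to it
```

Once both functions hold each other's public endpoint, each repeatedly calls `connect()` toward the peer; because both NATs have seen matching outbound attempts, one of the connection attempts succeeds and the relay drops out of the data path.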
APA, Harvard, Vancouver, ISO, and other styles
19

Wicaksana, Arief. "Infrastructure portable pour un système hétérogène reconfigurable dans un environnement de cloud-FPGA." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT088/document.

Full text
Abstract:
La haute performance ainsi que la basse consommation d'énergie offertes par les Field-Programmable Gate Arrays (FPGAs) contribuent à leur popularité en tant qu'accélérateurs matériels. Cet argument a été soutenu par les intégrations récentes des FPGAs dans des systèmes cloud et des centres de données. Toutefois, le potentiel d'une architecture reconfigurable peut encore être optimisé en traitant les FPGAs comme une ressource virtualisée et en leur offrant une capacité de multitâche. L'interruption d'une tâche sur FPGA afin d'effectuer un changement de contexte matériel (hardware context switch) est un sujet de recherche depuis de nombreuses années. Les travaux précédents ont principalement proposé une stratégie pour extraire d'un FPGA le contexte d'une tâche en cours d'exécution, afin d'offrir la possibilité de sa reprise ultérieure. Cependant, la communication tout au long du processus n'a pas reçu autant d'attention. Dans cette thèse, nous étudions la gestion de la communication d'une tâche matérielle durant son changement de contexte. Cette gestion de la communication est nécessaire pour garantir la cohérence de la communication d'une tâche dans un système reconfigurable doté de la capacité de changement de contexte. Autrement, un changement de contexte matériel n'est autorisé que sous des contraintes restrictives ; il n'est possible qu'une fois les flux de communication terminés et toutes les données d'entrée/sortie consommées. De plus, certaines techniques exigent l'homogénéité de la plate-forme pour qu'un changement de contexte matériel puisse se réaliser. Nous présentons ici un mécanisme qui conserve la cohérence de communication durant un changement de contexte matériel dans une architecture reconfigurable. Les données de communication sont gérées avec le contexte de tâche pour assurer leur intégrité.
La gestion du contexte et des données de communication suit un protocole spécifique pour architectures hétérogènes reconfigurables. Ce protocole permet donc un changement de contexte matériel pendant que la tâche a encore des flux de communication en cours. À partir des expérimentations, nous constatons que le surcoût de la gestion de communication devient négligeable, car notre mécanisme fournit, outre la cohérence des communications, la grande réactivité nécessaire à l'allocation préemptive des tâches. Enfin, les applications de la solution proposée sont présentées dans un prototype de migration de tâches et dans un système utilisant un hyperviseur.
Field-Programmable Gate Arrays (FPGAs) have been gaining popularity as hardware accelerators in heterogeneous architectures thanks to their high performance and low energy consumption. This argument has been supported by the recent integration of FPGA devices in cloud services and data centers. The potential offered by reconfigurable architectures can still be optimized by treating FPGAs as virtualizable resources and offering them multitasking capability. How to preempt a hardware task on an FPGA with the objective of context switching it has been a research topic for many years. Previous works mainly proposed strategies to extract the context of a running task from the FPGA to provide the possibility of its resumption at a later time. The communication during the process, on the contrary, has not received much attention. In this work, we study the communication management of a hardware task while it is being context switched. This communication management is necessary to ensure consistency in the communication of a task with context switch capability in a reconfigurable system. Otherwise, a hardware context switch can only be allowed under restrictive constraints, which may lead to a considerable performance penalty: context switching a task is possible only after the communication flows finish and the input/output data have been consumed. Furthermore, certain techniques demand homogeneity in the platform before a hardware context switch can take place. We present a mechanism which preserves communication consistency during a hardware context switch in a reconfigurable architecture. The input/output communication data are managed together with the task context to ensure their integrity. The overall management of the hardware task context and communication data follows a dedicated protocol developed for heterogeneous reconfigurable architectures.
This protocol thus allows a hardware context switch to take place while the task still has ongoing communication flows on Reconfigurable Systems-on-Chip (RSoCs). From the experiments, we find that the overhead due to managing the communication data becomes negligible, since our mechanism provides the high responsiveness necessary for preemptive scheduling, besides the consistency in communication. Finally, the applications of the proposed solution are presented in a task migration prototype and in a hypervisor-based system.
APA, Harvard, Vancouver, ISO, and other styles
20

Ramagoffu, Madisa Modisaotsile. "The impact of network related factors on Internet based technology in South Africa : a cloud computing perspective." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/22820.

Full text
Abstract:
Outsourcing, consolidation and cost savings of IT services are increasingly becoming an imperative source of competitive advantage and a great challenge for most local and global businesses. These challenges not only affect consumers, but also the service providers' community. As IT is slowly becoming commoditised, consumers, such as business organisations, are increasingly expecting IT services that will mimic other utility services such as water, electricity, and telecommunications. To this end, no one model has been able to emulate these utilities in the computing arena. Cloud Computing is the recent computing phenomenon that attempts to be the answer to most business IT requirements. This phenomenon is gaining traction in the IT industry, with a promise of advantages such as cost reduction, elimination of upfront capital outlay, pay-per-use models, shared infrastructure, and high flexibility allowing users and providers to handle high elasticity of demand. The critical success factor that remains unanswered for most IT organisations and their management is: what is the effect of the communication network factors on Internet-based technology such as Cloud Computing, given the emerging market context? This study therefore investigates the effect of four communication network factors (price, availability, reliability and security) on the adoption of Cloud Computing by IT managers in a South African context, including their propensity to adopt the technology. The study reviews numerous technology adoption theories, of which the Technology, Organisation and Environment (TOE) framework is selected due to its organisational focus as opposed to an individual focus. Based on the results, this study proposes that Bandwidth (Pricing and Security) should be included in any adoption model that involves services running on the Internet.
The study attempts to contribute to the emerging literature on Cloud Computing and the Internet in South Africa, in addition to offering significant considerations on Cloud Computing adoption both to organisations contemplating it and to Cloud Providers.
Dissertation (MBA)--University of Pretoria, 2012.
Gordon Institute of Business Science (GIBS)
APA, Harvard, Vancouver, ISO, and other styles
21

Oueis, Jessica. "Gestion conjointe de ressources de communication et de calcul pour les réseaux sans fils à base de cloud." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM007/document.

Full text
Abstract:
Cette thèse porte sur le paradigme « Mobile Edge Cloud » qui rapproche le cloud des utilisateurs mobiles et qui déploie une architecture de clouds locaux dans les terminaisons du réseau. Les utilisateurs mobiles peuvent désormais décharger leurs tâches de calcul pour qu'elles soient exécutées par les femto-cellules (FCs) dotées de capacités de calcul et de stockage. Nous proposons ainsi un concept de regroupement de FCs dans des clusters de calculs qui participeront aux calculs des tâches déchargées. A cet effet, nous proposons, dans un premier temps, un algorithme de décision de déportation de tâches vers le cloud, nommé SM-POD. Cet algorithme prend en compte les caractéristiques des tâches de calculs, des ressources de l'équipement mobile, et de la qualité des liens de transmission. SM-POD consiste en une série de classifications successives aboutissant à une décision de calcul local, ou de déportation de l'exécution dans le cloud. Dans un deuxième temps, nous abordons le problème de formation de clusters de calcul à mono-utilisateur et à utilisateurs multiples. Nous formulons le problème d'optimisation relatif qui considère l'allocation conjointe des ressources de calculs et de communication, et la distribution de la charge de calcul sur les FCs participant au cluster. Nous proposons également une stratégie d'éparpillement, dans laquelle l'efficacité énergétique du système est améliorée au prix de la latence de calcul. Dans le cas d'utilisateurs multiples, le problème d'optimisation d'allocation conjointe de ressources n'est pas convexe. Afin de le résoudre, nous proposons une reformulation convexe du problème équivalente à la première puis nous proposons deux algorithmes heuristiques dans le but d'avoir un algorithme de formation de cluster à complexité réduite. L'idée principale du premier est l'ordonnancement des tâches de calculs sur les FCs qui les reçoivent. Les ressources de calculs sont ainsi allouées localement au niveau de la FC.
Les tâches ne pouvant pas être exécutées sont, quant à elles, envoyées à une unité de contrôle (SCM) responsable de la formation des clusters de calculs et de leur exécution. Le second algorithme proposé est itératif et consiste en une formation de cluster au niveau des FCs ne tenant pas compte de la présence d'autres demandes de calculs dans le réseau. Les propositions de cluster sont envoyées au SCM qui évalue la distribution des charges sur les différentes FCs. Le SCM signale tout abus de charges pour que les FCs redistribuent leur excès dans des cellules moins chargées. Dans la dernière partie de la thèse, nous proposons un nouveau concept de mise en cache des calculs dans l'Edge cloud. Afin de réduire la latence et la consommation énergétique des clusters de calculs, nous proposons la mise en cache de calculs populaires pour empêcher leur réexécution. Ici, notre contribution est double : d'abord, nous proposons un algorithme de mise en cache basé, non seulement sur la popularité des tâches de calculs, mais aussi sur les tailles et les capacités de calculs demandés, et la connectivité des FCs dans le réseau. L'algorithme proposé identifie les tâches aboutissant à des économies d'énergie et de temps plus importantes lorsqu'elles sont téléchargées d'un cache au lieu d'être recalculées. Nous proposons ensuite d'exploiter la relation entre la popularité des tâches et la probabilité de leur mise en cache, pour localiser les emplacements potentiels de leurs copies. La méthode proposée est basée sur ces emplacements, et permet de former des clusters de recherche de taille réduite tout en garantissant de retrouver une copie en cache.
Mobile Edge Cloud brings the cloud closer to mobile users by moving cloud computational efforts from the Internet to the mobile edge. We adopt a local mobile edge cloud computing architecture, where small cells are empowered with computational and storage capacities. Mobile users' offloaded computational tasks are executed at the cloud-enabled small cells. We propose the concept of small cell clustering for mobile edge computing, where small cells cooperate in order to execute offloaded computational tasks. A first contribution of this thesis is the design of a multi-parameter computation offloading decision algorithm, SM-POD. The proposed algorithm consists of a series of low-complexity successive and nested classifications of computational tasks at the mobile side, leading to local computation or to offloading to the cloud. To reach the offloading decision, SM-POD jointly considers computational task, handset, and communication channel parameters. In the second part of this thesis, we tackle the problem of setting up small cell clusters for mobile edge cloud computing, for both single-user and multi-user cases. The clustering problem is formulated as an optimization problem that jointly optimizes the computational and communication resource allocation and the computational load distribution over the small cells participating in the computation cluster. We propose a cluster sparsification strategy, where we trade cluster latency for higher system energy efficiency. In the multi-user case, the optimization problem is not convex. In order to compute a clustering solution, we propose a convex reformulation of the problem, and we prove that both problems are equivalent. With the goal of finding a lower-complexity clustering solution, we propose two heuristic small cell clustering algorithms. The first algorithm is based, as a first step, on resource allocation at the serving small cells where tasks are received.
Then, in a second step, unserved tasks are sent to a small cell managing unit (SCM) that sets up computational clusters for the execution of these tasks. The main idea of this algorithm is task scheduling at both the serving small cell and SCM sides, for higher resource allocation efficiency. The second proposed heuristic is an iterative approach in which serving small cells compute their desired clusters, without considering the presence of other users, and send their cluster parameters to the SCM. The SCM then checks for excess resource allocation at any of the network small cells and reports any load excess to the serving small cells, which redistribute this load over less loaded small cells. In the final part of this thesis, we propose the concept of computation caching for edge cloud computing. With the aim of reducing edge cloud computing latency and energy consumption, we propose caching popular computational tasks to prevent their re-execution. Our contribution here is two-fold: first, we propose a caching algorithm that is based on request popularity, computation size, required computational capacity, and small cell connectivity. This algorithm identifies the requests that, if cached and downloaded instead of being re-computed, will yield the largest energy and latency savings. Second, we propose a method for setting up a search small cell cluster for finding a cached copy of a request's computation. The clustering policy exploits the relationship between task popularity and the probability of being cached, in order to identify possible locations of the cached copy. The proposed method reduces the search cluster size while guaranteeing a minimum cache hit probability.
APA, Harvard, Vancouver, ISO, and other styles
22

Goméz, Villanueva Daniel. "Secure E-mail System for Cloud Portals : Master Thesis in Information and Communication Systems Security." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-108080.

Full text
Abstract:
Email is a well-established technology used worldwide for enterprise and private communication over the Internet. It allows people to communicate using text, but also other information formats, either as HTML or attached files. The communication is performed without the need for synchronized endpoints, relying on email servers that store and forward email letters. These standardized properties do not, however, include security, which makes the choice of service provider hard when the letters sent through the email system include sensitive information. In the last few years there has been considerable interest and growth in the area of cloud computing. Placing resources (computers, applications, information) outside local environments, thanks to high-speed Internet connections, provides countless possibilities. Even email systems can be deployed in cloud computing environments, including all the email services (interface, client, and server) or a part of them. From a security point of view, the use of cloud computing introduces many threats generated by external parties and even by the cloud providers themselves. For these reasons, this work presents an innovative approach to security in a cloud environment, focusing on the security of an email system. The purpose is to find a solution for an email system deployable in a cloud environment, with all the functionality deployed on an external machine. This email system must be completely protected, minimizing the actions taken by the user, who should just connect to a portal through a web browser. This report details the foundations, progress, and findings of the research that has been carried out.
The main objectives involve: researching the concepts and state of the art of cloud computing, email systems, and security; presenting a cloud computing architecture that takes care of the general aspects of security; designing an email system for that architecture that contains mechanisms protecting it from possible security threats; and finally, implementing a simplified version of the design to test and prove its feasibility. After all these activities, the findings are discussed, noting the applicability of the research results to the current situation. Naturally, there is room for more in-depth research on several topics related to cloud computing and email, which is why some of them are suggested.
APA, Harvard, Vancouver, ISO, and other styles
23

Cuttitta, Anthony R. "Talking about technology: A metaphoric analysis of cloud computing and web 2.0." Thesis, Northern Arizona University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1550099.

Full text
Abstract:

This research investigates the discourse around the terms web 2.0 and cloud computing, which are used as metaphors for information technology. In addition to the disruptive technologies and applications to which they refer, both of these terms have affected information technology, its use, and the way it is perceived. This research examines how this impact has varied over time and by audience. The usage of the terms is examined through a rhetorical analysis of a sampling of articles from the general publications The New York Times, The Washington Post, and USA Today, and the professional publications InformationWeek and CIO Magazine. The research is an analysis of these artifacts using critical methods influenced by metaphoric analysis, symbolic interactionism, and Burke's concept of symbolic action. Metaphors serve as cognitive tools in discourse communities for understanding new domains, the tenor or target of the metaphor, through references to shared symbols, the vehicle or source of the metaphor. Metaphors may be mostly descriptive, as epiphors, or persuasive, as diaphors. This research shows that the web 2.0 and cloud computing metaphors served a persuasive purpose for helping people understand disruptive technology through familiar experiences. Rhetors used the metaphors in persuading audiences whether or not to adopt the new technologies. As the new technologies became accepted and adopted, problems arose which were obscured in the original metaphor, so new metaphors emerged to highlight and conceal various aspects of the technologies. Some of these new metaphors arose with systematicity in the same domain of the original metaphor, while others came from different domains. The ability of the metaphor to be used in various rhetorical situations as the technology evolves affects the usefulness of the metaphor over time. 
The usage of web 2.0 shortly after the dot-com boom and bust cycle of the late 1990s and early 2000s allowed rhetors to frame web 2.0 as an economic phenomenon, casting the collaborative aspects of the technology as tools for making money in a perceived second dot-com bubble. The failure of the second dot-com bubble to materialize, along with user frustration with the emphasis on the economic aspects of collaboration and the limited usefulness of the software release cycle in representing continuous technical change, led to infrequent use of web 2.0 as a metaphor. Other metaphors, like social networking and social media, arose as new source domains to represent some of the collaborative aspects of the original technologies. Some minor referents of web 2.0, like software as a service and data centers, became referents of cloud computing, which uses a natural archetype of clouds as the source domain to reference the target domain of hosted information technology services accessible through multiple devices. As a natural domain, the cloud metaphor is more extensible than web 2.0 and as a result may have more longevity. The cloud computing metaphor also became associated with lightning, electricity, experimentation, and utility through a fuzzy semantic relationship. The utility metaphor worked with cloud to emphasize the ease of implementation of cloud-based solutions. As practical problems arose with implementing cloud solutions, new metaphors arose. Some of these worked within the cloud domain, such as the idea of storms, to emphasize the downsides of cloud computing. Other metaphors arose in new source domains to emphasize territory and private ownership in hosted solutions. By providing an in-depth rhetorical analysis of these IT metaphors, this research can serve as a guide for evaluating rhetorical and metaphoric responses to future disruptive technical changes.

APA, Harvard, Vancouver, ISO, and other styles
24

Wiren, Jakob. "Data Storage Cost Optimization Based on Electricity Price Forecasting with Machine Learning in a Multi-Geographical Cloud Environment." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152250.

Full text
Abstract:
As increased demand for cloud computing leads to increased electricity costs for cloud providers, there is an incentive to investigate new methods to lower electricity costs in data centers. Electricity price markets suffer from sudden price spikes as well as irregularities between different geographical electricity markets. This thesis investigates whether it is possible to leverage these volatilities and irregularities between different electricity price markets to offload or move storage in order to reduce electricity costs for data storage. By forecasting four different electricity price markets, it was possible to predict sudden price spikes and leverage these forecasts in a simple optimization model to offload data storage in data centers and successfully reduce electricity costs for data storage.
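The abstract's core idea, forecast regional prices and move storage to the cheapest market when the saving justifies it, can be sketched as follows. This is an illustrative toy, not the thesis's model: the region names, prices, and fixed migration cost are all invented for demonstration.

```python
# Illustrative sketch (assumptions: made-up regions/prices, a single fixed
# migration cost): choose which region should host a storage workload given
# forecasted hourly electricity prices per region.

def cheapest_region(forecast, current, migration_cost):
    """forecast: {region: [forecasted hourly prices]}; returns region to use."""
    totals = {region: sum(prices) for region, prices in forecast.items()}
    best = min(totals, key=totals.get)
    # Only migrate if the forecasted saving exceeds the one-off migration cost.
    if best != current and totals[current] - totals[best] > migration_cost:
        return best
    return current

forecast = {
    "eu-north": [21.0, 19.5, 18.0, 22.5],   # hypothetical price units
    "eu-west":  [35.0, 90.0, 33.0, 31.0],   # contains a forecasted spike
    "us-east":  [28.0, 27.5, 29.0, 28.5],
}
print(cheapest_region(forecast, current="eu-west", migration_cost=20.0))
```

A real cost model would also account for data transfer volume and bandwidth prices, but the decision structure, forecast, compare, move only when the saving beats the migration cost, stays the same.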
APA, Harvard, Vancouver, ISO, and other styles
25

Englund, Carl. "Evaluation of cloud-based infrastructures for scalable applications." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139935.

Full text
Abstract:
The usage of cloud computing in order to move away from local servers and infrastructure has grown enormously over the last decade. The ability to quickly scale server capacity and resources when needed can both save money for companies and help them deliver high-end products that function correctly at all times, even under heavy load. To meet today's challenges, one of the strategic directions of Attentec, a software company located in Linköping, is to examine the world of cloud computing in order to deliver robust and scalable applications to their customers. This thesis investigates the usage of cloud services to deploy scalable applications which can adapt to usage peaks within minutes.
APA, Harvard, Vancouver, ISO, and other styles
26

Nasim, Robayet. "Cost- and Performance-Aware Resource Management in Cloud Infrastructures." Doctoral thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-48482.

Full text
Abstract:
High availability, cost effectiveness and ease of application deployment have accelerated the adoption rate of cloud computing. This fast proliferation of cloud computing promotes the rapid development of large-scale infrastructures. However, large cloud datacenters (DCs) require better management techniques for infrastructure design, deployment, scalability, and reliability in order to achieve sustainable design benefits. Resources inside cloud infrastructures often operate at low utilization, rarely exceeding 20-30%, which increases the operational cost significantly, especially due to energy consumption. To reduce operational cost without affecting quality of service (QoS) requirements, cloud applications should be allocated just enough resources to minimize their completion time or to maximize utilization. The focus of this thesis is to enable resource-efficient and performance-aware cloud infrastructures by addressing the above-mentioned cost- and performance-related challenges. In particular, we propose algorithms, techniques, and deployment strategies for improving the dynamic allocation of virtual machines (VMs) onto physical machines (PMs). For minimizing the operational cost, we mainly focus on optimizing the energy consumption of PMs by applying dynamic VM consolidation methods. To make VM consolidation techniques more efficient, we propose to utilize multiple paths to spread traffic and to deploy recent queue management schemes, which can maximize network resource utilization and reduce both downtime and migration time for live migration techniques. In addition, a dynamic resource allocation scheme is presented to distribute workloads among geographically dispersed DCs, considering their location-based, time-varying costs due to e.g. carbon emission or bandwidth provision.
For optimizing performance-level objectives, we focus on interference among applications contending for shared resources and propose a novel VM consolidation scheme that considers the sensitivity of the VMs to their demanded resources. Further, to investigate the impact of uncertain parameters, such as unpredictable variations in demand, on cloud resource allocation and applications' QoS, we develop an optimization model based on the theory of robust optimization. Furthermore, in order to handle the scalability issues in the context of large-scale infrastructures, a robust and fast Tabu Search algorithm is designed and evaluated.
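The dynamic VM consolidation the abstract mentions can be illustrated with the classical bin-packing view of the problem: pack VM demands onto as few PMs as possible so that idle PMs can be powered down. The first-fit-decreasing heuristic below is a standard textbook baseline, not the thesis's algorithm, and the capacities are arbitrary units.

```python
# Illustrative baseline only (not the thesis's method): consolidate VMs onto
# physical machines (PMs) with first-fit-decreasing bin packing, so that the
# number of powered-on PMs, and hence energy use, is kept low.

def consolidate(vm_demands, pm_capacity):
    """Pack VM demands onto PMs; returns (number of PMs used, placement list)."""
    free = []        # remaining capacity of each powered-on PM
    placement = []   # PM index assigned to each VM (in sorted order)
    for vm in sorted(vm_demands, reverse=True):
        for i, cap in enumerate(free):
            if vm <= cap:            # first PM with enough headroom
                free[i] -= vm
                placement.append(i)
                break
        else:                        # no PM fits: power on a new one
            free.append(pm_capacity - vm)
            placement.append(len(free) - 1)
    return len(free), placement

n_pms, _ = consolidate([7, 5, 4, 3, 2, 1], pm_capacity=10)
print(n_pms)  # 22 units of demand packed onto 3 PMs of capacity 10
```

The thesis goes well beyond this baseline, e.g. interference-aware placement and Tabu Search for scale, but the sketch shows the underlying packing decision each consolidation round has to make.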
APA, Harvard, Vancouver, ISO, and other styles
27

Miccoli, Roberta. "Implementation of a complete sensor data collection and edge-cloud communication workflow within the WeLight project." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22563/.

Full text
Abstract:
This thesis aims at developing the full workflow of data collection from a laser sensor connected to a mobile application, acting as an edge device, which subsequently transmits the data to a cloud platform for analysis and processing. The project is part of the We Light (WErable LIGHTing for smart apparels) project, in collaboration with the TTLab of the INFN (National Institute of Nuclear Physics). The goal of We Light is to create an intelligent sports shirt, equipped with sensors that take information from the external environment and send it to a mobile device. The latter then sends the data via an application to an open-source cloud platform in order to create a real IoT system. The smart T-shirt is capable of emitting different levels of light depending on the perceived external light, with the aim of ensuring greater safety for road sports people. The thesis objective is to employ a prototype board provided by the CNR-IMAMOTER to collect data and send it to the specially created application via a Bluetooth Low Energy connection. Furthermore, the connection between the edge device and the Thingsboard (TB) IoT platform is performed via the MQTT protocol. Several device authentication techniques are implemented on TB, and a special dashboard is created to display data from the IoT device; the user is also able to view data in numerical and even graphical form directly in the application without necessarily having to access TB. The app created is useful and versatile and can be adapted for other IoT purposes, not only within the We Light project.
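The edge-to-cloud step described above, an app pushing sensor readings to Thingsboard over MQTT, can be sketched as below. The broker host, access token, and the `ambient_light_lux` key are placeholder assumptions; the `v1/devices/me/telemetry` topic and the token-as-MQTT-username convention are Thingsboard's standard device API, and the publish helper assumes the third-party paho-mqtt package.

```python
# Minimal sketch of publishing one light-sensor sample as ThingsBoard device
# telemetry. Host/token and the telemetry key name are hypothetical.
import json
import time

TELEMETRY_TOPIC = "v1/devices/me/telemetry"  # ThingsBoard device telemetry topic

def make_telemetry(lux, ts_ms=None):
    """Build the JSON telemetry payload for one ambient-light sample."""
    ts_ms = int(time.time() * 1000) if ts_ms is None else ts_ms
    return json.dumps({"ts": ts_ms, "values": {"ambient_light_lux": lux}})

def publish_telemetry(host, access_token, payload):
    """Publish one sample over MQTT (assumes the paho-mqtt package)."""
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.username_pw_set(access_token)  # ThingsBoard: device token as username
    client.connect(host, 1883)
    client.publish(TELEMETRY_TOPIC, payload)
    client.disconnect()

print(make_telemetry(742.5, ts_ms=1700000000000))
```

In the thesis's workflow the reading would arrive over BLE from the prototype board before being packaged and published; the payload shape is what a TB dashboard widget would then plot.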
APA, Harvard, Vancouver, ISO, and other styles
28

Carlsson, Daniel. "Dynamics AX in the Cloud : Possibilities and Shortcomings." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196060.

Full text
Abstract:
The usage of the cloud is rapidly increasing and is of large interest to everyone involved in technology. The purpose of this thesis is to examine the benefits and possible shortcomings of using Microsoft Dynamics AX in the cloud, specifically Microsoft Azure, instead of using local datacenters. The thesis project has been done at Scania IT using their implementation of Dynamics AX, and consists of an extensive literature study regarding both ERP systems and other systems in relation to the cloud. It was decided early on to focus on the new version of Dynamics AX, which currently is only available in the cloud, and compare this implementation to the two versions that the majority are using today, AX 2009 and AX 2012. The benefits of AX and Azure both being Microsoft products are clear, with well-designed integrations and support all the way from the clients to the servers regarding backups and load balancing. It is shown how developers have to work differently with integrations with outside systems, especially in regard to AX 2009, since the frameworks have changed. The addition of Data Entities means that developers can save a lot of time by only needing a reference to the location of an object in the database instead of having to keep track of all the tables themselves. The analysis focuses on the differences in four areas: performance & accessibility, scalability, cost savings, and security & privacy. The background knowledge used for the analysis primarily comes from the literature study as well as from studying the implementation at Scania today. The result shows that there are clear advantages regarding performance, cost savings, and especially accessibility; however, it is also clear that laws in many countries still have not caught up with the fact that it is now possible to use the cloud for data storage, which in turn means that the best move in the near future for the majority of ERP users would be either a hybrid or private cloud within the borders of the same country.
APA, Harvard, Vancouver, ISO, and other styles
29

Henriksson, Johannes, and Alexander Magnusson. "Impact of using cloud-based SDN controllers on the network performance." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44152.

Full text
Abstract:
Software-Defined Networking (SDN) is a network architecture that differs from traditional network planes. SDN has three layers: infrastructure, controller, and application. The goal of SDN is to simplify the management of larger networks by centralizing control into the controller layer instead of having it in the infrastructure. Given the known advantages of SDN networks and the flexibility of cloud computing, we are interested in whether this combination of SDN and cloud services affects network performance, and what effect the cloud provider's physical location has on network performance. These points are important as SDN becomes more popular in enterprise networks, where centralizing branch networks into one cloud-based SDN controller seems like a logical next step. These questions were formulated through a literature study and answered with an experimental method. The experiments consist of two network topologies: a locally hosted SDN (baseline) and a cloud-hosted SDN. The topology used Zodiac FX switches and Linux hosts. The following metrics were measured: throughput, latency, jitter, packet loss, and time to add new hosts. The conclusion is that SDN as a cloud service is possible and does not significantly affect network performance. One limitation of this thesis was the hardware, resulting in large fluctuations in throughput and packet loss.
APA, Harvard, Vancouver, ISO, and other styles
30

Shafabakhsh, Benyamin. "Research on Interprocess Communication in Microservices Architecture." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277940.

Full text
Abstract:
With the substantial growth of cloud computing over the past decade, microservices have gained significant popularity in the industry as a new architectural pattern. It promises a cloud-native architecture that breaks large applications into a collection of small, independent, and distributed packages. Since microservices-based applications are distributed, one of the key challenges when designing an application is the choice of mechanism by which services communicate with each other. There are several approaches for implementing interprocess communication (IPC) in microservices, and each comes with different advantages and trade-offs. While theoretical and informal comparisons exist between them, this thesis has taken an experimental approach to compare and contrast common forms of IPC communication. In this thesis, IPC methods have been categorized into synchronous and asynchronous categories. The synchronous type consists of REST APIs and Google gRPC, while the asynchronous type uses a message broker known as RabbitMQ. Further, a collection of microservices for an e-commerce scenario has been designed and developed using all three IPC methods. A load test has been executed against each model to obtain quantitative data related to the performance efficiency and availability of every method. Developing the same set of functionalities using different IPC methods has offered qualitative data related to the scalability and complexity of each IPC model. The evaluation of the experiment indicates that, although there is no universal IPC solution that can be applied in all cases, asynchronous IPC patterns should be the preferred option when designing the system. Nevertheless, the findings of this work also suggest there exist scenarios where synchronous patterns can be more suitable.
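The synchronous/asynchronous distinction the abstract draws can be reduced to a small in-process illustration: a synchronous call blocks the caller until the service replies, while the asynchronous variant only enqueues a message on a broker-like queue for a worker to consume. The service, message names, and fixed price are invented stand-ins, not code from the thesis, and a `queue.Queue` stands in for a real broker such as RabbitMQ.

```python
# Hedged illustration of the two IPC styles compared in the thesis,
# reduced to in-process stand-ins. All names are hypothetical.
import queue
import threading

def price_service(item):            # stand-in for a remote REST/gRPC endpoint
    return {"item": item, "price": 42}

def sync_order(item):
    """Synchronous IPC: the caller waits for the response before continuing."""
    return price_service(item)      # blocking request/response

def async_order(broker, item):
    """Asynchronous IPC: the caller only enqueues; a worker processes later."""
    broker.put({"event": "order_placed", "item": item})

broker = queue.Queue()              # stand-in for a message broker
results = []

def worker():                       # stand-in for a queue-consumer service
    msg = broker.get()
    results.append(price_service(msg["item"]))
    broker.task_done()

t = threading.Thread(target=worker)
t.start()
async_order(broker, "book")         # returns immediately; no reply awaited
t.join()

print(sync_order("book"), results[0])
```

The trade-off the thesis measures falls out of this shape: the synchronous caller is coupled to the service's latency and availability, while the asynchronous caller is decoupled but must handle eventual, out-of-band results.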
APA, Harvard, Vancouver, ISO, and other styles
31

Kathirvel, Anitha, and Siddharth Madan. "Efficient Privacy Preserving Key Management for Public Cloud Networks." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-148048.

Full text
Abstract:
Most applications and documents are stored in a public cloud for storage and management purposes in a cloud computing environment. The major advantages of storing applications and documents in a public cloud are lower cost, through the use of shared computing resources, and no upfront infrastructure costs. However, in this case the management of data and other services is insecure. Therefore, security is a major problem in a public cloud, as the cloud and the network are open to many other users. In order to provide security, it is necessary for data owners to store their data in the public cloud in a secure way and to use an appropriate access control scheme. Designing a computation- and communication-efficient key management scheme to selectively share documents based on fine-grained attribute-based access control policies in a public cloud is a challenging task. There are many existing approaches that encrypt documents prior to storage in the public cloud: these approaches use different keys and a public key cryptographic system to implement attribute-based encryption and/or proxy re-encryption. However, these approaches do not efficiently handle users joining and leaving the system when identity attributes and policies change. Moreover, these approaches require keeping multiple encrypted copies of the same documents, which has a high computational cost or incurs unnecessary storage costs. Therefore, this project focused on the design and development of an efficient key management scheme that allows the data owner to store data in a cloud service in a secure way. Additionally, the proposed approach enables cloud users to access the data stored in a cloud in a secure way. Many researchers have proposed key management schemes for wired and wireless networks; all of these existing schemes differ from the key management scheme proposed in this thesis. First, the key management scheme proposed in this thesis increases access-level security.
Second, the proposed key management scheme minimizes the computational complexity of the cloud users by performing only one mathematical operation to find the new group key that was computed earlier by the data owner. In addition, this proposed key management scheme is suitable for a cloud network. Third, the proposed key distribution and key management scheme utilizes privacy preserving methods, thus preserving the privacy of the user. Finally, a batch key updating algorithm (also called batch rekeying) has been proposed to reduce the number of rekeying operations required for performing batch leave or join operations. The key management scheme proposed in this thesis is designed to reduce the computation and communication complexity in all but a few cases, while increasing the security and privacy of the data.
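The "only one mathematical operation" property the abstract claims for members deriving a new group key can be illustrated with a generic masked-broadcast construction: the owner broadcasts the fresh key XOR-masked with the old one, and each remaining member recovers it with a single XOR. This is a standard textbook construction used purely to illustrate the derivation cost, not the thesis's actual scheme.

```python
# Illustrative rekeying sketch (not the thesis's scheme): one XOR per member
# to derive the new group key from a broadcast rekey message.
import secrets

KEY_BYTES = 16

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def rekey(old_key):
    """Owner side: generate a fresh group key and the masked broadcast."""
    new_key = secrets.token_bytes(KEY_BYTES)
    return new_key, xor_bytes(new_key, old_key)

def derive(old_key, broadcast):
    """Member side: a single XOR recovers the new group key."""
    return xor_bytes(broadcast, old_key)

old = secrets.token_bytes(KEY_BYTES)
new, msg = rekey(old)
assert derive(old, msg) == new      # every holder of old_key gets the new key
print(len(msg) == KEY_BYTES)
```

Note the limitation this toy makes visible: a departing member who still holds the old key could also derive the new one, which is exactly why practical schemes (including batch rekeying) combine such cheap derivation with per-subgroup keys or key trees to exclude leavers.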
APA, Harvard, Vancouver, ISO, and other styles
32

McIntosh, Linda Anne-Marie. "Reducing Technology Costs for Small Real Estate Businesses Using Cloud and Mobility." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3534.

Full text
Abstract:
Client accessibility strategies, awareness of technology costs, and the data security capabilities of third parties are elements small real estate business (SREB) owners need to understand before adopting cloud and mobility technology. The purpose of this multiple case study was to explore the strategies SREB owners use to implement cloud and mobility products to reduce their technology costs. The target population consisted of 3 SREB owners who had experience implementing cloud and mobility products in their businesses in the state of Texas. The conceptual framework of this research study was the technology acceptance model theory. Semistructured interviews were conducted and the data were analyzed for emergent themes. Member checking was subsequently employed to ensure the trustworthiness of the findings. Three important themes emerged: client accessibility strategies, product affordability, and transferability of information technology security risks. The findings revealed that SREB owners used informal, customer-centric strategies to implement cloud and mobility technology and manage its costs. The SREB owners' highest strategic priority was the ability to access their clients, followed by cost reduction and securing client information. The findings may contribute to social change by providing possible insights into survivability for SREB owners through cost reduction, reduced security risks, and the increased ability to deliver the dream of home ownership to their clients while contributing to the economy and enhancing the community's standards of living.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhou, Lin. "Energy efficient transmitter design with compact antenna for future wireless communication systems." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33104.

Full text
Abstract:
This thesis explores a novel technique for transceiver design in future wireless systems, namely cloud radio access networks (CRANs) with a single radio frequency (RF) chain antenna at each remote radio head (RRH). The thesis seeks to make three contributions. Firstly, it proposes a novel algorithm to solve the oscillatory/unstable behaviour of electronically steerable parasitic array radiators (ESPAR) when providing multi-antenna functionality with a single RF chain. The thesis formulates an optimization problem and derives closed-form expressions for calculating the configuration of an ESPAR antenna (EA) for the transmission of arbitrary signals. This results in simplified processing at the transmitter. The results illustrate that the EA transmitter, when utilizing the novel closed-form expressions, shows significant improvement over the performance of the EA transmitter without any pre-processing. It performs at nearly the same symbol error rate (SER) as standard multiple antenna systems. Secondly, this thesis illustrates how a practical peak power constraint can be incorporated into an EA transceiver design. In an EA, all the antenna elements are fed centrally by a single power amplifier. This makes it more likely that the power amplifier reaches its maximum power during transmission. Considering limited power availability, this thesis proposes a new algorithm to achieve stable signal transmission. Thirdly, this thesis shows that an energy efficiency (EE) optimization problem can be formulated and solved in CRANs that deploy single RF chain antennas at RRHs. Closed-form expressions for the precoder and power allocation schemes to transmit desired signals are obtained to maximise EE for both single-user and multi-user systems. The results show that CRANs with single RF chain antennas provide superior EE performance compared to standard multiple antenna based systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Corser, Kristy L. "Teaching and learning with cloud platforms in the primary school classroom." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/157473/1/Kristy_Corser_Thesis.pdf.

Full text
Abstract:
This research investigated teaching and learning with Chromebook computers and Google's G Suite for Education in a Queensland year 5 primary school classroom. The research used Actor Network Theory and Communities of Practice theory to explore the material aspects of using technologies in the classroom, and how cloud-based technologies promote collaborative learning. Analysis of classroom practice revealed the potential advantages of using cloud platforms in education, while analysis of technology policies from the Federal government level down to the classroom revealed misalignments in expectations for students' learning. The findings inform recommendations for technology policy development, curriculum planning and teachers' pedagogical practices.
APA, Harvard, Vancouver, ISO, and other styles
35

Lascano, Jorge Edison. "A Pattern Language for Designing Application-Level Communication Protocols and the Improvement of Computer Science Education through Cloud Computing." DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6547.

Full text
Abstract:
Networking protocols have been developed over time following layered architectures such as the Open Systems Interconnection model and the Internet model. These protocols are grouped in the Internet protocol suite. Most developers do not deal with low-level protocols; instead they design application-level protocols on top of the low-level ones. Although each application-level protocol is different, there is commonality among them, and developers can apply lessons learned from one protocol to the design of new ones. Design patterns can help by gathering and sharing proven, reusable solutions to common, recurring design problems. The Application-level Communication Protocols Design Patterns language captures this knowledge about application-level protocol design, so developers can create better, more fitting protocols based on these common and well-proven solutions. Another aspect of contemporary development techniques is the need to distribute software artifacts. Most development companies have started using cloud computing services to meet this need; both public and private clouds are widely used. Future developers will need to manage this technology as infrastructure, software, and platform services. These two aspects, communication protocol design and cloud computing, represent an opportunity to contribute to the software development community and to the software engineering education curriculum. The Application-level Communication Protocols Design Patterns language aims to help solve communication software design problems. The use of cloud computing in programming assignments aims to positively influence the analysis-to-reuse skills of computer science students.
APA, Harvard, Vancouver, ISO, and other styles
36

Nasim, Robayet. "Architectural Evolution of Intelligent Transport Systems (ITS) using Cloud Computing." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-35719.

Full text
Abstract:
With the advent of Smart Cities, Intelligent Transport System (ITS) has become an efficient way of offering an accessible, safe, and sustainable transportation system. Utilizing advances in Information and Communication Technology (ICT), ITS can maximize the capacity of existing transportation systems without building new infrastructure. However, in spite of these technical feasibilities and significant performance-cost ratios, the deployment of ITS is limited in the real world because of several challenges associated with its architectural design. This thesis studies how to design a highly flexible and deployable architecture for ITS, which can utilize recent technologies such as cloud computing and the publish/subscribe communication model. In particular, our aim is to offer an ITS infrastructure which provides the opportunity for transport authorities to allocate on-demand computing resources through virtualization technology, and supports a wide range of ITS applications. We propose to use an Infrastructure as a Service (IaaS) model to host large-scale ITS applications for transport authorities in the cloud, which reduces infrastructure cost, improves management flexibility and also ensures better resource utilization. Moreover, we use a publish/subscribe system as a building block for developing a low latency ITS application, which is a promising technology for designing scalable and distributed applications within the ITS domain. Although cloud-based architectures provide the flexibility of adding, removing or moving ITS services within the underlying physical infrastructure, it may be difficult to provide the required quality of service (QoS), which decreases application productivity and customer satisfaction, leading to revenue losses. Therefore, we investigate the impact of service mobility on related QoS in the cloud-based infrastructure. 
We investigate different strategies to improve performance of a low latency ITS application during service mobility such as utilizing multiple paths to spread network traffic, or deploying recent queue management schemes. Evaluation results from a private cloud testbed using OpenStack show that our proposed architecture is suitable for hosting ITS applications which have stringent performance requirements in terms of scalability, QoS and latency.
Back-cover text: Intelligent Transport System (ITS) can utilize advances in Information and Communication Technology (ICT) and maximize the capacity of existing transportation systems without building new infrastructure. However, in spite of these technical feasibilities and significant performance-cost ratios, the deployment of ITS is limited in the real world because of several challenges associated with its architectural design.  This thesis studies how to design an efficient, deployable architecture for ITS, which can utilize the advantages of cloud computing and the publish/subscribe communication model. In particular, our aim is to offer an ITS infrastructure which provides the opportunity for transport authorities to allocate on-demand computing resources through virtualization technology, and supports a wide range of ITS applications. We propose to use an Infrastructure as a Service (IaaS) model to host large-scale ITS applications, and to use a publish/subscribe system as a building block for developing a low latency ITS application. We investigate different strategies to improve performance of an ITS application during service mobility, such as utilizing multiple paths to spread network traffic, or deploying recent queue management schemes.

Article 4, "Network Centric Performance Improvement for Live VM Migration", appears in the thesis as a manuscript; it has since been published as a conference paper. 

APA, Harvard, Vancouver, ISO, and other styles
37

Wiss, Thomas. "Evaluation of Internet of Things Communication Protocols Adapted for Secure Transmission in Fog Computing Environments." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-35298.

Full text
Abstract:
A current challenge in the Internet of Things is the search for conceptual structures to connect the presumably billions of devices of innumerable forms and capabilities. An emerging architectural concept, fog computing, moves the seemingly unlimited computational power of the distant cloud to the edge of the network, closer to the potentially computationally limited things, effectively diminishing the experienced latency. To allow computationally constrained devices to partake in the network, they have to be relieved from the burden of constant availability and extensive computational execution. Establishing a publish/subscribe communication pattern utilizing the popular Internet of Things application layer protocol, the Constrained Application Protocol, is one approach to overcoming this issue. In this project, a Java-based library to establish a publish/subscribe communication pattern for the Constrained Application Protocol was developed. Furthermore, prototypes of several publish/subscribe application layer protocols, executed over common as well as secured versions of standard and non-standard transport layer protocols, were built and assessed to exercise, evaluate, and compare the developed library. The results indicate that the standard protocol stacks represent solid candidates, yet one non-standard protocol stack is considered the prime candidate, as it maintains a low response time while not adding a significant amount of communication overhead.
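The topic-based publish/subscribe pattern this abstract builds on CoAP can be illustrated with a minimal in-process sketch. This is a conceptual illustration only, assuming an immediate-delivery broker; the thesis's actual library is Java and network-based, and all names here are hypothetical.

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish/subscribe broker (in-process sketch).

    Publishers and subscribers are decoupled: neither needs to know the
    other, which is what relieves constrained devices from constant
    availability in a real broker-mediated deployment.
    """

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # In a real broker, delivery could be deferred until the subscriber
        # wakes; here it is immediate for simplicity.
        for cb in self.subscribers[topic]:
            cb(payload)
```

For example, a sensor can publish a reading to a topic without knowing who, if anyone, is listening; only subscribers to that exact topic receive it.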
APA, Harvard, Vancouver, ISO, and other styles
38

Mharsi, Niezi. "Cloud-Radio Access Networks : design, optimization and algorithms." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT043/document.

Full text
Abstract:
Cloud Radio Access Network (C-RAN) has been proposed as a promising architecture to meet the exponential growth in data traffic demands and to overcome the challenges of next generation mobile networks (5G). The main concept of C-RAN is to decouple the BaseBand Units (BBU) and the Remote Radio Heads (RRH), and place the BBUs in common edge data centers (BBU pools) for centralized processing. This gives a number of benefits in terms of cost savings, network capacity improvement and resource utilization gains. However, network operators need to investigate scalable and cost-efficient algorithms for resource allocation problems to enable and facilitate the deployment of C-RAN architecture. Most of these problems are very complex and thus very hard to solve. Hence, we use combinatorial optimization which provides powerful tools to efficiently address these problems.One of the key issues in the deployment of C-RAN is finding the optimal assignment of RRHs (or antennas) to edge data centers (BBUs) when jointly optimizing the fronthaul latency and resource consumption. We model this problem by a mathematical formulation based on an Integer Linear Programming (ILP) approach to provide the optimal strategies for the RRH-BBU assignment problem and we propose also low-complexity heuristic algorithms to rapidly reach good solutions for large problem instances. The optimal RRH-BBU assignment reduces the expected latency and offers resource utilization gains. Such gains can only be achieved when reducing the inter-cell interference caused by the dense deployment of cell sites. We propose an exact mathematical formulation based on Branch-and-Cut methods that enables to consolidate and re-optimize the antennas radii in order to jointly minimize inter-cell interference and guarantee a full network coverage in C-RAN. 
In addition to the increase in inter-cell interference, the high density of cells in C-RAN increases the amount of baseband processing as well as the data traffic between antennas and centralized data centers, while strong latency requirements on the fronthaul network must be met. Therefore, we discuss in the third part of this thesis how to determine the optimal placement of BBU functions, considering the 3GPP split options, to find the best tradeoffs between the benefits of centralization in C-RAN and transport requirements. We propose exact and heuristic algorithms based on combinatorial optimization techniques to rapidly provide optimal or near-optimal solutions, even for large network sizes.
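The RRH-BBU assignment problem the abstract formulates as an Integer Linear Program can be illustrated with a tiny brute-force sketch that minimises total fronthaul latency under BBU capacity constraints. The function name and the toy instance below are invented for illustration; the thesis uses an ILP solver and heuristics, since exhaustive search is only feasible for very small instances.

```python
from itertools import product

def assign_rrhs(latency, capacity, load):
    """Exhaustively find an RRH -> BBU assignment minimising total fronthaul
    latency, subject to each BBU's processing capacity.

    latency[r][b] : fronthaul latency from RRH r to BBU pool b
    capacity[b]   : processing capacity of BBU pool b
    load[r]       : processing demand of RRH r
    Returns (assignment tuple, total latency) or (None, inf) if infeasible.
    """
    n_rrh, n_bbu = len(latency), len(capacity)
    best, best_cost = None, float("inf")
    for choice in product(range(n_bbu), repeat=n_rrh):
        used = [0] * n_bbu
        for r, b in enumerate(choice):
            used[b] += load[r]
        if any(used[b] > capacity[b] for b in range(n_bbu)):
            continue  # violates a BBU capacity constraint
        cost = sum(latency[r][b] for r, b in enumerate(choice))
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost
```

The capacity constraint is what makes the problem combinatorial: without it, each RRH would simply pick its lowest-latency BBU independently.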
APA, Harvard, Vancouver, ISO, and other styles
39

Cao, Weidan. "Every Cloud Has A Silver Lining: An Investigation of Cancer Patients' Social Support, Coping Strategies, and Posttraumatic Growth." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/425014.

Full text
Abstract:
Media & Communication
Ph.D.
This dissertation investigated social support, coping strategies, and posttraumatic growth among cancer patients in China. Study 1 examined sources of social support to explore helpful and unhelpful social support from different sources. Optimal matching theory (Cutrona & Russell, 1990) and Goldsmith's (2004) social support theory served as the theoretical framework for Study 1. Twenty cancer patients in a cancer hospital were recruited to participate in phone interviews. An analysis of the detailed notes of the interviews revealed that the major sources of patients' social support were family members and nurses. Patients described much more helpful than unhelpful social support. Several other issues that were not covered by the research questions but were salient in the interviews were also discussed, such as nondisclosure practices in China and the use of euphemism when disclosing a cancer diagnosis in East Asian countries. The purpose of Study 2 was to test a model of the relationships between social support, uncontrollability appraisal, adaptive coping strategies, and posttraumatic growth. Two rounds of data collection were conducted among 201 cancer patients in a cancer hospital in China. The results of the hierarchical multiple regression indicated that, controlling for demographic factors such as age and education, social support and adaptive coping were positively correlated with posttraumatic growth. Uncontrollability, however, was not significantly correlated with posttraumatic growth. The results of the structural equation model indicated that higher levels of social support predicted higher levels of adaptive coping, higher levels of uncontrollability appraisal predicted lower levels of adaptive coping, and higher levels of adaptive coping predicted higher levels of posttraumatic growth. 
Moreover, adaptive coping was a mediator between social support and growth, as well as a mediator between uncontrollability and posttraumatic growth. The implications of the findings and the contributions of the dissertation are discussed.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
40

Aldosari, Mansour. "Design and analysis of green mobile communication networks." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/design-and-analysis-of-green-mobile-communication-networks(37b5278a-45da-4a81-b89c-54c7d876586a).html.

Full text
Abstract:
Increasing energy consumption is a result of the rapid growth in cellular communication technologies and a massive increase in the number of mobile terminals (MTs) and communication sites. In cellular communication networks, energy efficiency (EE) and spectral efficiency (SE) are two of the most important criteria employed to evaluate the performance of networks. A compromise between these two conflicting criteria is therefore required in order to achieve the best cellular network performance. Fractional frequency reuse (FFR), classed as either strict FFR or soft frequency reuse (SFR), is an inter-cell interference coordination (ICIC) technique applied to manage interference when more spectrum is used, and to enhance the EE. A conventional cellular model's downlink is designed as a reference in the presence of inter-cell interference (ICI) and a general fading environment. Energy-efficient cellular models, such as cell zooming, cooperative BSs and relaying models, are designed, analysed and compared with the reference model, in order to reduce network energy consumption without degrading the SE. New mathematical models are derived herein to design a distributed antenna system (DAS), in order to enhance the system's EE and SE. DAS is designed in the presence of ICI and composite fading and shadowing with FFR. A coordinated multi-point (CoMP) technique is applied, using maximum ratio transmission (MRT) to serve the mobile terminal (MT) with all distributed antenna elements (DAEs), transmit antenna selection (TAS) being applied to select the best DAE and general selection combining (GSC) being applied to select more than one DAE. Furthermore, a cloud radio access network (C-RAN) is designed and analysed with two different schemes, using the high-power node (HPN) and a remote radio head (RRH), in order to improve the EE and SE of the system. 
Finally, a trade-off between the two conflicting criteria, EE and SE, is handled carefully in this thesis, in order to ensure a green cellular communication network.
APA, Harvard, Vancouver, ISO, and other styles
41

Jordaan, Pieter Willem. "Synthesis and evaluation of a data management system for machine-to-machine communication." Thesis, North-West University, 2013. http://hdl.handle.net/10394/9073.

Full text
Abstract:
A use case for a data management system for machine-to-machine communication was defined. A centralized system for managing data flow and storage is required for machines to securely communicate with other machines. Embedded devices are typical endpoints that must be serviced by this system, and the system must therefore be easy to use. These systems have to bill the data usage of the machines that make use of their services. Data management systems are subject to variable load and must therefore be able to scale dynamically on demand in order to service endpoints. For robustness, such an online service must be highly available. Following design science research as the research methodology, cloud-based computing was investigated as a target deployment for such a data management system in this research project. An implementation of a cloud-based system was synthesised, evaluated and tested, and shown to be valid for this use case. Empirical testing and a practical field test validated the proposal.
Thesis (MIng (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013.
APA, Harvard, Vancouver, ISO, and other styles
42

de, Beste Eugene. "Enabling the processing of bioinformatics workflows where data is located through the use of cloud and container technologies." University of the Western Cape, 2019. http://hdl.handle.net/11394/6767.

Full text
Abstract:
>Magister Scientiae - MSc
The growing size of raw data and the lack of internet communication technology to keep up with that growth is introducing unique challenges to academic researchers. This is especially true for those residing in rural areas or countries with sub-par telecommunication infrastructure. In this project I investigate the usefulness of cloud computing technology, data analysis workflow languages and portable computation for institutions that generate data. I introduce the concept of a software solution that could be used to simplify the way that researchers execute their analysis on data sets at remote sources, rather than having to move the data. The scope of this project involved conceptualising and designing a software system to simplify the use of a cloud environment as well as implementing a working prototype of said software for the OpenStack cloud computing platform. I conclude that it is possible to improve the performance of research pipelines by removing the need for researchers to have operating system or cloud computing knowledge and that utilising technologies such as this can ease the burden of moving data.
APA, Harvard, Vancouver, ISO, and other styles
43

Kuylenstierna, Adam. "Underground in the Cloud : En kvalitativ studie om den digitala musikplattformen Soundcloud." Thesis, Stockholms universitet, Institutionen för journalistik, medier och kommunikation (JMK), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-58932.

Full text
Abstract:
During the 2000s, the media and music climate has been in flux. The conditions for sharing, distributing and discovering music have changed fundamentally. Soundcloud, a digital music platform that launched in the autumn of 2008, stands in many ways at the centre of this development. The overall aim of this thesis is to thematically analyse and problematise the digital music platform Soundcloud on the basis of qualitative interviews with artists/producers/DJs active in the electronic music scene, across a number of subject areas relating to technology and music, social functions and relationships, a changing music landscape, and Soundcloud's collaboration with Audible Magic. More specifically, it examines how users relate to the experience of music through the particular technology and graphical framing that Soundcloud provides, as well as the social functions that have been implemented. It further problematises the view of Soundcloud as a community, the relationship between global and local belonging, and how artists and listeners relate to a new media and music climate. Finally, the thesis highlights Soundcloud's recently initiated partnership with Audible Magic, a company specialising in the automated identification of media content. The review of previous research covers the so-called mod scene as well as work on the complex relationship between music, technology and social practices. The theoretical framework is based on Chris Anderson's "Long Tail" theory, with emphasis on the democratisation of the means of distribution and production and on new filters for selection and for matching supply with demand, together with Gerd Leonhard's contemporary research on the new media and music climate that lies ahead. The study has highlighted Soundcloud's distinctive graphical interface and social functions and their impact, in both a positive and a negative sense, on how music is experienced. 
The study shows that Soundcloud can be seen as a step forward in the process of making music sharing in the digital age more social, a development that is nevertheless held back by the fact that the ties between users are in many cases too weak, partly owing to Soundcloud's overly powerful filtering and selection mechanisms. One consequence of this is that Soundcloud itself is often not seen as a community, but merely as a tool or a link between local nodes. This conclusion is strengthened by the fact that all informants emphasised local belonging as fundamental. Musical collaboration via a digital music platform such as Soundcloud, they argue, will never be able to replace the creative exchange that takes place physically between people. The partnership with Audible Magic is still at an early stage. The study shows that the collaboration may have both negative and positive consequences: the negative being that DJ mixes may disappear from Soundcloud; the positive that increased copyright enforcement may act as an incentive for artists to be more original in their work.
APA, Harvard, Vancouver, ISO, and other styles
44

Hakim, Zadok. "Factors That Contribute to The Resistance to Cloud Computing Adoption by Tech Companies vs. Non-Tech Companies." Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1034.

Full text
Abstract:
Cloud computing (CC) may be the most significant development in recent history for businesses seeking to utilize technology. However, the adoption of CC hinges on many factors, and can have a greater positive impact on organizational performance. This study examined the different factors that contribute to the resistance to CC adoption. Anchored in The Theory of Technology-Organization-Environment (TOE), the study used a qualitative, grounded theory approach to develop a theoretical model for the acceptance of CC across firms. CC can have significant effects on efficiency and productivity for firms, but these effects will only be realized if IT usage becomes utilized globally. Thus, it was essential to understand the determinants of IT adoption, which was the goal of this research. The central research question involved understanding and examining the factors of resistance that contribute to cloud computing adoption across firms. Data was collected through semi-structured interviews with 22 chief information officers (CIOs) of various firms, including those considered technology companies (TCs) and those considered non-technology companies (NTCs). Data was analyzed using qualitative thematic analysis to determine what factors influence the adoption of CC systems and, moreover, to determine what factors create resistance to the adoption of CC in firms despite its well-documented advantages and benefits. Additionally, by examinging and focusing on the factors of resistance, the rsults of this study were generalized across a wider array of firms located in the Southeastern region of the US. A total of 12 categories were identified. These were organized into two groups. The core category being financial risks represented the probability of loss inherent in financing methods which may impair the ability to provide adequate return. The categories lack of knowledge, resistance to change, excessive cost to adopt, and cost saving fit under financial risks. 
Together these categories were indicators of the factors of resistance to adopting cloud computing technology. The core category of security risks represented the overall perception of privacy in the online environment. The categories process of research, assessing organizational fit, perceived security risks, phased deployment, approval to adopt, and increased flexibility fit under security risks. Together these categories were direct indicators of the factors of resistance that contribute to the adoption of cloud computing technology by both TCs and NTCs. The results of this study showed that the predominant and critical factors of resistance to cloud computing adoption by TCs were financial risks and security risks, versus security risks alone for NTCs. A critical distinction between TCs and NTCs is that 86.4% of NTC participants did not care about cost; they only cared about data security. A model was subsequently developed based on the lived experiences of Chief Information Officers (CIOs) who have been faced with challenges regarding cloud acceptance and cloud computing adoption. The theoretical model produced by this study may guide future researchers and enhance the understanding and implementation of cloud computing technologies. The results of this study will add to the body of literature and may guide companies attempting to implement cloud computing to do so more successfully.
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Ji. "Gestion multisite de workflows scientifiques dans le cloud." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT260/document.

Full text
Abstract:
Les expérimentations scientifiques in silico à grande échelle contiennent généralement plusieurs activités de calcul pour traiter de grandes quantités de données (big data). Les workflows scientifiques (SWfs) permettent aux scientifiques de modéliser les activités de traitement de données. Puisque les SWfs traitent de grandes quantités de données, les SWfs orientés données deviennent un enjeu important. Dans un SWf orienté données, les activités sont liées par des dépendances de données ou de contrôle et une activité correspond à plusieurs tâches pour traiter les différentes parties des données. Afin d’exécuter automatiquement les SWfs orientés données, un système de gestion de workflows scientifiques (SWfMS) peut être utilisé en exploitant le High Performance Computing (HPC) fourni par un cluster, une grille ou un cloud. En outre, les SWfMSs génèrent des données de provenance pour tracer l’exécution des SWfs. Puisque le cloud fournit des services stables, des ressources diverses et une capacité de calcul et de stockage virtuellement infinie, il devient une infrastructure intéressante pour l’exécution de SWfs. Le cloud fournit essentiellement trois types de services, i.e. Infrastructure en tant que Service (IaaS), Plateforme en tant que Service (PaaS) et Logiciel en tant que Service (SaaS). Les SWfMSs peuvent être déployés dans le cloud en utilisant des Machines Virtuelles (VMs) pour exécuter les SWfs orientés données. Avec la méthode de pay-as-you-go, les utilisateurs du cloud n’ont pas besoin d’acheter des machines physiques et la maintenance des machines est assurée par les fournisseurs de cloud. Actuellement, le cloud se compose généralement de plusieurs sites (ou centres de données), chacun avec ses propres ressources et données.
Du fait qu’un SWf orienté données peut traiter des données distribuées dans différents sites, l’exécution d’un SWf orienté données doit être adaptée au cloud multisite en utilisant des ressources de calcul et de stockage distribuées. Dans cette thèse, nous étudions les méthodes pour exécuter des SWfs orientés données dans un environnement de cloud multisite. Certains SWfMSs existent déjà, mais la plupart d’entre eux sont conçus pour des grappes d’ordinateurs, des grilles ou un cloud à un seul site. En outre, les approches existantes sont limitées aux ressources de calcul statiques ou à l’exécution sur un seul site. Nous proposons des algorithmes de partitionnement de SWfs et un algorithme d’ordonnancement des tâches pour l’exécution des SWfs dans un cloud multisite. Nos algorithmes proposés peuvent réduire considérablement le temps global d’exécution d’un SWf dans un cloud multisite. En particulier, nous proposons une solution générale basée sur l’ordonnancement multi-objectif afin d’exécuter des SWfs dans un cloud multisite. La solution se compose d’un modèle de coût, d’un algorithme de provisionnement de VMs et d’un algorithme d’ordonnancement des activités. L’algorithme de provisionnement de VMs est basé sur notre modèle de coût pour générer des plans de provisionnement de VMs pour exécuter des SWfs dans un cloud à un seul site. L’algorithme d’ordonnancement des activités permet l’exécution de SWfs avec le coût minimum, composé du temps d’exécution et du coût monétaire, dans un cloud multisite. Nous avons effectué de nombreuses expérimentations et les résultats montrent que nos algorithmes peuvent réduire considérablement le coût global de l’exécution de SWfs dans un cloud multisite.
Large-scale in silico scientific experiments generally contain multiple computational activities to process big data. Scientific Workflows (SWfs) enable scientists to model the data processing activities. Since SWfs deal with large amounts of data, data-intensive SWfs are an important issue. In a data-intensive SWf, the activities are related by data or control dependencies and one activity may consist of multiple tasks to process different parts of experimental data. In order to automatically execute data-intensive SWfs, Scientific Workflow Management Systems (SWfMSs) can be used to exploit High Performance Computing (HPC) environments provided by a cluster, grid or cloud. In addition, SWfMSs generate provenance data for tracing the execution of SWfs. Since a cloud offers stable services, diverse resources, and virtually infinite computing and storage capacity, it becomes an interesting infrastructure for SWf execution. Clouds basically provide three types of services, i.e. Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). SWfMSs can be deployed in the cloud using Virtual Machines (VMs) to execute data-intensive SWfs. With a pay-as-you-go method, the users of clouds do not need to buy physical machines, and the maintenance of the machines is ensured by the cloud providers. Nowadays, a cloud is typically made of several sites (or data centers), each with its own resources and data. Since a data-intensive SWf may process distributed data at different sites, the SWf execution should be adapted to multisite clouds while using distributed computing and storage resources. In this thesis, we study methods to execute data-intensive SWfs in a multisite cloud environment. Some SWfMSs already exist, but most of them are designed for computer clusters, grids or a single cloud site. In addition, the existing approaches are limited to static computing resources or single-site execution.
We propose SWf partitioning algorithms and a task scheduling algorithm for SWf execution in a multisite cloud. Our proposed algorithms can significantly reduce the overall SWf execution time in a multisite cloud. In particular, we propose a general solution based on multi-objective scheduling in order to execute SWfs in a multisite cloud. The general solution is composed of a cost model, a VM provisioning algorithm, and an activity scheduling algorithm. The VM provisioning algorithm is based on our proposed cost model to generate VM provisioning plans for executing SWfs at a single cloud site. The activity scheduling algorithm enables SWf execution with the minimum cost, composed of execution time and monetary cost, in a multisite cloud. We conducted extensive experiments, and the results show that our algorithms can considerably reduce the overall cost of SWf execution in a multisite cloud.
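The multi-objective idea in this abstract, trading execution time against monetary cost, can be illustrated with a toy weighted cost model. All names, weights and numbers below are invented for illustration and do not reproduce the thesis's actual cost model or provisioning algorithm:

```python
# Hedged sketch of a weighted multi-objective cost model: each candidate
# VM provisioning plan is scored by its normalized execution time and
# normalized monetary cost, and the plan with the minimum combined cost wins.

def plan_cost(exec_time, money, w_time=0.5, w_money=0.5,
              desired_time=3600.0, desired_money=10.0):
    """Weighted sum of normalized execution time and monetary cost."""
    return w_time * (exec_time / desired_time) + w_money * (money / desired_money)

def best_plan(plans):
    """Pick the provisioning plan with the minimum combined cost."""
    return min(plans, key=lambda p: plan_cost(p["time"], p["money"]))

# Illustrative candidate plans (times in seconds, money in arbitrary units).
plans = [
    {"name": "2 small VMs", "time": 7200.0, "money": 4.0},
    {"name": "8 large VMs", "time": 1800.0, "money": 16.0},
    {"name": "4 medium VMs", "time": 3600.0, "money": 8.0},
]
print(best_plan(plans)["name"])  # 4 medium VMs
```

With equal weights, the balanced plan wins; shifting `w_time` toward 1.0 would favor the fast-but-expensive plan, which is the trade-off a multi-objective scheduler explores.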
APA, Harvard, Vancouver, ISO, and other styles
46

Gråd, Martin. "Improving Conventional Image-based 3D Reconstruction of Man-made Environments Through Line Cloud Integration." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148452.

Full text
Abstract:
Image-based 3D reconstruction refers to the capture and virtual reconstruction of real scenes, through the use of ordinary camera sensors. A common approach is the use of the algorithms Structure from Motion, Multi-view Stereo and Poisson Surface Reconstruction, that fares well for many types of scenes. However, a problem that this pipeline suffers from is that it often falters when it comes to texture-less surfaces and areas, such as those found in man-made environments. Building facades, roads and walls often lack detail and easily trackable feature points, making this approach less than ideal for such scenes. To remedy this weakness, this thesis investigates an expanded approach, incorporating line segment detection and line cloud generation into the already existing point cloud-based pipeline. Texture-less objects such as building facades, windows and roofs are well-suited for line segment detection, and line clouds are fitting for encoding 3D positional data in scenes consisting mostly of objects featuring many straight lines. A number of approaches have been explored in order to determine the usefulness of line clouds in this context, each of them addressing different aspects of the reconstruction procedure.
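As a toy illustration of how a line cloud can complement a sparse point cloud (an invented sketch, not the thesis's implementation), each triangulated 3D line segment can be densified into sample points and merged with the Structure-from-Motion points before surface reconstruction:

```python
# Hedged sketch: densify 3D line segments into evenly spaced sample points,
# then merge them with a (pretend) sparse SfM point cloud. Segment endpoints
# below are invented coordinates for a hypothetical building facade.
import numpy as np

def densify_segments(segments, samples_per_segment=10):
    """Turn (start, end) 3D segments into evenly spaced 3D sample points."""
    pts = []
    for start, end in segments:
        t = np.linspace(0.0, 1.0, samples_per_segment)[:, None]
        # Linear interpolation between the two endpoints.
        pts.append((1.0 - t) * np.asarray(start, float) + t * np.asarray(end, float))
    return np.vstack(pts)

# One horizontal roof edge and one vertical wall edge of a toy facade.
segments = [((0, 0, 3), (4, 0, 3)), ((4, 0, 0), (4, 0, 3))]
sparse_points = np.array([[1.0, 2.0, 0.5]])  # stand-in for SfM output
merged = np.vstack([sparse_points, densify_segments(segments)])
print(merged.shape)  # (21, 3)
```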
APA, Harvard, Vancouver, ISO, and other styles
47

Matoussi, Salma. "User-Centric Slicing with Functional Splits in 5G Cloud-RAN." Electronic Thesis or Diss., Sorbonne université, 2021. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2021SORUS004.pdf.

Full text
Abstract:
Le réseau d’accès radio (RAN) 5G vise à faire évoluer de nouvelles technologies couvrant l’infrastructure Cloud, les techniques de virtualisation et le réseau défini par logiciel (SDN). Des solutions avancées sont introduites pour répartir les fonctions du réseau d’accès radio entre des emplacements centralisés et distribués (découpage fonctionnel) afin d’améliorer la flexibilité du RAN. Cependant, l’une des préoccupations majeures est d’allouer efficacement les ressources RAN, tout en prenant en compte les exigences hétérogènes des services 5G. Dans cette thèse, nous abordons la problématique du provisionnement des ressources Cloud RAN centré sur l’utilisateur (appelé tranche d’utilisateurs ). Nous adoptons un déploiement flexible du découpage fonctionnel. Notre recherche vise à répondre conjointement aux besoins des utilisateurs finaux, tout en minimisant le coût de déploiement. Pour surmonter la grande complexité impliquée, nous proposons d’abord une nouvelle implémentation d’une architecture Cloud RAN, permettant le déploiement à la demande des ressources, désignée par AgilRAN. Deuxièmement, nous considérons le sous-problème de placement des fonctions de réseau et proposons une nouvelle stratégie de sélection de découpage fonctionnel centrée sur l’utilisateur nommée SPLIT-HPSO. Troisièmement, nous intégrons l’allocation des ressources radio. Pour ce faire, nous proposons une nouvelle heuristique appelée E2E-USA. Dans la quatrième étape, nous envisageons une approche basée sur l’apprentissage en profondeur pour proposer un schéma d’allocation temps réel des tranches d’utilisateurs, appelé DL-USA. Les résultats obtenus prouvent l’efficacité de nos stratégies proposées
5G Radio Access Network (RAN) aims to evolve new technologies spanning the Cloud infrastructure, virtualization techniques and Software Defined Network capabilities. Advanced solutions are introduced to split the RAN functions between centralized and distributed locations to improve the RAN flexibility. However, one of the major concerns is to efficiently allocate RAN resources, while supporting heterogeneous 5G service requirements. In this thesis, we address the problematic of the user-centric RAN slice provisioning, within a Cloud RAN infrastructure enabling flexible functional splits. Our research aims to jointly meet the end users’ requirements, while minimizing the deployment cost. The problem is NP-hard. To overcome the great complexity involved, we propose a number of heuristic provisioning strategies and we tackle the problem in four stages. First, we propose a new implementation of a cost-efficient C-RAN architecture, enabling on-demand deployment of RAN resources, denoted by AgilRAN. Second, we consider the network function placement sub-problem and propound a new scalable user-centric functional split selection strategy named SPLIT-HPSO. Third, we integrate the radio resource allocation scheme in the functional split selection optimization approach. To do so, we propose a new heuristic based on Swarm Particle Optimization and Dijkstra approaches, so called E2E-USA. In the fourth stage, we consider a deep learning based approach for a user-centric RAN Slice Allocation scheme, so called DL-USA, to operate in real-time. The results obtained prove the efficiency of our proposed strategies.
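The abstract mentions that E2E-USA combines Particle Swarm Optimization with a Dijkstra component. As a hedged sketch of just the Dijkstra part, the snippet below computes shortest-path latencies over a toy fronthaul topology; the node names and link weights are invented for illustration and do not come from the thesis:

```python
# Textbook Dijkstra over a weighted adjacency dict, here used to find the
# lowest-latency path from an RRH to a (hypothetical) centralized BBU pool.
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted adjacency dict."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for neigh, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

# Toy fronthaul graph: edge weights are illustrative link latencies (ms).
fronthaul = {
    "RRH1": {"AGG": 1.0, "RRH2": 0.5},
    "RRH2": {"AGG": 2.0},
    "AGG": {"BBU-pool": 1.5},
    "BBU-pool": {},
}
print(dijkstra(fronthaul, "RRH1")["BBU-pool"])  # 2.5
```

In a joint placement/routing heuristic, such distances could serve as the path-cost term that the swarm optimizer evaluates for each candidate split placement.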
APA, Harvard, Vancouver, ISO, and other styles
48

Adeka, Muhammad I. "Cryptography and Computer Communications Security. Extending the Human Security Perimeter through a Web of Trust." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/11380.

Full text
Abstract:
This work modifies Shamir’s algorithm by sharing a random key that is used to lock up the secret data; as against sharing the data itself. This is significant in cloud computing, especially with homomorphic encryption. Using web design, the resultant scheme practically globalises secret sharing with authentications and inherent secondary applications. The work aims at improving cybersecurity via a joint exploitation of human factors and technology; a human-centred cybersecurity design as opposed to technology-centred. The completed functional scheme is tagged CDRSAS. The literature on secret sharing schemes is reviewed together with the concepts of human factors, trust, cyberspace/cryptology and an analysis on a 3-factor security assessment process. This is followed by the relevance of passwords within the context of human factors. The main research design/implementation and system performance are analysed, together with a proposal for a new antidote against 419 fraudsters. Two twin equations were invented in the investigation process; a pair each for secret sharing and a risk-centred security assessment technique. The building blocks/software used for the CDRSAS include Shamir’s algorithm, MD5, HTML5, PHP, Java, Servlets, JSP, Javascript, MySQL, JQuery, CSS, MATLAB, MS Excel, MS Visio, and Photoshop. The codes are developed in Eclipse IDE, and the Java-based system runs on Tomcat and Apache, using XAMPP Server. Its code units have passed JUnit tests. The system compares favourably with SSSS. Defeating socio-cryptanalysis in cyberspace requires strategies that are centred on human trust, trust-related human attributes, and technology. The PhD research is completed but there is scope for future work.
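The core idea described above, sharing a random key that locks up the secret data rather than sharing the data itself, can be sketched in a few lines. This is a minimal illustration assuming a toy prime field and XOR as the "lock"; it is not the CDRSAS implementation:

```python
# Hedged sketch of key-based Shamir sharing: a random key (not the data)
# is split into (k, n) shares; the data is locked with the key (XOR here,
# purely illustrative). Any k shares recover the key, and hence the data.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, k, n):
    """Shamir (k, n) shares of an integer secret < P."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 to rebuild the shared secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (P prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)   # random key, shared instead of the data
data = 42                   # the actual secret data
locked = data ^ key         # "lock up" the data with the key
shares = make_shares(key, k=3, n=5)
assert recover(shares[:3]) ^ locked == data  # any 3 of 5 shares suffice
```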
APA, Harvard, Vancouver, ISO, and other styles
49

Fawzy, Kamel Menatalla Ashraf. "Vendor Lock-in in the transistion to a Cloud Computing platform." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209121.

Full text
Abstract:
The thesis presents a study of the vulnerabilities that a company such as Scania IT faces regarding vendor lock-in in the transition to a cloud computing platform. Cloud computing is a term that refers to a network of remote servers hosted on the internet to store, manage and process data, rather than on a local server or a personal computer. Vendor lock-in is an outcome that causes companies to pay a significant cost to move between cloud providers. The effects that cause vendor lock-in and that will be described here, portability, interoperability and federation, are called the lock-in effects. The goal of the research is to help Scania IT understand vendor lock-in and the vulnerabilities they can face in the transition to the cloud, as well as to clarify the concerns they may have about falling into vendor lock-in. The main purpose of the research is to present the various lock-in effects that are related to the transition from one cloud provider to another, and the vulnerabilities that cause companies to fall into vendor lock-in. The thesis presents the reasons why Scania IT would consider using the cloud and the concerns that they may have about the usage of a cloud computing platform. The results are based on a case study of a similar company that has moved to a cloud provider, specifically Microsoft Azure; an interview presenting Microsoft Azure's point of view on the risk of vendor lock-in; and, finally, a series of interviews with different people from Scania IT to identify the current bottleneck in the development process that caused the company to consider a cloud computing platform. The results show that companies should consider many risks and factors when moving to the cloud, such as vendor lock-in, the cloud maturity index and their IT strategies. Accordingly, the thesis gives recommendations on the steps needed to minimize the risks of the cloud while retaining its benefits.
Uppsatsen presenterar en studie om de sårbarheter som ett företag som Scania IT har mot inlåsning i övergången till molntjänster. Molntjänster är en term som hänvisar till ett nätverk av servrar som finns på internet för att lagra, hantera och processa data, istället för på en lokal server eller en persondator. Inlåsning är ett resultat i vilket orsakar att företagen behöver betala en betydande kostnad för att flytta mellan molnleverantörer. De effekter som orsakar inlåsning vilket kommer att beskrivas är portabilitet, interoperabilitet och federation, dessa kallas inlåsningseffekter. Målet med forskningen är att hjälpa Scania IT att förstå inlåsning och sårbarheter som de kan möta i övergången till molnet. Dessutom är målet att klarlägga riskerna som de kan ha mot att falla i inlåsning. Det huvudsakliga syftet med forskningen är att presentera de olika inlåsningseffekter som är relaterade till övergången från en molnleverantör till en annan samt de sårbarheter som orsakar företagen att falla i inlåsning. Uppsatsen presenterar skäl som motiverar varför Scania IT ska överväga att använda molnet samt den oro som de kan ha mot användning av en molnleverantör. Resultaten kommer att baseras på en fallstudie av ett liknande företag som har flyttat till en molnleverantör och specifikt Microsoft Azure samt en intervju av Microsoft Azure synvinkel med risken för inlåsning. Slutligen, en rad av intervjuer med olika personer från Scania IT för att extrahera den nuvarande flaskhalsen i utvecklingsprocessen som orsakade företaget att tänka på molntjänster. Resultaten visar att företagen bör överväga många risker och faktorer när de flyttar till molnet, som exempelvis inlåsning, cloud maturity index och deras IT-strategier. Som ett resultat ger examensarbetet nödvändiga rekommendationer för att minimera riskerna för molnet samtidigt som positivitet av molnet.
APA, Harvard, Vancouver, ISO, and other styles
50

Boulos, Karen. "BBU-RRH Association Optimization in Cloud-Radio Access Networks." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS209/document.

Full text
Abstract:
De nos jours, la demande en trafic mobile a considérablement augmenté. Face à cette croissance, plusieurs propositions font l’objet d’études pour remédier à un tel défi. L’architecture des réseaux d’accès de type Cloud (C-RAN) est l’une des propositions pour faire face à cette demande croissante, et constitue une solution candidate potentielle pour les futurs réseaux 5G. L’architecture C-RAN dissocie deux éléments principaux de la station de base : la BBU ou « Baseband Unit », qui constitue une unité intelligente pour le traitement des données en bande de base, et le RRH ou « Remote Radio Head », consistant en une antenne passive pour fournir l’accès aux utilisateurs (UEs). Grâce à l’architecture C-RAN, les BBUs sont regroupées de manière centralisée, alors que les RRHs sont distribués sur plusieurs sites. Plusieurs avantages en sont ainsi dérivés, tels que le gain en multiplexage statistique, l’efficacité d’utilisation des ressources et l’économie de puissance. Contrairement à l’architecture conventionnelle où chaque RRH est exclusivement associé à une BBU, dans l’architecture C-RAN, plusieurs RRHs sont regroupés en une seule BBU lorsque les conditions de charge sont faibles. Ceci présente plusieurs avantages, tels que l’amélioration de l’efficacité énergétique et la minimisation de la consommation de puissance. Dans cette thèse, nous adressons le problème d’optimisation des associations BBU-RRH. Nous nous intéressons à l’optimisation des regroupements des RRHs aux BBUs en tenant compte de critères multiples. Plusieurs contraintes sont ainsi envisagées, telles que la réduction de la consommation d’énergie sous garantie d’une Qualité de Service (QoS) minimale. En outre, la prise en compte du changement du niveau d’interférence en activant/désactivant les BBUs est primordiale pour l’amélioration de l’efficacité spectrale.
De plus, décider dynamiquement de la réassociation des RRHs aux BBUs sous des conditions de charge variables représente un défi, vu que les UEs connectés aux RRHs qui changent leurs associations font face à des « handovers » (HOs).
The demand on mobile traffic has been increasing considerably in recent years. Facing such growth, several propositions are being studied to cope with this challenge. The Cloud-Radio Access Network (C-RAN) architecture is one of the proposed solutions to address the increased demand, and is a potential candidate for future 5G networks. The C-RAN architecture dissociates two main elements composing the base station: the Baseband Unit (BBU), an intelligent element that performs baseband processing functions, and the Remote Radio Head (RRH), a passive antenna element that provides access for serviced User Equipments (UEs). In the C-RAN architecture, the BBUs migrate to a Cloud data center, while the RRHs remain distributed across multiple sites. Several advantages are derived, such as statistical multiplexing gain, efficiency in resource utilization and power saving. Contrary to the conventional architecture, where each RRH is associated with one BBU, in the C-RAN architecture multiple RRHs can be embraced by one single BBU when network load conditions are low, bringing along several benefits such as enhanced energy efficiency and minimized power consumption. In this thesis, the BBU-RRH association optimization problem is addressed. Our aim is to optimize the BBU-RRH association schemes, taking several criteria into consideration. The problem presents many constraints: for example, achieving minimized power consumption while guaranteeing a minimum level of Quality of Service (QoS) is a challenging task. Further, taking into account the interference level variation while turning BBUs ON/OFF is paramount to achieving enhanced spectral efficiency. Moreover, deciding how to re-associate RRHs to BBUs under dynamic load conditions is also a challenge, since connected UEs face handovers (HOs) when RRHs change their associations.
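The consolidation benefit described above, embracing several lightly loaded RRHs in one BBU, can be illustrated with a simple first-fit-decreasing packing. This is a toy sketch with invented loads and capacities; the thesis's actual optimization also accounts for QoS, interference and handover costs:

```python
# Hedged sketch: pack RRH loads onto as few active BBUs as possible using
# first-fit decreasing, so lightly loaded cells share a BBU and the rest
# of the BBU pool can be switched off to save power.

def pack_rrhs(rrh_loads, bbu_capacity):
    """Greedy first-fit decreasing; returns per-BBU groups of RRH names."""
    bbus = []  # each entry: [remaining_capacity, [rrh names]]
    for name, load in sorted(rrh_loads.items(), key=lambda kv: -kv[1]):
        for bbu in bbus:
            if bbu[0] >= load:        # fits in an already-active BBU
                bbu[0] -= load
                bbu[1].append(name)
                break
        else:                         # no fit: activate a new BBU
            bbus.append([bbu_capacity - load, [name]])
    return [group for _, group in bbus]

# Illustrative normalized loads for four RRHs at a low-traffic hour.
loads = {"rrh1": 0.6, "rrh2": 0.3, "rrh3": 0.2, "rrh4": 0.1}
groups = pack_rrhs(loads, bbu_capacity=1.0)
print(len(groups))  # 2 active BBUs instead of 4
```

A dynamic scheme would also weigh how many UEs face handovers when a re-packing moves an RRH to a different BBU, which is exactly the complication the abstract points out.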
APA, Harvard, Vancouver, ISO, and other styles
