Dissertations on the topic "Solutions cloud"

To view the other types of publications on this topic, follow this link: Solutions cloud.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Solutions cloud".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Dellner, Felix. „Decision modelling for cloud-based solutions“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-344874.

Abstract:
The company Omegapoint AB has expressed interest in devising a way of determining whether a software product is best suited for provisioning through a cloud-based or an on-premises solution. Choosing the wrong type of solution for a product can become costly and/or result in poor sustainability. Based on a literature study on cloud computing, including authors with different opinions and perspectives, a survey form is developed with critical questions to guide the decision on which type of solution to use. On top of the survey form, a goal programming model is proposed and applied to a set of test cases. The results show that the decision tool can provide guidance. However, for actual implementations of products, further study is always recommended, since the model might not capture all preconditions.
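The survey-plus-goal-programming idea can be illustrated with a minimal weighted-deviation sketch. All criteria names, targets, weights and scores below are invented for illustration; they are not taken from the thesis:

```python
# Hypothetical decision criteria: each maps to (goal target, weight).
# Higher target = the decision-maker wants a high score on that criterion.
CRITERIA = {
    "scalability_need": (5, 0.4),
    "data_sensitivity": (2, 0.3),
    "upfront_budget":   (3, 0.3),
}

def weighted_deviation(option_scores):
    """Sum of weighted absolute deviations from the goal targets."""
    total = 0.0
    for criterion, (target, weight) in CRITERIA.items():
        total += weight * abs(option_scores[criterion] - target)
    return total

def recommend(options):
    """Return the option whose scores deviate least from the goals."""
    return min(options, key=lambda name: weighted_deviation(options[name]))

options = {
    "cloud":       {"scalability_need": 5, "data_sensitivity": 3, "upfront_budget": 3},
    "on_premises": {"scalability_need": 2, "data_sensitivity": 2, "upfront_budget": 1},
}
print(recommend(options))  # prints "cloud" with these illustrative scores
```

A real goal programming model would also allow one-sided deviations and hard constraints; this sketch only conveys the "minimize weighted distance from stated goals" core of the approach.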
2

Huang, Chengcheng. „Secure network solutions for cloud services“. Thesis, Federation University Australia, 2013. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/81565.

Abstract:
Securing a cloud network is an important challenge in delivering cloud services to cloud users. A number of secure network protocols, such as VPN protocols, are currently available to provide different secure network solutions for enterprise clouds. For example, PPTP, L2TP, GRE, IPsec and SSL/TLS are the most widely used VPN protocols in today's secure network solutions. However, there are significant challenges at the implementation stage. For example, which VPN solution is easiest to deploy when delivering cloud services? Which solution provides the best network throughput? Which solution provides the lowest network latency? This thesis addresses these issues by implementing different VPNs in a test bed environment built with Cisco routers. Open source measurement tools are used to acquire the results. The thesis also reviews cloud computing and cloud services and looks at their relationships, and explores the strengths and weaknesses of each secure network solution. The results not only provide experimental evidence, but also help network implementers develop and deploy secure network solutions for cloud services.
Master of Computing (Research)
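The kind of latency measurement the thesis performs with open source tools over a Cisco test bed can be sketched in miniature: time repeated TCP connects and take the median. The loopback server below is only an illustrative stand-in for a VPN endpoint, not the thesis's actual tooling:

```python
import socket
import statistics
import threading
import time

def start_loopback_server():
    """Tiny accept-and-close server standing in for the remote endpoint."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen()
    port = srv.getsockname()[1]

    def serve():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:      # server socket closed: stop serving
                return
            conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv, port

def median_connect_latency_ms(host, port, samples=5):
    """Median TCP connect time in ms: a rough proxy for network latency."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

srv, port = start_loopback_server()
latency = median_connect_latency_ms("127.0.0.1", port)
srv.close()
print(f"median connect latency: {latency:.3f} ms")
```

Run against the same service through each candidate VPN tunnel, such a probe yields the per-solution latency comparison the abstract asks about; throughput would need a bulk-transfer test instead.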
3

Adam, Elena Daniela. „Knowledge management cloud-based solutions in small enterprises“. Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-28275.

Abstract:
Purpose – The aim of this study is to determine whether adopting cloud-based knowledge management is a viable way forward for small enterprises, and to investigate the main factors that might facilitate or inhibit these companies in adopting such solutions. Design/Methodology/Approach – To understand the main factors that could influence the adoption of a cloud-based knowledge management solution in small enterprises, I used a qualitative research approach based on four semi-structured interviews with four small companies from Romania. Findings – The results of the study suggest that implementing knowledge management in the cloud is particularly beneficial for small enterprises, as a lower investment in IT infrastructure can create a competitive advantage and help them implement knowledge management activities as a strategic resource. Moreover, the study suggests that relative advantage, compatibility and technology readiness will influence companies in moving their knowledge to the cloud. It also reveals that the companies which did not adopt such a solution had already established systems for managing knowledge, failed to realize its benefits, did not perceive it as needed, had a low level of awareness, or cited security and uncertainty concerns.
4

Fehse, Carsten. „Infrastructure suitability assessment modeling for cloud computing solutions“. Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5580.

Abstract:
Approved for public release; distribution is unlimited
Maturing virtualization in information technology systems has enabled increased implementations of the cloud computing paradigm, dissolving the need to co-locate user and computing power by providing desired services through the network. This thesis researches the support that current network modeling and simulation applications can provide to IT projects in planning, implementing and maintaining networks for cloud solutions. A problem-appropriate domain model and subsequent requirements are developed for the assessment of several network modeling and simulation tools, which leads to the identification of a capability gap precluding the use of such tools in early stages of cloud computing projects. Consequently, a practical, modularly designed methodology is proposed to measure the essential properties necessary for developing appropriate cloud computing network traffic models. The conducted proof-of-concept experiment, applied to a virtual desktop environment, finds the proposed methodology suitable and problem-appropriate, and results in recommended steps to close the identified capability gap.
5

Johansson, Jonas. „Evaluation of Cloud Native Solutions for Trading Activity Analysis“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300141.

Abstract:
Cloud computing has become increasingly popular over recent years, allowing computing resources to be scaled on demand. Cloud native applications are specifically created to run on the cloud service model. Currently, there is a research gap regarding the design and implementation of cloud native applications, especially regarding how design decisions affect metrics such as execution time and scalability. The problem investigated in this thesis is whether the execution time and the quality scalability ηt of cloud native solutions are affected when the functionality of multiple use cases is housed within the same cloud native application. In this work, a cloud native application for trading data analysis is presented, in which the functionality of three use cases is implemented: (1) creating reports of trade prices, (2) anomaly detection, and (3) analysis of relation diagrams of trades. The execution time and scalability of the application are evaluated and compared to readily available solutions, which serve as a baseline for the evaluation. The results of use cases 1 and 2 are compared to Amazon Athena, while use case 3 is compared to Amazon Neptune. The results suggest that combining functionalities into the same application can improve both the execution time and the scalability of the system; the impact depends on the use case and hardware configuration. When executing the use cases in a sequence, the mean execution time of the implemented system decreased by up to 17.2%, while the quality scalability score improved by 10.3% for use case 2. The implemented application had a significantly lower execution time than Amazon Neptune, but did not surpass Amazon Athena for the respective use cases. The scalability of the systems varied depending on the use case.
While not surpassing the baseline in all use cases, the results show that the execution time of a cloud native system can be improved by housing the functionality of multiple use cases within one system. However, the potential performance gains differ depending on the use case and might be smaller than the gains of choosing another solution.
6

Łaskawiec, Sebastian. „Effective solutions for high performance communication in the cloud“. Doctoral dissertation, Uniwersytet Technologiczno-Przyrodniczy w Bydgoszczy, 2020. http://dlibra.utp.edu.pl/Content/2268.

7

Štangl, Rastislav. „Procurement application solutions in cloud environment, evolution and trends“. Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199230.

Abstract:
The goal of the presented thesis is to screen the market and assess procurement application solutions provided in the cloud, with a focus on SMB companies. Understanding the procurement process is the basis for identifying the key functions of, and requirements on, procurement applications. The market research results in an initial high-level picture of the available procurement applications, with attention to the cloud; the subsequent assessment leads to conclusions on market evolution and trends. The first part of the thesis introduces the theoretical background of the procurement process and of cloud computing. This part is completed by lists of key functions and requirements, of expected benefits, and of the steps that are important when assessing and selecting a procurement application in business. The second part investigates the procurement application market and its vendors. The screening is oriented toward application features, client base, delivery models, and other characteristics. Eight procurement application solutions were selected for detailed assessment: Bellwether ePMX, Compleat Spend Control, Coupa, eBid eXchange, Ion Wave Technologies, PurchaseControl, Trade Interchange (ARCUS), and Xtenza. The assessment produces comparative data presented in a predefined structure. The primary focus is on application features and the procurement functions supported; other important evaluation categories are delivery models, security, underlying infrastructure, approval workflows, and commercial terms. Communication with application vendors contributed to the assessment. This part concludes with findings on evolution and trends, focusing on the dynamics of cloud proliferation and on a forecast of new application functions and features that will yield new business benefits.
8

Zanni, Alessandro. „Middleware Solutions for Effective Cloud-CPS Integration in Pervasive Environment“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8527/1/AlessandroZanni_Tesi.pdf.

Abstract:
The proliferation of a wide range of highly different mobile devices, from tiny sensors to powerful smartphones, with increasing connection abilities, has changed the way we interact with the surrounding environment. There is a growing need for intermediate middleware solutions that can effectively integrate device localities with global cloud resources, overcoming the issues related to their direct connection. The primary objective of this thesis is to present promising and feasible real-world solutions that face many different challenges and open points of mobile device applications. The proposed solutions are applied at different levels of the stack and are grouped by their internal architectures, in order to underline the intrinsic characteristics of each architectural solution, the requirements it mainly addresses, and the scenarios it is most suitable for. This thesis aims to push forward research in the field, so far based mainly on theoretical architectures and methodological approaches, by introducing industrially relevant implementations, grouped by the specific applications they target, that specifically address the practical feasibility, cost-effectiveness, and efficiency of middleware solutions over easily deployable environments. The described solutions are specifically designed to support mobile services, including in hostile environments. The designs, implementations, and experimental work demonstrate the suitability of the proposed solutions to address several open points and challenges of mobile device applications in an efficient and effective way, including applications in large-scale scenarios.
Finally, as a notable side effect, this work presents a complete overview of the very recent literature on emerging intermediate middleware.
9

GIRAU, ROBERTO. „Architectural and application solutions for the cloud Internet of Things“. Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/249555.

Abstract:
The aim of my thesis is to present Lysis, a cloud platform for distributed Internet of Things applications. Its design incorporates the following main features: each object is an independent social agent; the PaaS (Platform as a Service) model is fully exploited; re-usability at different layers is considered; and the data remains under the control of users. The first feature is introduced by adopting the Social IoT (SIoT) concept, according to which objects are capable of autonomously establishing social relationships with respect to their owners. This improves network scalability and the efficiency of information discovery. The major components of PaaS services are used for easy management and development of applications by both users and programmers. Re-usability allows programmers to generate templates of objects and services available to the whole Lysis community. The data generated by the devices is stored in the object owners' cloud spaces. As a SIoT implementation, Lysis requires devices to discover each other in order to create new relationships. To this end, three solutions are proposed: the first relies on channel scanning; the second assumes that channel scanning is not possible and makes use of device localization features; the third is similar to the second, but exploits the already existing social network of objects. To show the effectiveness of the Lysis platform, I exploited its features in a Mobile CrowdSensing (MCS) scenario. MCS is a pervasive sensing paradigm in which mobile devices gather data with the aim of performing a specific application. I propose a new algorithm that addresses the resource management issue so that MCS tasks are fairly assigned to the objects, with the objective of maximizing the lifetime of the task groups. In a further step, I propose an integration of vehicle networks into the SIoT, leading to the new paradigm of the Social Internet of Vehicles (SIoV).
In this regard, I show results of software simulations analyzing a realistic vehicular mobility trace in order to study the characteristics of the resulting social network structure. Additionally, I present an implementation of a SIoV-enabled system and its integration with the Lysis platform. Finally, I investigate the issues related to porting the Lysis architecture onto edge cloud infrastructures. Objects might be located far from the datacenter hosting the conventional cloud, resulting in long delays and inefficient use of communication resources. The thesis investigates how to address these issues by exploiting computing resources at the edge of the network to host the virtual objects of the SIoT, and provides early experimental results. Additionally, a solution is presented that integrates the SIoT concept into the architecture proposed within the INPUT project. More specifically, it exploits the INPUT feature that allows the virtual representation of a smart/social object to run in the access router nearest to the physical object. In this way, delay is expected to decrease and the efficiency of network resource usage to increase.
10

MILIA, GABRIELE. „Cloud-based solutions supporting data and knowledge integration in bioinformatics“. Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266783.

Abstract:
In recent years, advances in computing have changed the way science progresses and have boosted in silico studies; as a result, the concept of "scientific research" in bioinformatics has quickly shifted from the idea of a local laboratory activity towards Web applications and databases provided over the network as services. Thus, biologists have become among the largest beneficiaries of information technologies, reaching and surpassing the traditional ICT users who operate in the fields of so-called "hard science" (i.e., physics, chemistry, and mathematics). Nevertheless, this evolution has to deal with several aspects (including the data deluge, data integration, and scientific collaboration, to cite just a few) and presents new challenges related to proposing innovative approaches within the wide scenario of emerging ICT solutions. This thesis aims at facing these challenges in the context of three case studies, each devoted to a specific open issue, proposing solutions in line with recent advances in computer science. The first case study focuses on the task of unearthing and integrating information from different web resources, each with its own organization, terminology and data formats, in order to provide users with a flexible environment for accessing these resources and smartly exploring their content. The study explores the potential of the cloud paradigm as an enabling technology to severely curtail issues associated with the scalability and performance of applications supporting this task. Specifically, it presents Biocloud Search EnGene (BSE), a cloud-based application for searching and integrating biological information made available by public large-scale genomic repositories. BSE is publicly available at: http://biocloud-unica.appspot.com/.
The second case study addresses scientific collaboration on the Web, with special focus on building a semantic network in which team members, adequately supported by easy access to biomedical ontologies, define and enrich network nodes with annotations derived from the available ontologies. The study presents a cloud-based application called Collaborative Workspaces in Biomedicine (COWB), which supports users in constructing the semantic network by organizing, retrieving and creating connections between contents of different types. Public and private workspaces provide an accessible representation of the collective knowledge, which is incrementally expanded. COWB is publicly available at: http://cowb-unica.appspot.com/. Finally, the third case study concerns knowledge extraction from very large datasets. It investigates the performance of random forests in classifying microarray data, and in particular faces the problem of reducing the contribution of trees whose nodes are populated by non-informative features. Experiments are presented and their results analyzed in order to draw guidelines on how to reduce this contribution. With respect to the previously mentioned challenges, this thesis makes two contributions. First, the potential of cloud technologies is evaluated for developing applications that support access to bioinformatics resources and collaboration, improving awareness of users' contributions and fostering user interaction. Second, the positive impact of the decision support offered by random forests is demonstrated as an effective way to tackle the curse of dimensionality.
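The core idea of reducing the contribution of trees built on non-informative features can be sketched as a weighted majority vote over per-tree predictions. The class labels and weights below are invented for illustration; the thesis's actual weighting scheme may differ:

```python
from collections import defaultdict

def weighted_vote(tree_predictions, tree_weights):
    """Combine per-tree class votes, down-weighting unreliable trees.

    Each weight is meant to reflect how much a tree relied on informative
    features (illustrative values here, not the thesis's exact measure).
    """
    scores = defaultdict(float)
    for pred, weight in zip(tree_predictions, tree_weights):
        scores[pred] += weight
    return max(scores, key=scores.get)

# Three trees vote "healthy" and two vote "tumor", but the "healthy" trees
# were built mostly on noise features, so they receive low weights.
preds   = ["healthy", "healthy", "healthy", "tumor", "tumor"]
weights = [0.2, 0.2, 0.1, 0.9, 0.8]
print(weighted_vote(preds, weights))  # prints "tumor": 1.7 vs 0.5
```

An unweighted majority vote over the same predictions would return "healthy"; the weighting is what lets the informative trees dominate.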
11

De Souza, Felipe Rodrigo. „Scheduling Solutions for Data Stream Processing Applications on Cloud-Edge Infrastructure“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN082.

Abstract:
Technology has evolved to a point where applications and devices are highly connected and produce ever-increasing amounts of data used by organizations and individuals to make daily decisions. For the collected data to become information that can be used in decision making, it requires processing. The speed at which information is extracted from data generated by a monitored system or environment affects how fast organizations and individuals can react to changes. One way to process the data under short delays is through Data Stream Processing (DSP) applications. DSP applications can be structured as directed graphs, where the vertexes are data sources, operators, and data sinks, and the edges are streams of data that flow throughout the graph. A data source is an application component responsible for data ingestion. Operators receive a data stream, apply some transformation or user-defined function over the incoming data stream and produce a new output stream, until the latter reaches a data sink, where the data is stored, visualized or provided to another application. DSP applications are usually designed to run on cloud infrastructure or on a homogeneous cluster, because of the number of resources these infrastructures can provide and their good network connectivity. In scenarios where the data consumed by the DSP application is produced in the cloud itself, deploying the whole application in the cloud is a sensible approach. However, as the Internet of Things becomes increasingly pervasive, there is a growing number of scenarios where DSP applications consume data streams generated at the edge of the network by geographically distributed devices and sensors. In such scenarios, sending all the data over the Internet to be processed in a distant cloud, far from the network edge where the data originates, would generate considerable network traffic and significantly increase the application's end-to-end latency, that is, the delay between the moment the data is collected and the end of its processing. Edge computing has emerged as a paradigm for offloading processing tasks from the cloud to resources located closer to the data sources. Although the combined use of these resources is sometimes called fog computing, the community has not reached a consensus on the terminology. We call the combination of cloud and edge resources a cloud-edge infrastructure.
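The source → operator → sink structure described in the abstract can be sketched with Python generators. This is a minimal illustration of the graph model, not code from the thesis; the sensor readings and the anomaly threshold are invented:

```python
def source(records):
    """Data source: ingests raw records into the stream."""
    yield from records

def operator(stream, fn):
    """Operator: applies a user-defined function to each stream element."""
    for item in stream:
        yield fn(item)

def sink(stream):
    """Data sink: here it simply collects results (could store or visualize)."""
    return list(stream)

# A two-operator pipeline over edge-sensor readings:
# parse the raw values, then flag readings above an illustrative threshold.
raw = ["42.0", "17.5", "250.0"]
parsed = operator(source(raw), float)
flagged = operator(parsed, lambda v: (v, v > 100))
result = sink(flagged)
print(result)  # [(42.0, False), (17.5, False), (250.0, True)]
```

In a real DSP system each operator would be a long-running task placed on a cloud or edge resource; the generator chain only mirrors the directed-graph data flow.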
12

FERNANDES, CARVALHO DHIEGO. „New LoRaWAN solutions and cloud-based systems for e-Health applications“. Doctoral thesis, Università degli studi di Brescia, 2021. http://hdl.handle.net/11379/544075.

13

Reyes, Eumir P. (Eumir Paulo Reyes Morales). „A systems thinking approach to business intelligence solutions based on cloud computing“. Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59267.

Abstract:
Thesis (S.M. in System Design and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 73-74).
Business intelligence is the set of tools, processes, practices and people used to take advantage of information to support decision making in organizations. Cloud computing is a new paradigm for offering computing resources that work on demand, are scalable and are charged for by the time they are used; organizations can save large amounts of money and effort using this approach. This document identifies the main challenges companies encounter while working on business intelligence applications in the cloud, such as security, availability, performance, integration, regulatory issues, and constraints on network bandwidth. All these challenges are addressed with a systems thinking approach, and several solutions are offered that can be applied according to the organization's needs. An evaluation of the main vendors of cloud computing technology is presented, so that business intelligence developers can identify the available tools and the companies they can depend on to migrate or build applications in the cloud. It is demonstrated how business intelligence applications can increase their availability with a cloud computing approach, by decreasing the mean time to recovery (handled by the cloud service provider) and increasing the mean time to failure (achieved by introducing more redundancy in the hardware). Innovative mechanisms for improving cloud applications are discussed, such as private, public and hybrid clouds, column-oriented databases, in-memory databases and the Data Warehouse 2.0 architecture. Finally, it is shown how the project management of a business intelligence application can be facilitated with a cloud computing approach: design structure matrices are dramatically simplified by avoiding unnecessary iterations while sizing, validating, and testing hardware and software resources.
by Eumir P. Reyes.
S.M. in System Design and Management
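The availability argument in the abstract follows the standard steady-state formula A = MTTF / (MTTF + MTTR): lowering MTTR and raising MTTF both push availability toward 1. A small sketch with invented numbers (not figures from the thesis):

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Illustrative scenario: a cloud provider both shortens recovery
# (lower MTTR) and adds hardware redundancy (higher MTTF).
on_premises = availability(mttf_hours=1000, mttr_hours=8)
cloud       = availability(mttf_hours=4000, mttr_hours=1)
print(f"on-premises: {on_premises:.4f}, cloud: {cloud:.4f}")
```

With these numbers availability rises from roughly 0.992 to roughly 0.9998, which is the mechanism the abstract describes, independent of the specific values.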
14

GARAU, GIANFRANCO. „Cloud-based solutions improving transparency, openness and efficiency of open government data“. Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/249540.

Der volle Inhalt der Quelle
Annotation:
A central pillar of open government programs is the disclosure of data held by public agencies using Information and Communication Technologies (ICT). This disclosure relies on the creation of open data portals (e.g. Data.gov) and has subsequently been associated with the expression Open Government Data (OGD). The overall goal of these governmental initiatives is not limited to enhancing the transparency of public sectors but aims to raise awareness of how released data can be put to use in order to enable the creation of new products and services by private sectors. Despite the usage of technological platforms to facilitate access to government data, open data portals continue to be organized to serve the goals of public agencies without opening the doors to public accountability, information transparency, public scrutiny, etc. This thesis considers the basic aspects of OGD, including the definition of technical models for organizing such complex contexts, the identification of techniques for combining data from several portals, and the proposal of user interfaces that focus on citizen-centred usability. In order to deal with the above issues, this thesis presents a holistic approach to OGD that aims to go beyond the problems inherent in their simple disclosure by providing a tentative answer to the following questions: 1) To what extent do OGD-based applications contribute towards the creation of innovative, value-added services? 2) What technical solutions could increase the strength of this contribution? 3) Can Web 2.0 and Cloud technologies favour the development of OGD apps? 4) How should a common framework be designed for developing OGD apps that rely on multiple OGD portals and external web resources? In particular, this thesis focuses on devising computational environments that leverage the content of OGD portals (supporting the initial phase of data disclosure) for the creation of new services that add value to the original data.
The thesis is organized as follows. In order to offer a general view of OGD, some important aspects of open data initiatives are presented, including their state of the art, the existing approaches for publishing and consuming OGD across web resources, and the factors shaping the value generated through government data portals. Then, an architectural framework is proposed that gathers OGD from multiple sites and supports the development of cloud-based apps that leverage these data according to potentially different exploitation routes, ranging from traditional business to specialized supports for citizens. The proposed framework is validated by two cloud-based apps, namely ODMap (Open Data Mapping) and NESSIE (A Network-based Environment Supporting Spatial Information Exploration). In particular, ODMap supports citizens in searching and accessing OGD from several web sites. NESSIE organizes data captured from real estate agencies and public agencies (i.e. municipalities, cadastral offices and chambers of commerce) in order to provide citizens with a geographic representation of real estate offers and relevant statistics about the price trend.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
15

Zhao, Jian, und 趙建. „Performance modeling and optimization solutions for networking systems“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/196434.

Der volle Inhalt der Quelle
Annotation:
This thesis targets modeling and resolving practical problems, using mathematical tools, in two representative networking systems of today: the peer-to-peer (P2P) video streaming system and the cloud computing system. In the first part, we study how to mitigate the following tussle between content service providers and ISPs in P2P video streaming systems: network-agnostic P2P protocol designs bring large volumes of inter-ISP traffic and increase the traffic relay cost of ISPs; in turn, ISPs start to throttle P2P packets, which significantly deteriorates P2P streaming performance. First, we investigate the problem in a mesh-based P2P live streaming system. We use end-to-end streaming delays as the performance measure, and quantify the amount of inter-ISP traffic with the number of copies of the live streams imported into each ISP. Considering multiple ISPs at different bandwidth levels, we model the generic relationship between the volume of inter-ISP traffic and streaming performance, which provides useful insights on the design of effective locality-aware peer selection protocols and server deployment strategies across multiple ISPs. Next, we study a similar problem in a hybrid P2P-cloud CDN system for VoD streaming. We characterize the relationship between the costly bandwidth consumption from the cloud CDN and the inter-ISP traffic. We apply a loss network model to derive the bandwidth consumption under any given chunk distribution pattern among peer caches and any streaming request dispatching strategy among ISPs, and derive the optimal peer caching and request dispatching strategies which minimize the bandwidth demand from the cloud CDN. Based on the fundamental insights from our analytical results, we design a locality-aware, hybrid P2P-cloud CDN streaming protocol. In the second part, we study the profit maximization and cost minimization problems in Infrastructure-as-a-Service (IaaS) cloud systems.
The first problem is how a geo-distributed cloud system should price its datacenter resources at different locations, such that its overall profit is maximized over long-term operation. We design an efficient online algorithm for dynamic pricing of VM resources across datacenters, together with job scheduling and server provisioning in each datacenter, to maximize the cloud's profit over the long run. Theoretical analysis shows that our algorithm can schedule jobs within their respective deadlines, while achieving a time-averaged overall profit closely approaching the offline maximum, which is computed by assuming perfect information on future job arrivals is freely available. The second problem is how federated clouds should trade their computing resources among each other to reduce costs, by exploiting the diversity of different clouds' workloads and operational costs. We formulate a global cost minimization problem among multiple clouds under the cooperative scenario where each individual cloud's workload and cost information is publicly available. Taking into consideration jobs of disparate lengths, a non-preemptive approximation algorithm for leftover job migration and new job scheduling is designed. Given the selfishness of individual clouds, we further design a randomized double auction mechanism to elicit clouds' truthful bidding for buying or selling virtual machines. The auction mechanism is proven to be truthful, and to guarantee the same approximation ratio as the cooperative approximation algorithm achieves.
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO und andere Zitierweisen
16

Ibatuan, Charles R. II. „Cloud computing solutions for the Marine Corps: an architecture to support expeditionary logistics“. Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37643.

Der volle Inhalt der Quelle
Annotation:
Approved for public release; distribution is unlimited
The Department of Defense (DoD) is planning an aggressive move toward cloud computing technologies. This concept has been circulating in the private information technology sector for a number of years and has benefited organizations with cost savings, increased efficiency, and flexibility by sharing computer resources through networked connections. The push for cloud computing has been driven by the 25 Point Implementation Plan to Reform Federal Information Technology Management, which highlighted the shift to a cloud-first policy. The cloud-first policy has driven the DoD, and specifically the Marine Corps, toward cloud computing technologies, making this relatively new paradigm inevitable. The Marine Corps has provided its cloud computing guidance through its Private Cloud Computing Environment Strategy. However, the urgency for the Marine Corps to implement a cloud computing architecture that will support enhanced logistical systems in an expeditionary environment needs to be tempered by a comprehensive evaluation of current cloud computing technologies, virtualization technologies, and local versus remote logistical data types and subsets. The goal of this thesis is to explore and analyze current cloud computing architectures and virtualization technologies in order to determine and develop a cloud computing architecture that best supports expeditionary logistics for the Marine Corps.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Andell, Oscar, und Albin Andersson. „Slow rate denial of service attacks on dedicated- versus cloud based server solutions“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148480.

Der volle Inhalt der Quelle
Annotation:
Denial of Service (DoS) attacks remain a serious threat to internet stability. A specific kind of low-bandwidth DoS attack, called a slow rate attack, can with very limited resources potentially cause major interruptions to the availability of the attacked web servers. This thesis examines the impact of slow rate application layer DoS attacks against three different server solutions: a static cloud solution and a load-balancing cloud solution running on Amazon Web Services (AWS), as well as a dedicated server. To identify the impact in terms of responsiveness and service availability, a number of experiments were conducted on the web servers using publicly available DoS tools, and the response times of the requests were measured. The results show that the dedicated and static cloud based server solutions are severely impacted by the attacks, while the AWS load-balancing cloud solution is not impacted nearly as much. We conclude that all solutions were impacted by the attacks and that the readily available DoS tools are sufficient for creating a denial of service state on certain web servers.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Ali, Ayaz. „Analysis of key security and privacy concerns and viable solutions in Cloud computing“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-90806.

Der volle Inhalt der Quelle
Annotation:
Cloud security and privacy is the area of greatest concern in the development of newly advanced technological domains such as cloud computing, the Cloud of Things, and the Internet of Things. However, with the growing popularity and diverse nature of these technologies, security and privacy are becoming intricate matters and affect the adoption of cloud computing. Many small and large enterprises are conflicted about migrating towards cloud technology, because there is no proper cloud adoption policy guideline, no generic solutions for system migration issues, and no systematic models to analyze the security and privacy performance provided by different cloud models. Our literature review focuses on these problems and identifies solutions in the categories of security and privacy. A comprehensive analysis of various techniques published during 2010-2018 is presented. We have reviewed 51 studies, including research papers and systematic literature reviews, on the factors of security and privacy. After analysis, the papers were classified into 5 major categories to obtain appropriate solutions for the objectives of this study. This work will help researchers, as well as companies, to select appropriate guidelines when adopting cloud services.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Bulusu, Santosh, und Kalyan Sudia. „A Study on Cloud Computing Security Challenges“. Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2820.

Der volle Inhalt der Quelle
Annotation:
Context: Scientific computing in the 21st century has evolved from fixed to distributed work environments. The current trend of Cloud Computing (CC) allows business applications to be accessed from anywhere just by connecting to the Internet. Evidence shows that, by switching to CC, organizations' annual expenditure and maintenance costs are reduced to a great extent. However, several challenges come along with the various benefits of CC, among them security aspects. Objectives: This thesis aims to identify security challenges for adopting cloud computing, along with their solutions, gathering real-world mitigation strategies for the challenge that has no proper mitigation strategies identified through the literature review. For this, the objectives are: to identify existing cloud computing security challenges and their solutions; to identify the challenges that have no mitigation strategies; and to gather solutions/guidelines/practices from practitioners for the challenge with the most references but no mitigation strategies identified in the literature. Methods: This study presents a literature review with snowball sampling to identify CC security challenges and their solutions/mitigation strategies. The literature review is based on searches in electronic databases, and the snowball sample is based on the primary studies searched and selected from those databases. Using the challenges and solutions identified in the literature review, the challenges with no mitigation strategies are identified, and from these, the challenge with the most references is selected. Surveys are employed in the later stages to identify mitigation strategies for this challenge. Finally, the survey results are discussed in a narrative fashion. Results: 43 challenges and 89 solutions are identified from the literature review using snowball sampling. In addition to these mitigation strategies, a few guidelines are also identified.
The challenge with the most references (i.e., the most articles mentioning the challenge) and no mitigation identified is incompatibility. The responses identified for the three insecure areas of incompatibility (i.e., interoperability, migration and IDM integration with CC) in cloud computing security are mostly guidelines/practices suggested by experienced practitioners. Conclusions: This study identifies cloud computing security challenges and their solutions. These challenges and solutions are common to cloud computing applications and cannot be generalized to particular service or deployment models (viz. SaaS, PaaS, IaaS, etc.). The study also finds that there are methods (guidelines/practices identified from practitioners) to provide secure interoperability, migration and integration of on-premise authentication systems with cloud applications, but these methods are developed by individual practitioners or organizations specific to their context. The study also identifies the non-existence of global standards for any of these operations (providing interoperability, migration or IDM integration with the cloud). This finding could help academics to understand the state of practice and to formulate better methods/standards for secure interoperability. The identified cloud computing security challenges (43) and solutions (89) can be consulted by practitioners to understand which areas of security need attention while adapting/migrating to a cloud computing environment.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Sthapit, Saurav. „Computation offloading for algorithms in absence of the Cloud“. Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31440.

Der volle Inhalt der Quelle
Annotation:
Mobile cloud computing is a way of delegating complex algorithms from a mobile device to the cloud, to complete tasks quickly and save energy on the mobile device. However, the cloud may not be available or suitable for helping all the time. For example, in a battlefield scenario, the cloud may not be reachable. This work considers neighbouring devices as alternatives to the cloud for offloading computation and presents three key contributions: a comprehensive investigation of the trade-off between computation and communication, a multi-objective optimisation based approach to offloading, and queuing theory based algorithms that demonstrate the benefits of offloading to neighbours. Initially, the states of neighbouring devices are considered to be known, and the computation offloading decision is posed as a multi-objective optimisation problem, for which novel Pareto optimal solutions are proposed. The results on a simulated dataset show up to a 30% improvement in performance even when cloud computing is not available. However, information about the environment is seldom known completely. In Chapter 5, a more realistic environment is considered, with delayed node state information and partially connected sensors. The network of sensors is modelled as a network of queues (an open Jackson network). The offloading problem is posed as a minimum cost problem and solved using linear solvers. In addition to the simulated dataset, the proposed solution is tested on a real computer vision dataset. The experiments on the random waypoint dataset showed up to a 33% boost in performance, whereas on the real dataset, by exploiting the temporal and spatial distribution of the targets, a significantly higher increment in performance is achieved.
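The computation-versus-communication trade-off this thesis investigates can be sketched in a few lines; the model and all numbers below are illustrative assumptions, not the author's actual formulation:

```python
# Illustrative sketch (not the thesis model): offloading to a neighbour pays
# off only when transfer time plus remote execution beats local execution.
def local_time(cycles: float, local_hz: float) -> float:
    """Time to run the task on the device itself."""
    return cycles / local_hz

def offload_time(cycles: float, data_bits: float,
                 bandwidth_bps: float, peer_hz: float) -> float:
    """Time to ship the input to a neighbour and compute there."""
    return data_bits / bandwidth_bps + cycles / peer_hz

cycles = 2e9   # assumed workload: 2 Gcycles
data = 8e6     # assumed input: 1 MB

t_local = local_time(cycles, local_hz=1e9)                      # 2.0 s
t_peer = offload_time(cycles, data, bandwidth_bps=10e6,
                      peer_hz=4e9)                              # 0.8 + 0.5 s
print("offload" if t_peer < t_local else "local")               # offload
```

A slower link or a smaller workload flips the decision, which is why the full problem needs the multi-objective treatment the abstract describes.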
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Larsson, Niklas, und Josefsson Fredrik Ågren. „A study of slow denial of service mitigation tools and solutions deployed in the cloud“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157721.

Der volle Inhalt der Quelle
Annotation:
Slow rate Denial of Service (DoS) attacks have been shown to be a very effective way of attacking vulnerable servers while using few resources. This thesis investigates the effectiveness of mitigation tools used for protection against slow DoS attacks, specifically slow header and slow body attacks. Finally, we propose a service that cloud providers could implement to ensure better protection against slow rate DoS attacks. The tools studied in this thesis are a web application firewall, a reverse proxy using an event-based architecture, and Amazon’s Elastic Load Balancing. To gather data, a realistic HTTP load script was built that simulated load on the server while using probe requests to gather response time data from the server. The script recorded the impact the attacks had for each server configuration. The results show that it is hard to protect against slow rate DoS attacks while only using firewalls or load balancers. We found that using a reverse proxy with an event-based architecture was the best way to protect against slow rate DoS attacks, and that such a service would allow the customer to use their server of choice while also being protected.
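The probe-based measurement idea can be sketched as follows; this is an assumed illustration, not the authors' actual script, and the threshold and sample values are invented:

```python
# Illustrative only: summarize probe response times to quantify how much an
# attack degrades a server. A probe is "on time" if it beats a latency budget.
def summarize(samples_ms, threshold_ms=1000.0):
    """Return (mean latency, worst latency, fraction of probes on time)."""
    on_time = sum(1 for s in samples_ms if s <= threshold_ms)
    return (sum(samples_ms) / len(samples_ms),
            max(samples_ms),
            on_time / len(samples_ms))

# Invented probe data: before and during a simulated slow-rate attack.
baseline = [42.0, 55.0, 48.0, 51.0]
under_attack = [180.0, 2500.0, 5000.0, 4200.0]

print(summarize(baseline))      # low latency, every probe on time
print(summarize(under_attack))  # degraded latency, most probes late
```

Comparing the two summaries per server configuration is one simple way to express "impact" as the thesis measures it.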
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Paolucci, Cristian. „Prototyping a scalable Aggregate Computing cluster with open-source solutions“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15716/.

Der volle Inhalt der Quelle
Annotation:
The Internet of Things is a concept that has now been pervasively adopted to describe a vast set of devices connected through the Internet. Commonly, IoT systems are built with a bottom-up approach and focus mainly on the single device, which is seen as the basic programmable unit. From this method, a common behaviour found in many existing systems can emerge from the interaction of individual devices. However, this often creates a distributed application whose components are tightly coupled. As such applications grow in complexity, they tend to suffer from design problems, lack of modularity and reusability, deployment difficulties, and testing and maintenance issues. Aggregate Programming provides a top-down approach to these systems, in which the basic unit of computation is an aggregate rather than a single device. This thesis consists of the design and deployment of a platform, based on open-source technologies, to support Aggregate Computing in the cloud, where devices are able to dynamically choose whether the computation takes place on themselves or in the cloud. Although Aggregate Computing is inherently designed for distributed computation, Cloud Computing introduces a scalable, reliable and highly available alternative as an execution strategy. This work describes how to exploit a Reactive Platform to build a scalable application in the cloud. After the structure, interaction and behaviour of the application have been designed, it is described how its components are deployed through a containerization approach, with Kubernetes as the orchestrator managing the desired state of the system under a Continuous Delivery strategy.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Guardo, Ermanno Lorenzo. „Edge Computing: challenges, solutions and architectures arising from the integration of Cloud Computing with Internet of Things“. Doctoral thesis, Università di Catania, 2018. http://hdl.handle.net/10761/3908.

Der volle Inhalt der Quelle
Annotation:
The rapid spread of the Internet of Things (IoT) is causing exponential growth in the number of objects connected to the network; in fact, according to estimates, in 2020 there will be about 3 to 4 devices per person, totaling over 20 billion connected devices. Consequently, the use of content that requires intensive bandwidth consumption is growing. In order to meet these growing needs, computing power and storage space are transferred to the network edge to reduce network latency and increase bandwidth availability. Edge computing brings high-bandwidth content and latency-sensitive apps closer to the user or data source, and for many IoT applications it is preferable to cloud computing. Its distributed approach addresses the needs of IoT and industrial IoT, as well as the immense amount of data generated by smart sensors and IoT devices, which would be costly and time-consuming to send to the cloud for processing and analysis. Edge computing reduces both the bandwidth needed and the communication between sensors and the cloud, which can otherwise negatively affect IoT performance. The goal of edge computing is to improve efficiency and reduce the amount of data transported to the cloud for processing, analysis and storage. The research activity carried out during the three years of the Ph.D. program focused on the study, design and development of architectures and prototypes based on edge computing in various contexts, such as smart cities and agriculture. In this respect, the well-known paradigms of Fog Computing and Mobile Edge Computing have been addressed. This thesis discusses the work carried out through the exploitation of the Fog Computing and Mobile Edge Computing paradigms, which are considered suitable solutions to address the challenges of the fourth industrial revolution.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Smeltzer, Brandon. „The solubility of Triton X-114 And Tergitol 15-S-9 in high pressure carbon dioxide solutions“. [Tampa, Fla] : University of South Florida, 2005. http://purl.fcla.edu/usf/dc/et/SFE0001454.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Carvalho, Márcio Barbosa de. „Um framework para a construção automatizada de cloud monitoring slices baseados em múltiplas soluções de monitoramento“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/118256.

Der volle Inhalt der Quelle
Annotation:
Computação em nuvem é um paradigma em que provedores oferecem recursos computacionais como serviços, que são contratados sob demanda e são acessados através da Internet. Os conjuntos de recursos computacionais contratados são chamados de cloud slices, cujo monitoramento fornece métricas essenciais para atividades como a operação da infraestrutura, verificação do cumprimento de SLAs e medição da qualidade do serviço percebida pelos usuários. Além disso, o monitoramento também é oferecido como serviço para os usuários, que podem contratar métricas ou serviços de monitoramento diferenciados para seus cloud slices. O conjunto de métricas associadas a um cloud slice juntamente com as configurações necessárias para coletá-las pelas soluções de monitoramento é chamado de monitoring slice, cuja função é acompanhar o funcionamento do cloud slice. Entretanto, a escolha de soluções para serem utilizadas nos monitoring slices é prejudicada pela falta de integração entre soluções e plataformas de computação em nuvem. Para contornar esta falta de integração, os administradores precisam implementar scripts geralmente complexos para coletar informações sobre os cloud slices hospedados na plataforma, descobrir as operações realizadas na plataforma, determinar quais destas operações precisam ser refletidas no monitoramento de acordo com as necessidades do administrador e gerar as configurações dos monitoring slices. Nesta dissertação é proposto um framework que mantém monitoring slices atualizados automaticamente quando cloud slices são criados, modificados ou destruídos na plataforma de nuvem. Neste framework, os monitoring slices são mantidos de acordo com regras predefinidas pelos administradores oferecendo a flexibilidade que não está disponível nas soluções de monitoramento atuais. Desta forma, o desenvolvimento de scripts complexos é substituído pela configuração dos componentes do framework de acordo com as necessidades dos administradores em relação ao monitoramento. 
Estes componentes realizam a integração do framework com as plataformas e soluções de monitoramento e podem já ter sido desenvolvidos por terceiros. Caso o componente necessário não esteja disponível, o administrador pode desenvolvê-lo facilmente aproveitando as funcionalidades oferecidas pelo framework. Para avaliar o framework no contexto de nuvens do modelo IaaS, foi desenvolvido o protótipo chamado FlexACMS (Flexible Automated Cloud Monitoring Slices). A avaliação do FlexACMS mostrou que o tempo de resposta para a criação de monitoring slices é independente do número de cloud slices no framework. Desta forma, foi demonstrada a viabilidade e escalabilidade do FlexACMS para a criação de monitoring slices para nuvens IaaS.
Cloud computing is a paradigm in which providers offer computing resources as services, which are acquired on demand and accessed through the Internet. The sets of acquired computing resources are called cloud slices, whose monitoring provides essential metrics for activities such as infrastructure operation, SLA supervision, and measurement of the quality of service perceived by users. Monitoring is also offered as a service to users, who can acquire differentiated metrics or monitoring services for their cloud slices. The set of metrics associated with a cloud slice, together with the configuration required for monitoring solutions to collect them, is called a monitoring slice, whose function is to keep track of the cloud slice's operation. However, the choice of monitoring solutions to compose monitoring slices is hampered by the lack of integration between solutions and cloud platforms. To overcome this lack of integration, administrators need to develop often complex scripts to collect information about the cloud slices hosted on the platform, discover the operations performed in the platform, determine which operations need to be reflected in the monitoring according to the administrator's needs, and generate the monitoring slice configuration. This dissertation proposes a framework that keeps monitoring slices updated automatically when cloud slices are created, modified, or destroyed in the cloud platform. In this framework, monitoring slices are maintained according to rules predefined by administrators, offering flexibility that is not available in current monitoring solutions. In this way, the framework replaces the development of complex scripts with the configuration of the framework's components according to the administrators' monitoring needs. These components integrate the framework with platforms and monitoring solutions and may already have been developed by third parties.
If a required component is not available, the administrator can easily develop it by leveraging the functionality offered by the framework. In order to evaluate the framework in the context of IaaS clouds, a prototype called FlexACMS (Flexible Automated Cloud Monitoring Slices) was developed. The FlexACMS evaluation showed that the response time for creating monitoring slices is independent of the number of cloud slices in the framework. In this way, the feasibility and scalability of FlexACMS for creating monitoring slices for IaaS clouds were demonstrated.
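The rule-driven maintenance of monitoring slices described above can be sketched abstractly; the event names, rule structure, and functions below are invented for illustration and are not FlexACMS's actual interface:

```python
# Illustrative sketch (invented API, not FlexACMS): when a cloud-slice event
# arrives from the platform, every rule matching the event type emits a
# monitoring-configuration fragment for that slice.
def apply_rules(event, rules):
    """Return the config fragments produced by rules matching the event."""
    return [rule["config"](event["slice"])
            for rule in rules if rule["on"] == event["type"]]

# Assumed administrator-defined rules: what to monitor on creation, what to
# tear down on destruction.
rules = [
    {"on": "slice_created",
     "config": lambda s: {"target": s, "metrics": ["cpu", "memory"]}},
    {"on": "slice_destroyed",
     "config": lambda s: {"target": s, "action": "remove"}},
]

configs = apply_rules({"type": "slice_created", "slice": "vm-42"}, rules)
print(configs)  # [{'target': 'vm-42', 'metrics': ['cpu', 'memory']}]
```

The point of the pattern is that administrators declare rules once instead of rewriting integration scripts for every platform change.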
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Ecarot, Thibaud. „Efficient allocation for distributed and connected Cloud“. Thesis, Evry, Institut national des télécommunications, 2016. http://www.theses.fr/2016TELE0017/document.

Der volle Inhalt der Quelle
Annotation:
Dans ce travail, nous nous intéressons à la modélisation des ressources Cloud, indépendamment des couches existantes, afin d’apporter un cadre (framework) de représentation unique et ouvert à l'arrivée anticipée du XaaS (Anything as a Service). Nous fournissons, à l'aide de ce framework, un outil de placement des ressources pour une plate-forme donnée. Les travaux de thèse se portent aussi sur la prise en compte des intérêts des utilisateurs ou consommateurs et des fournisseurs. Les solutions existantes ne se focalisent que sur l’intérêt des fournisseurs et ce au détriment des consommateurs contraints par le modèle d’affaire des fournisseurs. La thèse propose des algorithmes évolutionnaires en mesure de répondre à cet objectif
This thesis focuses on optimal and suboptimal allocation of cloud resources from infrastructure providers, taking into account both the users' (or consumers') and the providers' interests in the mathematical modeling of this joint optimization problem. Compared to the state of the art, which has so far remained provider-centric, our algorithms optimize the dynamic allocation of cloud resources while taking into account both the users' and the providers' objectives and requirements, and consequently free the users from provider lock-in (providers' business interests). Evolutionary algorithms are proposed to address this challenge and compared to the state of the art.
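The user-versus-provider trade-off that such evolutionary algorithms navigate reduces to Pareto dominance over per-stakeholder objectives. Below is a toy Python sketch; the cost and profit objectives are illustrative assumptions, not the thesis's actual model:

```python
# Toy sketch of bi-objective comparison of cloud allocations: one objective
# per stakeholder (user cost to minimize, provider profit to maximize).
# The objective definitions are illustrative, not the thesis's model.

def objectives(allocation):
    """Map an allocation to (user_cost, -provider_profit); both minimized."""
    user_cost = sum(vm["price"] for vm in allocation)
    provider_profit = sum(vm["price"] - vm["op_cost"] for vm in allocation)
    return (user_cost, -provider_profit)

def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, better in one."""
    fa, fb = objectives(a), objectives(b)
    return (all(x <= y for x, y in zip(fa, fb))
            and any(x < y for x, y in zip(fa, fb)))

def pareto_front(candidates):
    """Keep allocations not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

cheap = [{"price": 2.0, "op_cost": 1.5}]      # favors the user
lucrative = [{"price": 5.0, "op_cost": 1.0}]  # favors the provider
bad = [{"price": 5.0, "op_cost": 4.5}]        # worse for both stakeholders

front = pareto_front([cheap, lucrative, bad])
```

An evolutionary algorithm would use such a dominance test inside its selection step, so that neither stakeholder's objective is optimized at the other's expense.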
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Khalil, Sabine. „La mise en oeuvre des solutions "cloud" dans les grandes entreprises françaises : au-delà de la gouvernance des TIC ?“ Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0064.

Der volle Inhalt der Quelle
Annotation:
L'avènement de l'internet a entraîné des changements majeurs dans les entreprises ces dernières décennies. De nouveaux modèles d'affaires et services ont émergé affectant les processus métiers et les modes de fonctionnement au sein des entreprises. L'adoption des solutions cloud n'a fait qu'accentuer ces transformations. Si ces solutions ont permis d’améliorer l’automatisation des processus, d'accroître l'agilité organisationnelle, de réduire le time-to-market, et d'assurer des services informatiques à la demande, elles ont également engendré de nouveaux risques pour les entreprises liés à la sécurité, la fiabilité des services, et même la nécessité de nouvelles compétences spécifiques. Comme pour la gouvernance des TIC, les entreprises doivent gouverner leurs solutions cloud afin d'en tirer le maximum davantage et de réduire les risques associés. Bien que de nombreux travaux se soient intéressés à la gouvernance des TIC, peu se sont penchés sur la manière dont les entreprises gouvernent leurs solutions cloud. A cet effet, nous avons décidé de mener une étude qualitative, basés sur la conduite d'entretiens, auprès de 35 grandes entreprises françaises ayant adopté des solutions cloud. Cette étude nous a permis d'explorer les modèles de gouvernance déployés dans les entreprises françaises et d'identifier les liens éventuels entre le modèle de gouvernance déployé et les niveaux d'intensité d'adoption des solutions cloud. Ce travail de thèse met en évidence les différents impacts liés à l'adoption du cloud et souligne l'émergence de plusieurs modèles de gouvernance au sein des entreprises interrogées. Cependant différents facteurs de contingence semblent influencer ces modèles de gouvernance
Throughout the last decades, the Internet has brought a myriad of innovative services into organizations. Cloud computing is one of the information technologies that have transformed organizations: it enhances automation and agility, allows scalability and ubiquity, and reduces time-to-market. However, previous research in the IT field has taught us that organizations cannot survive in a competitive market without first investing in IT and then effectively governing it. The extensive list of organizations that failed because of poor IT governance has raised awareness of the importance of effective governance. Organizations can therefore turn their IT investments to their benefit when these investments are governed effectively. Similarly, to reduce the risks generated when adopting cloud services and to benefit from the promised advantages, organizations should effectively govern those services. Nevertheless, to our knowledge, no cloud governance model addressing the various facets of cloud computing exists. Motivated by this primordial need for governance, this work explores the cloud governance adopted by large French organizations and asks whether it can be achieved through the organizations' IT governance. In addition, it studies the possible link between an organization's effective cloud governance and the intensity level of its cloud adoption. To meet these objectives, we conducted 35 interviews with large French organizations that have adopted cloud services. The two rounds of interviews highlight numerous major impacts of cloud computing. From the results emerge several possible governance models for adopting cloud services in large organizations, along with various factors affecting this governance. Finally, we stress the originality of our contributions to the IT and cloud governance literature and highlight the theoretical and practical implications for the Information Systems community.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Meenakshi, Sundaram Vignesh. „Developing Bleeding-edge microservice solutions for complex problems : Non-intrusive technology in Walking Meetings“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214670.

Der volle Inhalt der Quelle
Annotation:
The last decade has seen an emergence of various types of cloud services and development frameworks offered by leading companies in the software industry. While each of these services has been used to solve specific tasks, their specifications have changed over time as they have matured. Therefore, integrating these components to solve a whole new task tends to get tricky due to their incompatible and experimental nature. While some technology components might continue to be developed, others might deprecate. In this thesis, using a user-centered design and agile development approach, we have attempted to develop a cloud solution using microservice software architecture by integrating state-of-the-art technology components to solve a totally new task of providing a non-intrusive technology experience during walking meetings. We present our results based on interaction with the research group, user studies as a part of the research study “Movement of the mind”, and expectations of the working prototype within the context of walking meetings. We also present the features of the prototype and our motivation for choosing the tools to develop them. Finally, we discuss the development challenges faced during our attempt and conclude whether it is plausible to integrate various components of bleeding-edge technology to solve complex real-life problems or rather wait for these technologies to mature.
Under det senaste decenniet har marknaden erbjudits en mängd olika typer av molntjänster och utvecklings-ramverk framtagna av ledande företag inom mjukvaruindustrin. Dessa tjänster har ofta använts för att lösa specifika uppgifter. Olika komponenterna som ingår i dessa specifika lösningar har med tiden utvecklats ändrats allteftersom de har mognat. Att integrera dessa komponenter för att lösa en helt ny uppgift tenderar därför att bli svårt på grund av deras instabila, inkompatibla och experimentella karaktär. Medan vissa teknikkomponenter kan fortsätta att utvecklas kan andra avstanna och utgå. Vi har närmat oss detta problemområde genom agil och iterativ utveckling samt användar-centrerad design-metod. En moln-baserad lösning som bland annat integrerat bleeding-edge teknikkomponenter har utvecklats och utvärderats med syfte att ge en icke-påträngande tekniskt support för gå-möten. De resultat som här presenteras och diskuteras baseras på interaktion med forskargruppen inom projektet "Med rörelse i tankarna", användarstudier och användartesteter i fält på olika arbetsplatser där den prototyp som utvecklats sökt motsvara användarnas utryckta förväntningar på tekniskt support för gångmöten. Vi diskuterar också prototypens egenskaper och vår motivation för val av metoder för att utveckla den. Slutligen diskuterar vi de utvecklingsutmaningar vi ställdes inför under vårt försök och om det är rimligt att integrera olika bleeding-edge komponenter för att lösa komplexa verkliga problem eller huruvida man hellre bör vänta på att dessa teknologier nått en stabilare mognadsgrad.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Labes, Stine [Verfasser], Rüdiger [Akademischer Betreuer] Zarnekow, Rüdiger [Gutachter] Zarnekow und Stefan [Gutachter] Tai. „Towards successful business models of cloud service providers through cooperation-based solutions / Stine Labes ; Gutachter: Rüdiger Zarnekow, Stefan Tai ; Betreuer: Rüdiger Zarnekow“. Berlin : Technische Universität Berlin, 2017. http://d-nb.info/1156270219/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Zhang, Fei [Verfasser], Ramin [Akademischer Betreuer] Yahyapour, Ramin [Gutachter] Yahyapour, Fu [Gutachter] Xiaoming und Jin [Gutachter] Hai. „Challenges and New Solutions for Live Migration of Virtual Machines in Cloud Computing Environments / Fei Zhang ; Gutachter: Ramin Yahyapour, Fu Xiaoming, Jin Hai ; Betreuer: Ramin Yahyapour“. Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2018. http://d-nb.info/1160442126/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Signorini, Matteo. „Towards an internet of trust: issues and solutions for identification and authentication in the internet of things“. Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/350029.

Der volle Inhalt der Quelle
Annotation:
La Internet de las Cosas está avanzando lentamente debido a la falta de confianza en dispositivos que puedan interactuar de manera autónoma. Además, se requieren nuevos enfoques para mitigar o al menos paliar ataques de atacantes omnipresentes cada vez más poderosos. Para hacer frente a estas cuestiones, esta tesis investiga los conceptos de identidad y autenticidad. En cuanto a la identidad, se propone un novedoso enfoque sensible al contexto basado en la tecnología de cadenas de bloques donde el paradigma estándar se sustituye por un enfoque basado en la identificación de atributos. Referente a la authentication, nuevas propuestas permiten validar mensajes en escenarios en línea y fuera de línea. Además, se introduce un nuevo enfoque basado en software para escenarios en línea que proporciona propiedades intrínsecas de hardware independientemente de elementos físicos. Por último, la tecnología PUF permite el diseño novel de protocolos de autenticación en escenarios donde sin conexión.
The Internet of Things is advancing slowly due to the lack of trust in devices that can autonomously interact. Standard solutions and new technologies have strengthened its security, but ubiquitous and powerful attackers are still an open problem that requires novel approaches. To address the above issues, this thesis investigates the concepts of identity and authenticity. As regards identity, a new context-aware and self-enforced approach based on the blockchain technology is proposed. With this solution, the standard paradigm focused on fixed identifiers is replaced with an attribute-based identification approach that delineates democratically approved names. With respect to authentication, new approaches are analyzed from both the online and offline perspective to enable smart things in the validation of exchanged messages. Further, a new software approach for online scenarios is introduced which provides hardware-intrinsic properties without relying on any physical element. Finally, PUF technology is leveraged to design novel offline disposable authentication protocols.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Ziebell, Robert-Christian. „Digital transformation of HR - History, implementation approach and success factors - Cumulative PhD Thesis“. Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/117608.

Der volle Inhalt der Quelle
Annotation:
[ES] La digitalización de los procesos de RRHH en soluciones basadas en la nube progresa continuamente. Esta tesis examina tales transformaciones, deriva un modelo de proceso concreto e identifica los factores críticos de éxito. La metodología utilizada para la investigación es de carácter cualitativo. Como base y medida preparatoria para abordar las cuestiones de investigación, se llevó a cabo un amplio estudio bibliográfico en el ámbito de los recursos humanos, con especial atención a las publicaciones sobre la gestión electrónica de los recursos humanos (en adelante, "e-HRM"). Basándose en este conocimiento y en la combinación de una amplia experiencia práctica con proyectos de transformación de RRHH, se publicó un estudio que presenta el desarrollo histórico de e-HRM y que ha derivado en un modelo de procesos optimizado que tiene en cuenta los requisitos técnicos de RRHH así como las limitaciones de la nueva tecnología de la nube. Posteriormente, se entrevistó a varios expertos en RRHH que ya habían adquirido experiencia de primera mano con los procesos de RRHH en un entorno de nube para averiguar qué factores de éxito eran relevantes para dicha transformación de RRHH. Las principales conclusiones de esta tesis son la derivación de un modelo de procedimiento de proyecto de mejores prácticas para la transformación de los procesos de RRHH en una solución basada en la nube y la identificación de obstáculos potenciales en la implementación de dichos proyectos. Además, se elaboran los motivos de dicha transformación, los factores que impulsan el proceso dentro de una organización, el grado actual de digitalización de los recursos humanos, los parámetros operativos y estratégicos necesarios y, en última instancia, el impacto en los métodos de trabajo. Como resultado, se realiza una evaluación del uso de las métricas de HR y se derivan nuevas ratios potenciales.
[CAT] La digitalització dels processos de RRHH en solucions basades en el núvol progressa contínuament. Aquesta tesi examina tals transformacions, deriva un model de procés concret i identifica els factors crítics d'èxit. La metodologia utilitzada per a la investigació és de caràcter qualitatiu. Com a base i mesura preparatòria per a abordar les qüestions d'investigació, es va dur a terme un ampli estudi bibliogràfic en l'àmbit dels recursos humans, amb especial atenció a les publicacions sobre la gestió electrònica dels recursos humans (en endavant, "e-HRM "). Basant-se en aquest coneixement i en la combinació d'una àmplia experiència pràctica amb projectes de transformació de RRHH, es va publicar un estudi que presenta el desenvolupament històric d'e-HRM i que ha derivat en un model de processos optimitzat que té en compte els requisits tècnics de RRHH així com les limitacions de la nova tecnologia del núvol. Posteriorment, es va entrevistar a diversos experts en RRHH que ja havien adquirit experiència de primera mà amb els processos de RRHH en un entorn de núvol per esbrinar quins factors d'èxit eren rellevants per a aquesta transformació de RRHH. Les principals conclusions d'aquesta tesi són la derivació d'un model de procediment de projecte de millors pràctiques per a la transformació dels processos de RRHH en una solució basada en el núvol i la identificació d'obstacles potencials en la implementació d'aquests projectes. A més, s'elaboren els motius de la transformació, els factors que impulsen el procés dins d'una organització, el grau actual de digitalització dels recursos humans, els paràmetres operatius i estratègics necessaris i, en última instància, l'impacte en els mètodes de treball . Com a resultat, es realitza una avaluació de l'ús de les mètriques de HR i es deriven nous ràtios potencials.
[EN] The digitisation of HR processes into cloud-based solutions is progressing continuously. This thesis examines such transformations, derives a concrete process model and identifies the critical success factors. The methodology used for the investigation is of a qualitative nature. As a basis and preparatory measure to address the research questions, an extensive literature study in the HR field was carried out, with a special focus on publications on electronic human resources management (hereinafter e-HRM). Based on this knowledge and the combination of extensive practical experience with HR transformation projects, a study was published which presents the historical development of e-HRM and derived an optimised process model taking into account the technical HR requirements as well as the limitations of the new cloud technology. Subsequently, several HR experts who had already gained first-hand experience with HR processes in a cloud environment were interviewed to find out which success factors were relevant for such an HR transformation. Main findings of this thesis are the derivation of a best-practice project procedure model for the transformation of HR processes into a cloud-based solution and the identification of potential obstacles in the implementation of such projects. In addition, the motives for such a transformation, the drivers within an organisation, the current degree of HR digitisation, the necessary operational and strategic parameters and ultimately the impact on working methods are worked out. As a further result, an assessment of the use of HR metrics is given and potential new key figures are derived.
Ziebell, R. (2019). Digital transformation of HR - History, implementation approach and success factors - Cumulative PhD Thesis [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/117608
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Kűfner, Jiří. „Transformace obchodního modelu IT průmyslu směrem k SaaS a její důsledky na roli podnikové informatiky v oblasti Business Intelligence“. Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193382.

Der volle Inhalt der Quelle
Annotation:
The theme of this thesis is one of the directions in which the area of Business Intelligence has developed over the last few years: BI systems designed on the principle of cloud computing, also called BI SaaS. This is a cost-accessible solution that is also suitable for small companies. The work first aims to acquaint the reader with basic theoretical knowledge in the areas of cloud computing and Business Intelligence and their connections, to show the risks of these technologies, and to compare on-premise and cloud BI. It then offers a practical comparison of selected BI SaaS systems. Finally, it aims to forecast possible future paths of development in Business Intelligence.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Thibert, Emmanuel. „Thermodynamique et cinétique des solutions solides HCl-H2O et HNO3-H2O : implications atmosphériques“. Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00755697.

Der volle Inhalt der Quelle
Annotation:
The aim of our work is to contribute to the understanding of interactions between acid gases and ice, both during the atmospheric phase of ice, i.e. in clouds, and in snow after deposition on the ground. Polar gases in general, and acids in particular, interact strongly with ice, in which they can dissolve. In clouds, these interactions can strongly modify the composition of the air, and this point remains a major unknown in atmospheric chemistry. Understanding the relationship between the composition of the air and that of the ice, called the air-snow transfer function, is also essential for reconstructing the composition of paleoatmospheres from ice cores. To help elucidate these problems, we studied the incorporation of the gaseous compounds HCl and HNO3 into ice. The thermodynamic equilibrium compositions of the HCl-ice and HNO3-ice solid solutions, as a function of temperature and gas partial pressure, were obtained experimentally by measuring diffusion profiles of the gas in ice single crystals. At -15 °C, the diffusion coefficient is of the order of 10-12 cm2/s for HCl and 10-10 cm2/s for HNO3. At the same temperature, under a pressure of 6 x 10-3 Pa, HNO3 is about 25 times less soluble than HCl, with respective solubilities of 2.2 x 10-7 and 5 x 10-6 mole fraction. These data were applied to various phenomena of atmospheric interest. In the context of the air-snow transfer function, our results were compared with field data obtained in Greenland. It appears that, in snowflakes, HCl in solid solution is not in equilibrium with gaseous HCl; the HCl content of snow is determined by kinetic factors during crystal formation. The results for HNO3 suggest, by contrast, that in the analyzed snowflakes HNO3 is in equilibrium with the gas phase, probably owing to its faster diffusion kinetics. Following these results, we proposed a mechanism for the incorporation of gases into ice during crystal growth. This mechanism suggests that the relationship linking atmospheric composition to the composition of cloud ice is strongly influenced by atmospheric dynamics and, in particular, by the temperature and cooling rate during cloud formation. The laboratory data are also of interest to hydrology as applied to the composition of snowmelt water: the solubilities of HCl and HNO3 and their likely location in the firn in the case of supersaturation semi-quantitatively explain the observed preferential elution of the nitrate ion relative to the chloride ion.
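The solubilities and diffusion coefficients quoted in this abstract lend themselves to a quick order-of-magnitude check. The short Python computation below uses only the stated figures; the one-year diffusion timescale is an arbitrary illustration, not taken from the thesis:

```python
# Order-of-magnitude check of the figures quoted above (at -15 degrees C).
# The one-year timescale is an arbitrary illustration.
import math

D_HCl = 1e-12    # diffusion coefficient of HCl in ice, cm^2/s
D_HNO3 = 1e-10   # diffusion coefficient of HNO3 in ice, cm^2/s
x_HCl = 5e-6     # solubility of HCl at 6e-3 Pa, mole fraction
x_HNO3 = 2.2e-7  # solubility of HNO3 at 6e-3 Pa, mole fraction

# HNO3 is roughly 25 times less soluble than HCl, as stated:
print(round(x_HCl / x_HNO3, 1))

# Characteristic diffusion length L ~ sqrt(D * t) over one year:
t = 365.25 * 24 * 3600              # seconds in one year
print(math.sqrt(D_HCl * t) * 1e4)   # HCl: tens of micrometres
print(math.sqrt(D_HNO3 * t) * 1e4)  # HNO3: hundreds of micrometres
```

The hundredfold difference in diffusion coefficient translates into a tenfold difference in diffusion length, consistent with the abstract's claim that HNO3 equilibrates with the gas phase while HCl remains kinetically trapped.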
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Upton, Nigel Keith. „Algorithmic solution of air-pollutant cloud models“. Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304572.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Moilanen, T. (Tuomo). „Scalable cloud database solution for sensor networks“. Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201306061571.

Der volle Inhalt der Quelle
Annotation:
This Master’s thesis presents the theory, design, and implementation of a scalable cloud database solution for a sensor network, utilizing a cloud-based database-as-a-service platform. The sensor network is part of the Internet of Things project, which studies the networking of physical objects and the visualization of sensor data in a 3D environment. The challenge is to handle the massive data set the sensor network is expected to produce. This is an example of Big Data: a data set so large that it becomes difficult and time-consuming to process with conventional relational database methods. A solution can be found in NoSQL architectures. However, distributed NoSQL architectures trade off several advantages of the relational database model to achieve database scaling, query performance, and partition tolerance. In this work, the database infrastructure is acquired as a service from one of the major cloud database service providers. Several criteria were used to choose the database infrastructure service, such as scalability, absence of administrative duties, query functions, and service pricing. The work presents the rationale behind the database service selection and the design and implementation of the database and client software. Amazon’s DynamoDB is chosen as the target system due to its high scalability and performance, which offset its restrictions on query versatility. The resulting database design, offering scalability beyond traditional database management systems, is presented with additional discussion on how to further improve the system and implement new features.
Tämä diplomityö käsittelee sensoriverkon datan tallennukseen hyödynnettävän skaalautuvan pilvitietokantaratkaisun teoriaa, suunnittelua ja toteutusta käyttäen olemassaolevan pilvipalvelutarjoajan tietokanta-alustaa. Sensoriverkko on osa Internet of Things -projektia, joka tutkii fyysisten objektien verkottumista ja sensoridatan visualisointia 3D-ympäristössä. Sensoriverkon tuottama valtava datamäärä on haasteellista hallita. Tästä ilmiöstä käytetään nimitystä Big Data, jolla tarkoitetaan niin suurta datan määrää, että sitä on vaikeata ja hidasta käsitellä perinteisillä tietokantasovelluksilla. Ratkaisu on hajautetut NoSQL tietokanta-arkkitehtuurit. NoSQL-arkkitehtuureissa joudutaan kuitenkin tekemään myönnytyksiä, jotta saavutaan tietokannan skaalautuvuus, hakujen suorituskyky ja partitioiden sieto. Tässä työssä tietokantainfrastruktuuri hankitaan palveluna suurelta pilvitietokantapalveluiden tarjoajalta. Tietokantapalvelun valintaan vaikuttavat useat kriteerit kuten skaalautuvuus, hallinnoinnin helppous, hakufunktiot sekä palvelun kustannukset. Työssä esitetään tietokantapalvelun valintaperusteet sekä tietokannan ja asiakasohjelman suunnittelu ja toteutus. Tietokantapalveluksi valitaan Amazon DynamoDB, jonka suuri skaalautuvuus ja suorituskyky kompensoivat tietokantahakujen rajoituksia. Työssä kehitetään tietokantasuunnitelma, joka tarjoaa perinteisiä relaatiotietokantoja suuremman skaalautuvuuden, sekä käsitellään menetelmiä uusien ominaisuuksien lisäämiseksi ja järjestelmän parantamiseksi
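A sensor-data key design of the kind this thesis describes typically partitions on the sensor and sorts on time. The following is a hypothetical Python sketch of such a DynamoDB-style composite key; the attribute names are assumptions, not the thesis's schema, and no AWS call is made:

```python
# Hypothetical sketch of a DynamoDB-style table design for sensor readings:
# partition (hash) key on the sensor, sort (range) key on the timestamp, so
# each sensor's time series scales across partitions and time-range queries
# stay cheap. Attribute names are illustrative only.

TABLE_SCHEMA = {
    "TableName": "SensorReadings",
    "KeySchema": [
        {"AttributeName": "sensor_id", "KeyType": "HASH"},  # partition key
        {"AttributeName": "ts", "KeyType": "RANGE"},        # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "sensor_id", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "S"},  # ISO-8601 sorts right
    ],
}

def make_item(sensor_id, iso_ts, value):
    """Build one reading keyed so items sort by time within a sensor."""
    return {"sensor_id": sensor_id, "ts": iso_ts, "value": value}

def query_range(items, sensor_id, start, end):
    """Emulate a key-condition query: one partition, one time range."""
    return sorted((i for i in items
                   if i["sensor_id"] == sensor_id and start <= i["ts"] <= end),
                  key=lambda i: i["ts"])

items = [make_item("s1", "2013-06-01T12:00:00", 21.5),
         make_item("s1", "2013-06-01T13:00:00", 22.0),
         make_item("s2", "2013-06-01T12:30:00", 19.9)]
hits = query_range(items, "s1", "2013-06-01T00:00:00", "2013-06-01T23:59:59")
```

With boto3, a dict shaped like `TABLE_SCHEMA` could be passed to `create_table`, together with a billing-mode or provisioned-throughput setting; the query-versatility restrictions the abstract mentions follow from every efficient query being bound to one such partition key.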
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Dsouza, Shawn Dexter. „Cloud-based Ontology Solution for Conceptualizing Human Needs“. Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/33183.

Der volle Inhalt der Quelle
Annotation:
The current generation has seen technology penetrate every aspect of our lives. However, even with recent advancements, adopters of contemporary technology are often angry and frustrated with their devices. With the increasing number of devices available to us in our day-to-day lives, and with the emergence of newer technologies like the Internet of Things, there is a stronger need than ever for computers to better understand human needs. However, there is still no machine-understandable vocabulary that conceptualizes and describes the human-needs domain. As such, in this thesis we present a cloud-based ontology solution that conceptualizes the needs domain by describing the relationships between the concepts of an Agent, a Role, a Need, and a Satisfier. The thesis focuses on the design of an OWL ontology based on an existing human-needs model: the Fundamental Human Needs model, which stems from a trans-disciplinary approach led by Manfred Max-Neef and is seen as classifiable, finite, and constant across all cultures and time periods. The methodology used to develop the new ontology is METHONTOLOGY, which is geared toward conceptualizing an ontology from scratch with a mindset of continual evaluation. We then discuss the overall FHN Ontology, comprising various components including a RESTful web service and a SPARQL endpoint for querying and updating the FHN Ontology. The ontology is evaluated via competency questions for validation and via the Ontology Pitfall Scanner for verification and correctness across multiple criteria. The entire system is tested and evaluated by implementing a native Android application which serves as a REST client connecting to the FHN Ontology endpoint.
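Queries over the Agent, Role, Need, and Satisfier relationships would go through the ontology's SPARQL endpoint. A hypothetical Python sketch follows; the `fhn:` prefix and the property names (`playsRole`, `hasNeed`, `satisfiedBy`) are invented placeholders, not the actual FHN vocabulary, and the request is only constructed, not sent:

```python
# Hypothetical SPARQL query against an FHN-style ontology endpoint.
# Prefix and property names are illustrative placeholders, not the
# actual FHN vocabulary.

FHN_QUERY = """
PREFIX fhn: <http://example.org/fhn#>
SELECT ?agent ?need ?satisfier WHERE {
  ?agent fhn:playsRole ?role .
  ?role fhn:hasNeed ?need .
  ?need fhn:satisfiedBy ?satisfier .
}
"""

def sparql_request(endpoint, query):
    """Build the HTTP request a SPARQL 1.1 endpoint expects (not sent here)."""
    return {
        "url": endpoint,
        "method": "POST",
        "headers": {"Content-Type": "application/sparql-query",
                    "Accept": "application/sparql-results+json"},
        "body": query,
    }

req = sparql_request("http://example.org/fhn/sparql", FHN_QUERY)
```

An Android REST client like the one described could issue exactly this kind of request and render the JSON result bindings.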
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Avellana, Pardina Albert. „Comparison of Virtual Networks Solutions for Community Clouds“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177017.

Der volle Inhalt der Quelle
Annotation:
Cloud computing has huge importance and a big impact on today's IT world. The idea of community clouds has emerged recently in order to satisfy several user expectations. Clommunity is a European project that aims to provide a design and implementation of a self-configured, fully distributed, decentralized, scalable, and robust cloud for a community of users across a community network. One of the aspects to analyze in this design is which kind of Virtual Private Network (VPN) is going to be used to interconnect the nodes of the community members interested in accessing cloud services. In this thesis we study, compare, and analyze the possibility of using Tinc, IPOP, or SDN-based solutions such as OpenFlow to establish such a VPN.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Le, Trung-Dung. „Gestion de masses de données dans une fédération de nuages informatiques“. Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S101.

Der volle Inhalt der Quelle
Annotation:
Les fédérations de nuages informatiques peuvent être considérées comme une avancée majeure dans l’informatique en nuage, en particulier dans le domaine médical. En effet, le partage de données médicales améliorerait la qualité des soins. La fédération de ressources permettrait d'accéder à toutes les informations, même sur une personne mobile, avec des données hospitalières distribuées sur plusieurs sites. En outre, cela permettrait d’envisager de plus grands volumes de données sur plus de patients et ainsi de fournir des statistiques plus fines. Les données médicales sont généralement conformes à la norme DICOM (Digital Imaging and Communications in Medicine). Les fichiers DICOM peuvent être stockés sur différentes plates-formes, telles qu’Amazon, Microsoft, Google Cloud, etc. La gestion des fichiers, y compris le partage et le traitement, sur ces plates-formes, suit un modèle de paiement à l’utilisation, selon des modèles de prix distincts et en s’appuyant sur divers systèmes de gestion de données (systèmes de gestion de données relationnelles ou SGBD ou systèmes NoSQL). En outre, les données DICOM peuvent être structurées en lignes ou colonnes ou selon une approche hybride (ligne-colonne). En conséquence, la gestion des données médicales dans des fédérations de nuages soulève des problèmes d’optimisation multi-objectifs (MOOP - Multi-Objective Optimization Problems) pour (1) le traitement des requêtes et (2) le stockage des données, selon les préférences des utilisateurs, telles que le temps de réponse, le coût monétaire, la qualités, etc. Ces problèmes sont complexes à traiter en raison de la variabilité de l’environnement (liée à la virtualisation, aux communications à grande échelle, etc.). Pour résoudre ces problèmes, nous proposons MIDAS (MedIcal system on clouD federAtionS), un système médical sur les fédérations de groupes. 
Premièrement, MIDAS étend IReS, une plate-forme open source pour la gestion de flux de travaux d’analyse sur des environnements avec différents systèmes de gestion de bases de données. Deuxièmement, nous proposons un algorithme d’estimation des valeurs de coût dans une fédération de nuages, appelé algorithme de régression linéaire multiple dynamique (DREAM, Dynamic REgression AlgorithM). Cette approche permet de s’adapter à la variabilité de l'environnement en modifiant la taille des données d'entraînement et de test, et d'éviter d'utiliser des informations expirées sur les systèmes. Troisièmement, l’algorithme génétique de tri non dominé à base de grilles (NSGA-G) est proposé pour résoudre des problèmes d’optimisation multi-critères en présence d’espaces de candidats de grande taille. NSGA-G vise à trouver une solution optimale approximative, tout en améliorant la qualité du front de Pareto. En plus du traitement des requêtes, nous proposons d'utiliser NSGA-G pour trouver une solution optimale approximative pour la configuration des données DICOM. Nous fournissons des évaluations expérimentales pour valider DREAM et NSGA-G avec divers problèmes de test et jeux de données. DREAM est comparé à d'autres algorithmes d'apprentissage automatique et fournit des coûts estimés précis. La qualité de NSGA-G est comparée à celle d'autres algorithmes NSGA sur de nombreux problèmes dans le cadre MOEA. Un jeu de données DICOM est également expérimenté avec NSGA-G pour trouver des solutions optimales. Les résultats expérimentaux montrent la qualité de nos solutions en termes d'estimation et d'optimisation de problèmes multi-objectifs dans une fédération de nuages.
Cloud federations can be seen as major progress in cloud computing, in particular in the medical domain. Indeed, sharing medical data would improve healthcare. Federating resources makes it possible to access any information, even on a mobile person, with hospital data distributed over several sites. Besides, it enables us to consider larger volumes of data on more patients and thus provide finer statistics. Medical data usually conform to the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM files can be stored on different platforms, such as Amazon, Microsoft, Google Cloud, etc. The management of the files on such platforms, including sharing and processing, follows the pay-as-you-go model, according to distinct pricing models and relying on various data management systems (relational DBMSs or NoSQL systems). In addition, DICOM data can be structured following traditional (row or column) or hybrid (row-column) data storage layouts. As a consequence, medical data management in cloud federations raises Multi-Objective Optimization Problems (MOOPs) for (1) query processing and (2) data storage, according to users' preferences for various measures, such as response time, monetary cost, quality, etc. These problems are complex to address because of the heterogeneous database engines, the variability (due to virtualization, large-scale communications, etc.), and the high computational complexity of a cloud federation. To solve these problems, we propose MIDAS (MedIcal system on clouD federAtionS). First, MIDAS extends IReS, an open-source platform for complex analytics workflows executed over multi-engine environments, to solve MOOPs across heterogeneous database engines. Second, we propose an algorithm for estimating cost values in a cloud environment, called the Dynamic REgression AlgorithM (DREAM).
This approach adapts to the variability of the cloud environment by changing the size of the training and testing data, avoiding the use of expired system information. Third, a Non-dominated Sorting Genetic Algorithm based on Grid partitioning (NSGA-G) is proposed to solve MOOPs with large candidate spaces. NSGA-G aims to find an approximate optimal solution while improving the quality of the Pareto set. In addition to query processing, we propose to use NSGA-G to find an approximate optimal solution for DICOM data configuration. We provide experimental evaluations to validate DREAM and NSGA-G with various test problems and datasets. DREAM is compared with other machine learning algorithms and provides accurate estimated costs. The quality of NSGA-G is compared to that of other NSGA variants on many problems in the MOEA framework. The DICOM dataset is also experimented with NSGA-G to find optimal solutions. Experimental results show the quality of our solutions in estimating and optimizing multi-objective problems in a cloud federation.
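The grid-based NSGA variant described above builds on the standard Pareto-dominance test. A minimal sketch of that test follows, using hypothetical (response time, monetary cost) pairs for candidate query plans; none of the numbers come from the thesis.

```python
# Sketch of the non-dominated sorting step that NSGA-style algorithms
# (including grid-partitioned variants such as NSGA-G) rely on.
# Both objectives are minimized; the values are illustrative only.

def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the subset of solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (response_time_ms, monetary_cost) pairs for candidate query plans.
plans = [(120, 0.05), (80, 0.09), (200, 0.02), (90, 0.08), (150, 0.05)]
front = pareto_front(plans)  # (150, 0.05) is dominated by (120, 0.05)
```

In a full NSGA-G run, this dominance test would be combined with grid partitioning of the objective space to select diverse members of the front.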
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Абоах, Ангел Хенріетта, und Angel Henrietta Aboah. „Методи підвищення рівня безпеки інформаційно-комунікаційної системи візового обслуговування“. Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/36745.

Der volle Inhalt der Quelle
Annotation:
Цілі та завдання дослідження: розробка методів підвищення рівня безпеки інформаційно-комунікаційної системи візового обслуговування. Для досягнення цієї мети необхідно виконати наступні завдання:
· забезпечення безпеки візової служби;
· аналіз потоків даних у візовому сервісі;
· розробка безпеки візової служби;
· аудит та хмарна безпека у візовій службі;
· інструкції щодо впровадження показників у візовій службі.
Об'єктом дослідження є розвиток та вдосконалення інформаційної безпеки візового сервісу. Предметом дослідження є стандарти та правила безпеки, програмні засоби аудиту безпеки, посібники з безпеки. Технічний аспект отриманих результатів:
· вибір належних показників безпеки для вимірювання рівня безпеки візового обслуговування;
· відповідна важливість отриманих рішень.
The goals and tasks of the study: to develop methods for increasing the security level of the information and communication system of the visa service. Achieving this goal requires the following tasks:
• securing the visa service;
• analysis of data flows in the visa service;
• design of visa service security;
• audit and cloud security in the visa service;
• implementation guidelines for metrics at the visa service.
The object of the study is the development and improvement of visa service information security. The subject of the study is security standards and regulations, software tools for security audits, and security guides. Technical aspects of the obtained results:
• choosing proper security metrics to measure the security level of the visa service;
• the applicable importance of the obtained solutions.
Contents: Introduction. Chapter 1: 1.1 Review of Visa Facilitation Service; 1.2 Business Model; 1.3 Materiality; 1.4 Sustainable Development Goals; 1.5 Design Security of Visa Service. Chapter 2: 2.1 Physical Security; 2.2 Information Security & Data Protection; 2.3 Process Excellence; 2.4 Business Continuity; 2.5 Integrated Security; 2.6 Audit Security in Visa Service; 2.7.1 Control Mapping; 2.7.2 Gap Level; 2.7.3 Consensus Assessments Initiative Questionnaire; 2.7.4 CSP CAIQ Answer. Chapter 3: 3.1 Visa Process Infrastructure System; 3.2 Visa Processing Management System Features; 3.3 Privacy and Data Protection; 3.4 Models for Smart Travel; 3.5 Aspirational Smart Travel; 3.6 Visas and Borders: The Key for Seamless Travel; 3.7 A Roadmap to Implementation; 3.8 Implementation Guidelines of Metrics at Visa Service. Chapter 4, Health and Safety Regulations: 4.1 Regulations of Health and Safety; 4.2 Occupational Safety Management for Remote Office; 4.3 Key Management System Components for State Institution; 4.4 Structure of Labor Protection Service; 4.5 Size of Occupational Safety Service; 4.6 Occupational Injuries Analysis; 4.7 The Responsibility of Officials for Violation of Occupational Safety; 4.8 Investigation and Registration of Accidents and Occupational Diseases; 4.9 Prevention of Occupational Injuries at a State Institution. Conclusion. References. Appendix A. Appendix B. Appendix C.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Noman, Ali. „Addressing the Data Location Assurance Problem of Cloud Storage Environments“. Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37375.

Der volle Inhalt der Quelle
Annotation:
In a cloud storage environment, providing geo-location assurance of data to a cloud user is very challenging: the cloud storage provider physically controls the data, and it is difficult for the user to detect whether the data is stored in datacenters/storage servers other than the one where it is supposed to be. We name this problem the “Data Location Assurance Problem” of a cloud storage environment. Aside from privacy and security concerns, the lack of geo-location assurance of cloud data has been identified as one of the main reasons why organizations that deal with sensitive data (e.g., financial data, health-related data, and data related to Personally Identifiable Information, PII) cannot adopt a cloud storage solution even if they might wish to. It might seem that cryptographic techniques such as Proof of Data Possession (PDP) can solve this problem; however, we show that those cryptographic techniques alone cannot. In this thesis, we address the data location assurance (DLA) problem of the cloud storage environment, which includes but is not limited to investigating the necessity for a good data location assurance solution as well as the challenges involved in providing one; we then come up with efficient solutions for the DLA problem. Note that, for the totally dishonest cloud storage server attack model, it may be impossible to offer a solution for the DLA problem. So the main objective of this thesis is to come up with solutions for the DLA problem under the different system and attack models (from less adversarial to more adversarial ones) found in existing cloud storage environments, so that the solutions can meet the needs of today's cloud storage applications.
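The possession proof mentioned above can be illustrated with a minimal hash-based challenge-response sketch; the function names are hypothetical, and, as the thesis argues, passing such a check proves possession only, not location.

```python
import hashlib
import secrets

# Minimal possession check in the spirit of Proof of Data Possession (PDP):
# the server proves it currently holds the file bytes by hashing them with a
# fresh verifier-chosen nonce. Passing this check establishes possession but
# gives no assurance about the geographic location of the datacenter.

def challenge() -> bytes:
    return secrets.token_bytes(16)          # unpredictable nonce per audit

def prove(data: bytes, nonce: bytes) -> str:
    return hashlib.sha256(nonce + data).hexdigest()

def verify(data: bytes, nonce: bytes, proof: str) -> bool:
    return proof == hashlib.sha256(nonce + data).hexdigest()

nonce = challenge()
proof = prove(b"patient-record-0001", nonce)
ok = verify(b"patient-record-0001", nonce, proof)        # honest server
tampered = verify(b"patient-record-0002", nonce, proof)  # wrong data fails
```

A real PDP scheme avoids re-reading the whole file per audit; this sketch only shows why such a proof, by itself, says nothing about where the bytes reside.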
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Pereira, Rosangela de Fátima. „A data-driven solution for root cause analysis in cloud computing environments“. Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-03032017-082237/.

Der volle Inhalt der Quelle
Annotation:
Failure analysis and resolution in cloud computing environments is a highly important issue, its primary motivation being the mitigation of the impact of such failures on applications hosted in these environments. Although there are advances in the immediate detection of failures, there is a lack of research on root cause analysis of failures in cloud computing. In this process, failures are tracked to analyze their causal factors. This practice allows cloud operators to act more effectively in preventing failures, reducing the number of recurring failures. Although this analysis is commonly performed through human intervention, based on the expertise of professionals, the complexity of cloud computing environments, coupled with the large volume of log data generated in these environments and the wide interdependence between system components, has made manual analysis impractical. Therefore, scalable solutions are needed to automate the root cause analysis process in cloud computing environments, allowing the analysis of large data sets with satisfactory performance. Based on these requirements, this thesis presents a data-driven solution for root cause analysis in cloud computing environments. The proposed solution includes the functionalities required for the collection, processing and analysis of data, as well as a method based on Bayesian networks for the automatic identification of root causes. The validation of the proposal is accomplished through a proof of concept using OpenStack, a framework for cloud computing infrastructure, and Hadoop, a framework for distributed processing of large data volumes. The tests showed satisfactory performance, and the developed model correctly classified the root causes with a low rate of false positives.
A análise e reparação de falhas em ambientes de computação em nuvem é uma questão amplamente pesquisada, tendo como principal motivação minimizar o impacto que tais falhas podem causar nas aplicações hospedadas nesses ambientes. Embora exista um avanço na área de detecção imediata de falhas, ainda há percalços para realizar a análise de sua causa raiz. Nesse processo, as falhas são rastreadas a fim de analisar o seu fator causal ou seus fatores causais. Essa prática permite que operadores da nuvem possam atuar de modo mais efetivo na prevenção de falhas, reduzindo-se o número de falhas recorrentes. Embora essa prática seja comumente realizada por meio de intervenção humana, com base no expertise dos profissionais, a complexidade dos ambientes de computação em nuvem, somada ao grande volume de dados oriundos de registros de log gerados nesses ambientes e à ampla interdependência entre os componentes do sistema, tem tornado a análise manual inviável. Por esse motivo, tornam-se necessárias soluções que permitam automatizar o processo de análise de causa raiz de uma falha ou conjunto de falhas em ambientes de computação em nuvem, e que sejam escaláveis, viabilizando a análise de grande volume de dados com desempenho satisfatório. Com base em tais necessidades, essa dissertação apresenta uma solução guiada por dados para análise de causa raiz em ambientes de computação em nuvem. A solução proposta contempla as funcionalidades necessárias para a aquisição, processamento e análise de dados no diagnóstico de falhas, bem como um método baseado em Redes Bayesianas para a identificação automática de causas raiz de falhas. A validação da proposta é realizada por meio de uma prova de conceito utilizando o OpenStack, um arcabouço para infraestrutura de computação em nuvem, e o Hadoop, um arcabouço para processamento distribuído de grande volume de dados. Os testes apresentaram desempenho satisfatório da arquitetura proposta, e o modelo desenvolvido classificou corretamente as causas raiz, com baixo número de falsos positivos.
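The Bayesian reasoning behind automatic root cause identification can be reduced to a toy posterior computation; the causes, the symptom, and every probability below are invented for illustration, not taken from the dissertation.

```python
# Toy Bayesian root cause ranking: given one observed failure symptom,
# rank hypothetical causes by posterior probability P(cause | symptom).
# All priors and likelihoods are made-up numbers for illustration.

priors = {"disk_full": 0.05, "network_partition": 0.02, "healthy": 0.93}

# Assumed P(symptom "vm_spawn_failed" | cause) for each hypothesis.
likelihood = {"disk_full": 0.90, "network_partition": 0.60, "healthy": 0.01}

def posterior(likelihood, priors):
    """Bayes' rule: normalize prior * likelihood over all hypotheses."""
    joint = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

post = posterior(likelihood, priors)
root_cause = max(post, key=post.get)   # most probable explanation
```

A full Bayesian network generalizes this to many interdependent variables learned from log data, but the ranking principle is the same.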
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Laube, Martin, Andreas Günder, Jan Bierod, Volker Jesberger und Stefan Rauch. „Cytroconnect – a cloud-based IOT-service as connectivity solution for electrohydraulic systems“. Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71189.

Der volle Inhalt der Quelle
Annotation:
Conventional electrohydraulic solutions integrate easily into modern machine concepts by utilizing field bus technology. Nevertheless, most use cases are limited to machine automation concepts. Integration into higher-level data and IoT systems is the key to positioning electrohydraulic solutions within the factory of the future. CytroConnect is a new approach for the integration of electrohydraulic systems into IoT environments and the corresponding market offerings. Bosch Rexroth decided to integrate not only IoT-ready features like pre-installed sensor packages but also a modular automation concept providing decentralized intelligence with an open multi-Ethernet interface. An edge-to-cloud connectivity stack operated by Bosch turns the target into a Connected Product, realizing the convergence of the physical and digital product. Based on that, the digital service CytroConnect solves concrete holistic use cases like visualization and condition monitoring by offering a web-based dashboard of all relevant sensor data that is accessible everywhere. Modular paid add-ons, offered as risk-free monthly subscriptions, address further smart maintenance and prediction use cases.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Gustafsson, Julia, und Mariam Said. „Security Aspects of Cloud Computing – Perspectives within Organizations“. Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-10458.

Der volle Inhalt der Quelle
Annotation:
Cloud computing has become a significant and well-known term within a short period of time, even though some parts of it, including its vague definition, might be considered unclear. Cloud computing has rapidly and successfully come to perform an essential role within information technology and, therefore, in how organizations manage their IT departments today. Its many advantages allure organizations to deploy a cloud solution. Despite the flourishing growth of cloud computing, it still has its drawbacks. One of its acknowledged problems is security, which has resulted in many companies deciding not to deploy a cloud solution and instead retain their traditional systems. This qualitative study investigates the perspective of organizations regarding security within cloud computing. The aim is to outline the security aspects raised by Swedish organizations, as there is already existing information concerning security issues. The empirical study is based on information gathered from semi-structured interviews. The study resulted in the finding of seven security aspects outlined by organizations, mainly concerning uncertainty towards the services of cloud computing. These security aspects are essential as they are set by organizations that have the potential to become cloud users but for certain reasons decide not to. The outlined security aspects are closely related to the already known security problems, which strengthens their meaning and shows that they are based on real concerns connected to real problems.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Perera, Ashansa. „Window-based Cost-effective Auto-scaling Solution with Optimized Scale-in Strategy“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-194210.

Der volle Inhalt der Quelle
Annotation:
Auto-scaling is a major way of minimizing the gap between the demand for and the availability of computing resources for applications with dynamic workloads. Even though much effort has been devoted to addressing the auto-scaling requirements of distributed systems, most of the available solutions are application-specific and consider only fulfilling application-level requirements. Today, with the pay-as-you-go model of cloud computing, many different price plans are offered by cloud providers, which makes the resource price an important decision-making criterion at the time of auto-scaling. One major step is using spot instances, which are more advantageous in terms of cost for elasticity. However, using spot instances for auto-scaling should be handled carefully to avoid their drawbacks, since spot instances can be terminated at any time by the infrastructure providers. Although some cloud providers such as Amazon Web Services and Google Compute Engine have their own auto-scaling solutions, those solutions do not pursue the goal of cost-effectiveness. In this work, we introduce our auto-scaling solution targeted at middle layers between the cloud and the application, such as Karamel. Our work combines minimizing the cost of the deployment with maintaining the demand for resources. Our solution is a rule-based system built on top of resource utilization metrics as a more general metric for workloads. Further, machine terminations and the billing period of the instances are taken into account as cloud source events. Different strategies such as window-based profiling, dynamic event profiling, and an optimized scale-in strategy have been used to achieve our main goal of providing a cost-effective auto-scaling solution for cloud-based deployments. With the help of our simulation methodology, we explore our parameter space to find the best values under different workloads. Moreover, our cloud-based experiments show that our solution performs much more economically compared to the available cloud-based auto-scaling solutions.
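A window-based, rule-driven scaling decision of the kind described above can be sketched as follows; the window size and the utilization thresholds are illustrative assumptions, not values from the thesis.

```python
from collections import deque

# Rule-based scaler over a sliding window of utilization samples, in the
# spirit of window-based profiling: decisions use the windowed average
# rather than a single noisy sample. Thresholds are assumptions.

class WindowScaler:
    def __init__(self, window=5, high=0.8, low=0.3):
        self.samples = deque(maxlen=window)
        self.high, self.low = high, low

    def decide(self, utilization):
        self.samples.append(utilization)
        if len(self.samples) < self.samples.maxlen:
            return "hold"                 # not enough history yet
        avg = sum(self.samples) / len(self.samples)
        if avg > self.high:
            return "scale_out"
        if avg < self.low:
            return "scale_in"             # a cost-aware scaler would also
        return "hold"                     # wait for the billing-period edge

scaler = WindowScaler()
load = [0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1]
decisions = [scaler.decide(u) for u in load]
```

The window smooths transient spikes: after the load drops, several low samples must accumulate before a scale-in fires, which is what makes scale-in "optimized" rather than reactive.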
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Rodrigues, João Miguel Cardia Melro. „TSKY: a dependable middleware solution for data privacy using public storage clouds“. Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11071.

Der volle Inhalt der Quelle
Annotation:
Dissertation submitted for the degree of Master in Computer Engineering
This dissertation aims to take advantage of the virtues offered by Internet-based data storage cloud systems, proposing a solution that avoids security issues by combining different providers' offerings in a vision of cloud-of-clouds storage and computing. The solution, the TSKY System (or Trusted Sky), is implemented as a middleware system featuring a set of components designed to establish and enhance conditions for the security, privacy, reliability and availability of data, with these conditions being secured and verifiable by the end-user, independently of each provider. These components implement cryptographic tools, including threshold and homomorphic cryptographic schemes, combined with encryption, replication, and dynamic indexing mechanisms. The solution allows data management and distribution functions over data kept in different storage clouds, not necessarily trusted, improving and ensuring resilience and security guarantees against Byzantine faults and attacks. The generic approach of the TSKY system model and its implemented services are evaluated in the context of a Trusted Email Repository System (TSKY-TMS System). The TSKY-TMS system is a prototype that uses the base TSKY middleware services to store mailboxes and email messages in a cloud-of-clouds.
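The cloud-of-clouds idea, in which no single provider sees the plaintext, can be illustrated with a simple n-of-n XOR splitting scheme; TSKY itself relies on richer threshold and homomorphic schemes, so this is only a sketch of the underlying principle.

```python
import secrets

# n-of-n XOR secret sharing: each individual share is uniformly random, so
# a single (possibly untrusted) storage cloud learns nothing about the data;
# all n shares are required to reconstruct the original bytes.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor(last, s)       # last share = secret XOR all random shares
    return shares + [last]

def combine(shares: list) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out

message = b"confidential email body"
shares = split(message, 3)        # e.g. one share per storage cloud
restored = combine(shares)
```

A true threshold scheme (e.g. Shamir's) would let any k of n shares reconstruct the data, tolerating unavailable providers; the n-of-n version shown here trades that availability for simplicity.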
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Karlsson, Joel. „Internet of Things – Does Particle Photon rely too much on its own cloud solution?“ Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138563.

Der volle Inhalt der Quelle
Annotation:
The Internet of Things is rapidly growing and there are many devices and cloud solutions on the market. This thesis addresses the usage of the Particle Photon alongside Microsoft Azure and intends to determine its suitability as an IoT solution. The Particle Photon is bundled with Particle Cloud, a comprehensive solution that makes IoT simple, swift and cheap – but how good is the device if the service ceases to exist? To determine this dependency and its overall IoT suitability, tests were performed to measure transmission limitations as well as reliability, both with and without the included cloud service. In addition, the research method includes a Microsoft Azure implementation. The results show that the Particle Photon and Microsoft Azure make a great IoT solution, with or without the vendor's own cloud solution – even though most of the ease of use and benefits come from the cloud service. Using Particle Cloud alongside the Particle Photon and Microsoft Azure reduces transmission time and increases reliability compared to using only the Particle Photon.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Alreshidi, Eissa. „Towards facilitating team collaboration during construction project via the development of cloud-based BIM governance solution“. Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/88955/.

Der volle Inhalt der Quelle
Annotation:
Construction projects involve multi-discipline, multi-actor collaboration, and during their lifecycle enormous amounts of data are generated. This data is often sensitive, raising major concerns related to access rights, ownership, intellectual property (IP) and security. Thus, dealing with this information raises several issues, such as data inconsistency, different versions of data, data loss, etc. Therefore, the collaborative Building Information Modelling (BIM) approach has recently been considered a useful contributory technique to minimise the complexity of team collaboration during construction projects. Furthermore, it has been argued that there is a role for Cloud technology in facilitating team collaboration across a building's lifecycle, by applying the ideologies of BIM governance. Therefore, this study investigates and seeks to develop a BIM governance solution utilising a Cloud infrastructure. The study employed two research approaches: the first was a wide consultation with key BIM experts taking the form of (i) a comprehensive questionnaire, followed by (ii) several semi-structured interviews. The second was an iterative software engineering approach including (i) software modelling, using Business Process Model Notation (BPMN) and Unified Modelling Language (UML), and (ii) software prototype development. The findings reveal several remaining barriers to BIM adoption, including Information and Communication Technology (ICT) and collaboration issues, therefore highlighting an urgent need to develop a BIM governance solution underpinned by Cloud technology to tackle these barriers and issues.
The key findings from this research led to: (a) the development of a BIM governance framework (G-BIM); (b) the definition of functional, non-functional, and domain-specific requirements for developing a Cloud-based BIM governance platform (GovernBIM); (c) the development of a set of BPMN diagrams to describe the internal and external business procedures of the GovernBIM platform lifecycle; (d) the evaluation of several fundamental use cases for the adoption of the GovernBIM platform; (e) the presentation of a core BIM governance model (class diagram) to present the internal structure of the GovernBIM platform; (f) the provision of a well-structured, Cloud-based architecture to develop a GovernBIM platform for practical implementation; and (g) the development of a Cloud-based prototype focused on the main identified functionalities of BIM governance. Despite the fact that a number of concerns remain (i.e. privacy and security), the proposed Cloud-based GovernBIM solution opens up an opportunity to provide increased control over the collaborative process and to resolve associated issues, e.g. ownership, data inconsistencies, and intellectual property. Finally, it presents a road map for further development of Cloud-based BIM governance platforms.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Raymondi, Luis Guillermo Antezana, Fabricio Eduardo Aguirre Guzman, Jimmy Armas-Aguirre und Paola Agonzalez. „Technological solution for the identification and reduction of stress level using wearables“. IEEE Computer Society, 2020. http://hdl.handle.net/10757/656578.

Der volle Inhalt der Quelle
Annotation:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher where it has been published.
In this article, a technological solution is proposed to identify and reduce a person's mental stress level through a wearable device. The proposal identifies a physiological variable, heart rate, through the integration of a wearable and a mobile application, using text recognition with the back camera of a smartphone. As part of the process, the technological solution shows a list of guidelines depending on the stress level obtained at a given time. Once completed, the level can be measured again in order to confirm its evolution. This proposal allows patients to keep their stress level under control in an effective and accessible way in real time. The proposal consists of four phases: 1. collection of parameters through the wearable; 2. data reception by the mobile application; 3. data storage in a cloud environment; and 4. data collection and processing. This last phase is divided into four sub-phases: 4.1 stress level analysis; 4.2 recommendations to decrease the level obtained; 4.3 comparison between measurements; and 4.4 measurement history per day. The proposal was validated in a workplace with people aged 20 to 35 located in Lima, Peru. Preliminary results showed that 80% of patients managed to reduce their stress level with the proposed solution.
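Sub-phases 4.1 (stress level analysis) and 4.2 (recommendations) could, for instance, map heart rate to a coarse stress level and select guidelines from it; the bpm cut-offs and guideline texts below are illustrative assumptions, not the rules used in the article.

```python
# Illustrative mapping from heart rate (bpm) to a coarse stress level and a
# list of guidelines, sketching sub-phases 4.1 and 4.2. All thresholds and
# guideline texts are assumptions made for this example.

def stress_level(bpm: int) -> str:
    if bpm < 60:
        return "low"
    if bpm < 90:
        return "normal"
    if bpm < 110:
        return "elevated"
    return "high"

def recommendations(level: str) -> list:
    guide = {
        "low": ["no action needed"],
        "normal": ["no action needed"],
        "elevated": ["breathing exercise", "short walk"],
        "high": ["breathing exercise", "pause work", "re-measure in 10 minutes"],
    }
    return guide[level]

level = stress_level(96)           # a reading taken from the wearable
advice = recommendations(level)    # guidelines shown by the mobile app
```

Re-measuring after following the advice and comparing the two readings would correspond to sub-phase 4.3.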
Peer reviewed
APA, Harvard, Vancouver, ISO und andere Zitierweisen