Click this link to see other types of publications on this topic: CLOUD BASED APPLICATIONS.

Doctoral dissertations on the topic "CLOUD BASED APPLICATIONS"

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles

Select the source type:

Consult the top 50 doctoral dissertations on the topic "CLOUD BASED APPLICATIONS".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile appropriate bibliographies.

1

Schroeter, Julia. "Feature-based configuration management of reconfigurable cloud applications". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-141415.

Full text source
Abstract:
A recent trend in the software industry is to provide enterprise applications in the cloud that are accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies therefore start providing applications as a service, which are directly configurable by customers in an online self-service portal. However, customer configurations are usually deployed in separate application instances. Thus, each instance is provisioned manually and must be maintained separately. Due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates redundancy, as a single application instance serves multiple customers renting the application. The combination of a configuration self-service portal with a multi-tenant aware application architecture allows serving customers just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited. Solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability, automating configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising solution to achieve this goal is the application of software product line methods. In software product line research, feature models are in wide use to express variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is three-fold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is contributed. Perspectives can be tailored for certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies. To this end, configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in different case studies in industrial and academic contexts. In summary, the introduced framework for feature-based configuration management is a foundation for automating the configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
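For readers unfamiliar with feature models, the following minimal Python sketch illustrates the general idea of validating a customer configuration against a feature tree; the feature names and the simplified validity rules are invented for illustration and are not material from the thesis.

```python
# Minimal feature-model sketch: a tree of mandatory/optional features
# and a validity check for a tenant's selection. All names hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    name: str
    mandatory: bool = False
    children: List["Feature"] = field(default_factory=list)

def is_valid(selection: set, feature: Feature, parent_selected: bool = True) -> bool:
    """Mandatory children of every selected feature must be selected,
    and no feature may be selected without its parent."""
    selected = feature.name in selection
    if selected and not parent_selected:
        return False  # child selected without its parent
    if parent_selected and feature.mandatory and not selected:
        return False  # mandatory feature omitted
    return all(is_valid(selection, c, selected) for c in feature.children)

# Hypothetical product line for a multi-tenant reporting application.
root = Feature("reporting", mandatory=True, children=[
    Feature("export", mandatory=True, children=[Feature("pdf"), Feature("csv")]),
    Feature("dashboards"),
])

print(is_valid({"reporting", "export", "pdf"}, root))  # True
print(is_valid({"reporting", "dashboards"}, root))     # False: 'export' is mandatory
```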
APA, Harvard, Vancouver, ISO, and other styles
2

Yangui, Sami. "Service-based applications provisioning in the cloud". Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0024/document.

Full text source
Abstract:
Cloud Computing is a new supplement, consumption, and delivery model for IT services based on Internet protocols. It is increasingly used for hosting and executing applications in general and service-based applications in particular. Service-based applications are described according to the Service Oriented Architecture (SOA) and consist of assembling a set of elementary and heterogeneous services using appropriate service composition specifications like Service Component Architecture (SCA) or Business Process Execution Language (BPEL). Provisioning an application in the Cloud consists of allocating its required resources from a Cloud provider and uploading its source code onto these resources before starting the application. However, existing Cloud solutions are limited to static programming frameworks and runtimes. They cannot always meet the application requirements, especially when the application components are as heterogeneous as those of service-based applications. To address these issues, application provisioning mechanisms in the Cloud must be reconsidered. The deployment mechanisms must be flexible enough to support the strong heterogeneity of application components while requiring no modification and/or adaptation on the Cloud provider side. They should also support automatic provisioning procedures. If the application to deploy is mono-block (e.g., a one-tier application), the provisioning is performed automatically and in a unified way, whatever the target Cloud provider, through generic operations. If the application is service-based, appropriate features must be provided to developers so that they can dynamically create the required resources themselves before deployment on the target provider using generic operations. In this work, we propose an approach (called SPD) to provision service-based applications in the Cloud. The SPD approach consists of 3 steps: (1) slicing the service-based application into a set of elementary and autonomous services, (2) packaging the services in micro-containers and (3) deploying the micro-containers in the Cloud. Slicing the applications is carried out by formal algorithms that we have defined, and proofs of the preservation of application semantics after slicing are established. For the packaging, we implemented prototypes of service containers which provide the minimal functionality needed to manage the life cycle of hosted services. For the deployment, both cases are treated, i.e., deployment on a Cloud infrastructure (IaaS) and deployment on a Cloud platform (PaaS). To automate the deployment, we defined: (i) a unified description model based on the Open Cloud Computing Interface (OCCI) standard that allows the representation of applications and their required resources independently of the targeted PaaS, and (ii) a generic PaaS application provisioning and management API (called COAPS API) that implements this model.
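To make the three provisioning steps concrete, here is a hedged Python sketch of how a client might drive a COAPS-style REST API; the endpoint paths, payloads and host below are assumptions made for illustration, not the actual COAPS interface.

```python
# Hedged sketch of the three provisioning steps described above
# (allocate, deploy, start), expressed as generic REST calls against a
# hypothetical COAPS-style endpoint.
import json
import urllib.request

BASE = "http://localhost:8080/coaps"  # assumed service location

def post(path: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":  # requires a COAPS-like service at BASE
    # (1) allocate resources described in a provider-agnostic manifest
    env = post("/environments", {"name": "demo-env", "runtime": "java"})
    # (2) deploy the application archive onto the allocated resources
    app = post(f"/environments/{env['id']}/applications",
               {"name": "demo-app", "archive": "demo-app.war"})
    # (3) start the application
    post(f"/applications/{app['id']}/start", {})
```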
APA, Harvard, Vancouver, ISO, and other styles
3

Shen, Weier. "POINT CLOUD BASED OBJECT DETECTION AND APPLICATIONS". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1558726015256402.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Englund, Carl. "Evaluation of cloud-based infrastructures for scalable applications". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139935.

Full text source
Abstract:
The usage of cloud computing in order to move away from local servers and infrastructure has grown enormously over the last decade. The ability to quickly scale server capacity and resources on demand can both save companies money and help them deliver high-end products to their customers that function correctly at all times, even under heavy load. To meet today's challenges, one of the strategic directions of Attentec, a software company located in Linköping, is to examine the world of cloud computing in order to deliver robust and scalable applications to their customers. This thesis investigates the usage of cloud services in order to deploy scalable applications which can adapt to usage peaks within minutes.
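As a rough illustration of the kind of scaling behaviour such an evaluation exercises, the following Python sketch implements a simple threshold-based scaling rule; the thresholds and limits are invented for the example and are not taken from the thesis.

```python
# Minimal threshold-based scaling rule: scale out when average load is
# high, scale in when it is low. All numbers are illustrative.
def desired_instances(current: int, avg_cpu: float,
                      high: float = 0.75, low: float = 0.25,
                      min_n: int = 1, max_n: int = 20) -> int:
    if avg_cpu > high:
        return min(current * 2, max_n)   # double capacity under load peaks
    if avg_cpu < low and current > min_n:
        return max(current // 2, min_n)  # shrink when mostly idle
    return current

print(desired_instances(4, 0.9))   # 8
print(desired_instances(8, 0.1))   # 4
```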
APA, Harvard, Vancouver, ISO, and other styles
5

Bhowmick, Satyajit. "A Fog-based Cloud Paradigm for Time-Sensitive Applications". University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1467988828.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Pan, Xiaozhong. "Rich Cloud-based Web Applications with Cloudbrowser 2.0". Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52988.

Full text source
Abstract:
When developing web applications using traditional methods, developers need to partition the application logic between the client side and the server side, then implement these two parts separately (often using two different programming languages) and write the communication code to synchronize the application's state between the two parts. CloudBrowser is a server-centric web framework that entirely eliminates this need for partitioning applications. In CloudBrowser, the application code is executed in server-side virtual browsers which preserve the application's presentation state. The client web browsers act like rendering devices, fetching and rendering the presentation state from the virtual browsers. The client-server communication and user interface rendering are implemented by the framework under the hood. CloudBrowser applications are developed in a way similar to regular web pages, using no more than HTML, CSS and JavaScript. Since the user interface state is preserved, the framework also provides a continuous experience for users, who can disconnect from the application at any time and reconnect to pick up where they left off. The original implementation of CloudBrowser was single-threaded and supported deployment on only one process. We implemented CloudBrowser 2.0, a multi-process implementation of CloudBrowser. CloudBrowser 2.0 can be deployed on a cluster of servers as well as on a single multi-core server. It distributes the virtual browsers to multiple processes and dispatches client requests to the associated virtual browsers. CloudBrowser 2.0 also refines the CloudBrowser application deployment model to make the framework a PaaS platform. Developers can develop and deploy different types of applications, and the platform will automatically scale them to multiple servers.
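The multi-process dispatching described above can be pictured with a small sketch: route each session to a fixed worker so that a reconnecting client reaches the virtual browser holding its state. The hashing scheme below is an assumption for illustration, not CloudBrowser's actual dispatcher.

```python
# Sketch: map each virtual-browser session to one worker process so
# reconnecting clients find their preserved state. Illustrative only.
import hashlib

WORKERS = ["worker-0", "worker-1", "worker-2", "worker-3"]

def dispatch(session_id: str) -> str:
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return WORKERS[int(digest, 16) % len(WORKERS)]

# A client that disconnects and reconnects lands on the same worker,
# where its virtual browser (and thus its UI state) lives.
print(dispatch("session-abc"))
print(dispatch("session-abc"))  # same worker both times
```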
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
7

Jung, Gueyoung. "Multi-dimensional optimization for cloud based multi-tier applications". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37267.

Full text source
Abstract:
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at a very fine granularity. Meanwhile, resource virtualization has recently gained considerable attention in the design of computer systems and become a key ingredient for cloud computing. It provides significant improvement of aggregated power efficiency and high resource utilization by enabling resource consolidation. It also allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions. However, these trends also raise significant challenges to researchers and practitioners to successfully achieve agile resource management in consolidated environments. First, they must deal with very different responsiveness of different applications, while handling dynamic changes in resource demands as applications' workloads change over time. Second, when provisioning resources, they must consider management costs such as power consumption and adaptation overheads (i.e., overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails the inherent performance-power tradeoff. Moreover, indiscriminate adaptations can result in significant overheads on power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate various performance characteristics of deployed applications, precisely integrate costs caused by adaptations, and then balance benefits and costs. Fundamentally, the research question is how to dynamically provision available resources for all deployed applications to maximize overall utility under time-varying workloads, while considering such management costs. Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets the performance requirements of deployed applications, but also addresses tradeoffs between performance, power consumption, and adaptation overheads. To this end, this dissertation makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can cause significant overheads on not only end-to-end response time, but also server power consumption. Moreover, I show that such costs can vary in intensity and time scale against workload, adaptation types, and performance characteristics of hosted applications. Second, I address multi-dimensional optimization between server power consumption, performance benefit, and transient costs incurred by various adaptations. Additionally, I incorporate the overhead of the optimization procedure itself into the problem formulation. Typically, system optimization approaches entail intensive computations and potentially have a long delay to deal with a huge search space in cloud computing infrastructures. Therefore, this type of cost cannot be ignored when adaptation plans are designed. In this multi-dimensional optimization work, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and various adaptations, supporting adaptation decisions at various time scales.
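The trade-off described above can be summarized as picking the adaptation plan that maximizes a utility of the form U = performance benefit − α·power cost − β·adaptation cost. The Python sketch below illustrates this selection; the candidate plans, costs and weights are invented for the example, not the dissertation's model.

```python
# Sketch of utility-driven adaptation choice: maximize performance
# benefit minus power and transient adaptation costs. Values invented.
candidates = [
    # (plan, perf_benefit, power_cost, adaptation_cost)
    ("no-op",         0.0, 0.0, 0.0),
    ("add-vm",        8.0, 3.0, 1.5),
    ("migrate-tier", 10.0, 2.5, 4.5),
]

def utility(perf: float, power: float, adapt: float,
            alpha: float = 1.0, beta: float = 1.0) -> float:
    return perf - alpha * power - beta * adapt

best = max(candidates, key=lambda c: utility(*c[1:]))
print(best[0])  # 'add-vm' under these example weights
```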
APA, Harvard, Vancouver, ISO, and other styles
8

Bandini, Alessandro. "Programming and Deployment of Cloud-based Data Analysis Applications". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13803/.

Full text source
Abstract:
Cloud Computing is a model that enables shared, practical, on-demand network access to different computational resources such as networks, memory, applications or services. The goal of this work is to describe a project carried out on Cloud Computing. After an introduction to the theory behind Cloud Computing technologies, the practical part of the work follows, starting from a more specific platform, Hadoop, which allows storage and data analysis, and then moving to more general-purpose platforms, Amazon Web Services and Google App Engine, where different types of services have been tried. The major part of the project is based on Google App Engine, where storage and computational services have been used to run MapReduce jobs. MapReduce is a different programming approach for solving data analysis problems that is suited to big data. The first jobs are written in Python, an imperative programming language. Later on, a functional approach to the same problems has been tried, with the Scala language and the Spark platform, to compare the code. As Cloud Computing is mainly used to host websites, a simple site was developed as an integral part of the work. The development of the site is not explained, as it goes beyond this thesis' main focus; only the relevant aspects are treated.
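For readers new to the paradigm, here is a toy, purely local Python sketch of the MapReduce style mentioned above (map emits key/value pairs, a shuffle groups them by key, reduce aggregates); real jobs would run on Hadoop or Spark clusters.

```python
# Toy MapReduce word count, executed locally for illustration only.
from collections import defaultdict

def map_phase(doc: str):
    for word in doc.split():
        yield word.lower(), 1

def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the cloud", "the big data cloud"]
grouped = defaultdict(list)
for doc in docs:
    for key, value in map_phase(doc):
        grouped[key].append(value)   # shuffle: group values by key

print(reduce_phase(grouped))  # {'the': 2, 'cloud': 2, 'big': 1, 'data': 1}
```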
APA, Harvard, Vancouver, ISO, and other styles
9

Luo, Xi. "Feature-based Configuration Management of Applications in the Cloud". Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-116674.

Full text source
Abstract:
Complex business applications are increasingly offered as services over the Internet, so-called Software-as-a-Service (SaaS) applications. The SAP Netweaver Cloud offers an OSGi-based open platform, which enables multi-tenant SaaS applications to run in the cloud. A multi-tenant SaaS application is designed so that an application instance is used by several customers and their users. As different customers have different requirements for the functionality and quality of the application, the application instance must be configurable. Therefore, it must be possible to add new configurations to a multi-tenant SaaS application at run-time. In this thesis, we propose concepts for a configuration management which are used for managing and creating client configurations of cloud applications. The concepts are implemented in a tool that is based on Eclipse and extended feature models. In addition, we evaluate our concepts and the applicability of the developed solution in the SAP Netweaver Cloud by using a cloud application as a concrete case example.
APA, Harvard, Vancouver, ISO, and other styles
10

CONDORI, EDWARD JOSE PACHECO. "DEPLOYMENT OF DISTRIBUTED COMPONENT-BASED APPLICATIONS ON CLOUD INFRASTRUCTURES". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23645@1.

Full text source
Abstract:
Deployment of distributed component-based applications is composed of a set of activities managed by a Deployment Infrastructure. Current applications are becoming increasingly complex, requiring a multi-platform and dynamic target environment. Thus, the planning activity is the most critical step because it defines the configuration of the execution infrastructure in order to satisfy the requirements of the application's target environment. On the other hand, the cloud service model called Infrastructure as a Service (IaaS) offers on-demand computational resources with dynamic, scalable, and elastic features. In this work we have extended the Deployment Infrastructure for SCS components to support private or public clouds as its target environment, through the use of a cloud API and flexible policies to specify a customized target environment. Additionally, we host the Deployment Infrastructure on the cloud, which allows us to use on-demand computational resources to instantiate Deployment Infrastructure services, creating an experimental Platform as a Service (PaaS).
APA, Harvard, Vancouver, ISO, and other styles
11

Ouffoué, Georges. "Attack tolerance for services-based applications in the Cloud". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS562/document.

Full text source
Abstract:
Web services allow the communication of heterogeneous systems on the Web. These facilities make them particularly suitable for deployment in the cloud. Although research on formalization and verification has improved trust in Web services, issues such as high availability and security are not fully addressed. In addition, Web services deployed in cloud infrastructures inherit their vulnerabilities. Because of this limitation, they may be unable to perform their tasks perfectly. In this thesis, we claim that good attack tolerance requires attack detection and continuous monitoring on the one hand, and reliable reaction mechanisms on the other hand. We therefore propose a new formal monitoring methodology that takes into account the risks that our services may face. To implement this methodology, we first developed an approach to attack tolerance that leverages model-level diversity. We define a model of the system and derive more robust, functionally equivalent variants that can replace the first one in case of attack. To avoid manually deriving the variants and to increase the level of diversity, we proposed a second, complementary approach. The latter also consists in having different variants of our services; but unlike the first, we have a single model and the implementations differ at the language, source code and binary levels. Moreover, to ensure the detection of insider attacks, we investigated a new detection and reaction mechanism based on software reflection. While the program is running, we analyze its methods to detect malicious executions. When these malicious activities are detected, using reflection again, new efficient implementations are generated as a countermeasure. Finally, we extended a formal Web service testing framework by incorporating all these complementary mechanisms in order to take advantage of the benefits of each of them. We validated our approach with realistic experiments.
APA, Harvard, Vancouver, ISO, and other styles
12

Sellami, Rami. "Supporting multiple data stores based applications in cloud environments". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL002/document.

Full text source
Abstract:
The production of huge amounts of data and the emergence of Cloud computing have introduced new requirements for data management. Many applications need to interact with several heterogeneous data stores depending on the type of data they have to manage: traditional data types, documents, graph data from social networks, simple key-value data, etc. Interacting with heterogeneous data models via different APIs imposes challenging tasks on the developers of multiple-data-store applications. Indeed, programmers have to be familiar with different APIs. In addition, the execution of complex queries over heterogeneous data models cannot currently be achieved in a declarative way, as it used to be with mono-data-store applications, and therefore requires extra implementation effort. Moreover, developers need to master and deal with the complex processes of Cloud discovery and application deployment and execution. In this manuscript, we propose an integrated set of models, algorithms and tools aiming at alleviating the developer's task of developing, deploying and migrating multiple-data-store applications in cloud environments. Our approach focuses mainly on three points. First, we provide a unified data model used by application developers to interact with heterogeneous relational and NoSQL data stores. This model is enriched by a set of refinement rules, on the basis of which we define our query algebra. Developers express queries using the OPEN-PaaS-DataBase API (ODBAPI), a unique REST API allowing programmers to write their application code independently of the target data stores. Second, we propose virtual data stores, which act as mediators and interact with the integrated data stores wrapped by ODBAPI. This run-time component supports the execution of single and complex queries over heterogeneous data stores. It implements a cost model to optimally execute queries and a dynamic-programming-based algorithm to generate an optimal query execution plan. Finally, we present a declarative approach that lightens the burden of the tedious and non-standard tasks of (1) discovering relevant Cloud environments and (2) deploying applications on them, while letting developers simply focus on specifying their storage and computing requirements. A prototype of the proposed solution has been developed and used to implement use cases from the OpenPaaS project. We also performed different experiments to test the efficiency and accuracy of our proposals.
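The "one API over heterogeneous stores" principle can be pictured with a small local Python sketch: two adapters expose the same minimal interface over a relational store and a key-value store. ODBAPI itself is REST-based; this sketch only mirrors the underlying idea and is not the thesis' API.

```python
# Hedged sketch of a unified interface over heterogeneous stores:
# application code is identical whichever adapter backs it.
import sqlite3

class KeyValueStore:
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class RelationalStore:
    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    def put(self, key, value):
        self._db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
    def get(self, key):
        row = self._db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

for store in (KeyValueStore(), RelationalStore()):
    store.put("user:1", "alice")
    print(store.get("user:1"))  # 'alice' from both stores
```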
APA, Harvard, Vancouver, ISO, and other styles
13

Faniyi, Funmilade Olugbenga. "Self-aware software architecture style and patterns for cloud-based applications". Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/6032/.

Full text source
Abstract:
Modern cloud-reliant software systems are faced with the problem of cloud service providers violating their Service Level Agreement (SLA) claims. Given the large pool of cloud providers and their instability, cloud applications are expected to cope with these dynamics autonomously. This thesis investigates an approach for designing self-adaptive cloud architectures using a systematic methodology that guides the architect while designing cloud applications. The approach, termed Self-aware Architecture Pattern, promotes fine-grained representation of architectural concerns to aid design-time analysis of risks and trade-offs. To support the coordination and control of architectural components in decentralised self-aware cloud applications, we propose a Reputation-aware posted offer market coordination mechanism. The mechanism builds on the classic posted offer market mechanism and extends it to track the behaviour of unreliable cloud services. The self-aware cloud architecture and its reputation-aware coordination mechanism are quantitatively evaluated within the context of an Online Shopping application using synthetic and realistic workload datasets under various configurations (failure, scale, resilience levels, etc.). Additionally, we qualitatively evaluated our self-aware approach against two classic self-adaptive architecture styles using independent experts' judgment, to unveil its strengths and weaknesses relative to these styles.
APA, Harvard, Vancouver, ISO, and other styles
14

Mohamed, Mohamed. "Generic monitoring and reconfiguration for service-based applications in the cloud". Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0025/document.

Full text source
Abstract:
Cloud Computing is an emerging paradigm in Information Technologies (IT). One of its major assets is the provisioning of resources based on the pay-as-you-go model. Cloud resources are situated in a highly dynamic environment. However, each provisioned resource comes with functional properties and may not offer non-functional properties like monitoring, reconfiguration, security, accountability, etc. In such a dynamic environment, non-functional properties have a critical importance in maintaining the service level of resources and making them respect the contracts between providers and consumers. In our work, we are interested in the monitoring, reconfiguration and autonomic management of Cloud resources. In particular, we focus on service-based applications; we then push our work further to treat Cloud resources in general. Consequently, this thesis contains two major contributions. On the one hand, we extend the Service Component Architecture (SCA) in order to add descriptions of monitoring and reconfiguration requirements to components. In this context, we propose a list of transformations that dynamically adds monitoring and reconfiguration facilities to components even if they were designed without them. That alleviates the task of developers and lets them focus just on the business logic of their components. To be in line with the scalability of Cloud environments, we use a micro-container-based approach for the deployment of components. On the other hand, we extend the Open Cloud Computing Interface (OCCI) standards to dynamically add monitoring and reconfiguration facilities to Cloud resources while remaining agnostic to their level. This extension entails the definition of new Resources, Links and Mixins to dynamically add monitoring and reconfiguration facilities to resources. We extend the two contributions to couple monitoring and reconfiguration in order to add self-management capabilities to SCA-based applications and Cloud resources. The solutions that we propose are generic, granular and based on de facto standards (i.e., SCA and OCCI). In this thesis manuscript, we give implementation details as well as the experiments that we conducted to evaluate our proposals.
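The OCCI Mixin mechanism mentioned above can be illustrated with a brief Python sketch: a mixin attached at run time extends a resource with monitoring attributes. The monitoring scheme and attribute names below are invented for the example; only the compute kind URI follows the actual OCCI infrastructure specification.

```python
# Sketch of the OCCI idea of attaching a Mixin to a resource at run
# time to add monitoring capabilities. Scheme/terms invented here.
from dataclasses import dataclass, field

@dataclass
class Mixin:
    scheme: str
    term: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Resource:
    kind: str
    mixins: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)

    def attach(self, mixin: Mixin):
        """Dynamically extend the resource with the mixin's attributes."""
        self.mixins.append(mixin)
        self.attributes.update(mixin.attributes)

vm = Resource(kind="http://schemas.ogf.org/occi/infrastructure#compute")
monitoring = Mixin(
    scheme="http://example.org/occi/monitoring#",  # hypothetical scheme
    term="cpu_probe",
    attributes={"monitoring.metric": "cpu", "monitoring.period_s": 30},
)
vm.attach(monitoring)
print(vm.attributes)  # the compute resource now exposes monitoring settings
```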
APA, Harvard, Vancouver, ISO, and other styles
15

Truong, Huu Tram. "Workflow-based applications performance and execution cost optimization on cloud infrastructures". Nice, 2010. http://www.theses.fr/2010NICE4091.

Full text source
Abstract:
Cloud computing is increasingly exploited to tackle the computing challenges raised in both science and industry. Clouds provide computing, network and storage resources on demand to satisfy the needs of large-scale distributed applications. To adapt to the diversity of cloud infrastructures and usage, new tools and models are needed. Estimating the amount of resources consumed by each application in particular is a difficult problem, both for end users, who aim at minimizing their costs, and for infrastructure providers, who aim at controlling their resource allocation. Although a quasi-unlimited amount of resources may be allocated, a trade-off has to be found between the allocated infrastructure cost, the expected performance and the optimal achievable performance, which depends on the level of parallelization of the application. Focusing on medical image analysis, a scientific domain representative of the large class of data-intensive distributed applications, this thesis proposes a fine-grained cost function model relying on the expertise captured from the application. Based on this cost function model, four resource allocation strategies are proposed. Taking into account both computing and network resources, these strategies help users determine the amount of resources to reserve and compose their execution environment. In addition, the data transfer overhead and the low reliability level, which are well-known problems of large-scale distributed systems impacting application performance and infrastructure usage cost, are also considered. The experiments reported in this thesis were carried out on the Aladdin/Grid'5000 infrastructure, using the HIPerNet virtualization middleware. This virtual platform manager enables the joint virtualization of computing and network resources. A real medical image analysis application was considered for all experimental validations. The experimental results assess the validity of the approach in terms of infrastructure cost and application performance control. Our contributions both facilitate the exploitation of cloud infrastructures, delivering a higher quality of service to end users, and help the planning of cloud resource delivery.
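The kind of cost/performance trade-off such a cost-function model captures can be sketched as follows: among candidate reservations, choose the cheapest one whose predicted makespan meets a deadline. All numbers and the simple speedup model below are illustrative assumptions, not the thesis' model.

```python
# Sketch: pick the cheapest node count whose predicted makespan meets
# a deadline, under a simple "sequential + parallel/n" time model.
def makespan(seq_time: float, par_time: float, n_nodes: int) -> float:
    return seq_time + par_time / n_nodes

def best_allocation(seq_time, par_time, price_per_node_hour,
                    deadline_h, max_nodes=64):
    feasible = []
    for n in range(1, max_nodes + 1):
        t = makespan(seq_time, par_time, n)
        if t <= deadline_h:
            feasible.append((n * t * price_per_node_hour, n, t))
    return min(feasible) if feasible else None  # cheapest feasible plan

cost, nodes, hours = best_allocation(seq_time=0.5, par_time=20.0,
                                     price_per_node_hour=0.10, deadline_h=2.0)
print(f"{nodes} nodes for {hours:.2f} h, cost {cost:.2f}")
```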
APA, Harvard, Vancouver, ISO, and other styles
16

Arthur, Victor Arthur. "Understanding Financial Value of Cloud-Based Business Applications: A Phenomenological Study". ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3274.

Full text source
Abstract:
An understanding of opportunities and challenges in cloud computing is needed to better manage technology costs and create financial value. The purposes of this transcendental phenomenological study were to understand the lived experiences of minority business owners who operated business applications in the cloud and to explore how these experiences created financial value for businesses despite security challenges. Historically, minority business owners have experienced high rates of business failures and could benefit from information to help them manage business costs in order to position their businesses to grow and succeed. Modigliani-Miller's theorem on capital structure and Brealey and Young's concept of financial leverage were the conceptual frameworks that grounded this study. Data consisted of observational field notes and 15 individual semistructured interviews with open-ended questions. I used the in vivo and pattern coding approaches to analyze the data for emerging themes that addressed the research questions. The findings were that drivers of positive cloud-based experiences, such as easy access, ease of use, flexibility, and timesavings, created financial value for small business owners. In addition, the findings confirmed that opportunities in the cloud such as cost savings, efficiency, and ease of collaboration outweighed security challenges. Finally, the results indicated that cost-effective approaches such as the subscription model for acquiring technology created financial value for businesses. The findings of this study can be used by business owners, especially minority small business owners, to decide whether to move operations to the cloud to create financial value for their businesses.
APA, Harvard, Vancouver, ISO, and other styles
17

FERNANDES, CARVALHO DHIEGO. "New LoRaWAN solutions and cloud-based systems for e-Health applications". Doctoral thesis, Università degli studi di Brescia, 2021. http://hdl.handle.net/11379/544075.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
18

Manjunatha, Ashwin Kumar. "A Domain Specific Language Based Approach for Developing Complex Cloud Computing Applications". Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1309236898.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
19

Lelli, Matteo. "Design and Implementation of a Cloud-based Middleware for Persuasive Android Applications". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10371/.

Full text source
Abstract:
In recent decades, information technologies and products have become pervasive and are now an essential part of our lives. Every day they influence us more or less explicitly, changing the way we live and our behaviors, more or less intentionally. However, computers were not originally created to persuade: they were built to manage, compute, store, and retrieve data. As soon as computers moved from research laboratories into everyday life, however, they became increasingly persuasive. This research area is called persuasive technology or captology, also defined as the study of interactive computing systems designed to change people's attitudes and behaviors. Despite the growing success of persuasive technologies, there seems to be a lack of both theoretical and practical frameworks that can help mobile application developers build applications capable of effectively persuading end users. However, the work conducted by Professor Helal and Professor Lee at the Persuasive Laboratory at the University of Florida attempts to fill this gap. Indeed, they have proposed a simple but effective persuasion model, which can be used intuitively by engineers or computing specialists. Moreover, Professor Helal and Professor Lee have also developed Cicero, a middleware for Android devices based on their earlier model, which developers can use very simply and quickly to create persuasive applications. My work at the heart of this thesis project focuses on the analysis of the middleware just described and, subsequently, on the improvements and extensions brought to it. The most important ones are a new sensing architecture, a new cloud-based structure, and a new protocol that allows the creation of applications specific to smartwatches.
APA, Harvard, Vancouver, ISO, and other styles
20

Nallur, Vivek. "A decentralized self-adaptation mechanism for service-based applications in the cloud". Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3383/.

Full text source
Abstract:
This thesis presents a Cloud-based Multi-Agent System (Clobmas) that uses multiple double auctions to enable applications to self-adapt based on their QoS requirements and budgetary constraints. We design a marketplace that allows applications to select services in a decentralized manner. We marry the marketplace with a decentralized service evaluation and selection mechanism, and a price adjustment technique, to allow for QoS constraint satisfaction. Applications in the cloud using the Software-as-a-Service paradigm will soon be commonplace. In this context, long-lived applications will need to adapt their QoS based on various parameters. Current service-selection mechanisms fall short on the dimensions that service-based applications vary on. Clobmas is shown to be an effective mechanism for allowing both applications (service consumers) and clouds (service providers) to self-adapt to dynamically changing QoS requirements. Furthermore, we identify the various axes on which service applications vary, and the median values on those axes. We measure Clobmas on all of these axes, and then stress-test it to show that it meets all of our goals for scalability.
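As background, a double auction matches buyers' bids against sellers' asks; the following minimal Python sketch shows one clearing step. The prices and the midpoint pricing rule are illustrative choices, not Clobmas' actual mechanism.

```python
# Minimal double-auction clearing step: highest bids meet lowest asks.
def match(bids, asks):
    """bids/asks: lists of (agent, price). Returns matched trades."""
    bids = sorted(bids, key=lambda b: -b[1])   # highest bid first
    asks = sorted(asks, key=lambda a: a[1])    # lowest ask first
    trades = []
    while bids and asks and bids[0][1] >= asks[0][1]:
        buyer, bid = bids.pop(0)
        seller, ask = asks.pop(0)
        trades.append((buyer, seller, (bid + ask) / 2))  # midpoint price
    return trades

bids = [("app-A", 10.0), ("app-B", 7.0)]
asks = [("svc-X", 6.0), ("svc-Y", 9.0)]
print(match(bids, asks))
# [('app-A', 'svc-X', 8.0)] -- app-B's bid (7.0) does not meet svc-Y's ask (9.0)
```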
APA, Harvard, Vancouver, ISO, and other styles
21

Selvadhurai, Arunprasaath. "NETWORK MEASUREMENT TOOL COMPONENTS FOR ENABLING PERFORMANCE INTELLIGENCE WITHIN CLOUD-BASED APPLICATIONS". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1367446588.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
22

Gonidis, Fotios. "A framework enabling the cross-platform development of service-based cloud applications". Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/13333/.

Full text source
Abstract:
Among all the different kinds of service offering available in the cloud, ranging from compute, storage and networking infrastructure to integrated platforms and software services, one of the more interesting is the cloud application platform, a kind of platform as a service (PaaS) which integrates cloud applications with a collection of platform basic services. This kind of platform is neither so open that it requires every application to be developed from scratch, nor so closed that it only offers services from a pre-designed toolbox. Instead, it supports the creation of novel service-based applications, consisting of micro-services supplied by multiple third-party providers. Software service development at this granularity has the greatest prospect of bringing about the future software service ecosystem envisaged for the cloud. Cloud application developers face several challenges when seeking to integrate the different micro-service offerings from third-party providers. There are many alternative offerings for each kind of service, such as mail, payment or image processing services, and each assumes a slightly different business model. We characterise these differences in terms of (i) workflow, (ii) exposed APIs and (iii) configuration settings. Furthermore, developers need to access the platform basic services in a consistent way. To address this, we present a novel design methodology for creating service-based applications. The methodology is exemplified in a Java framework, which (i) integrates platform basic services in a seamless way and (ii) alleviates the heterogeneity of third-party services. The benefit is that designers of complete service-based applications are no longer locked into the vendor-specific vagaries of third-party micro-services and may design applications in a vendor-agnostic way, leaving open the possibility of future micro-service substitution. The framework architecture is presented in three phases. The first describes the abstraction of platform basic services and third-party micro-service workflows. The second describes the method for extending the framework for each alternative micro-service implementation, with examples. The third describes how the framework executes each workflow and generates suitable client adaptors for the web APIs of each micro-service.
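The vendor-agnostic design argued for above boils down to coding against an abstract service and hiding provider differences behind adapters; the sketch below shows the shape of that idea (in Python for brevity, although the thesis framework is in Java). The provider names and behaviours are hypothetical.

```python
# Sketch of vendor-agnostic micro-service integration via adapters:
# the application codes against one abstract service, and per-provider
# adapters absorb API/workflow differences. Providers are invented.
from abc import ABC, abstractmethod

class MailService(ABC):
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> None: ...

class ProviderAMail(MailService):
    def send(self, to, subject, body):
        # a real adapter would call provider A's REST endpoint here
        print(f"[A] {to}: {subject}")

class ProviderBMail(MailService):
    def send(self, to, subject, body):
        # provider B expects a different payload/workflow; adapter hides it
        print(f"[B] {to} / {subject}")

def notify(mail: MailService):
    mail.send("user@example.org", "Welcome", "Hello!")

notify(ProviderAMail())  # swapping vendors needs no application change
notify(ProviderBMail())
```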
APA, Harvard, Vancouver, ISO, and other styles
23

MILIA, GABRIELE. "Cloud-based solutions supporting data and knowledge integration in bioinformatics". Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266783.

Full text source
Abstract:
In recent years, advances in computing have changed the way science progresses and have boosted in silico studies; as a result, the concept of "scientific research" in bioinformatics has quickly shifted from the idea of a local laboratory activity towards Web applications and databases provided over the network as services. Biologists have thus become among the largest beneficiaries of information technologies, reaching and surpassing the traditional ICT users who operate in the so-called "hard sciences" (i.e., physics, chemistry, and mathematics). Nevertheless, this evolution has to deal with several aspects (including the data deluge, data integration, and scientific collaboration, to cite just a few) and presents new challenges related to proposing innovative approaches within the wide scenario of emerging ICT solutions. This thesis aims at facing these challenges in the context of three case studies, each devoted to a specific open issue and proposing solutions in line with recent advances in computer science. The first case study focuses on the task of unearthing and integrating information from different web resources, each having its own organization, terminology and data formats, in order to provide users with a flexible environment for accessing these resources and smartly exploring their content. The study explores the potential of the cloud paradigm as an enabling technology to curtail the issues associated with the scalability and performance of applications supporting this task. Specifically, it presents Biocloud Search EnGene (BSE), a cloud-based application for searching and integrating biological information made available by public large-scale genomic repositories. BSE is publicly available at: http://biocloud-unica.appspot.com/. The second case study addresses scientific collaboration on the Web, with special focus on building a semantic network where team members, adequately supported by easy access to biomedical ontologies, define and enrich network nodes with annotations derived from the available ontologies. The study presents a cloud-based application called Collaborative Workspaces in Biomedicine (COWB), which supports users in the construction of the semantic network by organizing, retrieving and creating connections between contents of different types. Public and private workspaces provide an accessible representation of the collective knowledge that is incrementally expanded. COWB is publicly available at: http://cowb-unica.appspot.com/. Finally, the third case study concerns knowledge extraction from very large datasets. The study investigates the performance of random forests in classifying microarray data. In particular, it faces the problem of reducing the contribution of trees whose nodes are populated by non-informative features. Experiments are presented and their results analyzed in order to draw guidelines on how to reduce this contribution. With respect to the previously mentioned challenges, this thesis sets out to make two contributions, summarized as follows. First, the potential of cloud technologies is evaluated for developing applications that support access to bioinformatics resources and collaboration, by improving awareness of users' contributions and fostering user interaction.
Second, the positive impact of the decision support offered by random forests is demonstrated as an effective way to tackle the curse of dimensionality.
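The random-forest idea in the third case study can be illustrated with a small, hypothetical sketch (synthetic data and the tree-scoring rule are assumptions, not the thesis's actual algorithm): trees that score poorly on held-out data (typically those dominated by non-informative features) are dropped from the vote.

```python
# Hedged sketch: down-weight/drop random-forest trees that perform poorly,
# on the assumption that they rely on non-informative features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Microarray-like setting: few samples, many mostly non-informative features.
X, y = make_classification(n_samples=200, n_features=2000, n_informative=20,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Score each tree on held-out data and keep only the stronger half.
# (For a fair final estimate a third split would be needed; kept simple here.)
scores = np.array([t.score(X_val, y_val) for t in forest.estimators_])
kept = [t for t, s in zip(forest.estimators_, scores) if s >= np.median(scores)]

def vote(trees, X):
    # Majority vote over the retained trees (binary labels 0/1).
    preds = np.stack([t.predict(X) for t in trees])
    return (preds.mean(axis=0) >= 0.5).astype(int)

print("full forest accuracy:", forest.score(X_val, y_val))
print("pruned-vote accuracy:", (vote(kept, X_val) == y_val).mean())
```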
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Daud, Malik Imran. "Ontology-based Access Control in Open Scenarios: Applications to Social Networks and the Cloud". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/396179.

Full text source
Abstract:
Thanks to the advent of the Internet, it is now possible to easily share vast amounts of electronic information and computer resources (which include hardware, computer services, etc.) in open distributed environments. These environments serve as a common platform for heterogeneous users (e.g., corporations, individuals, etc.) by hosting customized user applications and systems, providing ubiquitous access to shared resources, and requiring less administrative effort; as a result, they enable users and companies to increase their productivity. Unfortunately, the sharing of resources in open environments has significantly increased the privacy threats to users. Indeed, shared electronic data may be exploited by third parties, such as data brokers, which may aggregate, infer and redistribute (sensitive) personal features, thus potentially impairing the privacy of individuals. A way to mitigate this problem consists in controlling the access of users to the potentially sensitive resources. Specifically, access control management regulates access to shared resources according to the credentials of the users, the type of resource, and the privacy preferences of the resource/data owners. The efficient management of access control is crucial in large and dynamic environments such as the ones described above. Moreover, in order to propose a feasible and scalable solution, we need to get rid of the manual management of rules/constraints (on which most available solutions rely), which constitutes a serious burden for users and administrators. Finally, access control management should be intuitive for end users, who usually lack technical expertise and may find access control mechanisms difficult to understand and rigid to apply due to their complex configuration settings.
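The ontology-driven flavour of access control described above can be illustrated with a minimal sketch (all concept names and the rule format are invented for illustration): a policy written against a general concept automatically covers its sub-concepts, avoiding per-resource manual rules.

```python
# Minimal sketch, not the thesis's model: access decisions driven by an
# ontology, so a rule for a general concept covers all its specializations.
SUBCLASS_OF = {               # toy ontology: child concept -> parent concept
    "close_friend": "friend",
    "friend": "contact",
    "coworker": "contact",
}

def is_a(concept, ancestor):
    # Walk up the subsumption hierarchy.
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

def allowed(viewer_relation, resource_policy):
    # Grant if the viewer's relation to the owner specializes the policy concept.
    return is_a(viewer_relation, resource_policy)

print(allowed("close_friend", "friend"))  # True: close_friend is-a friend
print(allowed("coworker", "friend"))      # False
```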
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Al-Kaseem, Bilal R. "Optimised cloud-based 6LoWPAN network using SDN/NFV concepts for energy-aware IoT applications". Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15642.

Full text source
Abstract:
The Internet of Things (IoT) concept has been realised with the advent of Machine-to-Machine (M2M) communication, through which the vision of the future Internet has been revolutionised. IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) provides feasible IPv6 connectivity to previously isolated environments, e.g. wireless M2M sensor and actuator networks. This thesis's contributions include a novel mathematical model, energy-efficient algorithms, and a centralised software controller for the dynamic consolidation of programmability features in cloud-based M2M networks. A new generalised joint mathematical model is proposed for performance analysis of the 6LoWPAN MAC and PHY layers. The proposed model differs from existing analytical models as it precisely adopts the 6LoWPAN specifications introduced by the Internet Engineering Task Force (IETF) working group. The approach is based on Markov chain modelling and validated through Monte-Carlo simulation. In addition, an intelligent mechanism is proposed for selecting the optimal 6LoWPAN MAC layer parameter set, relying on an Artificial Neural Network (ANN), a Genetic Algorithm (GA), and Particle Swarm Optimisation (PSO). Simulation results show that utilising the optimal MAC parameters improves 6LoWPAN network throughput by 52-63% and reduces end-to-end delay by 54-65%. This thesis also focuses on energy-efficient data extraction and dissemination in a wireless M2M sensor network based on 6LoWPAN. A new scalable and self-organised clustering technique with a smart sleep scheduler is proposed for prolonging the M2M network's lifetime and enhancing network connectivity. These solutions overcome performance degradation and unbalanced energy consumption problems in homogeneous and heterogeneous sensor networks. Simulation results show that adopting the proposed schemes in a sensory field with multiple mobile sinks improves the total number of aggregated packets by 38-167% and extends network lifetime by 30-78%. Proof-of-concept real-time hardware testbed experiments verify the effectiveness of Software-Defined Networking (SDN), Network Function Virtualisation (NFV) and cloud computing on a 6LoWPAN network. The implemented testbed is based on open-standard development boards (i.e. Arduino), with one sink acting as the M2M 6LoWPAN gateway, where the network coordinator and the customised SDN controller operate. Experimental results indicate that the proposed approach reduces network discovery time by 60% and extends node lifetime by 65% in comparison with a traditional 6LoWPAN network. Finally, the thesis concludes with an overall picture of the research conducted and some suggestions for future work.
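A hedged sketch of the parameter-search idea follows (the fitness function is a stand-in; the thesis derives throughput and delay from its Markov-chain model): a small genetic algorithm exploring the IEEE 802.15.4/6LoWPAN MAC parameter space.

```python
# Illustrative sketch only: GA over (macMinBE, macMaxBE, macMaxCSMABackoffs,
# macMaxFrameRetries). The fitness below is an invented placeholder.
import random

BOUNDS = [(0, 7), (3, 8), (0, 5), (0, 7)]   # per-parameter (min, max)

def fitness(ind):
    # Placeholder objective: reward moderate backoff windows, penalize retries.
    min_be, max_be, backoffs, retries = ind
    return -(abs(max_be - min_be - 3) + 0.5 * backoffs + 0.3 * retries)

def mutate(ind):
    # Re-draw one randomly chosen parameter within its bounds.
    i = random.randrange(len(ind))
    lo, hi = BOUNDS[i]
    child = list(ind)
    child[i] = random.randint(lo, hi)
    return tuple(child)

pop = [tuple(random.randint(lo, hi) for lo, hi in BOUNDS) for _ in range(30)]
for _ in range(50):                          # evolve for 50 generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

print("best MAC parameter set found:", max(pop, key=fitness))
```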
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Chakraborty, Suryadip. "Data Aggregation in Healthcare Applications and BIGDATA set in a FOG based Cloud System". University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1471346052.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Raffa, Viviana. "Edge/cloud virtualization techniques and resources allocation algorithms for IoT-based smart energy applications". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22864/.

Full text source
Abstract:
Nowadays, the installation of residential battery energy storage (BES) has increased as a consequence of the decreasing cost of batteries. The coupling of small-scale energy generation (residential PV) and residential BES promotes the integration of microgrids (MGs), i.e., clusters of local energy sources, energy storage, and customers that are represented as a single controllable entity. The operations between multiple grid-connected MGs and the distribution network can be coordinated by controlling the power exchange; however, in order to achieve this level of coordination, a control and communication MG interface should be developed as an add-on DMS (Distribution Management System) functionality that integrates MG energy scheduling with the network's optimal power flow. This thesis proposes an edge-cloud architecture that integrates the microgrid energy scheduling method with the grid-constrained power flow, as well as providing tools for controlling and monitoring edge devices. As a specific case study, we consider the problem of determining the energy schedule (the amount extracted from or stored in batteries) for each prosumer in a microgrid with a certain global objective (e.g. to make as few energy exchanges as possible with the main grid). The results show that, to better optimise the BES schedule, the composition of a microgrid should be evaluated so as to balance deficits and surpluses, which can be done with Machine Learning (ML) techniques based on each prosumer's past production and consumption data.
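The scheduling objective can be illustrated with a minimal sketch (capacity and the hourly profile are assumed numbers, not results from the thesis): the battery absorbs surpluses and covers deficits so that exchanges with the main grid shrink.

```python
# Hedged sketch of the scheduling idea with illustrative numbers.
CAPACITY = 10.0      # kWh usable battery capacity (assumed)
soc = 5.0            # current state of charge in kWh

# Hourly net energy: production minus consumption (positive = surplus, kWh).
net = [2.0, 3.0, -1.5, -4.0, 1.0, -2.5]

grid = []
for p in net:
    if p >= 0:                       # surplus: store as much as fits
        stored = min(p, CAPACITY - soc)
        soc += stored
        grid.append(p - stored)      # remainder exported to the grid
    else:                            # deficit: draw from the battery first
        drawn = min(-p, soc)
        soc -= drawn
        grid.append(p + drawn)       # remainder imported (negative)

print("grid exchange per hour:", grid)
print("total absolute exchange:", sum(abs(g) for g in grid))
```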
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Bekcheva, Maria. "Flatness-based constrained control and model-free control applications to quadrotors and cloud computing". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS218.

Full text source
Abstract:
The first part of the thesis is devoted to the control of differentially flat systems with constraints. Two types of systems are studied: non-linear finite-dimensional systems and linear time-delay systems. We present an approach to embed input/state/output constraints in a unified manner into the trajectory design for differentially flat systems. To that purpose, we specialize the flat outputs (or the reference trajectories) as Bézier curves. Using the flatness property, the system's inputs/states can be expressed as a combination of the Bézier-curve flat outputs and their derivatives. Consequently, we explicitly obtain the expressions of the control points of the input/state Bézier curves as a combination of the control points of the flat outputs. By applying the desired constraints to the latter control points, we find the feasible regions for the output Bézier control points, i.e. a set of feasible reference trajectories. This framework avoids the use of generally computationally expensive optimization schemes. To resolve the uncertainties arising from imprecise model identification and unknown perturbations, we employ Model-Free Control (MFC), and in the second part of the thesis we present two applications demonstrating the effectiveness of our approach: 1. We propose a controller design that avoids the quadrotor's system identification procedures while staying robust with respect to endogenous disturbances (the control performance is independent of any change in mass, inertia, gyroscopic or aerodynamic effects) and exogenous disturbances (wind, measurement noise). To reach our goal, based on the cascaded structure of a quadrotor, we divide the system into positional and attitude subsystems, each controlled by an independent Model-Free controller of second-order dynamics. We validate our control approach in three realistic scenarios: in the presence of unknown measurement noise, with unknown time-varying wind disturbances, and under mass variation while tracking aggressive manoeuvres. 2. We employ Model-Free Control to control (maintain) the "horizontal elasticity" of a cloud computing system. When compared to commercial "Auto-Scaling" algorithms, our easily implementable approach behaves better, even with sharp workload fluctuations. This is confirmed by experiments on the Amazon Web Services (AWS) public cloud.
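The Bézier machinery lends itself to a short worked example (1-D, degree 3, illustrative numbers): the derivative of a Bézier curve is itself a Bézier curve whose control points are simple differences of the original ones, so bounding those control points bounds the derivative everywhere, by the convex-hull property, with no optimization involved.

```python
# Worked sketch of the flatness/Bézier idea on a scalar flat output.
import numpy as np

P = np.array([0.0, 1.0, 3.0, 4.0])          # control points of the flat output
n = len(P) - 1                               # curve degree

dP = n * np.diff(P)                          # control points of the derivative

# A constraint |y'(t)| <= v_max for all t in [0,1] is guaranteed whenever it
# holds for every control point of the derivative curve (convex-hull property).
v_max = 7.0
print("derivative control points:", dP)
print("derivative bound satisfied:", np.all(np.abs(dP) <= v_max))

def bezier(ctrl, t):
    # De Casteljau evaluation, used here to spot-check the bound numerically.
    pts = ctrl.astype(float).copy()
    for _ in range(len(pts) - 1):
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ts = np.linspace(0, 1, 5)
print("y'(t) samples:", [round(bezier(dP, t), 3) for t in ts])
```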
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Mlawanda, Joyce. "A comparative study of cloud computing environments and the development of a framework for the automatic deployment of scaleable cloud based applications". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/19994.

Full text source
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012
ENGLISH ABSTRACT: Modern-day online applications are required to deal with an ever-increasing number of users without decreasing in performance. This implies that the applications should be scalable. Applications hosted on static servers are inflexible in terms of scalability. Cloud computing is an alternative to the traditional paradigm of static application hosting and offers an illusion of infinite compute and storage resources. It is a way of computing whereby computing resources are provided by a large pool of virtualised servers hosted on the Internet. By virtually removing scalability, infrastructure and installation constraints, cloud computing provides a very attractive platform for hosting online applications. This thesis compares the cloud computing infrastructures Google App Engine and Amazon Web Services for hosting web applications and assesses their scalability performance compared to traditionally hosted servers. After the comparison of the three application hosting solutions, a proof-of-concept software framework for the provisioning and deployment of automatically scaling applications is built on Amazon Web Services, which is shown to be best suited for the development of such a framework.
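The core of an automatically scaling deployment can be sketched as a simple threshold rule (thresholds and limits are assumptions, not the framework's actual logic):

```python
# Minimal sketch of an auto-scaling decision loop with assumed thresholds.
def desired_instances(current, cpu_utilisation,
                      scale_up_at=0.75, scale_down_at=0.25,
                      min_n=1, max_n=20):
    if cpu_utilisation > scale_up_at:
        return min(current + 1, max_n)     # add a server under heavy load
    if cpu_utilisation < scale_down_at:
        return max(current - 1, min_n)     # release a server when idle
    return current

n = 2
for load in [0.50, 0.80, 0.90, 0.60, 0.20, 0.10]:  # simulated utilisation
    n = desired_instances(n, load)
    print(f"utilisation={load:.2f} -> {n} instance(s)")
```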
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Schroeter, Julia [Verfasser], Uwe [Akademischer Betreuer] Aßmann i Vander [Akademischer Betreuer] Alves. "Feature-based configuration management of reconfigurable cloud applications / Julia Schroeter. Gutachter: Uwe Aßmann ; Vander Alves". Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://d-nb.info/1068446412/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Stolarz, Wojciech. "Cost-effectiveness of tenant-based allocation model in SaaS applications running in a public Cloud". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2855.

Full text source
Abstract:
Context. Cloud computing is attracting more and more interest every year. It is an approach that allows Internet-based applications to work in a distributed and virtualized cloud environment, characterized by on-demand resources and pay-per-use pricing. Software-as-a-Service (SaaS) is a software distribution paradigm in cloud computing and represents the highest, software layer in the cloud stack. Since most cloud service providers charge for resource use, it is important to create resource-efficient applications. One way to achieve that is the multi-tenant architecture of SaaS applications, which allows the application to self-manage its resources efficiently. Objectives. In this study I investigate the influence of a tenant-based resource allocation model on the cost-effectiveness of SaaS systems. I try to find out whether that model can decrease a system's actual costs in a commercial public cloud environment. Methods. I implement two SaaS systems: first a tenant-unaware one and then one using the tenant-based resource allocation model. Both are deployed into the Amazon public cloud environment, and tests focused on measuring over- and underutilization are conducted in order to compare the cost-effectiveness of the solutions. The public cloud provider's billing service is used as the final cost measure. Results. The tenant-based resource allocation model proved to decrease the system's running costs. It also reduced the system's resource underutilization. Similar research had been done before, but the model was tested in a private cloud; in this work the systems were deployed into a commercial public cloud. Conclusions. The tenant-based resource allocation model is one method to tackle sub-optimal resource utilization. Compared to traditional resource scaling, it can reduce the costs of running SaaS systems in cloud environments. The more tenant-oriented the SaaS systems are, the more benefits the model can provide.
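A minimal sketch of why tenant-based allocation reduces cost (demands and prices are invented numbers): tenants with fractional demands are packed onto shared instances instead of each occupying a mostly idle dedicated one.

```python
# Illustrative sketch: first-fit packing of tenant demands onto instances.
INSTANCE_CAPACITY = 1.0            # normalized capacity of one cloud instance
HOURLY_PRICE = 0.10                # assumed price per instance-hour

tenants = [0.2, 0.5, 0.3, 0.1, 0.4, 0.2]   # per-tenant demand (fractions)

def packed_instances(demands):
    # First-fit-decreasing bin packing of tenant demands onto instances.
    bins = []
    for d in sorted(demands, reverse=True):
        for b in range(len(bins)):
            if bins[b] + d <= INSTANCE_CAPACITY:
                bins[b] += d
                break
        else:
            bins.append(d)
    return len(bins)

naive = len(tenants)               # one dedicated instance per tenant
shared = packed_instances(tenants)
print(f"tenant-unaware: {naive} instances, {naive * HOURLY_PRICE:.2f} $/h")
print(f"tenant-based:   {shared} instances, {shared * HOURLY_PRICE:.2f} $/h")
```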
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Khaleel, Ali. "Optimisation of a Hadoop cluster based on SDN in cloud computing for big data applications". Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17076.

Full text source
Abstract:
Big data has received a great deal of attention from many sectors, including academia, industry and government. The Hadoop framework has emerged to support its storage and analysis using the MapReduce programming model. However, this framework is a complex system with more than 150 parameters, some of which can exert a considerable effect on the performance of a Hadoop job. The optimal tuning of the Hadoop parameters is a difficult and time-consuming task. In this thesis, an optimisation approach is presented to improve the performance of the Hadoop framework by setting the values of the Hadoop parameters automatically. Specifically, genetic programming is used to construct a fitness function that represents the interrelations among the Hadoop parameters. Then, a genetic algorithm is employed to search for the optimal or near-optimal values of the Hadoop parameters. A Hadoop cluster was configured on two servers at Brunel University London to evaluate the performance of the proposed optimisation approach. The experimental results show that the performance of a Hadoop MapReduce job for 20 GB on the WordCount application is improved by 69.63% and 30.31% when compared to the default settings and the state of the art, respectively, whilst on the TeraSort application it is improved by 73.39% and 55.93%. For further optimisation, SDN is also employed to improve the performance of a Hadoop job. The experimental results show that the performance of a Hadoop job in an SDN network for 50 GB is improved by 32.8% when compared to a traditional network, whilst on the TeraSort application the improvement for 50 GB is on average 38.7%. An effective computing platform is also presented in this thesis to support solar irradiation data analytics. It is built on RHIPE to provide fast analysis and calculation for solar irradiation datasets. The performance of RHIPE is compared with the R language in terms of accuracy, scalability and speedup. The speedup of RHIPE is evaluated by Gustafson's Law, revised to enhance the performance of parallel computation on intensive irradiation datasets in a cluster computing environment like Hadoop. The performance of the proposed work is evaluated on a Hadoop cluster based on the Microsoft Azure cloud, and the experimental results show that RHIPE provides considerable improvements over the R language. Finally, an effective SDN-based routing algorithm is presented to improve the performance of a Hadoop job in a large-scale cluster in a data centre network. The proposed algorithm improves the performance of a Hadoop job during the shuffle phase by allocating efficient paths for each shuffling flow according to the network resource demand of each flow as well as their size and number. Furthermore, it also allocates alternative paths for each shuffling flow in the case of any link crash or failure. The algorithm is evaluated on two network topologies, namely fat-tree and leaf-spine, built with the EstiNet emulator software. The experimental results show that the proposed approach improves the performance of a Hadoop job in a data centre network.
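A hedged sketch of the tuning loop follows (the surrogate fitness is invented; the thesis builds its fitness function with genetic programming): a genetic-style search over a few real Hadoop parameters.

```python
# Illustrative sketch: GA-style search over real Hadoop/MapReduce settings,
# scored by an invented surrogate standing in for the learned runtime model.
import random

SPACE = {
    "mapreduce.task.io.sort.mb":               [50, 100, 200, 400],
    "mapreduce.reduce.shuffle.parallelcopies": [5, 10, 20],
    "mapreduce.job.reduces":                   [1, 2, 4, 8, 16],
}

def surrogate_runtime(cfg):
    # Invented stand-in for predicted job runtime: smaller is better.
    return (400 / cfg["mapreduce.task.io.sort.mb"]
            + 20 / cfg["mapreduce.reduce.shuffle.parallelcopies"]
            + abs(cfg["mapreduce.job.reduces"] - 8))

def random_cfg():
    return {k: random.choice(v) for k, v in SPACE.items()}

pop = [random_cfg() for _ in range(20)]
for _ in range(40):
    pop.sort(key=surrogate_runtime)                # ascending: best first
    elite = pop[:5]
    children = []
    for _ in range(15):
        a, b = random.sample(elite, 2)             # crossover two elites
        child = {k: random.choice([a[k], b[k]]) for k in SPACE}
        if random.random() < 0.3:                  # occasional mutation
            k = random.choice(list(SPACE))
            child[k] = random.choice(SPACE[k])
        children.append(child)
    pop = elite + children

print("best configuration found:", min(pop, key=surrogate_runtime))
```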
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Runsewe, Olubisi A. "A Novel Cloud Broker-based Resource Elasticity Management and Pricing for Big Data Streaming Applications". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39251.

Full text source
Abstract:
The pervasive availability of streaming data from various sources is driving today's enterprises to acquire low-latency big data streaming applications (BDSAs) for extracting useful information. In parallel, recent advances in technology have made it easier to collect, process and store these data streams in the cloud. For most enterprises, gaining insights from big data is immensely important for maintaining competitive advantage. However, the majority of enterprises have difficulty managing the multitude of BDSAs and the complex issues cloud technologies present, giving rise to the incorporation of cloud service brokers (CSBs). Generally, the main objective of the CSB is to maintain the heterogeneous quality of service (QoS) of BDSAs while minimizing costs. To achieve this goal, the cloud, although it has many desirable features, presents two major challenges for CSBs: resource prediction and resource allocation. First, most stream processing systems allocate a fixed amount of resources at runtime, which can lead to under- or over-provisioning as BDSA demands vary over time. Thus, obtaining an optimal trade-off between QoS violation and cost requires an accurate demand prediction methodology to prevent waste, degradation or shutdown of processing. Second, coordinating resource allocation and pricing decisions for self-interested BDSAs to achieve fairness and efficiency can be complex. This complexity is exacerbated by the recent introduction of containers. This dissertation addresses these cloud resource elasticity management issues for CSBs as follows. First, we provide two contributions to the resource prediction challenge: we propose a novel layered multi-dimensional hidden Markov model (LMD-HMM) framework for managing time-bounded BDSAs and a layered multi-dimensional hidden semi-Markov model (LMD-HSMM) to address unbounded BDSAs. Second, we present a container resource allocation mechanism (CRAM) for optimal workload distribution to meet the real-time demands of competing containerized BDSAs. We formulate the problem as an n-player non-cooperative game among a set of heterogeneous containerized BDSAs. Finally, we incorporate a dynamic incentive-compatible pricing scheme that coordinates the decisions of self-interested BDSAs to maximize the CSB's surplus. Experimental results demonstrate the effectiveness of our approaches.
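As a toy baseline for the allocation side (deliberately much simpler than CRAM's game-theoretic mechanism), the following sketch divides container capacity among competing streaming applications in proportion to their declared demands:

```python
# Simplified sketch, not CRAM itself: proportional-share container allocation.
def proportional_allocation(capacity, demands):
    total = sum(demands.values())
    if total <= capacity:                 # everyone fits: grant demands as-is
        return dict(demands)
    scale = capacity / total              # otherwise scale shares down fairly
    return {app: d * scale for app, d in demands.items()}

demands = {"fraud-detection": 40, "clickstream": 25, "sensor-agg": 60}
alloc = proportional_allocation(capacity=100, demands=demands)
for app, share in alloc.items():
    print(f"{app}: {share:.1f} container units")
```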
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Gezzi, Giacomo. "Smart execution of distributed application by balancing resources in mobile devices and cloud-based avatars". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10330/.

Full text source
Abstract:
The goal of this thesis project is to build a middleware-level service for mobile devices that provides support for offloading code to a cloud infrastructure. In particular, the project focuses on migrating code to virtual machines dedicated to a single user, where the operating system of the VMs is the same as the one used by the mobile device. Like previous work on computation offloading, the thesis project must guarantee better performance in terms of execution time and device battery usage. More broadly, the goal is to adapt the computation offloading principle to a context of distributed mobile systems, improving not only the performance of the single device but also the execution of the distributed application itself. This is achieved through dynamic management of offloading decisions based not only on the state of the device, but also on the wishes and/or the state of the other users belonging to the same group. For example, one user could influence the decisions of the other members of the group by specifying a particular requirement, such as high quality of information, fast response, or other high-level information. The system provides programmers with a simple definition tool for creating new custom policies and thus specifying new offloading rules. To make the project accessible to a larger number of developers, the tools provided are simple and do not require specific knowledge of the technology. The system was then tested to verify its performance with simple offloading mechanisms. Subsequently, it was also tested to verify that selecting different policies, defined by the programmer, actually led to an optimization of the designated parameter.
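The policy mechanism can be illustrated with a minimal sketch (policy names and state fields are hypothetical): each policy maps the device state, and optionally the group's declared preference, to an offload-or-local decision.

```python
# Illustrative sketch of programmer-defined offloading policies.
def battery_saver(state, group):
    # Offload whenever the battery is low, regardless of latency.
    return state["battery"] < 0.4

def fast_response(state, group):
    # Offload only if the network round-trip is cheap enough.
    return state["rtt_ms"] < 50

def group_quality(state, group):
    # A group member requesting high quality pushes work to the avatar VM.
    return group.get("preference") == "high_quality" or state["battery"] < 0.2

POLICIES = {"battery": battery_saver, "fast": fast_response,
            "quality": group_quality}

state = {"battery": 0.35, "rtt_ms": 80}
group = {"preference": "high_quality"}
for name, policy in POLICIES.items():
    where = "avatar VM" if policy(state, group) else "device"
    print(f"policy {name!r}: run task on {where}")
```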
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Reinert, Manuel [Verfasser], i Matteo [Akademischer Betreuer] Maffei. "Cryptographic techniques for privacy and access control in cloud-based applications / Manuel Reinert ; Betreuer: Matteo Maffei". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2018. http://d-nb.info/1162892269/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Reinert, Manuel [Verfasser], i Matteo [Akademischer Betreuer] Maffei. "Cryptographic techniques for privacy and access control in cloud-based applications / Manuel Reinert ; Betreuer: Matteo Maffei". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:291-scidok-ds-272720.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Gotin, Manuel [Verfasser], i R. H. [Akademischer Betreuer] Reussner. "QoS-Based Optimization of Runtime Management of Sensing Cloud Applications / Manuel Gotin ; Betreuer: R. H. Reussner". Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/1235072312/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Muthiah, Karthika Ms. "Performance Evaluation of Hadoop based Big Data Applications with HiBench Benchmarking tool on IaaS Cloud Platforms". UNF Digital Commons, 2017. https://digitalcommons.unf.edu/etd/771.

Full text source
Abstract:
Cloud computing is a computing paradigm in which large numbers of devices are connected through networks that provide a dynamically scalable infrastructure for applications, data and storage. Currently, many businesses, from small firms to big companies and industries, are moving their operations to cloud services, because cloud platforms can increase a company's growth through process efficiency and reduced information technology spending [Coles16]. Companies rely on cloud platforms like Amazon Web Services, Google Compute Engine, and Microsoft Azure for their business development. Due to the emergence of new technologies, devices, and communications, the amount of data produced is growing rapidly every day. Big data refers to collections of large datasets, typically hundreds of gigabytes, terabytes or petabytes. Storing big data and analysing this huge volume of data are great challenges for companies and new businesses, and they are the primary focus of this paper. This research was conducted on Amazon's Elastic Compute Cloud (EC2) and Microsoft Azure platforms using the HiBench Hadoop Big Data Benchmark suite [HiBench16]. Processing huge volumes of data is a tedious task normally handled by traditional database servers. In contrast, Hadoop is a powerful framework used to handle applications with big data requirements efficiently, using the MapReduce algorithm to run them on systems with many commodity hardware nodes. Hadoop's distributed file system facilitates rapid storage and data transfer rates for big data among the nodes and remains operational even when a node failure occurs in a cluster. HiBench is a big data benchmarking tool used for evaluating the performance of big data applications whose data are handled and controlled by a Hadoop framework cluster. A Hadoop cluster environment was set up and evaluated on the two cloud platforms. A quantitative comparison was performed on Amazon EC2 and Microsoft Azure, along with a study of their pricing models, and measures are suggested for future studies and research.
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Lounis, Ahmed. "Security in cloud computing". Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP1945/document.

Full text source
Abstract:
Cloud computing has recently emerged as a new paradigm where the resources of computing infrastructures are provided as services over the Internet. However, this paradigm also brings many new challenges for data security and access control: when business or organizational data is outsourced to the cloud, it is no longer within the same trusted domain as the owner's traditional infrastructure. This thesis contributes to overcoming the data security challenges and issues that arise from using the cloud for critical applications. Specifically, we consider using cloud storage services for medical applications such as Electronic Health Record (EHR) systems and medical Wireless Sensor Networks. First, we discuss the benefits and challenges of using cloud services for healthcare applications. Then, we study the security risks of the cloud and give an overview of existing work. After that, we propose a secure and scalable cloud-based architecture for medical applications. In our solution, we develop fine-grained access control in order to tackle the challenges of sensitive data security and complex, dynamic access policies. Finally, we propose a secure architecture for emergency management to meet the challenge of emergency access.
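A minimal sketch of the two access paths (the rule format and break-glass behaviour are assumptions, not the thesis's cryptographic scheme): normal requests are checked against a fine-grained policy, while emergency requests are granted but logged for later audit.

```python
# Hedged sketch of fine-grained access control with an audited emergency path.
AUDIT_LOG = []

def can_access(user, record, emergency=False):
    if emergency:
        AUDIT_LOG.append((user["id"], record["id"], "EMERGENCY"))
        return True                       # break-glass: allow but log for audit
    policy = record["policy"]             # e.g. required roles and ward
    return user["role"] in policy["roles"] and user["ward"] == policy["ward"]

record = {"id": "ehr-42",
          "policy": {"roles": {"doctor", "nurse"}, "ward": "cardiology"}}
print(can_access({"id": "u1", "role": "nurse", "ward": "cardiology"}, record))  # True
print(can_access({"id": "u2", "role": "admin", "ward": "oncology"}, record))    # False
print(can_access({"id": "u2", "role": "admin", "ward": "oncology"}, record,
                 emergency=True))                                               # True
print("audit trail:", AUDIT_LOG)
```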
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Dörr, Stefan [Verfasser], i Alexander [Akademischer Betreuer] Verl. "Cloud-based cooperative long-term SLAM for mobile robots in industrial applications / Stefan Dörr ; Betreuer: Alexander Verl". Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2020. http://d-nb.info/1223928780/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Sinclair, J. G. "An approach to compliance conformance for cloud-based business applications leveraging service level agreements and continuous auditing". Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.676738.

Full text source
Abstract:
Organisations increasingly use flexible, adaptable and scalable IT infrastructures, such as cloud computing resources, for hosting business applications and storing customer data. To prevent the misuse of personal data, auditors can assess businesses for legal compliance conformance. For data privacy compliance there are many applicable pieces of legislation as well as regulations and standards. Businesses operate globally and typically have systems that are dynamic and mobile; in contrast current data privacy laws often have geographical jurisdictions and so conflicts can arise between the law and the technological framework of cloud computing. Traditional auditing approaches are unsuitable for cloud-based environments because of the complexity of potentially short-lived, migratory and scalable real-time virtual systems. My research goal is to address the problem of auditing cloud-based services for data privacy compliance by devising an appropriate machine-readable Service Level Agreement (SLA) framework for specifying applicable legal conditions. This allows the development of a scalable Continuous Compliance Auditing Service (CCAS) for monitoring data privacy in cloud-based environments. The CCAS architecture utilises agreed SLA conditions to process service events for compliance conformance. The CCAS architecture has been implemented and customised for a real world Electronic Health Record (EHR) scenario in order to demonstrate geo-location compliance monitoring using data privacy restrictions. Finally, the automated audit process of CCAS has been compared and evaluated against traditional auditing approaches and found to have the potential for providing audit capabilities in complex IT environments.
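The auditing idea can be sketched as follows (the SLA encoding and event fields are assumed; the actual CCAS framework is richer): a machine-readable SLA names the jurisdictions where data may reside, and a continuous auditor checks each storage or migration event against it.

```python
# Minimal sketch of SLA-driven geo-location compliance auditing.
SLA = {"service": "ehr-hosting", "allowed_regions": {"EU", "UK"}}

events = [  # stream of events emitted by the cloud platform (illustrative)
    {"type": "store",   "record": "r1", "region": "EU"},
    {"type": "migrate", "record": "r1", "region": "US"},
    {"type": "store",   "record": "r2", "region": "UK"},
]

def audit(sla, event_stream):
    # Yield each event together with its compliance verdict.
    for ev in event_stream:
        yield ev, ev["region"] in sla["allowed_regions"]

for ev, ok in audit(SLA, events):
    status = "OK" if ok else "VIOLATION"
    print(f"{ev['type']:<7} {ev['record']} in {ev['region']}: {status}")
```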
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Silva, Jefferson de Carvalho. "A framework for building component-based applications on a cloud computing platform for high performance computing services". Universidade Federal do CearÃ, 2016. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=16543.

Full text source
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Developing High Performance Computing (HPC) applications that optimally access the available computing resources at a higher level of abstraction is a challenge for many scientists. To address this problem, we present a proposal for a component-oriented computing cloud called HPC Shelf, on which HPC applications run, and the SAFe framework, a front-end for creating applications on HPC Shelf and the author's main contribution. SAFe is based on Scientific Workflow Management Systems (SWMS) and allows the specification of computational solutions formed by components to solve problems specified by the expert user through a high-level interface. For that purpose, it implements SAFeSWL, an architectural and orchestration description language for describing scientific workflows. Compared with other SWMS alternatives, besides freeing expert users from concerns about the construction of parallel and efficient computational solutions from the components offered by the cloud, SAFe integrates a system of contextual contracts aligned with a system of dynamic discovery (resolution) of components. In addition, SAFeSWL allows explicit control of the life-cycle stages of components (resolution, deployment, instantiation and execution) through embedded operators, aimed at optimizing the use of cloud resources and minimizing the overall execution cost of computational solutions (workflows). Montage and Map/Reduce are the case studies applied for the demonstration, evaluation and validation of the particular features of SAFe in building HPC applications for the HPC Shelf platform.
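The explicit life-cycle control can be illustrated with a small sketch (component and operator names are invented, in the spirit of SAFeSWL rather than its syntax): each component advances through resolution, deployment, instantiation and execution only when the workflow demands it, so resources can be released early.

```python
# Hedged sketch of explicit component life-cycle orchestration.
class Component:
    def __init__(self, name):
        self.name, self.stage = name, "declared"

    def resolve(self):     self._advance("resolved")     # contract -> concrete
    def deploy(self):      self._advance("deployed")     # placed on platform
    def instantiate(self): self._advance("instantiated") # processes started
    def run(self):         self._advance("finished")     # computation executed
    def release(self):     self._advance("released")     # resources returned

    def _advance(self, stage):
        print(f"{self.name}: {self.stage} -> {stage}")
        self.stage = stage

solver, viewer = Component("solver"), Component("viewer")
for step in (solver.resolve, solver.deploy, solver.instantiate, solver.run,
             solver.release):       # solver is released before the viewer runs
    step()
for step in (viewer.resolve, viewer.deploy, viewer.instantiate, viewer.run):
    step()
```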
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Messaoudi, Farouk. "User equipment based-computation offloading for real-time applications in the context of Cloud and edge networks". Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S104/document.

Full text source
Abstract:
Computation offloading is a technique that allows resource-constrained mobile devices to fully or partially offload a computation-intensive application to a resourceful cloud environment. Computation offloading is performed mostly to save energy, improve performance, or because mobile devices are unable to process a computationally heavy task. There have been numerous approaches and systems for offloading tasks in classical Mobile Cloud Computing (MCC) environments, such as CloneCloud, MAUI, and Cyber Foraging. Most of these systems offer a complete solution that deals with different objectives. Although these systems present good performance in general, one common issue among them is that they are not adapted to real-time applications such as mobile gaming, augmented reality, and virtual reality, which need particular treatment. Computation offloading is widely promoted, especially with the advent of Mobile Edge Computing (MEC) and its evolution toward Multi-access Edge Computing, which broadens its applicability to heterogeneous networks including WiFi and fixed access technologies. Combined with 5G mobile access, a plethora of novel mobile services will appear, including Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Vehicle-to-everything (eV2X). Such services require low latency to access data and high resource capabilities for computation. To find its position inside a 5G architecture and among the offered 5G services, computation offloading needs to overcome several challenges: high network latency, resource heterogeneity, application interoperability and portability, offloading framework overhead, power consumption, security, and mobility, to name a few. In this thesis, we study the computation offloading paradigm for real-time applications, including mobile gaming and image processing, with a focus on network latency, resource consumption, and the performance achieved. The contributions of the thesis are organized along the following axes: studying game engine behaviour on different platforms in terms of resource consumption (CPU/GPU) per frame and per game module; studying the possibility of offloading game engine modules based on resource consumption, network latency, and code dependency; proposing a deployment strategy for cloud gaming providers to better exploit their resources, based on the variability of the resource demand of game engines and the player's QoE; proposing a static computation offloading solution for game engines that splits the 3D world scene into different game objects, some of which are offloaded based on resource consumption, network latency, and code dependency; proposing a dynamic offloading solution for game engines based on a heuristic that computes, for each game object, the offloading gain, according to which an object may be offloaded or not; and proposing a novel approach to offloading computation to the MEC by deploying a mobile edge application that drives the UE's offloading decision, together with two algorithms to make the best decision regarding which tasks to offload from the UE to a server hosted in the MEC.
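The gain heuristic can be sketched as follows (the cost model and numbers are assumptions, not the thesis's exact formula): an object is offloaded only when remote execution plus network transfer beats local execution.

```python
# Illustrative sketch of a per-object offloading gain heuristic.
def offloading_gain(obj, server_speedup=4.0, bandwidth_mbps=50.0, rtt_ms=30.0):
    local_ms = obj["cpu_ms"]
    remote_ms = obj["cpu_ms"] / server_speedup
    # KB -> kilobits; 1 Mbps carries ~1 kilobit per millisecond.
    transfer_ms = obj["state_kb"] * 8 / bandwidth_mbps + rtt_ms
    return local_ms - (remote_ms + transfer_ms)

objects = [
    {"name": "physics",   "cpu_ms": 120.0, "state_kb": 40.0},
    {"name": "particles", "cpu_ms": 15.0,  "state_kb": 200.0},
    {"name": "ai",        "cpu_ms": 80.0,  "state_kb": 10.0},
]
for obj in objects:
    gain = offloading_gain(obj)
    decision = "offload" if gain > 0 else "keep local"
    print(f"{obj['name']}: gain={gain:.1f} ms -> {decision}")
```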
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Guo, Jia. "Trust-based Service Management of Internet of Things Systems and Its Applications". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82854.

Full text source
Abstract:
A future Internet of Things (IoT) system will consist of a huge quantity of heterogeneous IoT devices, each capable of providing services upon request. It is of utmost importance for an IoT device to know whether another IoT service is trustworthy when requesting it to provide a service. In this dissertation research, we develop trust-based service management techniques applicable to distributed, centralized, and hybrid IoT environments. For distributed IoT systems, we develop a trust protocol called Adaptive IoT Trust. The novelty lies in the use of distributed collaborative filtering to select trust feedback from owners of IoT nodes sharing similar social interests. We develop a novel adaptive filtering technique to adjust trust protocol parameters dynamically to minimize trust estimation bias and maximize application performance. Our adaptive IoT trust protocol is scalable to large IoT systems in terms of storage and computational costs. We perform a comparative analysis of our adaptive IoT trust protocol against contemporary IoT trust protocols to demonstrate its effectiveness. For centralized or hybrid cloud-based IoT systems, we propose the notion of Trust as a Service (TaaS), allowing an IoT device to query the service trustworthiness of another IoT device and also report its service experiences to the cloud. TaaS preserves the notion that trust is subjective despite the fact that trust computation is performed by the cloud. We use social similarity for filtering recommendations and a dynamic weighted sum to combine self-observations and recommendations, minimizing trust bias and convergence time against opportunistic service and false recommendation attacks. For large-scale IoT cloud systems, we develop a scalable trust management protocol called IoT-TaaS to realize TaaS. For hybrid IoT systems, we develop a new 3-layer hierarchical cloud structure for integrated mobility, service, and trust management. This architecture supports scalability, reconfigurability, fault tolerance, and resiliency against cloud node failure and network disconnection. We develop a trust protocol called IoT-HiTrust leveraging this 3-layer hierarchical structure to realize TaaS. We validate these trust-based IoT service management techniques with real-world IoT applications, including smart city air pollution detection, augmented map travel assistance, and travel planning, and demonstrate that they outperform contemporary non-trust-based and trust-based IoT service management solutions.
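A minimal sketch of the trust-update idea (the weighting scheme is an assumption): a node combines its own observations with recommendations, each recommendation discounted by the recommender's social similarity, which damps badmouthing by dissimilar strangers.

```python
# Hedged sketch of a weighted-sum trust update with similarity-filtered
# recommendations; alpha balances direct experience against hearsay.
def update_trust(self_obs, recommendations, alpha=0.6):
    """self_obs: fraction of satisfactory interactions observed directly.
    recommendations: list of (reported_trust, social_similarity) pairs."""
    if recommendations:
        weight_sum = sum(sim for _, sim in recommendations)
        rec = sum(t * sim for t, sim in recommendations) / weight_sum
    else:
        rec = self_obs                    # no recommenders: rely on oneself
    return alpha * self_obs + (1 - alpha) * rec

# A provider serves us well, but a dissimilar stranger badmouths it.
recs = [(0.2, 0.1), (0.9, 0.8), (0.85, 0.7)]   # (trust report, similarity)
print(round(update_trust(self_obs=0.9, recommendations=recs), 3))
```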
Ph. D.
Styles: APA, Harvard, Vancouver, ISO, etc.
45

An, Kijin. "The Client Insourcing Refactoring to Facilitate the Re-engineering of Web-Based Applications". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103391.

Full text source
Abstract:
Developers often need to re-engineer distributed applications to address changes in requirements made only after deployment. Much of the complexity of inspecting and evolving distributed applications lies in their distributed nature, while the majority of mature program analysis and transformation tools work only with centralized software. Inspired by business process re-engineering, in which remote operations can be insourced back in house to be restructured and outsourced anew, this dissertation brings an analogous approach to the re-engineering of distributed applications. Our approach introduces a novel automatic refactoring---Client Insourcing---that creates a semantically equivalent centralized version of a distributed application. This centralized version is then inspected, modified, and redistributed to meet new requirements. This dissertation demonstrates the utility of Client Insourcing in helping meet changed requirements in performance, reliability, and security. We implemented Client Insourcing in the important domain of full-stack JavaScript applications, in which both the client and server parts are written in JavaScript, and applied our implementation to re-engineer mobile web applications. Client Insourcing reduces the complexity of inspecting and evolving distributed applications, thereby facilitating their re-engineering. This dissertation is based on 4 conference papers and 2 doctoral symposium papers, presented at ICWE 2019, SANER 2020, WWW 2020, and ICWE 2021.
Doctor of Philosophy
Modern web applications are distributed across a browser-based client and a remote server. Software developers need to optimize the performance of web applications as well as correct and modify their functionality. However, the vast majority of mature development tools, used for optimizing, correcting, and modifying applications work only with non-distributed software, written to run on a single machine. To facilitate the maintenance and evolution of web applications, this dissertation research contributes new automated software transformation techniques. These contributions can be incorporated into the design of software development tools, thereby advancing the engineering of web applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Al, Abdulatef Mohammed. "A Phenomenographic Study of the Integration of Cloud-Based Applications in Higher Education: Views of Ohio University Faculty Members". Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1584755409027278.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Thomas, Anita. "Classification of Man-made Urban Structures from Lidar Point Clouds with Applications to Extrusion-based 3-D City Models". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429484410.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Dinh-Xuan, Lam [Verfasser], i Phuoc [Gutachter] Tran-Gia. "Quality of Experience Assessment of Cloud Applications and Performance Evaluation of VNF-Based QoE Monitoring / Lam Dinh-Xuan ; Gutachter: Phuoc Tran-Gia". Würzburg : Universität Würzburg, 2018. http://d-nb.info/1169573053/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Etchevers, Xavier. "Déploiement d’applications patrimoniales en environnements de type informatique dans le nuage". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM100/document.

Full text source
Abstract:
The objective of this thesis is to offer an end-to-end solution for reliably describing and deploying a distributed application in a virtualized environment. This involves defining a formalism able to describe an application and its execution environment, and then providing the tools capable of interpreting this formalism to deploy (install, instantiate and configure) the application on a cloud computing platform.
Cloud computing aims to cut down on the outlay and operational expenses involved in setting up and running applications. To do this, an application is split into a set of virtualized hardware and software resources. This virtualized application can be autonomously managed, making it responsive to the dynamic changes affecting its running environment. This is referred to as Application Life-cycle Management (ALM). In cloud computing, ALM is a growing but immature market, with many offers claiming to significantly improve productivity. However, all these solutions are faced with a major restriction: the duality between the level of autonomy they offer and the type of applications they can handle. To address this, this thesis focuses on managing the initial deployment of an application to demonstrate that the duality is artificial. The main contributions of this work are presented in a platform named VAMP (Virtual Applications Management Platform). VAMP can deploy any legacy application distributed in the cloud in an autonomous, generic and reliable way. It consists of: • a component-based model to describe the elements making up an application and their projection on the running infrastructure, as well as the dependencies binding them in the applicative architecture; • an asynchronous, distributed and reliable protocol for self-configuration and self-activation of the application; • mechanisms ensuring the reliability of the VAMP system itself. Beyond implementing the solution, the most critical aspects of running VAMP have been formally verified using model-checking tools. A validation step was also used to demonstrate the genericity of the proposal through various real-life implementations.
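The self-configuration protocol can be illustrated with a much-simplified sketch (component names are invented; the real protocol is asynchronous and distributed): each component activates once the components it imports have exported their configuration.

```python
# Simplified sketch of dependency-driven self-configuration, VAMP-style.
DEPENDS_ON = {            # applicative architecture: component -> imports
    "apache": ["tomcat"],
    "tomcat": ["mysql"],
    "mysql": [],
}

configured = set()

def try_activate():
    progressed = True
    while progressed:
        progressed = False
        for comp, imports in DEPENDS_ON.items():
            ready = all(dep in configured for dep in imports)
            if comp not in configured and ready:
                print(f"configuring and activating {comp}")
                configured.add(comp)      # its exports are now available
                progressed = True

try_activate()
print("fully started:", configured == set(DEPENDS_ON))
```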
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Polat, Songül. "Combined use of 3D and hyperspectral data for environmental applications". Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.

Full text source
Abstract:
Ever-increasing demands for solutions that describe our environment and the resources it contains require technologies that support an efficient and comprehensive description, leading to better content understanding. Optical technologies, the combination of these technologies, and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging (HSI) are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications thanks to its high spatial and spectral resolutions, while 3D technologies help to understand scenes in more detail by using geometrical, topological, and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrate the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and to the use of these features to detect objects of interest. More specifically, we propose different approaches to combine 3D and hyperspectral data depending on the HSI/3D technologies used and show how each sensor can compensate for the weaknesses of the other. Furthermore, a new shape- and rule-based method for the analysis of spectral signatures was developed and presented. Its strengths and weaknesses compared to existing approaches are discussed, and its outperformance of support vector machine (SVM) methods is demonstrated on the basis of practical findings from the fields of cultural heritage and the sorting of plastic and electronic waste. Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this method is based on a practical example from the field of waste electrical and electronic equipment (WEEE) and focuses on the separation of materials such as plastics, printed circuit boards (PCBs), and electronic components on PCBs. The results obtained confirm that an improvement in classification results could be achieved compared to previously proposed methods. The methods and processes developed in this thesis are designed to be directly transferable to other application domains and to generalize to other case studies without prior adaptation.
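As a rough illustration of what a shape- and rule-based analysis of spectral signatures can look like, here is a minimal sketch in Python. The band limits, thresholds, shape criteria, and class labels are all invented for the example and are not taken from the thesis; the actual method and its decision rules are not reproduced here.

# Illustrative sketch only: a toy shape- and rule-based classifier for
# spectral signatures. All thresholds, band limits, and class names are
# hypothetical placeholders, not values from the thesis.
import numpy as np

def classify_signature(spectrum, wavelengths):
    """Apply simple shape criteria (peak position, spectral slope) as rules."""
    peak_wl = wavelengths[int(np.argmax(spectrum))]   # dominant reflectance peak
    slope = np.gradient(spectrum, wavelengths)        # per-band spectral slope
    nir_slope = slope[wavelengths > 1000.0].mean()    # mean slope beyond 1000 nm

    # Hypothetical decision rules keyed to the shape of the signature.
    if peak_wl < 600.0 and nir_slope <= 0.0:
        return "plastic"
    if 600.0 <= peak_wl <= 900.0:
        return "pcb"
    return "electronic_component"

if __name__ == "__main__":
    wl = np.linspace(400.0, 1700.0, 200)                  # 400-1700 nm grid
    toy = np.exp(-((wl - 550.0) ** 2) / (2 * 40.0 ** 2))  # synthetic peak at 550 nm
    print(classify_signature(toy, wl))                    # -> "plastic"

Unlike an SVM, which learns a decision boundary from labeled training spectra, such explicit rules can be inspected and adjusted directly, which is one plausible reason a rule-based approach can be attractive for sorting tasks.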
Styles: APA, Harvard, Vancouver, ISO, etc.
