Dissertations / Theses on the topic 'Geo-distributed system'

Consult the top 15 dissertations / theses for your research on the topic 'Geo-distributed system.'


1

Anderson, Paul. "GeoS: A Service for the Management of Geo-Social Information in a Distributed System." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Applications and services that take advantage of social data usually infer social relationships using information produced only within their own context, using a greatly simplified representation of users' social data. We propose to combine social information from multiple sources into a directed and weighted social multigraph in order to enable novel socially-aware applications and services. We present GeoS, a geo-social data management service which implements a representative set of social inferences and can run on a decentralized system. We demonstrate GeoS' potential for social applications on a collection of social data that combines collocation information and Facebook friendship declarations from 100 students. We demonstrate its performance by testing it both on PlanetLab and a LAN with a realistic workload for a 1000 node graph.
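The core idea of the abstract, combining relationship edges from several sources into one directed, weighted multigraph, can be sketched as follows (class and method names are illustrative, not GeoS's actual API):

```python
from collections import defaultdict

class SocialMultigraph:
    """Directed, weighted multigraph: parallel edges are kept per data source."""
    def __init__(self):
        # edges[(src_user, dst_user)] -> list of (source_name, weight)
        self.edges = defaultdict(list)

    def add_edge(self, u, v, source, weight):
        self.edges[(u, v)].append((source, weight))

    def tie_strength(self, u, v):
        # One representative inference: total weight across all sources.
        return sum(w for _, w in self.edges[(u, v)])

g = SocialMultigraph()
g.add_edge("alice", "bob", "facebook", 1.0)     # declared friendship
g.add_edge("alice", "bob", "collocation", 0.5)  # observed co-presence
print(g.tie_strength("alice", "bob"))  # 1.5
```

Because the graph is directed, an inference over ("alice", "bob") says nothing about ("bob", "alice"), which is what lets declared and observed relationships coexist without being conflated.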
2

Tejankar, Vinayak Prabhakar. "Optimization of Data Propagation Algorithm for Conflict-Free Replicated Data Type-based Datastores in Geo-Distributed Edge Environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284683.

Abstract:
Replication primarily provides data availability by having multiple copies over different systems and is exploited to make distributed systems scalable in numbers and geographical areas. Placing a replica closer to the source of a request can also significantly reduce the time required to service the request, improving applications' performance. However, modifications done at a single copy need to be propagated to all the standing copies to maintain the data's consistency. Over the years, numerous strategies have been proposed for handling the tradeoff between consistency and availability, of which the majority provide either strong consistency or eventual consistency. These models do not provide sufficient compatibility for developing modern applications for geo-distributed (edge) environments. Conflict-Free Replicated Data Types (CRDT) provide a new model of consistency referred to as strong eventual consistency. In principle, CRDTs guarantee a conflict-free merge even when updates arrive out of order, using simple mathematical properties. Lasp is a coordination-free distributed programming model for building modern distributed applications using CRDTs. Lasp uses a gossip protocol for disseminating state changes to all replicas in the system. The current implementation of gossip in Lasp is agnostic to the application's behavior in propagating updates efficiently to critical replicas in the system. In this thesis, we introduce an application-specific feature to optimize the dissemination of updates in Lasp. The proposed algorithm propagates updates by catering to the different consistency requirements of the replicas in the system. The experimental results on a topology of 100 replicas found that the update latency at critical replicas with high consistency requirements is reduced by 40–50%, and the total bandwidth consumption in the system is reduced by 4–8%, without significant repercussions on other replicas in the system.
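The strong-eventual-consistency guarantee described above can be illustrated with the simplest CRDT, a grow-only counter, whose merge is a per-replica maximum and therefore commutes regardless of delivery order (a generic sketch, not Lasp's implementation):

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica, merge = pointwise max."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Conflict-free: max is commutative, associative, and idempotent.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)  # gossip may deliver states in any order...
b.merge(a)  # ...and the replicas still converge
print(a.value(), b.value())  # 5 5
```

The dissemination question the thesis optimizes is orthogonal to this merge: any gossip schedule eventually converges, so the algorithm is free to prioritize which replicas see an update first.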
3

Liu, Yimei. "Distributed geo-services based on Wireless GIS : a case study for post-quake rescue information system." Advisor: Liqiu Meng; reviewers: Thomas Wunderlich, Matthäus Schilcher, Liqiu Meng. München: Universitätsbibliothek der TU München, 2011. http://d-nb.info/1014330742/34.

4

Silva, Marcel Santos [UNESP]. "Sistemas de informações geográficas: elementos para o desenvolvimento de bibliotecas digitais geográficas distribuídas." Universidade Estadual Paulista (UNESP), 2006. http://hdl.handle.net/11449/93711.

Abstract:
The development of information and communication technologies applied to geographic information is growing considerably, making the rise of Geographic Information Systems more visible, mainly in government environments that seek to supply geographic information to an ever larger number of people. The aim of this work is to present an architecture with elements for the construction of a distributed geographic digital library, using the standards and concepts of Information Science together with geoprocessing. The concepts of digital libraries and the metadata standards for geographic information are presented, as well as the geo-ontologies that contribute to better organization and retrieval of geographic information. GIS and the theory of Information Science were used, focused especially on the development of a distributed geographic digital library. The proposal for the construction of a distributed geographic digital library is based on the principle of cooperation among systems and considers free access to geographic information and the interoperability enabled by the standardization of metadata and geo-ontologies. The proposed architecture meets the requirements of information representation, forms of communication, and a harvesting protocol for metadata and digital objects, thus enabling the sharing of geographic information collections distributed across several geographic digital libraries. The links between geoprocessing and Information Science are pointed out with regard to the structuring of geographic information environments that can be accessed over computer networks.
5

Qureshi, Asfandyar. "Power-Demand Routing in massive geo-distributed systems." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62430.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Includes bibliographical references (p. 165-171).
There is an increasing trend toward massive, geographically distributed systems. The largest Internet companies operate hundreds of thousands of servers in multiple geographic locations, and are growing at a fast clip. A single system's servers and data centers can consume many megawatts of electricity, as much as tens of thousands of US homes. Two important concerns have arisen: rising electric bills and growing carbon footprints. Our work develops a new traffic engineering technique that can be used to address both these areas of concern. We introduce Power-Demand Routing (PDR), a technique that redistributes traffic between replicas with the express purpose of spatially redistributing the system's power consumption, in order to reduce operating costs. Cost can be described in monetary terms or in terms of pollution. Within existing Internet services, each client request requires a meaningful amount of marginal energy at the server. Thus, by rerouting requests from a server at one geographic location to another, we can spatially shift the system's marginal power consumption at Internet speeds. We show how PDR can be used to reduce electric bills. We describe how to couple request routing policy to real-time price signals from wholesale electricity markets. In response to price differentials, PDR skews client load across a system's clusters and pushes server power demand into the least expensive regions. Our analysis quantifies the potential reduction in energy costs. We use simulations driven by empirical data and models: we collected a real-world request traffic workload in collaboration with Akamai; constructed data center energy models; and compiled a database of historical electricity market prices. We conclude that existing systems can use PDR to cut their annual electric bills by millions of dollars. We also show how PDR can be used to reduce carbon footprints. Not all joules are created equal: in power pools like the grid, the environmental impact per joule varies geographically and in time. We show how to construct carbon cost functions that can be used with PDR to dynamically push a system's power demand toward clean energy.
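The core of PDR, steering requests toward the cluster whose marginal energy is currently cheapest among those that still meet the latency constraint, can be sketched as a cost-weighted routing decision (the prices, cluster names, and SLA flags below are invented for illustration):

```python
def route_request(clusters, price_usd_per_mwh, latency_ok):
    """Send the request to the cheapest cluster that meets the latency SLA."""
    eligible = [c for c in clusters if latency_ok[c]]
    return min(eligible, key=lambda c: price_usd_per_mwh[c])

# Hypothetical wholesale prices and per-client SLA feasibility.
prices = {"virginia": 42.0, "oregon": 28.5, "dublin": 55.0}
within_sla = {"virginia": True, "oregon": True, "dublin": False}
print(route_request(prices, prices, within_sla))  # oregon
```

Swapping the price table for a carbon-intensity table turns the same decision rule into the carbon-cost variant the abstract describes.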
6

Bogdanov, Kirill. "Reducing Long Tail Latencies in Geo-Distributed Systems." Licentiate thesis, KTH, Network Systems Laboratory (NS Lab), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-194729.

Abstract:
Computing services are highly integrated into modern society. Millions of people rely on these services daily for communication, coordination, trading, and access to information. To meet high demand, many popular services are implemented and deployed as geo-distributed applications on top of third-party virtualized cloud providers. However, the nature of such deployment provides variable performance characteristics. To deliver high quality of service, such systems strive to adapt to ever-changing conditions by monitoring changes in state and making run-time decisions, such as choosing server peering, replica placement, and quorum selection. In this thesis, we seek to improve the quality of run-time decisions made by geo-distributed systems. We attempt to achieve this through: (1) a better understanding of the underlying deployment conditions, (2) systematic and thorough testing of the decision logic implemented in these systems, and (3) providing a clear view into the network and system states, which allows these services to make better-informed decisions. We performed a long-term cross-datacenter latency measurement of the Amazon EC2 cloud provider. We used this data to quantify the variability of network conditions and demonstrated its impact on the performance of systems deployed on top of this cloud provider. Next, we validate the decision logic used in popular storage systems by examining replica selection algorithms. We introduce GeoPerf, a tool that uses symbolic execution and lightweight modeling to perform systematic testing of replica selection algorithms. We applied GeoPerf to test two popular storage systems and found one bug in each. Then, using traceroute and one-way delay measurements across EC2, we demonstrated persistent correlation between network paths and network latency. We introduce EdgeVar, a tool that decouples routing-based and congestion-based changes in network latency.
By providing this additional information, we improved the quality of latency estimation, as well as increased the stability of network path selection. Finally, we introduce Tectonic, a tool that tracks an application’s requests and responses both at the user and kernel levels. In combination with EdgeVar, it provides a complete view of the delays associated with each processing stage of a request and response. Using Tectonic, we analyzed the impact of sharing CPUs in a virtualized environment and can infer the hypervisor’s scheduling policies. We argue for the importance of knowing these policies and propose to use them in applications’ decision making process.
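A replica-selection rule of the kind GeoPerf tests, pick the replica with the lowest observed tail latency, can be sketched as follows (a simplified stand-in with invented measurements, not the actual logic of any tested storage system):

```python
import math

def p99(samples):
    """Nearest-rank 99th percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = math.ceil(0.99 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

def select_replica(latency_history):
    """Pick the replica whose recent tail (p99) latency is lowest."""
    return min(latency_history, key=lambda r: p99(latency_history[r]))

# Hypothetical measurements: eu-west has a lower median but a long tail.
history = {
    "eu-west": [10, 12, 11, 90],
    "us-east": [35, 36, 34, 37],
}
print(select_replica(history))  # us-east
```

Note that ranking by median would choose eu-west; ranking by tail latency avoids the replica responsible for the long-tail requests the thesis targets. It is exactly corner cases in such comparison logic that symbolic-execution testing is suited to explore.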


7

Falgert, Marcus. "Geo-distributed application deployment assistance based on past routing information." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206970.

Abstract:
Cloud computing platforms allow users to deploy geographically distributed applications on servers around the world. Applications may be simple to deploy on these platforms, but it is up to the user and the application to decide which regions and servers to use for application placement. Furthermore, network conditions and routing between the geo-distributed servers change over time, which can lead to sub-optimal performance of applications deployed on such servers. A user could either employ a static deployment configuration of servers or attempt to use a more dynamic configuration. However, both have inherent limitations. A static configuration will be sub-optimal, as it will be unable to adapt to changing network conditions. A more dynamic approach, where an application could switch over or transition to a more suitable server, could be beneficial, but this can be very complex in practice. Furthermore, such a solution is more about adapting to change as it happens, not beforehand. This thesis investigates the possibility of forecasting impending routing changes between servers by leveraging messages generated by the Border Gateway Protocol (BGP) and past knowledge about routing changes. BGP routers can delay BGP updates due to factors such as the minimum route advertisement interval (MRAI). Thus, our proposed solution involves forwarding BGP updates downstream in the network before BGP routers process them. As routing between servers changes, so does the latency, meaning that the latency can then be predicted to some degree. This observation can be applied to detect when the latency to one server rises above or falls below that of another, which in turn facilitates selecting the servers with the lowest latency for application deployment.
The solution presented in this thesis can successfully predict routing changes between end-points in an enclosed environment and inform users ahead of time that the latency is about to change. The time gained by such predictions depends on factors such as the number of ASes between the end-points, the MRAI, and the update processing delay imposed on BGP routers. Time gains ranging from tens of milliseconds to over 2 minutes have been observed.
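The forecasting step, using a forwarded BGP update that announces a new AS path to estimate the latency shift before routers converge, can be sketched as a lookup against previously measured per-path delays (AS numbers and measurements below are invented for illustration):

```python
def predict_latency_change(current_path, advertised_path, path_latency_ms):
    """Estimate the latency shift implied by a not-yet-applied BGP update.

    current_path / advertised_path: AS paths as tuples of AS numbers
    path_latency_ms: one-way delay previously measured for each AS path
    """
    return path_latency_ms[advertised_path] - path_latency_ms[current_path]

# Hypothetical paths between two end-points and their past measurements:
measured = {
    (64500, 64510, 64520): 42.0,  # path currently in use
    (64500, 64530, 64520): 60.0,  # path named in the forwarded update
}
delta = predict_latency_change((64500, 64510, 64520),
                               (64500, 64530, 64520), measured)
print(delta)  # 18.0
```

A positive delta warns the application that latency is about to increase, and the head start comes from seeing the update before MRAI and processing delays let the routers act on it.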
8

Toumlilt, Ilyas. "Colony : a Hybrid Consistency System for Highly-Available Collaborative Edge Computing." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS447.

Abstract:
Immediate response, autonomy, and availability are brought to edge applications, such as gaming, cooperative engineering, or in-the-field information sharing, by distributing and replicating data at the edge. However, application developers and users demand the highest possible consistency guarantees, and specific support for group collaboration. To address this challenge, Colony guarantees Transactional Causal Plus Consistency (TCC+) globally, dovetailing with Snapshot Isolation within edge groups. To help with scalability, fault tolerance, and security, its logical communication topology is tree-like, with replicated roots in the core cloud, but with the flexibility to migrate a node or a group. Despite this hybrid approach, applications enjoy the same semantics everywhere in the topology. Our experiments show that local caching and peer groups improve throughput and response time significantly, that performance is not affected in offline mode, and that migration is seamless.
9

Vasilas, Dimitrios. "A flexible and decentralised approach to query processing for geo-distributed data systems." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS132.

Abstract:
This thesis studies the design of query processing systems across a diversity of geo-distributed settings. Optimising performance metrics such as response time, freshness, or operational cost involves design decisions, such as what derived state (e.g., indexes, materialised views, or caches) to maintain, and how to distribute and where to place the corresponding computation and state. These metrics are often in tension, and the trade-offs depend on the specific application and/or environment. This requires the ability to adapt the query engine's topology and architecture, and the placement of its components. This thesis makes the following contributions: (1) a flexible architecture for geo-distributed query engines, based on components connected in a bidirectional acyclic graph; (2) a common microservice abstraction and API for these components, the Query Processing Unit (QPU), which encapsulates a primitive query processing task; multiple QPU types exist and can be instantiated and composed into complex graphs; (3) a model for constructing modular query engine architectures as a distributed topology of QPUs, enabling flexible design and trade-offs between performance metrics; (4) Proteus, a QPU-based framework for constructing and deploying query engines; and (5) representative deployments of Proteus and their experimental evaluation.
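The QPU abstraction, primitive query-processing tasks composed into a graph, can be sketched as follows (the QPU types, records, and composition below are invented for illustration; Proteus' real API differs):

```python
class QPU:
    """Query Processing Unit: a primitive task wired to upstream QPUs."""
    def __init__(self, *upstream):
        self.upstream = upstream

    def query(self, predicate):
        raise NotImplementedError

class DataStoreQPU(QPU):
    """Leaf QPU serving records from one (geo-local) data store."""
    def __init__(self, records):
        super().__init__()
        self.records = records

    def query(self, predicate):
        return [r for r in self.records if predicate(r)]

class FederationQPU(QPU):
    """Inner QPU that merges answers from its upstream QPUs."""
    def query(self, predicate):
        return [r for q in self.upstream for r in q.query(predicate)]

# Compose a two-region topology under a single federation root.
eu = DataStoreQPU([{"id": 1, "region": "eu"}, {"id": 2, "region": "eu"}])
us = DataStoreQPU([{"id": 3, "region": "us"}])
root = FederationQPU(eu, us)
print(sorted(r["id"] for r in root.query(lambda r: True)))  # [1, 2, 3]
```

Because each unit exposes the same query interface, caches, indexes, or filters can be inserted anywhere in the graph without the client noticing, which is the flexibility the architecture is after.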
10

Silva, Marcel Santos. "Sistemas de informações geográficas: elementos para o desenvolvimento de bibliotecas digitais geográficas distribuídas." Marília: [s.n.], 2006. http://hdl.handle.net/11449/93711.

Abstract:
Advisor: Silvana Aparecida Borsetti Gregório Vidotti
Committee member: Plácida Leopoldina Ventura Amorim da Costa Santos
Committee member: Sérgio Antonio Rohm
Abstract: The development of information and communication technologies applied to geographic information is growing considerably, making the rise of Geographic Information Systems more visible, mainly in government environments that seek to supply geographic information to an ever larger number of people. The aim of this work is to present an architecture with elements for the construction of a distributed geographic digital library, using the standards and concepts of Information Science together with geoprocessing. The concepts of digital libraries and the metadata standards for geographic information are presented, as well as the geo-ontologies that contribute to better organization and retrieval of geographic information. GIS and the theory of Information Science were used, focused especially on the development of a distributed geographic digital library. The proposal for the construction of a distributed geographic digital library is based on the principle of cooperation among systems and considers free access to geographic information and the interoperability enabled by the standardization of metadata and geo-ontologies. The proposed architecture meets the requirements of information representation, forms of communication, and a harvesting protocol for metadata and digital objects, thus enabling the sharing of geographic information collections distributed across several geographic digital libraries. The links between geoprocessing and Information Science are pointed out with regard to the structuring of geographic information environments that can be accessed over computer networks.
Master's
11

Franca, Rezende Tuanir. "Leaderless state-machine replication : from fail-stop to Byzantine failures." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS016.

Abstract:
Modern distributed services are expected to be highly available, as our societies have grown increasingly dependent on them. The common way to achieve high availability is to replicate data across multiple service replicas. In this way, the service remains operational in case of failures, as clients can be relayed to other working replicas. In distributed systems, the classic technique to implement such fault-tolerant services is called State-Machine Replication (SMR), where a service is defined as a deterministic state machine and each replica keeps a local copy of the machine. To guarantee that the service remains consistent, replicas coordinate with each other and agree on the order of transitions to be applied to their copies of the state machine. The replication performed by modern Internet services spans several geographical locations (geo-replication). This allows for increased availability and low latency, since clients can communicate with the closest geographical replica. Due to their reliance on a leader replica, classical SMR protocols offer limited scalability and availability in this setting. To solve this problem, recent protocols instead follow a leaderless approach, in which each replica is able to make progress using a quorum of its peers. These new leaderless protocols are complex, and each one presents an ad-hoc approach to leaderlessness. The first contribution of this thesis is a framework that captures the essence of Leaderless State-Machine Replication (Leaderless SMR) and the formalization of some of its limits. Due to the increasingly sensitive nature of replicated services, tolerating only simple benign failures is no longer enough. Recent research is headed towards developing protocols that support arbitrary behavior of some replicas (Byzantine failures) and that also thrive in a geo-replicated environment.
Blockchains are an example of this new type of sensitive replicated service, and they have been the focus of a lot of research. Blockchains are powered by Byzantine replication protocols adapted to work over hundreds or even thousands of replicas. When the membership control over such replicas is open, that is, anyone can run a replica, we say the blockchain is permissionless. In the converse case, when the membership is controlled by a set of known entities such as companies, we say the blockchain is permissioned. When such Byzantine protocols follow the classic leader-driven approach, they suffer from scalability and availability issues, similarly to their non-Byzantine counterparts. In the second part of this thesis, we adapt our framework to support Byzantine failures and present the first framework for Byzantine Leaderless SMR. Furthermore, we show that, when properly instantiated, it makes it possible to sidestep the scalability problems of leader-driven Byzantine SMR protocols for permissioned blockchains.
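The abstract's core mechanism, a deterministic state machine to which every replica applies the same agreed-upon order of commands, can be illustrated with a minimal sketch (the counter service and command names here are hypothetical illustrations, not taken from the thesis):

```python
class ReplicatedCounter:
    """Deterministic state machine: identical command logs yield identical state."""

    def __init__(self):
        self.value = 0
        self.log = []

    def apply(self, command):
        # Commands must be deterministic; here a command is ("add", n).
        op, n = command
        if op == "add":
            self.value += n
        self.log.append(command)


# Once consensus (leader-driven or leaderless) fixes the command order,
# every replica that applies the log converges to the same state.
agreed_log = [("add", 3), ("add", 5), ("add", -2)]
replicas = [ReplicatedCounter() for _ in range(3)]
for r in replicas:
    for cmd in agreed_log:
        r.apply(cmd)

assert all(r.value == 6 for r in replicas)
```

The consensus step that produces `agreed_log` is exactly the part where leader-driven and leaderless protocols differ; the state-machine side stays the same.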
12

Darrous, Jad. "Scalable and Efficient Data Management in Distributed Clouds : Service Provisioning and Data Processing." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis focuses on scalable data management solutions to accelerate service provisioning and enable efficient execution of data-intensive applications in large-scale distributed clouds. Data-intensive applications are increasingly running on distributed infrastructures (multiple clusters). The two main reasons for this trend are that 1) moving computation to data sources can eliminate the latency of data transmission, and 2) storing data on a single site may not be feasible given the continuous increase in data size. On the one hand, most applications run on virtual clusters to provide isolated services, and require virtual machine images (VMIs) or container images to provision such services. Hence, it is important to enable fast provisioning of virtualization services to reduce the waiting time of newly launched services or applications. In contrast with previous work, in the first part of this thesis we worked on optimizing data retrieval and placement, considering challenging issues including the continuous increase in the number and size of VMIs and container images, and the limited bandwidth and heterogeneity of wide area network (WAN) connections. On the other hand, data-intensive applications rely on replication to provide dependable and fast services, but replication becomes expensive, and even infeasible, with the unprecedented growth of data size. The second part of this thesis provides one of the first studies on understanding and improving the performance of data-intensive applications when replacing replication with the storage-efficient erasure coding (EC) technique.
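The storage-cost argument for erasure coding in the abstract above comes down to simple arithmetic: n-way replication stores n bytes per byte of data, while a Reed-Solomon RS(k, m) scheme stores (k + m)/k. A small sketch of that comparison (the parameter choices are illustrative examples, not the thesis's configuration):

```python
def replication_overhead(copies):
    """Bytes stored per byte of user data under n-way replication."""
    return float(copies)


def erasure_overhead(k, m):
    """Bytes stored per byte of user data under Reed-Solomon RS(k, m):
    data is split into k fragments, m parity fragments are added, and
    any k of the k + m fragments suffice to reconstruct the data."""
    return (k + m) / k


# 3-way replication tolerates 2 lost copies at 3x storage cost...
assert replication_overhead(3) == 3.0
# ...while RS(6, 3) tolerates 3 lost fragments at only 1.5x.
assert erasure_overhead(6, 3) == 1.5
```

The trade-off the thesis studies is that EC's lower storage overhead comes at the price of reconstruction cost: reading a lost fragment requires fetching k other fragments instead of one replica.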
13

Chen, Yi-Chia, and 陳奕佳. "An approach based on matching theory to distributed deployment of NFV-based network services to geo-distributed edge computing systems." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/h7kasp.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
107
In an environment combining Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC), network service providers can deploy their NFV-based network services on edge servers maintained by edge service providers (ESPs) to serve end users. Nevertheless, determining which ESP to lease resources from, and how much to pay for the deployment, becomes an important issue for the network service providers. For the ESPs, determining which network services to serve, and how much to charge their providers, is equally important. We propose a two-layer matching-game mechanism for the deployment of network services to ESPs. Specifically, the upper layer models the bargaining process between network services and ESPs as a matching auction; the lower layer allocates resources to VNFs within an ESP using a one-to-many matching model. The proposed mechanism provides a weakly stable result; that is, no network service is preferred by an ESP over its current matching result, although a network service may still exist that could earn more profit by changing its current partner. We simulated the proposed mechanism and showed that it serves a higher average number of network services, and that both ESPs and network service providers obtain higher profits.
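The lower layer's one-to-many matching can be pictured with a deferred-acceptance sketch in the style of the hospitals/residents problem: services propose down their preference lists, and each ESP keeps the best proposals within its capacity. The services, ESPs, preference lists, and capacities below are invented for illustration and are not the mechanism proposed in the thesis:

```python
def deferred_acceptance(service_prefs, esp_prefs, capacity):
    """One-to-many stable matching sketch: services propose in preference
    order; each ESP retains its highest-ranked proposers up to capacity."""
    free = list(service_prefs)                 # services still proposing
    next_choice = {s: 0 for s in service_prefs}
    matched = {e: [] for e in esp_prefs}       # ESP -> accepted services

    while free:
        s = free.pop(0)
        prefs = service_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                           # s exhausted its list: unmatched
        e = prefs[next_choice[s]]
        next_choice[s] += 1
        matched[e].append(s)
        # The ESP keeps only its favourite services within capacity.
        matched[e].sort(key=esp_prefs[e].index)
        while len(matched[e]) > capacity[e]:
            free.append(matched[e].pop())      # rejected, proposes again
    return matched


services = {"s1": ["e1", "e2"], "s2": ["e1", "e2"], "s3": ["e1", "e2"]}
esps = {"e1": ["s2", "s1", "s3"], "e2": ["s1", "s3", "s2"]}
result = deferred_acceptance(services, esps, {"e1": 1, "e2": 2})
# e1 (capacity 1) keeps its favourite s2; s1 and s3 end up at e2.
assert result == {"e1": ["s2"], "e2": ["s1", "s3"]}
```

The abstract's "weak stability" caveat corresponds to the fact that, with profits layered on top of ordinal preferences, a service may still prefer to deviate even when no classical blocking pair exists.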
14

Fouto, Pedro Filipe Veiga. "A novel causally consistent replication protocol with partial geo-replication." Master's thesis, 2018. http://hdl.handle.net/10362/63048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Distributed storage systems are a fundamental component of large-scale Internet services. To keep up with the increasing expectations of users regarding availability and latency, the design of data storage systems has evolved to achieve these properties by exploiting techniques such as partial replication, geo-replication, and weaker consistency models. While systems with these characteristics exist, they usually do not provide all of these properties, or do so inefficiently, without taking full advantage of them. Additionally, weak consistency models, such as eventual consistency, put an excessively high burden on application programmers for writing correct applications; hence, multiple systems have moved towards providing additional consistency guarantees, such as implementing the causal (and causal+) consistency models. In this thesis we approach the existing challenges in designing a causally consistent replication protocol, with a focus on the use of geo-replication and partial data replication. To this end, we present a novel replication protocol capable of enriching an existing geo- and partially replicated datastore with the causal+ consistency model. In addition, this thesis also presents a concrete implementation of the proposed protocol over the popular Cassandra datastore system. This implementation is complemented with experimental results obtained in a realistic scenario, in which we compare our proposal with multiple configurations of the Cassandra datastore (without causal consistency guarantees) and with other existing alternatives. The results show that our proposed solution is able to achieve a balanced performance, with low data visibility delays and without significant performance penalties.
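A common way to implement the causal consistency guarantee discussed above is to tag each update with a vector clock and delay a remote update until all of its causal dependencies are locally visible. A minimal sketch of such a readiness check (a generic textbook construction, not the thesis's actual protocol):

```python
def can_apply(local_clock, update_clock, origin):
    """A remote update from `origin` is causally ready when it is the next
    update from origin and all its other dependencies are already applied.
    Clocks map replica id -> number of updates seen from that replica."""
    for replica, count in update_clock.items():
        if replica == origin:
            if local_clock.get(replica, 0) != count - 1:
                return False   # missing earlier updates from the origin
        elif local_clock.get(replica, 0) < count:
            return False       # missing a causal dependency from elsewhere
    return True


local = {"A": 2, "B": 1}
# The 3rd update from A, depending only on B's 1st update: ready to apply.
assert can_apply(local, {"A": 3, "B": 1}, "A")
# An update depending on B's 2nd update must be buffered until it arrives.
assert not can_apply(local, {"A": 3, "B": 2}, "A")
```

With partial replication the difficulty the thesis tackles is that a replica may not store the partitions its dependencies were written to, so dependency metadata must be tracked and checked across partitions.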
15

Fernandes, Flávio Duarte Pacheco. "LHView: Location Aware Hybrid Partial View." Master's thesis, 2017. http://hdl.handle.net/10362/66268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The rise of the Cloud creates enormous business opportunities for companies to provide global services, which requires the applications supporting those services to scale while minimizing maintenance costs, whether due to unnecessary allocation of resources or to excessive human supervision and administration. Solutions designed to support such systems have tackled fundamental challenges, from individual component failures to transient network partitions. A fundamental aspect that all scalable large systems have to deal with is the membership of the system, i.e., tracking the active components that compose it. Most systems rely on membership management protocols that operate at the application level, often exposing the interface of a logical overlay network, which should guarantee high scalability, efficiency, and robustness. Although these protocols are capable of repairing the overlay in the face of large numbers of individual component faults, when scaling to global settings (i.e., geo-distributed scenarios) this robustness is a double-edged sword, because it is extremely complex for a node in the system to distinguish between a set of simultaneous node failures and a (transient) network partition. Thus the occurrence of a network partition creates isolated subsets of nodes incapable of reconnecting even after recovery from the partition. This work addresses these challenges by proposing a novel datacenter-aware membership protocol that tolerates network partitions by applying existing overlay management techniques and classification techniques that may allow the system to efficiently cope with such events without compromising the remaining properties of the overlay network. Furthermore, we strive to achieve these goals with a solution that requires minimal human intervention.
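The partition-vs-failures ambiguity described above suggests a simple datacenter-aware heuristic: if every node of some datacenter becomes unreachable at once, a network partition is the more likely explanation than many independent crashes. A toy sketch of such a classifier (hypothetical illustration; not the classification technique proposed in the thesis):

```python
def classify_disconnection(unreachable, membership):
    """Heuristic: if an entire datacenter's nodes vanish together, suspect a
    (transient) network partition rather than independent node failures.
    `membership` maps node id -> datacenter id."""
    by_dc = {}
    for node, dc in membership.items():
        by_dc.setdefault(dc, set()).add(node)
    lost_dcs = sorted(dc for dc, nodes in by_dc.items() if nodes <= unreachable)
    if lost_dcs:
        return "suspected_partition", lost_dcs
    return "individual_failures", []


membership = {"n1": "dc1", "n2": "dc1", "n3": "dc2", "n4": "dc2"}
# Losing all of dc2 at once looks like a partition...
assert classify_disconnection({"n3", "n4"}, membership) == ("suspected_partition", ["dc2"])
# ...while losing a single node looks like an ordinary failure.
assert classify_disconnection({"n2"}, membership) == ("individual_failures", [])
```

Under a suspected partition, a membership protocol can retain the lost nodes as "quarantined" instead of evicting them, so the overlay can reconnect once the partition heals.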

To the bibliography