Doctoral dissertations on the topic "Scalability"


Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 academic doctoral dissertations on the topic "Scalability".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse doctoral dissertations from many different disciplines and compile accurate bibliographies.

1

Singh, Arjun. "The scalability of AspectJ". Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/32349.

Full text source
Abstract:
To assess the scalability of using AspectJ, we refactored concerns that crosscut over half of the plug-ins that comprise the Eclipse IDE. Eclipse is a suitable candidate for furthering other studies on AspectJ's scalability because the system has an additional modularization mechanism typical of large systems that introduces new complexities for defining advice and aspects. We evaluated quantitative and qualitative properties of our AO refactored version of Eclipse and compared them to their equivalents in the original, OO version of Eclipse. Quantitatively, we evaluated execution time and memory usage. Qualitatively, we evaluated changes in scattering, coupling, and abstractions. Our assessment of the scalability of AspectJ shows that using the language in Eclipse resulted in changes in performance and improvements in code similar to those seen in previous studies on the scalability of AspectJ. This leads us to conclude that AspectJ scales up to large systems. We also conclude that it may be necessary for the system to be aware of aspects in order to deal with defining advice that crosses system boundaries.
Faculty of Science
Department of Computer Science
Graduate
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Li, Yan. "Scalability of RAID systems". Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3382.

Full text source
Abstract:
RAID systems (Redundant Arrays of Inexpensive Disks) have dominated backend storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network bandwidth, physical interconnect failures, correlated disk failures and long disk reconstruction time. This thesis studies the scalability of RAID systems in terms of both performance and reliability through simulation, using a discrete event driven simulator for RAID systems (SIMRAID) developed as part of this project. SIMRAID incorporates two benchmark workload generators, based on the SPC-1 and Iometer benchmark specifications. Each component of SIMRAID is highly parameterised, enabling it to explore a large design space. To improve the simulation speed, SIMRAID develops a set of abstraction techniques to extract the behaviour of the interconnection protocol without losing accuracy. Finally, to meet the technology trend toward heterogeneous storage architectures, SIMRAID develops a framework that allows easy modelling of different types of device and interconnection technique. Simulation experiments were first carried out on performance aspects of scalability. They were designed to answer two questions: (1) given a number of disks, which factors affect back-end network bandwidth requirements; (2) given an interconnection network, how many disks can be connected to the system. The results show that the bandwidth requirement per disk is primarily determined by workload features and stripe unit size (a smaller stripe unit size has better scalability than a larger one), with cache size and RAID algorithm having very little effect on this value. The maximum number of disks is limited, as would be expected, by the back-end network bandwidth. Studies of reliability have led to three proposals to improve the reliability and scalability of RAID systems. Firstly, a novel data layout called PCDSDF is proposed. PCDSDF combines the advantages of orthogonal data layouts and parity declustering data layouts, so that it can not only survive multiple disk failures caused by physical interconnect failures or correlated disk failures, but also has a good degraded and rebuild performance. The generating process of PCDSDF is deterministic and time-efficient. The number of stripes per rotation (namely the number of stripes to achieve rebuild workload balance) is small. Analysis shows that the PCDSDF data layout can significantly improve the system reliability. Simulations performed on SIMRAID confirm the good performance of PCDSDF, which is comparable to other parity declustering data layouts, such as RELPR. Secondly, a system architecture and rebuilding mechanism have been designed, aimed at fast disk reconstruction. This architecture is based on parity declustering data layouts and a disk-oriented reconstruction algorithm. It uses stripe groups instead of stripes as the basic distribution unit so that it can make use of the sequential nature of the rebuilding workload. The design space of system factors such as parity declustering ratio, chunk size, private buffer size of surviving disks and free buffer size are explored to provide guidelines for storage system design.
Thirdly, an efficient distributed hot spare allocation and assignment algorithm for general parity declustering data layouts has been developed. This algorithm avoids conflict problems in the process of assigning distributed spare space for the units on the failed disk. Simulation results show that it effectively solves the write bottleneck problem and, at the same time, there is only a small increase in the average response time to user requests.
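As background for the parity-based layouts discussed in this abstract, the sketch below shows plain RAID-5-style XOR parity and single-block reconstruction; it is a generic illustration, not code from the thesis or from SIMRAID.

```python
from functools import reduce

def parity(blocks: list) -> bytes:
    """XOR parity over equally sized data blocks, as used in RAID-5-style stripes."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # data blocks on three disks
p = parity(stripe)                                  # parity block on a fourth disk

# If any single block is lost, XOR-ing the survivors with the parity recovers it.
lost = stripe[1]
recovered = parity([stripe[0], stripe[2], p])
assert recovered == lost
```

Parity declustering layouts such as PCDSDF distribute these parity groups across many more disks than a stripe contains, which is what spreads the rebuild workload after a failure.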
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Duong, Tuyet. "BLOCKCHAIN SCALABILITY AND SECURITY". VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5559.

Full text source
Abstract:
Cryptocurrencies like Bitcoin have proven to be a phenomenal success. The underlying techniques hold huge promise to change the future of financial transactions, and eventually the way people and companies compute, collaborate, and interact. At the same time, the current Bitcoin-like proof-of-work based blockchain systems are facing many challenges. In more detail, a huge amount of energy/electricity is needed for maintaining the Bitcoin blockchain. In addition, their security holds only if the majority of the computing power is under the control of honest players. However, this assumption has been seriously challenged recently and Bitcoin-like systems will fail when this assumption is broken. This research proposes novel blockchain designs to address the challenges. We first propose a novel blockchain protocol, called 2-hop blockchain, by combining proof-of-work and proof-of-stake mechanisms. This way, even if the adversary controls more than 50% of the computing power, the honest players still have the chance to defend the blockchain via honest stake. Then we revise and implement the design to obtain a practical cryptocurrency system called Twinscoin. In more detail, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide an analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically. We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application, leaving the rest of the codebase intact.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Ben, Alaya Mahdi. "Towards interoperability, self-management, and scalability for machine-to-machine systems". Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0052/document.

Full text source
Abstract:
Machine-to-Machine (M2M) is one of the main features of the Internet of Things (IoT). It is a phenomenon that has been proceeding quietly in the background, and it is coming to the surface, where an explosion of usage scenarios in businesses will happen. Sensors, actuators, tags, vehicles, and intelligent things all have the ability to communicate. The number of M2M connections is continuously increasing, and it has been predicted to see billions of machines interconnected in the near future. M2M applications provide advantages in various domains from smart cities, factories of the future, connected cars, home automation, e-health to precision agriculture. This fast-growing ecosystem is leading M2M towards a promising future. However, M2M market expansion opportunities are not straightforward. A set of challenges should be overcome to enable M2M mass-scale deployment across various industries including interoperability, complexity, and scalability issues. Currently, the M2M market is suffering from a high vertical fragmentation affecting the majority of business sectors. In fact, various vendor-specific M2M solutions have been designed independently for specific applications, which led to serious interoperability issues. To address this challenge, we designed, implemented, and experimented with the OM2M platform offering a flexible and extensible operational architecture for M2M interoperability compliant with the SmartM2M standard. To support constrained environments, we proposed an efficient naming convention relying on a non-hierarchical resource structure to reduce the payload size. To reduce the semantic gap between applications and machines, we proposed the IoT-O ontology for effective semantic interoperability. IoT-O consists of five main parts, which are sensor, actuator, observation, actuation and service models, and aims to quickly converge to a common IoT vocabulary. An interoperable M2M service platform enables one to interconnect heterogeneous devices that are widely distributed and frequently evolving according to changes in their environment. Keeping M2M systems alive is costly in terms of time and money. To address this challenge, we designed, implemented, and integrated the FRAMESELF framework to retrofit self-management capabilities in M2M systems based on the autonomic computing paradigm. Extending the MAPE-K reference architecture model, FRAMESELF enables one to dynamically adapt the OM2M system behavior according to high-level policies as the environment changes. We defined a set of semantic rules for reasoning about the IoT-O ontology as a knowledge model. Our goal is to enable automatic discovery of machines and applications through dynamic reconfiguration of resource architectures. Interoperability and self-management pave the way to mass-scale deployment of M2M devices. However, current M2M systems rely on the current internet infrastructure, which was never designed to address such requirements, thus raising new requirements in terms of scalability. To address this challenge, we designed, simulated and validated the OSCL overlay approach, a new M2M meshed network topology as an alternative to the current centralized approach. OSCL relies on the Named Data Networking (NDN) technique and supports multi-hop communication and distributed caching to optimize networking and enhance data dissemination. We developed the OSCLsim simulator to validate the proposed approach.
Finally, a theoretical model based on random graphs is formulated to describe the evolution and robustness of the proposed system.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Jenefeldt, Andreas, and Jakobsson Erik Foogel. "Scalability in Startups : A Case Study of How Technological Startup Companies Can Enhance Scalability". Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168150.

Full text source
Abstract:
Startups and new businesses are important for the development of technology, the economy, and the society as a whole. To be able to contribute towards this cause, startups need to be able to survive and grow. It is therefore important for startups to understand how they can scale up their business. This led to the purpose of this study: to determine success factors for technological startup companies to increase their scalability. Five areas were identified to have an impact on the scalability of a business, namely: partnerships, cloud computing, modularity, process automation and business model scalability. Within these areas, several subareas were found, which were certain areas of interest within the theory. Together, these subareas helped answer how companies can work with scalability in each area. These areas, and their subareas, went into an analytical model that formed the structure of the empirical and analytical parts of the study. The study is a multicase study, consisting of 15 B2B companies, of varying size and maturity, who all offered software as a part of their solutions. The study concludes that there are six important factors for succeeding with scalability. An important factor to succeed with scalability is to adopt partnerships, since this will allow for outsourcing, and give access to resources, markets and customers. It is also concluded that cloud computing is a very scalable delivery method, but that it requires certain success factors, such as working with partners, having a customer focus, having the right knowledge internally, and having a standardized product. Further, modularity can enable companies to meet differing customer needs since it increases flexibility, can expand the offer, and make sales easier. The study concludes that process automation increases the efficiency in the company, and can be done through automating a number of processes. Focusing both internally and externally is another important factor for success, by allowing companies to develop a scalable product that is demanded by customers. Lastly, a scalable business model is found to be the final objective, and it is important to work with the other areas to get there, something that includes trial and error to find what works best for each individual company. The six important factors formed the basis for the recommendations. The study recommends that startups utilize partnerships and process automation. Startups should also be aware of, and work with, the success factors of cloud computing, use modularity when selling to markets with different customer needs, automate other processes before automating sales, keep customer focus when developing the product, and work actively to become more scalable.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Krishna, Chaitanya Konduru. "Scalability Drivers in Requirements Engineering". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13480.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Mir, Taheri Seyed M. "Scalability of communicators in MPI". Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/33128.

Full text source
Abstract:
This thesis offers a novel framework for representing groups and communicators in Message Passing Interface (MPI) middleware. MPI is a widely used paradigm in a cluster environment that supports communication between the nodes. In our framework, we have implemented and evaluated scalable techniques for groups and communicators in MPI. We have tested this framework using FG-MPI, a fine-grain version of MPI that scales to millions of MPI processes. Groups in MPI are the primary means for creating communicators. A group map is the underlying structure that stores participating processes in the communication. We introduce a framework for concise representations of the group map. This framework is based on the observation that a map can be decomposed into a set and a permutation. This decomposition allows us to use a compact set representation for the cases where a specific mapping is not required, i.e. lists with monotonically increasing order. In other cases, the representation adds a permutation as well. A variety of set compression techniques has been used. Furthermore, the framework is open to integration of new representations. One advantage of such decomposition is the ability to implicitly represent a set with set representations such as BDD. BDD and similar representations are well-suited for the types of operations used in construction of communicators. In addition to set representations for unordered maps, we incorporated Wavelet Trees on Runs. This library is designed to represent permutations. We have also included general compression techniques in the framework such as BWT. This allows some degree of compression in memory-constrained environments where there is no discernible pattern in the group structure. We have investigated time and space trade-offs among the representations to develop strategies available to the framework. The strategies tune the framework based on the user's requirements. The first strategy optimizes the framework to be fast and is called the time strategy. The second strategy optimizes the framework in regard to space. The final strategy is a hybrid of both and tries to strike a reasonable trade-off between time and space. These strategies let the framework accommodate a wider range of applications and users.
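To make the set-plus-permutation decomposition concrete, here is a small illustrative sketch (hypothetical Python, not the FG-MPI implementation): the sorted member set can be stored compactly, and the permutation is kept only when the map's ordering differs from the sorted order.

```python
def decompose_group_map(group_map):
    """Split a communicator's rank map into a set part and a permutation part.

    The sorted member set can be stored compactly (e.g. as ranges or a bitmap);
    the permutation is only kept when the ordering differs from sorted order.
    """
    members = sorted(group_map)                       # the "set" component
    position = {rank: i for i, rank in enumerate(members)}
    permutation = [position[rank] for rank in group_map]
    if permutation == list(range(len(group_map))):
        permutation = None                            # monotonic map: the set alone suffices
    return members, permutation

# A map with a specific ordering keeps its permutation ...
print(decompose_group_map([8, 2, 4]))   # ([2, 4, 8], [2, 0, 1])
# ... while a monotonically increasing map needs only the set representation.
print(decompose_group_map([2, 4, 8]))   # ([2, 4, 8], None)
```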
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Hao, Fang. "Scalability techniques in QoS networks". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/9175.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Wen, Yang Ph D. Massachusetts Institute of Technology. "Scalability of dynamic traffic assignment". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/47739.

Full text source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2009.
Includes bibliographical references (p. 163-174).
This research develops a systematic approach to analyze the computational performance of Dynamic Traffic Assignment (DTA) models and provides solution techniques to improve their scalability for on-line applications for large-scale networks. DTA models for real-time use provide short-term predictions of network status and generate route guidance for travelers. The computational performance of such systems is a critical concern. Existing methodologies, which have limited capabilities for online large-scale applications, use single-processor configurations that are less scalable, and rely primarily on trade-offs that sacrifice accuracy for improved computational efficiency. In the proposed scalable methodology, algorithmic analyses are first used to identify the system bottlenecks for large-scale problems. Our analyses show that the computation time of DTA systems for a given time interval depends largely on a small set of parameters. Important parameters include the number of origin-destination (OD) pairs, the number of sensors, the number of vehicles, the size of the network, and the number of time-steps used by the simulator. Then scalable approaches are developed to solve the bottlenecks. A constrained generalized least-squares solution enabling efficient use of the sparse-matrix property is applied to the dynamic OD estimation, replacing the Kalman-Filter solution or other full-matrix algorithms. Parallel simulation with an adaptive network decomposition framework is proposed to achieve better load-balancing and improved efficiency. A synchronization-feedback mechanism is designed to ensure the consistency of traffic dynamics across processors while keeping communication overheads minimal. The proposed methodology is implemented in DynaMIT, a state-of-the-art DTA system. Profiling studies are used to validate the algorithmic analysis of the system bottlenecks.
The new system is evaluated on two real-world networks under various scenarios. Empirical results of the case studies show that the proposed OD estimation algorithm is insensitive to an increase in the number of OD pairs or sensors, and the computation time is reduced from minutes to a few seconds. The parallel simulation is found to maintain accurate output as compared to the sequential simulation, and with adaptive load-balancing, it considerably speeds up the network models even under non-recurrent incident scenarios. The results demonstrate the practical nature of the methodology and its scalability to large-scale real-world problems.
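For context on the technique named above, a generic constrained generalized least-squares (GLS) formulation of dynamic OD estimation can be written as follows; the notation here is assumed for illustration and may differ from the exact formulation used in the thesis.

```latex
\hat{x} \;=\; \arg\min_{x \ge 0} \; (y - A x)^{\top} W^{-1} (y - A x) \;+\; (x - x_{0})^{\top} V^{-1} (x - x_{0})
```

Here x collects the unknown OD flows, y the sensor counts, A the assignment matrix mapping flows onto counts, x_0 a prior OD estimate, and W and V error covariance matrices; A is typically very sparse, which is the property a scalable solver can exploit instead of carrying the full matrices of a Kalman filter.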
by Yang Wen.
Ph.D.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Persson, Jonna. "SCALABILITY OF JAVASCRIPT LIBRARIES FOR DATA VISUALIZATION". Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19994.

Full text source
Abstract:
Visualization is an important tool for making data understandable. Visualization can be used for many different purposes, such as charts on the web used to visualize a dataset. Render time is important for websites since slow response times can cause users to leave the site. When creating a website with fast render times in mind, the selection of JavaScript library may be crucial. This work aims to determine if dataset size and/or chart type affects the render time of charts created by different JavaScript libraries. The comparison is done by a literature search to identify suitable libraries, followed by an experiment in which 40 websites are created to compare the performance of selected JavaScript libraries for rendering selected chart types. The results show that while both dataset size and chart type affect the render time in most cases, the libraries scale differently depending on the dataset size.
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Lifhjelm, Tobias. "A scalability evaluation on CockroachDB". Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184587.

Full text source
Abstract:
Databases are a cornerstone in data storage since they store and organize large amounts of data while allowing users to access specific parts of data easily. Databases must however adapt to an increasing number of users without negatively affecting the end-users. CockroachDB (CRDB) is a distributed SQL database that combines the consistency associated with relational database management systems with the scalability to handle more user requests simultaneously while still being consistent. This paper presents a study that evaluates the scalability properties of CRDB by measuring how latency is affected by the addition of more nodes to a CRDB cluster. The findings show that the latency can decrease with the addition of nodes to a cluster. However, there are cases when more nodes increase the latency.
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Mathew, Ajit. "Multicore Scalability Through Asynchronous Work". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/104116.

Full text source
Abstract:
With the end of Moore's Law, computer architects have turned to multicore architecture to provide high performance. Unfortunately, to achieve higher performance, multicores require programs to be parallelized, which is an untamed problem. Amdahl's law states that the maximum theoretical speedup of a program is dictated by the size of the non-parallelizable section of the program. Hence, to achieve higher performance, programmers need to reduce the size of sequential code in the program. This thesis explores asynchronous work as a means to reduce the sequential portions of a program. Using asynchronous work, a programmer can remove tasks which do not affect data consistency from the critical path and perform them using a background thread. Using this idea, the thesis introduces two systems. First, a synchronization mechanism, Multi-Version Read-Log-Update (MV-RLU), which extends Read-Log-Update (RLU) through multi-versioning. At the core of the MV-RLU design is a concurrent garbage collection algorithm which reclaims obsolete versions asynchronously, reducing blocking of threads. Second, a concurrent and highly scalable index structure called Hydralist for multi-core. The key idea behind the design of Hydralist is that an index structure can be divided into two components (a search layer and a data layer), and updates to the data layer can be done synchronously while updates to the search layer are propagated asynchronously using background threads.
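The shared pattern in both systems, moving non-critical work off the critical path to a background thread, can be sketched generically as below. This is an illustrative Python sketch of the idea only, not the MV-RLU or Hydralist code (which target multicore C/C++ environments), and synchronization details are deliberately omitted.

```python
import queue
import threading
from collections import defaultdict

versions = defaultdict(list)      # key -> list of versions, newest last
reclaim_queue: queue.Queue = queue.Queue()

def reclaimer():
    """Background thread: reclaim obsolete versions off the critical path."""
    while True:
        key = reclaim_queue.get()
        if key is None:           # shutdown signal
            break
        del versions[key][:-1]    # keep only the newest version
        reclaim_queue.task_done()

threading.Thread(target=reclaimer, daemon=True).start()

def update(key, value):
    """Critical path stays short: publish the new version, defer the cleanup."""
    versions[key].append(value)
    reclaim_queue.put(key)        # garbage collection happens asynchronously
```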
Master of Science
Up until the mid-2000s, Moore's law predicted that CPU performance doubled every two years. This is because improvements in transistor technology allowed smaller transistors which can switch at higher frequencies, leading to faster CPU clocks. But faster clocks lead to higher heat dissipation, and as chips reached their thermal limits, computer architects could no longer increase clock speeds. Hence they moved to multicore architecture, wherein a single die contains multiple CPUs, to allow higher performance. Now programmers are required to parallelize their code to take advantage of all the CPUs in a chip, which is a non-trivial problem. The theoretical speedup achieved by a program on multicore architecture is dictated by Amdahl's law, which describes the non-parallelizable code in a program as the limiting factor for speedup. For example, a program with 99% parallelizable code can achieve a speedup of 20 whereas a program with 50% parallelizable code can only achieve a speedup of 2. Therefore, to achieve high speedup, programmers need to reduce the size of the serial section in their program. One way to reduce the sequential section in a program is to remove non-critical tasks from the sequential section and perform them asynchronously using a background thread. This thesis explores this technique in two systems. First, a synchronization mechanism which is used to coordinate access to shared resources, called Multi-Version Read-Log-Update (MV-RLU). MV-RLU achieves high performance by removing garbage collection from the critical path and performing it asynchronously using a background thread. Second, an index structure, Hydralist, which is based on the insight that an index structure can be decomposed into two components, a search layer and a data layer, and decouples updates to the two layers, which allows higher performance. Updates to the data layer are done synchronously while updates to the search layer are done asynchronously using background threads. Evaluation shows that both systems perform better than state-of-the-art competitors in a variety of workloads.
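The speedup figures quoted above follow from Amdahl's law, speedup(p, n) = 1 / ((1 - p) + p / n) for a parallelizable fraction p on n cores. A quick illustrative check (the 99% case assumes roughly 25 cores; its asymptotic cap is 100x):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup of a program whose fraction p is parallelizable, on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# 50% parallelizable: capped at 2x no matter how many cores are added.
print(round(amdahl_speedup(0.50, 10_000), 2))   # ~2.0
# 99% parallelizable: roughly 20x on ~25 cores; the limit as n grows is 100x.
print(round(amdahl_speedup(0.99, 25), 1))       # ~20.2
```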
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Monahan, Melissa A. "Scalability study for robotic hand platform /". Online version of thesis, 2010. http://hdl.handle.net/1850/12225.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Wittie, Mike P. "Towards Sustainable Scalability of Communication Networks". UNIVERSITY OF CALIFORNIA, SANTA BARBARA, 2012. http://pqdtopen.proquest.com/#viewpdf?dispub=3482054.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Miller, John. "Distributed virtual environment scalability and security". Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/241109.

Full text source
Abstract:
Distributed virtual environments (DVEs) have been an active area of research and engineering for more than 20 years. The most widely deployed DVEs are network games such as Quake, Halo, and World of Warcraft (WoW), with millions of users and billions of dollars in annual revenue. Deployed DVEs remain expensive centralized implementations despite significant research outlining ways to distribute DVE workloads. This dissertation shows previous DVE research evaluations are inconsistent with deployed DVE needs. Assumptions about avatar movement and proximity - fundamental scale factors - do not match WoW's workload, and likely the workload of other deployed DVEs. Alternate workload models are explored and preliminary conclusions presented. Using realistic workloads it is shown that a fully decentralized DVE cannot be deployed to today's consumers, regardless of its overhead. Residential broadband speeds are improving, and this limitation will eventually disappear. When it does, appropriate security mechanisms will be a fundamental requirement for technology adoption. A trusted auditing system ('Carbon') is presented which has good security, scalability, and resource characteristics for decentralized DVEs. When performing exhaustive auditing, Carbon adds 27% network overhead to a decentralized DVE with a WoW-like workload. This resource consumption can be reduced significantly, depending upon the DVE's risk tolerance. Finally, the Pairwise Random Protocol (PRP) is described. PRP enables adversaries to fairly resolve probabilistic activities, an ability missing from most decentralized DVE security proposals. Thus, this dissertation's contribution is to address two of the obstacles for deploying research on decentralized DVE architectures. First, the lack of evidence that research results apply to existing DVEs. Second, the lack of security systems combining appropriate security guarantees with acceptable overhead.
Styles: APA, Harvard, Vancouver, ISO, etc.
16

VIKSTÉN, HENRIK, and VIKTOR MATTSSON. "Performance and Scalability of Sudoku Solvers". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134823.

Full text source
Abstract:
This paper aims to clarify the differences in algorithm design with the goal of solving Sudoku. A few algorithms were chosen that are suitable to the problem, yet different from one another. The aim is to take an everyday puzzle, utilize common computer science algorithms, and learn more about them; to get relevant data on how they perform in different situations, how easily they can be modified and used in larger Sudokus, and how their performance scales when the puzzle grows larger. From the results, Dancing Links was the fastest and scaled best of the algorithms tested, while Brute-force and Simulated annealing struggled to keep consistent results.
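As a point of reference for the brute-force approach compared in the thesis, a minimal backtracking Sudoku solver can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
def valid(grid, r, c, v):
    """Check row, column and 3x3 box constraints for placing v at (r, c)."""
    if v in grid[r] or any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[i][j] != v for i in range(br, br + 3) for j in range(bc, bc + 3))

def solve(grid):
    """Brute-force backtracking over a 9x9 grid of ints; 0 marks an empty cell."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0      # undo and try the next value
                return False                # no value fits: backtrack
    return True                             # no empty cells left: solved
```

Dancing Links (Algorithm X on an exact-cover encoding) and simulated annealing explore the same search space in very different ways, which is what drives the scaling differences reported above.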
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Jogalekar, Prasad P. "Scalability analysis framework for distributed systems". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0033/NQ27014.pdf.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Plain, Simon E. M. "Bit rate scalability in audio coding". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0034/MQ64243.pdf.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Liatsos, Vassilios. "Scalability in planning with limited resources". Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395939.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Lilley, Jeremy (Jeremy Joseph) 1977. "Scalability in an International Naming System". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86540.

Full text source
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 83-85).
by Jeremy Lilley.
M.Eng.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Boulgakov, Alexandre. "Improving scalability of exploratory model checking". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:76acb8bf-52e7-4078-ab4f-65f3ea07ba3d.

Full text source
Abstract:
As software and hardware systems grow more complex and we begin to rely more on their correctness and reliability, it becomes exceedingly important to formally verify certain properties of these systems. If done naïvely, verifying a system can easily require exponentially more work than running it, in order to account for all possible executions. However, there are often symmetries or other properties of a system that can be exploited to reduce the amount of necessary work. In this thesis, we present a number of approaches that do this in the context of the CSP model checker FDR. CSP is named for Communicating Sequential Processes, or parallel combinations of state machines with synchronised communications. In the FDR model, the component processes are typically converted to explicit state machines while their parallel combination is evaluated lazily during model checking. Our contributions are motivated by this model but applicable to other models as well. We first address the scalability of the component machines by proposing a lazy compiler for a subset of CSPM selected to model parameterised state machines. This is a typical case where the state space explosion can make model checking impractical, since the size of the state space is exponential in the number and size of the parameters. A lazy approach to evaluating these systems allows only the reachable subset of the state space to be explored. As an example, in studying security protocols, it is common to model an intruder parameterised by knowledge of each of a list of facts; even a relatively small 100 facts results in an intractable 2^100 states, but the rest of the system can ensure that only a small number of these states are reachable. Next, we address the scalability of the overall combination by presenting novel algorithms for bisimulation reduction with respect to strong bisimulation, divergence-respecting delay bisimulation, and divergence-respecting weak bisimulation. Since a parallel composition is related to the Cartesian product of its components, performing a relatively time-consuming bisimulation reduction on the components can reduce its size significantly; an efficient bisimulation algorithm is therefore very desirable. This thesis is motivated by practical implementations, and we discuss an implementation of each of the proposed algorithms in FDR. We thoroughly evaluate their performance and demonstrate their effectiveness.
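The lazy-evaluation idea above, building only the reachable part of a parameterised state space, can be illustrated with a generic on-the-fly reachability sketch (hypothetical code, unrelated to FDR's actual implementation):

```python
from collections import deque

def reachable_states(initial, successors):
    """Breadth-first exploration that generates successor states on demand,
    so only the reachable fraction of a (possibly huge) state space is built."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# An intruder parameterised by 100 boolean facts has 2**100 states in principle,
# but if the rest of the system only ever reveals facts 0 and 1, exploration
# touches just a handful of them.
def successors(known):               # `known` is a frozenset of learned fact indices
    return [known | {f} for f in (0, 1) if f not in known]

print(len(reachable_states(frozenset(), successors)))   # 4, not 2**100
```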
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Singh, Hermanpreet. "Controlling Scalability in Distributed Virtual Environments". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/20372.

Full text source
Abstract:
A Distributed Virtual Environment (DVE) system provides a shared virtual environment where physically separated users can interact and collaborate over a computer network. More simultaneous DVE users could result in intolerable system performance degradation. We address the three major challenges to improve DVE scalability: effective DVE system performance measurement, understanding the controlling factors of system performance/quality and determining the consequences of DVE system changes. We propose a DVE Scalability Engineering (DSE) process that addresses these three major challenges for DVE design. DSE allows us to identify, evaluate, and leverage trade-offs among DVE resources, the DVE software, and the virtual environment. DSE has three stages. First, we show how to simulate different numbers and types of users on DVE resources. Collected user study data is used to identify representative user types. Second, we describe a modeling method to discover the major trade-offs between quality of service and DVE resource usage. The method makes use of a new instrumentation tool called ppt. ppt collects atomic blocks of developer-selected instrumentation at high rates and saves it for offline analysis. Finally, we integrate our load simulation and modeling method into a single process to explore the effects of changes in DVE resources. We use the simple Asteroids DVE as a minimal case study to describe the DSE process. The larger and commercial Torque and Quake III DVE systems provide realistic case studies and demonstrate DSE usage. The Torque case study shows the impact of many users on a DVE system. We apply the DSE process to significantly enhance the Quality of Experience given the available DVE resources. The Quake III case study shows how to identify the DVE network needs and evaluate network characteristics when using a mobile phone platform. We analyze the trade-offs between power consumption and quality of service. The case studies demonstrate the applicability of DSE for discovering and leveraging tradeoffs between Quality of Experience and DVE resource usage. Each of the three stages can be used individually to improve DVE performance. The DSE process enables fast and effective DVE performance improvement.
Ph. D.
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Venkatachalam, Logambigai. "Scalability of Stepping Stones and Pathways". Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32326.

Full text source
Abstract:
Information Retrieval (IR) plays a key role in serving large communities of users who are in need of relevant answers for their search queries. IR encompasses various search models to address different requirements and has introduced a variety of supporting tools to improve effectiveness and efficiency. 'Search' is the key focus of IR. The classic search methodology takes an input query, processes it, and returns the result as a ranked list of documents. However, this approach is not the most effective method to support the task of finding document associations (relationships between concepts or queries) both for direct or indirect relationships. The Stepping Stones and Pathways (SSP) retrieval methodology supports retrieval of ranked chains of documents that support valid relationships between any two given concepts. SSP has many potential practical and research applications, which are in need of a tool to find connections between two concepts. The early SSP 'proof-of-concept' implementation could handle only 6000 documents. However, commercial search applications will have to deal with millions of documents. Hence, addressing the scalability limitation becomes extremely important in the current SSP implementation in order to overcome the limitations on handling large datasets. Research on various commercial search applications and their scalability indicates that the Lucene search tool kit is widely used due to its support for scalability, performance, and extensibility features. Many web-based and desktop applications have used this search tool kit to great success, including Wikipedia search, job search sites, digital libraries, e-commerce sites, and the Eclipse Integrated Development Environment (IDE). The goal of this research is to re-implement SSP in a scalable way, so that it can work for larger datasets and also can be deployed commercially. This work explains the approach adopted for re-implementation focusing on scalable indexing, searching components, new ways to process citations (references), a new approach for query expansion, document clustering, and document similarity calculation. The experiments performed to test factors such as runtime and storage proved that the system can be scaled up to handle up to millions of documents.
Master of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Clark, Jonathan. "Understanding scalability in distributed ledger technology". Master's thesis, Faculty of Commerce, 2021. http://hdl.handle.net/11427/32578.

Full text source
Abstract:
Distributed ledger technology (DLT) stands to benefit industries such as financial services with transparency and censorship resistance. DLT systems need to be scalable to handle mass user adoption. Mass user adoption is required to demonstrate the true value of DLT. This dissertation first analyses scalability in Ethereum and EOS. Currently, Ethereum 1.0 uses proof of work (PoW) and handles only 14 transactions per second (tps) compared to Visa's peak 47 000 tps. Ethereum 2.0, known as Serenity, introduces sharding, proof of stake (Casper), plasma and state channels in an effort to scale the system. EOS uses a delegated proof of stake (DPoS) protocol, where 21 super-nodes, termed 'block producers' (BPs), facilitate consensus, bringing about significant scalability improvements (4000 tps). The trade-off is decentralisation. EOS is not sufficiently decentralised because the BPs wield significant power, but are not diverse. This dissertation conducts an empirical analysis using unsupervised machine learning to show that there is a high probability that collusion is occurring between certain BPs. It then suggests possible protocol alterations such as inverse vote weighting that could curb adverse voting behaviour in DPoS. It further analyses whether universities are suitable BPs before mapping out the required steps for universities to become block producers (leading to improved decentralisation in EOS).
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Jogalekar, Prasad P. (Prasad Prabhakar). "Scalability analysis framework for distributed systems". Dissertation, Department of Systems and Computer Engineering, Carleton University, Ottawa, 1997.

Find full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Alpert, Cirrus, Michaela Turkowski and Tahiya Tasneem. "Scalability solutions for automated textile sorting : a case study on how dynamic capabilities can overcome scalability challenges". Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-26373.

Full text source
Abstract:
In light of the negative social and environmental impacts of the textile industry, a paradigm shift towards a more circular economy is inevitable. Automated textile sorting embodies a crucial but missing link to connect forward and reverse supply chains for circular economy, however scalability challenges exist. Therefore, the study explores how dynamic capabilities can overcome scalability challenges specific to automated textile sorting pilots in Northwestern Europe to create commercially viable solutions. A single case study using an abductive approach guided by the dynamic capabilities view explores automated textile sorting pilots' approaches to dynamic capability microfoundations. Primary data include semi-structured interviews, which are complemented by secondary data documents, and both were analysed qualitatively via thematic analysis. The data reveal that known scalability challenges remain and new scalability challenges related to market disruptions exist, such as COVID-19. Scalability challenges are overcome through novel approaches to the microfoundations undergirding dynamic capabilities. These are found to take place in a continuous, overlapping process, and collaboration is found across all dynamic capabilities. As collaboration plays a prominent role, it should be integrated in approaches to dynamic capabilities. This study also adds to the literature on circular economy in the textile industry by confirming that known scalability challenges for automated textile sorting pilots remain, and new scalability challenges are developing in terms of market disruptions. Actors in the automated textile sorting supply chain may use these findings to support efforts to scale up automated textile sorting. For textile industry brands and recyclers, the findings can assess their readiness to participate in the automated textile sorting supply chain and support the achievement of their 2030 goals to use greater volumes of sorted textile waste fractions as feedstocks for their production processes and to be a collaborative member of the used textiles supply chain.
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Marfia, Gustavo. "P2P vehicular applications: mobility, fairness and scalability". Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1998391911&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Miranda, Bueno Alberto. "Scalability in extensible and heterogeneous storage systems". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/279389.

Full text source
Abstract:
The evolution of computer systems has brought an exponential growth in data volumes, which pushes the capabilities of current storage architectures to organize and access this information effectively: as the unending creation and demand of computer-generated data grows at an estimated rate of 40-60% per year, storage infrastructures need increasingly scalable data distribution layouts that are able to adapt to this growth with adequate performance. In order to provide the required performance and reliability, large-scale storage systems have traditionally relied on multiple RAID-5 or RAID-6 storage arrays, interconnected with high-speed networks like FibreChannel or SAS. Unfortunately, the performance of the current, most commonly-used storage technology-the magnetic disk drive-can't keep up with the rate of growth needed to sustain this explosive growth. Moreover, storage architectures based on solid-state devices (the successors of current magnetic drives) don't seem poised to replace HDD-based storage for the next 5-10 years, at least in data centers. Though the performance of SSDs significantly improves that of hard drives, it would cost the NAND industry hundreds of billions of dollars to build enough manufacturing plants to satisfy the forecasted demand. Besides the problems derived from technological and mechanical limitations, the massive data growth poses more challenges: to build a storage infrastructure, the most flexible approach consists in using pools of storage devices that can be expanded as needed by adding new devices or replacing older ones, thus seamlessly increasing the system's performance and capacity. This approach however, needs data layouts that can adapt to these topology changes and also exploit the potential performance offered by the hardware. Such strategies should be able to rebuild the data layout to accommodate the new devices in the infrastructure, extracting the utmost performance from the hardware and offering a balanced workload distribution. An inadequate data layout might not effectively use the enlarged capacity or better performance provided by newer devices, thus leading to unbalancing problems like bottlenecks or resource underusage. Besides, massive storage systems will inevitably be composed of a collection of heterogeneous hardware: as capacity and performance requirements grow, new storage devices must be added to cope with demand, but it is unlikely that these devices will have the same capacity or performance of those installed. Moreover, upon failure, disks are most commonly replaced by faster and larger ones, since it is not always easy (or cheap) to find a particular model of drive. In the long run, any large-scale storage system will have to cope with a myriad of devices. The title of this dissertation, "Scalability in Extensible and Heterogeneous Storage Systems", refers to the main focus of our contributions in scalable data distributions that can adapt to increasing volumes of data. Our first contribution is the design of a scalable data layout that can adapt to hardware changes while redistributing only the minimum data to keep a balanced workload. With the second contribution, we perform a comparative study on the influence of pseudo-random number generators in the performance and distribution quality of randomized layouts and prove that a badly chosen generator can degrade the quality of the strategy. 
Our third contribution is an analysis of long-term data access patterns in several real-world traces to determine if it is possible to offer high performance and a balanced load with less than minimal data rebalancing. In our final contribution, we apply the knowledge learnt about long-term access patterns to design an extensible RAID architecture that can adapt to changes in the number of disks without migrating large amounts of data, and prove that it can be competitive with current RAID arrays with an overhead of at most 1.28% of the storage capacity.
Style APA, Harvard, Vancouver, ISO itp.
29

Berg, Hans Inge. "Simulation of Performance Scalability in Pervasive Systems". Thesis, Norwegian University of Science and Technology, Department of Telematics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10080.

Pełny tekst źródła
Streszczenie:

As increasingly more services and devices become integrated into pervasive systems, future network topologies will be vastly more sophisticated, with numerous heterogeneous devices interconnected. Integrating a new service into this already complex network topology and traffic mix can give unwanted results if the functional blocks (applets) of a service are not placed at the best-suited locations (devices). This thesis looks into the performance and scalability issues that arise when there are multiple candidate locations in which to run an applet. We define a modelling framework that takes into consideration system usage, network loads, device loads, overloads, timing requirements and propagation delays, among other factors. In this framework we are able to set up our own scenarios with user patterns and the number of users in the system. The framework is written in Simula. From its output we can adjust the system or the applets to improve overall traffic flow and resource usage. The framework is run on a total of 8 different scenarios based on an airport usage model. We have 6 static applets residing in their own devices and one dynamic applet for which we try to find the best location within a predefined network topology. The number of users can be fixed, or it can change dynamically from hour to hour. The results produced give a better picture of the whole system working together. Based on these results it is possible to determine the best-suited applet location.
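To make the placement question concrete, the following is a minimal sketch, under assumed cost weights, of scoring candidate devices for the dynamic applet by their load and their propagation delay to users; the thesis's Simula framework models far more factors (overloads, timing requirements, hour-by-hour user patterns).

```python
# Hedged, illustrative sketch (not the thesis's simulation framework): score
# each candidate device by combining its load with the average propagation
# delay to the users that call the applet, then pick the cheapest device.
# The weights and the cost form are assumptions made for this example only.

def best_location(candidates, load, delay_to_users, w_load=1.0, w_delay=1.0):
    """candidates: device names; load[d]: utilisation in [0, 1];
    delay_to_users[d]: propagation delays (ms) from device d to each user."""
    def cost(d):
        avg_delay = sum(delay_to_users[d]) / len(delay_to_users[d])
        return w_load * load[d] + w_delay * avg_delay
    return min(candidates, key=cost)

devices = ["gateway", "kiosk", "server"]
load = {"gateway": 0.7, "kiosk": 0.2, "server": 0.5}
delay = {"gateway": [5, 6, 7], "kiosk": [15, 18, 20], "server": [2, 3, 40]}
print(best_location(devices, load, delay))
```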

Style APA, Harvard, Vancouver, ISO itp.
30

Rodal, Morten. "Scalability of seismic codes on computational clusters". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9145.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Xu, Donghua. "Scalability and Composability Techniques for Network Simulation". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10450.

Pełny tekst źródła
Streszczenie:
Simulation has become an important way to observe and understand various networking phenomena under various conditions. As the demand to simulate larger and more complex networks increases, the limited computing capacity of a single workstation and the limited simulation capability of a single network simulator have become apparent obstacles for simulationists. In this research we develop techniques that can scale a simulation to address the limited capacity of a single workstation, as well as techniques that can compose a simulation from different simulator components to address the limited capability of a single network simulator. We scale a simulation with two different approaches: 1) We reduce the resource requirement of a simulation substantially, so that larger simulations can fit into one single workstation. In this thesis, we develop three techniques (Negative Forwarding Table, Multicast Routing Object Aggregation and NIx-Vector Unicast Routing) to aggregate and compress the large amount of superfluous or redundant routing state in large multicast simulations. 2) The other approach to scaling network simulations is to partition a simulation model in a way that makes the best use of the resources of the available computer cluster, and to distribute the simulation onto the different processors of the computer cluster to obtain the best parallel simulation performance. We develop a novel empirical methodology called BencHMAP (Benchmark-Based Hardware and Model Aware Partitioning) that runs small sets of benchmark simulations to derive the right formulas for calculating the weights that are used to partition the simulation on a given computer cluster. On the other hand, to address the problem of the limited capability of a network simulator, we develop techniques for building complex network simulations by composing them from independent components. With different existing simulators good at different protocol layers/scenarios, we can make each simulator execute the layers where it excels, using a simulation backplane as the interface between the different simulators. In this thesis we demonstrate that these techniques enable us not only to scale up simulations by orders of magnitude with good performance, but also to compose complex simulations with high fidelity.
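The following sketch illustrates only the general idea behind benchmark-driven partitioning: assumed per-node weights (as a benchmark run might produce) are fed to a simple greedy balanced partitioning. The actual weight formulas and partitioning procedure of BencHMAP are derived empirically in the thesis and are not reproduced here.

```python
import heapq

# Illustrative only: assume benchmarks yielded a cost weight per simulated node
# (e.g. packets forwarded per second scaled by a hardware-specific factor).
node_weights = {"r1": 9.0, "r2": 7.5, "r3": 4.0, "h1": 1.2, "h2": 1.1, "h3": 0.9}

def greedy_partition(weights, n_parts):
    """Assign heaviest nodes first to the currently lightest partition (LPT rule)."""
    heap = [(0.0, p, []) for p in range(n_parts)]  # (total weight, partition id, members)
    heapq.heapify(heap)
    for node, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        total, pid, members = heapq.heappop(heap)
        members.append(node)
        heapq.heappush(heap, (total + w, pid, members))
    return sorted(heap, key=lambda entry: entry[1])

for total, pid, members in greedy_partition(node_weights, 2):
    print(f"processor {pid}: {members} (weight {total:.1f})")
```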
Style APA, Harvard, Vancouver, ISO itp.
32

Vieira, Joana. "Scalability Performance of Ericsson Radio Dot System". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177163.

Pełny tekst źródła
Streszczenie:
In the past, network providers resorted to indoor solutions mainly for coverage reasons. However, as traffic volumes grow and multiple hotspots appear indoors, capacity provision is also becoming a driver for in-building networks, particularly as LTE bit-rate promises are at stake. Network vendors are aware of this reality and multiple indoor systems have been launched, such as small cells, active DAS and, in particular, the Ericsson Radio Dot System. A significant factor dictating a system's ability to meet future demands is its scalability, both in coverage area and in capacity. The aim of this thesis is to evaluate Radio Dot System performance along those dimensions: the factors limiting capacity and coverage are addressed and complemented by a cost analysis. Furthermore, the deployment scenarios as a single-operator solution are discussed from a business perspective. For the cases evaluated, the Radio Dot System provides both LTE and WCDMA coverage and capacity indoors for a range of buildings, from medium to very large. A trade-off between network components and bandwidth also allows spectrum flexibility. Moreover, the Radio Dot System has a cost advantage over femtocell deployment and macro outside-in coverage for the scenarios analyzed. On the other hand, single-operator deployment options are, at the moment, limited to medium-sized enterprise clients. However, if the use of unlicensed spectrum bands, which have been issued in some countries, takes off, more opportunities may arise for single-operator in-building systems.
Style APA, Harvard, Vancouver, ISO itp.
33

Messing, Andreas, i Henrik Rönnholm. "Scalability of dynamic locomotion for computer games". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166424.

Pełny tekst źródła
Streszczenie:
Dynamic locomotion in computer games is becoming more common, but is still in its infancy. In this report we conduct an investigation into whether gait oscillators could be used as an in-game technique to generate dynamic locomotion in real time. We tested this by using an industrial-quality physics simulator to simulate a biped and a quadruped (a humanoid and a salamander) and measured quality, runtime and memory usage. The tests show that a biped needs some balance system in order to walk indefinitely, while the salamander managed to walk properly with only minor defects. Memory allows several thousand instances, while runtime allows up to a thousand instances. This leads to the conclusion that it is a cheap system, suitable where quality is secondary to the number of instances.
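For readers unfamiliar with gait oscillators, the following is a minimal sketch of a coupled phase-oscillator CPG with illustrative frequencies, gains and phase offsets; it is not the specific oscillator system or physics setup evaluated in the report.

```python
import math

# Hedged sketch of a gait oscillator: each joint follows a phase oscillator,
# fixed phase offsets between oscillators encode the gait, and the joint angle
# target is a sine of the phase. All constants below are assumptions.

def step_cpg(phases, dt=0.01, freq=1.5, coupling=2.0, offsets=(0.0, math.pi)):
    """Advance the oscillator phases by one time step (Kuramoto-style coupling)."""
    new_phases = []
    for i, phi in enumerate(phases):
        dphi = 2.0 * math.pi * freq
        for j, phj in enumerate(phases):
            if j != i:
                # pull oscillator i towards its desired offset relative to j
                dphi += coupling * math.sin(phj - phi + offsets[i] - offsets[j])
        new_phases.append(phi + dphi * dt)
    return new_phases

phases = [0.0, 0.3]            # two legs of a biped, slightly out of sync
for _ in range(500):           # the coupling drives them towards anti-phase
    phases = step_cpg(phases)
angles = [30.0 * math.sin(p) for p in phases]   # joint angle targets in degrees
print(angles)
```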
Style APA, Harvard, Vancouver, ISO itp.
34

Hussain, Shahid, i Hassan Shabbir. "Directory scalability in multi-agent based systems". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3110.

Pełny tekst źródła
Streszczenie:
Simulation is one approach to analyzing and modelling complex real-world problems. Multi-agent based systems provide a platform to develop simulations based on the concept of agent-oriented programming. In multi-agent systems, the local interactions between agents contribute to the emergence of global phenomena observed in the results of simulation runs. In MABS systems, interaction is a common requirement for all agents to perform their tasks. To interact with each other, the agents require yellow-page services from the platform to search for other agents. As more and more agents perform searches on this yellow-page directory, performance decreases due to a central bottleneck. In this thesis, we have investigated multiple solutions to this problem. The most promising solution is to integrate distributed shared memory with the directory systems. With our proposed solution, empirical analysis shows a statistically significant increase in the performance of the directory service. We expect this result to make a considerable contribution to the state of the art in multi-agent platforms.
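As a purely illustrative sketch of removing a central directory bottleneck, the code below shards a yellow-page directory across several hosts by hashing the service name; the solution evaluated in the thesis integrates distributed shared memory with the directory service, which is not reproduced here.

```python
import hashlib

# Illustrative sketch only: spread lookups across several directory shards
# instead of hitting one central node. Names and API are invented for the example.

class ShardedDirectory:
    def __init__(self, n_shards):
        self.shards = [dict() for _ in range(n_shards)]   # service -> set of agents

    def _shard(self, service):
        h = int.from_bytes(hashlib.md5(service.encode()).digest()[:4], "big")
        return self.shards[h % len(self.shards)]

    def register(self, agent, service):
        self._shard(service).setdefault(service, set()).add(agent)

    def search(self, service):
        return self._shard(service).get(service, set())

d = ShardedDirectory(n_shards=4)
d.register("agent-17", "route-planning")
d.register("agent-42", "route-planning")
print(d.search("route-planning"))   # {'agent-17', 'agent-42'}
```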
Style APA, Harvard, Vancouver, ISO itp.
35

Gottemukkala, Vibby. "Scalability issues in distributed and parallel databases". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8176.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Jiang, Tianji. "Accommodating heterogeneity and scalability for multicast communication". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/8190.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Tambouris, Efthimios. "Performance and scalability analysis of parallel systems". Thesis, Brunel University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341665.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Djugash, Joseph A. "Geolocation with Range: Robustness, Efficiency and Scalability". Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/63.

Pełny tekst źródła
Streszczenie:
Numerous geolocation technologies, such as GPS, can pinpoint a person's or object's position on Earth under ideal conditions. However, autonomous navigation of mobile robots requires a precision localization system that can operate under a variety of environmental and resource constraints. Take for example an emergency response scenario where a hospital building is on fire. This is a time-sensitive, life-or-death scenario where it is critical for first responders to locate possible survivors in a smoke-filled room. The robot's sensors need to work past environmental occlusions such as excessive smoke, debris, etc. to provide support to the first responders. The robot itself also needs to effectively and accurately navigate the room with minimal help from other agents that might be present in the building, since it is unrealistic to deploy unlimited robots for this task. The available resources need to be used effectively to best aid the rescue crew and ensure the safety of the rescue workers. Scenarios such as this present a crucial need for solutions that can work effectively in the presence of environmental constraints that can interfere with a sensor, while giving equal weight to resource constraints that impact the localization ability of a robot. This thesis presents one such experimentally proven solution that offers superior accuracy, robustness and scalability, demonstrated via several real-world robot experiments and simulations. The geolocation technique explored uses a recently introduced sensor technology, ranging radios, which are able to communicate and measure range in the absence of line-of-sight between radio nodes. This provides a straightforward approach to tackling unknown occlusions in the environment and enables the use of range to localize the agent in a variety of different situations. One shortcoming of the range-only data created by these ranging radios is that they generate a nonlinear and multi-modal measurement distribution that existing estimation techniques fail to model accurately and efficiently. To overcome this shortcoming, a novel and robust method for localization and SLAM (Simultaneous Localization and Mapping) given range-only data to stationary features/nodes is developed and presented here. In addition to this centralized filtering technique, two key extensions are investigated and experimentally proven in order to provide a comprehensive framework for geolocation with range. The first is a decentralized filtering technique that distributes computational needs across several agents. This technique is especially useful in real-world scenarios where leveraging a large number of agents in an environment is not unrealistic. The second is a novel cooperative localization strategy, based on first principles, that leverages the motion of mobile agents in the system to provide better accuracy in a featureless environment. This technique is useful in cases where a limited number of mobile agents need to coordinate with each other to mutually improve their estimates. The developed techniques offer a unified global framework for geolocation with range that spans everything from static network localization to multi-robot cooperative localization, with a level of accuracy and robustness that no other existing techniques can provide.
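The following is a minimal sketch of why range-only measurements are awkward for standard Gaussian filters and how a sampling-based representation copes: a single range to a known beacon constrains the position to a ring, which a particle cloud captures naturally. The beacon position and noise level are assumptions; this is not the estimator developed in the thesis.

```python
import math
import random

# Hedged sketch: a tiny range-only particle filter update for one beacon at a
# known position. It only illustrates the multi-modal "ring" posterior that
# range-only data produces; fusing further ranges or odometry collapses it.

BEACON = (0.0, 0.0)
RANGE_STD = 0.3   # assumed ranging noise (metres)

def update(particles, measured_range):
    """Weight particles by how well they explain the measured range, then resample."""
    weights = []
    for (x, y) in particles:
        predicted = math.hypot(x - BEACON[0], y - BEACON[1])
        err = measured_range - predicted
        weights.append(math.exp(-0.5 * (err / RANGE_STD) ** 2))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

particles = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(2000)]
particles = update(particles, measured_range=5.0)
# The surviving particles lie close to a circle of radius 5 around the beacon.
mean_r = sum(math.hypot(x, y) for x, y in particles) / len(particles)
print(f"mean particle range after update: {mean_r:.2f}")
```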
Style APA, Harvard, Vancouver, ISO itp.
39

Rotsos, Charalampos. "Improving network extensibility and scalability through SDN". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709033.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Konstantakopoulos, Theodoros K. 1977. "Energy scalability of on-chip interconnection networks". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40315.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Page 198 blank.
Includes bibliographical references (p. 191-197).
On-chip interconnection networks (OCNs) such as point-to-point networks and buses form the communication backbone in multiprocessor systems-on-a-chip, multicore processors, and tiled processors. OCNs consume significant portions of a chip's energy budget, so their energy analysis early in the design cycle becomes important for architectural design decisions. Although innumerable studies have examined OCN implementation and performance, there have been few energy analysis studies. This thesis develops an analytical framework for energy estimation in OCNs, for any given topology and arbitrary communication patterns, and presents OCN energy results based on both analytical communication models and real network traces from applications running on a tiled multicore processor. This thesis is the first work to address communication locality in analyzing multicore interconnect energy and to use real multicore interconnect traces extensively. The thesis compares the energy performance of point-to-point networks with buses for varying degrees of communication locality. The model accounts for wire length, switch energy, and network contention. This work is the first to examine network contention from the energy standpoint. The thesis presents a detailed analysis of the energy costs of a switch and shows that the estimated values for channel energy, switch control logic energy, and switch queue buffer energy are 34.5pJ, 17pJ, and 12pJ, respectively. The results suggest that a one-dimensional point-to-point network results in approximately 66% energy savings over a bus for 16 or more processors, while a two-dimensional network saves over 82%, when the processors communicate with each other with equal likelihood. The savings increase with locality. Analysis of the effect of contention on OCNs for the Raw tiled microprocessor reports a maximum energy overhead of 23% due to resource contention in the interconnection network.
by Theodoros K. Konstantakopoulos.
Ph.D.
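A back-of-envelope reading of the per-switch figures quoted in the abstract (34.5 pJ channel, 17 pJ control logic, 12 pJ queue buffer) is sketched below; the thesis's analytical model additionally accounts for wire length, contention and communication locality.

```python
# Back-of-envelope only, reusing the per-switch estimates quoted above.
# It simply shows how per-message energy grows with hop count in a
# point-to-point network; it is not the thesis's full energy model.
CHANNEL_PJ, CONTROL_PJ, BUFFER_PJ = 34.5, 17.0, 12.0
PER_HOP_PJ = CHANNEL_PJ + CONTROL_PJ + BUFFER_PJ   # 63.5 pJ per traversed switch

for hops in (1, 2, 4, 8):
    print(f"{hops} hops -> ~{hops * PER_HOP_PJ:.1f} pJ per flit")
```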
Style APA, Harvard, Vancouver, ISO itp.
41

Xing, Kerry (Kerry K. ). "Cilkprof : a scalability profiler for Cilk programs". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91879.

Pełny tekst źródła
Streszczenie:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-53).
This thesis describes the design and implementation of Cilkprof, a profiling tool that helps programmers diagnose scalability problems in their Cilk programs. Cilkprof provides in-depth information about the scalability of programs without adding excessive overhead. Cilkprof's output can be used to find scalability bottlenecks in the user's code. Cilkprof makes profiling measurements at the fork and join points of computations, which typically limits the amount of overhead incurred by the profiler. In addition, despite recording in-depth information, Cilkprof does not generate the large log files that are typical of trace-based profilers. Moreover, the profiling algorithm incurs only constant amortized overhead per measurement. Cilkprof slows down serial program execution by a factor of about 10 in the common case, on a well-coarsened parallel program. The slowdown is reasonable for the amount of information gained from the profiling. Finally, the approach taken by Cilkprof enables the creation of an API, which can allow users to specify their own profiling code without having to change the Cilk runtime.
by Kerry Xing.
M. Eng.
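The quantities a scalability profiler of this kind reports can be illustrated with a toy work/span bookkeeping sketch: measurements are combined at spawn and join points, and the work-to-span ratio bounds the achievable speedup. This is a conceptual illustration, not Cilkprof's instrumentation.

```python
# Hedged sketch: work/span bookkeeping at fork/join points, in the spirit of a
# scalability profiler. Costs are counted in unit-cost "instructions".
def fib(n):
    """Return (value, work, span) for a parallel-style recursive fib."""
    if n < 2:
        return n, 1, 1
    a, wa, sa = fib(n - 1)   # conceptually spawned branch
    b, wb, sb = fib(n - 2)   # conceptually runs in parallel with the spawn
    # join point: work adds up, span takes the longer branch plus the join cost
    work = wa + wb + 1
    span = max(sa, sb) + 1
    return a + b, work, span

value, work, span = fib(20)
print(f"fib(20)={value}, work={work}, span={span}, parallelism={work / span:.1f}")
```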
Style APA, Harvard, Vancouver, ISO itp.
42

Pak, Nikita. "Automation and scalability of in vivo neuroscience". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119094.

Pełny tekst źródła
Streszczenie:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 121-124).
Many in vivo neuroscience techniques are limited in terms of scale and suffer from inconsistencies because of the reliance on human operators for critical tasks. Ideally, automation would yield repeatable and reliable experimental procedures. Precision engineering would also allow us to perform more complex experiments by letting us take novel approaches to existing problems. Two tasks that would see great improvement through automation and scalability are gaining access to the brain and imaging neuronal activity. In this thesis, I describe the development of two novel tools that increase the precision, repeatability, and scale of in vivo neural experimentation. The first tool is a robot that automatically performs craniotomies in mice and other mammals by sending an electrical signal through a drill and measuring the voltage drop across the animal. A well-characterized increase in conductance occurs after skull breakthrough due to the lower impedance of the meninges compared to the bone of the skull. This robot allows us access to the brain without damaging the tissue, a critical step in many neuroscience experiments. The second tool is a new type of microscope that can capture high-resolution three-dimensional volumes at the speed of the camera frame rate, with isotropic resolution. This microscope is novel in that it uses two orthogonal views of the sample to create a higher-resolution image than is possible with a single view. Increased resolution will potentially allow us to record neuronal activity that we would otherwise miss because of the inability to distinguish two nearby neurons.
by Nikita Pak.
Ph. D.
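The breakthrough-detection idea described in the abstract can be sketched as a simple threshold test on measured conductance; the threshold, sampling and readings below are illustrative assumptions, not the robot's calibrated procedure.

```python
# Hedged sketch: monitor conductance while drilling and stop as soon as it
# rises by a given factor over the skull baseline. All values are invented.

def detect_breakthrough(conductance_samples, baseline_window=10, rise_factor=1.5):
    """Return the index at which conductance exceeds rise_factor * baseline."""
    for i, g in enumerate(conductance_samples):
        if i < baseline_window:
            continue
        baseline = sum(conductance_samples[:baseline_window]) / baseline_window
        if g > rise_factor * baseline:
            return i          # stop the drill here
    return None

readings = [1.0, 1.02, 0.99, 1.01, 1.0, 1.03, 0.98, 1.0, 1.01, 1.02,  # skull
            1.05, 1.1, 1.2, 1.9, 2.4]                                  # breakthrough
print(detect_breakthrough(readings))   # index of the first post-breakthrough sample
```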
Style APA, Harvard, Vancouver, ISO itp.
43

Wong, Jeremy Ng 1981. "Modeling the scalability of acyclic stream programs". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/18004.

Pełny tekst źródła
Streszczenie:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 109-110).
Despite the fact that the streaming application domain is becoming increasingly widespread, few studies have focused specifically on the performance characteristics of stream programs. We introduce two models by which the scalability of stream programs can be predicted to some degree of accuracy. This is accomplished by testing a series of stream benchmarks on our numerical representations of the two models. These numbers are then compared to actual speedups obtained by running the benchmarks through the Raw machine and a Magic network. Using the metrics, we show that stateless acyclic stream programs benefit considerably from data parallelization. In particular, programs with low communication data rates experience up to a tenfold speedup when parallelized to a reasonable margin. Those with high communication data rates also experience approximately a twofold speedup. We find that the model that takes synchronization communication overhead into account, in addition to a cost proportional to the communication rate of the stream, provides the highest predictive accuracy.
by Jeremy Ng Wong.
M.Eng.
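The flavour of such a model can be conveyed with a generic sketch that charges each core a synchronization cost plus a cost proportional to the stream's communication rate; the constants and functional form are assumptions for illustration, not the thesis's fitted model.

```python
# Illustrative only: a generic speedup model in the spirit described above.

def predicted_speedup(work, comm_rate, cores, alpha=0.02, beta=0.01):
    """work: serial compute time; comm_rate: items/sec communicated;
    alpha: assumed cost per communicated item; beta: assumed per-core sync cost."""
    parallel_time = work / cores + alpha * comm_rate + beta * cores
    return work / parallel_time

for rate in (10, 1000):          # low vs. high communication data rate
    print(rate, [round(predicted_speedup(100.0, rate, p), 1) for p in (2, 4, 8, 16)])
```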
Style APA, Harvard, Vancouver, ISO itp.
44

Davies, Neil J. "The performance and scalability of parallel systems". Thesis, University of Bristol, 1994. http://hdl.handle.net/1983/964dec41-9a36-44ea-9cfc-f6d1013fcd12.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Ghaffari, Amir. "The scalability of reliable computation in Erlang". Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6789/.

Pełny tekst źródła
Streszczenie:
With the advent of many-core architectures, scalability is a key property for programming languages. Actor-based frameworks like Erlang are fundamentally scalable, but in practice they have some scalability limitations. The RELEASE project aims to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on emergent commodity architectures with 10,000 cores. The RELEASE consortium works to scale Erlang at the virtual machine, language and infrastructure levels, and to supply profiling and refactoring tools. This research contributes to the RELEASE project at the language level. Firstly, we study the provision of scalable persistent storage options for Erlang. We articulate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DBMSs against these requirements. We investigate the scalability limits of the Riak NoSQL DBMS using Basho Bench on up to 100 nodes on the Kalkyl cluster and establish scientifically, for the first time, the scalability limit of Riak as 60 nodes, thereby confirming developer folklore. We design and implement DE-Bench, a scalable fault-tolerant peer-to-peer benchmarking tool that measures the throughput and latency of distributed Erlang commands on a cluster of Erlang nodes. We employ DE-Bench to investigate the scalability limits of distributed Erlang on up to 150 nodes and 1200 cores. Our results demonstrate that the frequency of global commands limits the scalability of distributed Erlang. We also show that distributed Erlang scales linearly up to 150 nodes and 1200 cores with relatively heavy data and computation loads when no global commands are used. As part of the RELEASE project, the Glasgow University team has developed Scalable Distributed Erlang (SD Erlang) to address the scalability limits of distributed Erlang. We evaluate SD Erlang by designing and implementing the first ever demonstrators for SD Erlang, i.e. DE-Bench, Orbit and Ant Colony Optimisation (ACO). We employ DE-Bench to evaluate the performance and scalability of group operations in SD Erlang on up to 100 nodes. Our results show that the alternatives SD Erlang offers for global commands (i.e. group commands) scale linearly up to 100 nodes. We also develop and evaluate an SD Erlang implementation of Orbit, a symbolic computing kernel and a generalization of a transitive closure computation. Our evaluation results show that SD Erlang Orbit outperforms the distributed Erlang Orbit on 160 nodes and 1280 cores. Moreover, we develop a reliable distributed version of ACO and show that the reliability of ACO limits its scalability in traditional distributed Erlang. We use SD Erlang to improve the scalability of the reliable ACO by eliminating global commands and avoiding full mesh connectivity between nodes. We show that SD Erlang effectively reduces the network traffic between nodes in an Erlang cluster.
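A back-of-envelope model of why global commands cap scalability is sketched below: if a global command must touch every node while a local command touches one, aggregate throughput stops growing with cluster size unless the global fraction is tiny. The cost model is an assumption used to illustrate the measured trend, not DE-Bench itself.

```python
# Illustrative cost model only (not DE-Bench): a fraction f of commands is
# "global" and touches all N nodes; the rest touch a single node.

def relative_throughput(n_nodes, global_fraction):
    cost_per_command = (1 - global_fraction) * 1 + global_fraction * n_nodes
    return n_nodes / cost_per_command          # aggregate capacity / mean cost

for f in (0.0, 0.01, 0.1):
    print(f, [round(relative_throughput(n, f), 1) for n in (10, 50, 100, 150)])
```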
Style APA, Harvard, Vancouver, ISO itp.
46

Stromatias, Evangelos. "Scalability and robustness of artificial neural networks". Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/scalability-and-robustness-of-artificial-neural-networks(b73b3f77-2bc3-4197-bd0f-dc7501b872cb).html.

Pełny tekst źródła
Streszczenie:
Artificial Neural Networks (ANNs) continue to gain popularity today, as they are being used in several diverse research fields and many different contexts, ranging from biological simulations and experiments on artificial neuronal models to machine learning models intended for industrial and engineering applications. One example is the recent success of Deep Learning architectures (e.g., Deep Belief Networks [DBNs]), which are in the spotlight of machine learning research, as they are capable of delivering state-of-the-art results in many domains. While the performance of such ANN architectures is greatly affected by their scale, their capacity to scale, both for training and during execution, is limited by increased power consumption and communication overheads, implicitly posing a limiting factor on their real-time performance. The ongoing work on the design and construction of spike-based neuromorphic platforms offers an alternative for running large-scale neural networks, such as DBNs, with significantly lower power consumption and lower latencies, but has to overcome the hardware limitations and model specialisations imposed by this type of circuit. SpiNNaker is a novel, massively parallel, fully programmable and scalable architecture designed to enable real-time spiking neural network (SNN) simulations. These properties make SpiNNaker quite an attractive neuromorphic exploration platform for running large-scale ANNs; however, it is necessary to investigate thoroughly both its power requirements and its communication latencies. This research focuses on two main aspects. First, it aims at characterising the power requirements and communication latencies of the SpiNNaker platform while running large-scale SNN simulations. The results of this investigation lead to the derivation of a power estimation model for the SpiNNaker system, a reduction of the overall power requirements and the characterisation of the intra- and inter-chip spike latencies. Then it focuses on a full characterisation of spiking DBNs, by developing a set of case studies in order to determine the impact of (a) the hardware bit precision; (b) the input noise; (c) weight variation; and (d) combinations of these on the classification performance of spiking DBNs for the problem of handwritten digit recognition. The results demonstrate that spiking DBNs can be realised on limited-precision hardware platforms without drastic performance loss, and thus offer an excellent compromise between accuracy and low-power, low-latency execution. These studies are intended to provide important guidelines for informing current and future efforts around developing custom large-scale digital and mixed-signal spiking neural network platforms.
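The kind of robustness case study listed above, items (a) and (c), can be sketched generically as quantizing a weight to a fixed-point resolution and adding Gaussian jitter; a real study would propagate such perturbations through a spiking DBN and measure the change in classification accuracy. Everything below is an assumption for illustration.

```python
import random

# Hedged sketch (not the thesis's SpiNNaker experiments): limit a weight to a
# given number of fractional bits and add Gaussian jitter, to see how much the
# stored value can drift under precision limits and weight variation.

def perturb_weight(w, frac_bits=4, noise_std=0.05):
    step = 2.0 ** -frac_bits                         # fixed-point resolution
    quantized = round(w / step) * step               # (a) limited bit precision
    return quantized + random.gauss(0.0, noise_std)  # (c) weight variation

random.seed(0)
w = 0.3172
print([round(perturb_weight(w, frac_bits=b), 4) for b in (8, 4, 2)])
```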
Style APA, Harvard, Vancouver, ISO itp.
47

Bernatskiy, Anton. "Improving Scalability of Evolutionary Robotics with Reformulation". ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/957.

Pełny tekst źródła
Streszczenie:
Creating systems that can operate autonomously in complex environments is a challenge for contemporary engineering techniques. Automatic design methods offer a promising alternative, but so far they have not been able to produce agents that outperform manual designs. One such method is evolutionary robotics. It has been shown to be a robust and versatile tool for designing robots to perform simple tasks, but more challenging tasks at present remain out of reach of the method. In this thesis I discuss and attack some of the problems underlying the scalability issues associated with the method. I present a new technique for evolving modular networks. I show that the performance of modularity-biased evolution depends heavily on the morphology of the robot's body and present a new method for co-evolving morphology and modular control. To be able to reason about the new technique I develop the reformulation framework: a general way to describe and reason about metaoptimization approaches. Within this framework I describe a new heuristic for developing metaoptimization approaches that is based on the technique for co-evolving morphology and modularity. I validate the framework by applying it to a practical task of zero-g autonomous assembly of structures with a fleet of small robots. Although this work focuses on evolutionary robotics, the methods and approaches developed within it can be applied to optimization problems in any domain.
Style APA, Harvard, Vancouver, ISO itp.
48

Desmouceaux, Yoann. "Network-Layer Protocols for Data Center Scalability". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX011/document.

Pełny tekst źródła
Streszczenie:
With the development of demand for computing resources, data center architectures are growing both in scale and in complexity. In this context, this thesis takes a step back as compared to traditional network approaches, and shows that providing generic primitives directly within the network layer is a great way to improve efficiency of resource usage, and to decrease network traffic and management overhead. Using recently introduced network architectures, Segment Routing (SR) and Bit-Indexed Explicit Replication (BIER), network-layer protocols are designed and analyzed to provide three high-level functions: (1) task mobility, (2) reliable content distribution and (3) load-balancing. First, task mobility is achieved by using SR to provide a zero-loss virtual machine migration service. This then opens the opportunity for studying how to orchestrate task placement and migration while aiming to (i) maximize the inter-task throughput, (ii) maximize the number of newly-placed tasks, and (iii) minimize the number of tasks to be migrated. Second, reliable content distribution is achieved by using BIER to provide a reliable multicast protocol, in which retransmissions of lost packets are targeted towards the precise set of destinations having missed that packet, thus incurring a minimal traffic overhead. To decrease the load on the source link, this is then extended to enable retransmissions by local peers from the same group, with SR as a helper to find a suitable retransmission candidate. Third, load-balancing is achieved by using SR to distribute queries through several application candidates, each of which takes a local decision as to whether to accept them, thus achieving better fairness as compared to centralized approaches. The feasibility of a hardware implementation of this approach is investigated, and a solution using covert channels to transparently convey information to the load-balancer is implemented for a state-of-the-art programmable network card. Finally, the possibility of providing autoscaling as a network service is investigated: by letting queries go through a fixed chain of applications using SR, autoscaling is triggered by the last instance, depending on its local state.
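The local accept/reject load-balancing idea summarised above can be sketched as follows: a query carries an ordered list of candidate instances (conceptually, an SR segment list), and each instance decides locally whether to serve it. Capacities and names are assumptions; this is not the thesis's data-plane implementation.

```python
from collections import deque

# Hedged sketch of dispatching through a chain of candidates, each making a
# purely local accept/reject decision based on its own load.

class Instance:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.active = name, capacity, 0

    def try_accept(self):
        if self.active < self.capacity:     # local decision only
            self.active += 1
            return True
        return False

def dispatch(query_id, segment_list):
    candidates = deque(segment_list)
    while candidates:
        instance = candidates.popleft()
        if instance.try_accept():
            return instance.name
    return None                              # all candidates busy

apps = [Instance("app-1", capacity=2), Instance("app-2", capacity=2)]
print([dispatch(q, apps) for q in range(5)])   # ['app-1', 'app-1', 'app-2', 'app-2', None]
```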
Style APA, Harvard, Vancouver, ISO itp.
49

Scotece, Domenico <1988>. "Edge Computing for Extreme Reliability and Scalability". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9433/5/Tesi-Scotece.pdf.

Pełny tekst źródła
Streszczenie:
The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of this collected data at a central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of the data (e.g., on gateways and on edge micro-servers) not only reduces the huge workload on the central cloud, but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.
Style APA, Harvard, Vancouver, ISO itp.
50

Bordes, Philippe. "Adapting video compression to new formats". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S003/document.

Pełny tekst źródła
Streszczenie:
The new video codecs should be designed with a high level of adaptability in terms of network bandwidth, format scalability (size, color space…) and backward compatibility. This thesis was carried out in this context and within the scope of the HEVC standard development. In a first part, several video coding adaptations that exploit the signal properties and which take place at bit-stream creation are explored. The study of improved frame partitioning for inter prediction allows better fitting of the actual motion boundaries and shows significant gains. This principle is further extended to long-term motion modeling with trajectories. We also show how cross-component correlation statistics and the luminance change between pictures can be exploited to increase the coding efficiency. In a second part, post-creation stream adaptations relying on intrinsic stream flexibility are investigated. In particular, a new color gamut scalability scheme addressing color space adaptation is proposed. From this work, we derive color remapping metadata and an associated model to provide a low-complexity, general-purpose color remapping feature. We also explore adaptive resolution coding and how to extend scalable codecs to stream-switching applications. Several of the described techniques have been proposed to MPEG. Some of them have been adopted in the HEVC standard and in the UHD Blu-ray Disc specification. Various techniques for adapting the video compression to the content characteristics and to the distribution use cases have been considered. They can be selected or combined depending on the application requirements.
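A generic illustration of metadata-driven colour remapping is sketched below as a per-component piecewise-linear lookup; the pivot values are invented, and the actual metadata syntax and remapping model standardized from this work are not reproduced here.

```python
# Hedged sketch: apply a per-component piecewise-linear remapping, as a stand-in
# for the kind of low-complexity colour remapping a stream's metadata can signal.

def remap_component(value, pivots):
    """Apply a piecewise-linear map given (input, output) pivot pairs (0-255 range)."""
    for (x0, y0), (x1, y1) in zip(pivots, pivots[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return value

lut = [(0, 0), (64, 80), (192, 200), (255, 255)]   # assumed metadata pivots
print([remap_component(v, lut) for v in (0, 32, 128, 240)])
```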
Style APA, Harvard, Vancouver, ISO itp.
