Theses on the topic "Scalability"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
See the top 50 dissertations (master's and doctoral theses) on the research topic "Scalability".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when one is available in the metadata.
Browse theses from many fields of scholarship and compile an accurate bibliography.
Singh, Arjun. "The scalability of AspectJ". Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/32349.
Full text
Faculty of Science, Department of Computer Science, Graduate
Li, Yan. "Scalability of RAID systems". Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3382.
Full text
Duong, Tuyet. "BLOCKCHAIN SCALABILITY AND SECURITY". VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5559.
Full text
Ben Alaya, Mahdi. "Towards interoperability, self-management, and scalability for machine-to-machine systems". Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0052/document.
Full text
Machine-to-Machine (M2M) communication is one of the main features of the Internet of Things (IoT). It is a phenomenon that has been proceeding quietly in the background, and it is now coming to the surface, where an explosion of usage scenarios in businesses will happen. Sensors, actuators, tags, vehicles, and intelligent things all have the ability to communicate. The number of M2M connections is continuously increasing, and billions of machines are predicted to be interconnected in the near future. M2M applications provide advantages in various domains, from smart cities, factories of the future, connected cars, home automation, and e-health to precision agriculture. This fast-growing ecosystem is leading M2M towards a promising future. However, M2M market expansion opportunities are not straightforward. A set of challenges must be overcome to enable mass-scale M2M deployment across various industries, including interoperability, complexity, and scalability issues. Currently, the M2M market suffers from high vertical fragmentation affecting the majority of business sectors: various vendor-specific M2M solutions have been designed independently for specific applications, which has led to serious interoperability issues. To address this challenge, we designed, implemented, and experimented with the OM2M platform, offering a flexible and extensible operational architecture for M2M interoperability compliant with the SmartM2M standard. To support constrained environments, we proposed an efficient naming convention relying on a non-hierarchical resource structure to reduce the payload size. To reduce the semantic gap between applications and machines, we proposed the IoT-O ontology for effective semantic interoperability. IoT-O consists of five main parts, namely sensor, actuator, observation, actuation, and service models, and aims to converge quickly to a common IoT vocabulary.
An interoperable M2M service platform makes it possible to interconnect heterogeneous devices that are widely distributed and frequently evolving with their environments. Keeping M2M systems alive is costly in terms of time and money. To address this challenge, we designed, implemented, and integrated the FRAMESELF framework to retrofit self-management capabilities into M2M systems based on the autonomic computing paradigm. Extending the MAPE-K reference architecture model, FRAMESELF enables the OM2M system behavior to be adapted dynamically, according to high-level policies, as the environment changes. We defined a set of semantic rules for reasoning about the IoT-O ontology as a knowledge model. Our goal is to enable automatic discovery of machines and applications through dynamic reconfiguration of resource architectures. Interoperability and self-management pave the way to mass-scale deployment of M2M devices. However, current M2M systems rely on the existing internet infrastructure, which was never designed to address such requirements, thus raising new requirements in terms of scalability. To address this challenge, we designed, simulated, and validated the OSCL overlay approach, a new M2M meshed network topology, as an alternative to the current centralized approach. OSCL relies on the Named Data Networking (NDN) technique and supports multi-hop communication and distributed caching to optimize networking and enhance data dissemination. We developed the OSCLsim simulator to validate the proposed approach. Finally, a theoretical model based on random graphs is formulated to describe the evolution and robustness of the proposed system.
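The payload argument behind the non-hierarchical naming convention can be illustrated with a toy registry. The scheme below is my sketch, not OM2M's actual convention: each resource gets a short flat identifier instead of its full hierarchical path, so every message carrying an address shrinks.

```python
import itertools

class FlatNames:
    """Toy registry mapping hierarchical resource paths to short flat IDs."""
    def __init__(self):
        self._ids = itertools.count()
        self._by_path, self._by_id = {}, {}

    def register(self, path):
        # Idempotent: registering the same path twice returns the same ID.
        if path not in self._by_path:
            flat = f"r{next(self._ids)}"
            self._by_path[path] = flat
            self._by_id[flat] = path
        return self._by_path[path]

    def resolve(self, flat):
        return self._by_id[flat]

names = FlatNames()
path = "/om2m/gateway-1/sensors/temp-42/latest"  # hypothetical resource path
flat = names.register(path)
print(len(flat), "bytes instead of", len(path))
```

Every constrained-device message then carries the two-byte flat name rather than the long path, which is the kind of saving the abstract refers to.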
Jenefeldt, Andreas, and Erik Foogel Jakobsson. "Scalability in Startups : A Case Study of How Technological Startup Companies Can Enhance Scalability". Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168150.
Full text
Krishna, Chaitanya Konduru. "Scalability Drivers in Requirements Engineering". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13480.
Full text
Mir Taheri, Seyed M. "Scalability of communicators in MPI". Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/33128.
Full text
Hao, Fang. "Scalability techniques in QoS networks". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/9175.
Testo completoWen, Yang Ph D. Massachusetts Institute of Technology. "Scalability of dynamic traffic assignment". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/47739.
Testo completoIncludes bibliographical references (p. 163-174).
This research develops a systematic approach to analyze the computational performance of Dynamic Traffic Assignment (DTA) models and provides solution techniques to improve their scalability for on-line applications for large-scale networks. DTA models for real-time use provide short-term predictions of network status and generate route guidance for travelers. The computational performance of such systems is a critical concern. Existing methodologies, which have limited capabilities for online large-scale applications, use single-processor configurations that are less scalable, and rely primarily on trade-offs that sacrifice accuracy for improved computational efficiency. In the proposed scalable methodology, algorithmic analyses are first used to identify the system bottlenecks for large-scale problems. Our analyses show that the computation time of DTA systems for a given time interval depends largely on a small set of parameters. Important parameters include the number of origin-destination (OD) pairs, the number of sensors, the number of vehicles, the size of the network, and the number of time-steps used by the simulator. Then scalable approaches are developed to solve the bottlenecks. A constraint generalized least-squares solution enabling efficient use of the sparse-matrix property is applied to the dynamic OD estimation, replacing the Kalman-Filter solution or other full-matrix algorithms. Parallel simulation with an adaptive network decomposition framework is proposed to achieve better load-balancing and improved efficiency. A synchronization-feedback mechanism is designed to ensure the consistency of traffic dynamics across processors while keeping communication overheads minimal. The proposed methodology is implemented in DynaMIT, a state-of-the-art DTA system. Profiling studies are used to validate the algorithmic analysis of the system bottlenecks.
The new system is evaluated on two real-world networks under various scenarios. Empirical results of the case studies show that the proposed OD estimation algorithm is insensitive to an increase in the number of OD pairs or sensors, and the computation time is reduced from minutes to a few seconds. The parallel simulation is found to maintain accurate output as compared to the sequential simulation, and with adaptive load-balancing, it considerably speeds up the network models even under non-recurrent incident scenarios. The results demonstrate the practical nature of the methodology and its scalability to large-scale real-world problems.
by Yang Wen.
Ph.D.
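The OD-estimation step above replaces a full-matrix Kalman-filter update with a least-squares solve that can exploit sparsity. A toy version with an invented two-sensor, three-OD-pair network (dense here for brevity; the thesis's matrices are large and sparse, which is exactly what its constrained GLS solution exploits):

```python
import numpy as np

# Toy assignment matrix: entry (i, j) is the fraction of OD pair j's
# vehicles that cross sensor i during the estimation interval.
A = np.array([
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 0.5],
])
counts = np.array([120.0, 180.0])       # observed sensor counts
prior = np.array([100.0, 150.0, 40.0])  # a-priori OD flows

# Estimate the minimum-norm deviation from the prior that explains the
# observed counts, then add it back to obtain the OD estimate.
deviation, *_ = np.linalg.lstsq(A, counts - A @ prior, rcond=None)
estimate = prior + deviation
print(estimate)
```

The estimate reproduces the sensor counts exactly while staying as close as possible to the prior, which is the qualitative behaviour the abstract describes; all numbers here are illustrative.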
Persson, Jonna. "SCALABILITY OF JAVASCRIPT LIBRARIES FOR DATA VISUALIZATION". Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19994.
Full text
Lifhjelm, Tobias. "A scalability evaluation on CockroachDB". Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184587.
Full text
Mathew, Ajit. "Multicore Scalability Through Asynchronous Work". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/104116.
Testo completoMaster of Science
Up until the mid-2000s, Moore's law predicted that CPU performance doubled every two years: improvements in transistor technology allowed smaller transistors that could switch at higher frequencies, leading to faster CPU clocks. But faster clocks lead to higher heat dissipation, and as chips reached their thermal limits, computer architects could no longer increase clock speeds. They therefore moved to multicore architectures, in which a single die contains multiple CPUs, to allow higher performance. Programmers are now required to parallelize their code to take advantage of all the CPUs in a chip, which is a non-trivial problem. The theoretical speedup a program can achieve on a multicore architecture is dictated by Amdahl's law, which identifies the non-parallelizable portion of a program as the limiting factor for speedup. For example, a program with 99% parallelizable code has a maximum theoretical speedup of 100, whereas a program with only 50% parallelizable code can never exceed a speedup of 2. Therefore, to achieve high speedup, programmers need to shrink the serial sections of their programs. One way to do so is to remove non-critical tasks from the sequential section and perform them asynchronously using a background thread. This thesis explores this technique in two systems. The first is Multi-Version Read-Log-Update (MV-RLU), a synchronization mechanism used to coordinate access to a shared resource. MV-RLU achieves high performance by removing garbage collection from the critical path and performing it asynchronously using a background thread. The second is HydraList, an index structure based on the insight that an index can be decomposed into two components, a search layer and a data layer, and that decoupling updates to the two layers allows higher performance. Updates to the search layer are performed synchronously, while updates to the data layer are done asynchronously by background threads.
Evaluation shows that both systems perform better than state-of-the-art competitors across a variety of workloads.
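The Amdahl's-law arithmetic in the summary above is easy to check with a few lines of Python (the function names are mine, purely illustrative):

```python
def amdahl_speedup(p, n):
    """Speedup of a program whose parallelizable fraction is p, on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def amdahl_limit(p):
    """Upper bound on speedup as the core count grows without bound."""
    return 1.0 / (1.0 - p)

# A 50%-parallelizable program can never exceed a 2x speedup...
print(amdahl_limit(0.50))                     # 2.0
# ...while a 99%-parallelizable one is bounded by 100x in the limit.
print(amdahl_limit(0.99))
# On a realistic 32-core chip, the 99% program reaches only ~24x.
print(round(amdahl_speedup(0.99, 32), 1))     # 24.4
```

The gap between the 100x bound and the ~24x achieved on 32 cores is why shrinking the serial section, as the thesis does with asynchronous background work, pays off more than simply adding cores.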
Monahan, Melissa A. "Scalability study for robotic hand platform /". Online version of thesis, 2010. http://hdl.handle.net/1850/12225.
Full text
Wittie, Mike P. "Towards Sustainable Scalability of Communication Networks". University of California, Santa Barbara, 2012. http://pqdtopen.proquest.com/#viewpdf?dispub=3482054.
Full text
Miller, John. "Distributed virtual environment scalability and security". Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/241109.
Full text
Vikstén, Henrik, and Viktor Mattsson. "Performance and Scalability of Sudoku Solvers". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134823.
Full text
This report aims to clarify the differences between algorithms designed to solve Sudoku: choosing a few algorithms that are suited to the problem yet differ from one another, taking an everyday puzzle and applying commonly used computer-science algorithms to learn more about them, gathering relevant data to see how they perform in different situations, how easily they can be modified and applied to larger Sudokus, and how their performance scales as the puzzle grows. Dancing Links was the fastest algorithm and scaled best of those tested, while brute force and simulated annealing were less consistent across the tests and considerably slower overall.
Jogalekar, Prasad P. "Scalability analysis framework for distributed systems". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0033/NQ27014.pdf.
Full text
Plain, Simon E. M. "Bit rate scalability in audio coding". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0034/MQ64243.pdf.
Full text
Liatsos, Vassilios. "Scalability in planning with limited resources". Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395939.
Full text
Lilley, Jeremy (Jeremy Joseph) 1977. "Scalability in an International Naming System". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86540.
Testo completoIncludes bibliographical references (leaves 83-85).
by Jeremy Lilley.
M.Eng.
Boulgakov, Alexandre. "Improving scalability of exploratory model checking". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:76acb8bf-52e7-4078-ab4f-65f3ea07ba3d.
Full text
Singh, Hermanpreet. "Controlling Scalability in Distributed Virtual Environments". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/20372.
Testo completoPh. D.
Venkatachalam, Logambigai. "Scalability of Stepping Stones and Pathways". Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32326.
Testo completoMaster of Science
Clark, Jonathan. "Understanding scalability in distributed ledger technology". Master's thesis, Faculty of Commerce, 2021. http://hdl.handle.net/11427/32578.
Testo completoJogalekar, Prasad P. (Prasad Prabhakar) Carleton University Dissertation Engineering Systems and Computer. "Scalability analysis framework for distributed systems". Ottawa, 1997.
Search for full text
Alpert, Cirrus, Michaela Turkowski, and Tahiya Tasneem. "Scalability solutions for automated textile sorting : a case study on how dynamic capabilities can overcome scalability challenges". Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-26373.
Testo completoMarfia, Gustavo. "P2P vehicular applications mobility, fairness and scalability /". Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1998391911&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
Testo completoMiranda, Bueno Alberto. "Scalability in extensible and heterogeneous storage systems". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/279389.
Full text
The evolution of computing systems has brought an exponential growth in data volumes, pushing the capacity of current storage architectures to organize and access information to its limit. With data creation growing at an estimated rate of 40-60% per year, storage infrastructures require increasingly scalable data distributions that can adapt to this growth with adequate performance. To provide this performance, large-scale storage systems use RAID5 or RAID6 aggregations connected by high-speed networks such as FibreChannel or SAS. Unfortunately, the performance of today's dominant technology, the magnetic disk, is not improving fast enough to sustain such explosive growth. Meanwhile, predictions suggest that solid-state devices, the successors of the current technology, will not replace magnetic disks for another 5-10 years: although their performance is far superior, the NAND industry would need to invest hundreds of millions of dollars to build enough fabrication plants to satisfy the expected demand. Beyond the problems derived from technical and mechanical limitations, the massive growth of data poses further challenges: the most flexible way to build a storage infrastructure is to use groups of devices that can be grown either by adding new devices or by replacing older ones, transparently increasing the system's capacity and performance. This solution, however, requires data distributions that can adapt to these topology changes and exploit the potential performance the hardware offers. Such distributions should be able to rebuild data placement to accommodate new devices, extracting their maximum performance and offering a balanced workload.
An unsuitable distribution may fail to make effective use of the additional capacity or performance offered by new devices, causing load-balancing problems such as bottlenecks or underutilization. Moreover, massive storage systems will inevitably be built from heterogeneous hardware: as capacity and performance requirements grow, new devices must be added to support the demand, but the added devices are unlikely to have the same capacity or performance as those already installed. Furthermore, when disks fail they are replaced with faster, higher-capacity ones, since it is not always easy (or cheap) to find a particular model. In the long run, any large-scale storage architecture will consist of a myriad of different devices. The title of this thesis, "Scalability in Extensible and Heterogeneous Storage Systems", refers to our contributions to the search for scalable data distributions that can adapt to growing volumes of information. The first contribution is the design of a scalable distribution that can adapt to hardware changes by redistributing only the minimum needed to maintain a balanced workload. In the second contribution, we conduct a comparative study of the impact of pseudo-random number generators on the performance and quality of pseudo-random data distributions, and we show that a poor choice of generator can degrade the quality of the strategy. The third contribution is an analysis of long-term data access patterns in traces of real systems, to determine whether it is possible to offer high performance and a good distribution with less than the minimum rebalancing.
In the final contribution, we apply the knowledge gained in this study to design an extensible RAID architecture that can adapt to changes in the number of devices without migrating large volumes of data, and we show that it can be competitive with today's ideal RAID distributions, with a capacity penalty of only 1.28%.
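The first contribution's goal, adapting to hardware changes while redistributing only the minimum, is the same goal that classic consistent hashing pursues. The sketch below uses that generic technique (not the thesis's actual algorithm) to show that adding a fourth disk relocates only roughly a quarter of the blocks, instead of reshuffling everything:

```python
import hashlib
from bisect import bisect

def _h(s):
    # Stable hash so results are reproducible across runs.
    return int(hashlib.sha256(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring with virtual nodes for balance."""
    def __init__(self, devices, vnodes=64):
        self.points = sorted(
            (_h(f"{d}#{v}"), d) for d in devices for v in range(vnodes)
        )

    def locate(self, key):
        # The block lives on the first ring point at or after its hash.
        keys = [p for p, _ in self.points]
        i = bisect(keys, _h(key)) % len(self.points)
        return self.points[i][1]

before = Ring(["disk0", "disk1", "disk2"])
after = Ring(["disk0", "disk1", "disk2", "disk3"])
blocks = [f"block-{i}" for i in range(10000)]
moved = sum(before.locate(b) != after.locate(b) for b in blocks)
print(f"{moved / len(blocks):.1%} of blocks moved")  # roughly 1/4
```

Only the keys that now fall on the new device move; everything else keeps its location, which is the "redistribute only the minimum" property in miniature.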
Berg, Hans Inge. "Simulation of Performance Scalability in Pervasive Systems". Thesis, Norwegian University of Science and Technology, Department of Telematics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10080.
Full text
As increasingly more services and devices become integrated into pervasive systems, future network topologies will be vastly more sophisticated, with numerous heterogeneous devices interconnected. Integrating a new service into this already complex network topology and traffic can give unwanted results if the functional blocks (applets) of a service are not placed at the best-suited locations (devices). This thesis looks into the performance and scalability issues that arise when there are multiple candidate locations in which to run an applet. We define a modelling framework taking into consideration system usage, network loads, device loads, overloads, timing requirements, and propagation delays, to mention some factors. In this framework we can set up our own scenarios with user patterns and the number of users in the system. The framework is written in Simula. From its output we can improve the system or the applets to improve overall traffic flow and resource usage. The framework is run on a total of 8 different scenarios based on an airport usage model. We have 6 static applets residing in their own devices and one dynamic applet for which we try to find the best location within a predefined network topology. The number of users can be set to a static amount, or it can change dynamically from hour to hour. The results produced give a better picture of the whole system working together, and based on them it is possible to determine the best-suited applet location.
Rodal, Morten. "Scalability of seismic codes on computational clusters". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9145.
Full text
Xu, Donghua. "Scalability and Composability Techniques for Network Simulation". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10450.
Testo completoVieira, Joana. "Scalability Performance of Ericsson Radio Dot System". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177163.
Full text
Operators previously deployed indoor solutions for coverage reasons. As traffic volumes grow and more indoor hotspots appear, providing capacity also becomes a goal for in-building networks, above all at the expense of LTE bit-rate promises. Network vendors are aware of this reality, and multiple indoor systems have been launched, such as small cells, active DAS, and in particular the Ericsson Radio Dot System. A significant factor dictating a system's ability to meet future demands is scalability, whether in coverage area or in capacity. The purpose of this thesis is to evaluate the performance of the Radio Dot System along these dimensions: the factors limiting capacity and coverage are evaluated, followed by a cost analysis. Furthermore, single-operator deployment scenarios are discussed from a business perspective. For the cases evaluated, the Radio Dot System provides both LTE and WCDMA coverage and capacity indoors for a range of buildings considered medium-sized to very large. A trade-off between network components and bandwidth also offers some flexibility regarding spectrum. In the scenarios analyzed, the Radio Dot System additionally has a cost advantage over femtocells and macro outside-in coverage. As a single-operator system, its deployment opportunities are currently limited to medium-sized enterprise customers. If the use of the unlicensed spectrum bands that some countries have released takes off, more opportunities will arise for single-operator in-building systems.
Messing, Andreas, and Henrik Rönnholm. "Scalability of dynamic locomotion for computer games". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166424.
Full text
Dynamic locomotion in computer games is becoming increasingly common, but is still in its infancy. In this report we investigate whether oscillators in the form of a CPG (central pattern generator) system can be used as a game technique for generating dynamic locomotion in real time. We tested this by using an industry-grade physics simulator to simulate a biped and a quadruped (a humanoid and a salamander) and measuring quality, time consumption, and memory usage. The experiments show that a biped needs some kind of balance system to walk longer distances, while the salamander managed to walk with only minor defects. Memory allows several thousand instances, while time consumption allows up to a couple of hundred instances. This leads to the conclusion that it is a cheap system, suitable where the number of instances matters more than quality.
Hussain, Shahid, and Hassan Shabbir. "Directory scalability in multi-agent based systems". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3110.
Testo completoGottemukkala, Vibby. "Scalability issues in distributed and parallel databases". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8176.
Testo completoJiang, Tianji. "Accommodating heterogeneity and scalability for multicast communication". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/8190.
Testo completoTambouris, Efthimios. "Performance and scalability analysis of parallel systems". Thesis, Brunel University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341665.
Testo completoDjugash, Joseph A. "Geolocation with Range: Robustness, Efficiency and Scalability". Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/63.
Testo completoRotsos, Charalampos. "Improving network extensibility and scalability through SDN". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709033.
Testo completoKonstantakopoulos, Theodoros K. 1977. "Energy scalability of on-chip interconnection networks". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40315.
Testo completoThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 191-197).
On-chip interconnection networks (OCN) such as point-to-point networks and buses form the communication backbone in multiprocessor systems-on-a-chip, multicore processors, and tiled processors. OCNs consume significant portions of a chip's energy budget, so their energy analysis early in the design cycle becomes important for architectural design decisions. Although innumerable studies have examined OCN implementation and performance, there have been few energy analysis studies. This thesis develops an analytical framework for energy estimation in OCNs, for any given topology and arbitrary communication patterns, and presents OCN energy results based on both analytical communication models and real network traces from applications running on a tiled multicore processor. This thesis is the first work to address communication locality in analyzing multicore interconnect energy and to use real multicore interconnect traces extensively. The thesis compares the energy performance of point-to-point networks with buses for varying degrees of communication locality. The model accounts for wire length, switch energy, and network contention. This work is the first to examine network contention from the energy standpoint.
The thesis presents a detailed analysis of the energy costs of a switch and shows that the estimated values for channel energy, switch control logic energy, and switch queue buffer energy are 34.5pJ, 17pJ, and 12pJ, respectively. The results suggest that a one-dimensional point-to-point network results in approximately 66% energy savings over a bus for 16 or more processors, while a two-dimensional network saves over 82%, when the processors communicate with each other with equal likelihood. The savings increase with locality. Analysis of the effect of contention on OCNs for the Raw tiled microprocessor reports a maximum energy overhead of 23% due to resource contention in the interconnection network.
by Theodoros K. Konstantakopoulos.
Ph.D.
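The per-switch figures quoted in the abstract (34.5 pJ, 17 pJ, and 12 pJ) can be combined into a back-of-the-envelope per-flit energy estimate. The linear per-hop model below is my simplification for illustration, not the thesis's full analytical framework:

```python
# Per-hop energy costs from the abstract, in picojoules.
CHANNEL_PJ = 34.5   # channel traversal
CONTROL_PJ = 17.0   # switch control logic
BUFFER_PJ = 12.0    # queue buffer access

def flit_energy_pj(hops):
    """Energy for one flit crossing `hops` switches in a point-to-point OCN,
    assuming each hop pays the channel, control, and buffer costs once."""
    return hops * (CHANNEL_PJ + CONTROL_PJ + BUFFER_PJ)

# Locality matters: a neighbour hop is cheap, a cross-chip route is not.
print(flit_energy_pj(1))  # 63.5
print(flit_energy_pj(4))  # 254.0
```

Because total energy grows linearly with hop count in this model, communication locality directly translates into energy savings, which is the trend the abstract reports.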
Xing, Kerry (Kerry K. ). "Cilkprof : a scalability profiler for Cilk programs". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91879.
Testo completoCataloged from PDF version of thesis.
Includes bibliographical references (pages 51-53).
This thesis describes the design and implementation of Cilkprof, a profiling tool that helps programmers diagnose scalability problems in their Cilk programs. Cilkprof provides in-depth information about the scalability of programs without adding excessive overhead, and its output can be used to find scalability bottlenecks in the user's code. Cilkprof makes profiling measurements at the fork and join points of computations, which typically limits the amount of overhead incurred by the profiler. In addition, despite recording in-depth information, Cilkprof does not generate the large log files typical of trace-based profilers, and the profiling algorithm incurs only constant amortized overhead per measurement. Cilkprof slows down serial program execution by a factor of about 10 in the common case, on a well-coarsened parallel program; the slowdown is reasonable for the amount of information gained from the profiling. Finally, the approach taken by Cilkprof enables the creation of an API that can allow users to specify their own profiling code without having to change the Cilk runtime.
by Kerry Xing.
M. Eng.
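Cilkprof measures at fork and join points because the work/span accounting of a fork-join program composes exactly there. A toy sketch of that accounting (my illustration of the general work-span model, not Cilkprof's actual implementation):

```python
def work_span(node):
    """Return (work, span) of a fork-join computation tree.

    A node is either a leaf cost (a number) or a ('spawn', child, ...) tuple
    whose children run in parallel: work adds up across children, while
    span takes the slowest branch.
    """
    if isinstance(node, (int, float)):
        return node, node
    _, *children = node
    results = [work_span(c) for c in children]
    work = sum(w for w, _ in results)
    span = max(s for _, s in results)
    return work, span

# A hypothetical computation: three parallel branches, one of which
# spawns three more parallel sub-tasks.
tree = ("spawn", 10, ("spawn", 5, 5, 5), 20)
w, s = work_span(tree)
print(w, s, w / s)  # work=45, span=20, parallelism=2.25
```

The ratio work/span bounds the achievable parallelism on any number of processors, which is why measuring both at fork/join boundaries is enough to diagnose scalability bottlenecks.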
Pak, Nikita. "Automation and scalability of in vivo neuroscience". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119094.
Testo completoCataloged from PDF version of thesis.
Includes bibliographical references (pages 121-124).
Many in vivo neuroscience techniques are limited in scale and suffer from inconsistencies because of their reliance on human operators for critical tasks. Ideally, automation would yield repeatable and reliable experimental procedures, and precision engineering would let us perform more complex experiments by taking novel approaches to existing problems. Two tasks that would see great improvement through automation and scalability are gaining access to the brain and imaging neuronal activity. In this thesis, I describe the development of two novel tools that increase the precision, repeatability, and scale of in vivo neural experimentation. The first tool is a robot that automatically performs craniotomies in mice and other mammals by sending an electrical signal through a drill and measuring the voltage drop across the animal. A well-characterized increase in conductance occurs after skull breakthrough, because the meninges have lower impedance than the bone of the skull. This robot gives us access to the brain without damaging the tissue, a critical step in many neuroscience experiments. The second tool is a new type of microscope that can capture high-resolution three-dimensional volumes at the speed of the camera frame rate, with isotropic resolution. This microscope is novel in that it uses two orthogonal views of the sample to create a higher-resolution image than is possible with a single view. Increased resolution will potentially allow us to record neuronal activity that we would otherwise miss because of the inability to distinguish two nearby neurons.
by Nikita Pak.
Ph. D.
Wong, Jeremy Ng 1981. "Modeling the scalability of acyclic stream programs". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/18004.
Testo completoIncludes bibliographical references (p. 109-110).
Despite the fact that the streaming application domain is becoming increasingly widespread, few studies have focused specifically on the performance characteristics of stream programs. We introduce two models by which the scalability of stream programs can be predicted to some degree of accuracy. This is accomplished by testing a series of stream benchmarks on our numerical representations of the two models. These numbers are then compared to actual speedups obtained by running the benchmarks through the Raw machine and a Magic network. Using the metrics, we show that stateless acyclic stream programs benefit considerably from data parallelization. In particular, programs with low communication data rates experience up to a tenfold speedup when parallelized to a reasonable margin, while those with high communication data rates still experience approximately a twofold speedup. We find that the model that takes synchronization communication overhead into account, in addition to a cost proportional to the communication rate of the stream, provides the highest predictive accuracy.
by Jeremy Ng Wong.
M.Eng.
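The abstract's best-performing model charges the parallelized program both a cost proportional to its communication rate and a synchronization overhead. A toy sketch of such a cost model (not the thesis's actual equations; every parameter name here is an assumption) shows why low-data-rate programs scale near-linearly while high-data-rate ones plateau:

```python
# Hypothetical speedup model: parallel time = divided work + a cost
# proportional to the communication rate + a fixed synchronization overhead.

def predicted_speedup(work, n_workers, comm_rate, comm_cost_per_item, sync_overhead):
    """Ratio of sequential time to modeled parallel time."""
    sequential_time = work
    parallel_time = (work / n_workers
                     + comm_rate * comm_cost_per_item
                     + sync_overhead)
    return sequential_time / parallel_time
```

With 10 workers, a low communication rate leaves the speedup near 10x, while a rate whose cost dominates the divided work drags it toward 1x, mirroring the tenfold-versus-twofold gap the abstract reports.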
Davies, Neil J. "The performance and scalability of parallel systems". Thesis, University of Bristol, 1994. http://hdl.handle.net/1983/964dec41-9a36-44ea-9cfc-f6d1013fcd12.
Ghaffari, Amir. "The scalability of reliable computation in Erlang". Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6789/.
Stromatias, Evangelos. "Scalability and robustness of artificial neural networks". Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/scalability-and-robustness-of-artificial-neural-networks(b73b3f77-2bc3-4197-bd0f-dc7501b872cb).html.
Bernatskiy, Anton. "Improving Scalability of Evolutionary Robotics with Reformulation". ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/957.
Desmouceaux, Yoann. "Network-Layer Protocols for Data Center Scalability". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX011/document.
With the growth of demand for computing resources, data center architectures are growing both in scale and in complexity. In this context, this thesis takes a step back from traditional network approaches and shows that providing generic primitives directly within the network layer is an effective way to improve the efficiency of resource usage and to decrease network traffic and management overhead. Using two recently introduced network architectures, Segment Routing (SR) and Bit-Indexed Explicit Replication (BIER), network-layer protocols are designed and analyzed to provide three high-level functions: (1) task mobility, (2) reliable content distribution, and (3) load-balancing. First, task mobility is achieved by using SR to provide a zero-loss virtual machine migration service. This opens the opportunity to study how to orchestrate task placement and migration while aiming at (i) maximizing the inter-task throughput, (ii) maximizing the number of newly-placed tasks, and (iii) minimizing the number of tasks to be migrated. Second, reliable content distribution is achieved by using BIER to provide a reliable multicast protocol, in which retransmissions of lost packets are targeted at the precise set of destinations that missed the packet, incurring minimal traffic overhead. To decrease the load on the source link, this is then extended to enable retransmissions by local peers from the same group, with SR as a helper to find a suitable retransmission candidate. Third, load-balancing is achieved by using SR to steer queries through several candidate application instances, each of which takes a local decision on whether to accept the query, achieving better fairness than centralized approaches. The feasibility of a hardware implementation of this approach is investigated, and a solution using covert channels to transparently convey information to the load-balancer is implemented for a state-of-the-art programmable network card. Finally, the possibility of providing autoscaling as a network service is investigated: by letting queries traverse a fixed chain of application instances using SR, autoscaling is triggered by the last instance, depending on its local state.
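The key property of BIER that the reliable-multicast design exploits is that a packet's destination set is a bitmask, one bit per destination, so a retransmission can be addressed to exactly the receivers that missed it. A toy model of that targeting step, with illustrative names only (not the thesis's protocol):

```python
# Toy BIER-style targeting: destinations are bit positions in a mask;
# a retransmission is addressed only to the bits not yet acknowledged.

def ack_bitmask(acked):
    """Bitmask with one bit set per destination index that has ACKed."""
    mask = 0
    for dest in acked:
        mask |= 1 << dest
    return mask

def retransmit_targets(n_dests, acked):
    """Destination indices still missing the packet."""
    all_dests = (1 << n_dests) - 1
    missing = all_dests & ~ack_bitmask(acked)
    return [d for d in range(n_dests) if missing & (1 << d)]
```

Because the retransmission carries only the `missing` bitmask, it traverses each link at most once toward exactly the lossy receivers, which is where the minimal-traffic-overhead claim comes from.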
Scotece, Domenico <1988>. "Edge Computing for Extreme Reliability and Scalability". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9433/5/Tesi-Scotece.pdf.
Bordes, Philippe. "Adapting video compression to new formats". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S003/document.
New video codecs should be designed with a high level of adaptability in terms of network bandwidth, format scalability (size, color space…), and backward compatibility. This thesis was carried out in this context and within the scope of the HEVC standard development. In the first part, several video coding adaptations that exploit the signal properties and take place at bit-stream creation are explored. The study of improved frame partitioning for inter prediction allows a better fit to the actual motion boundaries and shows significant gains. This principle is further extended to long-term motion modeling with trajectories. We also show how cross-component correlation statistics and the luminance change between pictures can be exploited to increase coding efficiency. In the second part, post-creation stream adaptations relying on intrinsic stream flexibility are investigated. In particular, a new color gamut scalability scheme addressing color space adaptation is proposed. From this work, we derive color remapping metadata and an associated model to provide a low-complexity, general-purpose color remapping feature. We also explore adaptive resolution coding and how to extend a scalable codec to stream-switching applications. Several of the described techniques have been proposed to MPEG. Some of them have been adopted in the HEVC standard and in the UHD Blu-ray Disc. Various techniques for adapting video compression to the content characteristics and to the distribution use cases have been considered; they can be selected or combined depending on the application requirements.
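Color remapping metadata of the kind described above typically boils down to small per-component lookup tables that the decoder interpolates to map samples into the target color volume. A minimal sketch of that interpolation idea, assuming a 1D piecewise-linear table per component (the pivot values and function name are illustrative, not the thesis's or HEVC's actual syntax):

```python
# Hedged sketch: remap one normalized color component through a small
# piecewise-linear lookup table of (input, output) pivots sorted by input.

def remap_component(value, lut):
    """Map `value` in [0, 1] through the piecewise-linear table `lut`."""
    for (x0, y0), (x1, y1) in zip(lut, lut[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)  # position within this segment
            return y0 + t * (y1 - y0)     # linear interpolation
    return lut[-1][1]  # clamp values beyond the last pivot
```

The appeal of this representation is that a handful of pivots per component is cheap to signal and to evaluate per sample, which is what makes the feature low-complexity and general-purpose.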