Dissertations / Theses on the topic 'Scalable computing'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Scalable computing.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Fleming, Kermin Elliott Jr. "Scalable reconfigurable computing leveraging latency-insensitive channels." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79212.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 190-197).
Traditionally, FPGAs have been confined to the limited role of small, low-volume ASIC replacements and circuit emulators. However, continued Moore's law scaling has given FPGAs new life as accelerators for applications that map well to fine-grained parallel substrates. Examples of such applications include processor modelling, compression, and digital signal processing. Although FPGAs continue to increase in size, some interesting designs still fail to fit into a single FPGA. Many tools exist that partition RTL descriptions across FPGAs. Unfortunately, existing tools have low performance due to the inefficiency of maintaining the cycle-by-cycle behavior of RTL among discrete FPGAs. These tools are unsuitable for use in FPGA program acceleration, as the purpose of an accelerator is to make applications run faster. This thesis presents latency-insensitive channels, a language-level mechanism by which programmers express points in their design at which the cycle-by-cycle behavior of the design may be modified by the compiler. By decoupling the timing of portions of the RTL from the high-level function of the program, designs may be mapped to multiple FPGAs without suffering the performance degradation observed in existing tools. This thesis demonstrates, using a diverse set of large designs, that FPGA programs described in terms of latency-insensitive channels obtain significant gains in design feasibility, compilation time, and run-time when mapped to multiple FPGAs.
by Kermin Elliott Fleming, Jr.
Ph.D.
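The abstract above lends itself to a small illustration. The following is a minimal software analogue in plain Python (not the thesis's actual hardware toolchain or API) of a latency-insensitive channel: because producer and consumer interact only through enqueue/dequeue with back-pressure, extra buffering or stalls, such as those a partitioner would introduce when a link crosses FPGA boundaries, cannot change the functional result.

```python
# Illustrative software analogue of a latency-insensitive channel (assumption:
# this is a sketch of the general concept, not the thesis's implementation).
import collections
import random


class LatencyInsensitiveChannel:
    """FIFO with back-pressure; its depth and delay are implementation details."""

    def __init__(self, depth=4):
        self.depth = depth
        self.fifo = collections.deque()

    def can_enq(self):
        return len(self.fifo) < self.depth

    def enq(self, value):
        assert self.can_enq()
        self.fifo.append(value)

    def can_deq(self):
        return len(self.fifo) > 0

    def deq(self):
        return self.fifo.popleft()


def run(depth):
    chan = LatencyInsensitiveChannel(depth)
    inputs, outputs = list(range(8)), []
    pending = list(inputs)
    # The cycle-by-cycle schedule changes with depth and random stalls, but the
    # stream of values seen by the consumer is identical; this is the property
    # that lets a compiler re-time a link split across FPGAs.
    while len(outputs) < len(inputs):
        if pending and chan.can_enq():
            chan.enq(pending.pop(0) * 2)        # producer "module"
        if chan.can_deq() and random.random() < 0.5:
            outputs.append(chan.deq())          # consumer "module" stalls freely
    return outputs


assert run(depth=1) == run(depth=16) == [2 * x for x in range(8)]
```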
Spagnuolo, Carmine. "Scalable computational science." Doctoral thesis, Universita degli studi di Salerno, 2017. http://hdl.handle.net/10556/2581.
Full text
Computational science, also known as scientific computing, is a rapidly growing field that uses advanced computing to solve complex problems. This new discipline combines technologies, modern computational methods and simulations to address problems too complex to be reliably predicted by theory alone and too dangerous or expensive to be reproduced in laboratories. Successes in computational science over the past twenty years have driven demand for supercomputing, both to improve the performance of the solutions and to allow the models to grow in size and quality. From a computer scientist's perspective, it is natural to distribute the computation required to study a complex system among multiple machines: it is well known that the speed of single-processor computers is reaching its physical limits. For these reasons, parallel and distributed computing has become the dominant paradigm for computational scientists who need the latest developments in computing resources to solve their problems, and "Scalability" has been recognized as the central challenge in this science. In this dissertation the design and implementation of Frameworks, Parallel Languages and Architectures that improve the state of the art in Scalable Computational Science are discussed.
Frameworks. The proposal of D-MASON, a distributed version of MASON, a well-known and popular Java toolkit for writing and running Agent-Based Simulations (ABSs). D-MASON introduces a framework-level parallelization so that scientists who use the framework (e.g., a domain expert with limited knowledge of distributed programming) need only be minimally aware of such distribution. Development of D-MASON began in 2011, with the main purpose of overcoming the limits of MASON's sequential computation by using distributed computing. D-MASON goes beyond MASON in terms of simulation size (number of agents and complexity of agent behaviors), and it also reduces the running time of simulations written in MASON. For this reason, one of the most important features of D-MASON is that it requires only a limited number of changes to MASON code in order to execute simulations on distributed systems. D-MASON, based on the Master-Worker paradigm, was initially designed for heterogeneous computing in order to exploit unused computational resources in labs, but it also provides functionality for execution on homogeneous systems (such as HPC systems) as well as cloud infrastructures. The architecture of D-MASON is presented in the following three papers, which describe all D-MASON layers:
• Cordasco G., Spagnuolo C. and Scarano V. Toward the new version of D-MASON: Efficiency, Effectiveness and Correctness in Parallel and Distributed Agent-based Simulations. 1st IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems. IEEE International Parallel & Distributed Processing Symposium 2016.
• Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. Bringing together efficiency and effectiveness in distributed simulations: the experience with D-MASON. SIMULATION: Transactions of The Society for Modeling and Simulation International, June 11, 2013.
• Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. A Framework for distributing Agent-based simulations. Ninth International Workshop Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms of Euro-Par 2011 conference.
Much effort has been devoted, in the Communication Layer, to improving communication efficiency on homogeneous systems. D-MASON is based on the Publish/Subscribe (PS) communication paradigm and uses a centralized message broker (based on the Java Message Service standard) to deal with heterogeneous systems. Communication on homogeneous systems uses the Message Passing Interface (MPI) standard and is also based on PS. In order to use MPI within Java, D-MASON uses a Java binding of MPI. Unfortunately, this binding is relatively new and does not provide all MPI functionality. Several communication strategies were therefore designed, implemented and evaluated. These strategies were presented in two papers:
• Cordasco G., Milone F., Spagnuolo C. and Vicidomini L. Exploiting D-MASON on Parallel Platforms: A Novel Communication Strategy. 2nd Workshop on Parallel and Distributed Agent-Based Simulations of Euro-Par 2014 conference.
• Cordasco G., Mancuso A., Milone F. and Spagnuolo C. Communication strategies in Distributed Agent-Based Simulations: the experience with D-MASON. 1st Workshop on Parallel and Distributed Agent-Based Simulations of Euro-Par 2013 conference.
D-MASON also provides mechanisms for visualization and data gathering in distributed simulations (available in the Visualization Layer). These solutions are presented in the paper:
• Cordasco G., De Chiara R., Raia F., Scarano V., Spagnuolo C. and Vicidomini L. Designing Computational Steering Facilities for Distributed Agent Based Simulations. Proceedings of the ACM SIGSIM Conference on Principles of Advanced Discrete Simulation 2013.
In distributed ABSs, one of the most complex problems is the partitioning and balancing of the computation. D-MASON provides, in the Distributed Simulation Layer, mechanisms for partitioning and dynamically balancing the computation. D-MASON uses a field partitioning mechanism to divide the computation across the distributed system. The field partitioning mechanism provides a nice trade-off between balancing and communication effort. Nevertheless, many ABSs are not based on 2D or 3D fields but on a communication graph that models the relationships among the agents. In this case the field partitioning mechanism does not ensure good simulation performance. Therefore D-MASON also provides specific mechanisms to manage simulations that use a graph to describe agent interactions. These solutions were presented in the following publication:
• Antelmi A., Cordasco G., Spagnuolo C. and Vicidomini L. On Evaluating Graph Partitioning Algorithms for Distributed Agent Based Models on Networks. 3rd Workshop on Parallel and Distributed Agent-Based Simulations of Euro-Par 2015 conference.
The field partitioning mechanism, intuitively, enables the mono- and bi-dimensional partitioning of a Euclidean space. This approach is also known as uniform partitioning. In some cases, however, such as simulations of urban areas using a Geographical Information System (GIS), uniform partitioning degrades simulation performance because of the unbalanced distribution of the agents on the field and, consequently, on the computational resources. In such a case, D-MASON provides a non-uniform partitioning mechanism (inspired by the Quad-Tree data structure), presented in the following papers:
• Lettieri N., Spagnuolo C. and Vicidomini L. Distributed Agent-based Simulation and GIS: An Experiment With the dynamics of Social Norms. 3rd Workshop on Parallel and Distributed Agent-Based Simulations of Euro-Par 2015 conference.
• Cordasco G., Spagnuolo C. and Scarano V. Work Partitioning on Parallel and Distributed Agent-Based Simulation. IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems of International Parallel & Distributed Processing Symposium, 2017.
The latest version of D-MASON provides a web-based System Management layer for easier use of D-MASON on cloud infrastructures. Using it, D-MASON on the Amazon EC2 cloud infrastructure was compared, in terms of speed and cost, against D-MASON in an HPC environment. The obtained results and the new System Management Layer are presented in the following paper:
• Carillo M., Cordasco G., Serrapica F., Spagnuolo C., Szufel P. and Vicidomini L. D-Mason on the Cloud: an Experience with Amazon Web Services. 4th Workshop on Parallel and Distributed Agent-Based Simulations of Euro-Par 2016 conference.
Parallel Languages. The proposal of an architecture that enables code running on a Java Virtual Machine (JVM) to be invoked from code written in C. Swift/T is a parallel scripting language for programming highly concurrent applications in parallel and distributed environments. Swift/T is the reimplemented version of the Swift language, with a new compiler and runtime. Swift/T improves on Swift, allowing scalability beyond 500 tasks per second, load balancing, distributed data structures, and dataflow-driven concurrent task execution. Swift/T provides an interesting feature: the ability to easily and natively call other languages (such as Python, R, Julia and C) by using special language functions named leaf functions. Considering the current trend of some supercomputing vendors (such as Cray Inc.) to support Java Virtual Machines (JVMs) on their processors, it is desirable to provide methods to call Java code from Swift/T as well. In particular, it is attractive to be able to call JVM scripting languages such as Clojure, Scala, Groovy and JavaScript. For this purpose, a C binding to instantiate and call a JVM was designed. This binding is used in Swift/T (since version 1.0) to develop leaf functions that call Java code. The code is publicly available on the GitHub project page.
Frameworks. The proposal of two tools that exploit the computing power of parallel systems to improve the effectiveness and the efficiency of Simulation Optimization strategies. Simulation Optimization (SO) refers to the techniques studied for ascertaining the parameters of a complex model that minimize (or maximize) given criteria (one or many), which can only be computed by performing a simulation run. Due to the high dimensionality of the search space, the heterogeneity of parameters, the irregular shape and the stochastic nature of the objective evaluation function, the tuning of such systems is extremely demanding from the computational point of view. The first framework is SOF: Zero Configuration Simulation Optimization Framework on the Cloud, designed to run SO processes in the cloud. SOF is based on the Apache Hadoop infrastructure and is presented in the following paper:
• Carillo M., Cordasco G., Scarano V., Serrapica F., Spagnuolo C. and Szufel P. SOF: Zero Configuration Simulation Optimization Framework on the Cloud. Parallel, Distributed, and Network-Based Processing 2016.
The second framework is EMEWS: Extreme-scale Model Exploration with Swift/T, designed at Argonne National Laboratory (USA). EMEWS, like SOF, allows SO processes to be performed on distributed systems. Both frameworks are mainly designed for ABSs. In particular, EMEWS was tested using the ABS simulation toolkit Repast. Initially, EMEWS was not able to easily execute, out of the box, simulations written in MASON and NetLogo. This thesis presents new functionalities of EMEWS and solutions that make it easy to execute MASON and NetLogo simulations on it. The EMEWS use cases are presented in the following paper:
• J. Ozik, N. T. Collier, J. M. Wozniak and C. Spagnuolo. From Desktop To Large-scale Model Exploration with Swift/T. Winter Simulation Conference 2016.
Architectures. The proposal of an open-source, extensible architecture for the visualization of data in HTML pages, exploiting distributed web computing. Following the Edge-centric Computing paradigm, data visualization is performed on the edge side, ensuring data trustworthiness, privacy, scalability and dynamic data loading. The architecture has been exploited in the Social Platform for Open Data (SPOD). The proposed architecture has also appeared in the following papers:
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. A Scalable Data Web Visualization Architecture. Parallel, Distributed, and Network-Based Processing 2017.
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An Architecture for Social Sharing and Collaboration around Open Data Visualisation. In Poster Proc. of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing 2016.
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An extensible architecture for an ecosystem of visualization web-components for Open Data. Maximising Interoperability Workshop: core vocabularies, location-aware data and more, 2015. [edited by author]
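For readers unfamiliar with the field partitioning idea mentioned above, the sketch below is a minimal illustration in plain Python (it is not D-MASON's actual Java API): the 2D field is cut into a uniform grid of cells, each cell is owned by one worker, and an agent belongs to the worker whose cell contains its position. Neighboring workers then only need to exchange agents near shared borders, which is the balance-versus-communication trade-off the abstract refers to.

```python
# Minimal sketch of uniform field partitioning for a distributed ABS
# (assumption: illustrative only, not D-MASON's implementation).
def uniform_partition(width, height, px, py):
    """Map each worker cell (i, j) to the rectangle of the field it owns."""
    cw, ch = width / px, height / py
    return {(i, j): (i * cw, j * ch, (i + 1) * cw, (j + 1) * ch)
            for i in range(px) for j in range(py)}


def owner(x, y, width, height, px, py):
    """Return the worker cell responsible for an agent located at (x, y)."""
    i = min(int(x / (width / px)), px - 1)
    j = min(int(y / (height / py)), py - 1)
    return (i, j)


# Example: a 100x100 field split among 2x2 workers.
cells = uniform_partition(100, 100, 2, 2)
agents = [(10.0, 5.0), (75.0, 20.0), (40.0, 90.0)]
for x, y in agents:
    print((x, y), "->", owner(x, y, 100, 100, 2, 2))
```

A non-uniform (e.g., Quad-Tree-inspired) scheme refines this by subdividing only the crowded cells, which is what the GIS scenarios above call for.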
何世全 and Sai-chuen Ho. "Single I/O space for scalable cluster computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222614.
Full text
Eltony, Amira M. (Amira Madeleine). "Scalable trap technology for quantum computing with ions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99822.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [187]-214).
Quantum computers employ quantum mechanical effects, such as superposition and entanglement, to process information in a distinctive way, with advantages for simulation and for new, and in some cases more-efficient, algorithms. A quantum bit is a two-level quantum system, such as the electronic or spin state of a trapped atomic ion. Physics experiments with single atomic ions acting as "quantum bits" have demonstrated many of the ingredients for a quantum computer. But to perform useful computations these experimental systems will need to be vastly scaled up. Our goal is to engineer systems for large-scale quantum computation with trapped ions. Building on established techniques of microfabrication, we create ion traps incorporating exotic materials and devices, and we investigate how quantum algorithms can be efficiently mapped onto physical trap hardware. An existing apparatus built around a bath cryostat is modified for characterization of novel ion traps and devices at cryogenic temperatures (4 K and 77 K). We demonstrate an ion trap on a transparent chip with an integrated photodetector, which allows for scalable, efficient state detection of a quantum bit. To understand and better control electric field noise (which limits gate fidelities), we experiment with coating trap electrodes in graphene. We develop traps compatible with standard CMOS manufacturing to leverage the precision and scale of this platform, and we design a Single Instruction Multiple Data (SIMD) algorithm for implementing the quantum Fourier transform (QFT) using a distributed array of ion chains. Lastly, we explore how to bring it all together to create an integrated trap module from which a scalable architecture can be assembled.
by Amira M. Eltony.
Ph. D.
Rrustemi, Alban. "Computing surfaces : a platform for scalable interactive displays." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612533.
Full text
Allcock, David Thomas Charles. "Surface-electrode ion traps for scalable quantum computing." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.559722.
Full text
Ho, Sai-chuen. "Single I/O space for scalable cluster computing /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21841512.
Full text
Pang, Xiaolin. "Scalable Algorithms for Outlier Detection." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11743.
Full text
Tran, Viet-Trung. "Scalable data-management systems for Big Data." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00920432.
Full text
Surapaneni, Chandra Sekhar Medhi Deepankar. "Dynamically organized and scalable virtual organizations in Grid computing." Diss., UMK access, 2005.
Find full text
"A thesis in computer science." Typescript. Advisor: Deepankar Medhi. Vita. Title from "catalog record" of the print edition. Description based on contents viewed March 12, 2007. Includes bibliographical references (leaves 85-87). Online version of the print edition.
Lanore, Vincent. "On Scalable Reconfigurable Component Models for High-Performance Computing." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1051/document.
Full text
Component-based programming is a programming paradigm which eases code reuse and separation of concerns. Some component models, which are said to be "reconfigurable", allow the modification at runtime of an application's structure. However, these models are not suited to High-Performance Computing (HPC) as they rely on non-scalable mechanisms. The goal of this thesis is to provide models, algorithms and tools to ease the development of component-based reconfigurable HPC applications. The main contribution of the thesis is the DirectMOD component model, which eases development and reuse of distributed transformations. In order to improve on this core model in other directions, we have also proposed:
• the SpecMOD formal component model, which allows automatic specialization of hierarchical component assemblies and provides high-level software engineering features;
• mechanisms for efficient fine-grain reconfiguration for AMR applications, an important application class in HPC.
An implementation of DirectMOD, called DirectL2C, has been developed in order to implement a series of benchmarks to evaluate our approach. Experiments on HPC architectures show that our approach scales. Moreover, a quantitative analysis of the benchmark codes shows that our approach is compact and eases reuse.
Albaiz, Abdulaziz (Abdulaziz Mohammad). "MPI-based scalable computing platform for parallel numerical application." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95562.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (page 61).
Developing parallel numerical applications, such as simulators and solvers, involves a variety of challenges in dealing with data partitioning, workload balancing, data dependencies, and synchronization. Many numerical applications share the need for an underlying parallel framework for parallelization on multi-core/multi-machine hardware. In this thesis, a computing platform for parallel numerical applications is designed and implemented. The platform performs parallelization by multiprocessing over the MPI library, and serves as a layer of abstraction that hides the complexities of data distribution and inter-process communication. It also provides the essential functions that most numerical applications use, such as handling data dependencies, workload balancing, and overlapping communication and computation. The performance evaluation of the parallel platform shows that it is highly scalable for large problems.
by Abdulaziz Albaiz.
S.M.
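For context, the pattern such a platform typically abstracts away looks like the following hedged sketch (illustrative only, not the thesis's code): 1D row partitioning with ghost-cell exchange, where nonblocking MPI calls let interior computation overlap communication. It assumes mpi4py and NumPy are available.

```python
# Sketch of halo exchange with overlapped communication and computation
# (assumption: a generic MPI pattern, not the platform described above).
# Save as e.g. halo.py and run with: mpiexec -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full((4, 8), float(rank))           # this rank's block of rows
up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL
ghost_up, ghost_down = np.empty(8), np.empty(8)

# Post nonblocking halo exchange with the neighboring ranks...
reqs = [comm.Isend(local[0], dest=up),    comm.Irecv(ghost_up, source=up),
        comm.Isend(local[-1], dest=down), comm.Irecv(ghost_down, source=down)]

interior = local[1:-1].mean()                  # ...compute on interior rows meanwhile
MPI.Request.Waitall(reqs)                      # then wait before using boundary data

print(f"rank {rank}: interior mean {interior:.2f}")
```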
Helal, Ahmed Elmohamadi Mohamed. "Automated Runtime Analysis and Adaptation for Scalable Heterogeneous Computing." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96607.
Full text
Doctor of Philosophy
Current supercomputers integrate a massive number of heterogeneous compute units with varying speed, computational throughput, memory bandwidth, and memory access latency. This trend represents a major challenge to end users, as their applications have been designed from the ground up to primarily exploit homogeneous CPUs. While heterogeneous systems can deliver several orders of magnitude speedup compared to traditional CPU-based systems, end users need extensive software and hardware expertise as well as significant time and effort to efficiently utilize all the available compute resources. To streamline such a daunting process, this dissertation presents automated frameworks for analyzing and modeling the performance on parallel architectures and for transforming the execution of user applications at runtime. The proposed frameworks incorporate domain knowledge and adapt to the input data and the underlying hardware using novel static and dynamic analyses. The experimental results show the efficacy of the introduced frameworks across many important application domains, such as computational fluid dynamics (CFD), and computer-aided design (CAD). In particular, the adaptive execution approach on heterogeneous systems achieves up to an order-of-magnitude speedup over the optimized parallel implementations.
De, Guzman Ethan Paul Palisoc. "Energy Efficient Computing using Scalable General Purpose Analog Processors." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2305.
Full text
Buehrer, Gregory T. "Scalable mining on emerging architectures." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1198866625.
Full text
Paolucci, Cristian. "Prototyping a scalable Aggregate Computing cluster with open-source solutions." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15716/.
Full text
Austin, Paul Baden. "Towards a file system for a scalable parallel computing engine." Thesis, University of York, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304159.
Full text
Carrillo, Snaider. "Scalable hierarchical networks-on-chip architecture for brain-inspired computing." Thesis, Ulster University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633690.
Full text
Suresh, Visalakshmi. "Scalable and responsive real time event processing using cloud computing." Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3917.
Full text
Liu, Jiuxing. "Designing high performance and scalable MPI over InfiniBand." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1095296555.
Full text
Andersson, Filip, and Simon Norberg. "Scalable applications in a distributed environment." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3917.
Full text
Li, Dong. "Scalable and Energy Efficient Execution Methods for Multicore Systems." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26098.
Full text
Ph. D.
Mühll, Johann Rudolf Vonder. "Concept and implementation of a scalable architecture for data-parallel computing /." [S.l.] : [s.n.], 1996. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=11787.
Full text
Tashakor, Ghazal. "Scalable agent-based model simulation using distributed computing on system biology." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671332.
Full text
Agent-based modeling is a very useful computational tool for simulating complex behavior using rules at both micro and macro scales. The complexity of this type of modeling lies in defining the rules the agents follow, which determine the structural elements and the static and dynamic behavior patterns. This thesis addresses the definition of complex models of biological networks that represent cancer cells, in order to obtain behaviors under different scenarios by means of simulation and to follow the evolution of the metastatic process, for users who are not experts in computer systems. In addition, a proof of concept has been developed to incorporate dynamic network analysis techniques and machine learning into agent-based models, based on a federated simulation system, to improve the decision-making process. For this thesis, the representation of complex biological networks based on graphs has been analyzed from the simulation point of view, to investigate how to integrate the topology and functions of this type of network with an agent-based model. For this purpose, the ABM model has been used as a basis for the construction, grouping, and classification of the network elements representing the structure of a complex and scalable biological network. The simulation of complex models with multiple scales and multiple agents provides a useful tool for a scientist who is not a computing expert to execute a complex parametric model and use it to analyze scenarios or predict variations according to different patient profiles. The development has focused on an agent-based tumor model that has evolved from a simple and well-known ABM model. The variables and dynamics referenced by the Hallmarks of Cancer have been incorporated into a complex model based on graphs. This graph-based model is used to represent different levels of interaction and dynamics within cells in the evolution of a tumor, with different degrees of representation (at the molecular/cellular level). A simulation environment and workflow have been created to build a complex, scalable network based on a tumor growth scenario. In this environment, dynamic techniques are applied to track the tumor network's growth under different patterns. Experimentation has been carried out using the developed simulation environment, considering the execution of models for different patient profiles, as a sample of its functionality, to calculate parameters of interest for the non-computing expert, such as the evolution of the tumor volume. The environment has been designed to discover and classify subgraphs of the agent-based tumor model so that these models can be executed on a high-performance computing system. These executions make it possible to analyze complex scenarios and different profiles of patients with tumor patterns containing a high number of cancer cells in a short time.
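To make the graph-based agent idea concrete, here is a deliberately tiny, hypothetical sketch in Python (not the thesis's model): nodes are sites, edges are possible interactions, and an occupied site may spawn a daughter cell into a free neighboring node at each step.

```python
# Toy illustration of agent growth on an interaction graph
# (assumption: a generic sketch, not the dissertation's tumor model).
import random

adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
occupied = {0}                                    # initial tumor cell at node 0


def step(adjacency, occupied, division_prob=0.6, rng=random.Random(42)):
    new = set(occupied)
    for node in occupied:
        free = [n for n in adjacency[node] if n not in new]
        if free and rng.random() < division_prob:
            new.add(rng.choice(free))             # daughter cell occupies a neighbor
    return new


for t in range(4):
    occupied = step(adjacency, occupied)
    print(f"t={t + 1}: {len(occupied)} occupied nodes -> {sorted(occupied)}")
```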
Benedicto, Kathryn Flores 1977. "Regions : a scalable infrastructure for scoped service location in ubiquitous computing." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80038.
Full text
Includes bibliographical references (leaves 108-109).
by Kathryn Flores Benedicto.
S.B. and M.Eng.
Coons, Samuel W. "Virtual thin client a scalable service discovery approach for pervasive computing /." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/anp4316.
Full text
Title from first page of PDF file. Document formatted into pages; contains xi, 68 p.; also contains graphics. Vita. Includes bibliographical references (p. 66-67).
De, Francisci Morales Gianmarco. "Big data and the web: algorithms for data intensive scalable computing." Thesis, IMT Alti Studi Lucca, 2012. http://e-theses.imtlucca.it/34/1/De%20Francisci_phdthesis.pdf.
Full text
Jarratt, Marie Claire. "Readout and Control: Scalable Techniques for Quantum Information Processing." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21572.
Full text
Drolia, Utsav. "Adaptive Distributed Caching for Scalable Machine Learning Services." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1004.
Full text
Lucernati, Romano. "Scalable and Seamless Discovery and Selection of Services in Mobile Cloud Computing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
Find full text
PIVANTI, Marcello. "A Scalable Parallel Architecture with FPGA-Based Network Processor for Scientific Computing." Doctoral thesis, Università degli studi di Ferrara, 2012. http://hdl.handle.net/11392/2389440.
Full text
Alham, Nasullah Khalid. "Parallelizing support vector machines for scalable image annotation." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5452.
Full text
Aguilar, Xavier. "Towards Scalable Performance Analysis of MPI Parallel Applications." Licentiate thesis, KTH, High Performance Computing and Visualization (HPCViz), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-165043.
Full text
QC 20150508
Langmead, Benjamin Thomas. "Highly scalable short read alignment with the Burrows-Wheeler Transform and cloud computing." College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9458.
Full text
Thesis research directed by: Dept. of Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Klenk, Benjamin [Verfasser], and Holger [Akademischer Betreuer] Fröning. "Communication Architectures for Scalable GPU-centric Computing Systems / Benjamin Klenk ; Betreuer: Holger Fröning." Heidelberg : Universitätsbibliothek Heidelberg, 2018. http://d-nb.info/1177691078/34.
Full text
Cazalas, Jonathan M. "Efficient and Scalable Evaluation of Continuous, Spatio-temporal Queries in Mobile Computing Environments." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5154.
Full text
ID: 031001567; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Title from PDF title page (viewed August 26, 2013); Thesis (Ph.D.)--University of Central Florida, 2012; Includes bibliographical references (p. 103-112).
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
Safranek, Robert J. "Enhancements to the scalable coherent interface cache protocol." PDXScholar, 1999. https://pdxscholar.library.pdx.edu/open_access_etds/3977.
Full text
Brzeczko, Albert Walter. "Scalable framework for turn-key honeynet deployment." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51842.
Full text
Wu, Fan. "Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-111908-165451/.
Full text
Keywords: Energy Consumption; Perceptual Error Metric; Multiresolution; Wavelets; Mobile Graphics. Includes bibliographical references (p. 109-124).
Mohror, Kathryn Marie. "Scalable event tracking on high-end parallel systems." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/2811.
Full text
Hilbrich, Tobias. "Runtime MPI Correctness Checking with a Scalable Tools Infrastructure." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-175472.
Full text
Putnam, Patrick P. "Scalable, High-Performance Forward Time Population Genetic Simulation." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1522419645847035.
Full text
Raja, Chandrasekar Raghunath. "Designing Scalable and Efficient I/O Middleware for Fault-Resilient High-Performance Computing Clusters." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417733721.
Full text
Clay, Lenitra M. "Replication techniques for scalable content distribution in the internet." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/8491.
Full text
Wadhwa, Bharti. "Scalable Data Management for Object-based Storage Systems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99791.
Full text
Doctor of Philosophy
Large-scale object-based storage systems face severe challenges in managing data efficiently for HPC applications and workflows. These storage systems often manage and share data inflexibly, without considering the load imbalance and resource contention in the underlying multi-layer storage hierarchy. This dissertation first studies how resource contention and inflexible data sharing mechanisms impact the storage and I/O performance of HPC applications, and then presents a series of techniques, tools and algorithms to provide efficient and scalable data management for current and next-generation HPC storage systems.
Dinan, James S. "Scalable Task Parallel Programming in the Partitioned Global Address Space." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275418061.
Full text
Li, Hengsha. "Real-time Cloudlet PaaS for GreenIoT : Design of a scalable server PaaS and a GreenIoT application." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239004.
Full text
Cloudlet is a new technology that has attracted great interest in networking research. It can be described as a PaaS (Platform as a Service) platform that allows mobile clients to execute their code in the cloud. A cloudlet can be seen as a layer at the edge of the communication network. This report presents a cloudlet-based architecture that includes cloudlet code as part of the client-side application itself. We first give an overview of related work in the area and describe existing challenges that need to be addressed. We then present an overall design for a cloudlet-based implementation. Finally, we present the cloudlet architecture, including a prototype implementation of both the client application and the cloudlet server. In our prototype of a CO2 data visualization application, we focus on how to structure the client-side functions, how to schedule the cloudlet PaaS on the server side, and how the server can be made scalable. The report concludes with a performance evaluation. Cloudlet technology is expected to be widely used in IoT projects, such as data visualization of air quality and water quality, fan control, traffic control and other use cases. Compared with the traditional centralized cloud architecture, a cloudlet offers high responsiveness, flexibility and scalability.
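As an illustration of the client-side offloading pattern described in the abstract, the sketch below tries a nearby cloudlet first and falls back to a distant cloud. The endpoint URLs and the /visualize route are hypothetical, used only to show the tiering idea, not the thesis's actual PaaS API.

```python
# Hedged sketch of cloudlet-first offloading (assumption: endpoints are
# hypothetical placeholders, not part of the described system).
import json
import urllib.request
import urllib.error

CLOUDLET_URL = "http://cloudlet.local:8080/visualize"   # hypothetical edge node
CLOUD_URL = "http://example-cloud.invalid/visualize"    # hypothetical fallback


def offload(samples, timeout=0.5):
    payload = json.dumps({"co2_samples": samples}).encode()
    for url in (CLOUDLET_URL, CLOUD_URL):
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.load(resp)                   # rendered/aggregated result
        except (urllib.error.URLError, OSError):
            continue                                     # edge unreachable: try next tier
    return {"error": "no backend reachable"}


print(offload([412, 415, 430]))
```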
Sridhar, Jaidev Krishna. "Scalable Job Startup and Inter-Node Communication in Multi-Core InfiniBand Clusters." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243909406.
Full text
Chai, Lei. "High Performance and Scalable MPI Intra-node Communication Middleware for Multi-core Clusters." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1236639834.
Full text
Baheri, Betis. "MARS: Multi-Scalable Actor-Critic Reinforcement Learning Scheduler." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1595039454920637.
Full text