Scientific literature on the topic "Scalable computing"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.
Browse thematic lists of journal articles, books, theses, conference papers, and other academic sources on the topic "Scalable computing".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "Scalable computing"
Gusev, Marjan. "Scalable Dew Computing." Applied Sciences 12, no. 19 (September 22, 2022): 9510. http://dx.doi.org/10.3390/app12199510.
Venkatasubramanian, Nalini, Shakuntala Miriyala, and Gul Agha. "Scalable concurrent computing." Sadhana 17, no. 1 (March 1992): 193–220. http://dx.doi.org/10.1007/bf02811343.
Keidar, Idit, and Assaf Schuster. "Want scalable computing?" ACM SIGACT News 37, no. 3 (September 2006): 59–66. http://dx.doi.org/10.1145/1165555.1165569.
Rouson, Damian W. I. "Complexity in Scalable Computing." Scientific Programming 16, no. 4 (2008): 275–76. http://dx.doi.org/10.1155/2008/693705.
DeBenedictis, E. P., and S. C. Johnson. "Extending Unix for scalable computing." Computer 26, no. 11 (November 1993): 43–53. http://dx.doi.org/10.1109/2.241425.
Banerjee, S., S. Agarwal, K. Kamel, A. Kochut, C. Kommareddy, T. Nadeem, P. Thakkar, et al. "Rover: scalable location-aware computing." Computer 35, no. 10 (October 2002): 46–53. http://dx.doi.org/10.1109/mc.2002.1039517.
Alexandrov, Vassil. "Towards scalable mathematics and scalable algorithms for extreme scale computing." Journal of Computational Science 4, no. 6 (November 2013): iii–v. http://dx.doi.org/10.1016/s1877-7503(13)00120-8.
Liu, Zhi, Cheng Zhan, Ying Cui, Celimuge Wu, and Han Hu. "Robust Edge Computing in UAV Systems via Scalable Computing and Cooperative Computing." IEEE Wireless Communications 28, no. 5 (October 2021): 36–42. http://dx.doi.org/10.1109/mwc.121.2100041.
Barrett, Sean D., Peter P. Rohde, and Thomas M. Stace. "Scalable quantum computing with atomic ensembles." New Journal of Physics 12, no. 9 (September 22, 2010): 093032. http://dx.doi.org/10.1088/1367-2630/12/9/093032.
Jararweh, Yaser, Lo’ai Tawalbeh, Fadi Ababneh, Abdallah Khreishah, and Fahd Dosari. "Scalable Cloudlet-based Mobile Computing Model." Procedia Computer Science 34 (2014): 434–41. http://dx.doi.org/10.1016/j.procs.2014.07.051.
Theses on the topic "Scalable computing"
Fleming, Kermin Elliott, Jr. "Scalable reconfigurable computing leveraging latency-insensitive channels." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79212.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 190-197).
Traditionally, FPGAs have been confined to the limited roles of small, low-volume ASIC replacement and circuit emulation. However, continued Moore's law scaling has given FPGAs new life as accelerators for applications that map well to fine-grained parallel substrates. Examples of such applications include processor modelling, compression, and digital signal processing. Although FPGAs continue to increase in size, some interesting designs still fail to fit into a single FPGA. Many tools exist that partition RTL descriptions across FPGAs. Unfortunately, existing tools have low performance due to the inefficiency of maintaining the cycle-by-cycle behavior of RTL among discrete FPGAs. These tools are unsuitable for use in FPGA program acceleration, as the purpose of an accelerator is to make applications run faster. This thesis presents latency-insensitive channels, a language-level mechanism by which programmers express points in their design at which the cycle-by-cycle behavior of the design may be modified by the compiler. By decoupling the timing of portions of the RTL from the high-level function of the program, designs may be mapped to multiple FPGAs without suffering the performance degradation observed in existing tools. This thesis demonstrates, using a diverse set of large designs, that FPGA programs described in terms of latency-insensitive channels obtain significant gains in design feasibility, compilation time, and run-time when mapped to multiple FPGAs.
by Kermin Elliott Fleming, Jr.
Ph.D.
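To make the idea in the abstract above concrete, the sketch below is a software analogy of a latency-insensitive channel, written in plain Java rather than RTL: producer and consumer agree only on the order of messages, never on how many cycles a transfer takes, so the channel could equally be an on-chip FIFO or a slower link between FPGAs without changing the observable result. All names here are illustrative and are not taken from the thesis.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Software analogy of a latency-insensitive channel: correctness depends
// only on message order and back-pressure, never on transfer latency.
public final class LatencyInsensitiveChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(4); // bounded FIFO

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 8; i++) {
                    channel.put(i);          // blocks when the FIFO is full (back-pressure)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 8; i++) {
                    int v = channel.take();  // blocks until data arrives, whatever the latency
                    System.out.println("received " + v);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}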
Spagnuolo, Carmine. "Scalable computational science." Doctoral thesis, Università degli Studi di Salerno, 2017. http://hdl.handle.net/10556/2581.
Computational science, also known as scientific computing, is a rapidly growing field that uses advanced computing to solve complex problems. This new discipline combines technologies, modern computational methods and simulations to address problems too complex to be reliably predicted by theory alone and too dangerous or expensive to be reproduced in laboratories. Successes in computational science over the past twenty years have driven demand for supercomputing, to improve the performance of the solutions and to allow the models to grow in size and quality. From a computer scientist's perspective, it is natural to distribute the computation required to study a complex system among multiple machines: it is well known that the speed of single-processor computers is reaching physical limits. For these reasons, parallel and distributed computing has become the dominant paradigm for computational scientists who need the latest computing resources to solve their problems, and scalability has been recognized as the central challenge in this science. This dissertation discusses the design and implementation of frameworks, parallel languages and architectures that improve the state of the art in Scalable Computational Science.
Frameworks. The proposal of D-MASON, a distributed version of MASON, a well-known and popular Java toolkit for writing and running Agent-Based Simulations (ABSs). D-MASON introduces framework-level parallelization so that scientists who use the framework (e.g., a domain expert with limited knowledge of distributed programming) need be only minimally aware of the distribution. Development of D-MASON began in 2011; the main purpose of the project was to overcome the limits of MASON's sequential computation by using distributed computing. D-MASON goes beyond MASON in terms of the size of simulations (number of agents and complexity of agent behaviors), and it also reduces the running time of simulations written in MASON. For this reason, one of the most important features of D-MASON is that it requires only a limited number of changes to MASON code in order to execute simulations on distributed systems. D-MASON, based on the Master-Worker paradigm, was initially designed for heterogeneous computing in order to exploit unused computational resources in labs, but it can also be executed on homogeneous systems (such as HPC systems) as well as cloud infrastructures. The architecture of D-MASON is presented in the following three papers, which describe all D-MASON layers:
• Cordasco G., Spagnuolo C. and Scarano V. Toward the new version of D-MASON: Efficiency, Effectiveness and Correctness in Parallel and Distributed Agent-based Simulations. 1st IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems, IEEE International Parallel & Distributed Processing Symposium, 2016.
• Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. Bringing together efficiency and effectiveness in distributed simulations: the experience with D-MASON. SIMULATION: Transactions of The Society for Modeling and Simulation International, June 11, 2013.
• Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. A Framework for distributing Agent-based simulations. Ninth International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms, Euro-Par 2011 conference.
Much effort has been devoted, in the Communication Layer, to improving communication efficiency on homogeneous systems. D-MASON is based on the Publish/Subscribe (PS) communication paradigm and uses a centralized message broker (based on the Java Message Service standard) to deal with heterogeneous systems. Communication on homogeneous systems also follows PS but uses the Message Passing Interface (MPI) standard. In order to use MPI from Java, D-MASON relies on a Java binding of MPI. Unfortunately, this binding is relatively new and does not provide all MPI functionality. Several communication strategies were therefore designed, implemented and evaluated. These strategies are presented in two papers:
• Cordasco G., Milone F., Spagnuolo C. and Vicidomini L. Exploiting D-MASON on Parallel Platforms: A Novel Communication Strategy. 2nd Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2014 conference.
• Cordasco G., Mancuso A., Milone F. and Spagnuolo C. Communication strategies in Distributed Agent-Based Simulations: the experience with D-MASON. 1st Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2013 conference.
D-MASON also provides mechanisms for visualization and data gathering in distributed simulations (available in the Visualization Layer). These solutions are presented in the paper:
• Cordasco G., De Chiara R., Raia F., Scarano V., Spagnuolo C. and Vicidomini L. Designing Computational Steering Facilities for Distributed Agent Based Simulations. Proceedings of the ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, 2013.
In distributed ABSs, one of the most complex problems is partitioning and balancing the computation. D-MASON provides, in the Distributed Simulation Layer, mechanisms for partitioning and dynamically balancing the computation. D-MASON uses a field-partitioning mechanism to divide the computation across the distributed system; field partitioning offers a good trade-off between load balancing and communication effort. Nevertheless, many ABSs are not based on 2D or 3D fields but on a communication graph that models the relationships among the agents. In this case the field-partitioning mechanism does not ensure good simulation performance, so D-MASON also provides specific mechanisms to manage simulations that use a graph to describe agent interactions. These solutions were presented in the following publication:
• Antelmi A., Cordasco G., Spagnuolo C. and Vicidomini L. On Evaluating Graph Partitioning Algorithms for Distributed Agent Based Models on Networks. 3rd Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2015 conference.
The field-partitioning mechanism, intuitively, enables the mono- and bi-dimensional partitioning of a Euclidean space; this approach is also known as uniform partitioning. In some cases, however, e.g. simulations of urban areas using a Geographical Information System (GIS), uniform partitioning degrades simulation performance because of the unbalanced distribution of agents on the field and, consequently, on the computational resources. In such cases D-MASON provides a non-uniform partitioning mechanism (inspired by the quad-tree data structure), presented in the following papers:
• Lettieri N., Spagnuolo C. and Vicidomini L. Distributed Agent-based Simulation and GIS: An Experiment With the dynamics of Social Norms. 3rd Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2015 conference.
• Cordasco G., Spagnuolo C. and Scarano V. Work Partitioning on Parallel and Distributed Agent-Based Simulation. IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems, International Parallel & Distributed Processing Symposium, 2017.
The latest version of D-MASON provides a web-based System Management layer for easier use on cloud infrastructures. D-MASON on the Amazon EC2 cloud infrastructure was compared, in terms of speed and cost, against D-MASON on an HPC environment. The results obtained, and the new System Management layer, are presented in the following paper:
• Carillo M., Cordasco G., Serrapica F., Spagnuolo C., Szufel P. and Vicidomini L. D-Mason on the Cloud: an Experience with Amazon Web Services. 4th Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2016 conference.
Parallel Languages. The proposal of an architecture that enables code running on a Java Virtual Machine (JVM) to be invoked from code written in C. Swift/T is a parallel scripting language for programming highly concurrent applications in parallel and distributed environments. Swift/T is the reimplemented version of the Swift language, with a new compiler and runtime. Swift/T improves on Swift, allowing scalability over 500 tasks per second, load balancing, distributed data structures, and dataflow-driven concurrent task execution. Swift/T provides an interesting feature: the ability to call other languages (such as Python, R, Julia and C) easily and natively through special language functions named leaf functions. Considering the current trend of some supercomputing vendors (such as Cray Inc.) to support Java Virtual Machines (JVMs) on their processors, it is desirable to provide methods to call Java code from Swift/T as well. In particular, it is attractive to be able to call JVM scripting languages such as Clojure, Scala, Groovy and JavaScript. For this purpose, a C binding to instantiate and call a JVM was designed. This binding is used in Swift/T (since version 1.0) to develop leaf functions that call Java code. The code is publicly available on the GitHub project page.
Frameworks. The proposal of two tools that exploit the computing power of parallel systems to improve the effectiveness and efficiency of Simulation Optimization strategies. Simulation Optimization (SO) refers to the techniques studied for ascertaining the parameters of a complex model that minimize (or maximize) given criteria (one or many), which can only be computed by performing a simulation run. Due to the high dimensionality of the search space, the heterogeneity of the parameters, the irregular shape and the stochastic nature of the objective evaluation function, the tuning of such systems is extremely demanding from a computational point of view. The first framework is SOF: Zero Configuration Simulation Optimization Framework on the Cloud, designed to run SO processes in the cloud. SOF is based on the Apache Hadoop infrastructure and is presented in the following paper:
• Carillo M., Cordasco G., Scarano V., Serrapica F., Spagnuolo C. and Szufel P. SOF: Zero Configuration Simulation Optimization Framework on the Cloud. Parallel, Distributed, and Network-Based Processing, 2016.
The second framework is EMEWS: Extreme-scale Model Exploration with Swift/T, designed at Argonne National Laboratory (USA). EMEWS, like SOF, allows SO processes to be performed on distributed systems. Both frameworks are designed mainly for ABSs; in particular, EMEWS was tested using the ABS toolkit Repast. Initially, EMEWS was not able to execute simulations written in MASON and NetLogo out of the box. This thesis presents new EMEWS functionalities and solutions to easily execute MASON and NetLogo simulations on it. The EMEWS use cases are presented in the following paper:
• J. Ozik, N. T. Collier, J. M. Wozniak and C. Spagnuolo. From Desktop To Large-scale Model Exploration with Swift/T. Winter Simulation Conference, 2016.
Architectures. The proposal of an open-source, extensible architecture for the visualization of data in HTML pages, exploiting distributed web computing. Following the edge-centric computing paradigm, data visualization is performed on the edge side, ensuring data trustworthiness, privacy, scalability and dynamic data loading. The architecture has been exploited in the Social Platform for Open Data (SPOD) and has appeared in the following papers:
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. A Scalable Data Web Visualization Architecture. Parallel, Distributed, and Network-Based Processing, 2017.
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An Architecture for Social Sharing and Collaboration around Open Data Visualisation. Poster Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing, 2016.
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An extensible architecture for an ecosystem of visualization web-components for Open Data. Maximising Interoperability Workshop: core vocabularies, location-aware data and more, 2015. [edited by author]
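As a rough illustration of the uniform field partitioning described above, the following Java sketch divides a rectangular field into a grid of regions, one per worker, and maps an agent's position to the worker that owns it. The class and method names are invented for this example and are not D-MASON's actual API.

// Minimal sketch of uniform 2D field partitioning; illustrative only.
public final class UniformFieldPartitioning {
    /** A rectangular region of the field assigned to one worker (logical process). */
    record Region(int worker, double x0, double y0, double x1, double y1) {}

    /** Split a width x height field into a cols x rows grid, one region per worker. */
    static Region[] partition(double width, double height, int cols, int rows) {
        Region[] regions = new Region[cols * rows];
        double cellW = width / cols, cellH = height / rows;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int worker = r * cols + c;
                regions[worker] = new Region(worker,
                        c * cellW, r * cellH, (c + 1) * cellW, (r + 1) * cellH);
            }
        }
        return regions;
    }

    /** Map an agent position to the worker owning that portion of the field. */
    static int ownerOf(double x, double y, double width, double height, int cols, int rows) {
        int c = Math.min(cols - 1, (int) (x / (width / cols)));
        int r = Math.min(rows - 1, (int) (y / (height / rows)));
        return r * cols + c;
    }

    public static void main(String[] args) {
        for (Region reg : partition(100.0, 100.0, 2, 2)) {
            System.out.println(reg);
        }
        System.out.println("agent at (75, 20) -> worker " + ownerOf(75, 20, 100, 100, 2, 2));
    }
}

Each worker would then simulate only the agents inside its region and exchange boundary information with neighbouring regions, which is the source of the balancing/communication trade-off mentioned above.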
何世全, and Sai-chuen Ho. "Single I/O space for scalable cluster computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222614.
Texte intégralEltony, Amira M. (Amira Madeleine). « Scalable trap technology for quantum computing with ions ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99822.
Texte intégralCataloged from PDF version of thesis.
Includes bibliographical references (pages [187]-214).
Quantum computers employ quantum mechanical effects, such as superposition and entanglement, to process information in a distinctive way, with advantages for simulation and for new, and in some cases more efficient, algorithms. A quantum bit is a two-level quantum system, such as the electronic or spin state of a trapped atomic ion. Physics experiments with single atomic ions acting as "quantum bits" have demonstrated many of the ingredients for a quantum computer. But to perform useful computations these experimental systems will need to be vastly scaled up. Our goal is to engineer systems for large-scale quantum computation with trapped ions. Building on established techniques of microfabrication, we create ion traps incorporating exotic materials and devices, and we investigate how quantum algorithms can be efficiently mapped onto physical trap hardware. An existing apparatus built around a bath cryostat is modified for characterization of novel ion traps and devices at cryogenic temperatures (4 K and 77 K). We demonstrate an ion trap on a transparent chip with an integrated photodetector, which allows for scalable, efficient state detection of a quantum bit. To understand and better control electric field noise (which limits gate fidelities), we experiment with coating trap electrodes in graphene. We develop traps compatible with standard CMOS manufacturing to leverage the precision and scale of this platform, and we design a Single Instruction Multiple Data (SIMD) algorithm for implementing the quantum Fourier transform (QFT) using a distributed array of ion chains. Lastly, we explore how to bring it all together to create an integrated trap module from which a scalable architecture can be assembled.
by Amira M. Eltony.
Ph. D.
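The abstract above mentions a SIMD scheme for the quantum Fourier transform (QFT) on distributed ion chains. Purely as a reference for what the QFT computes (and not as a model of the ion-chain mapping in the thesis), the Java sketch below applies the transform to a small vector of complex amplitudes stored as separate real and imaginary arrays.

// Classical reference calculation of the QFT on 2^n amplitudes:
// |j> -> (1/sqrt(N)) * sum_k exp(2*pi*i*j*k/N) |k>. Illustrative only.
public final class QftSketch {
    static double[][] qft(double[] re, double[] im) {
        int dim = re.length;                     // dim = 2^n basis states
        double[] outRe = new double[dim];
        double[] outIm = new double[dim];
        for (int k = 0; k < dim; k++) {
            for (int j = 0; j < dim; j++) {
                double phase = 2.0 * Math.PI * j * k / dim;
                double c = Math.cos(phase), s = Math.sin(phase);
                // accumulate (re[j] + i*im[j]) * e^{i*phase} into outcome k
                outRe[k] += re[j] * c - im[j] * s;
                outIm[k] += re[j] * s + im[j] * c;
            }
            outRe[k] /= Math.sqrt(dim);          // 1/sqrt(dim) normalization
            outIm[k] /= Math.sqrt(dim);
        }
        return new double[][] { outRe, outIm };
    }

    public static void main(String[] args) {
        // |00> input state on 2 qubits: the QFT yields a uniform superposition.
        double[] re = {1, 0, 0, 0};
        double[] im = {0, 0, 0, 0};
        double[][] out = qft(re, im);
        for (int k = 0; k < re.length; k++) {
            System.out.printf("amp[%d] = %.3f %+.3fi%n", k, out[0][k], out[1][k]);
        }
    }
}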
Rrustemi, Alban. "Computing surfaces: a platform for scalable interactive displays." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612533.
Allcock, David Thomas Charles. "Surface-electrode ion traps for scalable quantum computing." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.559722.
Ho, Sai-chuen. "Single I/O space for scalable cluster computing." Hong Kong: University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21841512.
Pang, Xiaolin. "Scalable Algorithms for Outlier Detection." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11743.
Tran, Viet-Trung. "Scalable data-management systems for Big Data." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00920432.
Surapaneni, Chandra Sekhar. "Dynamically organized and scalable virtual organizations in Grid computing." Diss., UMK access, 2005.
"A thesis in computer science." Typescript. Advisor: Deepankar Medhi. Vita. Title from "catalog record" of the print edition. Description based on contents viewed March 12, 2007. Includes bibliographical references (leaves 85-87). Online version of the print edition.
Books on the topic "Scalable computing"
Zhiwei, Xu, ed. Scalable parallel computing: Technology, architecture, programming. Boston: WCB/McGraw-Hill, 1998.
Kuan-Ching, Li, ed. Handbook of research on scalable computing technologies. Hershey, PA: Information Science Reference, 2009.
Scalable computing and communications: Theory and practice. Hoboken, New Jersey: Wiley, 2013.
SAS Institute, ed. Scalable performance data server: User's guide, version 1. Cary, NC: SAS Institute Inc., 1996.
Becker, Steffen, Gunnar Brataas, and Sebastian Lehrig, eds. Engineering Scalable, Elastic, and Cost-Efficient Cloud Computing Applications. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54286-7.
Stolorz, Paul, and Ron Musick, eds. Scalable High Performance Computing for Knowledge Discovery and Data Mining. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5669-5.
Stolorz, Paul, and Ron Musick, eds. Scalable high performance computing for knowledge discovery and data mining. Boston: Kluwer Academic Publishers, 1998.
Stolorz, Paul. Scalable High Performance Computing for Knowledge Discovery and Data Mining. Boston, MA: Springer US, 1998.
Burnett, Margaret. A scalable method for deductive generalization in the spreadsheet paradigm. [Corvallis, OR]: Oregon State University, Dept. of Computer Science, 2001.
Kyaw, Thi Ha. Towards a Scalable Quantum Computing Platform in the Ultrastrong Coupling Regime. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19658-5.
Book chapters on the topic "Scalable computing"
McColl, W. F. "Scalable computing." In Computer Science Today, 46–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0015236.
Spiller, Timothy P. "Superconducting Circuits for Quantum Computing." In Scalable Quantum Computers, 305–24. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603182.ch20.
Jin, Cheng, Sugih Jamin, Danny Raz, and Yuval Shavitt. "Computing Logical Network Topologies." In Building Scalable Network Services, 31–50. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4419-8897-3_3.
Varghese, Blesson, Nan Wang, Dimitrios S. Nikolopoulos, and Rajkumar Buyya. "Feasibility of Fog Computing." In Scalable Computing and Communications, 127–46. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43795-4_5.
Milburn, G. J., S. Schneider, and D. F. V. James. "Ion Trap Quantum Computing with Warm Ions." In Scalable Quantum Computers, 31–40. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603182.ch3.
Calarco, T., D. Jaksch, J. I. Cirac, P. Zoller, and H. J. Briegel. "Quantum Computing with Trapped Particles in Microscopic Potentials." In Scalable Quantum Computers, 175–85. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603182.ch11.
Dykman, M. I., and P. M. Platzman. "Quantum Computing Using Electrons Floating on Liquid Helium." In Scalable Quantum Computers, 325–38. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603182.ch21.
Timčenko, Valentina, Nikola Zogović, Borislav Đorđević, and Miloš Jevtić. "Approach to Assessing Cloud Computing Sustainability." In Scalable Computing and Communications, 93–125. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43795-4_4.
Deutsch, Ivan H., Gavin K. Brennen, and Poul S. Jessen. "Quantum Computing with Neutral Atoms in An Optical Lattice." In Scalable Quantum Computers, 155–73. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603182.ch10.
Averin, D. V. "Quantum Computing and Quantum Measurement with Mesoscopic Josephson Junctions." In Scalable Quantum Computers, 285–304. Weinheim, FRG: Wiley-VCH Verlag GmbH & Co. KGaA, 2005. http://dx.doi.org/10.1002/3527603182.ch19.
Conference papers on the topic "Scalable computing"
Snir, Marc. "Scalable parallel computing." In the fifth annual ACM symposium. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/165231.165236.
Denbo, Seth, and Neil Fraistat. "Diggable Data, Scalable Reading and New Humanities Scholarship." In 2011 Second International Conference on Culture and Computing (Culture Computing). IEEE, 2011. http://dx.doi.org/10.1109/culture-computing.2011.49.
Uta, Alexandru, Andreea Sandu, Stefania Costache, and Thilo Kielmann. "Scalable In-Memory Computing." In 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid). IEEE, 2015. http://dx.doi.org/10.1109/ccgrid.2015.106.
Yin, Jianxiong. "Scalable AI Computing Lifecycle." In 2019 International Symposium on VLSI Design, Automation and Test (VLSI-DAT). IEEE, 2019. http://dx.doi.org/10.1109/vlsi-dat.2019.8741600.
Vinet, M., L. Hutin, B. Bertrand, S. Barraud, J. M. Hartmann, Y. J. Kim, V. Mazzocchi, et al. "Towards scalable silicon quantum computing." In 2018 IEEE International Electron Devices Meeting (IEDM). IEEE, 2018. http://dx.doi.org/10.1109/iedm.2018.8614675.
Vassiliev, Andrei V. "Scalable OpenCL FPGA Computing Evolution." In IWOCL 2017: 5th International Workshop on OpenCL. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078155.3078165.
Hemmer, Philip, Jörg Wrachtrup, Fedor Jelezko, Philippe Tamarat, Steven Prawer, and Mikhail Lukin. "Scalable quantum computing in diamond." In Integrated Optoelectronic Devices 2007, edited by Zameer U. Hasan, Alan E. Craig, Selim M. Shahriar, and Hans J. Coufal. SPIE, 2007. http://dx.doi.org/10.1117/12.716388.
Dantas, Mario A. R. "Data Intensive Scalable Computing (DISC)." In Escola Regional de Alto Desempenho de São Paulo. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/eradsp.2020.16873.
Zhang, Hui, Riqing Chen, Guangchen Ruan, and Masatoshi Ando. "Scalable dental computing on cyberinfrastructure." In 2015 IEEE International Conference on Big Data (Big Data). IEEE, 2015. http://dx.doi.org/10.1109/bigdata.2015.7364042.
Dümmler, Jörg, Thomas Rauber, and Gudula Rünger. "Scalable computing with parallel tasks." In the 2nd Workshop. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1646468.1646477.
Reports of organizations on the topic "Scalable computing"
West, Joshua T. Dilution Refrigerator Technology for Scalable Quantum Computing. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada605324.
West, John E., Robert E. Jensen, and Louis H. Turcotte. Migration of WAM to Scalable Computing Environments. Fort Belvoir, VA: Defense Technical Information Center, September 1997. http://dx.doi.org/10.21236/ada330149.
Mellor-Crummey, John. Center for Programming Models for Scalable Parallel Computing. Office of Scientific and Technical Information (OSTI), February 2008. http://dx.doi.org/10.2172/927362.
Walmsley, Ian. Scalable Quantum Networks for Distributed Computing and Sensing. Fort Belvoir, VA: Defense Technical Information Center, April 2016. http://dx.doi.org/10.21236/ad1007637.
Banerjee, Prithviraj. VLSI CAD on Scalable High Performance Computing Platforms. Fort Belvoir, VA: Defense Technical Information Center, September 1998. http://dx.doi.org/10.21236/ada358137.
Wang, Jianchao, and Yuanyuan Yang. Scalable Multicast Networks for High-Performance Computing and Communications. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada394378.
Brandt, S. Scalable File Systems for High Performance Computing Final Report. Office of Scientific and Technical Information (OSTI), October 2007. http://dx.doi.org/10.2172/923092.
Mellor-Crummey, John. Final Report: Center for Programming Models for Scalable Parallel Computing. Office of Scientific and Technical Information (OSTI), September 2011. http://dx.doi.org/10.2172/1121319.
Gao, Guang R. Center for Programming Models for Scalable Parallel Computing: Future Programming Models. Office of Scientific and Technical Information (OSTI), July 2008. http://dx.doi.org/10.2172/935031.
Karbach, Carsten, and Wolfgang Frings. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing. Office of Scientific and Technical Information (OSTI), February 2013. http://dx.doi.org/10.2172/1063754.