Academic literature on the topic 'Computing clusters'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computing clusters.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computing clusters"

1

Rosenberg, Arnold L., and Ron C. Chiang. "Heterogeneity in Computing: Insights from a Worksharing Scheduling Problem." International Journal of Foundations of Computer Science 22, no. 06 (September 2011): 1471–93. http://dx.doi.org/10.1142/s0129054111008829.

Abstract:
Heterogeneity complicates the use of multicomputer platforms. Can it also enhance their performance? How can one measure the power of a heterogeneous assemblage of computers ("cluster"), in absolute terms (how powerful is this cluster) and relative terms (which cluster is more powerful)? Is a cluster that has one super-fast computer and the rest of "average" speed more/less powerful than one all of whose computers are "moderately" fast? If you can replace just one computer in a cluster with a faster one, should you replace the fastest, or the slowest? A result concerning "worksharing" in heterogeneous clusters provides a highly idealized, yet algorithmically meaningful, framework for studying such questions in a way that admits rigorous analysis and formal proof. We encounter some surprises as we answer the preceding questions (perforce, within the idealized framework). Highlights: (1) If one can replace only one computer in a cluster by a faster one, it is (almost) always most advantageous to replace the fastest one. (2) If the computers in two clusters have the same mean speed, then the cluster with the larger variance in speed is (almost) always more productive (verified analytically for small clusters and empirically for large ones). (3) Heterogeneity can actually enhance a cluster's computing power.
2

Weber, Michael. "Workstation Clusters: One Way to Parallel Computing." International Journal of Modern Physics C 04, no. 06 (December 1993): 1307–14. http://dx.doi.org/10.1142/s0129183193001026.

Abstract:
The feasibility and constraints of workstation clusters for parallel processing are investigated. Measurements of latency and bandwidth are presented to position clusters relative to massively parallel systems, making it possible to identify the kinds of applications suited to running on a cluster.
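To make the latency and bandwidth measurement concrete, here is a minimal loopback ping-pong sketch in Python. It illustrates the measurement method only and is not code from the paper; real cluster measurements would run between machines, typically over MPI or raw network sockets.

```python
import socket
import threading
import time

def echo_server(srv: socket.socket, nbytes: int) -> None:
    """Accept one connection and echo back exactly nbytes in total."""
    conn, _ = srv.accept()
    with conn:
        received = 0
        while received < nbytes:
            chunk = conn.recv(65536)
            if not chunk:
                break
            received += len(chunk)
            conn.sendall(chunk)

def pingpong(payload_size: int, rounds: int = 50):
    """Return (average round-trip latency in s, throughput in bytes/s)
    measured over a loopback TCP connection."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    total = payload_size * rounds
    t = threading.Thread(target=echo_server, args=(srv, total))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    payload = b"x" * payload_size
    start = time.perf_counter()
    for _ in range(rounds):
        cli.sendall(payload)
        got = 0
        while got < payload_size:       # wait for the full echo each round
            got += len(cli.recv(65536))
    elapsed = time.perf_counter() - start
    cli.close()
    t.join()
    srv.close()
    # Each payload crosses the link twice (send + echo).
    return elapsed / rounds, 2 * total / elapsed
```

Loopback numbers are of course far better than any real interconnect; the point is only the measurement structure (round trips for latency, sustained transfer for bandwidth).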
3

Tripathy, Minakshi, and C. R. Tripathy. "A Comparative Analysis of Performance of Shared Memory Cluster Computing Interconnection Systems." Journal of Computer Networks and Communications 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/128438.

Abstract:
In the recent past, many types of shared memory cluster computing interconnection systems have been proposed, each with its own advantages and limitations. As the system size of cluster interconnection systems increases, a comparative analysis of their various performance measures becomes inevitable. Cluster architecture, load balancing, and fault tolerance are some of the important aspects that need to be addressed, and the comparison must be made in order to choose the best system for a particular application. In this paper, a detailed comparative study of four important and different classes of shared memory cluster architectures is made: shared memory clusters, hierarchical shared memory clusters, distributed shared memory clusters, and virtual distributed shared memory clusters. These clusters are analyzed and compared on the basis of architecture, load balancing, and fault tolerance, and the results of the comparison are reported.
4

Singhania, Shrinkhala, and Monika Tak. "Workstation Clusters for Parallel Computing." International Journal of Engineering Trends and Technology 28, no. 1 (October 25, 2015): 13–14. http://dx.doi.org/10.14445/22315381/ijett-v28p203.

5

Stone, J., and F. Ercal. "Workstation clusters for parallel computing." IEEE Potentials 20, no. 2 (2001): 31–33. http://dx.doi.org/10.1109/45.954655.

6

Regassa, Dereje, Heonyoung Yeom, and Yongseok Son. "Harvesting the Aggregate Computing Power of Commodity Computers for Supercomputing Applications." Applied Sciences 12, no. 10 (May 19, 2022): 5113. http://dx.doi.org/10.3390/app12105113.

Abstract:
Distributed supercomputing is becoming common in companies and academia. Most parallel computing researchers have focused on harnessing the power of commodity processors, and even internet computers, aggregating their computation power to solve computationally complex problems. Using flexible commodity cluster computers for supercomputing workloads instead of a dedicated supercomputer and expensive high-performance computing (HPC) infrastructure is cost-effective. Its scalable nature means it can be better matched to the available organizational resources, which can benefit researchers who aim to run numerous repetitive calculations on small to large volumes of data and obtain valid results in a reasonable time. In this paper, we design and implement an HPC-based supercomputing facility from commodity computers at an organizational level, providing two separate implementations for cluster-based supercomputing: Hadoop- and Spark-based HPC clusters, primarily for data-intensive jobs, and Torque-based clusters for Multiple Instruction Multiple Data (MIMD) workloads. The performance of these clusters is measured through extensive experimentation. With the implementation of the message passing interface, the performance of the Spark and Torque clusters is increased by 16.6% for repetitive applications and by 73.68% for computation-intensive applications, with speedups of 1.79 and 2.47 respectively on the HPDA cluster. We conclude that a specific application or job can be chosen to run on the implemented clusters based on its computation parameters.
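As a reminder of the metrics quoted in this abstract, the standard definitions of speedup, parallel efficiency, and percentage improvement can be written down directly. These helpers are illustrative, not code from the paper:

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Classic speedup: serial runtime divided by parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_workers: int) -> float:
    """Parallel efficiency: speedup normalized by worker count."""
    return speedup(t_serial, t_parallel) / n_workers

def percent_improvement(t_before: float, t_after: float) -> float:
    """Relative runtime reduction, in percent."""
    return 100.0 * (t_before - t_after) / t_before
```

For example, halving a runtime gives a speedup of 2.0 and a 50% improvement; a speedup of 2.0 on 4 workers is only 50% efficient.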
7

Llorens-Carrodeguas, Alejandro, Stefanos G. Sagkriotis, Cristina Cervelló-Pastor, and Dimitrios P. Pezaros. "An Energy-Friendly Scheduler for Edge Computing Systems." Sensors 21, no. 21 (October 28, 2021): 7151. http://dx.doi.org/10.3390/s21217151.

Abstract:
The deployment of modern applications, like massive Internet of Things (IoT), poses a combination of challenges that service providers need to overcome: high availability of the offered services, low latency, and low energy consumption. To overcome these challenges, service providers have been placing computing infrastructure close to the end users, at the edge of the network. In this vein, single board computer (SBC) clusters have gained attention due to their low cost, low energy consumption, and easy programmability. A subset of IoT applications requires the deployment of battery-powered SBCs, or clusters thereof. More recently, the deployment of services on SBC clusters has been automated through the use of containers. The management of these containers is performed by orchestration platforms, like Kubernetes. However, orchestration platforms do not consider remaining energy levels for their placement decisions and therefore are not optimized for energy-constrained environments. In this study, we propose a scheduler that is optimised for energy-constrained SBC clusters and operates within Kubernetes. Through comparison with the available schedulers we achieved 23% fewer event rejections, 83% less deadline violations, and approximately a 59% reduction of the consumed energy throughout the cluster.
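The scheduling idea in this abstract can be sketched as a node-scoring function that blends remaining battery with free capacity. Everything below (the `battery_pct` field, the 0.7 energy weight, the `Node` type) is a hypothetical illustration, not the authors' actual Kubernetes scheduler plugin:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    battery_pct: float  # remaining battery, 0-100 (hypothetical field)
    free_cpu: float     # free CPU cores

def score(node: Node, cpu_request: float, w_energy: float = 0.7) -> float:
    """Blend remaining energy with spare capacity; higher is better.
    Nodes that cannot fit the request score negative infinity (filtered out)."""
    if node.free_cpu < cpu_request:
        return float("-inf")
    w_cpu = 1.0 - w_energy
    energy_term = node.battery_pct / 100.0
    slack_term = (node.free_cpu - cpu_request) / node.free_cpu
    return w_energy * energy_term + w_cpu * slack_term

def pick_node(nodes: list, cpu_request: float) -> Node:
    """Place the pod on the highest-scoring feasible node."""
    return max(nodes, key=lambda n: score(n, cpu_request))
```

Kubernetes' scheduling framework exposes exactly this filter-then-score shape; an energy-aware scheduler like the one described would plug a battery-aware term into the scoring step.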
8

Accion, E., A. Bria, G. Bernabeu, M. Caubet, M. Delfino, X. Espinal, G. Merino, F. Lopez, F. Martinez, and E. Planas. "Dimensioning storage and computing clusters for efficient high throughput computing." Journal of Physics: Conference Series 396, no. 4 (December 13, 2012): 042040. http://dx.doi.org/10.1088/1742-6596/396/4/042040.

9

Xu, Zhao. "Coordination Method on Cloud Computing Clusters." Advanced Materials Research 605-607 (December 2012): 2160–63. http://dx.doi.org/10.4028/www.scientific.net/amr.605-607.2160.

Abstract:
This paper makes the case for explicit coordination of network transmission activities among virtual machines (VMs) in the data center Ethernet to proactively prevent network congestion. We think that virtualization has opened up new opportunities for explicit coordination that are simple, effective, currently feasible, and independent of switch-level hardware support. We show that explicit coordination can be implemented transparently without modifying any applications, standard protocols, network switches, or VMs.
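One simple form of such explicit coordination is a central allocator that assigns send rates to VMs so the total never exceeds link capacity, preventing congestion before it occurs. The max-min fair sketch below is a generic illustration of the idea, not the mechanism from the paper:

```python
def allocate_rates(demands: dict, capacity: float) -> dict:
    """Max-min fair share of one link: fully satisfy demands below the
    fair share, then split the remaining capacity evenly among the rest."""
    alloc = {}
    remaining = capacity
    pending = dict(demands)
    while pending:
        fair = remaining / len(pending)
        satisfied = {vm: d for vm, d in pending.items() if d <= fair}
        if not satisfied:
            # Every remaining VM wants more than the fair share: cap them all.
            for vm in pending:
                alloc[vm] = fair
            return alloc
        for vm, d in satisfied.items():
            alloc[vm] = d
            remaining -= d
            del pending[vm]
    return alloc
```

With demands {a: 1, b: 4, c: 10} on a 9-unit link, a and b get their full demand and c is capped at the leftover 4 units, so the link is never oversubscribed.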
10

Harrawood, Brian P., Greeshma A. Agasthya, Manu N. Lakshmanan, Gretchen Raterman, and Anuj J. Kapadia. "Geant4 distributed computing for compact clusters." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 764 (November 2014): 11–17. http://dx.doi.org/10.1016/j.nima.2014.07.014.


Dissertations / Theses on the topic "Computing clusters"

1

Shum, Kam Hong. "Adaptive parallelism for computing on heterogeneous clusters." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627563.

2

Aji, Ashwin M. "Programming High-Performance Clusters with Heterogeneous Computing Devices." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52366.

Abstract:
Today's high-performance computing (HPC) clusters are seeing an increase in the adoption of accelerators like GPUs, FPGAs and co-processors, leading to heterogeneity in the computation and memory subsystems. To program such systems, application developers typically employ a hybrid programming model of MPI across the compute nodes in the cluster and an accelerator-specific library (e.g., CUDA, OpenCL, OpenMP, OpenACC) across the accelerator devices within each compute node. Such explicit management of disjointed computation and memory resources leads to reduced productivity and performance. This dissertation focuses on designing, implementing and evaluating a runtime system for HPC clusters with heterogeneous computing devices. This work also explores extending existing programming models to make use of our runtime system for easier code modernization of existing applications. Specifically, we present MPI-ACC, an extension to the popular MPI programming model and runtime system for efficient data movement and automatic task mapping across the CPUs and accelerators within a cluster, and discuss the lessons learned. MPI-ACC's task-mapping runtime subsystem performs fast and automatic device selection for a given task. MPI-ACC's data-movement subsystem includes careful optimizations for end-to-end communication among CPUs and accelerators, which are seamlessly leveraged by the application developers. MPI-ACC provides a familiar, flexible and natural interface for programmers to choose the right computation or communication targets, while its runtime system achieves efficient cluster utilization.
Ph. D.
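The "automatic device selection" idea from the abstract above can be illustrated with a toy cost model: pick the device with the lowest predicted completion time, where a CPU has negligible setup cost and an accelerator pays a fixed launch/transfer overhead before running much faster. The `Device` type and all numbers are hypothetical, not MPI-ACC's actual runtime:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    kind: str            # "cpu" or "gpu" (illustrative)
    setup_cost_s: float  # fixed launch/data-transfer overhead in seconds
    rate: float          # work units processed per second

def predicted_time(dev: Device, work: float) -> float:
    """Simple linear cost model: fixed overhead plus work at the device rate."""
    return dev.setup_cost_s + work / dev.rate

def select_device(devices: list, work: float) -> Device:
    """Map the task onto whichever device finishes it soonest."""
    return min(devices, key=lambda d: predicted_time(d, work))
```

Under this model small tasks stay on the CPU (the accelerator's overhead dominates) while large tasks are mapped to the accelerator, which is the qualitative behavior an automatic task mapper aims for.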
3

周志賢 and Chi-yin Edward Chow. "Adaptive recovery with hierarchical checkpointing on workstation clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B29812914.

4

Chow, Chi-yin Edward. "Adaptive recovery with hierarchical checkpointing on workstation clusters /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20792700.

5

Melas, Panagiotis. "The performance evaluation of workstation clusters." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326395.

6

Ribeiro, Tiago Filipe Rodrigues. "Developing and evaluating clOpenCL applications for heterogeneous clusters." Master's thesis, Instituto Politécnico de Bragança, Escola Superior de Tecnologia e Gestão, 2012. http://hdl.handle.net/10198/7948.

Abstract:
In the last few years, the processing capabilities of computing systems have increased significantly, changing from single-core to multi-core and even many-core systems. Accompanying this evolution, local networks have also become faster, with multi-gigabit technologies like Infiniband, Myrinet and 10G Ethernet. Parallel/distributed programming tools and standards, like POSIX Threads, OpenMP and MPI, have helped to exploit these technologies and have frequently been combined, giving rise to Hybrid Programming Models. Recently, co-processors like GPUs and FPGAs started to be used as accelerators, requiring specialized frameworks (like CUDA for NVIDIA GPUs). Presented with so much heterogeneity, the industry formulated the OpenCL specification as a standard for exploiting heterogeneous systems. However, in the context of cluster computing, one problem surfaces: OpenCL only enables a developer to use the devices present in the local machine. With many processing devices scattered across cluster nodes (CPUs, GPUs and other co-processors), it became important to enable software developers to take full advantage of the full cluster device set. This dissertation demonstrates and evaluates an OpenCL extension, named clOpenCL, which supports the simple deployment and efficient running of OpenCL-based parallel applications that may span several cluster nodes, thus expanding the original single-node OpenCL model. The main contributions are that clOpenCL i) offers a transparent approach to the porting of traditional OpenCL applications to cluster environments and ii) provides significant performance increases over classical (non-)hybrid parallel approaches.
7

Rough, Justin. "A Platform for reliable computing on clusters using group communications." Deakin University. School of Computing and Mathematics, 2001. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20060412.141015.

Abstract:
Shared clusters represent an excellent platform for the execution of parallel applications given their low price/performance ratio and the presence of cluster infrastructure in many organisations. The focus of recent research efforts is on parallelism management, transport and efficient access to resources, and making clusters easy to use. In this thesis, we examine reliable parallel computing on clusters. The aim of this research is to demonstrate the feasibility of developing an operating system facility providing transport fault tolerance using existing, enhanced and newly built operating system services for supporting parallel applications. In particular, we use existing process duplication and process migration services, and synthesise a group communications facility for use in a transparent checkpointing facility. This research is carried out using the methods of experimental computer science. To provide a foundation for the synthesis of the group communications and checkpointing facilities, we survey and review related work in both fields. For group communications, we examine the V Distributed System, the x-kernel and Psync, the ISIS Toolkit, and Horus. We identify a need for services that consider the placement of processes on computers in the cluster. For checkpointing, we examine Manetho, KeyKOS, libckpt, and Diskless Checkpointing. We observe the use of remote computer memories for storing checkpoints, and the use of copy-on-write mechanisms to reduce the time to create a checkpoint of a process. We propose a group communications facility providing two sets of services: user-oriented services and system-oriented services. User-oriented services provide transparency and target applications. System-oriented services supplement the user-oriented services for supporting other operating systems services and do not provide transparency. Additional flexibility is achieved by providing delivery and ordering semantics independently.
An operating system facility providing transparent checkpointing is synthesised using coordinated checkpointing. To ensure a consistent set of checkpoints are generated by the facility, instead of blindly blocking the processes of a parallel application, only non-deterministic events are blocked. This allows the processes of the parallel application to continue execution during the checkpoint operation. Checkpoints are created by adapting process duplication mechanisms, and checkpoint data is transferred to remote computer memories and disk for storage using the mechanisms of process migration. The services of the group communications facility are used to coordinate the checkpoint operation, and to transport checkpoint data to remote computer memories and disk. Both the group communications facility and the checkpointing facility have been implemented in the GENESIS cluster operating system and provide proof-of-concept. GENESIS uses a microkernel and client-server based operating system architecture, and is demonstrated to provide an appropriate environment for the development of these facilities. We design a number of experiments to test the performance of both the group communications facility and checkpointing facility, and to provide proof-of-performance. We present our approach to testing, the challenges raised in testing the facilities, and how we overcome them. For group communications, we examine the performance of a number of delivery semantics. Good speed-ups are observed and system-oriented group communication services are shown to provide significant performance advantages over user-oriented semantics in the presence of packet loss. For checkpointing, we examine the scalability of the facility given different levels of resource usage and a variable number of computers. Low overheads are observed for checkpointing a parallel application. 
It is made clear by this research that the microkernel and client-server based cluster operating system provide an ideal environment for the development of a high performance group communications facility and a transparent checkpointing facility for generating a platform for reliable parallel computing on clusters.
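The core invariant of the coordinated checkpointing described in this abstract, namely that all per-process checkpoints are taken at a mutually consistent instant, can be shown with a toy in-memory sketch. This is an illustration of the concept only, not GENESIS code; a real facility captures process images and in-flight messages, not Python dicts:

```python
import copy

class Process:
    """A toy process with mutable local state."""
    def __init__(self, pid: int):
        self.pid = pid
        self.state = {"counter": 0}

    def step(self) -> None:
        self.state["counter"] += 1

def coordinated_checkpoint(procs: list) -> dict:
    """Snapshot every process at the same logical instant. Because no
    process runs and no messages are delivered while the copies are made,
    the resulting set of checkpoints is mutually consistent."""
    return {p.pid: copy.deepcopy(p.state) for p in procs}

def restore(procs: list, checkpoint: dict) -> None:
    """Roll every process back to its checkpointed state."""
    for p in procs:
        p.state = copy.deepcopy(checkpoint[p.pid])
```

The thesis' refinement is to avoid freezing everything: only non-deterministic events (such as message deliveries) are blocked during the snapshot, so deterministic computation continues while consistency is preserved.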
8

Daillidis, Christos. "Establishing Linux Clusters for high-performance computing (HPC) at NPS." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sept%5FDaillidis.pdf.

9

Nakad, Zahi Samir. "High Performance Applications on Reconfigurable Clusters." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/35682.

Abstract:
Many problems faced in the engineering world are computationally intensive; filtering with FIR (Finite Impulse Response) filters is one example. This thesis discusses the implementation of a fast, reconfigurable, and scalable FIR digital filter. Constant coefficient multipliers and a fast FIFO implementation are also discussed in connection with the FIR filter. The filter is used in two of its structures, the direct form and the lattice structure; the thesis describes several configurations that can be created with the available components and reports the testing results of these configurations.
Master of Science
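A direct-form FIR filter, one of the two structures the thesis implements, computes y[n] = Σ_k h[k]·x[n−k]. A plain software reference implementation (illustrative; the thesis targets reconfigurable hardware, where each tap becomes a constant-coefficient multiplier) looks like:

```python
def fir_direct(coeffs: list, samples: list) -> list:
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k], with zero-padded history."""
    taps = len(coeffs)
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k in range(taps):
            if n - k >= 0:                     # samples before x[0] are zero
                acc += coeffs[k] * samples[n - k]
        out.append(acc)
    return out
```

For instance, the two-tap moving average h = [0.5, 0.5] applied to a constant input ramps up over one sample and then tracks the input.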
10

Rafique, Muhammad Mustafa. "An Adaptive Framework for Managing Heterogeneous Many-Core Clusters." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/29119.

Abstract:
The computing needs and the input and result datasets of modern scientific and enterprise applications are growing exponentially. To support such applications, High-Performance Computing (HPC) systems need to employ thousands of cores and innovative data management. At the same time, an emerging trend in designing HPC systems is to leverage specialized asymmetric multicores, such as IBM Cell and AMD Fusion APUs, and commodity computational accelerators, such as programmable GPUs, which exhibit excellent price to performance ratio as well as the much needed high energy efficiency. While such accelerators have been studied in detail as stand-alone computational engines, integrating the accelerators into large-scale distributed systems with heterogeneous computing resources for data-intensive computing presents unique challenges and trade-offs. Traditional programming and resource management techniques cannot be directly applied to many-core accelerators in heterogeneous distributed settings, given the complex and custom instruction set architectures, memory hierarchies, and I/O characteristics of different accelerators. In this dissertation, we explore the design space of using commodity accelerators, specifically IBM Cell and programmable GPUs, in distributed settings for data-intensive computing and propose an adaptive framework for programming and managing heterogeneous clusters. The proposed framework provides a MapReduce-based extended programming model for heterogeneous clusters, which distributes tasks between asymmetric compute nodes by considering workload characteristics and capabilities of individual compute nodes. The framework provides efficient data prefetching techniques that leverage general-purpose cores to stage the input data in the private memories of the specialized cores.
We also explore the use of an advanced layered-architecture based software engineering approach and provide mixin-layers based reusable software components to enable easy and quick deployment of heterogeneous clusters. The framework also provides multiple resource management and scheduling policies under different constraints, e.g., energy-aware and QoS-aware, to support executing concurrent applications on multi-tenant heterogeneous clusters. When applied to representative applications and benchmarks, our framework yields significantly improved performance in terms of programming efficiency and optimal resource management as compared to conventional, hand-tuned, approaches to program and manage accelerator-based heterogeneous clusters.
Ph. D.
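The capability-aware task distribution described in the abstract above can be sketched as a proportional split of tasks across asymmetric nodes. This is a generic illustration of the idea (the node names and capability weights are made up), not the framework's actual MapReduce scheduler:

```python
def distribute_tasks(n_tasks: int, capabilities: dict) -> dict:
    """Split n_tasks across nodes proportionally to their capability,
    handing leftover tasks to the most capable nodes first."""
    total = sum(capabilities.values())
    shares = {node: int(n_tasks * cap / total)
              for node, cap in capabilities.items()}
    leftover = n_tasks - sum(shares.values())
    # Rounding leaves a few tasks unassigned; give them to the fastest nodes.
    for node in sorted(capabilities, key=capabilities.get, reverse=True):
        if leftover == 0:
            break
        shares[node] += 1
        leftover -= 1
    return shares
```

With one accelerator node rated 3x a CPU node, 10 tasks split 8/2, mirroring the abstract's point that asymmetric nodes should receive work in proportion to their capability.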

Books on the topic "Computing clusters"

1

Pfister, Gregory F. In search of clusters. 2nd ed. Upper Saddle River, NJ: Prentice Hall PTR, 1998.

2

In search of clusters: The coming battle in lowly parallel computing. Upper Saddle River, N.J: Prentice Hall PTR, 1995.

3

Geoffrey, Fox, and Dongarra J. J, eds. Distributed and cloud computing: Clusters, grids, clouds, and the future Internet. Watham, MA: Morgan Kaufmann, 2012.

4

Tuning Microsoft server clusters: Guaranteeing high availability for business networks. New York: McGraw-Hill, 2004.

5

Gropp, William, IEEE Computer Society, and IEEE Computer Society Task Force on Cluster Computing, eds. Proceedings: 2002 IEEE International Conference on Cluster Computing, 23-26 September 2002, Chicago, Illinois. Los Alamitos, Calif.: IEEE Computer Society, 2002.

6

IEEE International Conference on Cluster Computing (7th: 2005: Burlington, Mass.). 2005 IEEE International Conference on Cluster Computing (CLUSTER): Burlington, MA, 27-30 September, 2005. Piscataway, N.J.: IEEE, 2006.

7

IEEE International Conference on Cluster Computing (3rd: 2001: Newport Beach, California, USA). 2001 IEEE International Conference on Cluster Computing: Proceedings: 8-11 October 2001, Newport Beach, California, USA. Los Alamitos, Calif.: IEEE Computer Society, 2001.

8

Computing networks: From cluster to cloud computing. London: ISTE, 2011.

9

Boden, Harald. Multidisziplinäre Optimierung und Cluster-Computing. Heidelberg: Physica-Verlag HD, 1996. http://dx.doi.org/10.1007/978-3-642-48081-2.

10

Hoffmann, Karl Heinz, and Arnd Meyer, eds. Parallel Algorithms and Cluster Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-33541-2.


Book chapters on the topic "Computing clusters"

1

Steele, Guy L., Xiaowei Shen, Josep Torrellas, Mark Tuckerman, Eric J. Bohm, Laxmikant V. Kalé, Glenn Martyna, et al. "Clusters." In Encyclopedia of Parallel Computing, 289–97. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_18.

2

Kielmann, Thilo, Sergei Gorlatch, Utpal Banerjee, Rocco De Nicola, Jack Dongarra, Piotr Luszczek, Paul Feautrier, et al. "Beowulf Clusters." In Encyclopedia of Parallel Computing, 129. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2119.

3

Steele, Guy L., Xiaowei Shen, Josep Torrellas, Mark Tuckerman, Eric J. Bohm, Laxmikant V. Kalé, Glenn Martyna, et al. "Commodity Clusters." In Encyclopedia of Parallel Computing, 341. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2121.

4

Dongarra, Jack, Piotr Luszczek, Paul Feautrier, Field G. Zee, Ernie Chan, Robert A. Geijn, Robert Bjornson, et al. "Linux Clusters." In Encyclopedia of Parallel Computing, 1036. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2123.

5

Leasure, Bruce, David J. Kuck, Sergei Gorlatch, Murray Cole, Gregory R. Watson, Alain Darte, David Padua, et al. "PC Clusters." In Encyclopedia of Parallel Computing, 1487. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2125.

6

Kwiatkowski, Jan, Marcin Pawlik, Gerard Frankowski, Kazimierz Balos, Roman Wyrzykowski, and Konrad Karczewski. "Dynamic Clusters Available Under Clusterix Grid." In Applied Parallel Computing. State of the Art in Scientific Computing, 819–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75755-9_99.

7

Kielmann, Thilo, Sergei Gorlatch, Utpal Banerjee, Rocco De Nicola, Jack Dongarra, Piotr Luszczek, Paul Feautrier, et al. "Beowulf-Class Clusters." In Encyclopedia of Parallel Computing, 130. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2118.

8

Dolz, Manuel F., Juan C. Fernández, Rafael Mayo, and Enrique S. Quintana-Ortí. "EnergySaving Cluster Roll: Power Saving System for Clusters." In Architecture of Computing Systems - ARCS 2010, 162–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11950-7_15.

9

Narang, Tulika. "Finding Clusters of Data: Cluster Analysis in R." In Advances in Intelligent Systems and Computing, 635–40. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3153-3_63.

10

Rao, Abhijit, Rajat Upadhyay, Nirav Shah, Sagar Arlekar, Jayanth Raghothamma, and Shrisha Rao. "Cluster Performance Forecasting Using Predictive Modeling for Virtual Beowulf Clusters." In Distributed Computing and Networking, 456–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-92295-7_55.


Conference papers on the topic "Computing clusters"

1

Dugerdil, Philippe, and Sebastien Jossi. "Computing dynamic clusters." In Proceedings of the 2nd annual conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1506216.1506228.

2

Nakata, Maho. "All about RICC: RIKEN Integrated Cluster of Clusters." In 2011 Second International Conference on Networking and Computing (ICNC). IEEE, 2011. http://dx.doi.org/10.1109/icnc.2011.14.

3

Lapshina, S. Yu. "The Optimal Processor Cores' Number Choice for the Parallel Cluster Multiple Labeling Technique on High-Performance Computing Systems." In Proceedings of the All-Russian Scientific Conference "The Unified Digital Space of Scientific Knowledge: Problems and Solutions." Moscow, Berlin: Directmedia Publishing, 2021. http://dx.doi.org/10.51218/978-5-4499-1905-2-2021-311-319.

Abstract:
The article investigates the optimum number of processor cores for launching the Parallel Cluster Multiple Labeling Technique on the modern supercomputer systems installed at the JSCC RAS. The technique may be used in any field as a tool for differentiating large lattice clusters, because its input format is independent of the application. At the JSCC RAS, the tool was used to study the spread of epidemics, for which an appropriate multiagent model was developed. In the course of simulation experiments, a variant of the Parallel Cluster Multiple Labeling Technique for percolation Hoshen-Kopelman clusters, based on the label-linking mechanism, was improved on a multiprocessor system.
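The Hoshen-Kopelman technique mentioned in the abstract labels connected clusters of occupied sites on a lattice in a single raster scan, merging colliding labels with union-find. A minimal sequential Python version is shown below; the parallel, multiple-labeling variant studied in the paper distributes the lattice across cores and reconciles labels at the boundaries, which this sketch does not attempt:

```python
def hoshen_kopelman(grid: list) -> list:
    """Label 4-connected clusters of occupied sites (value 1) in a 2D grid."""
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            up = labels[r - 1][c] if r > 0 else 0
            left = labels[r][c - 1] if c > 0 else 0
            if up and left:
                union(up, left)            # two clusters meet: merge labels
                labels[r][c] = find(left)
            elif up or left:
                labels[r][c] = up or left
            else:
                parent[next_label] = next_label
                labels[r][c] = next_label
                next_label += 1
    # Second pass: replace every label by its canonical representative.
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels
```

Counting the distinct labels that survive the second pass gives the number of clusters, which is what percolation analyses and the epidemic model in the article ultimately consume.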
4

Böhm, Christian, Karin Kailing, Peer Kröger, and Arthur Zimek. "Computing Clusters of Correlation Connected objects." In the 2004 ACM SIGMOD international conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1007568.1007620.

5

Sahba, Amin, and John J. Prevost. "Hypercube based clusters in Cloud Computing." In 2016 World Automation Congress (WAC). IEEE, 2016. http://dx.doi.org/10.1109/wac.2016.7582974.

6

Anedda, P., M. Gaggero, G. Busonera, O. Schiaratura, and G. Zanetti. "Flexible Clusters for High-Performance Computing." In 2010 IEEE 12th International Conference on High Performance Computing and Communications (HPCC 2010). IEEE, 2010. http://dx.doi.org/10.1109/hpcc.2010.34.

7

Kindratenko, Volodymyr V., Jeremy J. Enos, Guochun Shi, Michael T. Showerman, Galen W. Arnold, John E. Stone, James C. Phillips, and Wen-mei Hwu. "GPU clusters for high-performance computing." In 2009 IEEE International Conference on Cluster Computing and Workshops. IEEE, 2009. http://dx.doi.org/10.1109/clustr.2009.5289128.

8

Roush, Ellard, and Zoram Thanga. "Zone Clusters: A virtual cluster based upon Solaris containers." In 2009 IEEE International Conference on Cluster Computing and Workshops. IEEE, 2009. http://dx.doi.org/10.1109/clustr.2009.5289197.

9

Pramanick, Mauro, and Zhu. "A system recovery benchmark for clusters." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253338.

10

Brunst, Nagel, and Malony. "A distributed performance analysis architecture for clusters." In Proceedings IEEE International Conference on Cluster Computing CLUSTR-03. IEEE, 2003. http://dx.doi.org/10.1109/clustr.2003.1253301.


Reports on the topic "Computing clusters"

1

Dongarra, Jack, Thomas Sterling, Horst Simon, and Erich Strohmaier. High performance computing: Clusters, constellations, MPPs, and future directions. Office of Scientific and Technical Information (OSTI), June 2003. http://dx.doi.org/10.2172/813392.

2

Garlick, J. I/O Forwarding on Livermore Computing Commodity Linux Clusters. Office of Scientific and Technical Information (OSTI), December 2012. http://dx.doi.org/10.2172/1070157.

3

Abu-Ghazaleh, Nael. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters. Fort Belvoir, VA: Defense Technical Information Center, August 2005. http://dx.doi.org/10.21236/ada438052.

4

Alsing, Paul, Michael Fanto, and A. M. Smith. Cluster State Quantum Computing. Fort Belvoir, VA: Defense Technical Information Center, December 2012. http://dx.doi.org/10.21236/ada572237.

5

Richards, Mark A., and Daniel P. Campbell. Rapidly Reconfigurable High Performance Computing Cluster. Fort Belvoir, VA: Defense Technical Information Center, July 2005. http://dx.doi.org/10.21236/ada438586.

6

Duke, D. W., and T. P. Green. [Research toward a heterogeneous networked computing cluster]. Office of Scientific and Technical Information (OSTI), August 1998. http://dx.doi.org/10.2172/674884.

7

Li, Haoyuan, Ali Ghodsi, Matei Zaharia, Scott Shenker, and Ion Stoica. Reliable, Memory Speed Storage for Cluster Computing Frameworks. Fort Belvoir, VA: Defense Technical Information Center, June 2014. http://dx.doi.org/10.21236/ada611854.

8

Chen, H. Y., J. M. Brandt, and R. C. Armstrong. ATM-based cluster computing for multi-problem domains. Office of Scientific and Technical Information (OSTI), August 1996. http://dx.doi.org/10.2172/415338.

9

Burger, Eric. Studying Building Energy Use with a Micro Computing Cluster. Experiment, October 2014. http://dx.doi.org/10.18258/3777.

10

Ilg, Mark. Multi-Core Computing Cluster for Safety Fan Analysis of Guided Projectiles. Fort Belvoir, VA: Defense Technical Information Center, September 2011. http://dx.doi.org/10.21236/ada551790.
