Journal articles on the topic 'Computing clusters'

To see the other types of publications on this topic, follow the link: Computing clusters.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Computing clusters.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Rosenberg, Arnold L., and Ron C. Chiang. "Heterogeneity in Computing: Insights from a Worksharing Scheduling Problem." International Journal of Foundations of Computer Science 22, no. 06 (September 2011): 1471–93. http://dx.doi.org/10.1142/s0129054111008829.

Full text
Abstract:
Heterogeneity complicates the use of multicomputer platforms. Can it also enhance their performance? How can one measure the power of a heterogeneous assemblage of computers ("cluster"), in absolute terms (how powerful is this cluster) and relative terms (which cluster is more powerful)? Is a cluster that has one super-fast computer and the rest of "average" speed more/less powerful than one all of whose computers are "moderately" fast? If you can replace just one computer in a cluster with a faster one, should you replace the fastest? the slowest? A result concerning "worksharing" in heterogeneous clusters provides a highly idealized, yet algorithmically meaningful, framework for studying such questions in a way that admits rigorous analysis and formal proof. We encounter some surprises as we answer the preceding questions (perforce, within the idealized framework). Highlights: (1) If one can replace only one computer in a cluster by a faster one, it is (almost) always most advantageous to replace the fastest one. (2) If the computers in two clusters have the same mean speed, then the cluster with the larger variance in speed is (almost) always more productive (verified analytically for small clusters and empirically for large ones). (3) Heterogeneity can actually enhance a cluster's computing power.
APA, Harvard, Vancouver, ISO, and other styles
2

Weber, Michael. "Workstation Clusters: One Way to Parallel Computing." International Journal of Modern Physics C 04, no. 06 (December 1993): 1307–14. http://dx.doi.org/10.1142/s0129183193001026.

Abstract:
The feasibility and constraints of workstation clusters for parallel processing are investigated. Measurements of latency and bandwidth are presented to position clusters relative to massively parallel systems. This makes it possible to identify the kinds of applications that appear suited to running on a cluster.
3

Tripathy, Minakshi, and C. R. Tripathy. "A Comparative Analysis of Performance of Shared Memory Cluster Computing Interconnection Systems." Journal of Computer Networks and Communications 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/128438.

Abstract:
In the recent past, many types of shared memory cluster computing interconnection systems have been proposed. Each of these systems has its own advantages and limitations. As the system size of cluster interconnection systems increases, a comparative analysis of their various performance measures becomes inevitable. Cluster architecture, load balancing, and fault tolerance are some of the important aspects that need to be addressed, and the comparison needs to be made in order to choose the best system for a particular application. In this paper, a detailed comparative study of four important and distinct classes of shared memory cluster architectures is made. The systems taken up for the purpose of the study are shared memory clusters, hierarchical shared memory clusters, distributed shared memory clusters, and virtual distributed shared memory clusters. These clusters are analyzed and compared on the basis of architecture, load balancing, and fault tolerance, and the results of the comparison are reported.
4

Singhania, Shrinkhala, and Monika Tak. "Workstation Clusters for Parallel Computing." International Journal of Engineering Trends and Technology 28, no. 1 (October 25, 2015): 13–14. http://dx.doi.org/10.14445/22315381/ijett-v28p203.

5

Stone, J., and F. Ercal. "Workstation clusters for parallel computing." IEEE Potentials 20, no. 2 (2001): 31–33. http://dx.doi.org/10.1109/45.954655.

6

Regassa, Dereje, Heonyoung Yeom, and Yongseok Son. "Harvesting the Aggregate Computing Power of Commodity Computers for Supercomputing Applications." Applied Sciences 12, no. 10 (May 19, 2022): 5113. http://dx.doi.org/10.3390/app12105113.

Abstract:
Distributed supercomputing is becoming common in both companies and academia. Many parallel computing researchers have focused on harnessing the power of commodity processors, and even internet-connected computers, aggregating their computational power to solve computationally complex problems. Using flexible commodity cluster computers for supercomputing workloads, rather than a dedicated supercomputer and expensive high-performance computing (HPC) infrastructure, is cost-effective. Its scalable nature means it can be fitted to the available organizational resources, which can benefit researchers who aim to conduct numerous repetitive calculations on small to large volumes of data and obtain valid results in a reasonable time. In this paper, we design and implement an HPC-based supercomputing facility from commodity computers at an organizational level, providing two separate implementations for cluster-based supercomputing: Hadoop- and Spark-based HPC clusters, primarily for data-intensive jobs, and Torque-based clusters for Multiple Instruction Multiple Data (MIMD) workloads. The performance of these clusters is measured through extensive experimentation. With the implementation of the message passing interface, the performance of the Spark and Torque clusters is increased by 16.6% for repetitive applications and by 73.68% for computation-intensive applications, with speedups of 1.79 and 2.47, respectively, on the HPDA cluster. We conclude that a specific application or job can be directed to the appropriate implemented cluster based on its computation parameters.
7

Llorens-Carrodeguas, Alejandro, Stefanos G. Sagkriotis, Cristina Cervelló-Pastor, and Dimitrios P. Pezaros. "An Energy-Friendly Scheduler for Edge Computing Systems." Sensors 21, no. 21 (October 28, 2021): 7151. http://dx.doi.org/10.3390/s21217151.

Abstract:
The deployment of modern applications, like massive Internet of Things (IoT), poses a combination of challenges that service providers need to overcome: high availability of the offered services, low latency, and low energy consumption. To overcome these challenges, service providers have been placing computing infrastructure close to the end users, at the edge of the network. In this vein, single board computer (SBC) clusters have gained attention due to their low cost, low energy consumption, and easy programmability. A subset of IoT applications requires the deployment of battery-powered SBCs, or clusters thereof. More recently, the deployment of services on SBC clusters has been automated through the use of containers. The management of these containers is performed by orchestration platforms, like Kubernetes. However, orchestration platforms do not consider remaining energy levels in their placement decisions and are therefore not optimized for energy-constrained environments. In this study, we propose a scheduler that is optimized for energy-constrained SBC clusters and operates within Kubernetes. Through comparison with the available schedulers, we achieved 23% fewer event rejections, 83% fewer deadline violations, and approximately a 59% reduction in energy consumed throughout the cluster.
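The core idea of the abstract above, factoring remaining battery levels into placement decisions, can be sketched as a toy scoring function. This is only an illustration of the concept: the node fields, weights, and function names here are assumptions, not the paper's actual scheduler or the Kubernetes API.

```python
# Toy sketch (not the paper's scheduler): rank cluster nodes for a
# workload by remaining battery level as well as free CPU, so that
# energy-constrained nodes are drained more slowly.

def score(node, cpu_request):
    """Return a placement score; higher is better, None if infeasible."""
    if node["free_cpu"] < cpu_request:
        return None  # node cannot host the workload
    # Weight remaining energy more heavily than spare CPU headroom.
    return 0.7 * node["battery_pct"] + 0.3 * (node["free_cpu"] - cpu_request)

def place(nodes, cpu_request):
    """Pick the feasible node with the highest score, or None."""
    scored = [(score(n, cpu_request), n["name"]) for n in nodes]
    feasible = [(s, name) for s, name in scored if s is not None]
    return max(feasible)[1] if feasible else None

nodes = [
    {"name": "sbc-1", "battery_pct": 80, "free_cpu": 1.0},
    {"name": "sbc-2", "battery_pct": 30, "free_cpu": 2.0},
    {"name": "sbc-3", "battery_pct": 90, "free_cpu": 0.2},
]
print(place(nodes, cpu_request=0.5))  # → sbc-1 (high battery, enough CPU)
```

In a real Kubernetes deployment this logic would live in a custom scheduler or scoring plugin; the sketch only shows why an energy term changes placements relative to a purely CPU-based score.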
8

Accion, E., A. Bria, G. Bernabeu, M. Caubet, M. Delfino, X. Espinal, G. Merino, F. Lopez, F. Martinez, and E. Planas. "Dimensioning storage and computing clusters for efficient high throughput computing." Journal of Physics: Conference Series 396, no. 4 (December 13, 2012): 042040. http://dx.doi.org/10.1088/1742-6596/396/4/042040.

9

Xu, Zhao. "Coordination Method on Cloud Computing Clusters." Advanced Materials Research 605-607 (December 2012): 2160–63. http://dx.doi.org/10.4028/www.scientific.net/amr.605-607.2160.

Abstract:
This paper makes the case for explicit coordination of network transmission activities among virtual machines (VMs) in the data center Ethernet to proactively prevent network congestion. We think that virtualization has opened up new opportunities for explicit coordination that are simple, effective, currently feasible, and independent of switch-level hardware support. We show that explicit coordination can be implemented transparently without modifying any applications, standard protocols, network switches, or VMs.
10

Harrawood, Brian P., Greeshma A. Agasthya, Manu N. Lakshmanan, Gretchen Raterman, and Anuj J. Kapadia. "Geant4 distributed computing for compact clusters." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 764 (November 2014): 11–17. http://dx.doi.org/10.1016/j.nima.2014.07.014.

11

Atiquzzaman, Mohammed, and Pradip K. Srimani. "Parallel computing on clusters of workstations." Parallel Computing 26, no. 2-3 (February 2000): 175–77. http://dx.doi.org/10.1016/s0167-8191(99)00101-5.

12

Grushin, D. A., and N. N. Kuzyurin. "On Effective Scheduling in Computing Clusters." Programming and Computer Software 45, no. 7 (December 2019): 398–404. http://dx.doi.org/10.1134/s0361768819070077.

13

Sheng, Chong Chong, Wei Song Hu, Xin Ming Hu, and Bai Feng Wu. "StreamMAP: Automatic Task Assignment System on GPU Cluster." Advanced Materials Research 926-930 (May 2014): 2414–17. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.2414.

Abstract:
GPU clusters, which use general-purpose GPUs (GPGPUs) as accelerators, are becoming increasingly popular in the high-performance computing area. Currently, the predominant programming model for GPU clusters is hybrid MPI/CUDA. However, with this model, programmers tend to need detailed knowledge of the hardware resources, which makes programs more complicated and less portable. In this paper, we present StreamMAP, an automatic task assignment system for GPU clusters. The main contributions of StreamMAP are: (1) it provides a powerful yet concise language extension suitable for describing the computing resource demands of cluster tasks; (2) it maintains resource information and implements automatic task assignment for GPU clusters. Experiments show that StreamMAP provides programmability, portability, and performance gains.
14

Du, Ran, Jingyan Shi, Xiaowei Jiang, and Jiaheng Zou. "Cosmos: A Unified Accounting System both for the HTCondor and Slurm Clusters at IHEP." EPJ Web of Conferences 245 (2020): 07060. http://dx.doi.org/10.1051/epjconf/202024507060.

Abstract:
HTCondor was adopted to manage the High Throughput Computing (HTC) cluster at IHEP in 2016. In 2017, a Slurm cluster was set up to run High Performance Computing (HPC) jobs. To provide accounting services for these two clusters, we implemented a unified accounting system named Cosmos. Multiple workloads bring different accounting requirements. Briefly speaking, there are four types of jobs to account for. First, 30 million single-core jobs run in the HTCondor cluster every year. Second, Virtual Machine (VM) jobs run in the legacy HTCondor VM cluster. Third, parallel jobs run in the Slurm cluster, some of which run on GPU worker nodes to accelerate computing. Lastly, some selected HTC jobs are migrated from the HTCondor cluster to the Slurm cluster for research purposes. To satisfy all of these requirements, Cosmos is implemented with four layers: acquisition, integration, statistics, and presentation. Details of the issues and solutions at each layer are presented in the paper. Cosmos has run in production for two years, and its status shows that it is a well-functioning system that meets the requirements of the HTCondor and Slurm clusters.
15

Valentino, Rico, Woo-Sung Jung, and Young-Bae Ko. "A Design and Simulation of the Opportunistic Computation Offloading with Learning-Based Prediction for Unmanned Aerial Vehicle (UAV) Clustering Networks." Sensors 18, no. 11 (November 2, 2018): 3751. http://dx.doi.org/10.3390/s18113751.

Abstract:
Drones have recently become extremely popular, especially in military and civilian applications. Examples of drone utilization include reconnaissance, surveillance, and packet delivery. As time has passed, drones’ tasks have become larger and more complex. As a result, swarms or clusters of drones are preferred, because they offer more coverage, flexibility, and reliability. However, drone systems have limited computing power and energy resources, which means that sometimes it is difficult for drones to finish their tasks on schedule. A solution to this is required so that drone clusters can complete their work faster. One possible solution is an offloading scheme between drone clusters. In this study, we propose an opportunistic computational offloading system, which allows for a drone cluster with a high intensity task to borrow computing resources opportunistically from other nearby drone clusters. We design an artificial neural network-based response time prediction module for deciding whether it is faster to finish tasks by offloading them to other drone clusters. The offloading scheme is conducted only if the predicted offloading response time is smaller than the local computing time. Through simulation results, we show that our proposed scheme can decrease the response time of drone clusters through an opportunistic offloading process.
16

Singh, Niharika, Upasana Lakhina, Ajay Jangra, and Priyanka Jangra. "Verification and Identification Approach to Maintain MVCC in Cloud Computing." International Journal of Cloud Applications and Computing 7, no. 4 (October 2017): 41–59. http://dx.doi.org/10.4018/ijcac.2017100103.

Abstract:
Multiversion concurrency control is maintained in transactional database systems for secure, fast, and efficient access to shared data files. Most of the services and applications offered in the cloud are real-time, which entails an optimized, compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity. Here, cluster-based communication is set up for processing. Intercommunication among different clusters is administered through middleware, while slave intra-communication is handled by verification and identification protection. The proposed approach incorporates resistive flow to handle high-impact systems by identifying and verifying multiple processes. Statistical analysis determines that the new scheme reduces the overheads of different master and slave servers, as they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
17

Kovalenko, Vadim, Anna Rodakova, Hamza Mohammed Ridha Al-Khafaji, Artem Volkov, Ammar Muthanna, and Andrey Koucheryavy. "Resource Allocation Computing Algorithm for UAV Dynamical Statements based on AI Technology." Webology 19, no. 1 (January 20, 2022): 2307–19. http://dx.doi.org/10.14704/web/v19i1/web19157.

Abstract:
Unmanned aerial vehicle (UAV) networks are among the complex and relevant communication networks of 5G and of the networks of 2030, alongside the development of technologies for network function virtualization (NFV), containerization, and the orchestration of data systems. NFV technology can be implemented not only in the data center but also on a switch or router. Thus, by analogy with this trend, flying network segments can also use computing power to solve various problems, for example, deploying a flying base station controller or an internal network controller on virtual distributed capacities. Within this direction, there are a number of interrelated tasks that need to be resolved using the trends and capabilities of Artificial Intelligence technologies. This paper proposes an algorithm for discovering computing resources in real time. The article defines the criteria for choosing the head node and the cluster with the highest total resources, considers the possibility of implementing the functions of an SDN controller in a UAV cluster, outlines the main possible functions and tasks of the UAV, and proposes a three-level architecture based on the separation of the functions performed by the UAVs. In this work, simulation was carried out in Matlab to detect areas of increased load, form UAV clusters, select a head node in each cluster, and select the UAV cluster with the highest total resources for subsequent migration to the area of increased load.
18

Hamad, Faten. "An Overview of Hadoop Scheduler Algorithms." Modern Applied Science 12, no. 8 (July 26, 2018): 69. http://dx.doi.org/10.5539/mas.v12n8p69.

Abstract:
Hadoop is an open-source cloud computing system used in large-scale data processing; it has become the basic computing platform for many internet companies. With the Hadoop platform, users can develop cloud computing applications and then submit tasks to the platform. Hadoop has strong fault tolerance and can easily increase the number of cluster nodes, using linear expansion of the cluster size so that clusters can process larger datasets. However, Hadoop has some shortcomings, especially those exposed in actual use of the MapReduce scheduler, which calls for more research on Hadoop scheduling algorithms. This survey provides an overview of the default Hadoop scheduler algorithms and the problems they have. It also compares five Hadoop framework scheduling algorithms in terms of the default scheduler algorithm to be enhanced, the proposed scheduler algorithm, the type of cluster applied (heterogeneous or homogeneous), the methodology, and cluster classification based on performance evaluation. Finally, a new algorithm based on capacity scheduling and the use of prospective resource utilization to enhance Hadoop scheduling is proposed.
19

Mesheryakov, Roman, Alexander Moiseev, Anton Demin, Vadim Dorofeev, and Vasily Sorokin. "Using Parallel Computing in Queueing Network Simulation." Key Engineering Materials 685 (February 2016): 943–47. http://dx.doi.org/10.4028/www.scientific.net/kem.685.943.

Abstract:
The paper is devoted to the simulation of queueing networks on high-performance computer clusters. The objective is to develop a mathematical model of a queueing network and a simulation approach to modelling general network functionality, as well as to provide a software implementation on a high-performance computer cluster. The simulation is based on a discrete-event approach, object-oriented programming, and MPI technology. The queueing network simulation system was developed as an application that allows a user to simulate networks of fairly arbitrary configuration. The experiments on a high-performance computer cluster demonstrate the high efficiency of parallel computing.
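The discrete-event approach named in the abstract above can be illustrated with a minimal single-station (M/M/1) simulator: an event heap, exponential interarrival and service times, and a server state machine. This is a generic textbook sketch, not the paper's MPI-based network simulator; names and parameters are assumptions.

```python
# Minimal discrete-event simulation of one M/M/1 queueing station.
# Events live in a min-heap ordered by time; each pop advances the clock.
import heapq
import random

def simulate_mm1(lam, mu, n_customers, seed=1):
    """Average waiting time in queue for the first n_customers."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), "arrival")]  # (time, kind) heap
    queue = []            # arrival times of customers waiting for service
    busy = False
    served = 0
    total_wait = 0.0
    arrivals_scheduled = 1
    while served < n_customers:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            queue.append(t)
            if arrivals_scheduled < n_customers:
                heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
                arrivals_scheduled += 1
        else:  # a departure frees the server
            busy = False
            served += 1
        if not busy and queue:
            arrived = queue.pop(0)
            total_wait += t - arrived       # time spent waiting in queue
            busy = True
            heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
    return total_wait / n_customers

# Theory for M/M/1: Wq = lam / (mu * (mu - lam)) = 1.0 for these rates.
print(f"avg wait ~ {simulate_mm1(0.5, 1.0, 5000):.2f}")
```

A network simulator extends this by routing departed customers to other stations' arrival streams; parallelism (as in the paper) typically comes from distributing independent replications or stations across MPI ranks.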
20

Sun, Chengzhung, Wanlei Zhou, Andrzej Goscinski, and Xue-bin Chi. "Advanced services for Clusters and Internet computing." Future Generation Computer Systems 20, no. 4 (May 2004): 501–3. http://dx.doi.org/10.1016/s0167-739x(03)00169-9.

21

Javangula, Pradeep, Kourosh Modarre, Paresh Shenoy, Yi Liu, and Aran Nayebi. "Efficient Hybrid Algorithms for Computing Clusters Overlap." Procedia Computer Science 108 (2017): 1050–59. http://dx.doi.org/10.1016/j.procs.2017.05.212.

22

Dongarra, Jack, Masaaki Shimasaki, and Bernard Tourancheau. "Clusters and computational grids for scientific computing." Parallel Computing 27, no. 11 (October 2001): 1401–2. http://dx.doi.org/10.1016/s0167-8191(01)00095-3.

23

Mahanti, Anirban, and Derek L. Eager. "Adaptive data parallel computing on workstation clusters." Journal of Parallel and Distributed Computing 64, no. 11 (November 2004): 1241–55. http://dx.doi.org/10.1016/j.jpdc.2004.07.005.

24

Dongarra, Jack J., and Bernard Tourancheau. "Clusters and Computational Grids for Scientific Computing." International Journal of High Performance Computing Applications 13, no. 3 (August 1999): 179. http://dx.doi.org/10.1177/109434209901300301.

25

Niemi, Tapio, and Ari-Pekka Hameri. "Memory-based scheduling of scientific computing clusters." Journal of Supercomputing 61, no. 3 (April 22, 2011): 520–44. http://dx.doi.org/10.1007/s11227-011-0612-6.

26

Xavier, Percival, Wentong Cai, and Bu-Sung Lee. "Workload management of cooperatively federated computing clusters." Journal of Supercomputing 36, no. 3 (June 2006): 309–22. http://dx.doi.org/10.1007/s11227-006-8300-7.

27

Lu, Wei Jia, and Zhuang Zhi Yan. "Improved FCM Algorithm Based on K-Means and Granular Computing." Journal of Intelligent Systems 24, no. 2 (June 1, 2015): 215–22. http://dx.doi.org/10.1515/jisys-2014-0119.

Abstract:
The fuzzy clustering algorithm has been widely used both in research and in production. However, conventional fuzzy algorithms have the disadvantage of high computational complexity. This article proposes an improved fuzzy C-means (FCM) algorithm based on K-means and the principle of granularity. The algorithm aims to solve the problems of determining the optimal number of clusters and of sensitivity to data initialization in conventional FCM methods. The initialization stage of the K-medoid cluster, which differs from others, has strong representational power and is capable of detecting data of different sizes. Meanwhile, through the combination of granular computing and FCM, the optimal number of clusters is obtained by choosing accurate validity functions. Finally, the detailed clustering process of the proposed algorithm is presented, and its performance is validated by simulation tests. The test results show that the proposed improved FCM algorithm has enhanced clustering performance in computational complexity, running time, and cluster effectiveness compared with existing FCM algorithms.
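For context, the standard FCM iteration that the article above builds on can be sketched as follows. This is the generic textbook algorithm with fuzzifier m = 2 and random initialization, not the authors' improved variant, which additionally seeds centers via K-means/K-medoids and selects the cluster count with validity functions.

```python
# Standard fuzzy C-means: alternate between updating cluster centers
# (membership-weighted means) and fuzzy memberships (inverse-distance rule).
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # rows are memberships summing to 1
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # pairwise point-to-center distances, eps avoids division by zero
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-p) / sum_j d_ij^(-p)
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centers, U
```

On two well-separated blobs, the recovered centers land near the blob means and each row of U concentrates on one cluster; the article's improvements target exactly the sensitivity of this loop to its random initialization.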
28

Meng, Lamei. "The Promotion Effect of the Improved ISCA Model on the Application of Accounting Informatization in Small- and Medium-Sized Enterprises in the Cloud Computing Environment." Mobile Information Systems 2022 (May 11, 2022): 1–13. http://dx.doi.org/10.1155/2022/4228178.

Abstract:
Cloud computing has played a strong role in promoting accounting informatization in small and medium-sized enterprises, helping to accelerate its construction. To improve the effect of accounting informatization in small and medium-sized enterprises, this paper proposes an improved ISCA accounting informatization model based on cloud computing, together with a semisupervised clustering method based on minimum prototype clusters to classify accounting information. Moreover, the paper uses labeled samples to measure the compactness and purity of clusters and to guide cluster splitting, and constructs a corresponding intelligent model structure. The research shows that the improved ISCA model proposed in this paper has a very clear effect on improving the application of accounting informatization in small and medium-sized enterprises in the cloud computing environment.
29

Aydin, Semra, and Omer Faruk Bay. "Building a high performance computing clusters to use in computing course applications." Procedia - Social and Behavioral Sciences 1, no. 1 (2009): 2396–401. http://dx.doi.org/10.1016/j.sbspro.2009.01.420.

30

Piyoungkorn, Kajornsak, Siriboon Chaisawat, and Chalee Vorakulpipat. "Trusted Electronic Contract for Enabling Peer-to-Peer HPC Resource Sharing." Applied Sciences 12, no. 10 (May 20, 2022): 5153. http://dx.doi.org/10.3390/app12105153.

Abstract:
With the growing need for HPC resource usage in Thailand, this study aims to foster the creation of an HPC resource sharing ecosystem based on available in-house computing infrastructure. A model of computing resource sharing based on blockchain technology is presented for bridging communication between multiple clusters of HPC systems. The use of blockchain technology allows states among HPC systems to be synchronized and extends capabilities for enforcing governing rules. A smart contract was deployed on the blockchain network to enable users to request computing resources. When a request is made, a matching scheme automatically selects a suitable cluster based on current cluster utilization data and distance from the user. Since users and clusters are anonymized from each other, a trusted payment scheme and permission-based access control are presented to assure both parties. As the system leverages off-chain and on-chain data exchange to carry out its operation, a secure gateway is proposed to mitigate technical difficulties from the client's perspective and ensure information flows securely to and from legitimate actors. As a result of this work, HPC service providers can maximize the utilization of their resources and monetize idle computing time, while users can access the resources they need conveniently and pay a reasonable price.
31

Alandes Pradillo, Maria, Nils Høimyr, Pablo Llopis Sanmillan, and Markus Tapani Jylhänkangas. "Migrating Engineering Windows HPC applications to Linux HTCondor and Slurm Clusters." EPJ Web of Conferences 245 (2020): 09016. http://dx.doi.org/10.1051/epjconf/202024509016.

Abstract:
The CERN IT department has been maintaining different High Performance Computing (HPC) services over the past five years. While the bulk of computing facilities at CERN run under Linux, a Windows cluster was dedicated to engineering simulations and analysis related to accelerator technology development. The Windows cluster consisted of machines with powerful CPUs, large memory, and a low-latency interconnect. The Linux cluster resources are accessible through HTCondor and are used for general-purpose parallel but single-node jobs, providing computing power to the CERN experiments and departments for tasks such as physics event reconstruction, data analysis, and simulation. For HPC workloads that require multi-node parallel environments for Message Passing Interface (MPI) based programs, there is another Linux-based HPC service comprising several clusters that run under the Slurm batch system and consist of powerful hardware with low-latency interconnects. In 2018, it was decided to consolidate compute-intensive jobs on Linux to make better use of the existing resources. Moreover, this was also in line with the CERN IT strategy of reducing its dependencies on Microsoft products. This paper focuses on the migration of Ansys [1], COMSOL [2] and CST [3] users from Windows HPC to Linux clusters. Ansys, COMSOL and CST are three engineering applications used at CERN in different domains, such as multiphysics simulations and electromagnetic field problems. Users of these applications are in different departments, with different needs and levels of expertise. In most cases, the users have no prior knowledge of Linux. The paper presents the technical strategy for allowing the engineering users to submit their simulations to the appropriate Linux cluster, depending on their simulation requirements. We also describe the technical solution for integrating their Windows workstations so that they are able to submit to Linux clusters. Finally, we discuss the challenges and lessons learnt during the migration.
32

Barreiro Megino, Fernando Harald, Jeffrey Ryan Albert, Frank Berghaus, Kaushik De, FaHui Lin, Danika MacDonell, Tadashi Maeno, et al. "Using Kubernetes as an ATLAS computing site." EPJ Web of Conferences 245 (2020): 07025. http://dx.doi.org/10.1051/epjconf/202024507025.

Abstract:
In recent years containerization has revolutionized cloud environments, providing a secure, lightweight, standardized way to package and execute software. Solutions such as Kubernetes enable orchestration of containers in a cluster, including for the purpose of job scheduling. Kubernetes is becoming a de facto standard, available at all major cloud computing providers, and is gaining increased attention from some WLCG sites. In particular, CERN IT has integrated Kubernetes into their cloud infrastructure by providing an interface to instantly create Kubernetes clusters, and the University of Victoria is pursuing an infrastructure-as-code approach to deploying Kubernetes as a flexible and resilient platform for running services and delivering resources. The ATLAS experiment at the LHC has partnered with CERN IT and the University of Victoria to explore and demonstrate the feasibility of running an ATLAS computing site directly on Kubernetes, replacing all grid computing services. We have interfaced ATLAS’ workload submission engine PanDA with Kubernetes, to directly submit and monitor the status of containerized jobs. We describe the integration and deployment details, and focus on the lessons learned from running a wide variety of ATLAS production payloads on Kubernetes using clusters of several thousand cores at CERN and the Tier 2 computing site in Victoria.
33

Al-Neama, Mohammed W., Naglaa M. Reda, and Fayed F. M. Ghaleb. "An Improved Distance Matrix Computation Algorithm for Multicore Clusters." BioMed Research International 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/406178.

Abstract:
The distance matrix has diverse uses in different research areas. Its computation is typically an essential task in most bioinformatics applications, especially in multiple sequence alignment. The gigantic explosion of biological sequence databases leads to an urgent need to accelerate these computations. The DistVect algorithm was introduced in the paper of Al-Neama et al. (in press) to present a recent approach for vectorizing distance matrix computing. It showed efficient performance in both sequential and parallel computing. However, the multicore cluster systems that are available now, with their scalability and performance/cost ratio, meet the need for more powerful and efficient performance. This paper proposes DistVect1 as a highly efficient parallel vectorized algorithm with high performance for computing the distance matrix, addressed to multicore clusters. It reformulates the DistVect vectorized algorithm in terms of cluster primitives and deduces an efficient approach to partitioning and scheduling computations suited to this type of architecture. Implementations employ the potential of both the MPI and OpenMP libraries. Experimental results show that the proposed method achieves around 3-fold speedup over SSE2. Further, it also achieves speedups of more than 9 orders of magnitude compared to the publicly available parallel implementation utilized in ClustalW-MPI.
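The general idea of vectorizing a distance matrix can be illustrated with a small NumPy sketch. This is a generic Euclidean example of the abstract's broad theme, not the DistVect1 algorithm itself, which targets sequence distances and distributes work across cluster nodes via MPI and OpenMP.

```python
# All-pairs Euclidean distance matrix without Python-level loops, using
# the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b so the bulk of
# the work becomes one matrix multiplication.
import numpy as np

def distance_matrix(X):
    """Return the n x n matrix of Euclidean distances between rows of X."""
    sq = (X ** 2).sum(axis=1)                       # squared row norms
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))             # clamp rounding negatives
```

In a cluster setting, the natural decomposition is to hand each rank a block of rows and compute only that block of the matrix (the matrix is symmetric, so half the blocks suffice); the vectorized kernel above is what each worker would run locally.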
34

Liu, Liang, and Tian Yu Wo. "A Scalable Data Platform for Cloud Computing Systems." Applied Mechanics and Materials 577 (July 2014): 860–64. http://dx.doi.org/10.4028/www.scientific.net/amm.577.860.

Abstract:
With cloud computing systems becoming popular, designing a scalable, highly available, and cost-effective data platform has become a hot topic. This paper proposes such a data platform built from MySQL DBMS building blocks. For scalability, a three-level (system, super-cluster, cluster) architecture is applied, allowing the platform to scale to thousands of applications. For availability, asynchronous replication across geographically dispersed super-clusters provides disaster recovery, synchronous replication within a cluster performs failure recovery, and hot-standby or even process-pair mechanisms for controllers enhance fault tolerance. For resource utilization, a novel load-balancing strategy exploits the key property that the throughput requirements of web applications fluctuate over time. Experiments with the NLPIR dataset indicate that the system can scale to a large number of web applications and makes good use of the resources provided.
35

Mellerup, Erling, Martin Balslev Jørgensen, Henrik Dam, and Gert Lykke Møller. "Combinations of SNP genotypes from the Wellcome Trust Case Control Study of bipolar patients." Acta Neuropsychiatrica 30, no. 2 (December 6, 2017): 106–10. http://dx.doi.org/10.1017/neu.2017.36.

Abstract:
Objectives: Combinations of genetic variants are the basis for polygenic disorders. We examined combinations of SNP genotypes taken from the 446,729 SNPs in The Wellcome Trust Case Control Study of bipolar patients. Methods: Parallel computing by graphics processing units, cloud computing, and data mining tools were used to scan The Wellcome Trust data set for combinations. Results: Two clusters of combinations were significantly associated with bipolar disorder. One cluster contained 68 combinations, each of which included five SNP genotypes. Of the 1998 patients, 305 had combinations from this cluster in their genome, but none of the 1500 controls did. The other cluster contained six combinations, each of which included five SNP genotypes. Of the 1998 patients, 515 had combinations from this cluster in their genome, but again none of the 1500 controls did. Conclusion: Clusters of combinations of genetic variants can be considered general risk factors for polygenic disorders, whereas accumulation of combinations from the clusters in the genome of a patient can be considered a personal risk factor.
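The core screening step, finding genotype combinations that occur in patients but in no control, can be sketched as a brute-force scan. This is illustrative only; the actual study used GPU and cloud-scale data mining over a vastly larger candidate space, and the genotype encoding below is hypothetical.

```python
from itertools import combinations

def risk_combinations(cases, controls, k=2):
    """Find k-sized genotype combinations present in at least one case
    but absent (as a subset) from every control genome.

    cases, controls: iterables of sets of genotype labels,
    e.g. {"rs123:AA", "rs456:GT"}. Toy version of the cluster scan.
    """
    control_sets = [set(c) for c in controls]
    found = set()
    for case in cases:
        for combo in combinations(sorted(case), k):
            # Keep the combination only if no control carries all of it.
            if not any(set(combo) <= cs for cs in control_sets):
                found.add(combo)
    return found
```

In the published analysis k = 5 and statistical significance testing follows the scan; the sketch only shows the presence/absence filter.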
36

BANDYOPADHYAY, S., V. P. ROYCHOWDHURY, and D. B. JANES. "CHEMICALLY SELF-ASSEMBLED NANOELECTRONIC COMPUTING NETWORKS." International Journal of High Speed Electronics and Systems 09, no. 01 (March 1998): 1–35. http://dx.doi.org/10.1142/s0129156498000038.

Abstract:
Recent advances in chemical self-assembly will soon make it possible to synthesize extremely powerful computing machinery from metallic clusters and organic molecules. These self-organized networks can function as Boolean logic circuits, associative memory, image processors, and combinatorial optimizers. Computational or signal processing activity is elicited from simple charge interactions between clusters which are resistively/capacitively linked by conjugated molecular wires or ribbons. The resulting circuits are massively parallel, fault-tolerant, ultrafast, ultradense and dissipate very little power.
37

Wu, Wenkai, Theodoros Pavloudis, Alexey V. Verkhovtsev, Andrey V. Solov’yov, and Richard E. Palmer. "Molecular dynamics simulation of nanofilament breakage in neuromorphic nanoparticle networks." Nanotechnology 33, no. 27 (April 12, 2022): 275602. http://dx.doi.org/10.1088/1361-6528/ac5e6d.

Abstract:
Neuromorphic computing systems may be the future of computing, and cluster-based networks are a promising architecture for realizing them. The creation and dissolution of synapses between the clusters are of great importance for their function. In this work, we model the thermal breakage of a gold nanofilament located between two gold nanoparticles via molecular dynamics simulations to study the mechanisms of neuromorphic nanoparticle-based devices. We simulate Au nanowires of different lengths (20–80 Å), widths (4–8 Å) and shapes connecting two Au1415 nanoparticles (NPs) and monitor the evolution of the system via a detailed structural identification analysis. We find that atoms of the nanofilament gradually aggregate towards the clusters, causing the middle of the wire to thin and eventually break. Most of the system remains crystalline during this process, but the centre is molten. The terminal NPs raise the melting point of the nanowires by fixing the wire in place and acting as recrystallization areas. We report a strong dependence on the width of the nanowires, and also on their length and structure. These results may serve as guidelines for the realization of cluster-based neuromorphic computing systems.
38

Li, Tsung-Lung, and Wen-Cai Lu. "Structural and electronic characteristics of intercalated monopotassium–rubrene: Simulation on a commodity computing cluster." Journal of Theoretical and Computational Chemistry 15, no. 04 (June 2016): 1650035. http://dx.doi.org/10.1142/s0219633616500358.

Abstract:
The structural and electronic characteristics of intercalated monopotassium–rubrene (K1Rub) are studied. In intercalated K1Rub, one of the two pairs of phenyl groups of rubrene is intercalated by potassium, whereas the other pair remains pristine. This structural feature facilitates comparison of the electronic structures of the intercalated and pristine pairs of phenyl groups. It is found that, in contrast to potassium adsorption on rubrene, potassium intercalation promotes the carbon [Formula: see text] orbitals of the intercalated pair of phenyls to participate in the electronic structure of the HOMO. Additionally, intercalated K1Rub is used as a test vehicle to study the performance of a commodity computing cluster built to run the General Atomic and Molecular Electronic Structure System (GAMESS) simulation package. It is shown that, for many frequently encountered simulation tasks, the performance of the commodity computing cluster is comparable with that of a massive computing cluster. The high performance-to-cost ratio of clusters built from commodity hardware suggests a feasible way for research institutes to establish their own computing facilities.
39

Ward, Doyle V., Andrew G. Hoss, Raivo Kolde, Helen C. van Aggelen, Joshua Loving, Stephen A. Smith, Deborah A. Mack, et al. "Integration of genomic and clinical data augments surveillance of healthcare-acquired infections." Infection Control & Hospital Epidemiology 40, no. 6 (April 23, 2019): 649–55. http://dx.doi.org/10.1017/ice.2019.75.

Abstract:
Background: Determining infectious cross-transmission events in healthcare settings involves manual surveillance of case clusters by infection control personnel, followed by strain typing of clinical/environmental isolates suspected in those clusters. Recent advances in genomic sequencing and cloud computing now allow rapid molecular typing of infecting isolates. Objective: To facilitate rapid recognition of transmission clusters, we assessed infection control surveillance using whole-genome sequencing (WGS) of microbial pathogens to identify cross-transmission events for epidemiologic review. Methods: Clinical isolates of Staphylococcus aureus, Enterococcus faecium, Pseudomonas aeruginosa, and Klebsiella pneumoniae were obtained prospectively at an academic medical center from September 1, 2016, to September 30, 2017. Isolate genomes were sequenced, followed by single-nucleotide variant analysis; a cloud-computing platform was used for whole-genome sequence analysis and cluster identification. Results: Most strains of the 4 studied pathogens were unrelated, and 34 potential transmission clusters were present. The characteristics of the potential clusters were complex and likely not identifiable by traditional surveillance alone. Notably, only 1 cluster had been suspected by routine manual surveillance. Conclusions: Our work supports the assertion that integrating genomic and clinical epidemiologic data can augment infection control surveillance, both for identifying cross-transmission events and for including missed and excluding misidentified outbreaks (ie, false alarms). Integration of clinical data is essential to prioritize suspect clusters for investigation, and for existing infections a timely review of both the clinical and WGS results holds promise to reduce HAIs. A richer understanding of cross-transmission events within healthcare settings will require expanding current surveillance approaches.
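A common way to turn pairwise SNV distances into candidate transmission clusters is single-linkage grouping under a distance threshold. A minimal sketch, assuming a precomputed distance table; the threshold and isolate names are hypothetical and this is not the paper's pipeline:

```python
def cluster_isolates(names, snv_dist, threshold=10):
    """Single-linkage clustering via union-find: two isolates are joined
    when their pairwise SNV distance is at or below the threshold."""
    parent = {n: n for n in names}

    def find(x):
        # Follow parent pointers with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), d in snv_dist.items():
        if d <= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for n in names:
        clusters.setdefault(find(n), set()).add(n)
    # Deterministic ordering by the lexicographically smallest member.
    return sorted(clusters.values(), key=lambda s: sorted(s)[0])
```

In practice the resulting genomic clusters would then be cross-referenced against clinical epidemiologic data, which is the integration step the abstract emphasizes.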
40

Lapshina, S. Y. "The Optimal Processor Cores' Number choice for the Parallel Cluster Multiple Labeling Technique on High-Performance Computing Systems." Information resources of Russia, no. 5 (2020): 40–43. http://dx.doi.org/10.51218/0204-3653-2020-5-40-43.

Abstract:
The article investigates the optimal number of processor cores for running the Parallel Cluster Multiple Labeling Technique on modern supercomputer systems installed at the JSCC RAS. The technique can be used in any field as a tool for differentiating large lattice clusters, since its input format is independent of the application. At the JSCC RAS, the tool was used to study the spread of epidemics, for which a corresponding multi-agent model was developed. In the course of simulation experiments, a variant of the Parallel Cluster Multiple Labeling Technique for percolation Hoshen–Kopelman clusters, based on the label-linking mechanism, was improved for use on a multiprocessor system.
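For reference, the serial Hoshen–Kopelman algorithm that underlies the parallel multiple-labeling technique can be sketched as follows. The parallel variant splits the lattice into strips processed independently and then links labels across strip boundaries; this is a sketch, not the JSCC RAS implementation.

```python
def hoshen_kopelman(grid):
    """Label connected clusters of occupied cells (1s) in a 2D lattice.

    Classic serial Hoshen-Kopelman: scan row by row, reuse the label of
    an occupied up/left neighbour, and merge labels via union-find when
    two previously separate clusters meet.
    """
    labels = [[0] * len(row) for row in grid]
    parent = [0]  # index 0 unused; cluster labels start at 1

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    for i, row in enumerate(grid):
        for j, occupied in enumerate(row):
            if not occupied:
                continue
            up = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if not up and not left:
                parent.append(next_label)      # open a new cluster
                labels[i][j] = next_label
                next_label += 1
            elif up and left and find(up) != find(left):
                parent[find(up)] = find(left)  # two clusters meet: merge
                labels[i][j] = find(left)
            else:
                labels[i][j] = find(up or left)
    # Resolve every label to its canonical root.
    return [[find(l) if l else 0 for l in row] for row in labels]
```

Its appeal for large lattices is that the grid is scanned once, with cluster merges handled in near-constant time by the union-find structure.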
41

Fernández-Cerero, Damián, Jorge Yago Fernández-Rodríguez, Juan A. Álvarez-García, Luis M. Soria-Morillo, and Alejandro Fernández-Montes. "Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things." Sensors 19, no. 13 (July 9, 2019): 3026. http://dx.doi.org/10.3390/s19133026.

Abstract:
The number of connected sensors and devices is expected to grow into the billions in the near future. However, centralised cloud-computing data centres face various challenges in meeting the requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput and bandwidth constraints. Edge computing is becoming the standard computing paradigm for latency-sensitive, real-time IoT workloads, since it addresses the aforementioned limitations of centralised cloud-computing models. This paradigm relies on bringing computation close to the source of data, which presents serious operational challenges for large-scale cloud-computing providers. In this work, we present an architecture composed of low-cost single-board-computer clusters placed near data sources, together with centralised cloud-computing data centres. The proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT workload requirements while preserving scalability. We include an extensive empirical analysis to assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud architectures and evaluate them through extensive simulation. We finally show that acquisition costs can be drastically reduced while maintaining performance levels in data-intensive IoT use cases.
42

Vasilyev, N. P., and M. M. Rovnyagin. "Hybrid clusters for budget supercomputers and cloud computing." Automation and Remote Control 75, no. 10 (October 2014): 1869–74. http://dx.doi.org/10.1134/s0005117914100130.

43

Montero, Ruben S., Rafael Moreno-Vozmediano, and Ignacio M. Llorente. "An elasticity model for High Throughput Computing clusters." Journal of Parallel and Distributed Computing 71, no. 6 (June 2011): 750–57. http://dx.doi.org/10.1016/j.jpdc.2010.05.005.

44

Kim, Seontae, and Young-ri Choi. "Constraint-aware VM placement in heterogeneous computing clusters." Cluster Computing 23, no. 1 (August 1, 2019): 71–85. http://dx.doi.org/10.1007/s10586-019-02966-6.

45

Grushin, D. A., and N. N. Kuzyurin. "Energy-efficient computing for a group of clusters." Programming and Computer Software 39, no. 6 (November 2013): 295–300. http://dx.doi.org/10.1134/s0361768813060030.

46

Kurzyniec, Dawid, and Vaidy Sunderam. "Failure Resilient Heterogeneous Parallel Computing Across Multidomain Clusters." International Journal of High Performance Computing Applications 19, no. 2 (May 2005): 143–55. http://dx.doi.org/10.1177/1094342005054260.

47

Zoraja, Ivan, Hermann Hellwagner, and Vaidy Sunderam. "SCIPVM: Parallel distributed computing on SCI workstation clusters." Concurrency: Practice and Experience 11, no. 3 (March 1999): 121–38. http://dx.doi.org/10.1002/(sici)1096-9128(199903)11:3<121::aid-cpe368>3.0.co;2-z.

48

Karthik, A. "Multicloud Deployment of Computing Clusters for Loosely Coupled Multi Task Computing (MTC) Applications." IOSR Journal of Computer Engineering 4, no. 3 (2012): 05–12. http://dx.doi.org/10.9790/0661-0430512.

49

Kolumbet, Vadim, and Olha Svynchuk. "MULTIAGENT METHODS OF MANAGEMENT OF DISTRIBUTED COMPUTING IN HYBRID CLUSTERS." Advanced Information Systems 6, no. 1 (April 6, 2022): 32–36. http://dx.doi.org/10.20998/2522-9052.2022.1.05.

Abstract:
Modern information technologies rely on server systems, virtualization, communication tools for distributed computing, and software and hardware solutions for data processing and storage centres. Among the most effective complexes for managing heterogeneous computing resources are hybrid Grids: distributed computing infrastructures that combine resources of different types and provide collective access to and sharing of those resources. The article considers a multi-agent system that integrates computation management for a computational cluster Grid system whose nodes have a complex hybrid structure. The hybrid cluster includes computing modules that support different parallel programming technologies and differ in their computational characteristics. The novelty and practical significance of the methods and tools presented in the article lie in a significant increase in the functionality of the Grid cluster computing management system for distributing and dividing Grid resources across different levels of tasks, and in the ability to embed intelligent computation management tools in problem-oriented applications. Using multi-agent systems for task planning in Grid systems addresses two main problems: scalability and adaptability. The methods and techniques in use today do not adequately solve these complex problems. Thus, the scientific task is to improve the effectiveness of methods and tools for managing problem-oriented distributed computing in a cluster Grid system, integrated with traditional meta-schedulers and local resource managers of Grid nodes, in line with the trends towards scalability and adaptability.
50

Zhuang, Yi Jie, Min Wang, and Xiao Chong Pan. "Research on Cloud Computing Industrial Cluster Innovation Evaluation Model." Advanced Materials Research 711 (June 2013): 647–51. http://dx.doi.org/10.4028/www.scientific.net/amr.711.647.

Abstract:
In this paper, a Bayesian network-based assessment model for evaluating the innovation of the cloud computing industry is presented. First, an innovation measurement model for cloud computing industrial clusters is designed. Then a Bayesian network assessment method and a self-learning method for the model are proposed. Finally, using empirical data, the most likely innovation status value of cloud computing industrial clusters and the key variables influencing it are predicted. The model can provide a theoretical basis for research on the innovative development of cloud computing industries.
