Journal articles on the topic 'Distributed computing infrastructure'


Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.


Consult the top 50 journal articles for your research on the topic 'Distributed computing infrastructure.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Horzela, Maximilian, Henri Casanova, Manuel Giffels, Artur Gottmann, Robin Hofsaess, Günter Quast, Simone Rossi Tisbeni, Achim Streit, and Frédéric Suter. "Modeling Distributed Computing Infrastructures for HEP Applications." EPJ Web of Conferences 295 (2024): 04032. http://dx.doi.org/10.1051/epjconf/202429504032.

Abstract:
Predicting the performance of various infrastructure design options in complex federated infrastructures with computing sites distributed over a wide area network that support a plethora of users and workflows, such as the Worldwide LHC Computing Grid (WLCG), is not trivial. Due to the complexity and size of these infrastructures, it is not feasible to deploy experimental test-beds at large scales merely for the purpose of comparing and evaluating alternative designs. An alternative is to study the behaviour of these systems using simulation. This approach has been used successfully in the past to identify efficient and practical infrastructure designs for High Energy Physics (HEP). A prominent example is the Monarc simulation framework, which was used to study the initial structure of the WLCG. New simulation capabilities are needed to simulate large-scale heterogeneous computing systems with complex networks, data access and caching patterns. A modern tool, based on the SimGrid and WRENCH simulation frameworks, for simulating HEP workloads that execute on distributed computing infrastructures is outlined. Studies of its accuracy and scalability are presented using HEP as a case study. Hypothetical adjustments to prevailing computing architectures in HEP are studied, providing insights into the dynamics of a part of the WLCG and identifying candidates for improvements.
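The cited tool builds on SimGrid and WRENCH; as a rough illustration of the underlying idea only (not the authors' code), a toy discrete-event simulation can estimate the makespan of a workload on heterogeneous sites. The site names, speeds, and greedy dispatch policy below are invented for the sketch.

```python
import heapq

def simulate(jobs, site_speeds):
    """Toy discrete-event simulation: dispatch each job to the site that
    becomes free earliest, and return the makespan plus per-job finish times.

    jobs:        list of (job_id, amount_of_work) tuples
    site_speeds: dict mapping site name -> processing speed (work units / time)
    """
    # Priority queue of (time_when_site_is_free, site_name), one entry per site.
    free_at = [(0.0, site) for site in site_speeds]
    heapq.heapify(free_at)
    finish_times = {}
    for job_id, work in jobs:
        t, site = heapq.heappop(free_at)        # earliest available site
        done = t + work / site_speeds[site]     # service time scales with site speed
        finish_times[job_id] = (site, done)
        heapq.heappush(free_at, (done, site))
    return max(d for _, d in finish_times.values()), finish_times
```

For example, three equal jobs on a fast site ("T1", speed 2.0) and a slow one ("T2", speed 1.0) finish at makespan 10.0; a real simulator would add network transfers, caching, and queueing on top of this skeleton.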
2

Korenkov, Vladimir, Andrei Dolbilov, Valeri Mitsyn, Ivan Kashunin, Nikolay Kutovskiy, Dmitry Podgainy, Oksana Streltsova, Tatiana Strizh, Vladimir Trofimov, and Peter Zrelov. "The JINR distributed computing environment." EPJ Web of Conferences 214 (2019): 03009. http://dx.doi.org/10.1051/epjconf/201921403009.

Abstract:
Computing in the field of high energy physics requires the usage of heterogeneous computing resources and IT, such as grid, high-performance computing, cloud computing and big data analytics, for data processing and analysis. The core of the distributed computing environment at the Joint Institute for Nuclear Research is the Multifunctional Information and Computing Complex (MICC). It includes a Tier-1 site for the CMS experiment, a Tier-2 site for all LHC experiments and other non-LHC grid VOs, such as BIOMED, COMPASS, NICA/MPD, NOvA, STAR and BESIII, as well as cloud and HPC infrastructures. A brief status overview of each component is presented. Particular attention is given to the development of distributed computations performed in collaboration with CERN, BNL, FNAL, FAIR, China, and JINR Member States. One of the directions for the cloud infrastructure is the development of methods for integrating the various cloud resources of the JINR Member State organizations in order to perform common tasks and to distribute load across the integrated resources. We have integrated the cloud resources of scientific centers in Armenia, Azerbaijan, Belarus, Kazakhstan and Russia. Extension of the HPC component will be carried out through a specialized infrastructure for HPC engineering being created at the MICC, which makes use of the contact liquid cooling technology implemented by the Russian company JSC "RSC Technologies". Current plans are to further develop the MICC as a center for scientific computing within the multidisciplinary research environment of JINR and the JINR Member States, and mainly for the NICA mega-science project.
3

Fergusson, David, Roberto Barbera, Emidio Giorgio, Marco Fargetta, Gergely Sipos, Diego Romano, Malcolm Atkinson, and Elizabeth Vander Meer. "Distributed Computing Education, Part 4: Training Infrastructure." IEEE Distributed Systems Online 9, no. 10 (October 2008): 2. http://dx.doi.org/10.1109/mdso.2008.28.

4

Arslan, Mustafa Y., Indrajeet Singh, Shailendra Singh, Harsha V. Madhyastha, Karthikeyan Sundaresan, and Srikanth V. Krishnamurthy. "CWC: A Distributed Computing Infrastructure Using Smartphones." IEEE Transactions on Mobile Computing 14, no. 8 (August 1, 2015): 1587–600. http://dx.doi.org/10.1109/tmc.2014.2362753.

5

Di Girolamo, Alessandro, Federica Legger, Panos Paparrigopoulos, Alexei Klimentov, Jaroslava Schovancová, Valentin Kuznetsov, Mario Lassnig, et al. "Operational Intelligence for Distributed Computing Systems for Exascale Science." EPJ Web of Conferences 245 (2020): 03017. http://dx.doi.org/10.1051/epjconf/202024503017.

Abstract:
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets require a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial amount of interventions from software developers, shifters and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of operational data can be exploited to increase the level of automation in computing operations by using adequate techniques, such as machine learning (ML), tailored to solve specific problems. The Operational Intelligence project is a joint effort from various WLCG communities aimed at increasing the level of automation in computing operations. We discuss how state-of-the-art technologies can be used to build general solutions to common problems and to reduce the operational cost of the experiment computing infrastructure.
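The abstract describes applying machine learning to operational data to automate computing operations. As a minimal, hedged sketch of that idea (not the project's actual tooling), a z-score filter can flag anomalous values in a stream of operational metrics; the metric values and threshold below are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=3.0):
    """Return the indices of samples whose z-score exceeds the threshold.

    A toy stand-in for the kind of automation the project builds from
    operational data: e.g. spotting a site whose transfer-failure rate
    suddenly deviates from its recent history.
    """
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]
```

A production system would use richer models and feedback loops, but the design point is the same: turn raw operational metrics into actionable, automated decisions instead of shifter tickets.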
6

J, Bakiadarshani. "Computing while Charging: Building a Distributed Computing Infrastructure using Smartphones." International Journal for Research in Applied Science and Engineering Technology V, no. III (March 24, 2017): 323–37. http://dx.doi.org/10.22214/ijraset.2017.3060.

7

Adam, C., D. Barberis, S. Crépé-Renaudin, K. De, F. Fassi, A. Stradling, M. Svatos, A. Vartapetian, and H. Wolters. "Computing shifts to monitor ATLAS distributed computing infrastructure and operations." Journal of Physics: Conference Series 898 (October 2017): 092004. http://dx.doi.org/10.1088/1742-6596/898/9/092004.

8

Nishant, Neerav, and Vaishali Singh. "Distributed Infrastructure for an Academic Cloud." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 6 (July 10, 2023): 34–38. http://dx.doi.org/10.17762/ijritcc.v11i6.6769.

Abstract:
The community infrastructure literature reveals the challenges educational institutions face in embracing cloud computing trends. Setting up one's own data center in effect means a private cloud. If research on open cloud services is available within the institution, then the rollout of such research products becomes an in-house implementation, thereby reducing dependence on cloud vendors. Distribution of resources opens the channel for better communication within academic institutions, and it creates opportunities to procure individual hardware at greater advantage. Enormous spending and unaccounted credits fall into central budgets if not controlled in a structured manner, and the rising overall cost of data management requires an institution to take a different, long-term perspective. These expenses allow the cloud management tasks to be branched either into a vendor's private cloud or into the institution's own cloud where feasible. Big data touches academia to such an extent that disparate, decentralized data management creates several pitfalls. The solution suggested is therefore a controlled environment realized as distributed computing, since infrastructure spending shoots up under a pay-as-you-go model. We claim that a distributed infrastructure is an excellent opportunity in computing when operated with the trust guarantees of a private cloud. The open-source movement experiments with distributed clouds by promoting OpenStack Swift.
9

CHEN, QIMING, PARVATHI CHUNDI, UMESHWAR DAYAL, and MEICHUN HSU. "DYNAMIC AGENTS." International Journal of Cooperative Information Systems 08, no. 02n03 (June 1999): 195–223. http://dx.doi.org/10.1142/s0218843099000101.

Abstract:
We claim that a dynamic agent infrastructure can provide a shift from static distributed computing to dynamic distributed computing, and we have developed an infrastructure to realize such a shift. We shall compare this infrastructure with other distributed computing infrastructures such as CORBA and DCOM, and demonstrate its value in highly dynamic system integration, service provisioning and distributed applications such as data mining on the Web. The infrastructure is Java-based, light-weight, and extensible. It differs from other agent platforms and client/server infrastructures in its support of dynamic behavior modification of agents. A dynamic agent is not designed to have a fixed set of predefined functions, but instead, to carry application-specific actions, which can be loaded and modified on the fly. This allows a dynamic agent to adjust its capability to accommodate changes in the environment and requirements, and play different roles across multiple applications. The above features are supported by the light-weight, built-in management facilities of dynamic agents, which can be commonly used by the "carried" application programs to communicate, manage resources and modify their problem-solving capabilities. Therefore, the proposed infrastructure allows application-specific multi-agent systems to be developed easily on top of it, provides "nuts and bolts" for run-time system integration, and supports dynamic service construction, modification and movement. A prototype has been developed at HP Labs and made available to several external research groups.
10

Dubenskaya, J., A. Kryukov, A. Demichev, and N. Prikhodko. "New security infrastructure model for distributed computing systems." Journal of Physics: Conference Series 681 (February 3, 2016): 012051. http://dx.doi.org/10.1088/1742-6596/681/1/012051.

11

Andrade, P., M. Babik, K. Bhatt, P. Chand, D. Collados, V. Duggal, P. Fuente, et al. "Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid." Journal of Physics: Conference Series 396, no. 3 (December 13, 2012): 032002. http://dx.doi.org/10.1088/1742-6596/396/3/032002.

12

Bauer, Daniela. "Distributed Computing for Small Experiments." EPJ Web of Conferences 182 (2018): 02010. http://dx.doi.org/10.1051/epjconf/201818202010.

Abstract:
The large Large Hadron Collider experiments have successfully used distributed computing for years. The same infrastructure yields large opportunistic resources for smaller collaborations. In addition, some national grid initiatives make dedicated resources for small collaborations available. This article presents an overview of the services available and how to access them, including an example of how small collaborations have successfully incorporated distributed computing into their workflows.
13

Salomoni, Davide, Ahmad Alkhansa, Marica Antonacci, Patrizia Belluomo, Massimo Biasotto, Luca Giovanni Carbone, Daniele Cesini, et al. "INFN and the evolution of distributed scientific computing in Italy." EPJ Web of Conferences 295 (2024): 10004. http://dx.doi.org/10.1051/epjconf/202429510004.

Abstract:
INFN has been running a distributed infrastructure (the Tier-1 at Bologna-CNAF and 9 Tier-2 centres) for more than 20 years, which currently offers about 150,000 CPU cores and 120 PB of space in both tape and disk storage, serving more than 40 international scientific collaborations. This Grid-based infrastructure was augmented in 2019 with the INFN Cloud: a production-quality multi-site federated Cloud infrastructure, composed of a core backbone and able to integrate other INFN sites and public or private Clouds as well. The INFN Cloud provides a customizable and extensible portfolio offering computing and storage services spanning the IaaS, PaaS and SaaS layers, with dedicated solutions to serve special purposes, such as ISO-certified regions for the handling of sensitive data. INFN is now revising and expanding its infrastructure to tackle the challenges expected in the next 10 years of scientific computing, adopting a “cloud-first” approach through which all the INFN data centres will be federated via the INFN Cloud middleware and integrated with key HPC centres, such as the pre-exascale Leonardo machine at CINECA. In this process, which involves both the infrastructures and the higher-level services, initiatives and projects such as the "Italian National Centre on HPC, Big Data and Quantum Computing" (funded in the context of the Italian "National Recovery and Resilience Plan") and the Bologna Technopole are precious opportunities that will be exploited to offer advanced resources and services to universities, research institutions and industry. In this paper we describe how INFN is evolving its computing infrastructure, with the ambition to create and operate a national vendor-neutral, open, scalable, and flexible "datalake" able to serve much more than just INFN users and experiments.
14

Voronin, Dmitrii, Victoria Shevchenko, and Olga Chengar. "Technology of computing risks visualization for distributed production infrastructures." MATEC Web of Conferences 224 (2018): 02071. http://dx.doi.org/10.1051/matecconf/201822402071.

Abstract:
Scientific problems related to the classification, assessment, visualization and management of risks in cloud environments are considered. State-of-the-art methods proposed for solving these problems are analysed, taking into account the specificity of cloud infrastructures oriented towards large-scale task processing in distributed production infrastructures. Unfortunately, little scientific and objective research has focused on developing effective approaches to cloud risk visualization that provide the information necessary to support decision-making in distributed production infrastructures. To fill this research gap, this study proposes a risk visualization technique based on a radar chart for multidimensional data visualization.
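The proposed technique maps multidimensional risk scores onto a radar chart. The geometry behind such a chart can be sketched as follows (this is an illustration of the general construction, not the paper's implementation; the axis count and 0..1 normalization are assumptions):

```python
import math

def radar_points(values, r_max=1.0):
    """Map n normalized risk scores (0..1) onto radar-chart vertices.

    Axis i is placed at angle 2*pi*i/n; the data point on that axis sits
    at radius value * r_max. Connecting the returned points yields the
    risk polygon drawn on the chart.
    """
    n = len(values)
    pts = []
    for i, v in enumerate(values):
        theta = 2 * math.pi * i / n
        pts.append((v * r_max * math.cos(theta), v * r_max * math.sin(theta)))
    return pts
```

With four risk dimensions scored [1.0, 0.5, 0.0, 0.5], the axes fall at 0°, 90°, 180° and 270°, and the polygon's vertices follow directly from the scores; any plotting library can then render the outline.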
15

Zhang, Xiaomei. "JUNO distributed computing system." EPJ Web of Conferences 295 (2024): 04030. http://dx.doi.org/10.1051/epjconf/202429504030.

Abstract:
The Jiangmen Underground Neutrino Observatory (JUNO) [1] is a multipurpose neutrino experiment and the determination of the neutrino mass hierarchy is its primary physics goal. JUNO is going to start data taking in 2024 and plans to use distributed computing infrastructure for the data processing and analysis tasks. The JUNO distributed computing system has been designed and built based on DIRAC [2]. Since last year, the official Monte Carlo (MC) production has been running on the system, and petabytes of massive MC data have been shared among JUNO data centers through this system. In this paper, an overview of the JUNO distributed computing system will be presented, including workload management system, data management, and condition data access system. Moreover, the progress of adapting the system to support token-based AAI [3] and HTTP-TPC [4] will be reported. Finally, the paper will mention the preparations for the upcoming JUNO data-taking.
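JUNO's system is built on DIRAC, whose workload management follows a pilot ("pull") model: pilot jobs running at sites fetch matching payloads from a central task queue. A minimal sketch of that matching step is below; the job and pilot attribute names are invented for illustration and this is not DIRAC's actual API.

```python
def match_job(waiting_jobs, pilot_capabilities):
    """Pilot-model matching sketch: a pilot advertising its capabilities
    pulls the first waiting job whose requirements it satisfies.

    waiting_jobs:       list of {"id": ..., "requirements": {key: value}}
    pilot_capabilities: dict of {key: value} describing the pilot's site
    """
    for job in waiting_jobs:
        if all(pilot_capabilities.get(k) == v
               for k, v in job["requirements"].items()):
            waiting_jobs.remove(job)  # claim the job atomically in a real system
            return job
    return None  # nothing matches; the pilot idles or exits
```

The pull model is what lets heterogeneous data centers join transparently: a site never needs jobs pushed to it, it only needs pilots that can describe what they offer.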
16

Fisk, Ian M. "Computing in High Energy Physics." International Journal of Modern Physics A 20, no. 14 (June 10, 2005): 3021–32. http://dx.doi.org/10.1142/s0217751x0502570x.

Abstract:
In this review, the computing challenges facing the current and next generation of high energy physics experiments will be discussed. High energy physics computing represents an interesting infrastructure challenge as the use of large-scale commodity computing clusters has increased. The causes and ramifications of these infrastructure challenges will be outlined. Increasing requirements, limited physical infrastructure at computing facilities, and limited budgets have driven many experiments to deploy distributed computing solutions to meet the growing computing needs for analysis reconstruction, and simulation. The current generation of experiments have developed and integrated a number of solutions to facilitate distributed computing. The current work of the running experiments gives an insight into the challenges that will be faced by the next generation of experiments and the infrastructure that will be needed.
17

Bibi, Iram, Adnan Akhunzada, Jahanzaib Malik, Muhammad Khurram Khan, and Muhammad Dawood. "Secure Distributed Mobile Volunteer Computing with Android." ACM Transactions on Internet Technology 22, no. 1 (February 28, 2022): 1–21. http://dx.doi.org/10.1145/3428151.

Abstract:
Volunteer Computing, whose provision of seamless connectivity enables convenient and rapid deployment of greener and cheaper computing infrastructure, is extremely promising as a complement to next-generation distributed computing systems. Undoubtedly, without a tactile Internet and secure VC ecosystems, harnessing its full potential and making it a viable and reliable alternative computing infrastructure is next to impossible. Android-enabled smart devices, applications, and services are indispensable for Volunteer Computing. Conversely, the progressive development of sophisticated Android malware may curb its exponential growth; indeed, Android malware is considered the most potent and persistent cyber threat to mobile VC systems. To secure Android-based mobile volunteer computing, the authors propose MulDroid, an efficient and self-learning autonomous hybrid (Long Short-Term Memory, Convolutional Neural Network, Deep Neural Network) multi-vector Android malware threat detection framework. The proposed mechanism is highly scalable, with well-coordinated infrastructure and self-optimizing capabilities, and proficiently tackles fast-growing dynamic variants of sophisticated malware threats and attacks with 99.01% detection accuracy. For a comprehensive evaluation, the authors employed current state-of-the-art malware datasets (Android Malware Dataset, Androzoo) with standard performance evaluation metrics. Moreover, MulDroid is compared with the authors' contemporary hybrid DL-driven architectures and benchmark algorithms; the proposed mechanism outperforms them in detection accuracy with only a trivial tradeoff in speed. Additionally, a 10-fold cross-validation is performed to explicitly show unbiased results.
18

Manandhar, Reeya, and Gajendra Sharma. "Virtualization in distributed system: a brief overview." BOHR International Journal of Computer Science 1, no. 1 (2022): 42–46. http://dx.doi.org/10.54646/bijcs.2022.07.

Abstract:
Virtual machines are popular because of their efficiency, ease of use, and flexibility. There has been an increasing demand for the deployment of a robust distributed network for maximizing the performance of such systems and minimizing the infrastructural cost. In this study, we have discussed various levels at which virtualization can be implemented for distributed computing, which can contribute to increased efficiency and performance of distributed computing. The study gives an overview of various types of virtualization techniques and their benefits. For example, server virtualization helps to create multiple server instances from one physical server. Such techniques will decrease the infrastructure costs, make the system more scalable, and help in the full utilization of available resources.
19

Andronico, Giuseppe. "China-EU scientific cooperation on JUNO distributed computing." EPJ Web of Conferences 245 (2020): 03038. http://dx.doi.org/10.1051/epjconf/202024503038.

Abstract:
The Jiangmen Underground Neutrino Observatory (JUNO) is an underground 20 kton liquid scintillator detector being built in the south of China. Targeting an unprecedented relative energy resolution of 3% at 1 MeV, JUNO will be able to study neutrino oscillation phenomena and determine neutrino mass ordering with a statistical significance of 3-4 sigma within six years running time. These physics challenges are addressed by a large Collaboration localized in three continents. In this context, key to the success of JUNO will be the realization of a distributed computing infrastructure to fulfill foreseen computing needs. Computing infrastructure development is performed jointly by the Institute for High Energy Physics (IHEP) (part of Chinese Academy of Sciences (CAS)), and a number of Italian, French and Russian data centers, already part of WLCG (Worldwide LHC Computing Grid). Upon its establishment, JUNO is expected to deliver not less than 2 PB of data per year, to be stored in the data centers throughout China and Europe. Data analysis activities will be also carried out in cooperation. This contribution is meant to report on China-EU cooperation to design and build together the JUNO computing infrastructure and to describe its main characteristics and requirements.
20

di Costanzo, Alexandre, Marcos Dias de Assuncao, and Rajkumar Buyya. "Harnessing Cloud Technologies for a Virtualized Distributed Computing Infrastructure." IEEE Internet Computing 13, no. 5 (September 2009): 24–33. http://dx.doi.org/10.1109/mic.2009.108.

21

Schram, Malachi, Mathew Thomas, Kevin Fox, Benjamin LaRoque, Brent VanDevender, Noah Oblath, and David Cowley. "Distributed Computing for the Project 8 Experiment." EPJ Web of Conferences 245 (2020): 03030. http://dx.doi.org/10.1051/epjconf/202024503030.

Abstract:
The Project 8 collaboration aims to measure the absolute neutrino mass or improve on the current limit by measuring the tritium beta decay electron spectrum. We present the current distributed computing model for the Project 8 experiment. Project 8 is in its second phase of data taking with a near continuous data rate of 1Gbps. The current computing model uses DIRAC (Distributed Infrastructure with Remote Agent Control) for its workflow and data management. A detailed meta-data assignment using the DIRAC File Catalog is used to automate raw data transfers and subsequent stages of data processing. The DIRAC system is deployed on containers managed using a Kubernetes cluster to provide a scalable infrastructure. A modified DIRAC Site Director provides the ability to submit jobs using Singularity on opportunistic High-Performance Computing (HPC) sites.
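Project 8 automates raw-data transfers and the subsequent processing stages through metadata queries against the DIRAC File Catalog. A toy sketch of such a metadata lookup is shown below; the LFNs and metadata keys are invented for illustration, and this is not the DIRAC File Catalog interface itself.

```python
def find_files_by_metadata(catalog, query):
    """Return the logical file names (LFNs) whose metadata matches every
    key/value pair in the query — the kind of lookup that lets a data
    management system select 'all raw files of run N' and feed them to
    the next processing stage automatically."""
    return sorted(lfn for lfn, meta in catalog.items()
                  if all(meta.get(k) == v for k, v in query.items()))
```

Because each stage selects its inputs by metadata rather than by hard-coded paths, new raw files entering the catalog are picked up by the transfer and processing machinery without manual bookkeeping.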
22

Manandhar, Reeya, and Gajendra Sharma. "Virtualization in Distributed System: A Brief Overview." BOHR International Journal of Intelligent Instrumentation and Computing 1, no. 1 (2021): 34–38. http://dx.doi.org/10.54646/bijiiac.006.

Abstract:
Virtual machines are popular because of their efficiency, ease of use and flexibility. There has been an increasing demand for deployment of a robust distributed network for maximizing the performance of such systems and minimizing the infrastructural cost. In this paper we have discussed various levels at which virtualization can be implemented for distributed computing, which can contribute to increased efficiency and performance of distributed computing. The paper gives an overview of various types of virtualization techniques and their benefits. For example, server virtualization helps to create multiple server instances from one physical server. Such techniques will decrease the infrastructure cost, make the system more scalable and help in full utilization of available resources.
23

Bryngemark, Lene Kristian, David Cameron, Valentina Dutta, Thomas Eichlersmith, Balazs Konya, Omar Moreno, Geoffrey Mullier, et al. "Building a Distributed Computing System for LDMX." EPJ Web of Conferences 251 (2021): 02038. http://dx.doi.org/10.1051/epjconf/202125102038.

Abstract:
Particle physics experiments rely extensively on computing and data services, making e-infrastructure an integral part of the research collaboration. Constructing and operating distributed computing can however be challenging for a smaller-scale collaboration. The Light Dark Matter eXperiment (LDMX) is a planned small-scale accelerator-based experiment to search for dark matter in the sub-GeV mass region. Finalizing the design of the detector relies on Monte-Carlo simulation of expected physics processes. A distributed computing pilot project was proposed to better utilize available resources at the collaborating institutes, and to improve scalability and reproducibility. This paper outlines the chosen lightweight distributed solution, presenting requirements, the component integration steps, and the experiences using a pilot system for tests with large-scale simulations. The system leverages existing technologies wherever possible, minimizing the need for software development, and deploys only non-intrusive components at the participating sites. The pilot proved that integrating existing components can dramatically reduce the effort needed to build and operate a distributed e-infrastructure, making it attainable even for smaller research collaborations.
24

Jashal, Brij Kishor, Valentin Kuznetsov, Federica Legger, and Ceyhun Uzunoglu. "The CMS monitoring applications for LHC Run 3." EPJ Web of Conferences 295 (2024): 04045. http://dx.doi.org/10.1051/epjconf/202429504045.

Abstract:
Data taking at the Large Hadron Collider (LHC) at CERN restarted in 2022. The CMS experiment relies on a distributed computing infrastructure based on WLCG (Worldwide LHC Computing Grid) to support the LHC Run 3 physics program. The CMS computing infrastructure is highly heterogeneous and relies on a set of centrally provided services, such as distributed workload management and data management, and computing resources hosted at almost 150 sites worldwide. Smooth data taking and processing requires all computing subsystems to be fully operational, and available computing and storage resources need to be continuously monitored. During the long shutdown between LHC Run 2 and Run 3, the CMS monitoring infrastructure underwent major changes to increase the coverage of monitored applications and services, while becoming more sustainable and easier to operate and maintain. The technologies used are based on open-source solutions, either provided by the CERN IT department through the MONIT infrastructure or managed by the CMS monitoring team. Monitoring applications for distributed workload management, the HTCondor-based submission infrastructure, distributed data management, and facilities have been ported from mostly custom-built applications to common data flow and visualization services. Data are mostly stored in non-SQL databases and storage technologies such as ElasticSearch, VictoriaMetrics, Prometheus, InfluxDB and HDFS, and accessed either via programmatic APIs, Apache Spark or Sqoop jobs, or visualized, preferentially using Grafana. Most CMS monitoring applications are deployed on Kubernetes clusters to minimize maintenance operations. In this contribution we present the full stack of CMS monitoring services and show how we leveraged common technologies to cover a variety of monitoring applications and cope with the computing challenges of LHC Run 3.
25

MORIN, CHRISTINE, YVON JÉGOU, JÉRÔME GALLARD, and PIERRE RITEAU. "CLOUDS: A NEW PLAYGROUND FOR THE XTREEMOS GRID OPERATING SYSTEM." Parallel Processing Letters 19, no. 03 (September 2009): 435–49. http://dx.doi.org/10.1142/s0129626409000328.

Abstract:
The emerging cloud computing model has recently gained a lot of interest both from commercial companies and from the research community. XtreemOS is a distributed operating system for large-scale wide-area dynamic infrastructures spanning multiple administrative domains. XtreemOS, which is based on the Linux operating system, has been designed as a Grid operating system providing native support for virtual organizations. In this paper, we discuss the positioning of XtreemOS technologies with regard to cloud computing. More specifically, we investigate a scenario where XtreemOS could help users take full advantage of clouds in a global environment including their own resources and cloud resources. We also discuss how the XtreemOS system could be used by cloud service providers to manage their underlying infrastructure. This study shows that the XtreemOS distributed operating system is a highly relevant technology in the new era of cloud computing where future clouds seamlessly span multiple bare hardware providers and where customers extend their IT infrastructure by provisioning resources from different cloud service providers.
26

Bagnasco, Stefano. "The Ligo-Virgo-KAGRA Computing Infrastructure for Gravitational-wave Research." EPJ Web of Conferences 295 (2024): 04047. http://dx.doi.org/10.1051/epjconf/202429504047.

Abstract:
The LIGO, VIRGO and KAGRA Gravitational-wave (GW) observatories are getting ready for their fourth observational period, O4, scheduled to begin in March 2023, with improved sensitivities and thus higher event rates. GW-related computing has both large commonalities with HEP computing, particularly in the domain of offline data processing and analysis, and important differences, for example in the fact that the amount of raw data doesn’t grow much with the instrument sensitivity, or the need to timely generate and distribute “event candidate alerts” to EM and neutrino observatories, thus making gravitational multi-messenger astronomy possible. Data from the interferometers are exchanged between collaborations both for low-latency and offline processing; in recent years, the three collaborations designed and built a common distributed computing infrastructure to prepare for a growing computing demand, and to reduce the maintenance burden of legacy custom-made tools, by increasingly adopting tools and architectures originally developed in the context of HEP computing. So, for example, HTCondor is used for workflow management, Rucio for many data management needs, CVMFS for code and data distribution, and more. We will present GW computing use cases and report about the architecture of the computing infrastructure as will be used during O4, as well as some planned upgrades for the subsequent observing run O5.
27

Mikkilineni, Rao. "Architectural Resiliency in Distributed Computing." International Journal of Grid and High Performance Computing 4, no. 4 (October 2012): 37–51. http://dx.doi.org/10.4018/jghpc.2012100103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cellular organisms have evolved to manage themselves and their interactions with their surroundings with a high degree of resiliency, efficiency and scalability. Signaling and collaboration of autonomous distributed computing elements accomplishing a common goal with optimal resource utilization are the differentiating characteristics that contribute to the computing model of cellular organisms. By introducing signaling and self-management abstractions in an autonomic computing element called the Distributed Intelligent Managed Element (DIME), the authors improve the architectural resiliency, efficiency, and scaling of distributed computing systems. Two implementations of the DIME network architecture are described to demonstrate auto-scaling, self-repair, dynamic performance optimization, and end-to-end distributed transaction management: virtualizing a process (by converting it into a DIME) in the Linux operating system, and building a new native operating system called Parallax OS, optimized for Intel multi-core processors, which converts each core into a DIME. The authors discuss the implications of the DIME computing model for future cloud computing services and datacenter infrastructure management practices, and relate the model to current discussions of Turing machines, Gödel's theorems and a call, by some computer scientists, for no less than a Kuhnian paradigm shift.
28

Zolotariov, Denis. "THE DISTRIBUTED SYSTEM OF AUTOMATED COMPUTING BASED ON CLOUD INFRASTRUCTURE." Innovative Technologies and Scientific Solutions for Industries, no. 4 (14) (December 21, 2020): 47–55. http://dx.doi.org/10.30837/itssi.2020.14.047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Schantz, R. E. "BBN's Network Computing Software Infrastructure and Distributed Applications (1970-1990)." IEEE Annals of the History of Computing 28, no. 1 (January 2006): 72–88. http://dx.doi.org/10.1109/mahc.2006.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Arif, Suki, Mowafaq Salem Alzboon, and M. Mahmuddin. "Distributed Quadtree Overlay for Resource Discovery in Shared Computing Infrastructure." Advanced Science Letters 23, no. 6 (June 1, 2017): 5397–401. http://dx.doi.org/10.1166/asl.2017.7384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Merzky, Andre, Ole Weidner, and Shantenu Jha. "SAGA: A standardized access layer to heterogeneous Distributed Computing Infrastructure." SoftwareX 1-2 (September 2015): 3–8. http://dx.doi.org/10.1016/j.softx.2015.03.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kozlovszky, M., K. Karoczkai, I. Marton, A. Balasko, A. Marosi, and P. Kacsuk. "ENABLING GENERIC DISTRIBUTED COMPUTING INFRASTRUCTURE COMPATIBILITY FOR WORKFLOW MANAGEMENT SYSTEMS." Computer Science 13, no. 3 (2012): 61. http://dx.doi.org/10.7494/csci.2012.13.3.61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Thomas, Mathew, Malachi Schram, Kevin Fox, Jan Strube, Noah S. Oblath, Robert Rallo, Zachary C. Kennedy, Tamas Varga, Anil K. Battu, and Christopher A. Barrett. "Distributed heterogeneous compute infrastructure for the study of additive manufacturing systems." MRS Advances 5, no. 29-30 (2020): 1547–55. http://dx.doi.org/10.1557/adv.2020.103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We present the current status of a scalable computing framework to address the needs of the multidisciplinary effort to study chemical dynamics. Specifically, we are enabling scientists to process and store experimental data, run large-scale computationally expensive high-fidelity physical simulations, and analyze these results using state-of-the-art data analytics, machine learning, and uncertainty quantification methods on heterogeneous computing resources. We present the results of this framework on a single metadata-driven workflow to accelerate an additive manufacturing use-case.
34

Kim, Bockjoo, and Dimitri Bourilkov. "Automatic Monitoring of Large-Scale Computing Infrastructure." EPJ Web of Conferences 295 (2024): 07007. http://dx.doi.org/10.1051/epjconf/202429507007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Modern distributed computing systems produce large amounts of monitoring data. For these systems to operate smoothly, underperforming or failing components must be identified quickly, and preferably automatically, enabling the system managers to react accordingly. In this contribution, we analyze jobs and transfer data collected in the running of the LHC computing infrastructure. The monitoring data is harvested from the Elasticsearch database and converted to formats suitable for further processing. Based on various machine and deep learning techniques, we develop automatic tools for continuous monitoring of the health of the underlying systems. Our initial implementation is based on publicly available deep learning tools, PyTorch or TensorFlow packages, running on state-of-the-art GPU systems.
35

Lei, Ying, and Zhi Lu Lai. "The Modal Identification of Structure Using Distributed ERA and EFDD Methods." Advanced Materials Research 163-167 (December 2010): 2532–36. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.2532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Structural health monitoring (SHM) is an emerging field in civil engineering, offering the potential for continuous and periodic assessment of the safety and integrity of civil infrastructure. In this paper, a distributed computing strategy for modal identification of structures is proposed, which is suitable for processing the large volumes of data collected in structural health monitoring. A numerical example of distributed computation of the modal properties of a truss illustrates the distributed output-only modal identification algorithm based on NExT/ERA techniques and EFDD. This strategy can also be applied to other complicated structures to determine modal parameters.
36

Kim, Cheonyong, Joobum Kim, Ki-Hyeon Kim, Sang-Kwon Lee, Kiwook Kim, Syed Asif Raza Shah, and Young-Hoon Goo. "ScienceIoT: Evolution of the Wireless Infrastructure of KREONET." Sensors 21, no. 17 (August 30, 2021): 5852. http://dx.doi.org/10.3390/s21175852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Here, we introduce the current stage and future directions of the wireless infrastructure of the Korea Research Environment Open NETwork (KREONET), a representative national research and education network in Korea. In 2018, ScienceLoRa, a pioneering wireless network infrastructure for scientific applications based on low-power wide-area network technology, was launched. Existing in-service applications in monitoring regions, research facilities, and universities prove the effectiveness of using wireless infrastructure in scientific areas. Furthermore, to support the more stringent requirements of various scientific scenarios, ScienceLoRa is evolving toward ScienceIoT by employing high-performance wireless technology and distributed computing capability. Specifically, by accommodating a private 5G network and an integrated edge computing platform, ScienceIoT is expected to support cutting-edge scientific applications requiring high-throughput and distributed data processing.
37

Adamski, Marcin, Krzysztof Kurowski, Marek Mika, Wojciech Piątek, and Jan Węglarz. "Security Aspects in Resource Management Systems in Distributed Computing Environments." Foundations of Computing and Decision Sciences 42, no. 4 (December 20, 2017): 299–313. http://dx.doi.org/10.1515/fcds-2017-0015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In many distributed computing systems, aspects related to security are becoming more and more relevant. Security is ubiquitous and cannot be treated as a separate problem or challenge. In our opinion, it should be considered in the context of resource management in distributed computing environments like Grids and Clouds: e.g., scheduled computations can be much delayed because of cyber-attacks or inefficient infrastructure, or users' valuable and sensitive data can be stolen even in the process of correct computation. To prevent such cases, there is a need to introduce new evaluation metrics for resource management that represent the level of security of computing resources and, more broadly, of distributed computing infrastructures. In our approach, we have introduced a new metric called reputation, which determines the level of reliability of computing resources from the security perspective and can be taken into account during scheduling procedures. The new reputation metric is based on various relevant parameters regarding cyber-attacks (including energy attacks) and administrative activities such as security updates, bug fixes and security patches. Moreover, we have conducted various computational experiments within the Grid Scheduling Simulator environment (GSSIM), inspired by real application scenarios. Finally, our experimental studies of new resource management approaches that take critical security aspects into account are also discussed in this paper.
38

Agarwal, Pankaj, and Kouros Owzar. "Next Generation Distributed Computing for Cancer Research." Cancer Informatics 13s7 (January 2014): CIN.S16344. http://dx.doi.org/10.4137/cin.s16344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.
39

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (July 1, 2023): 276. http://dx.doi.org/10.11591/ijres.v12.i2.pp276-286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it empowers the client to access fast and flexible resources through virtualization and is widely adaptable to various applications. Further, to ensure productive computation, task scheduling is very important in a cloud infrastructure environment. The main aim of task execution is to reduce execution time and conserve infrastructure; for large applications, workflow scheduling has drawn considerable attention in business as well as scientific areas. Hence, in this research work, we design and develop an optimized load balancing in parallel computing (OLBP) mechanism to distribute the load: first, different parameters of the workload are computed, and then loads are distributed. Further, the OLBP mechanism considers makespan time and energy as constraints, and task offloading is done considering server speed. This achieves balancing of the workflow; the OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms existing workflow mechanisms.
40

Mohd Pakhrudin, Nor Syazwani, Murizah Kassim, and Azlina Idris. "A review on orchestration distributed systems for IoT smart services in fog computing." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 2 (April 1, 2021): 1812. http://dx.doi.org/10.11591/ijece.v11i2.pp1812-1822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper provides a review of orchestration of distributed systems for IoT smart services in fog computing. The cloud infrastructure alone cannot handle the flow of information given the abundance of data, devices and interactions; thus, fog computing has become a new paradigm to overcome this problem. One of the first challenges is to build orchestration systems that activate the clouds and execute tasks throughout the whole system, accounting for large-scale geographical distance, heterogeneity and low latency in order to overcome the limitations of cloud computing. Problems for distributed orchestration in fog computing include fulfilling the high-reliability and low-delay requirements of IoT application systems and forming a larger computer network, like a fog network, at different geographic sites. This paper reviewed approximately 68 articles on orchestration of distributed systems for fog computing. The results cover orchestration systems and evaluation criteria for fog computing, compared in terms of Borg, Kubernetes, Swarm, Mesos, Aurora, heterogeneity, QoS management, scalability, mobility, federation, and interoperability. The significance of this study is to support researchers in developing orchestration of distributed systems for IoT smart services in fog computing, with a focus on the IR4.0 national agenda.
41

Ariza-Porras, Christian, Valentin Kuznetsov, Federica Legger, Rahul Indra, Nikodemas Tuckus, and Ceyhun Uzunoglu. "The evolution of the CMS monitoring infrastructure." EPJ Web of Conferences 251 (2021): 02004. http://dx.doi.org/10.1051/epjconf/202125102004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The CMS experiment at the CERN LHC (Large Hadron Collider) relies on a distributed computing infrastructure to process the multi-petabyte datasets where the collision and simulated data are stored. A scalable and reliable monitoring system is required to ensure efficient operation of the distributed computing services, and to provide a comprehensive set of measurements of the system performances. In this paper we present the full stack of CMS monitoring applications, partly based on the MONIT infrastructure, a suite of monitoring services provided by the CERN IT department. These are complemented by a set of applications developed over the last few years by CMS, leveraging open-source technologies that are industry-standards in the IT world, such as Kubernetes and Prometheus. We discuss how this choice helped the adoption of common monitoring solutions within the experiment, and increased the level of automation in the operation and deployment of our services.
42

Wijethunga, Kalana, Maksim Storetvedt, Costin Grigoras, Latchezar Betev, Maarten Litmaath, Gayashan Amarasinghe, and Indika Perera. "Site Sonar-A Flexible and Extensible Infrastructure Monitoring Tool for ALICE Computing Grid." EPJ Web of Conferences 295 (2024): 04037. http://dx.doi.org/10.1051/epjconf/202429504037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The ALICE experiment at the CERN Large Hadron Collider relies on a massive, distributed Computing Grid for its data processing. The ALICE Computing Grid is built by combining a large number of individual computing sites distributed globally. These Grid sites are maintained by different institutions across the world and contribute thousands of worker nodes possessing different capabilities and configurations. Developing software for Grid operations that works on all nodes while harnessing the maximum capabilities offered by any given Grid site is challenging without advance knowledge of what capabilities each site offers. Site Sonar is an architecture-independent Grid infrastructure monitoring framework developed by the ALICE Grid team to monitor the infrastructure capabilities and configurations of worker nodes at sites across the ALICE Grid without the need to contact local site administrators. Site Sonar is a highly flexible and extensible framework that offers infrastructure metric collection without local agent installations at Grid sites. This paper introduces the Site Sonar Grid infrastructure monitoring framework and reports significant findings acquired about the ALICE Computing Grid using Site Sonar.
43

Kolesnyk, Zakhar, Oleksandr Mezhenskyi, Oleksandr Davykoza, and Heorhii Kuchuk. "FOG COMPUTING TECHNOLOGY IN DISTRIBUTED SYSTEMS." Системи управління, навігації та зв’язку. Збірник наукових праць 1, no. 75 (February 9, 2024): 94–97. http://dx.doi.org/10.26906/sunz.2024.1.094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Topicality. The concept of fog computing is an evolutionary stage in the development of the cloud concept. It occupies a leading position among the general trends in the development of information technology. The emergence of this concept is closely related to the origin and development of the concept of the Internet of Things. The results. The subject area was analyzed. This includes an analysis of current trends in the field of organizing distributed computing, an analysis of the use of population algorithms and ontology models for solving optimization problems in distributed systems, and an analysis of models, methods and algorithms for solving the problem of transferring computational load in distributed systems implemented on the basis of fog computing. Conclusion. It has been revealed that the concept of fog computing makes it possible to solve most of the problems associated with the load on the communication infrastructure and the latency of information exchange. However, it does not resolve issues related to the high dynamism of the fog environment and the concomitant decrease in the efficiency of the distributed system.
44

SCHERMERHORN, PAUL, and MATTHIAS SCHEUTZ. "NATURAL LANGUAGE INTERACTIONS IN DISTRIBUTED NETWORKS OF SMART DEVICES." International Journal of Semantic Computing 02, no. 04 (December 2008): 503–24. http://dx.doi.org/10.1142/s1793351x08000579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advances in sensing and networking hardware have made the prospect of ambient intelligence more realistic, but the challenge of creating a software framework suitable for ambient intelligence systems remains. We present ADE, the Agent Development Environment, a distributed agent infrastructure with built-in natural language processing capabilities connected to a sophisticated goal manager that controls access to the world via multiple server interfaces for sensing and actuating devices. Unlike other ambient intelligence infrastructures, ADE includes support for multiple autonomous robots integrated into the system. ADE allows developers of ambient intelligence environments to implement agents of varying complexity to meet the varying requirements of each scenario, and it provides facilities to ensure security and fault tolerance for distributed computing. Natural language processing is conducted incrementally, as utterances are acquired, allowing fast, accurate responses from system agents. We describe ADE and a sample of the many experiments and demonstrations conducted using the infrastructure, then an example architecture for a "smart home" is proposed to demonstrate ADE's utility as an infrastructure for ambient intelligence.
45

A. H. Alkurdi, Ahmed, and Subhi R. M. Zeebaree. "Navigating the Landscape of IoT, Distributed Cloud Computing: A Comprehensive Review." Academic Journal of Nawroz University 13, no. 1 (March 31, 2024): 360–92. http://dx.doi.org/10.25007/ajnu.v13n1a2011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This comprehensive academic exploration delves into the revolutionary convergence of the Internet of Things (IoT) with distributed cloud computing, redefining the realms of data processing, storage, and communication. The paper critically analyzes scholarly work from reputable journals, providing profound insights into this integration's multifaceted applications and underlying technological frameworks. The relevance of IoT, a network of interconnected devices and sensors, is emphasized through its significant impact on diverse sectors, including healthcare, education, agriculture, and smart cities. This impact is magnified by its extensive data collection, processing, and analysis capabilities, enabled through cloud computing platforms. The objective of the paper is to methodically compare and contrast contemporary scholarly contributions, shedding light on the diverse applications and technological infrastructures of IoT in conjunction with distributed cloud computing. This endeavor encompasses an examination of IoT-based cloud infrastructure, detailed analysis of specific needs, implementations, and applications of IoT-based cloud computing, and a review of various IoT cloud platforms. The paper also highlights the benefits of integrating IoT with cloud computing, elucidating significant advantages and potential future directions of this technology. Through this scholarly inquiry, the paper aims to offer an in-depth perspective on the state-of-the-art developments in IoT and distributed cloud computing. It underscores their significance and potential in shaping the future of digital technology and its applications across various domains.
46

Villebonnet, Violaine, Georges Da Costa, Laurent Lefevre, Jean-Marc Pierson, and Patricia Stolf. "“Big, Medium, Little”: Reaching Energy Proportionality with Heterogeneous Computing Scheduler." Parallel Processing Letters 25, no. 03 (September 2015): 1541006. http://dx.doi.org/10.1142/s0129626415410066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Energy savings are among the most important topics concerning Cloud and HPC infrastructures nowadays. Servers consume a large amount of energy even when their computing power is not fully utilized. These static costs are quite a concern, mostly because many datacenter managers over-provision their infrastructures compared to the actual needs, resulting in a large share of wasted power consumption. In this paper, we propose the BML ("Big, Medium, Little") infrastructure, composed of heterogeneous architectures, and a scheduling framework dealing with energy proportionality. We introduce processors with heterogeneous power characteristics inside datacenters as a way to reduce energy consumption when processing variable workloads. Our framework brings an intelligent utilization of the infrastructure by dynamically executing applications on the architecture that suits their needs, while minimizing energy consumption. In this paper we focus on a distributed stateless web server scenario and analyze the energy savings achieved through energy proportionality.
47

Estrella, F., C. del Frate, T. Hauer, M. Odeh, D. Rogulin, S. R. Amendolia, D. Schottlander, T. Solomonides, R. Warren, and R. McClatchey. "Resolving Clinicians’ Queries Across a Grid’s Infrastructure." Methods of Information in Medicine 44, no. 02 (2005): 149–53. http://dx.doi.org/10.1055/s-0038-1633936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Summary Objectives: The past decade has witnessed order-of-magnitude increases in computing power, data storage capacity and network speed, giving birth to applications which may handle large data volumes of increased complexity, distributed over the internet. Methods: Medical image analysis is one of the areas for which this unique opportunity likely brings revolutionary advances, both for the scientist's research study and the clinician's everyday work. Grid computing [1] promises to resolve many of the difficulties in facilitating medical image analysis, allowing radiologists to collaborate without having to co-locate. Results: The EU-funded MammoGrid project [2] aims to investigate the feasibility of developing a Grid-enabled European database of mammograms and to provide an information infrastructure which federates multiple mammogram databases. This will enable clinicians to develop new common, collaborative and co-operative approaches to the analysis of mammographic data. Conclusion: This paper focuses on one of the key requirements for large-scale distributed mammogram analysis: resolving queries across a grid-connected federation of images.
48

Draghici, George. "Infrastructure for Integrated Collaborative Product Development in Distributed Environment." Applied Mechanics and Materials 760 (May 2015): 9–14. http://dx.doi.org/10.4028/www.scientific.net/amm.760.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The tools used for collaborative product design are based on PLM solutions, integrating BOM, CAD, CAE, CAPP, CAM and PDM software. The PLM platforms are distributed among the partners of a virtual project team. Currently, integrated collaborative and distributed product design and manufacture can be supported by cloud technology, given advances in Cloud Computing. The objective of our research is to develop an infrastructure for integrated collaborative and distributed product design and manufacturing, including software applications, CNC machine tools, additive manufacturing machines, etc., dispersed in different university labs for students' education.
49

Gupta, Punit, and Deepika Agrawal. "Trusted Cloud Platform for Cloud Infrastructure." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 8 (August 30, 2013): 1884–91. http://dx.doi.org/10.24297/ijct.v10i8.1473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Reliability and trust models are used to enhance secure and reliable scheduling, load balancing and QoS in cloud and distributed environments. Trust models used in distributed and Grid environments do not meet the requirements of a cloud computing environment, since the parameters taken into consideration in these models do not fit cloud Infrastructure as a Service (IaaS). A suitable trust model is therefore proposed, based on an existing model, for trust value management of cloud IaaS parameters. Based on the achieved trust values, trust-based scheduling and load balancing are performed for better allocation of resources and enhanced QoS of the services provided to users. In this paper, a trust-based cloud computing framework is proposed using a trust model with trust-based scheduling and load-balancing algorithms. We describe the design and development of a trusted cloud service model for cloud Infrastructure as a Service (IaaS) known as VimCloud: an open-source cloud computing framework that implements the trusted cloud service model and the trust-based scheduling and load-balancing algorithms. One of the major issues in cloud IaaS is ensuring the reliability and security of user data and computation. The trusted cloud service model ensures that a user's virtual machine executes only on a trusted cloud node, whose integrity and reliability are known in terms of its trust value. VimCloud is shown to be practical in terms of performance, outperforming existing models.
50

Čerňanský, Michal, Ladislav Huraj, and Marek Šimon. "Controlled DDoS Attack on IPv4/IPv6 Network Using Distributed Computing Infrastructure." Journal of information and organizational sciences 44, no. 2 (December 9, 2020): 297–316. http://dx.doi.org/10.31341/jios.44.2.6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The paper focuses on the design, background and experimental results of a real environment for DDoS attacks. The experimental testbed is based on the use of an IT automation tool to perform DDoS attacks under monitoring. DDoS attacks are still a serious threat in both IPv4 and IPv6 networks, and a simple tool to test a network for DDoS attacks and to allow evaluation of the network's vulnerabilities and DDoS countermeasures is necessary. In the proposed testbed, the Ansible orchestration tool is employed to perform and coordinate DDoS attacks. Ansible is a powerful tool and simplifies the implementation of the test environment. Moreover, no special hardware is required for the execution of the attacks; the testbed uses the existing infrastructure of an organization. The case study of the implementation of this environment shows how straightforward it is to create a testbed comparable to a botnet with ten thousand bots. Furthermore, the experimental results demonstrate the potential of the proposed environment and present the impact of the attacks on particular target servers in IPv4 and IPv6 networks.
