A selection of scholarly literature on the topic "Distributed computing infrastructure"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Distributed computing infrastructure".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Journal articles on the topic "Distributed computing infrastructure":

1

Horzela, Maximilian, Henri Casanova, Manuel Giffels, Artur Gottmann, Robin Hofsaess, Günter Quast, Simone Rossi Tisbeni, Achim Streit, and Frédéric Suter. "Modeling Distributed Computing Infrastructures for HEP Applications." EPJ Web of Conferences 295 (2024): 04032. http://dx.doi.org/10.1051/epjconf/202429504032.

Abstract:
Predicting the performance of various infrastructure design options in complex federated infrastructures with computing sites distributed over a wide area network that support a plethora of users and workflows, such as the Worldwide LHC Computing Grid (WLCG), is not trivial. Due to the complexity and size of these infrastructures, it is not feasible to deploy experimental test-beds at large scales merely for the purpose of comparing and evaluating alternate designs. An alternative is to study the behaviours of these systems using simulation. This approach has been used successfully in the past to identify efficient and practical infrastructure designs for High Energy Physics (HEP). A prominent example is the Monarc simulation framework, which was used to study the initial structure of the WLCG. New simulation capabilities are needed to simulate large-scale heterogeneous computing systems with complex networks, data access and caching patterns. A modern tool to simulate HEP workloads that execute on distributed computing infrastructures based on the SimGrid and WRENCH simulation frameworks is outlined. Studies of its accuracy and scalability are presented using HEP as a case-study. Hypothetical adjustments to prevailing computing architectures in HEP are studied providing insights into the dynamics of a part of the WLCG and candidates for improvements.
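To make the simulation idea above more tangible, here is a toy Python sketch that estimates the makespan of a batch of jobs brokered across hypothetical sites with different core counts and WAN bandwidths. The cost model, site parameters and round-robin brokering are all invented for illustration; this is not the SimGrid/WRENCH-based tool the paper describes.

```python
# Toy model of a federated computing infrastructure: jobs are scheduled onto
# sites and their runtime is estimated from CPU work plus remote data transfer.
# All numbers and the cost model are illustrative, not SimGrid/WRENCH.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cores: int
    bandwidth_mbps: float    # effective WAN bandwidth to the storage site
    busy_until: list = None  # per-core availability times (seconds)

    def __post_init__(self):
        self.busy_until = [0.0] * self.cores

@dataclass
class Job:
    cpu_seconds: float       # pure compute time on one core
    input_gb: float          # data to be read over the WAN

def run_job(site: Site, job: Job, now: float) -> float:
    """Place the job on the earliest-free core and return its finish time."""
    core = min(range(site.cores), key=lambda c: site.busy_until[c])
    start = max(now, site.busy_until[core])
    transfer = job.input_gb * 8000.0 / site.bandwidth_mbps  # GB -> Mb / Mbps
    finish = start + transfer + job.cpu_seconds
    site.busy_until[core] = finish
    return finish

sites = [Site("T1", cores=64, bandwidth_mbps=10000),
         Site("T2", cores=16, bandwidth_mbps=1000)]
jobs = [Job(cpu_seconds=3600, input_gb=2.0) for _ in range(200)]

# Naive round-robin brokering, just to compare design options.
makespan = 0.0
for i, job in enumerate(jobs):
    makespan = max(makespan, run_job(sites[i % len(sites)], job, now=0.0))
print(f"Estimated makespan: {makespan / 3600:.1f} h")
```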
2

Korenkov, Vladimir, Andrei Dolbilov, Valeri Mitsyn, Ivan Kashunin, Nikolay Kutovskiy, Dmitry Podgainy, Oksana Streltsova, Tatiana Strizh, Vladimir Trofimov, and Peter Zrelov. "The JINR distributed computing environment." EPJ Web of Conferences 214 (2019): 03009. http://dx.doi.org/10.1051/epjconf/201921403009.

Abstract:
Computing in the field of high energy physics requires usage of heterogeneous computing resources and IT, such as grid, high performance computing, cloud computing and big data analytics for data processing and analysis. The core of the distributed computing environment at the Joint Institute for Nuclear Research is the Multifunctional Information and Computing Complex. It includes Tier1 for CMS experiment, Tier2 site for all LHC experiments and other grid non-LHC VOs, such as BIOMED, COMPASS, NICA/MPD, NOvA, STAR and BESIII, as well as cloud and HPC infrastructures. A brief status overview of each component is presented. Particular attention is given to the development of distributed computations performed in collaboration with CERN, BNL, FNAL, FAIR, China, and JINR Member States. One of the directions for the cloud infrastructure is the development of integration methods of various cloud resources of the JINR Member State organizations in order to perform common tasks, and also to distribute a load across integrated resources. We performed cloud resources integration of scientific centers in Armenia, Azerbaijan, Belarus, Kazakhstan and Russia. Extension of the HPC component will be carried through a specialized infrastructure for HPC engineering that is being created at MICC, which makes use of the contact liquid cooling technology implemented by the Russian company JSC "RSC Technologies". Current plans are to further develop MICC as a center for scientific computing within the multidisciplinary research environment of JINR and JINR Member States, and mainly for the NICA mega-science project.
3

Fergusson, David, Roberto Barbera, Emidio Giorgio, Marco Fargetta, Gergely Sipos, Diego Romano, Malcolm Atkinson, and Elizabeth Vander Meer. "Distributed Computing Education, Part 4: Training Infrastructure." IEEE Distributed Systems Online 9, no. 10 (October 2008): 2. http://dx.doi.org/10.1109/mdso.2008.28.

4

Arslan, Mustafa Y., Indrajeet Singh, Shailendra Singh, Harsha V. Madhyastha, Karthikeyan Sundaresan, and Srikanth V. Krishnamurthy. "CWC: A Distributed Computing Infrastructure Using Smartphones." IEEE Transactions on Mobile Computing 14, no. 8 (August 1, 2015): 1587–600. http://dx.doi.org/10.1109/tmc.2014.2362753.

5

Di Girolamo, Alessandro, Federica Legger, Panos Paparrigopoulos, Alexei Klimentov, Jaroslava Schovancová, Valentin Kuznetsov, Mario Lassnig, et al. "Operational Intelligence for Distributed Computing Systems for Exascale Science." EPJ Web of Conferences 245 (2020): 03017. http://dx.doi.org/10.1051/epjconf/202024503017.

Abstract:
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets require a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial amount of interventions from software developers, shifters and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of operational data can be exploited to increase the level of automation in computing operations by using adequate techniques, such as machine learning (ML), tailored to solve specific problems. The Operational Intelligence project is a joint effort from various WLCG communities aimed at increasing the level of automation in computing operations. We discuss how state-of-the-art technologies can be used to build general solutions to common problems and to reduce the operational cost of the experiment computing infrastructure.
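As a hint of how operational metrics could feed such automation, the sketch below flags anomalous values in a stream of synthetic hourly failure counts using a rolling z-score. The window, threshold and data are assumptions; production systems would use the richer ML pipelines the paper refers to.

```python
# Minimal anomaly flagging on operational metrics (e.g., hourly transfer
# failures per site). Purely illustrative; thresholds and data are made up.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(series, window=24, z_threshold=3.0):
    """Yield (index, value) pairs whose z-score against the trailing window
    exceeds the threshold."""
    history = deque(maxlen=window)
    for i, value in enumerate(series):
        if len(history) >= 3:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield i, value
        history.append(value)

failures_per_hour = [5, 7, 6, 4, 8, 6, 5, 7, 95, 6, 5, 4]  # synthetic data
for hour, count in detect_anomalies(failures_per_hour, window=6):
    print(f"hour {hour}: {count} failures looks anomalous")
```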
6

J, Bakiadarshani. "Computing while Charging: Building a Distributed Computing Infrastructure using Smartphones." International Journal for Research in Applied Science and Engineering Technology V, no. III (March 24, 2017): 323–37. http://dx.doi.org/10.22214/ijraset.2017.3060.

7

Adam, C., D. Barberis, S. Crépé-Renaudin, K. De, F. Fassi, A. Stradling, M. Svatos, A. Vartapetian, and H. Wolters. "Computing shifts to monitor ATLAS distributed computing infrastructure and operations." Journal of Physics: Conference Series 898 (October 2017): 092004. http://dx.doi.org/10.1088/1742-6596/898/9/092004.

8

Nishant, Neerav, and Vaishali Singh. "Distributed Infrastructure for an Academic Cloud." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 6 (July 10, 2023): 34–38. http://dx.doi.org/10.17762/ijritcc.v11i6.6769.

Abstract:
The community infrastructure literature reveals the challenges educational institutions face in embracing cloud computing trends. Setting up one's own data center in effect means running a private cloud. If research on open cloud services is available within the institution, then the rollout of such research products becomes an in-house implementation, which in turn reduces dependence on cloud vendors. Distributing resources opens a channel for better communication within academic institutions and also creates opportunities to procure individual hardware at greater benefit. Enormous spending and unaccounted credits fall into central budgets if not controlled in a structured manner, and the rising overall cost of data management requires an institution to take a different perspective on its long-term benefits. These expenses allow the cloud management tasks to be branched either into a vendor's private cloud or into the institution's own cloud where feasible. Big data affects academia to such an extent that disparate, decentralized data management creates several pitfalls. The suggested solution is therefore a controlled environment in the form of distributed computing. Infrastructure spending shoots up under a pay-as-you-go model. We argue that a distributed infrastructure is an excellent opportunity in computing when it is operated with the trust guarantees of a private cloud. The open-source movement experiments with distributed clouds by promoting OpenStack Swift.
9

CHEN, QIMING, PARVATHI CHUNDI, UMESHWAR DAYAL, and MEICHUN HSU. "DYNAMIC AGENTS." International Journal of Cooperative Information Systems 08, no. 02n03 (June 1999): 195–223. http://dx.doi.org/10.1142/s0218843099000101.

Abstract:
We claim that a dynamic agent infrastructure can provide a shift from static distributed computing to dynamic distributed computing, and we have developed an infrastructure to realize such a shift. We shall compare this infrastructure with other distributed computing infrastructures such as CORBA and DCOM, and demonstrate its value in highly dynamic system integration, service provisioning and distributed applications such as data mining on the Web. The infrastructure is Java-based, light-weight, and extensible. It differs from other agent platforms and client/server infrastructures in its support of dynamic behavior modification of agents. A dynamic agent is not designed to have a fixed set of predefined functions, but instead, to carry application-specific actions, which can be loaded and modified on the fly. This allows a dynamic agent to adjust its capability to accommodate changes in the environment and requirements, and play different roles across multiple applications. The above features are supported by the light-weight, built-in management facilities of dynamic agents, which can be commonly used by the "carried" application programs to communicate, manage resources and modify their problem-solving capabilities. Therefore, the proposed infrastructure allows application-specific multi-agent systems to be developed easily on top of it, provides "nuts and bolts" for run-time system integration, and supports dynamic service construction, modification and movement. A prototype has been developed at HP Labs and made available to several external research groups.
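A toy Python analogue of the dynamic-agent idea, in which an agent's action set is not fixed at construction time but can be loaded, replaced or dropped at runtime, is sketched below. The class, action names and resource dictionary are hypothetical and only mimic the behaviour-modification facility described in the abstract.

```python
# A tiny "dynamic agent": its actions are plain callables that can be carried,
# added, replaced or dropped while the agent is running. Names are invented.
class DynamicAgent:
    def __init__(self, name):
        self.name = name
        self.actions = {}          # action name -> callable(agent, **kwargs)
        self.resources = {}        # shared state usable by carried actions

    def load_action(self, action_name, func):
        """Install or replace an application-specific action on the fly."""
        self.actions[action_name] = func

    def unload_action(self, action_name):
        self.actions.pop(action_name, None)

    def perform(self, action_name, **kwargs):
        if action_name not in self.actions:
            raise LookupError(f"{self.name} cannot perform {action_name!r}")
        return self.actions[action_name](self, **kwargs)

# Behaviours defined (or even downloaded) after the agent already exists.
def fetch(agent, url):
    agent.resources["last_url"] = url
    return f"{agent.name} fetched {url}"

def summarize(agent):
    return f"{agent.name} summarizing {agent.resources.get('last_url')}"

agent = DynamicAgent("miner-01")
agent.load_action("fetch", fetch)
agent.load_action("summarize", summarize)
print(agent.perform("fetch", url="http://example.org/data"))
print(agent.perform("summarize"))
agent.load_action("fetch", lambda a, url: f"cached copy of {url}")  # new role
print(agent.perform("fetch", url="http://example.org/data"))
```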
10

Dubenskaya, J., A. Kryukov, A. Demichev, and N. Prikhodko. "New security infrastructure model for distributed computing systems." Journal of Physics: Conference Series 681 (February 3, 2016): 012051. http://dx.doi.org/10.1088/1742-6596/681/1/012051.


Dissertations on the topic "Distributed computing infrastructure":

1

AlJabban, Tarek. "Distributed database storage management for a cloud computing infrastructure." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114556.

Abstract:
Internet applications have recently witnessed tremendous growth in terms of both size and complexity. Cloud computing is one of several distributed technologies that have emerged to help meet the objectives of these applications in terms of achieving high availability, performance and scalability. Platform as a Service (PaaS) is one kind of service provided by cloud solutions. These systems often follow a multi-tier architecture consisting mainly of a presentation tier, an application tier and a database tier. The volumes of data exchanged between the application tier and the database tier become huge, especially for enterprise-level applications. As a result, the design of the database tier in cloud systems has to carefully address the scalability challenges arising from the huge data volumes. In this thesis, we propose a data distribution approach to improve the scalability of the database tier. Our approach is applied to a traditional single database server. It works by replacing the traditionally used single-machine storage paradigm with a distributed storage paradigm. The suggested approach maintains the features that originally exist in the database system, and additionally provides the features of distribution and replication. Distributing the data storage helps improve the system's fault tolerance, as it decreases the possibility of a failure at the database server. It also helps resolve specific performance issues, such as reducing I/O usage and consequently decreasing the possibility of an I/O bottleneck. Yet, it produces other performance challenges that need to be addressed. To prove the feasibility of our proposed approach, we use it to implement two extensions to the storage manager module of the PostgreSQL database system, using the HDFS distributed file system and the HBase distributed key-value store.
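The thesis swaps a single-machine storage layer for a distributed one. The sketch below shows the general shape of such a swap: a minimal block-store interface with an in-memory stand-in for a replicated key-value backend such as HBase. The interface and class names are invented and do not correspond to PostgreSQL's actual storage-manager API.

```python
# Conceptual sketch: the database engine talks to an abstract block store, so a
# local file backend can be swapped for a distributed key-value backend.
# Interfaces are invented; this is not PostgreSQL's storage-manager API.
from abc import ABC, abstractmethod

BLOCK_SIZE = 8192  # bytes, mirroring a typical database page size

class BlockStore(ABC):
    @abstractmethod
    def write_block(self, relation: str, block_no: int, data: bytes) -> None: ...
    @abstractmethod
    def read_block(self, relation: str, block_no: int) -> bytes: ...

class DistributedKVBlockStore(BlockStore):
    """Stand-in for a remote store (e.g., HBase over HDFS): keyed by relation
    and block number, with naive replication across 'nodes' (here just dicts)."""
    def __init__(self, replicas=2):
        self.nodes = [dict() for _ in range(max(1, replicas))]

    def write_block(self, relation, block_no, data):
        assert len(data) <= BLOCK_SIZE
        key = (relation, block_no)
        for node in self.nodes:          # replicate for fault tolerance
            node[key] = data

    def read_block(self, relation, block_no):
        key = (relation, block_no)
        for node in self.nodes:          # fall back to another replica
            if key in node:
                return node[key]
        raise KeyError(key)

store: BlockStore = DistributedKVBlockStore(replicas=3)
store.write_block("accounts", 0, b"tuple data ...")
print(store.read_block("accounts", 0))
```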
2

LUCREZIA, FRANCESCO. "Network Infrastructures for Highly Distributed Cloud-Computing." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2706032.

Abstract:
Software-Defined Networking (SDN) is emerging as a solid opportunity for Network Service Providers (NSPs) to reduce costs while at the same time providing better and/or new services. The possibility to flexibly manage and configure highly-available and scalable network services through data model abstractions and easy-to-consume APIs is attractive, and the adoption of such technologies is gaining momentum. At the same time, NSPs are planning to innovate their infrastructures through a process of network softwarisation and programmability. The SDN paradigm aims at improving the design, configuration, maintenance and service provisioning agility of the network through centralised software control. This is easily achievable in local area networks, typical of data-centers, where the benefit of having programmable access to the entire network is not restricted by latency between the network devices and the SDN controller, which is reasonably located in the same LAN as the data path nodes. In Wide Area Networks (WAN), instead, a centralised control plane limits the speed of responsiveness in reaction to time-constrained network events due to unavoidable latencies caused by physical distances. Moreover, an end-to-end control shall involve the participation of multiple, domain-specific controllers: access devices, data-center fabrics and backbone networks have very different characteristics and their control planes could hardly coexist in a single centralised entity, unless very complex solutions are adopted, which inevitably lead to software bugs, inconsistent states and performance issues. In recent years, the idea of exploiting SDN for WAN infrastructures to connect multiple sites together has spread in both the scientific community and the industry. The former has produced interesting results in terms of framework proposals, complexity and performance analysis for network resource allocation schemes, and open-source proof-of-concept prototypes targeting SDN architectures spanning multiple technological and administrative domains. On the other hand, much of the work still remains confined to academia, mainly because it is based on pure OpenFlow prototype implementations, networks emulated on a single general-purpose machine, or simulations proving the effectiveness of algorithms. The industry has made SDN a reality via closed-source systems, running on single administrative domain networks with little if any diversification of access and backbone devices. In this dissertation we present our contributions to the design and the implementation of SDN architectures for the control plane of WAN infrastructures. In particular, we studied and prototyped two SDN platforms to build a programmable, intent-based control plane suitable for today's highly distributed cloud infrastructures. Our main contributions are: (i) a holistic and architectural description of a distributed SDN control plane for end-to-end QoS provisioning; we compare the legacy IntServ RSVP protocol with a novel approach for prioritising application-sensitive flows via centralised vantage points. It is based on a peer-to-peer architecture and could thus be suitable for the inter-authoritative-domain scenario. (ii) An open-source platform based on a two-layer hierarchy of network controllers designed to provision end-to-end connectivity in real networks composed of heterogeneous devices and links within a single authoritative domain.
This platform has been integrated into CORD, an open-source project whose goal is to bring data-center economics and cloud agility to NSP central office infrastructures, combining NFV (Network Function Virtualization), SDN and the elasticity of commodity clouds. Our platform enables the provisioning of connectivity services between multiple CORD sites, up to the customer premises. Our system and software contributions in SDN have thus been combined with an NFV infrastructure for network service automation and orchestration.
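To make the two-layer controller hierarchy concrete, the following purely conceptual sketch shows a parent controller decomposing an end-to-end connectivity intent into per-domain segment requests handled by child controllers. The domain model and all names are assumptions; no real SDN controller API is used.

```python
# Conceptual two-layer control plane: a parent controller decomposes an
# end-to-end connectivity intent into per-domain requests. Invented model.
from dataclasses import dataclass

@dataclass
class Intent:
    src: str            # e.g. "siteA/host1"
    dst: str            # e.g. "siteB/host9"
    bandwidth_mbps: int

class DomainController:
    def __init__(self, domain):
        self.domain = domain

    def provision_segment(self, a, b, bandwidth_mbps):
        # A real controller would push flow rules / configure devices here.
        return f"[{self.domain}] path {a} -> {b} @ {bandwidth_mbps} Mb/s"

class ParentController:
    def __init__(self, children):
        self.children = {c.domain: c for c in children}

    def realize(self, intent: Intent):
        src_dom, dst_dom = intent.src.split("/")[0], intent.dst.split("/")[0]
        hops = [(src_dom, intent.src, f"{src_dom}/border"),
                ("wan", f"{src_dom}/border", f"{dst_dom}/border"),
                (dst_dom, f"{dst_dom}/border", intent.dst)]
        return [self.children[d].provision_segment(a, b, intent.bandwidth_mbps)
                for d, a, b in hops]

parent = ParentController([DomainController(d) for d in ("siteA", "wan", "siteB")])
for line in parent.realize(Intent("siteA/host1", "siteB/host9", 500)):
    print(line)
```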
3

Khan, Kashif. "A distributed computing architecture to enable advances in field operations and management of distributed infrastructure." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/a-distributed-computing-architecture-to-enable-advances-in-field-operations-and-management-of-distributed-infrastructure(a9181e99-adf3-47cb-93e1-89d267219e50).html.

Abstract:
Distributed infrastructures (e.g., water networks and electric Grids) are difficult to manage due to their scale, lack of accessibility, complexity, ageing and uncertainties in knowledge of their structure. In addition they are subject to loads that can be highly variable and unpredictable and to accidental events such as component failure, leakage and malicious tampering. To support in-field operations and central management of these infrastructures, the availability of consistent and up-to-date knowledge about the current state of the network and how it would respond to planned interventions is argued to be highly desirable. However, at present, large-scale infrastructures are “data rich but knowledge poor”. Data, algorithms and tools for network analysis are improving but there is a need to integrate them to support engineering operations more directly. Current ICT solutions are mainly based on specialized, monolithic and heavyweight software packages that restrict the dissemination of dynamic information and its appropriate and timely presentation, particularly to field engineers who operate in resource-constrained and less reliable environments. This thesis proposes a solution to these problems by recognizing that current monolithic ICT solutions for infrastructure management seek to meet the requirements of different human roles and operating environments (defined in this work as field and central sides). It proposes an architectural approach to providing dynamic, predictive, user-centric, device- and platform-independent access to consistent and up-to-date knowledge. This architecture integrates the components required to implement the functionalities of data gathering, data storage, simulation modelling, and information visualization and analysis. These components are tightly coupled in current implementations of software for analysing the behaviour of networks. The architectural approach, by contrast, requires that they be kept as separate as possible and interact only when required using common and standard protocols. The thesis particularly concentrates on engineering practices in clean water distribution networks, but the methods are applicable to other structural networks, for example, the electricity Grid. A prototype implementation is provided that establishes a dynamic hydraulic simulation model and enables the model to be queried via remote access in a device- and platform-independent manner. This thesis provides an extensive evaluation comparing the architecture-driven approach with current approaches, to substantiate the above claims. This evaluation is conducted by the use of benchmarks that are currently published and accepted in the water engineering community. To facilitate this evaluation, a working prototype of the whole architecture has been developed and is made available under an open-source licence.
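As a loose illustration of the decoupling argued for above, the sketch below exposes a fake network-state model through a small JSON-over-HTTP endpoint that a field device could query from any platform. The endpoint path, port and placeholder model are assumptions standing in for the thesis's hydraulic simulation service.

```python
# Minimal JSON-over-HTTP access to a simulation model, so that field and
# central clients stay decoupled from the modelling code. Model is fake.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def simulate_network_state():
    """Placeholder for a hydraulic solver run; returns made-up node pressures."""
    return {"node_12": {"pressure_bar": 3.1}, "node_47": {"pressure_bar": 2.4}}

class StateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/state":
            self.send_error(404)
            return
        body = json.dumps(simulate_network_state()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any HTTP-capable device (browser, phone app) can now GET /state.
    HTTPServer(("0.0.0.0", 8080), StateHandler).serve_forever()
```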
4

Peters, Stephen Leslie. "Hyperglue : an infrastructure for human-centered computing in distributed, pervasive, intelligent environments." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35594.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 161-165).
As intelligent environments (IEs) move from simple kiosks and meeting rooms into the everyday offices, kitchens, and living spaces we use, the need for these spaces to communicate not only with users, but also with each other, will become increasingly important. Users will want to be able to shift their work environment between localities easily, and will also need to communicate with others as they move about. These IEs will thus require two pieces of infrastructure: a knowledge representation (KR) which can keep track of people and their relationships to the world; and a communication mechanism so that the IE can mediate interactions. This thesis seeks to define, explore and evaluate one way of creating this infrastructure, by creating societies of agents that can act on behalf of real-world entities such as users, physical spaces, or informal groups of people. Just as users interact with each other and with objects in their physical location, the agent societies interact with each other along communication channels organized along these same relationships. By organizing the infrastructure through analogies to the real world, we hope to achieve a simpler conceptual model for the users, as well as a communication hierarchy which can be realized efficiently.
by Stephen L. Peters.
Ph.D.
5

Bianchi, Stefano. "Design and Implementation of a Cloud Infrastructure for Distributed Scientific Calculation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Abstract:
Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed Data Centers, in a fully transparent way. This need is particularly felt by scientific applications that exploit distributed resources in an efficient and scalable way for the processing of large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers by applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to obtain evaluations with different types of TCP sample connections, to demonstrate the functionality of the proposed solution, and to obtain throughput measurements in relation to relevant design parameters.
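The evaluation above relies on throughput measurements over sample TCP connections. A bare-bones way to take such a measurement between two virtual machines is sketched below; the port and transfer size are arbitrary, and a real testbed would more likely use a dedicated tool such as iperf.

```python
# Crude TCP throughput probe between two hosts. Run "python probe.py server"
# on one VM and "python probe.py <server-ip>" on another. Port and payload
# size are arbitrary choices.
import socket, sys, time

PORT, CHUNK, TOTAL_MB = 5001, 64 * 1024, 100

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):   # drain everything the client sends
                pass

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        start = time.monotonic()
        for _ in range((TOTAL_MB * 1024 * 1024) // CHUNK):
            sock.sendall(payload)
        elapsed = time.monotonic() - start
    print(f"~{TOTAL_MB * 8 / elapsed:.1f} Mbit/s over {elapsed:.2f} s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])
```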
6

Mechtri, Marouen. "Virtual networked infrastructure provisioning in distributed cloud environments." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0028/document.

Abstract:
Cloud computing emerged as a new paradigm for on-demand provisioning of IT resources and for infrastructure externalization and is rapidly and fundamentally revolutionizing the way IT is delivered and managed. The resulting incremental Cloud adoption is fostering to some extent cloud providers cooperation and increasing the needs of tenants and the complexity of their demands. Tenants need to network their distributed and geographically spread cloud resources and services. They also want to easily accomplish their deployments and instantiations across heterogeneous cloud platforms. Traditional cloud providers focus on compute resources provisioning and offer mostly virtual machines to tenants and cloud services consumers who actually expect full-fledged (complete) networking of their virtual and dedicated resources. They not only want to control and manage their applications but also control connectivity to easily deploy complex network functions and services in their dedicated virtual infrastructures. The needs of users are thus growing beyond the simple provisioning of virtual machines to the acquisition of complex, flexible, elastic and intelligent virtual resources and services. The goal of this thesis is to enable the provisioning and instantiation of this type of more complex resources while empowering tenants with control and management capabilities and to enable the convergence of cloud and network services. To reach these goals, the thesis proposes mapping algorithms for optimized in-data center and in-network resources hosting according to the tenants' virtual infrastructures requests. In parallel to the apparition of cloud services, traditional networks are being extended and enhanced with software networks relying on the virtualization of network resources and functions especially through network resources and functions virtualization. Software Defined Networks are especially relevant as they decouple network control and data forwarding and provide the needed network programmability and system and network management capabilities. In such a context, the first part proposes optimal (exact) and heuristic placement algorithms to find the best mapping between the tenants' requests and the hosting infrastructures while respecting the objectives expressed in the demands. This includes localization constraints to place some of the virtual resources and services in the same host and to distribute other resources in distinct hosts. The proposed algorithms achieve simultaneous node (host) and link (connection) mappings. A heuristic algorithm is proposed to address the poor scalability and high complexity of the exact solution(s). The heuristic scales much better and is several orders of magnitude more efficient in terms of convergence time towards near optimal and optimal solutions. This is achieved by reducing complexity of the mapping process using topological patterns to map virtual graph requests to physical graphs representing respectively the tenants' requests and the providers' physical infrastructures. The proposed approach relies on graph decomposition into topology patterns and bipartite graphs matching techniques. The third part propose an open source Cloud Networking framework to achieve cloud and network resources provisioning and instantiation in order to respectively host and activate the tenants' virtual resources and services. This framework enables and facilitates dynamic networking of distributed cloud services and applications. 
This solution relies on a Cloud Network Gateway Manager and gateways to establish dynamic connectivity between cloud and network resources. The CNG-Manager provides the application networking control and supports the deployment of the needed underlying network functions in the tenant desired infrastructure (or slice since the physical infrastructure is shared by multiple tenants with each tenant receiving a dedicated and isolated portion/share of the physical resources)
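A highly simplified sketch of the constrained mapping problem addressed by the thesis is given below: virtual nodes are greedily placed onto physical hosts subject to capacity, co-location and anti-location constraints. The greedy rule and data model are invented for illustration and are far simpler than the exact and heuristic algorithms proposed in the thesis.

```python
# Greedy placement of virtual nodes onto physical hosts with capacity,
# co-location and anti-location constraints. Illustrative only; not the
# exact/heuristic mapping algorithms of the thesis.
def place(vnodes, hosts, colocate=(), separate=()):
    """vnodes/hosts: dicts mapping name -> CPU demand/capacity.
    colocate/separate: iterables of (vnode_a, vnode_b) pairs."""
    free = dict(hosts)
    mapping = {}

    def feasible(v, h):
        if free[h] < vnodes[v]:
            return False
        for a, b in colocate:           # partner must end up on the same host
            other = b if a == v else a if b == v else None
            if other in mapping and mapping[other] != h:
                return False
        for a, b in separate:           # partner must end up on another host
            other = b if a == v else a if b == v else None
            if other in mapping and mapping[other] == h:
                return False
        return True

    for v in sorted(vnodes, key=vnodes.get, reverse=True):   # biggest first
        host = next((h for h in sorted(free, key=free.get, reverse=True)
                     if feasible(v, h)), None)
        if host is None:
            raise RuntimeError(f"no feasible host for {v}")
        mapping[v] = host
        free[host] -= vnodes[v]
    return mapping

vms = {"web": 2, "db": 4, "cache": 1, "fw": 1}
pms = {"host1": 4, "host2": 4}
print(place(vms, pms, colocate=[("web", "cache")], separate=[("web", "db")]))
```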
7

Svärd, Petter. "Dynamic Cloud Resource Management : Scheduling, Migration and Server Disaggregation." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-87904.

Abstract:
A key aspect of cloud computing is the promise of infinite, scalable resources, and that cloud services should scale up and down on demand. This thesis investigates methods for dynamic resource allocation and management of services in cloud datacenters, introducing new approaches as well as improvements to established technologies. Virtualization is a key technology for cloud computing as it allows several operating system instances to run on the same Physical Machine (PM), and cloud services normally consist of a number of Virtual Machines (VMs) that are hosted on PMs. In this thesis, a novel virtualization approach is presented. Instead of running each PM in isolation, resources from multiple PMs in the datacenter are disaggregated and exposed to the VMs as pools of CPU, I/O and memory resources. VMs are provisioned by using the right amount of resources from each pool, thereby enabling both larger VMs than any single PM can host as well as VMs with tailor-made specifications for their application. Another important aspect of virtualization is live migration of VMs, which is the concept of moving VMs between PMs without interruption in service. Live migration allows for better PM utilization and is also useful for administrative purposes. In the thesis, two improvements to the standard live migration algorithm are presented: delta compression and page transfer reordering. The improvements can reduce migration downtime, i.e., the time that the VM is unavailable, as well as the total migration time. Postcopy migration, where the VM is resumed on the destination before the memory content is transferred, is also studied. Both userspace and in-kernel postcopy algorithms are evaluated in an in-depth study of live migration principles and performance. Efficient mapping of VMs onto PMs is a key problem for cloud providers, as PM utilization directly impacts revenue. When services are accepted into a datacenter, a decision is made on which PM should host the service VMs. This thesis presents a general approach for service scheduling that allows the same scheduling software to be used across multiple cloud architectures. A number of scheduling algorithms to optimize objectives like revenue or utilization are also studied. Finally, an approach for continuous datacenter consolidation is presented. As VM workloads fluctuate and server availability varies, any initial mapping is bound to become suboptimal over time. The continuous datacenter consolidation approach adjusts this VM-to-PM mapping during operation based on combinations of management actions, like suspending/resuming PMs, live migrating VMs, and suspending/resuming VMs. Proof-of-concept software and a set of algorithms that allow cloud providers to continuously optimize their server resources are presented in the thesis.
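One of the live-migration improvements mentioned above, delta compression, can be illustrated in a few lines: instead of resending a whole dirty page, only the compressed XOR difference against the previously transferred version is shipped. The page size and data are arbitrary; this is a conceptual sketch, not the thesis's implementation.

```python
# Delta compression of re-sent memory pages during live migration, in
# miniature: ship zlib(previous XOR current) instead of the full page.
import os, zlib

PAGE_SIZE = 4096

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_dirty_page(previous: bytes, current: bytes) -> bytes:
    return zlib.compress(xor(previous, current))

def decode_dirty_page(previous: bytes, delta: bytes) -> bytes:
    return xor(previous, zlib.decompress(delta))

# A page that was already transferred once, then slightly modified by the guest.
old_page = os.urandom(PAGE_SIZE)
new_page = bytearray(old_page)
new_page[100:108] = b"modified"
new_page = bytes(new_page)

delta = encode_dirty_page(old_page, new_page)
assert decode_dirty_page(old_page, delta) == new_page
print(f"full page: {PAGE_SIZE} B, delta sent: {len(delta)} B")
```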
8

Mechtri, Marouen. "Virtual networked infrastructure provisioning in distributed cloud environments." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0028.

Abstract:
Cloud computing emerged as a new paradigm for on-demand provisioning of IT resources and for infrastructure externalization and is rapidly and fundamentally revolutionizing the way IT is delivered and managed. The resulting incremental Cloud adoption is fostering to some extent cloud providers cooperation and increasing the needs of tenants and the complexity of their demands. Tenants need to network their distributed and geographically spread cloud resources and services. They also want to easily accomplish their deployments and instantiations across heterogeneous cloud platforms. Traditional cloud providers focus on compute resources provisioning and offer mostly virtual machines to tenants and cloud services consumers who actually expect full-fledged (complete) networking of their virtual and dedicated resources. They not only want to control and manage their applications but also control connectivity to easily deploy complex network functions and services in their dedicated virtual infrastructures. The needs of users are thus growing beyond the simple provisioning of virtual machines to the acquisition of complex, flexible, elastic and intelligent virtual resources and services. The goal of this thesis is to enable the provisioning and instantiation of this type of more complex resources while empowering tenants with control and management capabilities and to enable the convergence of cloud and network services. To reach these goals, the thesis proposes mapping algorithms for optimized in-data center and in-network resources hosting according to the tenants' virtual infrastructures requests. In parallel to the apparition of cloud services, traditional networks are being extended and enhanced with software networks relying on the virtualization of network resources and functions especially through network resources and functions virtualization. Software Defined Networks are especially relevant as they decouple network control and data forwarding and provide the needed network programmability and system and network management capabilities. In such a context, the first part proposes optimal (exact) and heuristic placement algorithms to find the best mapping between the tenants' requests and the hosting infrastructures while respecting the objectives expressed in the demands. This includes localization constraints to place some of the virtual resources and services in the same host and to distribute other resources in distinct hosts. The proposed algorithms achieve simultaneous node (host) and link (connection) mappings. A heuristic algorithm is proposed to address the poor scalability and high complexity of the exact solution(s). The heuristic scales much better and is several orders of magnitude more efficient in terms of convergence time towards near optimal and optimal solutions. This is achieved by reducing complexity of the mapping process using topological patterns to map virtual graph requests to physical graphs representing respectively the tenants' requests and the providers' physical infrastructures. The proposed approach relies on graph decomposition into topology patterns and bipartite graphs matching techniques. The third part propose an open source Cloud Networking framework to achieve cloud and network resources provisioning and instantiation in order to respectively host and activate the tenants' virtual resources and services. This framework enables and facilitates dynamic networking of distributed cloud services and applications. 
This solution relies on a Cloud Network Gateway Manager and gateways to establish dynamic connectivity between cloud and network resources. The CNG-Manager provides the application networking control and supports the deployment of the needed underlying network functions in the tenant desired infrastructure (or slice since the physical infrastructure is shared by multiple tenants with each tenant receiving a dedicated and isolated portion/share of the physical resources)
9

Rojas, Balderrama Javier. "Gestion du cycle de vie de services déployés sur une infrastructure de calcul distribuée en neuroinformatique." Phd thesis, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00804893.

Abstract:
Interest is growing among scientific communities in sharing data and applications that facilitate research and the establishment of fruitful collaborations. Interdisciplinary fields such as the neurosciences particularly require sufficient computing power for large-scale experimentation. Despite the progress made in deploying such distributed infrastructures, many challenges related to interoperability and scalability remain unresolved. The constant evolution of technologies, the intrinsic complexity of production environments and their limited reliability at runtime are all penalizing factors. This work addresses the modelling and implementation of a service-oriented environment that enables the execution of scientific applications on distributed computing infrastructures, exploiting their high-throughput computing capacity. The model comprises a specification for describing command-line interfaces, a bridge between service-oriented architectures and global computing, and the efficient use of local and remote resources for scalability. A reference implementation is provided to demonstrate the feasibility of this approach. Its relevance is illustrated in the context of two research projects driven by large-scale experimental campaigns carried out on distributed resources. The developed environment replaces existing systems whose concerns often focus on execution alone. It enables the management of legacy codes as services, taking their entire life cycle into account. Moreover, the service-oriented approach supports the design of scientific workflows, which are used as a flexible means of describing applications composed of multiple services. The proposed approach is evaluated both qualitatively and quantitatively using real neuroimage analysis applications. The qualitative experiments are based on optimizing the specificity and sensitivity of the brain segmentation tools used to process Magnetic Resonance Images of patients with multiple sclerosis. The quantitative experiments address the speed-up and latency measured during the execution of longitudinal studies of brain atrophy in patients affected by Alzheimer's disease.
10

Suthakar, Uthayanath. "A scalable data store and analytic platform for real-time monitoring of data-intensive scientific infrastructure." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15788.

Abstract:
Monitoring data-intensive scientific infrastructures in real-time such as jobs, data transfers, and hardware failures is vital for efficient operation. Due to the high volume and velocity of events that are produced, traditional methods are no longer optimal. Several techniques, as well as enabling architectures, are available to support the Big Data issue. In this respect, this thesis complements existing survey work by contributing an extensive literature review of both traditional and emerging Big Data architecture. Scalability, low-latency, fault-tolerance, and intelligence are key challenges of the traditional architecture. However, Big Data technologies and approaches have become increasingly popular for use cases that demand the use of scalable, data intensive processing (parallel), and fault-tolerance (data replication) and support for low-latency computations. In the context of a scalable data store and analytics platform for monitoring data-intensive scientific infrastructure, Lambda Architecture was adapted and evaluated on the Worldwide LHC Computing Grid, which has been proven effective. This is especially true for computationally and data-intensive use cases. In this thesis, an efficient strategy for the collection and storage of large volumes of data for computation is presented. By moving the transformation logic out from the data pipeline and moving to analytics layers, it simplifies the architecture and overall process. Time utilised is reduced, untampered raw data are kept at storage level for fault-tolerance, and the required transformation can be done when needed. An optimised Lambda Architecture (OLA), which involved modelling an efficient way of joining batch layer and streaming layer with minimum code duplications in order to support scalability, low-latency, and fault-tolerance is presented. A few models were evaluated; pure streaming layer, pure batch layer and the combination of both batch and streaming layers. Experimental results demonstrate that OLA performed better than the traditional architecture as well the Lambda Architecture. The OLA was also enhanced by adding an intelligence layer for predicting data access pattern. The intelligence layer actively adapts and updates the model built by the batch layer, which eliminates the re-training time while providing a high level of accuracy using the Deep Learning technique. The fundamental contribution to knowledge is a scalable, low-latency, fault-tolerant, intelligent, and heterogeneous-based architecture for monitoring a data-intensive scientific infrastructure, that can benefit from Big Data, technologies and approaches.
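The core Lambda-Architecture idea referred to above, serving queries by combining a precomputed batch view with recent streaming increments over an immutable raw dataset, can be sketched as follows. The metric, views and update cycle are invented and do not reproduce the optimised Lambda Architecture (OLA) evaluated in the thesis.

```python
# Lambda-style serving in miniature: a query merges the batch view (recomputed
# periodically from raw, untampered data) with the speed layer's recent
# increments. Metric and data are invented.
from collections import Counter, defaultdict

raw_events = []                 # immutable master dataset (append-only)
batch_view = Counter()          # e.g. transfers per site, up to last batch run
speed_view = defaultdict(int)   # increments since the last batch run

def ingest(event):
    raw_events.append(event)                 # keep raw data for reprocessing
    speed_view[event["site"]] += 1           # low-latency path

def run_batch():
    """Recompute the batch view from scratch and reset the speed layer."""
    global batch_view
    batch_view = Counter(e["site"] for e in raw_events)
    speed_view.clear()

def query(site):
    return batch_view[site] + speed_view[site]

for s in ["CERN", "JINR", "CERN"]:
    ingest({"site": s})
run_batch()                      # batch layer catches up
ingest({"site": "CERN"})         # arrives after the batch run
print(query("CERN"))             # -> 3 (2 from the batch view + 1 recent)
```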

Books on the topic "Distributed computing infrastructure":

1

Bubak, Marian, Jacek Kitowski, and Kazimierz Wiatr, eds. eScience on Distributed Computing Infrastructure. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0.

2

Tianruo, Yang Laurence, and Guo Minyi, eds. High performance computing: Paradigm and infrastructure. Hoboken, N.J: J. Wiley, 2005.

3

1974-, Wang Lizhe, Jie Wei, and Chen Jinjun, eds. Grid computing: Infrastructure, service, and applications. Boca Raton: CRC Press, 2009.

4

Sharda, Ramesh. Operations Research and Cyber-Infrastructure. Boston, MA: Springer US, 2009.

5

Kacsuk, Péter, ed. Science Gateways for Distributed Computing Infrastructures. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11268-8.

6

Krause, Jordan. Windows Server 2012 R2 administrator cookbook: Over 80 hands-on recipes to effectively administer and manage your Windows Server 2012 R2 infrastructure in enterprise environments. Birmingham, UK: Packt Publishing, 2015.

7

Parashar, Manish. Advanced computational infrastructures for parallel and distributed adaptive applications. Hoboken, N.J: John Wiley & Sons, 2010.

8

1967-, Parashar Manish, and Li Xiaolin 1973-, eds. Advanced computational infrastructures for parallel and distributed adaptive applications. Hoboken, N.J: John Wiley & Sons, 2010.

9

Villari, Massimo, Ivona Braidic, and Francesco Tusa. Achieving federated and self-manageable cloud infrastructures: Theory and practice. Hershey, PA: Business Science Reference, 2012.

10

International Symposium on Grid Computing (2010 Taipei, Taiwan). Data driven e-Science: Use cases and successful applications of distributed computing infrastructures (ISGC 2010). Edited by Lin Simon C and Yen Eric. New York: Springer, 2011.


Book chapters on the topic "Distributed computing infrastructure":

1

Jiang, Weirong, and Viktor K. Prasanna. "Energy-Efficient Internet Infrastructure." In Energy-Efficient Distributed Computing Systems, 567–92. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118342015.ch20.

2

Rycerz, Katarzyna, Marian Bubak, Eryk Ciepiela, Maciej Pawlik, Olivier Hoenen, Daniel Harężlak, Bartosz Wilk, Tomasz Gubała, Jan Meizner, and David Coster. "Enabling Multiscale Fusion Simulations on Distributed Computing Resources." In eScience on Distributed Computing Infrastructure, 195–210. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0_14.

3

Banerjee, Prith. "An Intelligent IT Infrastructure for the Future." In Distributed Computing and Networking, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11322-2_1.

4

Tang, Jia, and Minjie Zhang. "An Agent-Based Grid Computing Infrastructure." In Parallel and Distributed Processing and Applications, 630–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11576235_64.

5

Dziekoński, Paweł, Franciszek Klajn, Łukasz Flis, Patryk Lasoń, Marek Magryś, Andrzej Oziębło, Radosław Rowicki, et al. "National Distributed High Performance Computing Infrastructure for PL-Grid Users." In eScience on Distributed Computing Infrastructure, 16–33. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0_2.

6

Acharya, Satyajit, Chris George, and Hrushikesha Mohanty. "Specifying a Mobile Computing Infrastructure and Services." In Distributed Computing and Internet Technology, 244–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30555-2_29.

7

Nawrocki, Krzysztof, Andrzej Olszewski, Adam Padée, Anna Padée, Mariusz Witek, Piotr Wójcik, and Miłosz Zdybał. "Domain-Oriented Services for High Energy Physics in Polish Computing Centers." In eScience on Distributed Computing Infrastructure, 226–37. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0_16.

8

Kitowski, Jacek, Kazimierz Wiatr, Łukasz Dutka, Tomasz Szepieniec, Mariusz Sterzel, and Robert Pająk. "Domain-Specific Services in Polish e-Infrastructure." In eScience on Distributed Computing Infrastructure, 1–15. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0_1.

9

Kocot, Joanna, Tomasz Szepieniec, Piotr Wójcik, Michał Trzeciak, Maciej Golik, Tomasz Grabarczyk, Hubert Siejkowski, and Mariusz Sterzel. "A Framework for Domain-Specific Science Gateways." In eScience on Distributed Computing Infrastructure, 130–46. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0_10.

10

Kurowski, Krzysztof, Piotr Dziubecki, Piotr Grabowski, Michał Krysiński, Tomasz Piontek, and Dawid Szejnfeld. "Easy Development and Integration of Science Gateways with Vine Toolkit." In eScience on Distributed Computing Infrastructure, 147–63. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10894-0_11.


Conference papers on the topic "Distributed computing infrastructure":

1

Zasada, Stefan J., Mariusz Mamonski, Derek Groen, Joris Borgdorff, Ilya Saverchenko, Tomasz Piontek, Krzysztof Kurowski, and Peter V. Coveney. "Distributed Infrastructure for Multiscale Computing." In 2012 IEEE/ACM 16th International Symposium on Distributed Simulation and Real Time Applications (DS-RT). IEEE, 2012. http://dx.doi.org/10.1109/ds-rt.2012.17.

2

Ghijsen, Mattijs, Jeroen van der Ham, Paola Grosso, and Cees de Laat. "Towards an Infrastructure Description Language for Modeling Computing Infrastructures." In 2012 IEEE 10th International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, 2012. http://dx.doi.org/10.1109/ispa.2012.35.

3

Callegaro, Davide, Sabur Baidya, and Marco Levorato. "Dynamic Distributed Computing for Infrastructure-Assisted Autonomous UAVs." In ICC 2020 - 2020 IEEE International Conference on Communications (ICC). IEEE, 2020. http://dx.doi.org/10.1109/icc40277.2020.9148986.

4

Zhou, Larry, Jordan Lambert, Yanyan Zheng, Zheng Li, Alan Yen, Sandra Liu, Vivian Ye, et al. "Distributed Scalable Edge Computing Infrastructure for Open Metaverse." In 2023 IEEE Cloud Summit. IEEE, 2023. http://dx.doi.org/10.1109/cloudsummit57601.2023.00007.

5

Ashrafi, Tasnia H., Sayed E. Arefin, Kowshik D. J. Das, Md A. Hossain, and Amitabha Chakrabarty. "FOG based distributed IoT infrastructure." In ICC '17: Second International Conference on Internet of Things, Data and Cloud Computing. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3018896.3036365.

6

Kutovskiy, N., I. Pelevanyuk, and D. Zaborov. "USING DISTRIBUTED CLOUDS FOR SCIENTIFIC COMPUTING." In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education". Crossref, 2021. http://dx.doi.org/10.54546/mlit.2021.78.51.001.

Abstract:
Nowadays, cloud resources are the most flexible tool to provide access to infrastructures for establishing services and applications. However, they are also a valuable resource for scientific computing. At the Joint Institute for Nuclear Research, the computing cloud was integrated with the DIRAC system. It allowed for the submission of scientific computing jobs directly to the cloud. Thanks to the experience, the cloud resources of several organizations from the JINR Member States were integrated in the same way. It increased the total amount of cloud resources accessible in a uniform way through DIRAC, in the scope of the so-called Distributed Information and Computing Environment (DICE). Folding@Home tasks related to the SARS-CoV-2 virus were submitted to all available cloud resources. In addition to useful scientific results, such experience was also helpful in obtaining information about the performance, limitations, strengths, and weaknesses of the combined system. Based on the gained experience, the DICE infrastructure was tuned to successfully perform real user jobs related to Monte-Carlo simulation for the Baikal-GVD experiment.
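For context, a minimal sketch of how a payload might be described and submitted through the DIRAC client API is shown below. It assumes an installed and configured DIRAC client with valid credentials; the site name, executable and arguments are placeholders rather than the actual Baikal-GVD production setup.

```python
# Minimal DIRAC job submission sketch. Assumes a configured DIRAC client
# environment and a valid proxy; site name and payload are placeholders.
from DIRAC.Core.Base.Script import Script
Script.parseCommandLine(ignoreErrors=True)   # initialise the DIRAC runtime

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("mc-simulation-example")
job.setExecutable("run_simulation.sh", arguments="--events 1000")
job.setDestination("DIRAC.SomeCloudSite.org")  # hypothetical cloud site name

result = Dirac().submitJob(job)
if result["OK"]:
    print("Submitted job with ID", result["Value"])
else:
    print("Submission failed:", result["Message"])
```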
7

Milanovic, N., and V. Mornar. "A software infrastructure for distributed computing based on DCOM." In Proceedings 23rd International Conference Information Technology Interfaces. ITI 2001. IEEE, 2001. http://dx.doi.org/10.1109/iti.2001.937998.

8

"Session 8: infrastructure." In Proceedings. 13th IEEE International Symposium on High performance Distributed Computing, 2004. IEEE, 2004. http://dx.doi.org/10.1109/hpdc.2004.1323542.

9

Fagan, Michael, Mohammad Maifi Hasan Khan, and Bing Wang. "Leveraging Cloud Infrastructure for Troubleshooting Edge Computing Systems." In 2012 IEEE 18th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2012. http://dx.doi.org/10.1109/icpads.2012.67.

10

Sinnott, R. O., G. Stewart, A. Asenov, C. Millar, D. Reid, G. Roy, S. Roy, C. Davenhall, B. Harbulot, and M. Jones. "e-Infrastructure Support for nanoCMOS Device and Circuit Simulations." In Parallel and Distributed Computing and Networks. Calgary,AB,Canada: ACTAPRESS, 2010. http://dx.doi.org/10.2316/p.2010.676-048.


Organizational reports on the topic "Distributed computing infrastructure":

1

Kang, Myong H., Judith N. Froscher, and Brian J. Eppinger. Towards an Infrastructure for MLS Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, January 1998. http://dx.doi.org/10.21236/ada465483.

