Dissertations / Theses on the topic 'GRID COMPUTING'


Consult the top 50 dissertations / theses for your research on the topic 'GRID COMPUTING.'


Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Petersen, Karsten. "Grid Computing - Eine Einführung." Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200301292.

2

Morel, Matthieu. "Components for grid computing." Nice, 2006. http://www.theses.fr/2006NICE4086.

Abstract:
This thesis aims at facilitating the design and deployment of distributed applications on Grids, using a component-based programming approach. The specific issues in Grid computing addressed by this thesis are complexity of design, deployment, flexibility and high performance. We propose and justify a component model and an implementation framework. Our component model is grounded in the Fractal component model and the active object model. It takes advantage of, first, the hierarchical structure, well-defined semantics and extensibility of the Fractal model and, second, the identification of components as configurable activities. We define a deployment model based on the concept of virtual architectures, and we propose primitives for collective communications through the specification of collective interfaces. Collective interfaces handle data distribution, parallelism and synchronization of invocations; they establish a basis for defining complex interactions between multiple components. We implemented this model on top of the ProActive Grid middleware, thereby benefiting from ProActive's underlying features. We demonstrate the scalability and efficiency of the framework by developing and deploying a compute- and communication-intensive application on several hundred nodes, and we take advantage of the collective interfaces to develop and benchmark a component-based SPMD application.
3

Avila George, Himer. "Constructing Covering Arrays using Parallel Computing and Grid Computing." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17027.

Abstract:
A good strategy to test a software component involves the generation of the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test-suites of economical size, which provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to perform functional tests of software components. The use of covering arrays allows all the interactions of a given size among the input parameters to be tested using the minimum number of test cases. For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time expended on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out in developing effective methods for building them. There are several reported methods for constructing these combinatorial models, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods. Metaheuristic methods, particularly simulated annealing, have provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions. In this thesis, we propose the development of an improved simulated annealing algorithm.
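The general construction technique named in this abstract is easy to sketch. The following Python toy (an illustration of plain simulated annealing for covering arrays, not the improved algorithm the thesis proposes; all names are ours) mutates random cells of a candidate array and accepts worse solutions with a temperature-dependent probability until every t-way interaction is covered:

```python
import itertools
import math
import random

def uncovered(array, t, v):
    """Count the t-way interactions that the array (rows = tests) misses."""
    k = len(array[0])
    missing = 0
    for cols in itertools.combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        missing += v ** t - len(seen)
    return missing

def anneal_covering_array(N, k, v, t, steps=50_000, temp=1.0, cooling=0.999):
    """Search for an N x k covering array over v symbols with strength t."""
    array = [[random.randrange(v) for _ in range(k)] for _ in range(N)]
    cost = uncovered(array, t, v)
    for _ in range(steps):
        if cost == 0:
            break                        # every t-way interaction is covered
        r, c = random.randrange(N), random.randrange(k)
        old = array[r][c]
        array[r][c] = random.randrange(v)
        new_cost = uncovered(array, t, v)
        # always accept improvements; accept uphill moves with Boltzmann probability
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            array[r][c] = old            # undo the rejected move
        temp *= cooling
    return array, cost

# 6 tests are enough for a strength-2 covering array over 4 binary parameters
tests, missing = anneal_covering_array(N=6, k=4, v=2, t=2)
print(missing, tests)
```

A serious implementation would recompute only the interactions touched by the mutated cell instead of rescanning the whole array, which is where most of the running time of this naive version goes.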
Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027
4

Copaja Cornejo, Richard Nivaldo. "Grid computing para propósitos científicos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2007. https://hdl.handle.net/20.500.12672/14091.

Abstract:
This research seeks to highlight the benefits of Grid Computing in the development of scientific research projects. The evolution of high-speed communication networks has created an ideal setting for the development of this technology, which will provide functionality analogous to that of electric power grids: a single point of access to a set of geographically distributed resources such as supercomputers, clusters, storage systems, information sources, instruments and personnel. Current Grid Computing technology offers the minimum functionality necessary to share and simultaneously exploit, in a transparent and secure way, the resources belonging to different organizations, while respecting their own security and resource management policies and procedures. The proposal constitutes a viable solution for the dissemination and creation of a university Grid, at the level of metropolitan Lima in a first stage and nationwide in the future. In this way it will contribute to raising the level of Peruvian scientific research.
Professional sufficiency report
5

Wang, Lizhe. "Virtual environments for Grid computing." Karlsruhe : Universitätsverlag, 2008. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000009892.

6

Constantinescu-Fuløp, Zoran. "A Desktop Grid Computing Approach for Scientific Computing and Visualization." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2191.

Abstract:

Scientific Computing is the collection of tools, techniques, and theories required to solve mathematical models of problems from science and engineering on a computer, and its main goal is to gain insight into such problems. Generally, it is difficult to understand or communicate information from the complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments, etc.). Scientific Visualization is therefore needed to provide the techniques, algorithms, and software tools required to extract and appropriately display important information from numerical data.

Usually, complex computational and visualization algorithms require large amounts of computational power. The computing power of a single desktop computer is insufficient for running such complex algorithms, and, traditionally, large parallel supercomputers or dedicated clusters were used for this job. However, very high initial investments and maintenance costs limit the availability of such systems. A more convenient solution, which is becoming more and more popular, is based on the use of non-dedicated desktop PCs in a Desktop Grid Computing environment. This is done by harnessing the idle CPU cycles, storage space and other resources of networked computers to work together on a particularly computationally intensive application. The increasing power and communication bandwidth of desktop computers makes this solution possible.

In a desktop grid system, the execution of an application is orchestrated by a central scheduler node, which distributes the tasks amongst the worker nodes and awaits the workers' results. An application only finishes when all tasks have been completed. The attractiveness of exploiting desktop grids is further reinforced by the fact that costs are highly distributed: every volunteer bears the cost of her own resources (hardware, power and internet connection), while the benefiting entity provides the management infrastructure, namely network bandwidth, servers and management services, receiving in exchange a massive and otherwise unaffordable computing power. The usefulness of desktop grid computing is not limited to major high-throughput public computing projects. Many institutions, ranging from academia to enterprises, hold vast numbers of desktop machines and could benefit from exploiting the idle cycles of their local machines.

In the work presented in this thesis, the central idea has been to provide a desktop grid computing framework and to prove its viability by testing it in some Scientific Computing and Visualization experiments. We present QADPZ, an open source system for desktop grid computing that has been developed to meet the needs presented above. QADPZ enables users from a local network or the Internet to share their resources. It is a multi-platform, heterogeneous system in which different computing resources from inside an organization can be used. It can also be used for volunteer computing, where the communication infrastructure is the Internet. QADPZ natively supports the following operating systems: Linux, Windows, MacOS and Unix variants. The reason for natively supporting multiple operating systems, and not only one (Unix or Windows, as other systems do), is that in real life this kind of limitation often greatly restricts the usability of desktop grid computing.

QADPZ provides a flexible object-oriented software framework that makes it easy for programmers to write various applications, and for researchers to address issues such as adaptive parallelism, fault tolerance, and scalability. The framework also supports the execution of legacy applications, which for various reasons could not be rewritten, and that makes it suitable for other domains such as business. It supports applications written in low-level programming languages such as C/C++ as well as in high-level languages (e.g. Lisp, Python, and Java), and provides the necessary mechanisms to use such applications in a computation. Consequently, users with various backgrounds can benefit from using QADPZ. The flexible object-oriented structure and the modularity allow easy improvements and further extensions to other programming languages.

We have developed a general-purpose runtime and an API to support new kinds of high performance computing applications, and therefore to benefit from the advantages offered by desktop grid computing. This API directly supports the C/C++ programming language. We have shown how distributed computing extends beyond the master-worker paradigm (typical for such systems) and provided QADPZ with an extended API that additionally supports lightweight tasks and parallel computing (using the message passing paradigm, MPI). This extends the range of supported applications to already existing MPI-based applications, e.g. parallel numerical solvers used in computational science, or parallel visualization algorithms.

Another restriction of existing systems, especially middleware-based ones, is that each resource provider needs to install a runtime module with administrator privileges. This poses some issues regarding data integrity and accessibility on providers' computers. The QADPZ system tries to overcome this by allowing the middleware module to run as a non-privileged user, even one with restricted access to the local system.

QADPZ also provides low-level optimizations, such as on-the-fly compression and encryption for communication. The user can choose from different algorithms, depending on the application, reducing the communication overhead imposed by large data transfers while keeping the data private. The system goes further by providing an experimental, adaptive compression algorithm, which can transparently choose between different algorithms to suit the application. QADPZ supports two different protocols (UDP and TCP/IP) in order to improve the efficiency of communication.

Free source code allows flexible installation and modification based on the particular needs of research projects and institutions. In addition to being a very powerful tool for computationally intensive research, its open-source nature makes QADPZ a flexible educational platform for numerous small student projects in the areas of operating systems, distributed systems, mobile agents, parallel algorithms, etc. Open source software is also a natural choice for modern research, because it effectively encourages integration, cooperation and the development of new ideas.

This thesis also proposes an improved conceptual model (based on the master-worker paradigm), which makes contributions in several directions: pull vs. push work-units, pipelining of work-units, sending more work-units at a time, an adaptive number of workers, an adaptive time-out interval for work-units, and multithreading. We have also demonstrated, by performing specific experiments, that the use of desktop grids should not be limited to master-worker applications, but can serve more fine-grained parallel Scientific Computing and Visualization applications. This thesis makes supplementary contributions: a hierarchical taxonomy of the main existing desktop grids, and an adaptive compression algorithm for remote visualization. QADPZ has also pioneered an autonomic computing approach for desktop grids and presents specific self-management features: self-knowledge, self-configuration, self-optimization and self-healing. It is worth mentioning that QADPZ has to date over a thousand users who have downloaded it (since July 2001, when it was uploaded to sourceforge.net), and many of them use it for their daily tasks (see the appendix). Many of the results have been published or are in the course of publication, as can be seen from the references.
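As a reading aid, the pull-model master-worker scheme described above can be reduced to a few lines. This is a minimal self-contained Python sketch of that scheduling idea (it is not QADPZ code, and the work-unit is a stand-in): idle workers pull work-units from a shared queue, and the job completes only when every unit has reported back.

```python
import multiprocessing as mp

def work_unit(seed):
    """Stand-in for a compute-intensive task (e.g. one simulation slice)."""
    return sum(i * i for i in range(seed))

def worker(tasks, results):
    """Pull model: each idle worker fetches its next work-unit itself."""
    for seed in iter(tasks.get, None):          # None is the shutdown signal
        results.put((seed, work_unit(seed)))

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    pool = [mp.Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for p in pool:
        p.start()
    units = list(range(100_000, 100_008))
    for seed in units:
        tasks.put(seed)                         # scheduler distributes work-units
    for _ in pool:
        tasks.put(None)                         # one shutdown signal per worker
    done = dict(results.get() for _ in units)   # job finishes when all units return
    for p in pool:
        p.join()
    print(len(done), "work-units completed")
```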

7

Burgess, David A. "Parallel computing for unstructured mesh algorithms." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318758.

8

Koehler, Stephan. "Video Streams in a Computing Grid." Thesis, KTH, School of Information and Communication Technology (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24271.

Abstract:

The growth of online video services such as YouTube has enabled a new broadcasting medium for video. Similarly, consumer television is moving from analog to digital distribution of video content. Being able to manipulate the video stream by integrating a video or image overlay while streaming could enable a personalized video stream for each viewer. This master thesis explores the digital video domain to understand how streaming video can be efficiently modified, and designs and implements a prototype system for distributed video modification and streaming.

This thesis starts by examining standards and protocols related to video coding, formats and network distribution. To support multiple concurrent video streams to users, a distributed data and compute grid is used to create a scalable system for video streaming. Several (commercial) products are examined, and GigaSpaces is found to provide the most suitable features for implementing the prototype. Furthermore, third-party libraries such as libavcodec by FFMPEG and JBoss Netty are selected for video coding and network streaming, respectively. The prototype design is then formulated, including the design choices, the functionality in terms of user stories, the components that make up the system and the flow of events in the system. Finally, the implementation is described, followed by an evaluation of the fault tolerance, throughput, scalability and configuration. The evaluation shows that the prototype is fault tolerant and that its throughput scales both vertically and horizontally.

Intended audience

This thesis focuses on topics in the area of general computer science and network technology. It is therefore assumed that the reader has knowledge of basic concepts and techniques in these areas. More specifically, this report focuses on topics related to digital video and distributed computer systems. Knowledge in these areas is helpful but not required.

9

Polze, Andreas, and Bettina Schnor. "Grid-Computing : [Seminar im Sommersemester 2003]." Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2009/3316/.

Abstract:
1. Applications for widely distributed computing (Dennis Klemann, Lars Schmidt-Bielicke, Philipp Seuring)
2. The Globus Toolkit (Dietmar Bremser, Alexis Krepp, Tobias Rausch)
3. Open Grid Services Architecture (Lars Trieloff)
4. Condor, Condor-G, Classad (Stefan Henze, Kai Köhne)
5. The Cactus Framework (Thomas Hille, Martin Karlsch)
6. A high-performance scheduler with Maui/PBS (Ole Weidner, Jörg Schummer, Benedikt Meuthrath)
7. Bandwidth monitoring with NWS (Alexander Ritter, Gregor Höfert)
8. The Paradyn Parallel Performance Measurement Tool (Jens Ulferts, Christian Liesegang)
9. Grid applications in practice (Steffen Bach, Michael Blume, Helge Issel)
10

Cai, Wei. "Reconfigurable resource management in grid computing." Thesis, Lancaster University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507276.

11

Ong, Sze Hwei 1979. "Grid computing : business and policy implications." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/30035.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2003.
Includes bibliographical references (leaves 84-86).
The Grid is a distributed computing infrastructure that facilitates the exchange of expertise and resources. It is somewhat analogous to the electric power grid in that it can potentially provide a universal source of IT resources that can have a huge impact on human capabilities and on society as a whole. Currently the Grid is being deployed (in limited ways) in some research and academic institutions. As Grid computing technologies mature further, the commercial sector can also benefit. With Grid technologies enabling utility computing, enterprises will be able to access IT resources on demand in a utility-like way. This thesis gives a brief introduction to Grids and looks back at the history of power grids for lessons learned. It suggests that the Grid and the power grid are both infrastructures, and that reliability, standardization, universal access and affordability are necessary to ensure the success of any infrastructure. Once the Grid is successful, it can open up new opportunities in the field of utility computing and affect IT provision in the commercial sector. The new utility computing ecosystem would consist of five major players: the Grid resource supplier, the Grid infrastructure supplier, the utility service provider, the re-seller and the end user. Further industry analysis reveals that there are new roles for current players in the traditional IT provision industry and opportunities for new entrants in this new ecosystem. The thesis attempts to identify the characteristics of each of the five major players to help the IT industry better understand the requirements of these new roles. Current players in the IT provision industry would have to decide which of the above roles to play in this new utility computing ecosystem and re-define their market strategies accordingly. New entrants to the field would likely be players in the telecommunication sector who want a share of this growing pie and whose existing relationships with bandwidth subscribers can be leveraged. This thesis concludes with recommendations on several policy issues: Grid standardization for inter-operability, decentralized Grid governance to encourage optimal resource sharing, and mechanisms for transcending the cultural/organizational barriers inhibiting the commercial adoption of Grid computing.
12

Alfawair, Mai. "A framework for evolving grid computing systems." Thesis, De Montfort University, 2009. http://hdl.handle.net/2086/3423.

Abstract:
Grid computing was born in the 1990s, when researchers were looking for a way to share expensive computing resources and experimental equipment. Grid computing is becoming increasingly popular because it promotes the sharing of distributed resources that may be heterogeneous in nature, and it enables scientists and engineering professionals to solve large-scale computing problems. In reality, there are already huge numbers of grid computing facilities distributed around the world, each one having been created to serve a particular group of scientists, such as weather forecasters, or a group of users, such as stock markets. However, the need to extend the functionality of current grid systems lends itself to the consideration of grid evolution. This allows the combination of many disjoint grids into a single powerful grid that can operate as one vast computational resource, and also allows grid environments to be flexible, to change and to evolve. The rationale for grid evolution is the current rapid and increasing advance of both software and hardware. Evolution means adding or removing capabilities. This research defines grid evolution as adding new functions and/or equipment and removing unusable resources that affect the performance of some nodes. This thesis produces a new technique for grid evolution, allowing it to be seamless and to operate at run time. Within grid computing, evolution is an integration of software and hardware and can be of two distinct types: internal evolution, which occurs inside the grid boundary by migrating special resources such as application software from node to node, and external evolution, which occurs between grids. This thesis develops a framework for grid evolution that insulates users from the complexities of grids. This framework has at its core a resource broker together with a grid monitor, to cope with internal and external evolution, advance reservation, fault tolerance, monitoring of the grid environment, increased resource utilisation and high availability of grid resources. The starting point for the present framework is a job arriving at the grid whose requirements do not exist on the required node, which triggers grid evolution. If the grid has all the requirements scattered across its nodes, internal evolution ensues, enabling the grid to migrate the required resources to the required node in order to satisfy the job's requirements; if the grid does not have these resources, external evolution enables the grid either to collect them from other grids (permanent evolution) or to send the job to other grids for execution (just-in-time evolution), as sketched below. Finally, a simulation tool called EVOSim has been designed, developed and tested. It is written in Oracle 10g and has been used for the creation of four grids, each of which has a different setup, including different nodes, application software, data and policies. Experiments were conducted by submitting jobs to the grid at run time and then comparing the results and analysing the performance of those grids that use the evolution approach against those that do not. The results of these experiments demonstrate that these features significantly improve the performance of grid environments and provide excellent scheduling results, with a decreasing number of rejected jobs.
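The triggering rule just described can be condensed into a small decision function. The Python sketch below is purely illustrative; the set-based resource model and every name in it are our assumptions, not the thesis's framework:

```python
def schedule(job_reqs, node_res, grid_nodes, other_grids):
    """job_reqs: capabilities the job needs; node_res: those on the target node;
    grid_nodes: resource sets of the other nodes in this grid;
    other_grids: name -> pooled resource set of each foreign grid."""
    missing = job_reqs - node_res
    if not missing:
        return "run on the node as-is"
    if missing <= set().union(*grid_nodes):
        # internal evolution: migrate the scattered resources to the node
        return f"internal evolution: migrate {sorted(missing)}"
    for name, pool in other_grids.items():
        if missing <= pool:
            # external (permanent) evolution: acquire resources from a peer grid
            return f"external evolution: acquire {sorted(missing)} from {name}"
    # external (just-in-time) evolution: execute the job elsewhere
    return "external evolution: delegate the job to another grid"

print(schedule({"matlab", "gpu"}, {"gpu"},
               [{"matlab"}, {"fortran"}], {"grid-B": {"matlab", "java"}}))
```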
13

Stewart, Sean. "Deploying a CMS Tier-3 Computing Cluster with Grid-enabled Computing Infrastructure." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2564.

Abstract:
The Large Hadron Collider (LHC), whose experiments include the Compact Muon Solenoid (CMS), produces over 30 million gigabytes of data annually, and implements a distributed computing architecture—a tiered hierarchy, from Tier-0 through Tier-3—in order to process and store all of this data. Out of all of the computing tiers, Tier-3 clusters allow scientists the most freedom and flexibility to perform their analyses of LHC data. Tier-3 clusters also provide local services such as login and storage services, provide a means to locally host and analyze LHC data, and allow both remote and local users to submit grid-based jobs. Using the Rocks cluster distribution software version 6.1.1, along with the Open Science Grid (OSG) roll version 3.2.35, a grid-enabled CMS Tier-3 computing cluster was deployed at Florida International University’s Modesto A. Maidique campus. Validation metric results from Ganglia, MyOSG, and CMS Dashboard verified a successful deployment.
14

Belli, Stefano. "Tecniche di resource discovery nel grid computing." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9528/.

Abstract:
This thesis analyses the main Resource Discovery techniques in use in Grid Computing systems, weighing the main advantages and disadvantages of each solution. Particular attention is paid to agent-based Resource Discovery, which is proposed as an architecture capable of definitively solving the classic problems of these networks. In addition, each technique presented is accompanied by a practical implementation; among these are MDS, Chord and the Kang implementation.
15

Kaya, Ozgur. "Efficient Scheduling In Distributed Computing On Grid." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607928/index.pdf.

Abstract:
Today, many geographically distributed computing resources are idle much of the time. The aim of grid computing is to collect these resources into a single system, helping to solve problems that are too complex for a single PC. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance in a grid computing environment. Due to the heterogeneity and highly dynamic nature of the grid, developing scheduling algorithms for grid computing involves some challenges. In this work, we concentrate on efficient scheduling of distributed tasks on the grid and propose a novel scheduling heuristic for bag-of-tasks applications. The proposed algorithm primarily makes use of history-based runtime estimation. The history stores information about applications whose runtimes and other specific properties were recorded during previous executions, and scheduling decisions are made according to the similarity between applications. Defining similarity is an important aspect of this approach, apart from finding the best resource allocation. The aim of this scheduling algorithm (HISA, History Injected Scheduling Algorithm) is to define and find the similarity, and to assign the job to the most suitable resource by making use of it, as illustrated below. In our evaluation, we use a grid simulation tool called GridSim. A number of intensive experiments with various simulation settings have been conducted. Based on the experimental results, the effectiveness of the HISA scheduling heuristic is studied and compared to the other scheduling algorithms embedded in GridSim. The results show that history injection improves the performance of future job submissions on a grid.
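To make the history-injection idea concrete, here is a hypothetical Python sketch of the general technique (not HISA itself): predict a new job's runtime as the mean over its k most similar recorded runs, using a toy feature-match similarity.

```python
def similarity(a, b):
    """Toy similarity: fraction of matching (feature, value) pairs."""
    keys = set(a) | set(b)
    return sum(a.get(f) == b.get(f) for f in keys) / len(keys)

def estimate_runtime(job, history, k=3):
    """Mean runtime of the k past executions most similar to this job."""
    ranked = sorted(history, key=lambda rec: similarity(job, rec["features"]),
                    reverse=True)
    nearest = ranked[:k] or [{"runtime": 0.0}]   # no history -> no estimate
    return sum(rec["runtime"] for rec in nearest) / len(nearest)

history = [
    {"features": {"app": "blast", "input_mb": 10}, "runtime": 120.0},
    {"features": {"app": "blast", "input_mb": 12}, "runtime": 130.0},
    {"features": {"app": "render", "input_mb": 10}, "runtime": 400.0},
]
job = {"app": "blast", "input_mb": 11}
print(estimate_runtime(job, history, k=2))   # ~125s, from the two blast runs
```

A scheduler built on such an estimator would then rank candidate resources by predicted completion time; the real algorithm's notion of similarity is, of course, richer than this feature-match toy.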
16

Popuri, Vamsi. "Intrusion detection for grid and cloud computing." Thesis, Linköpings universitet, Institutionen för systemteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70364.

Abstract:
Providing security has become more cumbersome because of all the malicious possibilities in data transmission, so we need a system that makes data transmission secure beyond encryption, passwords and digital signatures. The system discussed in this thesis is an Intrusion Detection System, a platform that provides security in distributed systems. This thesis also attempts to explain the drawbacks of conventional system designs, which result in low performance due to network congestion and low data efficiency. We consider cloud and grid computing systems to improve the performance of the system. Cloud systems are characterized by a main server and other connected servers which provide certain services. Cloud systems, especially public cloud systems, are prone to intrusions, and care must be taken to secure them. The emphasis in this thesis is on making cloud systems secure using an intrusion detection system. Intrusion detection can be performed using behaviour-based techniques, knowledge-based techniques, or both, as the sketch below contrasts. We use UML as a tool to design the system, which helps in reducing the design complexity.
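The two detection families mentioned above can be contrasted in a few lines. The following toy Python fragment (a hedged illustration, not the system designed in the thesis; all names and thresholds are invented) runs a knowledge-based signature check first and falls back to a behaviour-based anomaly test:

```python
def classify(event, signatures, baseline_mean, baseline_std, z_max=3.0):
    """Knowledge-based check first, then a simple behavioural z-score test."""
    if any(sig in event["payload"] for sig in signatures):
        return "alert: known attack signature"
    z = abs(event["req_per_s"] - baseline_mean) / baseline_std
    if z > z_max:
        return "alert: anomalous behaviour (z=%.1f)" % z
    return "normal"

signatures = {"' OR 1=1 --", "../../etc/passwd"}
print(classify({"payload": "GET /index.html", "req_per_s": 900.0},
               signatures, baseline_mean=50.0, baseline_std=20.0))
```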
17

Wang, Lizhe [Verfasser]. "Virtual environments for grid computing / Lizhe Wang." Karlsruhe : KIT Scientific Publishing, 2009. http://www.ksp.kit.edu.

18

Smith, Andrew Cameron. "LHCb data management on the computing grid." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3018.

Abstract:
The LHCb detector is one of the four experiments being built to harness the proton-proton collisions provided by the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN). The data rate expected when the LHC experiments are fully operational eclipses that of any previous scientific experiment and has motivated the adoption of a grid computing paradigm to store and process the data. Managing petabytes of data in a distributed environment provides a rich set of challenges related to scalability, reliability and performance. This thesis presents the data management requirements for executing the workload of the LHCb collaboration. We present the systems designed to support all aspects of grid data management for LHCb, from data transfer to data integrity and efficient data access. The distributed computing environment is inherently unstable, and much focus has been placed on providing systems that are robust and resilient to observed failures.
19

Cao, Junwei. "Agent-based resource management for grid computing." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/4172/.

Abstract:
A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capability. An ideal grid environment should provide access to the available resources in a seamless manner. Resource management is an important infrastructural component of a grid computing environment. The overall aim of resource management is to efficiently schedule applications that need to utilise the available resources in the grid environment. Such goals within the high performance community rely on accurate performance prediction capabilities. An existing toolkit, known as PACE (Performance Analysis and Characterisation Environment), is used to provide quantitative data concerning the performance of sophisticated applications running on high performance resources. In this thesis an ASCI (Accelerated Strategic Computing Initiative) kernel application, Sweep3D, is used to illustrate the PACE performance prediction capabilities. The validation results show that a reasonable accuracy can be obtained, that cross-platform comparisons can be easily undertaken, and that the process benefits from a rapid evaluation time. While extremely well suited to managing a locally distributed multi-computer, the PACE functions do not map well onto a wide-area environment, where heterogeneity, multiple administrative domains, and communication irregularities dramatically complicate the job of resource management. Scalability and adaptability are two key challenges that must be addressed. In this thesis, an A4 (Agile Architecture and Autonomous Agents) methodology is introduced for the development of large-scale distributed software systems with highly dynamic behaviours. An agent is considered to be both a service provider and a service requestor. Agents are organised into a hierarchy with service advertisement and discovery capabilities. There are four main performance metrics for an A4 system: service discovery speed, agent system efficiency, workload balancing, and discovery success rate. Coupling the A4 methodology with the PACE functions results in an Agent-based Resource Management System (ARMS), which is implemented for grid computing. The PACE functions supply accurate performance information (e.g. execution time) as input to a local resource scheduler on the fly. At a meta-level, agents advertise their service information and cooperate with each other to discover available resources for grid-enabled applications. A Performance Monitor and Advisor (PMA) is also developed in ARMS to optimise the performance of the agent behaviours. The PMA is capable of performance modelling and simulation of the agents in ARMS and can be used to improve overall system performance. The PMA can monitor agent behaviours in ARMS and reconfigure them with optimised strategies, which include the use of ACTs (Agent Capability Tables), limited service lifetime, limited scope for service advertisement and discovery, agent mobility and service distribution, etc. The main contribution of this work is that it provides a methodology and prototype implementation of a grid Resource Management System (RMS). The system includes a number of original features that cannot be found in existing research solutions.
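The hierarchical advertisement-and-discovery scheme lends itself to a compact sketch. The Python fragment below is a hypothetical illustration in the spirit of A4 (the Agent class and its fields are our assumptions, not the ARMS implementation): each agent consults its own capability table, then forwards the query to its children and its parent.

```python
class Agent:
    def __init__(self, name, services=()):
        self.name, self.services = name, set(services)   # ACT: what this agent offers
        self.parent, self.children = None, []

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

def discover(agent, capability, visited=None):
    """Walk the hierarchy (down to children, up to the parent) until some
    agent advertises the requested capability."""
    visited = visited or set()
    if agent is None or agent.name in visited:
        return None
    visited.add(agent.name)
    if capability in agent.services:
        return agent.name
    for neighbour in agent.children + [agent.parent]:
        hit = discover(neighbour, capability, visited)
        if hit:
            return hit
    return None

root = Agent("root")
site = root.add(Agent("site-A", services={"sweep3d"}))
leaf = site.add(Agent("node-1"))
print(discover(leaf, "sweep3d"))   # the query climbs from node-1 and finds site-A
```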
20

Killian, Rudi. "Dynamic superscalar grid for technical debt reduction." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2726.

Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018.
Organizations and private individuals look to technology advancements to increase their ability to make informed decisions, the motivation for technology adoption sprouting from an innate need for value generation. The technology currently heralded as the future platform for facilitating value addition is popularly termed cloud computing. The move to cloud computing, however, may conceivably accelerate the obsolescence cycle for currently retained Information Technology (IT) assets. The term obsolescence is applied here to mean the inability to repurpose or scale an information system resource for needed functionality. The incapacity to reconfigure, grow or shrink an IT asset, be it hardware or software, is a well-known narrative of technical debt. The notion of emergent technical debt realities is professed to be all but inevitable when informed by Moore's Law, as technology must inexorably advance. Of more imminent concern, however, is that the major accelerating factors of technical debt are deemed to be non-holistic conceptualization and design conventions. Should management of IT assets fail to address technical debt continually, the technology platform would predictably require replacement. The unrealized value, the functional and fiscal loss, and the resultant e-waste generated by technical debt are meaningfully unattractive. Historically, the cloud milieu evolved from the grid and clustering paradigms, which allowed information sourcing across multiple and often dispersed computing platforms. Parallel operations in distributed computing environments are inherently value-adding, as more effective use of resources and efficiency in data handling may be achieved. The predominant information processing solutions that implement parallel operations in distributed environments are abstracted constructs styled as High Performance Computing (HPC) or High Throughput Computing (HTC). Regardless of the underlying distributed environment, the archetypes of HPC and HTC differ radically in standard implementation. The foremost contrasting factors, parallelism granularity, failover and locality in data handling, have recently been the subject of greater academic discourse towards a possible fusion of the two technologies. In this research, we uncover probable platforms of future technical debt and subsequently recommend redeployment alternatives. The suggested alternatives take the form of scalable grids, which should provide alignment with the contemporary nature of individual information processing needs. The potential of grids as efficient and effective information sourcing solutions across geographically dispersed heterogeneous systems is envisioned to reduce or delay aspects of technical debt. As part of an experimental investigation to test the plausibility of these concepts, artefacts are designed to generically implement HPC and HTC. The design features exposed by the experimental artefacts could provide insights towards an amalgamation of HPC and HTC.
21

Cardenas, Baron Yonny. "Grid caching : specification and implementation of collaborative cache services for grid computing." Lyon, INSA, 2007. http://theses.insa-lyon.fr/publication/2007ISAL0107/these.pdf.

Abstract:
This thesis proposes an approach for the design and implementation of collaborative cache systems in grids that supports monitoring and controlling cache interactions. Our approach permits composing and evaluating high-level collaborative cache functions in a flexible way. Our proposal is based on a multilayer model that defines the main functions of a collaborative grid cache system. This model and the provided specification are used to build a flexible and generic software infrastructure for the operation and control of collaborative caches. This infrastructure is composed of a group of autonomous cache elements called Grid Cache Services (GCS). A GCS is a local administrator of temporary storage and data, implemented as a grid service that provides the cache capabilities defined by the model. We study a possible configuration for a group of GCS that constitutes a basic temporary data management system called the Temporal Storage Service (TSS).
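A collaborative cache element of the kind specified here can be caricatured briefly. This is a hedged Python sketch of the cooperation idea only; the class and method names are invented, not the GCS interface from the thesis:

```python
class GridCacheService:
    """Toy cache element: serve local hits, then ask collaborating peers."""
    def __init__(self, name, peers=None):
        self.name, self.store, self.peers = name, {}, peers or []

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        if key in self.store:
            return self.store[key], self.name        # local hit
        for peer in self.peers:                      # collaborative lookup
            if key in peer.store:
                self.store[key] = peer.store[key]    # replicate on the way back
                return self.store[key], peer.name
        return None, None                            # miss: fetch from the source

a, b = GridCacheService("gcs-A"), GridCacheService("gcs-B")
a.peers, b.peers = [b], [a]
b.put("dataset-42", "payload-bytes")
print(a.get("dataset-42"))   # ('payload-bytes', 'gcs-B'): served by the peer cache
```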
22

Cardenas Baron, Yonni, Lionel Brunie, and Jean-Marc Pierson. "Grid caching: specification and implementation of collaborative cache services for grid computing." Villeurbanne: Doc'INSA, 2008. http://docinsa.insa-lyon.fr/these/pont.php?id=cardenas_baron.

23

In, Jang-Uk. "Efficient scheduling techniques and systems for grid computing." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013834.

24

Cederström, Andreas. "On using Desktop Grid Computing in software industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5800.

Abstract:
Context. When dealing with large data sets and heavy calculations, the common solution is clusters, supercomputers or Grids of these two. However, there are ways of gaining large computational power by utilizing the unused cycles of regular home or office computers; these are referred to as Desktop Grids. Objectives. In this study we review the current field of open source Desktop Grid computing solutions capable of dealing with a heterogeneous set of clients and a dynamically sized Desktop Grid. We investigate current use, interest in use, and the priority of key attributes of Desktop Grids. Finally, we want to show how time-effective Desktop Grids are compared to execution on a single machine, and in the process show the effort needed to set up a Desktop Grid and start computing. The overall purpose of this study is to provide a path for industry organizations to take when taking the first step into Desktop Grid computing. Methods. We use a systematic review to collect information on existing open source Desktop Grid solutions. Studies are selected based on inclusion criteria and a quality assessment. A survey questionnaire is used to assess industry usage, interest and prioritization of attributes of Desktop Grids. We conduct an experiment to show execution speedup as well as setup effort. Results. We found ten open source Desktop Grids fulfilling our requirements. The survey shows that Desktop Grids are used to a very limited extent within industry, while a majority of the participants state that there is an interest in Desktop Grids. As a result of the experiment, we can say that we achieved very high speedup and that the effort needed to set up a Desktop Grid is about 40 hours for one person with no prior experience of the selected Desktop Grid system. Conclusions. We conclude that industry organizations have a possible need for Desktop Grids, but in order to be more successful, Desktop Grid developers must put more effort into areas such as automated testing and code compilation.
25

Andrade, Jorge. "Grid and High-Performance Computing for Applied Bioinformatics." Doctoral thesis, Stockholm : Bioteknologi, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4573.

26

Bsoul, Mohammad. "Economic scheduling in Grid computing using Tender models." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/3094.

Abstract:
Economic scheduling needs to be considered for Grid computing environments because it gives resource providers an incentive to supply their resources; moreover, it enforces efficient use of resources, because users have to pay for what they use. Tendering is a suitable model for Grid scheduling because users start the negotiations for finding suitable resources for executing their jobs. Furthermore, users specify their job requirements with their requests, and the resources therefore reply with bids that are based on the cost of taking on the job and the availability of their processors. In this thesis, a framework for economic Grid scheduling using tendering is proposed. The framework entities, such as users, brokers and resources, employ the tender/contract-net model to negotiate prices and deadlines; a toy bidding round is sketched below. The brokers' role is to act on behalf of users. During the negotiations, the entities aim to maximise their performance, which is measured by a number of metrics. In order to evaluate the entities' performance under different scenarios, a Java-based simulator called MICOSim, supporting event-driven simulation of economic Grid scheduling, is presented. MICOSim can perform a simulation of more than one hundred entities faster than real time. It is concluded from the evaluation that users who are interested in increasing the job success rate and paying less for executing their jobs have to consider received prices to select the most appropriate bids, while users who are interested in improving the average job satisfaction rate have to consider either the received completion time or both price and completion time to select the most suitable bids when the submission of jobs is static. The best broker strategy is the one that does not take into account meeting the job deadlines in the bids it sends to job owners. Finally, the resource strategy that considers the price when deciding whether to reply to a request is superior to the other resource strategies; the only exception is employing this strategy with a price that is too low. However, there is only a tiny difference between the performances of the different user strategies under dynamic submission. It is also concluded from the evaluation that broker strategies perform best when the revenue they target from the users is reasonable; thus, a broker's aim has to be to receive reasonable revenue (neither too low nor too high) from acting on behalf of users. It is observed from the results that a strategy's performance is influenced by the behaviour of other entities, such as the submission time of user jobs. Finally, it is observed that the characteristics of entities have an effect on the performance of strategies. For example, the two user strategies that consider the received completion time, and both price and completion time, to decide whether to accept a broker bid have similar performance, because of the existence of resources with various prices, from cheap to expensive, and of resources which do not care about the price paid for the execution; hence the price threshold does not have a large effect on performance.
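The tender interaction can be illustrated with a toy bidding round in Python. The sketch below is an assumption-laden simplification (field names and the pricing rule are invented, and real MICOSim strategies are richer): each resource bids a price and a finish time, and the user awards the job to the cheapest feasible bid.

```python
def run_tender(job, resources):
    """One tender/contract-net round: announce, collect bids, award."""
    bids = []
    for r in resources:
        finish = r["next_free"] + job["length"] / r["speed"]
        bids.append({"resource": r["name"],
                     "price": job["length"] * r["rate"],
                     "finish": finish})
    feasible = [b for b in bids if b["finish"] <= job["deadline"]]
    # price-first selection; a satisfaction-oriented user might sort by finish
    return min(feasible, key=lambda b: b["price"]) if feasible else None

job = {"length": 100.0, "deadline": 50.0}
resources = [
    {"name": "cheap-slow", "speed": 1.0, "rate": 0.5, "next_free": 0.0},
    {"name": "fast-dear",  "speed": 4.0, "rate": 2.0, "next_free": 10.0},
]
print(run_tender(job, resources))   # cheap-slow would finish too late, so fast-dear wins
```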
27

Paterson, Stuart Keble. "LHCb distributed data analysis on the computing grid." Thesis, University of Glasgow, 2006. http://theses.gla.ac.uk/1077/.

Abstract:
LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid, allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.
28

Mustafee, Navonil. "A grid computing framework for commercial simulation packages." Thesis, Brunel University, 2007. http://bura.brunel.ac.uk/handle/2438/4009.

Abstract:
An increased need for collaborative research among different organizations, together with continuing advances in communication technology and computer hardware, has facilitated the development of distributed systems that can provide users non-trivial access to geographically dispersed computing resources (processors, storage, applications, data, instruments, etc.) that are administered in multiple computer domains. The term grid computing, or grids, is popularly used to refer to such distributed systems. A broader definition of grid computing includes the use of computing resources within an organization for running organization-specific applications. This research is in the context of using grid computing within an enterprise to maximize the use of available hardware and software resources for processing enterprise applications. Large-scale scientific simulations have traditionally been the primary beneficiary of grid computing. The application of this technology to simulation in industry has, however, been negligible. This research investigates how grid technology can be effectively exploited by simulation practitioners using Windows-based commercially available simulation packages to model simulations in industry. These packages are commonly referred to as Commercial Off-The-Shelf (COTS) Simulation Packages (CSPs). The study identifies several higher-level grid services that could potentially be used to support the practice of simulation in industry. It proposes a grid computing framework to investigate these services in the context of CSP-based simulations. This framework is called the CSP-Grid Computing (CSP-GC) Framework. Each identified higher-level grid service in this framework is referred to as a CSP-specific service. A total of six case studies are presented to experimentally evaluate how grid computing technologies can be used together with unmodified simulation packages to support some of the CSP-specific services. The contribution of this thesis is the CSP-GC framework, which identifies how simulation practice in industry may benefit from the use of grid technology. A further contribution is the recognition of specific grid computing software (grid middleware) that can possibly be used together with existing CSPs to provide grid support. With its focus on end-users and end-user tools, it is intended that this research will encourage wider adoption of grid computing in the workplace and that simulation users will derive benefit from using this technology.
29

Omar, Wail M. "Self-management middleware services for autonomic grid computing." Thesis, Liverpool John Moores University, 2006. http://researchonline.ljmu.ac.uk/5784/.

30

Van, Le. "Gridsec : une architecture sécurisée pour le grid computing." Besançon, 2003. http://www.theses.fr/2003BESA2028.

Abstract:
Computing grids are spreading more and more and are increasingly powerful. The concept is used both for intensive computation and for running large-scale applications. Access to resources must not be limited by the heterogeneity of the interconnected machines and networks, and the data crossing these many networks must be secured. In this document we present GRIDSec, a secure architecture that allows a site to be identified and authenticated by the other sites without undermining the authority of local entities. GRIDSec is based on site (i.e. participant) authentication and protects exchanges without affecting either the compatibility of existing protocols or performance. GRIDSec uses the security functions of DNSSec in an unusual way: we turned a DNS server into a key distribution server. We used SSH to protect the authentication and key exchange phase and, once the servers are authenticated, IPSec to protect the data. Our security system is validated both by simulating a distributed application with SimGrid and through experimental results. Performance tests show the low cost of the mechanisms deployed.
31

MANCA, EMANUELE. "Grid and high performance computing applied to bioinformatics." Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266595.

Abstract:
Recent advances in genome sequencing technologies and modern biological data analysis technologies used in bioinformatics have led to a fast and continuous increase in biological data. The difficulty of managing the huge amounts of data currently available to researchers and the need to have results within a reasonable time have led to the use of distributed and parallel computing infrastructures for their analysis. In this context Grid computing has been used successfully. Grid computing is based on a distributed system which interconnects several computers and/or clusters to access global-scale resources. This infrastructure is flexible, highly scalable and can achieve high performance with data- and compute-intensive algorithms. Recently, bioinformatics has been exploring new approaches based on the use of hardware accelerators, such as Graphics Processing Units (GPUs). Initially developed as graphics cards, GPUs have recently been introduced for scientific purposes by reason of their performance per watt and the better cost/performance ratio achieved in terms of throughput and response time compared to other high-performance computing solutions. Although developers must have an in-depth knowledge of GPU programming and hardware to be effective, GPU accelerators have produced a lot of impressive results. The use of high-performance computing infrastructures raises the question of finding a way to parallelize algorithms while limiting data dependency issues, in order to accelerate computations on massively parallel hardware. In this context, the research activity in this dissertation focused on assessing and testing the impact of these innovative high-performance computing technologies on computational biology. In order to achieve high levels of parallelism and, in the final analysis, obtain high performance, some of the bioinformatics algorithms applicable to genome data analysis were selected, analyzed and implemented. These algorithms have been highly parallelized and optimized, thus maximizing the use of the GPU hardware resources. The overall results show that the proposed parallel algorithms are highly performant, thus justifying the use of such technology. Furthermore, a software infrastructure for workflow management has been devised to provide support for CPU and GPU computation on a distributed GPU-based infrastructure. This software infrastructure also allows a further coarse-grained data-parallel parallelization across multiple GPUs. Results show that the proposed application's speed-up increases with the number of GPUs.
APA, Harvard, Vancouver, ISO, and other styles
32

Cooper, Andrew. "Towards a trusted grid architecture." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:42268964-c1db-4599-9dbc-a1ceb1015ef1.

Full text
Abstract:
The malicious host problem is challenging in distributed systems such as grids and clouds. Rival organisations may share the same physical infrastructure. Administrators might deliberately or accidentally compromise users' data. The thesis concerns the development of a security architecture that allows users to place a high degree of trust in remote systems to process their data securely. The problem is tackled through a new security layer that ensures users' data can only be accessed within a trusted execution environment. Access to encrypted programs and data is authorised by a key management service using trusted computing attestation. Strong data integrity and confidentiality protection on remote hosts is provided by the job security manager virtual machine. The trusted grid architecture supports the enforcement of digital rights management controls. Subgrids allow users to define a strong trusted boundary for delegated grid jobs. Recipient keys enforce a trusted return path for job results to help users create secure grid workflows. Mandatory access controls allow stakeholders to mandate the software that is available to grid users. A key goal of the new architecture is backwards compatibility with existing grid infrastructure and data. This is achieved using a novel virtualisation architecture where the security layer is pushed down to the remote host, so it does not need to be pre-installed by the service provider. A new attestation scheme, called origin attestation, supports the execution of unmodified, legacy grid jobs. These features will ease the transition to a trusted grid and help make it practical for deployment on a global scale.
APA, Harvard, Vancouver, ISO, and other styles
33

Lindner, Peggy. "Management von verteilten ingenieurwissenschaftlichen Anwendungen in heterogenen Grid Umgebungen." [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-32412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Surapaneni, Chandra Sekhar. "Dynamically organized and scalable virtual organizations in Grid computing." Diss., UMK access, 2005.

Find full text
Abstract:
Thesis (M.S.)--School of Computing and Engineering. University of Missouri--Kansas City, 2005.
"A thesis in computer science." Typescript. Advisor: Deepankar Medhi. Vita. Title from "catalog record" of the print edition Description based on contents viewed March 12, 2007. Includes bibliographical references (leaves 85-87). Online version of the print edition.
APA, Harvard, Vancouver, ISO, and other styles
35

Shah, ShairBaz. "Using P2P approach for resource discovery in Grid Computing." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3088.

Full text
Abstract:
One of the fundamental requirements of Grid computing is an efficient and effective resource discovery mechanism. Resource discovery involves finding the appropriate resources required by user applications. Various resource discovery mechanisms have been proposed in recent years, ranging from centralized to hierarchical information-server approaches. Most of the techniques based on these approaches have scalability and fault-tolerance limitations. To overcome these limitations, Peer-to-Peer-based discovery mechanisms are proposed.
APA, Harvard, Vancouver, ISO, and other styles
36

Min, Rui. "Scheduling advance reservations with priorities in Grid computing systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62798.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tang, Jia. "An agent-based peer-to-peer grid computing architecture." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060508.151716/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Tian, Hongfei 1979. "Grid computing as an integrating force in virtual enterprises." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/85742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Phalke, Vidyadhar 1968. "Strategies for managing business disruption due to grid computing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16998.

Full text
Abstract:
Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2003.
Includes bibliographical references (leaves 44-45).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
In technology-centric businesses, disruptive technologies displace incumbents time and again, sometimes to the extent that incumbents go bankrupt. This thesis addresses the question of which strategies are essential for affected businesses and industries to prepare for and manage such disruptions. Specifically, we look at grid computing, which is poised to disrupt (1) certain Enterprise IT departments, and (2) the software industry in the high-performance and web services space. We present an analysis that addresses this disruption segment by segment across the distributed computing and application software value chain.
by Vidyadhar Phalke.
S.M.M.O.T.
APA, Harvard, Vancouver, ISO, and other styles
40

Abraham, G. T. "Group-based parallel multi-scheduling methods for grid computing." Thesis, Coventry University, 2016. http://curve.coventry.ac.uk/open/items/9e286ae2-da74-42c9-978b-65d5eb3e3857/1.

Full text
Abstract:
With the advent of multicore computers, the scheduling of Grid jobs can be made more effective if scaled to fully utilize the underlying hardware and parallelized to benefit from the exploitation of multicores. The fact that sequential algorithms neither scale with multicore systems nor benefit from parallelism remains a major challenge to scheduling in the Grid. As multicore systems become ever more pervasive in our computing lives, over-reliance on such systems for passive parallelism does not offer the best option in harnessing the benefits of their multiprocessors for Grid scheduling. An explicit means of exploiting parallelism for Grid scheduling is required. The Group-based Parallel Multi-scheduler for Grid introduced in this work is aimed at effectively exploiting the benefits of multicore systems for Grid job scheduling by splitting jobs and machines into paired groups and independently multi-scheduling jobs in parallel from the groups. The Priority method splits jobs into four priority groups based on job attributes and uses two methods (SimTog and EvenDist) to group machines. The scheduling is then carried out using the MinMin algorithm within the discrete group pairs. The Priority method was implemented and compared with the MinMin scheduling algorithm without grouping (named ordinary MinMin in this research). The analysis of results against ordinary MinMin shows substantial improvement in speedup and gains in scheduling efficiency. In addition, the Execution Time Balanced (ETB) and Execution Time Sorted then Balanced (ETSB) methods were implemented to group jobs in order to improve on some deficiencies found with the Priority method. The two methods used the same machine grouping methods as the Priority method, but were able to vary the number of groups and exploited different means of grouping jobs to ensure equitability of jobs in groups. The MinMin Grid scheduling algorithm was then executed independently within the discrete group pairs. Results and analysis show that the ETB and ETSB methods gain still further improvement over MinMin compared to the Priority method. The conclusion is reached that grouping jobs and machines before scheduling improves scheduling efficiency significantly.
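As an illustration of the scheme's core loop — jobs and machines split into paired groups, with MinMin run independently (and concurrently) inside each pair — here is a minimal sketch. The even split used for grouping, the job sizes, and the machine speeds are placeholders, not the thesis's Priority, ETB, or ETSB methods.

```python
# Minimal sketch of group-based parallel multi-scheduling: MinMin runs
# independently inside each (job group, machine group) pair, one pair
# per thread. Job sizes and machine speeds are made-up placeholders.
from concurrent.futures import ThreadPoolExecutor

def min_min(jobs, machines):
    """MinMin: repeatedly give the job with the smallest achievable
    completion time to the machine that achieves it."""
    ready = {name: 0.0 for name, _ in machines}   # machine -> ready time
    pending = list(jobs)                          # (job name, size) pairs
    schedule = []
    while pending:
        job, mach, finish = min(
            ((j, m, ready[m] + size / speed)
             for j, size in pending
             for m, speed in machines),
            key=lambda t: t[2],
        )
        pending = [p for p in pending if p[0] != job]
        ready[mach] = finish
        schedule.append((job, mach, finish))
    return schedule

def grouped_schedule(jobs, machines, groups=2):
    # Simple even split into paired groups; the thesis uses richer methods.
    job_groups = [jobs[i::groups] for i in range(groups)]
    machine_groups = [machines[i::groups] for i in range(groups)]
    with ThreadPoolExecutor(max_workers=groups) as pool:
        results = pool.map(min_min, job_groups, machine_groups)
    return [entry for sched in results for entry in sched]

if __name__ == "__main__":
    jobs = [(f"j{i}", float(10 + 3 * i)) for i in range(8)]
    machines = [("m1", 1.0), ("m2", 2.0), ("m3", 1.5), ("m4", 0.5)]
    for job, mach, finish in grouped_schedule(jobs, machines):
        print(f"{job} -> {mach} (finishes at {finish:.1f})")
```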
APA, Harvard, Vancouver, ISO, and other styles
41

Lorch, Markus. "PRIMA - Privilege Management and Authorization in Grid Computing Environments." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/26995.

Full text
Abstract:
Computational grids and other heterogeneous, large-scale distributed systems require more powerful and more flexible authorization mechanisms to realize fine-grained access-control of resources. Computational grids are increasingly used for collaborative problem-solving and advanced science and engineering applications. Usage scenarios for advanced grids require support for small, dynamic working groups, direct delegation of access privileges among users, procedures for establishing trust relationships without requiring organizational level agreements, precise management by individuals of their privileges, and retention of authority by resource providers. Existing systems fail to provide the necessary flexibility and granularity to support these scenarios. The reasons include the overhead imposed by required administrator intervention, coarse granularity that only allows for all-or-nothing access control decisions, and the inability to implement finer-grained access control without requiring trusted application code. PRIMA, the model and system developed in this research, focuses on management and enforcement of fine-grained privileges. The PRIMA model introduces novel approaches that can be used in place of, or in combination with existing access control mechanisms. PRIMA enables the users of a system to manage access to their own assets directly without the need for, and costs of intervention by technical personnel. System administrators benefit from more flexible and fine-grained definition of access privileges and policies. A novel access control decision and enforcement model with support for legacy applications has been developed. The model uses on-demand account leasing and implements expressive enforcement mechanisms built on existing low-overhead security primitives of the operating systems. The combination of the PRIMA components constitutes a comprehensive security model that facilitates highly dynamic authorization scenarios and increases security through least privilege access to resources. In summary, PRIMA mechanisms enable the use of fine-grained access rights, reduce administrative costs to resource providers, enable ad-hoc and dynamic collaboration scenarios, and provide improved security service to long-lived grid communities.
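One of the mechanisms named above, on-demand account leasing, can be sketched as a small pool manager that binds a grid identity to a temporarily assigned local account and reclaims it afterwards. The pool contents and the lease/release interface are illustrative assumptions, not PRIMA's actual code.

```python
# Illustrative sketch of on-demand account leasing: a grid identity (DN)
# is mapped to a pooled local account for the lifetime of a job, then the
# account is reclaimed. Pool contents and the API shape are assumptions.
class AccountLeaseManager:
    def __init__(self, pool):
        self.free = list(pool)
        self.leased = {}      # user DN -> local account

    def lease(self, user_dn: str) -> str:
        if user_dn in self.leased:          # idempotent per user
            return self.leased[user_dn]
        if not self.free:
            raise RuntimeError("no local accounts available")
        account = self.free.pop()
        self.leased[user_dn] = account
        return account

    def release(self, user_dn: str) -> None:
        account = self.leased.pop(user_dn)
        self.free.append(account)

if __name__ == "__main__":
    mgr = AccountLeaseManager([f"grid{i:03d}" for i in range(3)])
    acct = mgr.lease("/O=Grid/CN=Alice")
    print(f"Alice runs as {acct}")
    mgr.release("/O=Grid/CN=Alice")
```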
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Luo, Xubin. "Wavelength Division Multiplexing Optical Networks for Supporting Grid Computing." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1221753325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Afzal, Ali. "Capacity planning and stochastic scheduling in grid computing environments." Thesis, Imperial College London, 2007. http://hdl.handle.net/10044/1/11503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Srivatsan, Siddhartha Eluppai. "Integrating heterogeneous computing resources to form a campus grid." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Alsoghayer, Raid Abdullah. "Risk assessment models for resource failure in grid computing." Thesis, University of Leeds, 2011. http://etheses.whiterose.ac.uk/1909/.

Full text
Abstract:
Service Level Agreements (SLAs) are introduced in order to overcome the limitations associated with the best-effort approach in Grid computing, and thereby make Grid computing more attractive for commercial use. However, commercial Grid providers are not keen to adopt SLAs, since resource failure brings a risk of SLA violation and hence of a penalty fee; modelling the risk of resource failure is therefore critical to Grid resource providers. Essentially, moving from a best-effort approach for accepting SLAs to a risk-aware approach helps the Grid resource provider deliver a high level of Quality of Service (QoS). Moreover, risk is an important factor in establishing the resource price and the penalty fee in the case of resource failure. In light of this, we propose a mathematical model to predict the risk of failure of a Grid resource, using a discrete-time analytical model driven by reliability functions fitted to observed data. The model relies on the resource's historical information to predict the probability of resource failure (risk of failure) for a given time interval. The model was evaluated by comparing the predicted risk of failure with the observed risk of failure, using availability data gathered from Grid resources. The risk of failure is an important property of a Grid resource, especially when scheduling jobs optimally across resources so as to achieve a business objective. However, in Grid computing, user-centric scheduling algorithms ignore the risk factor and mostly address minimising the cost of resource allocation or meeting the overall deadline by which the job must be executed completely. Therefore, we propose a novel user-centric scheduling algorithm for scheduling Bag of Tasks (BoT) applications. The algorithm, which aims to meet user requirements, takes into account the risk of failure, the cost of resources and the job deadline. Through simulation, we demonstrate that the algorithm provides a near-optimal solution for minimising the cost of executing BoT jobs. We also show that the execution time of the proposed algorithm is very low, making it suitable for solving scheduling problems in real time. Risk assessment benefits the resource provider by providing methods to support accepting or rejecting an SLA. Moreover, it enables the resource provider to understand the capacity of the infrastructure and thereby plan future investment. Scheduling algorithms benefit the resource provider by providing methods to meet user requirements and to utilise resources better. The ability to adopt a risk assessment method and user-centric algorithms makes the exploitation of Grid systems more realistic.
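The prediction such a model makes — the probability that a resource fails within a coming interval, given that it has survived so far — can be sketched from a fitted reliability function. The Weibull form and the parameter values below are assumptions for illustration; the thesis fits reliability functions to observed availability data.

```python
# Sketch of predicting a resource's risk of failure over a coming time
# interval from a reliability function fitted to availability history.
# The Weibull form and its parameters are assumed for illustration.
import math

def weibull_reliability(t: float, shape: float, scale: float) -> float:
    """R(t): probability the resource survives beyond time t."""
    return math.exp(-((t / scale) ** shape))

def risk_of_failure(t: float, horizon: float,
                    shape: float, scale: float) -> float:
    """P(failure in [t, t+horizon] | survived to t) = 1 - R(t+h)/R(t)."""
    return 1.0 - (weibull_reliability(t + horizon, shape, scale)
                  / weibull_reliability(t, shape, scale))

if __name__ == "__main__":
    # Assumed fitted parameters: shape=1.5, scale=500 hours of uptime.
    risk = risk_of_failure(t=200.0, horizon=24.0, shape=1.5, scale=500.0)
    print(f"risk of failure in the next 24 h: {risk:.3f}")
```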
APA, Harvard, Vancouver, ISO, and other styles
46

Rashid, Md Mamunur. "Non-grid opportunistic resources for (big data) volunteer computing." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/61077/.

Full text
Abstract:
CPU-intensive computing at the LHC (Large Hadron Collider) requires collaborative distributed computing resources to accomplish its data reconstruction and analysis. Currently, institutional Grids try to manage and process large datasets within limited time and cost. The baseline paradigm is now well established: use the Computing Grid, and more specifically the WLCG (Worldwide LHC Computing Grid) and its supporting infrastructures. For its Grid computing, LHCb has developed a community Grid solution called DIRAC (Distributed Infrastructure with Remote Agent Control), based on a pilot-job submission system targeting institutional Grid infrastructures. However, there are other computing resources, such as idle desktops (e.g. SETI@home) and idle computing clusters (e.g. CERN's online selection farm outside the data-taking periods of the LHC detectors), that could be used outside the Grid infrastructures. Because they are lightweight, simulation activities in particular could benefit from using those opportunistic resources. The DIRAC architecture allows the use of existing institutional Grid resources. To expand the capability of existing computing power, I have proposed integrating opportunistic resources into the distributed computing system (DIRAC). In order not to depend on the local settings of the worker node at the external resource, I propose using virtual machines. The architectural modifications required for DIRAC are presented here, with specific examples of data analysis on non-Grid clusters. This solution was achieved by making the necessary changes in three state-of-the-art technologies: DIRAC, CernVM and OpenNebula. The combination of these three techniques is referred to as the DiCON architecture. I refer to the new approach as a framework rather than a specific technical solution to a specific scientific problem, as it can be reused in similar big-data analysis environments. I have also shown how it was used to analyse large-scale climate data; applying an infrastructure developed for one research area to another was rather challenging. I have also proposed using a dataflow architecture to exploit the possibilities of opportunistic resources while establishing reliability and stability. Dataflow computing architecture in a virtual environment is seen as a possible future research extension of this work; this is a theoretical contribution only, and a unique approach in a virtual cloud (not in-house computing) environment. This paradigm could give the scientific community access to a large number of non-conventional opportunistic CPU resources for scientific data processing. This PhD work addresses the challenges of, and the solutions provided by, such a computing infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
47

CRISTOFORI, Andrea. "Grid accounting for computing and storage resources towards standardization." Doctoral thesis, Università degli studi di Ferrara, 2011. http://hdl.handle.net/11392/2389368.

Full text
Abstract:
In recent years we have seen a growing interest, first from the scientific community and then from commercial vendors, in new technologies such as Grid and Cloud computing. The first, in particular, was born to meet the enormous computational demands coming mostly from physics experiments, especially the Large Hadron Collider (LHC) experiments at CERN (Conseil Européen pour la Recherche Nucléaire, the European Laboratory for Particle Physics) in Geneva. Other scientific disciplines that also benefit from those technologies are biology, astronomy, earth sciences, life sciences, etc. Grid systems allow the sharing of heterogeneous computational and storage resources between different geographically distributed institutes, agencies or universities. For this purpose, technologies have been developed to allow communication, authentication, and the storing and processing of the required software and scientific data. This gives different scientific communities access to computational resources that a single institute could not host for logistical and cost reasons. Grid systems were not the only answer to this growing need for resources. At the same time, in recent years, we have seen the rise of so-called Cloud computing. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The use of both computational paradigms, and the utilization of storage resources, relies on different authentication and authorization tools. Using those technologies requires systems for accounting for the consumed resources. Those systems are built on top of the existing infrastructure and collect all the needed data related to the users, groups and resources involved. This information is then gathered in central repositories, where it can be analyzed and aggregated. The Open Grid Forum (OGF) is the international body that works to develop standards in the Grid environment. The Usage Record Working Group (UR-WG), born within OGF, aims at standardizing the Usage Record (UR) structure and publication for different kinds of resources. Up to now, the emphasis has been on accounting for computational resources. Over time, the need emerged to extend those concepts to other aspects, and especially to the definition and implementation of a standard UR for storage accounting. Several extensions to the UR definition are proposed in this thesis, and the proposed developments in this field are described. The Distributed Grid Accounting System (DGAS) has been chosen, among the other tools available, as the accounting system for the Italian Grid, and is also adopted in other countries such as Greece and Germany. Together with HLRmon, it offers a complete accounting system, and it is the tool used during the writing of this thesis at INFN-CNAF.
• In Chapter 1, I focus on the paradigm of distributed computing and introduce the Grid infrastructure, with particular emphasis on the gLite middleware and the EGI-InSPIRE project.
• In Chapter 2, I discuss some Grid accounting systems for computational resources, with particular attention to DGAS.
• In Chapter 3, the cross-check monitoring system used to verify the correctness of the data gathered at INFN-CNAF's Tier1 is presented.
• In Chapter 4, another important aspect of accounting, accounting for storage resources, is introduced, and the definition of a standard UR for storage accounting is presented.
• In Chapter 5, an implementation of a new accounting system for storage, using the definitions given in Chapter 4, is presented.
• In Chapter 6, the focus of the thesis moves to the performance and reliability tests performed on the latest development release of DGAS, which implements ActiveMQ as a standard transport mechanism.
• Appendix A collects the BASH scripts and SQL code that are part of the cross-check tool described in Chapter 3.
• Appendix B collects the scripts used in the implementation of the accounting system described in Chapter 5.
• Appendix C collects the scripts and configurations used for the tests of the ActiveMQ implementation of DGAS described in Chapter 6.
• Appendix D collects the publications to which I contributed during the thesis work.
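A storage usage record of the kind standardized in Chapter 4 can be sketched as a small structure recording who used which storage system, how much, and over what period. The field names below follow the spirit of OGF-style storage accounting records but are illustrative, not the standardized schema.

```python
# Illustrative sketch of a storage usage record: the field set loosely
# mirrors OGF-style storage accounting (who used which storage system,
# how much, over what period) but is not the standardized schema.
import json
import uuid
from datetime import datetime, timedelta, timezone

def make_storage_record(user_dn, storage_system, bytes_used,
                        file_count, hours):
    end = datetime.now(timezone.utc)
    return {
        "RecordId": str(uuid.uuid4()),
        "SubjectIdentity": user_dn,
        "StorageSystem": storage_system,
        "ResourceCapacityUsed": bytes_used,   # bytes
        "FileCount": file_count,
        "StartTime": (end - timedelta(hours=hours)).isoformat(),
        "EndTime": end.isoformat(),
    }

if __name__ == "__main__":
    rec = make_storage_record(
        "/O=Grid/CN=Alice", "se01.cnaf.infn.it", 42_000_000_000, 1200, 24)
    print(json.dumps(rec, indent=2))
```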
APA, Harvard, Vancouver, ISO, and other styles
48

Scholz, Stefan. "Geschäftsmodelle für Grid Computing in der Medizin und der Biomedizin." Lohmar Köln Eul, 2009. http://d-nb.info/1000567591/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Reinicke, Michael. "Dienstauswahlverfahren im Grid /." Göttingen : Sierke, 2007. http://www.gbv.de/dms/zbw/52764790X.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Sandholm, Thomas. "Managing Service Levels in Grid Computing Systems : Quota Policy and Computational Market Approaches." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4346.

Full text
Abstract:

We study techniques to enforce and provision differentiated service levels in Computational Grid systems. The Grid offers simplified provisioning of peak capacity for applications with computational requirements beyond local machines and clusters, by sharing resources across organizational boundaries. Current systems have focused on access control, i.e., managing who is allowed to run applications on remote sites. Very little work has been done on providing differentiated service levels for those applications that are admitted. This leads to a number of problems in scheduling jobs fairly and efficiently. For example, users with a large number of long-running jobs could starve out others, both intentionally and unintentionally. We investigate the requirements of High Performance Computing (HPC) applications that run in academic Grid systems, and propose two models of service-level management. Our first model is based on global real-time quota enforcement, where projects are granted resource quota, such as CPU hours, across the Grid by a centralized allocation authority. We implement the SweGrid Accounting System to enforce quota allocated by the Swedish National Allocations Committee in the SweGrid production Grid, which connects six Swedish HPC centers. A flexible authorization policy framework allows the provisioning and enforcement of two different service levels across the SweGrid clusters: high-priority and low-priority jobs. As a solution for more fine-grained control over service levels, we propose and implement a Grid Market system, using a market-based resource allocator called Tycoon. The conclusion of our research is that although the Grid accounting solution offers better service-level enforcement than state-of-the-art production Grid systems, it turned out to be complex to set the resource price and other policies manually while ensuring fairness and efficiency. Our Grid Market, on the other hand, sets the price according to dynamic demand, and is furthermore incentive-compatible, in that the overall system state remains healthy even in the presence of strategic users.
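The first model — a central authority granting CPU-hour quota that is debited in real time as jobs are admitted — can be sketched as a small quota bank with two service levels. The admission policy shown (quota-backed high priority, opportunistic low priority) is an assumption for illustration, not SweGrid's exact policy.

```python
# Illustrative sketch of Grid-wide real-time quota enforcement: a central
# allocation authority grants CPU-hour quota per project, and each job
# admission debits the remaining balance. The two service levels and the
# policy shown are an assumption, not the production system's policy.
class QuotaBank:
    def __init__(self, allocations):
        self.remaining = dict(allocations)   # project -> CPU hours

    def admit(self, project: str, cpu_hours: float, priority: str) -> str:
        balance = self.remaining.get(project, 0.0)
        if priority == "high" and balance >= cpu_hours:
            self.remaining[project] = balance - cpu_hours
            return "run at high priority"
        # Out-of-quota or low-priority work runs opportunistically.
        return "run at low priority"

if __name__ == "__main__":
    bank = QuotaBank({"climate-sim": 100.0})
    print(bank.admit("climate-sim", 80.0, "high"))   # debits quota
    print(bank.admit("climate-sim", 80.0, "high"))   # quota exhausted
```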

APA, Harvard, Vancouver, ISO, and other styles