Dissertations / Theses on the topic 'Computational Grid'

To see the other types of publications on this topic, follow the link: Computational Grid.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Computational Grid.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Lu, Kai. "Decentralized load balancing in heterogeneous computational grids." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/9382.

Full text
Abstract:
With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, grid computing has emerged as an attractive computing paradigm. The spatial limitations of conventional distributed systems can thus be overcome, allowing under-utilised computing resources in every region of the world to be fully exploited for distributed jobs. Workload and resource management are key grid services at the service level of the grid software infrastructure, where load balancing is a common concern for most grid infrastructure developers. Although these are established research areas in parallel and distributed computing, grid computing environments present a number of new challenges, including large-scale computing resources, heterogeneous computing power, the autonomy of the organisations hosting the resources, uneven job-arrival patterns among grid sites, considerable job transfer costs, and considerable communication overhead involved in capturing the load information of sites. This dissertation focuses on designing solutions for load balancing in computational grids that can cater for the unique characteristics of grid computing environments. To explore the solution space, we conducted a survey of load-balancing solutions, which enabled discussion and comparison of existing approaches, and the delimitation and exploration of a portion of the solution space. A system model was developed to study the load-balancing problems in computational grid environments. In particular, we developed three decentralised algorithms for job dispatching and load balancing, using only partial information: the desirability-aware load-balancing algorithm (DA), the performance-driven desirability-aware load-balancing algorithm (P-DA), and the performance-driven region-based load-balancing algorithm (P-RB). All three are scalable, dynamic, decentralised and sender-initiated. We conducted extensive simulation studies to analyse the performance of our load-balancing algorithms. Simulation results showed that the algorithms significantly outperform pre-existing decentralised algorithms that are relevant to this research.
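By way of illustration only, and not as a reproduction of the thesis's DA/P-DA/P-RB algorithms, a generic sender-initiated, threshold-based dispatch decision of the kind surveyed in such work might be sketched as follows; the threshold, margin and peer-load exchange are all illustrative assumptions:

```python
# Hypothetical sketch of a sender-initiated, threshold-based dispatch rule.
# This is NOT the thesis's DA/P-DA/P-RB algorithms, only a generic illustration:
# a site forwards a newly arrived job when its own queue exceeds a threshold,
# choosing the least-loaded peer among those it has (possibly stale) load data for.

def dispatch(job, local_queue_len, peer_loads, threshold=10):
    """peer_loads: dict mapping peer site id -> last reported queue length."""
    if local_queue_len <= threshold or not peer_loads:
        return "local"                      # keep the job on the local site
    best_peer, best_load = min(peer_loads.items(), key=lambda kv: kv[1])
    # transfer only if the best peer is meaningfully less loaded,
    # so that the job transfer cost is likely to pay off
    if best_load + 2 < local_queue_len:
        return best_peer
    return "local"

# Example: 14 jobs queued locally, peers A and B report 3 and 9 waiting jobs
print(dispatch("job-42", 14, {"A": 3, "B": 9}))   # -> "A"
```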
APA, Harvard, Vancouver, ISO, and other styles
2

Ming, Jiang. "Distributed simulations on a computational Grid." Thesis, University of Birmingham, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487476.

Full text
Abstract:
In order to simulate a large-scale and complex model, a distributed simulation may need to harness and organise a huge amount of computing and network resources. A computational Grid is a novel distributed computing system that is able to organise virtually unlimited computing and network resources to meet the resource requirements of various computationally intensive problems. This thesis focuses on the issues involved in integrating a distributed simulation with a computational Grid. In particular, the dynamic and heterogeneous nature of Grid resources and the potentially high communication latencies between these resources are identified as the main challenges to the performance of a distributed simulation running on a computational Grid. This thesis proposes a generic framework that provides a systematic solution to tackle these challenges and supports the execution, management and optimisation of a distributed simulation program on a computational Grid. A prototype of the framework is also implemented and evaluated. Within the prototype implementation, an adaptive control mechanism for optimising the execution of a Time Warp Parallel Discrete Event Simulation program on a computational Grid is developed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
3

Cardenas Baron, Yonni, Lionel Brunie, and Jean-Marc Pierson. "Grid caching: specification and implementation of collaborative cache services for grid computing." Villeurbanne: Doc'INSA, 2008. http://docinsa.insa-lyon.fr/these/pont.php?id=cardenas_baron.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Phinjaroenphan, Panu. "An Efficient, Practical, Portable Mapping Technique on Computational Grids." RMIT University, Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080516.145808.

Full text
Abstract:
Grid computing provides a powerful, virtual parallel system known as a computational Grid on which users can run parallel applications to solve problems quickly. However, users must be careful to allocate tasks to nodes properly, because the improper allocation of even one task could result in lengthy executions of applications or, even worse, application crashes. This allocation problem is called the mapping problem, and an entity that tackles this problem is called a mapper. In this thesis, we aim to develop an efficient, practical, portable mapper. To study the mapping problem, researchers often make unrealistic assumptions, such as that nodes of Grids are always reliable, that execution times of tasks assigned to nodes are known a priori, or that detailed information about parallel applications is always known. As a result, the practicality and portability of mappers developed under such conditions are uncertain. Our review of related work suggested that a more efficient tool is required to study this problem; therefore, we developed GMap, a simulator researchers and developers can use to develop practical, portable mappers. The fact that nodes are not always reliable led to the development of an algorithm for predicting the reliability of nodes and a predictor for identifying reliable nodes of Grids. Experimental results showed that the predictor reduced the chance of failures in executions of applications by half. The facts that execution times of tasks assigned to nodes are not known a priori and that detailed information about parallel applications is not always known led to the evaluation of five nearest-neighbour (nn) execution time estimators: k-nn smoothing, k-nn, adaptive k-nn, one-nn, and adaptive one-nn. Experimental results showed that adaptive k-nn was the most efficient one. We also implemented the predictor and the estimator in GMap. Using GMap, we could reliably compare the efficiency of six mapping algorithms: Min-min, Max-min, Genetic Algorithms, Simulated Annealing, Tabu Search, and Quick-quality Map, with none of the preceding unrealistic assumptions. Experimental results showed that Quick-quality Map was the most efficient one. As a result of these findings, we achieved our goal of developing an efficient, practical, portable mapper.
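As a rough illustration of the nearest-neighbour idea evaluated in the thesis (the exact feature set, distance measure and adaptive variants are not specified here and are assumptions), a plain k-nn execution-time estimator can be sketched as:

```python
import math

def knn_estimate(history, query, k=3):
    """history: list of (features, runtime) pairs from past task executions;
    query: feature vector of the task whose runtime we want to predict.
    Returns the mean runtime of the k most similar past executions."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(history, key=lambda h: dist(h[0], query))[:k]
    return sum(r for _, r in nearest) / len(nearest)

# Example (illustrative features): (input size in MB, node speed in GFLOPS)
past = [((100, 2.0), 50.0), ((200, 2.0), 98.0), ((100, 4.0), 27.0), ((400, 1.0), 410.0)]
print(knn_estimate(past, (150, 2.0), k=3))  # rough runtime prediction in seconds
```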
APA, Harvard, Vancouver, ISO, and other styles
5

Lepping, Joachim [Verfasser]. "Dezentrales grid scheduling mittels computational intelligence / Joachim Lepping." Dortmund : Universitätsbibliothek Technische Universität Dortmund, 2011. http://d-nb.info/1011570572/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Quirk, J. J. "An adaptive grid algorithm for computational shock hydrodynamics." Thesis, Boston Spa, U.K. : British Library Document Supply Centre, 1991. http://ethos.bl.uk/OrderDetails.do?did=1&uin=uk.bl.ethos.280913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Paulus, Amanda. "A Model-Predictive-Control Based Smart-Grid Aggregator." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230958.

Full text
Abstract:
The use of intermittent energy sources, such as solar and wind power, is continuously increasing. Intermittent energy sources are highly dependent on prevailing weather conditions, resulting in stochastic electricity generation. The expected stochasticity in electricity generation will cause issues for the current power grid. Moreover, an expected issue for the Swedish power grid is higher peak loads. Thus, there is an emerging need for novel and smart power systems capable of shifting peak loads in the future electricity grid. Model Predictive Control (MPC) is a sophisticated control method that is suitable for smart-grid aggregators. Hence, MPC can be used to optimally control the efficiency of energy use in a smart grid and shift peak loads. The purpose of this thesis is to investigate optimal peak-load shifting and the efficiency of electrical substation operation in a smart grid in Ramsjöåsen, Sweden, using an MPC-based smart-grid aggregator. Furthermore, the purpose is also to contribute to the theoretical foundation for future peak-load shifting in smart grids. Within the thesis project a mathematical model for the smart grid in Ramsjöåsen is developed, which is then used to simulate different scenarios. The simulated results indicate that an MPC-based smart-grid aggregator improves the performance of the smart grid in Ramsjöåsen, with regard to both peak-load shifting and the efficiency of electrical substation operation.
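For orientation only, and not as a reproduction of the thesis's model (the horizon, storage dynamics, symbols and constraints below are all illustrative assumptions), a generic MPC peak-shaving problem of the kind referred to above can be stated as: at each time step, given a demand forecast $d_k$ over a horizon of $N$ steps, choose controllable actions $u_k$ (e.g. lossless battery charging power, positive when charging) by solving

$$
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1}\Big(c_k\,g_k + \rho\,\max(0,\,g_k-\bar g)\Big)
\quad\text{s.t.}\quad
g_k = d_k + u_k,\qquad
x_{k+1} = x_k + u_k\,\Delta t,\qquad
x_{\min}\le x_k\le x_{\max},\qquad
|u_k|\le u_{\max},
$$

where $g_k$ is the power drawn from the substation, $x_k$ the storage state, $c_k$ an energy price, $\bar g$ a peak-load target and $\rho$ a peak penalty. Only the first action $u_0$ is applied before the horizon is shifted and the problem is re-solved with updated forecasts, which is the receding-horizon principle of MPC.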
APA, Harvard, Vancouver, ISO, and other styles
8

Popescu, Mihaela. "A finite volume, Cartesian grid method for computational aeroacoustics." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Afgan, Enis. "Utility driven grid scheduling framework." Birmingham, Ala. : University of Alabama at Birmingham, 2009. https://www.mhsl.uab.edu/dt/2009p/afgan.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2009.
Title from PDF title page (viewed Sept. 1, 2009). Additional advisors: Brandon Eames, Elliot Lefkowitz, Anthony Skjellum, Alan Sprague. Includes bibliographical references (p. 228-245).
APA, Harvard, Vancouver, ISO, and other styles
10

Sandholm, Thomas. "Managing Service Levels in Grid Computing Systems : Quota Policy and Computational Market Approaches." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4346.

Full text
Abstract:

We study techniques to enforce and provision differentiated service levels in Computational Grid systems. The Grid offers simplified provisioning of peak capacity for applications with computational requirements beyond local machines and clusters, by sharing resources across organizational boundaries. Current systems have focussed on access control, i.e., managing who is allowed to run applications on remote sites. Very little work has been done on providing differentiated service levels for those applications that are admitted. This leads to a number of problems when scheduling jobs in a fair and efficient way. For example, users with a large number of long-running jobs could starve out others, both intentionally and unintentionally. We investigate the requirements of High Performance Computing (HPC) applications that run in academic Grid systems, and propose two models of service-level management. Our first model is based on global real-time quota enforcement, where projects are granted resource quota, such as CPU hours, across the Grid by a centralized allocation authority. We implement the SweGrid Accounting System to enforce quota allocated by the Swedish National Allocations Committee in the SweGrid production Grid, which connects six Swedish HPC centers. A flexible authorization policy framework allows provisioning and enforcement of two different service levels across the SweGrid clusters: high-priority and low-priority jobs. As a solution for more fine-grained control over service levels, we propose and implement a Grid Market system, using a market-based resource allocator called Tycoon. The conclusion of our research is that although the Grid accounting solution offers better service-level enforcement support than state-of-the-art production Grid systems, it turned out to be complex to set the resource price and other policies manually while ensuring fairness and efficiency of the system. Our Grid Market, on the other hand, sets the price according to dynamic demand, and it is further incentive compatible, in that the overall system state remains healthy even in the presence of strategic users.

APA, Harvard, Vancouver, ISO, and other styles
11

Junkert, Levi Daniel. "The grid overlay system model." Thesis, Montana State University, 2009. http://etd.lib.montana.edu/etd/2009/junkert/JunkertL0509.pdf.

Full text
Abstract:
The grid overlay system model is a new technique for forming a grid computing model for research computing. In this method we construct a grid that is dynamically allocated from a set of resources in a unique and progressive manner. This new system model allows for construction of virtual environments for execution of applications on many diverse shared resources. The system can dynamically scale to create a range of resources from a single machine to a virtual cluster of machines. This model provides a virtual container that can run legacy and customized software in an emulated environment or directly on the host's hardware through virtualization. Using this model on current consumer hardware allows for a unique blend of software containment with dynamic resource allocation. Our model, in combination with commercial off the shelf (COTS) hardware and software, is able to create a large grid system with multiple combinations of hardware and software environments. In our model we propose a unique set of abstraction layers for systems. The combination of our model with current consumer hardware and software provides a unique design principle for addressing grid implementation, hardware reusability, operating system deployment and implementation, virtualization in the grid, and user control techniques. This provides a robust and simple framework that allows for the construction of computational research solutions in which performance can be traded for flexibility, and vice versa. Our model can be applied to computational research grids, service oriented grids, and even scales to collections of mobile or embedded system grids.
APA, Harvard, Vancouver, ISO, and other styles
12

Heminger, Michael Alan. "Dynamic Grid Motion in a High-Order Computational Aeroacoustic Solver." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1272550725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Shi, Benyun. "A concurrent negotiation mechanism for grid resource co-allocation." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Barbosa, Rodrigo Moreira. "MobiGrid: arcabouço para agentes móveis em ambiente de grades computacionais." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04052007-175707/.

Full text
Abstract:
This text presents a project which focuses on the implementation of a framework for mobile agent support within a grid environment, namely InteGrade. Our framework, MobiGrid, was created so that long-running sequential applications can be executed on a network of personal workstations. Mobile agents are used to encapsulate these long-running applications; the encapsulation of a long-running application by a mobile agent is what we call a task. Hence, tasks can migrate whenever the local machine is requested by its local user, since they are provided with automatic migration capabilities. Our framework also provides the user with a manager that keeps track of the submitted agents. Based on the execution environment described above, we have created a mathematical model which allows us to simulate how a great quantity of tasks submitted to a grid with many workstations would behave. In this text, we also present our model, as well as the results of our simulations.
APA, Harvard, Vancouver, ISO, and other styles
15

Gandhi, Bhavin B. "On the coverage of grid information dissemination protocols." Diss., Online access via UMI:, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Erdil, Değer Cenk. "Adaptive dissemination protocols for hybrid grid resource scheduling." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
17

Ghorbani, Mohammadmersad. "Computational analysis of CpG site DNA methylation." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/8217.

Full text
Abstract:
Epigenetics is the study of factors that can alter gene activity and be passed to the next generation without any change to the DNA sequence. DNA methylation is one category of epigenetic change. DNA methylation is the attachment of a methyl group (CH3) to DNA. Most of the time it occurs in sequences where a C is followed by a G, known as CpG sites, by the addition of a methyl group to the cytosine residue. As science and technology progress, new data become available about individuals' DNA methylation profiles under different conditions. New features have also been discovered that can play a role in DNA methylation. The availability of new data on DNA methylation and other features of DNA presents a challenge to bioinformatics and an opportunity to discover new knowledge from existing data. In this research, multiple data series were used to identify classes of DNA methylation at CpG sites. These classes are: a) never methylated CpG sites, b) always methylated CpG sites, c) CpG sites methylated in cancer/disease samples and non-methylated in normal samples, and d) CpG sites methylated in normal samples and non-methylated in cancer/disease samples. After identification of these sites and their classes, an analysis was carried out to find the features which can better classify these sites. A matrix of features was generated using four applications in the EMBOSS software suite. The feature matrix was also generated using the gUse/WS-PGRADE portal workflow system. To do this, each of the four applications was grid-enabled and ported to the BOINC platform. The gUse portal was connected to the BOINC project via the 3G-bridge. Each node in the workflow created a portion of the matrix, and these portions were then combined to create the final matrix. This final feature matrix was used in a hill climbing workflow. The hill climbing node was a Java program ported to the BOINC platform. A hill climbing search workflow was used to search for a subset of features that are better at classifying the CpG sites, using five different measurements and three different classification methods: support vector machine, naïve Bayes and the J48 decision tree. Using this approach, the hill climbing search found models which contain fewer than half the number of features and give better classification results. It has also been demonstrated that using the gUse/WS-PGRADE workflow system provides a modular way of feature generation, so adding a new feature-generator application can be done without changing other parts. It is also shown that using grid-enabled applications can speed up both feature generation and feature subset selection. The approach used in this research for distributed, workflow-based feature generation is not restricted to this study and can be applied in other studies that involve feature generation. The approach also needs multiple binaries to generate portions of the features. The grid-enabled hill climbing search application can also be used in different contexts, as it only requires the same feature-matrix format to be followed.
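As an indicative sketch only (the thesis's five measurements, grid-enabled execution and Weka classifiers are not reproduced; the scoring function below is a stand-in), a greedy hill-climbing feature-subset search of the kind described can be outlined as:

```python
def hill_climb(features, score, max_iter=100):
    """Greedy hill-climbing feature-subset selection.
    features: list of candidate feature names.
    score: function(subset) -> classification quality (e.g. cross-validated accuracy);
           a user-supplied stand-in for training SVM / naive Bayes / J48 on that subset."""
    current, best = set(), score(set())
    for _ in range(max_iter):
        # neighbours: add one missing feature or drop one selected feature
        neighbours = [current | {f} for f in features if f not in current] + \
                     [current - {f} for f in current]
        if not neighbours:
            break
        cand = max(neighbours, key=score)
        cand_score = score(cand)
        if cand_score <= best:          # no neighbour improves the score: stop
            break
        current, best = cand, cand_score
    return current, best
```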
APA, Harvard, Vancouver, ISO, and other styles
18

Muranganwa, Raymond. "Design and implementation of a multi-agent opportunistic grid computing platform." Thesis, University of Fort Hare, 2016. http://hdl.handle.net/10353/2722.

Full text
Abstract:
Opportunistic Grid Computing involves joining idle computing resources in enterprises into a converged high performance commodity infrastructure. The research described in this dissertation investigates the viability of public resource computing in offering a plethora of possibilities through seamless access to shared compute and storage resources. The research proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution in an Information and Communication Technologies for Development (ICT4D) initiative to address some limitations prevalent in traditional distributed system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for provisioning of Opportunistic Grid Computing platforms. Exploration of agent technologies within the research context identified two key components which improve access to extended computer capabilities. The first component is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles. The compute component integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule based reasoning concepts. The second service is a MAS based storage component realized through disk mirroring and Google file-system’s chunking with atomic append storage techniques. This research provides a candidate Opportunistic Grid Computing platform design and implementation through the use of MAS. Experiments conducted validated the design and implementation of the compute and storage services. From results, support for processing user applications; resource identification and allocation; and rule based reasoning validated the MA compute component. A MAS based file-system that implements chunking optimizations was considered to be optimum based on evaluations. The findings from the undertaken experiments also validated the functional adequacy of the implementation, and show the suitability of MAS for provisioning of robust, autonomous, and intelligent platforms. The context of this research, ICT4D, provides a solution to optimizing and increasing the utilization of computing resources that are usually idle in these contexts.
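For context, the Contract Net Protocol mentioned above follows a call-for-proposals / bid / award cycle; a toy, non-JADE sketch (all names and the cost model are illustrative assumptions, not the dissertation's implementation) could be:

```python
def contract_net_award(task, agents):
    """agents: dict mapping agent id -> bid function(task) -> cost estimate, or None to refuse.
    The initiator broadcasts a call for proposals, collects bids, and awards
    the task to the cheapest bidder (the FIPA CNP cycle in miniature, heavily simplified)."""
    bids = {aid: bid(task) for aid, bid in agents.items()}
    bids = {aid: c for aid, c in bids.items() if c is not None}   # drop refusals
    if not bids:
        return None                       # no agent accepted the call
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# Example: two agents bid their current queue length as a proxy for cost
agents = {"node-1": lambda t: 5, "node-2": lambda t: 2}
print(contract_net_award("render-frame", agents))   # -> ('node-2', 2)
```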
APA, Harvard, Vancouver, ISO, and other styles
19

Berten, Vandy. "Stochastic approach to Brokering heuristics for computational grids." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210707.

Full text
Abstract:
Computational Grids are large infrastructures composed of several components such as clusters or massively parallel machines, generally spread across a country or the world, linked together through a network such as the Internet, and allowing transparent access to any resource. Grids have become unavoidable for a large part of the scientific community requiring computational power, such as high-energy physics, bioinformatics or earth observation. Large projects are emerging, often at an international level, but even if Grids are on the way to becoming efficient and user-friendly systems, computer scientists and engineers still have a huge amount of work to do in order to improve their efficiency. Amongst a large number of problems to solve or improve upon, the problem of scheduling the work and balancing the load is of prime importance.

This work concentrates on the way the work is dispatched on such systems, and mainly on how the first level of scheduling, generally named brokering or meta-scheduling, is performed. We deeply analyze the behavior of popular strategies, compare their efficiency, and propose a new, very efficient brokering policy providing notable performance, as attested by the large number of simulations we performed and report in the document.

The work is mainly split in two parts. After introducing the mathematical framework on which the remainder of the manuscript is based, we study systems where the grid brokering is done without any feedback information, i.e. without knowing the current state of the clusters when the resource broker (the grid component receiving jobs from clients and performing the brokering) makes its decision. We show how a computational grid behaves if the brokering is done in such a way that each cluster receives a quantity of work proportional to its computational capacity.

The second part of this work is rather independent from the first one, and consists of the presentation of a brokering strategy, based on Whittle's indices, that tries to minimize as much as possible the average sojourn time of jobs. We show how efficient the proposed strategy is for computational grids, compared to the ones popular in production systems. We also show its robustness to several parameter changes, and provide several very efficient algorithms for making the computations required by this index policy. We finally extend our model in several directions.
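To make the first, feedback-free setting concrete (this is a generic illustration, not the manuscript's model; the capacities and the use of randomised routing are assumptions), dispatching each job to a cluster with probability proportional to its computational capacity can be sketched as:

```python
import random

def capacity_proportional_broker(jobs, clusters):
    """clusters: dict mapping cluster name -> computational capacity (e.g. total GFLOPS).
    Without any feedback on current cluster state, each job is routed to a cluster
    with probability proportional to that cluster's capacity."""
    names = list(clusters)
    weights = [clusters[n] for n in names]
    assignment = {}
    for job in jobs:
        assignment[job] = random.choices(names, weights=weights, k=1)[0]
    return assignment

# Example: the "fast" cluster receives roughly three jobs out of every four
print(capacity_proportional_broker(["j1", "j2", "j3"], {"fast": 300, "slow": 100}))
```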


Doctorate in Science, Computer Science specialisation

APA, Harvard, Vancouver, ISO, and other styles
20

Jordy, Daniel Edward. "Computational drop testing of printed circuit boards with BGA components." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
21

Al, Bodour R. "A flexible model supporting QoS and reallocation for grid applications." Thesis, Coventry University, 2011. http://curve.coventry.ac.uk/open/items/208265d9-ee5d-4c88-a955-466c9fc3dfa3/1.

Full text
Abstract:
The rise of business-oriented and commercial applications for Grid computing environments has recently gathered pace. Grid computing has traditionally been linked with scientific environments, where the heterogeneous resources provided by Grid systems and infrastructures were employed for carrying out computationally intensive and data-intensive scientific experiments or applications that may not have been possible before. The natural progression is that business-oriented applications will look to build on this success and utilise the large number of heterogeneous Grid resources potentially available, including computational resources such as CPUs and memory, and storage resources such as disk space. The success of introducing these applications into the mainstream is directly related to whether service providers can deliver a level of Quality of Service (QoS) to a consumer and to the ability of the consumer to request high-level QoS, such as the number of CPUs or the amount of RAM required. QoS refers to the guidelines and requirements requested by a user/consumer from the service providers and resources. The communication and agreement-establishment processes between user and provider must be defined clearly to accommodate a new type of user for whom knowledge of the underlying infrastructure cannot be assumed. QoS parameters have generally been defined at the Grid resource level using low-level definitions. This caters to specific applications and models related to scientific domains, where brokering, scheduling and QoS delivery are designed for specific applications within specific domains. This thesis presents a flexible model for high-level QoS requests. Business Grid Quality of Service (BGQoS) is introduced for business-oriented and commercial Grid applications which may wish to make use of the resources made available by Grid system environments. BGQoS allows GRCs (Grid Resource Consumers) to specify varying types of high-level QoS requirements, which are delivered via querying up-to-date resource information, matchmaking and monitoring operations. Moreover, we present dynamically calculated metrics for measuring QoS, such as reliability, increasing the accuracy of meeting the GRC's requirements. On the other hand, GRPs (Grid Resource Providers) are also capable of advertising their resources, their capabilities, their usage policies and their availability, both locally and globally. This leads to a flexible model that could be carried across domains without altering the core operations and which could easily be expanded in order to accommodate different types of GRC, resources and applications.
APA, Harvard, Vancouver, ISO, and other styles
22

Paladugula, Jithendar. "Parallelization of light scattering spectroscopy and its integration with computational grid environments." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0005280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Franco, Patrícia Batista [UNESP]. "Escalonamento de tarefas em ambiente de simulação de grid computacional." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/98681.

Full text
Abstract:
Many efforts have been made to develop scheduling policies for computational grids. The use of computational grid simulators is particularly important for studying task-scheduling algorithms. Through the simulators it is possible to assess and compare the performance of different algorithms in various scenarios. Although the simulation tools provide basic features for simulating distributed environments, they do not offer internal task-scheduling policies, so the implementation of the algorithms must be carried out by the user himself. Therefore, this study aims to develop the task-scheduling library LIBTS (Library Tasks Scheduling) and adapt it to the SimGrid simulator to provide users with a tool to analyze task-scheduling algorithms in computational grids. Through LIBTS, users can compare the implemented algorithms (Workqueue (WQ), Workqueue with Replication (WQR), Sufferage, XSufferage, Dynamic FPLTF) in several scenarios, as well as develop and implement new task-scheduling algorithms. This work also provides a literature review about computational grids, presenting the characteristics and methodologies of the algorithms implemented in LIBTS and the most important features of the simulation tools. Furthermore, the test scenarios created to compare the algorithms validated the library operation and the correct operation of the algorithms in LIBTS.
APA, Harvard, Vancouver, ISO, and other styles
24

Killian, Rudi. "Dynamic superscalar grid for technical debt reduction." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2726.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018.
Organizations and private individuals look to technology advancements to increase their ability to make informed decisions. The motivation for technology adoption by these entities sprouts from an innate need for value generation. The technology currently heralded as the future platform to facilitate value addition is popularly termed cloud computing. The move to cloud computing, however, may conceivably accelerate the obsolescence cycle for currently retained Information Technology (IT) assets. The term obsolescence is applied here as the inability to repurpose or scale an information system resource for needed functionality. The incapacity to reconfigure, grow or shrink an IT asset, be it hardware or software, is a well-known narrative of technical debt. The notion of emergent technical debt realities is professed to be all but inevitable when informed by Moore's Law, as technology must inexorably advance. Of more imminent concern, however, is that the major accelerating factors of technical debt are deemed to be non-holistic conceptualization and design conventions. Should management of IT assets fail to address technical debt continually, the technology platform would predictably require replacement. The unrealized value, functional and fiscal loss, together with the resultant e-waste generated by technical debt, is meaningfully unattractive. Historically, the cloud milieu evolved from the grid and clustering paradigms, which allowed for information sourcing across multiple and often dispersed computing platforms. Parallel operations in distributed computing environments are inherently value-adding, as enhanced effective use of resources and efficiency in data handling may be achieved. The predominant information processing solutions that implement parallel operations in distributed environments are abstracted constructs, styled as High Performance Computing (HPC) or High Throughput Computing (HTC). Regardless of the underlying distributed environment, the archetypes of HPC and HTC differ radically in standard implementation. The foremost contrasting factors of parallelism granularity, failover and locality in data handling have recently been the subject of greater academic discourse towards possible fusion of the two technologies. In this research, we uncover probable platforms of future technical debt and subsequently recommend redeployment alternatives. The suggested alternatives take the form of scalable grids, which should provide alignment with the contemporary nature of individual information processing needs. The potential of grids as efficient and effective information sourcing solutions across geographically dispersed heterogeneous systems is envisioned to reduce or delay aspects of technical debt. As part of an experimental investigation to test the plausibility of these concepts, artefacts are designed to generically implement HPC and HTC. The design features exposed by the experimental artefacts could provide insights towards the amalgamation of HPC and HTC.
APA, Harvard, Vancouver, ISO, and other styles
25

Markham-Smith, Damian. "A novel meshless method based on the point-structured grid system and solver." Thesis, Brunel University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Fang, Kuan-Chieh. "Unsteady Incompressible Flow Analysis Using C-Type Grid with a Curved Branch Cut." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962376293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Fernandes, de Carvalho Costa Genaro. "Automatic dynamic tuning of parallel/distributed applications on computational grids." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/5771.

Full text
Abstract:
When moving to Grid computing, parallel applications face several performance problems. The system characteristics are different in each execution and sometimes change within the same execution. Remote resources share network links and, in some cases, the processes share machines using per-core allocation. In such scenarios we propose to use automatic performance-tuning techniques to help an application adapt itself as the system changes, in order to overcome performance bottlenecks.
This thesis analyzes the problems of parallel application execution in Computational Grids, the available tools for performance analysis, and models suited to automatic dynamic tuning in such environments. From this analysis, we propose a system architecture for automatic dynamic tuning of parallel applications on computational Grids named GMATE. Its architecture includes several contributions. In cases where a Grid meta-scheduler decides application mapping, we propose two process-tracking approaches that enable GMATE to locate where a Grid middleware maps application processes. One approach consists of integrating GMATE components into the Grid middleware. The other involves embedding a GMATE component inside the application binaries. The first requires site administration privileges, while the other increases the size of the application binary, which slows down application startup.
To obey organizational policies, all communications use the same application security certificates for authentication and are performed using the Grid middleware API. That approach enables the monitoring and tuning process to adapt dynamically to organizational firewall restrictions and network usage policies.
To lower the communication needs of GMATE, we encapsulate part of the logic required to collect measurements and change application parameters in components that run inside the application's process space. For measurement collection, we create sensor components that reduce the communication needs by processing events inside the process space. Unlike traditional instrumentation, sensors can postpone event transmission and perform basic operations such as summarization, timers, averages or threshold-based event generation. That reduces the communication requirements in cases where network bandwidth is expensive. We also encapsulate the modifications used to tune the application in components called actuators. Actuators may be installed at some point in the program's execution flow and provide synchronization and low-overhead control of application variables and function executions. As sensors and actuators can communicate with each other, simple tuning can be performed within a process execution without the need for communication.
As the dynamic tuning is performance-model-centric, we need a performance model that can be used on heterogeneous processors and networks such as Grid systems. We propose a heuristic performance model to find the maximum number of workers and the best grain size for a Master-Worker execution in such systems. We assume that some classes of application can be built capable of changing grain size at runtime and that this change can modify an application's compute-communication ratio. When users request a set of resources for a parallel execution, they may receive a multi-cluster configuration. The heuristic model allows the set of resources to be shrunk without increasing the application execution time. The idea is to reach the maximum number of workers the master can use, giving higher priority to the faster ones.
We present the results of dynamically tuning the grain size and the number of workers in Master-Worker applications on Grid systems, lowering the total application execution time while raising system efficiency. We used implementations of matrix multiplication, N-Body and synthetic workloads to try out different compute-communication ratio changes with different grain-size selections.
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Jinjun. "Towards effective and efficient temporal verification in grid workflow systems." Australasian Digital Thesis Program, 2007. http://adt.lib.swin.edu.au/public/adt-VSWT20070424.112326/index.html.

Full text
Abstract:
Thesis (Ph.D) - Swinburne University of Technology, Faculty of Information & Communication Technologies, Centre for Information Technology Research, 2007.
A thesis to CITR - Centre for Information Technology Research, Faculty of Information and Communication Technologies, Swinburne University of Technology, for the degree of Doctor of Philosophy, 2007. Typescript. Bibliography p. 145-160.
APA, Harvard, Vancouver, ISO, and other styles
29

Luyt, Leslie. "Automated grid fault detection and repair." Thesis, Rhodes University, 2012. http://hdl.handle.net/10962/d1006693.

Full text
Abstract:
With the rise in interest in the field of grid and cloud computing, it is becoming increasingly necessary for the grid to be easily maintainable. This maintenance of the grid and grid services can be made easier by using an automated system to monitor and repair the grid as necessary. We propose a novel system to perform automated monitoring and repair of grid systems. To the best of our knowledge, no such systems exist. The results show that certain faults can be easily detected and repaired.
APA, Harvard, Vancouver, ISO, and other styles
30

Abraham, G. T. "Group-based parallel multi-scheduling methods for grid computing." Thesis, Coventry University, 2016. http://curve.coventry.ac.uk/open/items/9e286ae2-da74-42c9-978b-65d5eb3e3857/1.

Full text
Abstract:
With the advent of multicore computers, the scheduling of Grid jobs can be made more effective if it is scaled to fully utilize the underlying hardware and parallelized to benefit from the exploitation of multicores. The fact that sequential algorithms neither scale with multicore systems nor benefit from parallelism remains a major challenge to scheduling in the Grid. As multicore systems become ever more pervasive in our computing lives, over-reliance on such systems for passive parallelism does not offer the best option for harnessing the benefits of their multiprocessors for Grid scheduling. An explicit means of exploiting parallelism for Grid scheduling is required. The Group-based Parallel Multi-scheduler for the Grid introduced in this work is aimed at effectively exploiting the benefits of multicore systems for Grid job scheduling by splitting jobs and machines into paired groups and independently multi-scheduling jobs in parallel from the groups. The Priority method splits jobs into four priority groups based on job attributes and uses two methods (SimTog and EvenDist) to group machines. The scheduling is then carried out using the MinMin algorithm within the discrete group pairs. The Priority method was implemented and compared with the MinMin scheduling algorithm without grouping (named ordinary MinMin in this research). The analysis of results against ordinary MinMin shows substantial improvement in speedup and gains in scheduling efficiency. In addition, the Execution Time Balanced (ETB) and Execution Time Sorted then Balanced (ETSB) methods were also implemented to group jobs, in order to improve on some deficiencies found with the Priority method. The two methods used the same machine-grouping methods as the Priority method, but were able to vary the number of groups and exploited different means of grouping jobs to ensure the equitability of jobs in groups. The MinMin Grid scheduling algorithm was then executed independently within the discrete group pairs. Results and analysis show that the ETB and ETSB methods gain still further improvement over MinMin compared to the Priority method. The conclusion is reached that grouping jobs and machines before scheduling improves scheduling efficiency significantly.
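For reference, the MinMin heuristic applied inside each group pair can be sketched as below; the grouping step itself is omitted, and the ETC (estimated time to compute) values and machine names are illustrative, not taken from the thesis:

```python
def min_min(etc, machines):
    """etc: dict job -> dict machine -> estimated time to compute that job on that machine.
    Classic MinMin: repeatedly pick the (job, machine) pair with the smallest completion
    time, assign it, and advance that machine's ready time."""
    ready = {m: 0.0 for m in machines}
    schedule = {}
    unscheduled = set(etc)
    while unscheduled:
        # for each job, its best machine and completion time given current ready times
        best = {j: min((ready[m] + etc[j][m], m) for m in machines) for j in unscheduled}
        job = min(unscheduled, key=lambda j: best[j][0])   # job with the minimum "min completion time"
        finish, machine = best[job]
        schedule[job] = machine
        ready[machine] = finish
        unscheduled.remove(job)
    return schedule

etc = {"a": {"m1": 4, "m2": 6}, "b": {"m1": 3, "m2": 5}, "c": {"m1": 9, "m2": 2}}
print(min_min(etc, ["m1", "m2"]))   # -> {'c': 'm2', 'b': 'm1', 'a': 'm1'}
```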
APA, Harvard, Vancouver, ISO, and other styles
31

Franco, Patrícia Batista. "Escalonamento de tarefas em ambiente de simulação de grid computacional /." São José do Rio Preto : [s.n.], 2011. http://hdl.handle.net/11449/98681.

Full text
Abstract:
Advisor: Roberta Spolon
Committee: Sarita Mazzini Brushi
Committee: Marcos Antônio Cavenaghi
Abstract: Many efforts have been made to develop scheduling policies for computational grids. The use of computational grid simulators is particularly important for studying task-scheduling algorithms. Through the simulators it is possible to assess and compare the performance of different algorithms in various scenarios. Although the simulation tools provide basic features for simulating distributed environments, they do not offer internal task-scheduling policies, so the implementation of the algorithms must be carried out by the user himself. Therefore, this study aims to develop the task-scheduling library LIBTS (Library Tasks Scheduling) and adapt it to the SimGrid simulator to provide users with a tool to analyze task-scheduling algorithms in computational grids. Through LIBTS, users can compare the implemented algorithms (Workqueue (WQ), Workqueue with Replication (WQR), Sufferage, XSufferage, Dynamic FPLTF) in several scenarios, as well as develop and implement new task-scheduling algorithms. This work also provides a literature review about computational grids, presenting the characteristics and methodologies of the algorithms implemented in LIBTS and the most important features of the simulation tools. Furthermore, the test scenarios created to compare the algorithms validated the library operation and the correct operation of the algorithms in LIBTS.
Master's
APA, Harvard, Vancouver, ISO, and other styles
32

Coetzee, Serena Martha. "An analysis of a data grid approach for spatial data infrastructures." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-09272009-152926/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sharp, Richard Paul. "Computational approaches for diffusive light transport: finite-elements, grid adaption, and error estimation." Columbus, Ohio: Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1154705561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sharp, Richard Paul Jr. "Computational approaches for diffusive light transport: finite-elements, grid adaption, and error estimation." The Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1154705561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yu, Yingwei. "Computational role of disinhibition in brain function." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Maleki, Sasan. "Addressing the computational issues of the Shapley value with applications in the smart grid." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/383963/.

Full text
Abstract:
We consider the computational issues that arise in using the Shapley value in practical applications. Calculating the Shapley value involves computing the value of an exponential number of coalitions, which poses a significant computational challenge in two cases: (i) when the number of agents (players) is large (e.g., more than 20), and (ii) when the time complexity of the characteristic function is high. However, to date, researchers have aimed to address only the first case, although with limited success. To address the first issue, we focus on approximating the Shapley value. In more detail, building upon the existing sampling-based approaches, we propose an improved error bound for approximating the Shapley value using simple random sampling (SRS), which can be used in any superadditive game. Moreover, we put forward the use of stratified sampling, which can lead to smaller standard errors. We propose two methods for minimising the standard error in supermodular games and a class of games that have a property that we call order-reflecting. We show that among others, newsvendor games, which have applications in the smart grid, exhibit this property. Furthermore, to evaluate our approach, we apply our stratified sampling methods to an instance of newsvendor games consisting of 100 agents using real data. We find that the standard error of stratified sampling in our experiments is on average 48% lower than that of SRS. To address the second issue, we propose the characteristic function of the game be approximated. This way, calculating the Shapley value becomes straightforward. However, in order to maintain fairness, we argue that, in distributing the value of the grand coalition, agents' contribution to the complexity of the characteristic function must be taken into account. As such, we propose the bounded rational Shapley value, which, using the additivity axiom of the Shapley value, ensures that the share of each agent reflects its contribution to the difficulty of computing the coalition values. We demonstrate the usefulness of this approach in a demand response scenario where a number of apartments want to fairly divide the discount they receive for coordinating their cooling loads.
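As background for the sampling approach discussed above (the improved error bound and the stratification schemes themselves are not reproduced; the toy game and sample count are illustrative), the basic Monte Carlo estimator of an agent's Shapley value draws random permutations and averages that agent's marginal contributions:

```python
import random

def shapley_estimate(agents, value, i, samples=1000):
    """value: characteristic function mapping a frozenset of agents to a number.
    Estimates agent i's Shapley value by simple random sampling of permutations:
    the average of i's marginal contribution to the coalition of its predecessors."""
    total = 0.0
    for _ in range(samples):
        perm = agents[:]
        random.shuffle(perm)
        pred = frozenset(perm[:perm.index(i)])          # agents ordered before i
        total += value(pred | {i}) - value(pred)
    return total / samples

# Example: a toy symmetric (supermodular) game v(S) = |S|^2 with 5 agents;
# by symmetry the exact Shapley value of every agent is 25 / 5 = 5.0
agents = list(range(5))
v = lambda S: len(S) ** 2
print(shapley_estimate(agents, v, i=0))
```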
APA, Harvard, Vancouver, ISO, and other styles
37

CHEN, LIHUA. "THREE-DIMENSIONAL PHYSIOLOGICAL FLOW SIMULATION ON MULTI-BOX COMPUTATIONAL DOMAINS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1070573674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zhou, Dayi. "Scheduling for fast turnaround in peer-based desktop grid systems /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1188887261&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 136-144). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
39

Paspuleti, Swetha Yılmaz Levent. "Agent-mediated brokering and matchmaking for simulation model reuse on the semantic grid." Auburn, Ala., 2005. http://repo.lib.auburn.edu/2005%20Fall/Thesis/PASPULETI_SWETHA_36.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Buyya, Rajkumar 1970. "Economic-based distributed resource management and scheduling for grid computing." Monash University, School of Computer Science and Software Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kerdels, Jochen [Verfasser]. "A computational model of grid cells based on a recursive growing neural gas / Jochen Kerdels." Hagen : Fernuniversität Hagen, 2016. http://d-nb.info/1081558474/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Phillips, Tyrone. "Extrapolation-based Discretization Error and Uncertainty Estimation in Computational Fluid Dynamics." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/31504.

Full text
Abstract:
The solution to partial differential equations generally requires approximations that result in numerical error in the final solution. Of the different types of numerical error in a solution, discretization error is the largest and most difficult error to estimate. In addition, the accuracy of the discretization error estimates relies on the solution (or multiple solutions used in the estimate) being in the asymptotic range. The asymptotic range is used to describe the convergence of a solution, where an asymptotic solution approaches the exact solution at a rate proportional to the change in mesh spacing to an exponent equal to the formal order of accuracy. A non-asymptotic solution can result in unpredictable convergence rates, introducing uncertainty into discretization error estimates. To account for the additional uncertainty, various discretization uncertainty estimators have been developed. The goal of this work is to evaluate discretization error and discretization uncertainty estimators based on Richardson extrapolation for computational fluid dynamics problems. In order to evaluate the estimators, the exact solution should be known. A select set of solutions to the 2D Euler equations with known exact solutions are used to evaluate the estimators. Since exact solutions are only available for trivial cases, two applications governed by the Navier-Stokes equations are also used to evaluate the estimators: a laminar flat plate and a turbulent flat plate using the k-ω SST turbulence model. Since the exact solutions to the Navier-Stokes equations for these cases are unknown, numerical benchmarks are created which are solutions on significantly finer meshes than the solutions used to estimate the discretization error and uncertainty. Metrics are developed to evaluate the accuracy of the error and uncertainty estimates and to study the behavior of each estimator when the solutions are in, near, and far from the asymptotic range. Based on the results, general recommendations are made for the implementation of the error and uncertainty estimators. In addition, a new uncertainty estimator is proposed with the goal of combining the favorable attributes of the discretization error and uncertainty estimators evaluated. The new estimator is evaluated using numerical solutions which were not used for development and shows improved accuracy over the evaluated estimators.
Master of Science
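A minimal sketch of the Richardson-extrapolation relations the abstract builds on, assuming three solutions on systematically refined meshes with a constant refinement factor; the function name and numeric values are illustrative, and the thesis's specific uncertainty estimators are not reproduced here:

```python
import math

def richardson_estimate(f1, f2, f3, r):
    """Observed order of accuracy and discretization error from three meshes.

    f1, f2, f3: solutions on the fine, medium and coarse meshes, obtained by
    systematic refinement with a constant grid refinement factor r (> 1).
    Assumes monotone convergence, i.e. the solutions are in the asymptotic range.
    """
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
    correction = (f1 - f2) / (r**p - 1.0)               # estimate of (exact - f1)
    f_exact = f1 + correction                           # Richardson-extrapolated value
    return p, f_exact, -correction                      # last item: estimated error in f1

# Invented values for a quantity converging at roughly second order as the mesh is halved.
print(richardson_estimate(f1=0.97501, f2=0.97504, f3=0.97516, r=2.0))
```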
APA, Harvard, Vancouver, ISO, and other styles
43

Stokes-Rees, Ian. "A REST model for high throughput scheduling in computational grids." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442954.

Full text
Abstract:
Current grid computing architectures have been based on cluster management and batch queuing systems, extended to a distributed, federated domain. These have shown shortcomings in terms of scalability, stability, and modularity. To address these problems, this dissertation applies architectural styles from the Internet and Web to the domain of generic computational grids. Using the REST style, a flexible model for grid resource interaction is developed which removes the need for any centralised services or specific protocols, thereby allowing a range of implementations and layering of further functionality. The context for resource interaction is a generalisation and formalisation of the Condor ClassAd match-making mechanism. This set theoretic model is described in depth, including the advantages and features which it realises. This RESTful style is also motivated by operational experience with existing grid infrastructures, and the design, operation, and performance of a proto-RESTful grid middleware package named DIRAC. This package was designed to provide for the LHCb particle physics experiment’s “off-line” computational infrastructure, and was first exercised during a 6 month data challenge which utilised over 670 years of CPU time and produced 98 TB of data through 300,000 tasks executed at computing centres around the world. The design of DIRAC and performance measures from the data challenge are reported. The main contribution of this work is the development of a REST model for grid resource interaction. In particular, it allows resource templating for scheduling queues which provide a novel distributed and scalable approach to resource scheduling on the grid.
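As a rough sketch of the match-making idea the dissertation generalises (this is neither the ClassAd language nor the thesis's set-theoretic model, and the attribute names are invented), a task and a resource match when each side's requirements predicate accepts the other's attributes:

```python
def matches(task, resource):
    """Symmetric ClassAd-style match: each advert's 'requirements' predicate
    must accept the other advert's attribute dictionary."""
    return task["requirements"](resource) and resource["requirements"](task)

task = {
    "owner": "lhcb", "cpu_hours": 12,
    "requirements": lambda r: r.get("os") == "linux" and r.get("memory_mb", 0) >= 2048,
}
resources = [
    {"name": "cluster-a", "os": "linux", "memory_mb": 4096,
     "requirements": lambda t: t.get("cpu_hours", 0) <= 24},
    {"name": "cluster-b", "os": "windows", "memory_mb": 8192,
     "requirements": lambda t: True},
]
print([r["name"] for r in resources if matches(task, r)])   # -> ['cluster-a']
```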
APA, Harvard, Vancouver, ISO, and other styles
44

Dorica, Mark. "Novel electromagnetic design system enhancements using computational intelligence strategies." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102972.

Full text
Abstract:
This thesis presents a wide spectrum of novel extensions and enhancements to critical components of modern electromagnetic analysis and design systems. These advancements are achieved through the use of computational intelligence, which comprises neural networks, evolutionary algorithms, and fuzzy systems. These tools have been proven in myriad industrial applications ranging from computer network optimization to heavy machinery control.
The analysis module of an electromagnetic analysis and design system typically comprises mesh generation and mesh improvement stages. A novel method for discovering optimal orderings of mesh improvement operators is proposed and leads to a suite of novel mesh improvement techniques. The new techniques outperform existing methods in both mesh quality improvement and computational cost.
The remaining contributions pertain to the design module. Specifically, a novel space mapping method is proposed, which allows for the optimization of response surface models. The method is able to combine the accuracy of fine models with the speed of coarse models. Optimal results are achieved for a fraction of the cost of the standard optimization approach.
Models built from computational data often do not take into consideration the intrinsic characteristics of the data. A novel model building approach is proposed, which customizes the model to the underlying responses and accelerates searching within the model. The novel approach is able to significantly reduce model error and accelerate optimization.
Automatic design schemes for 2D structures typically preconceive the final design or create an intractable search space. A novel non-preconceived approach is presented, which relies on a new genome structure and genetic operators. The new approach is capable of a threefold performance improvement and improved manufacturability.
Automatic design of 3D wire structures is often based on "in-series" architectures, which limit performance. A novel technique for automatic creative design of 3D wire antennas is proposed. The antenna structures are grown from a starting wire and invalid designs are avoided. The high quality antennas that emerge from this bio-inspired approach could not have been obtained by a human designer and are able to outperform standard designs.
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Tianqi. "An architecture to support scalable distributed virtual environment systems on grid." Click to view the E-thesis via HKUTO, 2004. http://sunzi.lib.hku.hk/hkuto/record/B31473374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Severini, Marco. "Energy and resources management in Micro Grid environments." Doctoral thesis, Università Politecnica delle Marche, 2017. http://hdl.handle.net/11566/245444.

Full text
Abstract:
Although Micro Grid technologies are still in the experimental phase, the potential improvement in efficiency, robustness and flexibility is significant. Energy waste and load swings can be greatly reduced; nonetheless, an automated system that properly manages the resources is required to fully develop the potential of the available resources. To this end, an energy management approach based on Mixed Integer Linear Programming (MILP) has been investigated, implemented and proposed. The dissertation covers the theoretical aspects of the problem, such as the MILP management approach, the model of a Micro Grid for two of the most common scenarios, and the algorithms that support the management system. Experiments have shown the effectiveness of the approach in terms of management efficiency and robustness. To improve the management, modelling the behaviour of a real-life photovoltaic power plant has been deemed necessary. By taking into account the effect of partial shading, the actual performance of a plant can be evaluated and the accuracy of the forecast of solar energy production improved. Additionally, to feed the state of the system back to the manager, an algorithm that monitors the activity of each appliance within the system through the analysis of the aggregated energy consumption has been investigated. To support the management activity, a scheduling algorithm aimed at ultra-low-power devices has also been proposed and implemented, as a means to develop sensor devices powered by renewable energy supplies. This type of sensor can be effectively used in automated meter reading systems to provide the manager with information relating to water and gas consumption. Furthermore, a leakage detection algorithm has been developed and investigated to differentiate actual consumption from resource waste.
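As a hypothetical sketch of the kind of MILP-based scheduling the abstract describes (not the thesis's model; the horizon, prices and device data are invented, and the PuLP library is assumed to be available), one can minimise the cost of grid imports subject to an energy balance, battery limits and one deferrable load:

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

# Invented 4-hour horizon (energy in kWh per step, prices per kWh).
load  = [2.0, 3.0, 4.0, 2.5]          # fixed household demand
solar = [0.0, 1.5, 2.5, 0.5]          # forecast PV production
price = [0.10, 0.20, 0.30, 0.15]      # cost of imported energy
cap, soc0, rate = 5.0, 2.0, 2.0       # battery capacity, initial charge, max charge/discharge

prob = LpProblem("microgrid_schedule", LpMinimize)
grid = [LpVariable(f"grid_{t}", lowBound=0) for t in range(4)]
chg  = [LpVariable(f"chg_{t}", lowBound=0, upBound=rate) for t in range(4)]
dis  = [LpVariable(f"dis_{t}", lowBound=0, upBound=rate) for t in range(4)]
run  = [LpVariable(f"run_{t}", cat=LpBinary) for t in range(4)]   # 1.5 kWh deferrable task

prob += lpSum(price[t] * grid[t] for t in range(4))               # minimise import cost
prob += lpSum(run) == 1                                           # the task runs exactly once
for t in range(4):
    # energy balance: imports + PV + battery discharge cover demand + charging
    prob += grid[t] + solar[t] + dis[t] == load[t] + 1.5 * run[t] + chg[t]
    soc = soc0 + lpSum(chg[k] - dis[k] for k in range(t + 1))     # battery state of charge
    prob += soc >= 0
    prob += soc <= cap

prob.solve()
print("imports:", [value(g) for g in grid], "task slot:", [value(r) for r in run])
```

In a rolling-horizon manager of the kind described, such a problem would be re-solved at each step as the forecasts of load and PV production are updated.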
APA, Harvard, Vancouver, ISO, and other styles
47

Akimana, Rachel. "Grid Fault management techniques: the case of a Grid environment with malicious entities." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210460.

Full text
Abstract:

Fault tolerance and fault management in data/computing grids are of paramount importance. Indeed, as in any other distributed system, the components of a grid are liable to fail at any moment. But the risk of failure grows with the size of the system, and is therefore even more pronounced in a grid. Moreover, while trying to take advantage of the resources offered by the grid, the applications running on it are increasingly complex (e.g., they involve complex interactions and take days to execute), which makes them more vulnerable to faults. The most difficult aspect of fault management in a grid is that it is hard to know whether a fault occurring on a grid entity was induced maliciously or accidentally.

In this thesis, we use the term fault, in a broad sense, to refer to any unexpected state that arises on any component of the grid. Some of these states cause equally unexpected behaviour that is observable at the grid level, while others go unnoticed. Moreover, some of these faults are the result of a malicious action, while others occur accidentally or spontaneously. In this thesis we have addressed the case of faults that are induced maliciously and that generally go unnoticed. In particular, we considered the problem of the confidentiality and integrity of data stored long-term on the grid.

The study of data confidentiality was carried out in two stages, the first of which concerns the confidentiality of active data. In this part, we considered an application related to searching for similarities between a DNA sequence and a database of DNA sequences stored on the grid. For this, we proposed a method that allows the comparison to be performed on a remote component while keeping the sequence being compared confidential.

Concerning passive data, we proposed a method for sharing confidential, encrypted data on the grid.

With regard to data integrity, we considered the case of anonymous data in the context of the integrity of passive data. For active data, we considered the problem of the corruption of jobs executed on the grid. For each case, we proposed mechanisms for verifying the authenticity of the data used or produced by these applications.

Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Jinjun. "Towards effective and efficient temporal verification in grid workflow systems." Swinburne University of Technology, 2007. http://adt.lib.swin.edu.au./public/adt-VSWT20070424.112326.

Full text
Abstract:
In grid architecture, a grid workflow system is a type of high-level grid middleware which aims to support large-scale sophisticated scientific or business processes in a variety of complex e-science or e-business applications such as climate modelling, disaster recovery, medical surgery, high energy physics, international stock market modelling and so on. Such sophisticated processes often contain hundreds of thousands of computation or data intensive activities and take a long time to complete. In reality, they are normally time constrained. Correspondingly, temporal constraints are enforced when they are modelled or redesigned as grid workflow specifications at build-time. The main types of temporal constraints include upper bound, lower bound and fixed-time. Then, temporal verification would be conducted so that we can identify any temporal violations and handle them in time. Conventional temporal verification research and practice have presented some basic concepts and approaches. However, they have not paid sufficient attention to overall temporal verification effectiveness and efficiency. In the context of grid economy, any resources for executing grid workflows must be paid for. Therefore, more resources should be used mainly for execution of the grid workflow itself rather than for temporal verification. Poor temporal verification effectiveness or efficiency would cause more resources to be diverted to temporal verification. Hence, temporal verification effectiveness and efficiency become a prominent issue and deserve an in-depth investigation. This thesis systematically investigates the limitations of conventional temporal verification in terms of temporal verification effectiveness and efficiency. The detailed analysis of temporal verification effectiveness and efficiency is conducted for each step of a temporal verification cycle. There are four steps in total: Step 1 - defining temporal consistency; Step 2 - assigning temporal constraints; Step 3 - selecting appropriate checkpoints; and Step 4 - verifying temporal constraints. Based on the investigation and analysis, we propose some new concepts and develop a set of innovative methods and algorithms towards more effective and efficient temporal verification. Comparisons, quantitative evaluations and/or mathematical proofs are also presented at each step of the temporal verification cycle. These demonstrate that our new concepts, innovative methods and algorithms can significantly improve overall temporal verification effectiveness and efficiency. Specifically, in Step 1, we analyse the limitations of the two temporal consistency states defined by conventional verification work. Afterwards, we propose four new states towards better temporal verification effectiveness. In Step 2, we analyse the necessity of a number of temporal constraints in terms of temporal verification effectiveness. We then design a novel algorithm for assigning a series of fine-grained temporal constraints within a few user-set coarse-grained ones. In Step 3, we discuss the problem of existing representative checkpoint selection strategies in terms of temporal verification effectiveness and efficiency: they often ignore some necessary checkpoints and/or select some unnecessary ones. To solve this problem, we develop an innovative strategy and corresponding algorithms which only select sufficient and necessary checkpoints. In Step 4, we investigate a phenomenon which is ignored by existing temporal verification work, i.e. temporal dependency. Temporal dependency means temporal constraints are often dependent on each other in terms of their verification. We analyse its impact on overall temporal verification effectiveness and efficiency. Based on this, we develop novel temporal verification algorithms which can significantly improve overall temporal verification effectiveness and efficiency. Finally, we present an extension to our research on handling temporal verification results, since these verification results are based on our four new temporal consistency states. The major contributions of this research are that we have provided a set of new concepts, innovative methods and algorithms for temporal verification in grid workflow systems. With these, we can significantly improve overall temporal verification effectiveness and efficiency. This would eventually improve the overall performance and usability of grid workflow systems, because temporal verification can be viewed as a service or function of grid workflow systems. Consequently, by deploying the new concepts, innovative methods and algorithms, grid workflow systems would be able to better support large-scale sophisticated scientific and business processes in complex e-science and e-business applications in the context of grid economy.
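A minimal sketch of checkpoint-time verification of an upper-bound temporal constraint; the consistency labels and the two-estimate test below are illustrative only and do not reproduce the four consistency states defined in the thesis:

```python
def verify_upper_bound(elapsed, remaining_mean, remaining_max, deadline):
    """Check an upper-bound temporal constraint at a workflow checkpoint.

    elapsed: time already consumed by completed activities in the constrained region.
    remaining_mean / remaining_max: estimated mean and maximum durations of the
    activities still to be executed in that region.
    deadline: the upper-bound constraint assigned at build-time.
    """
    if elapsed + remaining_max <= deadline:
        return "strongly consistent"     # even pessimistic estimates fit the bound
    if elapsed + remaining_mean <= deadline:
        return "weakly consistent"       # fits on average; keep checking at later checkpoints
    return "inconsistent"                # a violation is likely; trigger exception handling

print(verify_upper_bound(elapsed=40.0, remaining_mean=25.0, remaining_max=45.0, deadline=90.0))
```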
APA, Harvard, Vancouver, ISO, and other styles
49

Jones, Michael Angus Scott. "Using AFS as a distributed file system for computational and data grids in high energy physics." Thesis, University of Manchester, 2005. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:181210.

Full text
Abstract:
The use of the distributed file system AFS as a solution to the “input/output sandbox” problem in grid computing is studied. A computational grid middleware, designed primarily to accommodate the environment of the BaBar Computing Model, has been written and is presented. A summary of the existing grid middleware and resources is given. A number of benchmarks (one written for this thesis) are used to test the performance of AFS over the wide area network and grid environment. The performance of AFS is also tested using a straightforward BaBar analysis code on real data. Secure web-based and command-line interfaces created to monitor job submission and grid fabric are presented.
APA, Harvard, Vancouver, ISO, and other styles
50

Lau, Chung Yin. "Computational stress analysis for ball grid array reliability and passive component reliability in board level assemblies /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?MECH%202005%20LAU.

Full text
APA, Harvard, Vancouver, ISO, and other styles