Dissertations / Theses on the topic 'Parallel systems and technologies'

Consult the top 50 dissertations / theses for your research on the topic 'Parallel systems and technologies.'


1

Noghani, Waheed Bazazan. "An investigation into the implementation of a cost-effective ASP architecture using VLSI and WSI technologies and its effect on modular-MPC systems." Thesis, Brunel University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295196.

2

Khan, Muhammad Sikandar Lal. "Enabling physical action in computer mediated communication : an embodied interaction approach." Licentiate thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108569.

3

Jakušev, Aleksandr. "Diferencialinių lygčių ir jų sistemų skaitinio sprendimo algoritmų lygiagretinimo technologijos kūrimas, analizė ir taikymai." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20090217_160637-75872.

Abstract:
This work presents a new technology for parallelizing the linear algebra algorithms that arise in the solution of differential equations and systems of differential equations, including PDE systems. The technology combines the strengths of the "data parallel" and "global memory" parallel programming models and, by exploiting the peculiarities of the given class of problems, makes it possible to write efficient code easily, with support for semi-automatic parallelization. The work consists of three parts: a review of existing technologies, a description of the new technology, and various applications.
4

Fagg, Graham Edward. "Enabling technologies for parallel heterogeneous computing." Thesis, University of Reading, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266150.

5

Teller, Justin Stevenson. "Scheduling Tasks on Heterogeneous Chip Multiprocessors with Reconfigurable Hardware." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1211985748.

6

Hulot, Carlos. "Parallel tracking systems." Thesis, University of Southampton, 1995. https://eprints.soton.ac.uk/264882/.

Abstract:
Tracking systems provide an important analysis technique that can be used in many different areas of science. A tracking system can be defined as the estimation of the dynamic state of moving objects based on 'inaccurate' measurements taken by sensors. The area encompasses a wide range of subjects, although the two most essential elements are estimation and data association. Tracking systems are applicable to relatively simple as well as more complex applications. These include air traffic control, ocean surveillance and control, sonar tracking, military surveillance, missile guidance, physics particle experiments, global positioning systems and aerospace. This thesis describes an investigation into state-of-the-art tracking algorithms and distributed memory architectures (Multiple Instruction Multiple Data systems, "MIMD") for parallel processing of tracking systems. The first algorithm investigated is the Interacting Multiple Model (IMM), which has recently been shown to be one of the most cost-effective in its class. IMM scalability is investigated for tracking single targets in a clean environment. Next, the IMM is coupled with a well-established Bayesian data association technique known as Probabilistic Data Association (PDA) to permit the tracking of a target in different clutter environments (IMMPDA). As in the previous case, IMMPDA scalability is investigated for tracking a single target in different clutter environments. In order to evaluate the effectiveness of these new parallel techniques, standard languages and parallel software systems (providing message-passing facilities) have been used. The main objective is to demonstrate how these complex algorithms can benefit in the general case from being implemented on parallel architectures.
7

Johnson, Robert David. "Parallel analytic tableaux systems." Thesis, Queen Mary, University of London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362777.

8

Ghanemi, Salim. "Non-numerical parallel algorithms for asynchronous parallel computer systems." Thesis, Loughborough University, 1987. https://dspace.lboro.ac.uk/2134/28016.

Abstract:
The work in this thesis covers mainly the design and analysis of many important Non-Numerical Parallel Algorithms that run on MIMD type Parallel Computer Systems (PCSs), in particular the NEPTUNE and the SEQUENT BALANCE 8000 PCSs available at Loughborough University of Technology.
9

Jeong, Byeong-Soo. "Indexing in parallel database systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/8189.

10

Braungardt, Sibylle. "Complex systems for quantum technologies." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/35871.

11

Sharma, Gunjana. "Heterogeneous Technologies for Microfluidic Systems." Doctoral thesis, Uppsala universitet, Mikrosystemteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-131109.

Abstract:
In this thesis, conventional and unconventional technologies have been studied and combined in order to make heterogeneous microfluidics with potential advantages, especially in biological applications. Many conventional materials, like silicon, glass, thermoplastic polymers, polyimide and polydimethylsiloxane (PDMS), have been combined in building heterogeneous microfluidic devices or demonstrators. Aside from these materials, unconventional materials for microfluidics such as stainless steel and the fluoroelastomer Viton have been explored. The advantages of the heterogeneous technologies presented were demonstrated in several examples: (1) In cell biology, for instance, surface properties play an important role. Different functions were achieved by combining microengineering and surface modification. Two examples were made by depositing a Teflon-like film: a) a non-textured surface was made hydrophobic to allow higher pressures for cell migration studies and b) a surface textured by ion track technology was even made super-hydrophobic. (2) In microfluidics, microactuators used for fluid handling are important, e.g. in valves and pumps. Here, microactuators that can handle high pressures were presented, which may allow miniaturization of high-performance bioanalyses that until now have been restricted to larger instruments. (3) In some applications the elastomer PDMS cannot be used due to its high permeability and poor solvent resistance. Viton can be a good replacement when elasticity is needed, as in the demonstrated paraffin-actuated membrane. (4) Sensing of biomolecules in aqueous solutions has potential for on-site diagnostics. A proof-of-principle demonstration of a potentially highly sensitive biosensor was made by integrating a robust solidly mounted resonator in a PDMS-based microfluidic system. It is concluded that heterogeneous technologies are important for microfluidic systems like micro total analysis systems (µTAS) and lab-on-chip (LOC) devices.
12

Fartaria, Luís Alberto Jorge. "Energy systems for transportation technologies." Master's thesis, Universidade de Aveiro, 2011. http://hdl.handle.net/10773/8726.

Abstract:
Master's in Mechanical Engineering
The objective of the work developed in this master's thesis is to implement a system for transporting people in underdeveloped mountainous regions of Nepal. It is based on an idea of Assoc. Prof. of Physics Lok Bahadur Baral of Tribhuvan University in Kathmandu (Nepal), who began developing a gravitational-potential-energy system for transportation. His work was conducted under the supervision of Prof. Kartnig at the Department of Engineering Design for Transport and Conveying Systems at Vienna University of Technology, in collaboration with Tribhuvan University. This part of the project develops the power supply for the infrastructure designed by David Guerrero in his bachelor thesis. The infrastructure is already designed to receive an engine that will provide the power necessary for this to be accomplished successfully. For the engine to work, the system needs an energy supply; that energy will come from a variety of renewable energy technologies.
13

Nader, Babak. "Parallel solution of sparse linear systems." Full text open access at:, 1987. http://content.ohsu.edu/u?/etd,138.

14

Menich, Ronald Paul. "Resource allocation in parallel processing systems." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/28049.

15

Wesson, Paul John. "Parallel algorithms for systems of equations." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315776.

16

Manoharan, Sathiamoorthy. "Task assignment in parallel processor systems." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/6568.

Abstract:
A generic object-oriented simulation platform is developed in order to conduct experiments on the performance of assignment schemes. The simulation platform, called Genesis, is generic in the sense that it can model the key parameters that describe a parallel system: the architecture, the program, the assignment scheme and the message routing strategy. Genesis uses as its basis a sound architectural representation scheme developed in the thesis. The thesis reports results from a number of experiments assessing the performance of assignment schemes using Genesis. The comparison results indicate that the new assignment scheme proposed in this thesis is a promising alternative to the work-greedy assignment schemes. The proposed scheme has a time-complexity less than those of the work-greedy schemes and achieves an average performance better than, or comparable to, those of the work-greedy schemes. To generate an assignment, some parameters describing the program model will be required. In many cases, accurate estimation of these parameters is hard. It is thought that inaccuracies in the estimation would lead to poor assignments. The thesis investigates this speculation and presents experimental evidence that shows such inaccuracies do not greatly affect the quality of the assignments.
17

Beaumont, Anthony Joseph. "Scheduling in Or-parallel PROLOG systems." Thesis, University of Bristol, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.281798.

18

Smith, Mark. "Spatial reaction systems on parallel supercomputers." Thesis, University of Edinburgh, 1994. http://hdl.handle.net/1842/12985.

Abstract:
A wide variety of physical, chemical and biological systems can be represented as a collection of discrete spatial locations within which some interaction proceeds, and between which reactants diffuse or migrate. Many such real-world spatial reaction systems are known to be both non-linear and stochastic in nature, and thus studies of these systems have generally relied upon analytic approximation and computer simulation. However, this latter approach can become impractical for large, complex systems which require massive computational resources. In this work we analyse a general spatial reaction system in both the deterministic and stochastic scenarios. A study of the deterministic parameter space reveals a new categorisation for system development in terms of its criticality. This result is then coupled with a complete analysis of the linearised stochastic system, in order to provide an understanding of the spatio-temporal covariance structures within reactant distributions. In addition to an analysis, and empirical confirmation, of the various criticality behaviours in both deterministic and stochastic cases, we use our theoretical results to enable efficient implementation of spatial reaction system simulations on parallel supercomputers. Such novel computing resources are necessary to enable the study of realistic-scale, long-term stochastic activity; however, they are notoriously difficult to exploit. We have therefore developed advanced programming and implementation techniques, concentrating mainly on dynamic load-balancing methodologies, to enable such studies. These techniques make direct use of our analytic results in order to achieve the most efficient exploitation of supercomputing resources, given the particular attributes of the system under study. These new techniques have allowed us to investigate complex individual-based systems on a previously untried scale. In addition, they are of general applicability to a wide range of real-world simulations.
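Systems of this kind, discrete sites with local reactions and diffusion between them, are commonly simulated with Gillespie's stochastic simulation algorithm (SSA). The thesis's model and its parallel, load-balanced implementation are not reproduced here; the sketch below is a minimal sequential SSA step for an assumed birth/death/hopping reaction set on a ring of sites, and it assumes a positive birth rate so that some reaction is always possible:

```python
import random

def gillespie_step(counts, birth, death, hop, rng):
    """One step of Gillespie's SSA for a ring of sites holding particle
    counts, with per-site birth (constant rate), death (rate ~ count) and
    nearest-neighbour hopping (diffusion, rate ~ count).
    Mutates `counts` in place and returns the simulated time advance."""
    n = len(counts)
    # Enumerate every possible reaction with its propensity (rate).
    props = [(birth, ("birth", i)) for i in range(n)]
    for i, c in enumerate(counts):
        if c > 0:
            props.append((death * c, ("death", i)))
            props.append((hop * c, ("hop", i, (i + 1) % n)))
            props.append((hop * c, ("hop", i, (i - 1) % n)))
    total = sum(p for p, _ in props)
    # Time to the next reaction is exponential in the total propensity.
    dt = rng.expovariate(total)
    # Choose one reaction with probability proportional to its propensity.
    r = rng.random() * total
    for p, event in props:
        r -= p
        if r <= 0:
            break
    if event[0] == "birth":
        counts[event[1]] += 1
    elif event[0] == "death":
        counts[event[1]] -= 1
    else:                                  # hop: move one particle
        counts[event[1]] -= 1
        counts[event[2]] += 1
    return dt
```

Parallel implementations like the one the thesis describes partition the sites across processors and must handle the boundary (diffusion) reactions and load imbalance that this sequential loop sidesteps.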
19

Kelly, Thomas. "Optimizing hardware granularity in parallel systems." Thesis, University of Edinburgh, 1995. http://hdl.handle.net/1842/15141.

Abstract:
In order for parallel architectures to be of significant use in providing superior performance to uniprocessors, the benefits of splitting the workload among several processing elements must outweigh the overheads associated with this "divide and conquer" strategy. Whether or not this is the case depends on the nature of the algorithm and on the cost:performance functions associated with the real computer hardware available at a given time. This thesis is an investigation into the tradeoff of grain of hardware versus speed of hardware, in an attempt to show how the optimal hardware parallelism can be assessed. A model is developed of the execution time T of an algorithm on a machine as a function of the number of nodes, N. The model is used to examine the degree to which it is possible to obtain an optimal value of N, corresponding to minimum execution time. Specifically, the optimization is investigated assuming a particular base architecture, an algorithm or class thereof, and an overall hardware cost. Two base architectures and algorithm types are considered, corresponding to two common classes of parallel architectures: a shared memory multi-processor and a message-passing multi-computer. The former is represented by a simple shared-bus multi-processor in which each processing element performs operations on data stored in a global shared store. The latter is represented by a two-dimensional mesh-connected multi-computer. In this type of system all memory is considered private and data sharing is carried out using "messages" explicitly passed among the processing elements.
20

Roque, Pedro Miguel da Silva. "Contraint solving on massively parallel systems." Doctoral thesis, Universidade de Évora, 2020. http://hdl.handle.net/10174/27976.

Abstract:
Applying parallelism to constraint solving seems a promising approach and it has been done with varying degrees of success. Early attempts to parallelize constraint propagation, which constitutes the core of traditional interleaved propagation-and-search constraint solving, were hindered by its essentially sequential nature. Recently, parallelization efforts have focussed mainly on the search part of constraint solving. A particular source of parallelism has become pervasive, in the guise of GPUs, able to run thousands of parallel threads, and they have naturally drawn the attention of researchers in parallel constraint solving. This thesis addresses the challenges faced when using multiple devices for constraint solving, especially GPUs, such as deciding on the appropriate level of parallelism to employ, load balancing and inter-device communication. To overcome these challenges, new techniques were implemented in a new constraint solver, named the Parallel Heterogeneous Architecture Constraint Toolkit (PHACT), which allows the use of one or more CPUs, GPUs, Intel Many Integrated Cores (MICs) and any other OpenCL-compatible device to solve a constraint problem. Several tests were made to measure the capability of some GPUs to solve constraint problems, and the conclusions of these tests are described in this thesis. PHACT's architecture is presented, and its performance was measured on each of five machines, comprising eleven CPUs, six GPUs and two MICs. The tests used 10 constraint satisfaction problems, consisting of counting all solutions, finding one solution, or optimizing. Each of the problems was instantiated with up to three different dimensions. PHACT's performance was also compared with that of Gecode, Choco and OR-Tools. In the end, these tests made it possible to detect which techniques implemented in PHACT were already achieving the expected results, and to point out changes that may improve PHACT's performance.
21

Nordin, Syarifah Zyurina. "Task scheduling in parallel processor systems." Thesis, Curtin University, 2011. http://hdl.handle.net/20.500.11937/2510.

Abstract:
Task scheduling in parallel processing systems is one of the most challenging industrial problems. It typically arises in the manufacturing and service industries. The task scheduling problem is to determine an assignment of tasks to a set of parallel processors for execution so as to optimize a specified performance measure. The difficulty is that the schedule must satisfy a set of requirements as well as a range of environmental constraints. The problem is known to be NP-complete. In this study, we consider a non-preemptive task scheduling problem on identical and unrelated parallel processor systems. We are interested in the objective of minimizing the maximum completion time over the entire set of tasks (i.e. the makespan), so as to ensure a good load balance across the parallel processors. We consider three different task characteristics added to the classical task scheduling problem, in which a set of n independent tasks is to be assigned to m parallel processors. The first task characteristic we consider is on-line scheduling with release dates on an identical parallel processing system with a centralized queue and no task splitting. We focus on developing simple and efficient heuristics for this problem. Three heuristic algorithms are proposed to solve this non-deterministic problem of scheduling over time, where the availability of each task is restricted by its release date. Our approach uses a multi-step method in the task selection phase and a greedy search algorithm in the processor selection phase. The multi-step method reduces the non-determinism in on-line scheduling by partitioning the scheduling process into several procedures.
We introduce two procedures in the priority rule loop, which we refer to as Cluster Insertion and Local Cluster Interchange. Computational testing on randomly generated data is conducted using Microsoft Visual C++ 6.0 to examine the effectiveness of the proposed multi-step method against the optimal solution. Problems of different sizes are tested, ranging from 3 processors with 200 tasks up to 5 processors with 1000 tasks, with five clusters ranging from 10 to 50. The computational results show that all three heuristics perform very well, with the average gaps improving as the number of tasks in the system increases. The average gap for all three heuristics is less than 1.04% for the largest tested cases (i.e. 1000 tasks run on 5 processors). In the second problem, we add priority considerations to the basic task characteristics of unrelated parallel processor scheduling. The priority consideration is defined by a list of ordered independent tasks with priorities. A task must start processing after another task finishes on the same processor, according to the priority, but may start earlier if processed on another machine. Our aim is to develop Mixed Integer Linear Programming (MILP) models to obtain optimal solutions for three types of priority lists: ascending order, descending order and a general priority list. We validate the model using a case study taken from the literature. Computational testing is then carried out on the general priority list using the AIMMS 3.10 package with CPLEX 12.1 as the solver. Computational results show that the proposed MILP model is effective and produces optimal results for all tested cases.
The model is very efficient, as 95% of all instances, which are problems with up to 80 tasks assigned to 5 processors, are solved within 5 minutes of CPU time. In the final problem, we address task scheduling under a disruption that occurs on the parallel processor system. The disruption is caused by the unavailability of a processor during a certain time and is called a resource disruption. Our recovery solution for the disruption problem is a rescheduling approach. A MILP model is developed for rescheduling in the case of non-resumable tasks. A recovery model for the disrupted initial schedule with dummy insertion is proposed for predictive disruption management, and a match-up schedule for post-disruption management. To evaluate the model, computational testing is performed with different data sets. Different levels of disruption are considered with different weights in the objective function to observe the stability of the model. The optimal initial schedule and the rescheduling model are computed using the CPLEX 12.1 solver in the AIMMS 3.10 package. In our computational results we measure the stability rate, which compares the stability of the current schedule to the initial schedule in terms of task migration. The results show that stability improves as the number of tasks in the system increases, within a reasonable amount of time. Another interesting observation is that our model yields small average gaps of less than 7.99% within 300 seconds of CPU time for a large data set reaching 200 tasks on 10 processors. The average gaps are considerably small for the disruption problem, since the rescheduling model has to match up with the optimal initial schedule.
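The greedy processor-selection phase common to makespan heuristics like those above can be illustrated with the classic list-scheduling rule for identical processors: each task, taken in list order, goes to the processor that becomes free earliest. This is a generic sketch of that rule, not a reconstruction of the thesis's multi-step heuristics; the function name and data layout are assumptions:

```python
import heapq

def greedy_makespan(durations, m):
    """Assign each task (in the given order) to the processor that
    becomes free earliest; return the makespan and the schedule.

    durations : list of task processing times
    m         : number of identical parallel processors
    """
    # Min-heap of (time at which the processor becomes free, processor id).
    free_at = [(0.0, p) for p in range(m)]
    heapq.heapify(free_at)
    schedule = [[] for _ in range(m)]
    for task, d in enumerate(durations):
        t, p = heapq.heappop(free_at)      # earliest-free processor
        schedule[p].append(task)
        heapq.heappush(free_at, (t + d, p))
    makespan = max(t for t, _ in free_at)
    return makespan, schedule
```

Feeding the tasks in non-increasing duration order turns this into the well-known LPT rule, whose makespan is provably within a factor of 4/3 of optimal on identical processors.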
22

Watkins, Stanley Hugh. "A porting methodology for parallel database systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA275256.

23

Hysing, Andreas Dreyer. "Parallel Seismic Inversion for Shared Memory Systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11795.

Abstract:
This thesis explores how a scientific application for seismic inversion can take advantage of multi-core programming on the x86 architecture. The thesis focuses on the most effective domain decompositions, communication patterns and multithreaded scalability. A performance comparison with the original codes is included, as well as an evaluation of the development effort required to implement such techniques.
24

Nordén, Markus. "Parallel PDE Solvers on cc-NUMA Systems." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86307.

Abstract:
The current trend in parallel computers is that systems with a large shared memory are becoming more and more popular. A shared memory system can be either a uniform memory architecture (UMA) or a cache coherent non-uniform memory architecture (cc-NUMA). In the present thesis, the performance of parallel PDE solvers on cc-NUMA computers is studied. In particular, we consider the shared namespace programming model, represented by OpenMP. Since the main memory is physically, or geographically distributed over several multi-processor nodes, the latency for local memory accesses is smaller than for remote accesses. Therefore, the geographical locality of the data becomes important. The questions posed in this thesis are: (1) How large is the influence on performance of the non-uniformity of the memory system? (2) How should a program be written in order to reduce this influence? (3) Is it possible to introduce optimizations in the computer system for this purpose? Most of the application codes studied address the Euler equations using a finite difference method and a finite volume method respectively and are parallelized with OpenMP. Comparisons are made with an alternative implementation using MPI and with PDE solvers implemented with OpenMP that solve other equations using different numerical methods. The main conclusion is that geographical locality is important for performance on cc-NUMA systems. This can be achieved through self optimization provided in the system or through migrate-on-next-touch directives that could be inserted automatically by the compiler. We also conclude that OpenMP is competitive with MPI on cc-NUMA systems if care is taken to get a favourable data distribution.
25

Liu, Chi-Cheng. "Implementation of parallel intelligent information retrieval systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq30509.pdf.

26

Nordén, Markus. "Parallel PDE solvers on cc-NUMA systems /." Uppsala : Univ. : Dept. of Information Technology, Univ, 2004.

27

Hopper, Michael A. "A compiler framework for multithreaded parallel systems." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15638.

28

Tambouris, Efthimios. "Performance and scalability analysis of parallel systems." Thesis, Brunel University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341665.

29

Matooane, ʾMantśika Amelia. "Parallel systems in symbolic and algebraic computation." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621402.

30

Dempster, Euan W. "Performance prediction for parallel database management systems." Thesis, Heriot-Watt University, 2004. http://hdl.handle.net/10399/341.

31

Marriott, Derek Garron. "Analysis of safety-critical parallel software systems." Thesis, University of Sheffield, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388710.

32

Shao, Jianhua. "A study of parallel deductive database systems." Thesis, University of Ulster, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302834.

33

Tsimashenka, Iryna. "Reducing subtask dispersion in parallel queueing systems." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/24702.

Abstract:
In various real-world parallel processing systems, incoming tasks divide into several subtasks that are processed independently by parallel servers. Queueing networks are a natural way to represent the flow and processing of tasks and subtasks in such systems. Two useful classes of queueing network representations are split-merge and fork-join systems. There are two main metrics of interest in these systems: task response time and subtask dispersion. These metrics are in tension with each other: when one is reduced, it tends to lead to an increase in the other. Generally, using the fork-join paradigm leads to low task response times but high subtask dispersion, while using the split-merge paradigm leads to low subtask dispersion but moderate to high task response times. This thesis introduces methods for controlling subtask dispersion as well as for the trading off of subtask dispersion and task response time in parallel queueing systems. In the context of split-merge systems with generally distributed service times, we show how to control mean subtask dispersion by the application of judiciously-chosen delays to subtask processing and extend it to control percentiles of the distribution of subtask dispersion. Our analysis is based on extensions to the theory of heterogeneous order statistics. While solely focusing on the reduction of subtask dispersion leads to a large increase in task response time, together with a corresponding decrease in maximum sustainable system throughput, aiming to reduce a product of mean subtask dispersion and mean task response time leads to a marginal increase in task response time while dramatically improving mean subtask dispersion. Fork-join systems are widely deployed in the real world, but are notoriously more difficult to analyse. 
In the context of fork-join systems with heterogeneous exponentially distributed service times, we present an on-line technique which improves on both the mean task response time and mean subtask dispersion achievable in an equivalent split-merge system. For split-merge systems we validate our results analytically, while for fork-join systems we validate the solutions against simulations. We present case studies of different parts of our methodology in split-merge and fork-join systems with and without applications of the delays. These show the ability to reduce subtask dispersion while providing increasingly-sophisticated means to simultaneously control task response time.
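The tension between subtask dispersion and task response time described above can be reproduced in a small Monte Carlo experiment on a two-server split-merge system: delaying the fast subtask reduces mean dispersion while raising mean response time. The simulator below is an illustrative sketch only; the thesis derives judicious delays analytically from the theory of heterogeneous order statistics, whereas here a delay is simply supplied by hand:

```python
import random

def simulate_split_merge(rates, delays, n_tasks=100_000, seed=1):
    """Monte Carlo estimate of (mean subtask dispersion, mean task
    response time) in a split-merge system with heterogeneous
    exponential servers and deterministic per-subtask start delays.

    rates  : service rate of each subtask's server
    delays : delay applied before each subtask starts service
    """
    rng = random.Random(seed)
    disp = resp = 0.0
    for _ in range(n_tasks):
        # Each subtask finishes after its delay plus exponential service.
        finish = [d + rng.expovariate(r) for d, r in zip(delays, rates)]
        disp += max(finish) - min(finish)   # spread of subtask completions
        resp += max(finish)                 # task completes at the last one
    return disp / n_tasks, resp / n_tasks
```

For servers with rates 3 and 1 and no delays, mean dispersion is about 0.83 and mean response time about 1.08; delaying the fast subtask by 2/3 (the difference of the mean service times) lowers mean dispersion to roughly 0.77 while raising mean response time to roughly 1.39.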
34

Gustavsson, Andreas. "Static Execution Time Analysis of Parallel Systems." Doctoral thesis, Mälardalens högskola, Inbyggda system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-31399.

Full text
Abstract:
The past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism is no longer feasible due to extensive power consumption and heat dissipation. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple, relatively slow and simple, processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g., a bus, to that memory and also all higher levels of memory). To fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness is dependent both on its functional and temporal behavior. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is crucial that methods to derive safe estimations on the timing properties of parallel computer systems are developed, if at all possible. This thesis presents a method to derive safe (lower and upper) bounds on the execution time of a given parallel system, thus showing that such methods must exist. The interface to the method is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The method is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although, over-approximative) way. 
The thesis also proves the soundness of the presented method (i.e., that the estimated timing bounds are indeed safe) and evaluates a prototype implementation of it.
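As a toy illustration of the flavour of such an analysis (not Gustavsson's formal method, which uses abstract execution over a formally defined concurrent language), safe execution-time bounds can be propagated as intervals: sequential composition adds bounds, and a join of independent parallel threads takes the slowest thread's bounds:

```python
def seq(a, b):
    """Sequential composition: (BCET, WCET) intervals add."""
    return (a[0] + b[0], a[1] + b[1])

def par_join(*threads):
    """Joining independent parallel threads: the join completes when the
    slowest thread does, so take the component-wise max of the bounds.
    (Safe only if the threads do not contend for shared resources;
    contention would have to widen the upper bound.)"""
    return (max(t[0] for t in threads), max(t[1] for t in threads))

# Assumed (BCET, WCET) bounds, in cycles, for three code blocks
init = (10, 15)
worker1 = (100, 180)
worker2 = (120, 160)
total = seq(init, par_join(worker1, worker2))
print(total)  # (130, 195)
```

A real analysis must additionally model contention on shared resources such as a shared cache or bus, which is part of what makes the multi-core case hard.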
Worst-Case Execution Time Analysis of Parallel Systems
RALF3 - Software for Embedded High Performance Architectures
APA, Harvard, Vancouver, ISO, and other styles
35

Yun, Paul M. "Parallel Bus Implementations in Satellite Communications Systems." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615247.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
As the volume of linkages in satellite communications systems increases, the parallel bus between the various processors of the satellite becomes a bottleneck for transferring commands and data. The remedies to this problem are trivial in ground stations; however, the problem imposes severe restrictions on parallel bus implementation in satellite communications systems. The most severe restriction is the minimization of wire connections in the physical layer, to minimize weight, size and power consumption, and also to maximize reliability. Another restriction is the flexibility required in the link layer to adapt to the different characteristics of the command and data messages. In this paper, implementations that overcome the imposed restrictions in both the physical and link layers of the parallel bus will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
36

Millar, Dean Lee. "Parallel distributed processing in rock engineering systems." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/37116.

Full text
Abstract:
Rock Engineering Systems are a collection of ideas, mathematical tools and computer technology, all of which are designed to solve problems in rock engineering with interacting components. The interactions between components can be complex, and the rock engineering problems themselves contain a high degree of uncertainty. The research described in this thesis investigates the incorporation of computational techniques known as parallel distributed processing methods into the disciplines of rock mechanics and rock engineering. Two main applications of parallel distributed processing methods in rock engineering are investigated in this thesis. 1) Multilayered perceptron artificial neural networks are used successfully to encapsulate the laboratory behaviour of rocks under triaxial compression. Trained artificial neural networks are then used to replace conventional constitutive models within finite difference geomechanical numerical modelling codes. 2) Two multilayered perceptron artificial neural networks are developed to assist in the task of discriminating rock fracture presence within digital imagery of rock exposures. The first is trained using samples of the image that contain fracture image content and samples that do not, and provides a probability-like measure of fracture presence. It was sufficiently successful to permit estimation of a fracture intensity parameter. The second was developed specifically to identify fracture termination conditions by matching samples to a set of fracture termination condition templates. Seven original contributions to the rock mechanics and rock engineering disciplines have resulted across the three application areas. These contributions are itemised, with details, at the beginning of the final chapter of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
37

Cutts, Geoff. "Modelling and analysis of parallel information systems." Thesis, Sheffield Hallam University, 1993. http://shura.shu.ac.uk/19524/.

Full text
Abstract:
This thesis presents an investigation of the modelling and analysis of parallel information systems. The research was motivated by recent developments in networks and powerful, low-cost, desktop multiprocessors. An integrated approach to the construction of parallel information systems was developed, focused on the modelling, verification and simulation of such systems. The thesis demonstrates how Petri nets can be used for the modelling and analysis of parallel information systems: place transition nets for the modelling and analysis of entity life histories, and coloured Petri nets for the modelling and analysis of complex parallel information systems. These tools were integrated into a comprehensive framework which allowed for the modelling and analysis of complex parallel information systems, and the framework was tested using a comprehensive case study. The thesis concludes that Petri nets are an ideal tool for the modelling and analysis of complex parallel systems. Verification is possible, with deadlocks and similar properties being easily identified. Further, the transformation rules proved to be beneficial to the process of moving from one model to another. Finally, simulation of parallel behaviour was possible because the underlying models captured the notion of parallelism.
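A place/transition net of the kind used for such modelling can be captured in a few lines. The following minimal sketch (an illustration, not the thesis's framework; the example net is invented) fires transitions and checks for deadlock, i.e., a marking with no enabled transition:

```python
# Minimal place/transition net: a transition is a pair (pre, post) of
# dicts mapping places to token counts consumed and produced; it is
# enabled when every input place holds enough tokens.

def enabled(marking, transition):
    pre, _ = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, transition):
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Process A atomically acquires both shared resources r1, r2 (t1) and
# releases them (t2).
net = {
    "t1": ({"idleA": 1, "r1": 1, "r2": 1}, {"busyA": 1}),
    "t2": ({"busyA": 1}, {"idleA": 1, "r1": 1, "r2": 1}),
}
m = {"idleA": 1, "r1": 1, "r2": 1}
m = fire(m, net["t1"])
deadlocked = not any(enabled(m, t) for t in net.values())
print(m, deadlocked)
```

With one-at-a-time acquisition by two competing processes, the same machinery would expose the classic deadlock marking in which no transition is enabled.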
APA, Harvard, Vancouver, ISO, and other styles
38

Davies, Neil J. "The performance and scalability of parallel systems." Thesis, University of Bristol, 1994. http://hdl.handle.net/1983/964dec41-9a36-44ea-9cfc-f6d1013fcd12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gudino, Natalia. "NOVEL PARALLEL TRANSMIT SYSTEMS FOR MRI APPLICATIONS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1363178933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Fei. "Parallel computational methods for constrained mechanical systems." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282561.

Full text
Abstract:
Two methods suitable for parallel computation in the study of mechanical systems with holonomic and nonholonomic constraints are presented: one is an explicit solution based on generalized inverse algebra; the second solves problems of this class through the direct application of Gauss' principle of least constraint and genetic algorithms. Algorithms for both methods are presented for sequential and parallel implementations. The method using generalized inverses is able to solve problems that involve redundant, degenerate and intermittent constraints, and can identify inconsistent constraint sets. It also allows a single program to perform pure kinematic and dynamic analyses. Its computational cost is among the lowest in comparison with other methods. In addition, constraint violation control methods are investigated to improve integration accuracy and further reduce computational cost. Constrained dynamics problems are also solved using optimization methods by applying Gauss' principle directly. An objective function that incorporates constraints is derived using a symmetric scheme, which is implemented using genetic algorithms in a parallel computing environment. It is shown that this method is capable of solving the same cases of constraints as the former method. Examples and numerical experiments demonstrating the applications of the two methods to constrained multiparticle and multibody systems are presented.
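The generalized-inverse solution mentioned in the abstract can be sketched in its best-known explicit form, the Udwadia-Kalaba equation, used here only as a representative instance; the example system and values are invented:

```python
import numpy as np

def constrained_accel(M, a, A, b):
    """Gauss' principle via generalized inverses (Udwadia-Kalaba form):
    among all accelerations satisfying A @ qdd = b, choose the one
    minimising (qdd - a)^T M (qdd - a), where a is the unconstrained
    acceleration. Assumes M is diagonal and positive definite."""
    Mih = np.diag(1.0 / np.sqrt(np.diag(M)))        # M^(-1/2)
    return a + Mih @ np.linalg.pinv(A @ Mih) @ (b - A @ a)

# Unit-mass particle on a circle x^2 + y^2 = 1 under gravity.
# Differentiating the constraint twice gives A = [x, y], b = -(vx^2 + vy^2).
q, v = np.array([1.0, 0.0]), np.array([0.0, 2.0])
M = np.eye(2)
a = np.array([0.0, -9.81])                          # unconstrained accel
A = q.reshape(1, 2)
b = np.array([-(v @ v)])
qdd = constrained_accel(M, a, A, b)
print(qdd)   # centripetal -v^2/L in x, gravity in y
```

The x-component reproduces the centripetal acceleration and the y-component the gravity term, and the constraint A @ qdd = b holds exactly, as the abstract's explicit solution requires.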
APA, Harvard, Vancouver, ISO, and other styles
41

Apentiik, Rowland. "Bulsa technologies and systems of thought." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq24571.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Haase, Peter. "Semantic technologies for distributed information systems." Karlsruhe : Univ.-Verl. Karlsruhe, 2006. http://www.uvka.de/univerlag/volltexte/2007/195/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Smith, Colin Frederick. "Jellyfish Inspired Underwater Systems and Technologies." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/76896.

Full text
Abstract:
Unmanned underwater vehicles (UUVs) have long been in use, but increasingly a wave of biomimetic robots is taking over the duties and functions of traditional vehicles. A robotic jellyfish, inspired by the species Aurelia aurita, was developed and characterized. In addition to the body of the main robotic vehicle, supporting technologies were developed, including polymeric artificial muscles, hydrogel-based artificial mesoglea, and an inclinometer inspired by the jellyfish statocyst organ. Through multiple versions, the vehicle was able to attain an order of magnitude increase in proficiency, from 0.022 s⁻¹ to 0.21 s⁻¹, and a robustness not found in initial prototypes. A polyvinyl alcohol hydrogel reinforced with ferritin nanoparticles was found to accurately mimic the stress and strain characteristics of natural Aurelia mesoglea while maintaining a high water content similar to the animal. In addition, the optical properties were shown to be controlled by the water to DMSO ratio. A five-layer PPy-Au-PVDF-Au-PPy actuator stored in 0.5M KCl solution actuated at a 4 VDC potential and produced an impressive 90% tip deflection. In addition, the rate of change was extremely high, at 50% deflection of initial actuator length per second. The artificial jellyfish statocyst was found to produce the required highly linear voltage divider output. This sensor will provide the vehicle with biomimetic self-awareness of its own body position. Future directions are proposed for the biomimetic robotic jellyfish, such as on-board power and computing, multi-material mesoglea with a dermal layer, a MEMS-based statocyst, and polymeric muscles with increased force production and time response.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
44

Topham, N. P. "A parallel vector processing system." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kopek, Christopher Vincent. "Parallel intrusion detection systems for high speed networks using the divided data parallel method." Electronic thesis, 2007. http://dspace.zsr.wfu.edu/jspui/handle/10339/191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Brandis, Robert Craig. "IPPM : Interactive parallel program monitor." Full text open access at:, 1986. http://content.ohsu.edu/u?/etd,111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Grass, Thomas. "Simulation methodologies for future large-scale parallel systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/461198.

Full text
Abstract:
Since the early 2000s, computer systems have seen a transition from single-core to multi-core systems. While single-core systems included only one processor core on a chip, current multi-core processors include up to tens of cores on a single chip, a trend which is likely to continue in the future. Today, multi-core processors are ubiquitous. They are used in all classes of computing systems, ranging from low-cost mobile phones to high-end High-Performance Computing (HPC) systems. Designing future multi-core systems is a major challenge [12]. The primary design tool used by computer architects in academia and industry is architectural simulation. Simulating a computer system executing a program is typically several orders of magnitude slower than running the program on a real system. Therefore, new techniques are needed to speed up simulation and allow the exploration of large design spaces in a reasonable amount of time. One way of increasing simulation speed is sampling. Sampling reduces simulation time by simulating only a representative subset of a program in detail. In this thesis, we present a workload analysis of a set of task-based programs. We then use the insights from this study to propose TaskPoint, a sampled simulation methodology for task-based programs. Task-based programming models can reduce the synchronization costs of parallel programs on multi-core systems and are becoming increasingly important. Finally, we present MUSA, a simulation methodology for simulating applications running on thousands of cores on a hybrid, distributed shared-memory system. The simulation time required for simulation with MUSA is comparable to the time needed for native execution of the simulated program on a production HPC system. The techniques developed in the scope of this thesis permit researchers and engineers working in computer architecture to simulate large workloads, which were infeasible to simulate in the past. 
Our work enables architectural research in the fields of future large-scale shared-memory and hybrid, distributed shared-memory systems.
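The idea behind sampled simulation can be conveyed with a toy sketch. This is not TaskPoint or MUSA themselves; the task types, per-type costs and warm-up length below are invented. Only the first few instances of each task type are simulated "in detail", and the remaining instances are fast-forwarded using the per-type mean cost:

```python
import random

def detailed_cost(task_type, rng):
    # stand-in for a slow, cycle-accurate simulation of one task instance
    base = {"fft": 100.0, "gemm": 250.0}[task_type]
    return base * rng.uniform(0.9, 1.1)

def sampled_simulation(trace, warmup=20, seed=7):
    """Sampling sketch: simulate the first `warmup` instances of each
    task type in detail, then estimate every later instance by the
    per-type mean of the detailed measurements."""
    rng = random.Random(seed)
    seen, sums, total = {}, {}, 0.0
    for t in trace:
        n = seen.get(t, 0)
        if n < warmup:
            c = detailed_cost(t, rng)          # detailed simulation
            seen[t] = n + 1
            sums[t] = sums.get(t, 0.0) + c
        else:
            c = sums[t] / seen[t]              # fast-forward with estimate
        total += c
    return total

trace = ["fft"] * 500 + ["gemm"] * 500
total_time = sampled_simulation(trace)
print(total_time)
```

Here only 40 of the 1000 task instances pay the detailed-simulation cost, while the estimated total stays within the spread of the per-instance variation.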
APA, Harvard, Vancouver, ISO, and other styles
48

Brito, Andrey. "Speculation in Parallel and Distributed Event Processing Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-38911.

Full text
Abstract:
Event stream processing (ESP) applications enable the real-time processing of continuous flows of data. Algorithmic trading, network monitoring, and processing data from sensor networks are good examples of applications that traditionally rely upon ESP systems. In addition, technological advances are resulting in an increasing number of devices that are network enabled, producing information that can be automatically collected and processed. This increasing availability of on-line data motivates the development of new and more sophisticated applications that require low-latency processing of large volumes of data. ESP applications are composed of an acyclic graph of operators that is traversed by the data. Inside each operator, the events can be transformed, aggregated, enriched, or filtered out. Some of these operations depend only on the current input events; such operations are called stateless. Other operations, however, depend not only on the current event, but also on a state built during the processing of previous events. Such operations are, therefore, named stateful. As the number of ESP applications grows, there are increasingly strong requirements, which are often difficult to satisfy. In this dissertation, we address two challenges created by the use of stateful operations in an ESP application: (i) stateful operators can be bottlenecks because they are sensitive to the order of events and cannot be trivially parallelized by replication; and (ii), if failures are to be tolerated, the accumulated state of a stateful operator needs to be saved, and saving this state traditionally imposes considerable performance costs. Our approach is to evaluate the use of speculation to address these two issues.
For handling ordering and parallelization issues in a stateful operator, we propose a speculative approach that both reduces latency when the operator must wait for the correct ordering of the events and improves throughput when the operation at hand is parallelizable. In addition, our approach does not require that the user understand concurrent programming or consider out-of-order execution when writing the operations. For fault-tolerant applications, traditional approaches have imposed prohibitive performance costs due to pessimistic schemes. We extend such approaches, using speculation to mask the cost of fault tolerance.
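The ordering side of this speculation can be sketched as follows. This is an illustration only, not the dissertation's system; a real implementation would roll back to a checkpoint rather than replay from scratch, and would also exploit speculation for fault tolerance:

```python
class SpeculativeOperator:
    """Sketch of speculative out-of-order processing for a stateful,
    order-sensitive operator (here: one that folds event values into a
    list in timestamp order). Events are applied optimistically on
    arrival; a straggler with an earlier timestamp triggers
    rollback-and-replay instead of forcing every event to wait until
    ordering is certain."""
    def __init__(self):
        self.events = []      # (timestamp, value), kept sorted
        self.state = []       # folded state: values in timestamp order
        self.rollbacks = 0

    def on_event(self, ts, value):
        if not self.events or ts >= self.events[-1][0]:
            self.events.append((ts, value))      # speculate: apply now
            self.state.append(value)
        else:
            self.rollbacks += 1                  # straggler: undo and replay
            self.events.append((ts, value))
            self.events.sort()
            self.state = [v for _, v in self.events]

op = SpeculativeOperator()
for ts, v in [(1, "a"), (2, "b"), (4, "d"), (3, "c")]:
    op.on_event(ts, v)
print(op.state, op.rollbacks)   # ['a', 'b', 'c', 'd'] 1
```

The in-order events never wait, and only the single straggler (timestamp 3) pays the rollback cost, which is the latency benefit the abstract describes.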
APA, Harvard, Vancouver, ISO, and other styles
49

Sreenivasan, C. R. "Task Scheduling Techniques for Distributed/Parallel Processing Systems." Thesis, Indian Institute of Science, 1993. http://hdl.handle.net/2005/45.

Full text
Abstract:
Indian Institute of Science
This dissertation discusses the principles, techniques and approaches adopted in the design of task scheduling algorithms for Distributed Parallel Processing Computer Systems (DPCSs) connected with a network of front-end systems (FSs). The primary goal in the design of scheduling algorithms is to minimise the total turnaround time of the jobs to be scheduled by maximising the utilisation of the resources of the DPCS with minimum data communication overhead. The users present their jobs to be scheduled at the FS. The FS receives a job and generates a finite set of independent tasks based on mutually independent sections having inherent parallelism. Each task can be scheduled to a different available processor of the DPCS for concurrent execution. The tasks fall into three groups, viz., compute-intensive tasks, input-output-intensive tasks, and tasks combining the two; their execution times may be almost the same. Some tasks may have larger execution times than others due to precedence constraints; these are provided with logical breakpoints which can be used to break the tasks further into subtasks during scheduling. The technique of using breakpoints is most appropriate when the number of available processors exceeds the number of tasks to be scheduled. The tasks of a job thus generated are sent to the front-end processor (FEP, or the host processor) of the DPCS in the form of a data flow graph (DFG). The DFG is used to model the tasks and represent the precedence (or data dependency) relations among them. In order to preserve the constraints among the tasks during scheduling and realise efficient utilisation of the resources of the DPCS, the DFG is structured in the form of levels. The FEP of the DPCS has a resident Task Manager (TM). The key function of the TM is to schedule the tasks to the appropriate processors of the DPCS, either statically or dynamically, based on the required resources.
To realise efficient scheduling and utilisation of the processors of the DPCS, the TM uses a set of buffers, known as the Task Forwarding Buffer (TFB), Task Output Buffer (TOB) and Task Status Buffer (TSB), maintained by the FEP of the DPCS. The tasks of a job from the FS are received at the TFB. The TM picks up a set of tasks pertaining to a level for scheduling into a temporary buffer C and obtains the status of the processors of the DPCS. In order to realise both static and dynamic approaches to allocation, the task-to-processor relation is considered in the scheduling algorithm. If the number of tasks in C is equal to or greater than the number of processors available, one task per processor is allocated; the remaining tasks of C are scheduled subsequently as and when processors become available. This method of allocation is called the static approach. If the number of tasks in C is less than the number of processors available, the TM makes use of the logical breakpoints of the tasks to generate subtasks equal in number to the available processors. Each subtask is scheduled to a processor. This method of scheduling is called the dynamic approach. In all cases the precedence constraints among the tasks are preserved by scheduling a successor task to the parent processor or a near-neighbouring processor, maintaining minimum data communication between them. Various examples of Computational Fluid Dynamics problems were tested, and the objective of reduced total turnaround time and maximum utilisation of the processors was achieved. The improvement in total turnaround time achieved for different jobs varies between 51% and 86% with the static approach and 16% and 89% with the dynamic approach. The utilisation of the processors varies between 50% and 92.5%. Hence a speed-up of 5 to 8 fold is realised.
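The level-by-level decision described above can be sketched as follows. This is a simplification, not the dissertation's algorithm: task costs are assumed known, a "logical breakpoint" is modelled as an even split, and data-communication placement is ignored:

```python
import heapq

def split_at_breakpoints(tasks, n_procs):
    """Dynamic approach: while processors would sit idle, split the
    largest task with a remaining logical breakpoint into two subtasks;
    stop when the largest remaining task has no breakpoint left."""
    tasks = list(tasks)
    while len(tasks) < n_procs:
        tasks.sort(key=lambda t: -t[1])
        name, cost, splits = tasks[0]
        if splits == 0:
            break
        tasks[0] = (name + ".a", cost / 2, splits - 1)
        tasks.append((name + ".b", cost / 2, splits - 1))
    return tasks

def schedule_level(tasks, n_procs):
    """Schedule one DFG level of (name, cost, breakpoints) tasks:
    static approach (longest task first onto the least-loaded processor)
    when tasks >= processors, dynamic splitting first otherwise.
    Returns the level's makespan."""
    if len(tasks) < n_procs:
        tasks = split_at_breakpoints(tasks, n_procs)
    loads = [0.0] * n_procs
    heap = [(0.0, p) for p in range(n_procs)]
    for _, cost, _ in sorted(tasks, key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)
        loads[p] = load + cost
        heapq.heappush(heap, (loads[p], p))
    return max(loads)

# (name, cost, available logical breakpoints)
level = [("t1", 8.0, 1), ("t2", 4.0, 0)]
print(schedule_level(level, 2), schedule_level(level, 4))  # 8.0 4.0
```

With two processors the static branch is taken and the makespan is bound by the 8-unit task; with four processors the dynamic branch splits that task at its breakpoint, halving the level's makespan.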
APA, Harvard, Vancouver, ISO, and other styles
50

Friedrich, Markus [Verfasser]. "Parallel Co-Simulation for Mechatronic Systems / Markus Friedrich." München : Verlag Dr. Hut, 2012. http://d-nb.info/1021072753/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
