
Dissertations on the topic "Parallel execution of tasks"



Familiarize yourself with the top 50 dissertations for research on the topic "Parallel execution of tasks".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.

1

Mtshali, Progress Q. T. „Minimizing Parallel Virtual Machine [PVM] Tasks Execution Times Through Optimal Host Assignments“. NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/740.

Annotation:
PVM is a message passing system that is designed to run parallel programs across a network of workstations. Its default scheduler uses the round-robin scheduling algorithm to assign tasks to hosts on the virtual machine. This task assignment strategy is deficient when it comes to a network of heterogeneous workstations. Heterogeneity could be software related, hardware related, or both. The default scheduler does not attempt to assign a PVM task to the node within the virtual machine that best services the PVM task. The problem with this allocation scheme is that resource requirements vary for each task that must be assigned to a computer. The default scheduler could assign a computation-intensive parallel task to a computer that has a slow central processing unit (CPU), thus taking longer to execute the task to completion (Ju and Wang, 1996). It is also possible that a communication-bound process is assigned to a computer with very low bandwidth. The scheduler needs to assign tasks to computers in such a way that the resource requirements of each task are fulfilled: schedule and balance the workload so that the overall execution time is minimized. In this dissertation, a stochastic task assignment method was developed that not only takes into account the heterogeneity of the hosts but also utilizes state information for the tasks to be assigned and the hosts onto which they are assigned. Task assignment methods that exist today, such as those used in Condor, PvmJobs, CROP, and Matchmaker, tend to be designed for a homogeneous environment using the UNIX operating system or its variants. They also tend to use operating system data structures that are sometimes not common across most architectures. This makes it difficult to port them to other architectures.
The task assignment method that was developed in this dissertation was initially implemented across a PVM environment consisting of a network of Win32® architecture, Linux architecture, and Sun Solaris architecture. A portable prototype implementation that runs across these architectures was developed. A simulator based on the CSIMIS toolkit was developed and utilized to study the behavior of the M/G/1 queueing model on which the task assignment is based. Four PVM programs were run on a PVM environment that consisted of six nodes. The run-times of these programs were measured for both the default scheduler and the method proposed in this dissertation. The analysis of the run-times suggests that the proposed task assignment method works better for PVM programs that are CPU intensive than for those that are small and do not require a lot of CPU time. This behavior can be attributed to the high overhead that is required to compute the task assignment probabilities for all the hosts in the PVM environment. However, the proposed method appears to minimize execution times for the CPU-intensive tasks. The ease with which external resource managers can be incorporated into the PVM environment at run-time makes this proposed method an alternative to the default scheduler. More work needs to be done before the results that were obtained from this dissertation can be generalized, if at all possible, across the board. A number of recommendations on the reduction of overhead are specified at the end of this report.
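The core idea of the abstract above can be illustrated with a hypothetical sketch (the dissertation's actual probability model is based on M/G/1 queueing state; the simple capacity weighting below is an invented stand-in): hosts are selected with probability proportional to their effective capacity, rather than round-robin.

```python
import random

# Hypothetical sketch of stochastic task assignment: each host's selection
# probability is weighted by its effective capacity (relative CPU speed
# discounted by current load), so fast, lightly loaded hosts receive
# proportionally more tasks than round-robin would give them.

def assignment_probabilities(hosts):
    """hosts: dict name -> (relative_cpu_speed, current_load in [0, 1))."""
    capacity = {name: speed * (1.0 - load)
                for name, (speed, load) in hosts.items()}
    total = sum(capacity.values())
    return {name: cap / total for name, cap in capacity.items()}

def assign_task(hosts, rng=random):
    """Pick a host at random, weighted by the probabilities above."""
    probs = assignment_probabilities(hosts)
    names = list(probs)
    return rng.choices(names, weights=[probs[n] for n in names], k=1)[0]

hosts = {
    "fast-idle": (2.0, 0.1),  # fast CPU, almost idle
    "fast-busy": (2.0, 0.8),  # fast CPU, heavily loaded
    "slow-idle": (1.0, 0.1),  # slow CPU, almost idle
}
probs = assignment_probabilities(hosts)
```

Under this weighting, "fast-idle" is the most likely target and the heavily loaded fast host the least likely, which is the qualitative behavior the abstract attributes to state-aware assignment.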
2

Raghu, Kumbakonam S. „Taskmaster: an interactive, graphical environment for task specification, execution and monitoring“. Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/43277.

Annotation:
This thesis presents Taskmaster, an interactive, graphical environment for task specification, execution and monitoring. Taskmaster is an integrated user environment that employs a unique blend of the principles of Visual Programming, Tool Composition, Structured Analysis and Data Flow Computing to support user task specifications. Problem solving in the Taskmaster environment consists of decomposing the problem task into a partially ordered set of high-level subtasks. This decomposition is depicted graphically as a network in which the nodes correspond to the subtasks and the arcs represent the directed data paths between the nodes. The subtasks are successively decomposed into lower level subtasks until, at the lowest level, each network node is “bound” to a pre-existing tool from a tools database. Execution of the resulting network of software tools provides a task solution. Some of the novel features of the Taskmaster environment include 1) guidance to the user in the task specification process through menu-based interaction; 2) facility to interactively monitor the task network execution; 3) support for structured data flow among tools; and 4) enhanced support for reusability by providing operations to functionally abstract and reuse sub-networks of tools.
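The execution model described above (nodes bound to pre-existing tools, arcs as directed data paths, executed in dependency order) can be sketched in a few lines. The tool names and their bodies below are invented placeholders, not Taskmaster's actual tools database:

```python
from graphlib import TopologicalSorter

# Illustrative sketch of a Taskmaster-style network: nodes are bound to
# "tools" (here, plain functions taking a dict of upstream results), and
# arcs describe which upstream outputs each node consumes. Executing the
# network in topological order yields the task solution.

tools = {
    "read":   lambda inputs: [3, 1, 2],                    # placeholder tool
    "sort":   lambda inputs: sorted(inputs["read"]),       # consumes "read"
    "format": lambda inputs: ",".join(map(str, inputs["sort"])),
}
arcs = {"sort": {"read"}, "format": {"sort"}}  # node -> upstream nodes

results = {}
for node in TopologicalSorter(arcs).static_order():
    upstream = {dep: results[dep] for dep in arcs.get(node, set())}
    results[node] = tools[node](upstream)
```

Executing the lowest-level bound tools first and flowing their outputs along the arcs is exactly the "network of software tools" evaluation the abstract describes.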
Master of Science
3

Grepl, Filip. „Aplikace pro řízení paralelního zpracování dat“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445490.

Annotation:
This work deals with the design and implementation of a system for parallel execution of tasks in the Knowledge Technology Research Group. The goal is to create a web application that allows users to control the processing of these tasks and monitor their runs, including the use of system resources. The work first analyzes the current method of parallel data processing and the shortcomings of this solution. It then describes existing tools, including the problems that their test deployment revealed. Based on this knowledge, the requirements for a new application are defined and the design of the entire system is created. Selected parts of the implementation and the testing of the whole system are then described, together with a comparison of its efficiency with the original system.
4

McGuigan, Brian. „The Effects of Stress and Executive Functions on Decision Making in an Executive Parallel Task“. Thesis, Umeå universitet, Institutionen för psykologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124398.

Annotation:
The aim of this study was to investigate the effects of acute stress on parallel task performance, using the Game of Dice Task (GDT) to measure decision making together with the Stroop test. Two previous studies have found that the combination of stress and a parallel task consisting of the GDT and an executive functions task preserved performance on the GDT for a stress group compared to a control group. The purpose of this study was to create and use a new parallel task combining the GDT and the Stroop test, to elucidate more information about the executive function contributions from the Stroop test and to ensure that this parallel task preserves performance on the GDT for the stress group. Sixteen participants (mean age: 26.88) were randomly assigned to either a stress group with the Trier Social Stress Test (TSST) or the control group with the placebo-TSST. The Positive and Negative Affect Schedule (PANAS) and the State-Trait Anxiety Inventory (STAI) were given before and after the TSST or placebo-TSST and were used as stress indicators. The results showed a non-significant trend towards the stress group performing marginally better than the control group on the GDT. There were no significant differences between the groups in accuracy on the Stroop test trial types. However, the stress group had significantly slower mean response times on the congruent trial type of the Stroop test, p < .05. This study provides further evidence that stress and a parallel task together preserve performance on the GDT.
5

Cheese, Andrew B. „Parallel execution of Parlog“. Thesis, University of Nottingham, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277848.

6

King, Andrew. „Distributed parallel symbolic execution“. Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1643.

7

Narula, Neha. „Parallel execution for conflicting transactions“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99779.

Annotation:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-79).
Multicore main-memory databases only obtain parallel performance when transactions do not conflict. Conflicting transactions are executed one at a time in order to ensure that they have serializable effects. Sequential execution on contended data leaves cores idle and reduces throughput. In other parallel programming contexts (outside serializable transactions), techniques have been developed that can reduce contention on shared variables using per-core state. This thesis asks: can these techniques apply to a general serializable database? This work introduces a new concurrency control technique, phase reconciliation, that uses per-core state to greatly reduce contention on popular database records for many important workloads. Phase reconciliation uses the idea of synchronized phases to amortize the cost of combining per-core data and to extract parallelism. Doppel, our phase reconciliation database, repeatedly cycles through joined and split phases. Joined phases use traditional concurrency control and allow any transaction to execute. When workload contention causes unnecessary sequential execution, Doppel switches to a split phase. During a split phase, commutative operations on popular records act on per-core state, and thus proceed in parallel on different cores. By explicitly using phases, phase reconciliation realizes two important performance benefits: first, it amortizes the potentially high costs of aggregating per-core state over many transactions; second, it can dynamically split data or not based on observed contention, handling challenging, varying workloads. Doppel achieves higher performance because it parallelizes transactions on popular data that would be run sequentially by conventional concurrency control. Phase reconciliation helps most when there are many updates to a few popular database records.
On an 80-core machine, its throughput is up to 38x higher than conventional concurrency control protocols on microbenchmarks, and up to 3x on a larger application, at the cost of increased latency for some transactions.
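A minimal sketch of the split/joined-phase idea (an invented class, not Doppel's actual implementation): during a split phase, commutative updates to a hot record accumulate in per-core slices, and the join reconciles the slices back into the global record, so no core contends on the popular key.

```python
from collections import defaultdict

# Sketch of phase reconciliation for commutative increments: in the joined
# phase updates go straight to global state under ordinary concurrency
# control; in the split phase each core writes only its own slice, and
# join() folds all slices back into the global record.

class SplitCounter:
    def __init__(self, ncores):
        self.global_value = defaultdict(int)                # reconciled state
        self.per_core = [defaultdict(int) for _ in range(ncores)]
        self.split = False

    def add(self, core, key, delta):
        if self.split:
            self.per_core[core][key] += delta   # contention-free per-core path
        else:
            self.global_value[key] += delta     # ordinary joined-phase path

    def join(self):
        """Reconcile per-core slices into the global record, then clear them."""
        for slice_ in self.per_core:
            for key, delta in slice_.items():
                self.global_value[key] += delta
            slice_.clear()
        self.split = False

c = SplitCounter(ncores=4)
c.add(0, "hot", 1)         # joined phase
c.split = True
for core in range(4):
    c.add(core, "hot", 1)  # split phase: commutative, per-core
c.join()                   # amortized reconciliation
```

The amortization the abstract mentions corresponds to `join()` running once per phase change rather than once per update.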
by Neha Narula.
Ph. D.
8

Zhao, Zhijia. „Enabling Parallel Execution via Principled Speculation“. W&M ScholarWorks, 2015. https://scholarworks.wm.edu/etd/1593092101.

9

Ben, Lahmar Imen. „Continuity of user tasks execution in pervasive environments“. Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00789725.

Annotation:
The proliferation of small devices and the advancements in various technologies have introduced the concept of pervasive environments. In these environments, user tasks can be executed by using the deployed components provided by devices with different capabilities. One appropriate paradigm for building user tasks for pervasive environments is Service-Oriented Architecture (SOA). Using SOA, user tasks are represented as an assembly of abstract components (i.e., services) without specifying their implementations, so they must be resolved into concrete components. The task resolution involves automatic matching and selection of components across various devices. For this purpose, we present an approach that allows, for each service of a user task, the selection of the best device and component by considering the user preferences, device capabilities, service requirements, and component preferences. Due to the dynamicity of pervasive environments, we are interested in the continuity of execution of user tasks. Therefore, we present an approach that allows components to monitor, locally or remotely, the changes in the properties on which they depend. We also considered the adaptation of user tasks to cope with the dynamicity of pervasive environments. To overcome detected failures, the adaptation is carried out by a partial reselection of devices and components. However, in case of a mismatch between the abstract user task and the concrete level, we propose a structural adaptation approach that injects predefined adaptation patterns, which exhibit extra-functional behavior. We also propose an architectural design of a middleware allowing the task's resolution, the monitoring of the environment, and the task adaptation. We provide implementation details of the middleware's components along with evaluation results.
10

Simpson, David John. „Space-efficient execution of parallel functional programs“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0028/NQ51922.pdf.

11

Koukos, Konstantinos. „Efficient Execution Paradigms for Parallel Heterogeneous Architectures“. Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300831.

Annotation:
This thesis proposes novel, efficient execution paradigms for parallel heterogeneous architectures. The end of Dennard scaling is threatening the effectiveness of DVFS in future nodes; therefore, new execution paradigms are required to exploit the non-linear relationship between performance and energy efficiency of memory-bound application regions. To attack this problem, we propose the decoupled access-execute (DAE) paradigm. DAE transforms regions of interest (at program level) into two coarse-grain phases: the access phase and the execute phase, which we can DVFS independently. The access phase is intended to prefetch the data into the cache, and is therefore expected to be predominantly memory-bound, while the execute phase runs immediately after the access phase (which has warmed up the cache) and is therefore expected to be compute-bound. DAE achieves good energy savings (on average 25% lower EDP) without performance degradation, as opposed to other DVFS techniques. Furthermore, DAE increases the memory-level parallelism (MLP) of memory-bound regions, which results in performance improvements for memory-bound applications. To automatically transform application regions to DAE, we propose compiler techniques to automatically generate and incorporate the access phase(s) in the application. Our work targets affine, non-affine, and even complex, general-purpose codes. Furthermore, we explore the benefits of software multi-versioning to optimize DAE in dynamic environments and to handle codes with statically unknown access-phase overheads. In general, applications automatically transformed to DAE by our compiler maintain (or in some cases even exceed) the good performance and energy efficiency of manually optimized DAE codes. Finally, to ease the programming environment of heterogeneous systems (with integrated GPUs), we propose a novel system architecture that provides unified virtual memory with low overhead. The underlying insight behind our work is that existing data-parallel programming models are a good fit for relaxed memory consistency models (e.g., the heterogeneous race-free model). This allows us to simplify the coherency protocol between the CPU and the GPU, as well as the GPU memory management unit. On average, we achieve 45% speedup and 45% lower EDP over the corresponding SC implementation.
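The access/execute split can be caricatured in pure Python (function names invented; real DAE operates on compiled code regions, with each phase run at its own DVFS setting):

```python
# Conceptual sketch of decoupled access-execute (DAE): the access phase only
# touches memory (in hardware it would warm the cache and could run at a low
# frequency), while the execute phase does pure arithmetic on already-fetched
# data (and could run at a high frequency).

def access_phase(data, indices):
    # Memory-bound phase: gather the operands. The returned list stands in
    # for cache lines warmed by the prefetching access phase.
    return [data[i] for i in indices]

def execute_phase(operands):
    # Compute-bound phase: arithmetic only, no further memory traversal.
    return sum(x * x for x in operands)

data = list(range(100))
indices = [3, 7, 11]        # irregular, data-dependent accesses
warmed = access_phase(data, indices)
result = execute_phase(warmed)
```

The point of the split is exactly that the first function's cost is dominated by memory and the second's by computation, which is what makes independent frequency scaling of the two phases profitable.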
12

Mai, Xuan Trang. „Policy-Aware Parallel Execution of Composite Services“. 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/215682.

13

Gustavsson, Andreas. „Static Execution Time Analysis of Parallel Systems“. Doctoral thesis, Mälardalens högskola, Inbyggda system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-31399.

Annotation:
The past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism is no longer feasible due to extensive power consumption and heat dissipation. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple, relatively slow and simple, processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g., a bus, to that memory and also all higher levels of memory). To fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness is dependent both on its functional and temporal behavior. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is crucial that methods to derive safe estimations on the timing properties of parallel computer systems are developed, if at all possible. This thesis presents a method to derive safe (lower and upper) bounds on the execution time of a given parallel system, thus showing that such methods must exist. The interface to the method is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The method is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although, over-approximative) way. 
The thesis also proves the soundness of the presented method (i.e., that the estimated timing bounds are indeed safe) and evaluates a prototype implementation of it.
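The flavor of deriving safe lower and upper execution-time bounds can be conveyed with a toy interval calculus (purely illustrative; the thesis defines a formal concurrent language and abstract-execution semantics, none of which appears here): sequencing adds time intervals, and a branch's bound is the union of its arms.

```python
# Toy sketch of safe timing bounds via interval arithmetic: every statement
# contributes an interval (lo, hi) of possible execution times in cycles.

def seq(*bounds):
    """Sequential composition: lows and highs add up."""
    return (sum(b[0] for b in bounds), sum(b[1] for b in bounds))

def branch(then_bounds, else_bounds):
    """A branch is safely bounded by the union of its two arms."""
    return (min(then_bounds[0], else_bounds[0]),
            max(then_bounds[1], else_bounds[1]))

# Program: a load (2..5 cycles), then a branch between a cheap arm (1 cycle)
# and an expensive arm (4..8 cycles), then a store (1..2 cycles).
program_bounds = seq((2, 5), branch(seq((1, 1)), seq((4, 8))), (1, 2))
```

The resulting bound is safe but over-approximative, which mirrors the trade-off the abstract describes for abstract execution.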
Worst-Case Execution Time Analysis of Parallel Systems
RALF3 - Software for Embedded High Performance Architectures
14

Hofmann, Andreas G. (Andreas Gunther). „Robust execution of bipedal walking tasks from biomechanical principles“. Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38444.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 348-352).
Effective use of robots in unstructured environments requires that they have sufficient autonomy and agility to execute task-level commands successfully. A challenging example of such a robot is a bipedal walking machine. Such a robot should be able to walk to a particular location within a particular time, while observing foot placement constraints, and avoiding a fall, if this is physically possible. Although stable walking machines have been built, the problem of task-level control, where the tasks have stringent state-space and temporal requirements, and where significant disturbances may occur, has not been studied extensively. This thesis addresses this problem through three objectives. The first is to devise a plan specification where task requirements are expressed in a qualitative form that provides for execution flexibility. The second is to develop a task-level executive that accepts such a plan, and outputs a sequence of control actions that result in successful plan execution. The third is to provide this executive with disturbance handling ability. Development of such an executive is challenging because the biped is highly nonlinear and has limited actuation due to its limited base of support. We address these challenges with three key innovations.
To address the nonlinearity, we develop a dynamic virtual model controller to linearize the biped, and thus, provide an abstracted biped that is easier to control. The controller is model-based, but uses a sliding control technique to compensate for model inaccuracy. To address the under-actuation, our system generates flow tubes, which define valid operating regions in the abstracted biped. The flow tubes represent sets of state trajectories that take into account dynamic limitations due to under-actuation, and also satisfy plan requirements. The executive keeps trajectories in the flow tubes by adjusting a small number of control parameters for key state variables in the abstracted biped, such as center of mass. Additionally, our system uses a novel strategy that employs angular momentum to enhance translational controllability of the system's center of mass. We evaluate our approach using a high-fidelity biped simulation. Tests include walking with foot-placement constraints, kicking a soccer ball, and disturbance recovery.
by Andreas G. Hofmann.
Ph.D.
15

Dad, Cherifa. „Méthodologie et algorithmes pour la distribution large échelle de co-simulations de systèmes complexes : application aux réseaux électriques intelligents (Smart Grids)“. Electronic Thesis or Diss., CentraleSupélec, 2018. http://www.theses.fr/2018CSUP0004.

Annotation:
The emergence of Smart Grids is causing profound changes in the electricity distribution business. Indeed, these networks are seeing new uses (electric vehicles, air conditioning) and new decentralized producers (photovoltaic, wind), which make it more difficult to ensure a balance between electricity supply and demand, and impose the introduction of a form of distributed intelligence between their different components. Considering its complexity and the extent of its implementation, it is necessary to co-simulate a Smart Grid in order to validate its performance. Within the RISEGrid institute, CentraleSupélec and EDF R&D have developed DACCOSIM, a co-simulation platform based on the FMI (Functional Mock-up Interface) standard, permitting the design and development of Smart Grids. The key components of this platform are represented as gray boxes called FMUs (Functional Mock-up Units). In addition, simulators of the physical systems of Smart Grids can backtrack when an inaccuracy is suspected in FMU computations, unlike discrete simulators (control units), which often can only advance in time. In order for these different simulators to collaborate, we designed a hybrid solution that takes into account the constraints of all the components and precisely identifies the types of events the system is facing. This study has led to an FMI standard change proposal. Moreover, it is difficult to rapidly design an efficient Smart Grid simulation, especially when the problem has a national or even a regional scale. To fill this gap, we have focused on the most computationally intensive part, which is the simulation of the physical devices. We have therefore proposed methodologies, approaches, and algorithms to quickly and efficiently distribute these different FMUs over distributed architectures. The implementation of these algorithms has already allowed large-scale business cases to be simulated on a multi-core PC cluster. The integration of these methods into DACCOSIM will enable EDF engineers to design "large-scale Smart Grids" that are more resistant to breakdowns.
16

Brown, Jeffery Alan. „Architectural support for efficient on-chip parallel execution“. Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3402089.

Annotation:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed May 14, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 131-142).
17

Mitta, Swetha. „Kentucky's adapter for parallel execution and rapid synchronization“. Lexington, Ky. : [University of Kentucky Libraries], 2007. http://lib.uky.edu/ETD/ukyelen2007t00558/Thesis_mitta.pdf.

Annotation:
Thesis (M.S.)--University of Kentucky, 2007.
Title from document title page (viewed on April 25, 2007). Document formatted into pages; contains: ix, 83 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 55-57).
18

Bangalore, Lakshminarayana Nagesh. „Efficient graph algorithm execution on data-parallel architectures“. Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53058.

Annotation:
Mechanisms for improving the execution efficiency of graph algorithms on data-parallel architectures were proposed and identified. Execution of graph algorithms on GPGPU architectures, the prevalent data-parallel architectures, was considered. Irregular and data-dependent accesses in graph algorithms were found to cause significant idle cycles in GPGPU cores. A prefetching mechanism was proposed that reduces the amount of idle cycles by prefetching a data-dependent access pattern found in graph algorithms. Storing prefetches in unused spare registers, in addition to storing them in the cache, was shown to make the prefetching mechanism more effective. The design of the cache hierarchy for graph algorithms was explored. First, an exclusive cache hierarchy was shown to be beneficial at the cost of increased traffic; a region-based exclusive cache hierarchy was shown to perform similarly to an exclusive cache hierarchy while reducing on-chip traffic. Second, bypassing cache blocks at both the level-one and level-two caches was shown to be beneficial. Third, the use of fine-grained memory accesses (or cache sub-blocking) was shown to be beneficial. The combination of cache bypassing and fine-grained memory accesses was shown to be more beneficial than applying the two mechanisms individually. Finally, the impact of different implementation strategies on algorithm performance was evaluated for the breadth-first search algorithm using different input graphs, and heuristics to identify the best-performing implementation for a given input graph were also discussed.
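The irregular, data-dependent accesses the abstract refers to show up clearly in level-synchronous breadth-first search, the formulation commonly mapped onto GPGPU hardware (one thread per frontier vertex). A sequential Python sketch:

```python
# Level-synchronous BFS sketch: each frontier is expanded as a whole, which
# is the data-parallel formulation typically used on GPUs. The neighbor
# lookups and the visited test below are the irregular, data-dependent
# accesses that cause idle cycles on GPGPU cores.

def bfs_levels(adj, source):
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:               # on a GPU: one thread per vertex
            for v in adj.get(u, []):     # irregular, data-dependent access
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
levels = bfs_levels(adj, 0)
```

Because `adj[u]` and the visited test depend on the input graph, the memory access pattern cannot be predicted statically, which is what motivates the prefetching and cache mechanisms evaluated in the dissertation.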
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

SILVA, VINICIUS FONTES VIEIRA DA. „QEEF-G: ADAPTIVE PARALLEL EXECUTION OF ITERATIVE QUERIES“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9824@1.

Der volle Inhalt der Quelle
Annotation:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
O processamento de consulta paralelo tradicional utiliza-se de nós computacionais para reduzir o tempo de processamento de consultas. Com o surgimento das grades computacionais, milhares de nós podem ser utilizados, desafiando as atuais técnicas de processamento de consulta a oferecerem um suporte massivo ao paralelismo em um ambiente onde as condições variam a todo instante. Em adição, as aplicações científicas executadas neste ambiente oferecem novas características de processamento de dados que devem ser integradas em um sistema desenvolvido para este ambiente. Neste trabalho apresentamos o sistema de processamento de consulta paralelo do CoDIMS-G, e seu novo operador Orbit que foi desenvolvido para suportar a avaliação de consultas iterativas. Neste modelo de execução as tuplas são constantemente avaliadas por um fragmento paralelo do plano de execução. O trabalho inclui o desenvolvimento do sistema de processamento de consulta e um novo algoritmo de escalonamento que considera as variações de rede e o throughput de cada nó, permitindo ao sistema se adaptar constantemente às variações no ambiente.
Traditional parallel query processing uses multiple computing nodes to reduce query response time. Within a Grid computing context, the availability of thousands of nodes challenges current parallel query processing techniques to support massive parallelism under constantly varying environment conditions. In addition, scientific applications running on Grids offer new data processing characteristics that should be integrated into such a framework. In this work we present the CoDIMS-G parallel query processing system with a full-fledged new query execution operator named Orbit. Orbit is designed for evaluating massive iterative data processing: tuples in Orbit iterate over a parallelized fragment of the query execution plan. This work includes the development of the query processing system and a new scheduling algorithm that considers variations in network conditions and in the throughput of each node, permitting the system to adapt constantly to changes in the environment.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Patinho, Pedro José Grilo Lopes. „An abstract model for parallel execution of prolog“. Doctoral thesis, Universidade de Évora, 2017. http://hdl.handle.net/10174/21002.

Der volle Inhalt der Quelle
Annotation:
Logic programming has been used in a broad range of fields, from artificial intelligence applications to general purpose applications, with great success. Through its declarative semantics, by making use of logical conjunctions and disjunctions, logic programming languages present two types of implicit parallelism: and-parallelism and or-parallelism. This thesis focuses mainly on Prolog as a logic programming language, bringing out an abstract model for parallel execution of Prolog programs, leveraging the Extended Andorra Model (EAM) proposed by David H.D. Warren, which exploits the implicit parallelism in the programming language. A meta-compiler implementation for an intermediate language for the proposed model is also presented. This work also presents a survey on the state of the art relating to implemented Prolog compilers, either sequential or parallel, along with a walk-through of the current parallel programming frameworks. The main model used for Prolog compiler implementation, the Warren Abstract Machine (WAM), is also analyzed, as well as the WAM’s successor for supporting parallelism, the EAM; Sumário: Um Modelo Abstracto para Execução Paralela de Prolog A programação em lógica tem sido utilizada em diversas áreas, desde aplicações de inteligência artificial até aplicações de uso genérico, com grande sucesso. Pela sua semântica declarativa, fazendo uso de conjunções e disjunções lógicas, as linguagens de programação em lógica possuem dois tipos de paralelismo implícito: ou-paralelismo e e-paralelismo. Esta tese foca-se em particular no Prolog como linguagem de programação em lógica, apresentando um modelo abstracto para a execução paralela de programas em Prolog, partindo do Extended Andorra Model (EAM) proposto por David H.D. Warren, que tira partido do paralelismo implícito na linguagem. É apresentada uma implementação de um meta-compilador para uma linguagem intermédia para o modelo proposto.
É feita uma revisão sobre o estado da arte em termos de implementações sequenciais e paralelas de compiladores de Prolog, em conjunto com uma visita pelas linguagens para implementação de sistemas paralelos. É feita uma análise ao modelo principal para implementação de compiladores de Prolog, a Warren Abstract Machine (WAM) e da sua evolução para suportar paralelismo, a EAM.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Engelhardt, Dean. „A generalized execution model for nested data-parallel computing /“. Title page, contents and preface only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phe57.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Gustavsson, Andreas. „Static Timing Analysis of Parallel Systems Using Abstract Execution“. Licentiate thesis, Mälardalens högskola, Inbyggda system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26125.

Der volle Inhalt der Quelle
Annotation:
The Power Wall has stopped the past trend of increasing processor throughput by increasing the clock frequency and the instruction-level parallelism. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g. a bus, to that memory and also all higher levels of memory), and to fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness is dependent both on its functional and temporal output. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is of utmost importance that methods to analyze and derive safe estimations on the timing properties of parallel computer systems are developed. This thesis presents an analysis that derives safe (lower and upper) bounds on the execution time of a given parallel system. The interface to the analysis is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The analysis is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although over-approximative) way. Basically, abstract execution simulates several real executions of the analyzed program in one go. The thesis also proves the soundness of the presented analysis (i.e. that the estimated timing bounds are indeed safe) and includes some examples, each showing different features or characteristics of the analysis.
Worst-Case Execution Time Analysis of Parallel Systems
RALF3 - Software for Embedded High Performance Architectures
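The notion of safe lower and upper execution-time bounds can be illustrated with a toy interval analysis: each statement contributes a [BCET, WCET] cycle interval, sequential composition adds intervals, and a branch takes the union of its arms' bounds. This is a deliberately simplified sketch, far weaker than the thesis's abstract execution of communicating threads:

```python
# Toy timing-bound analysis. Programs are nested tuples; each leaf
# carries a (best-case, worst-case) cycle interval. The derived bounds
# are "safe" in the sense that every concrete execution time of the
# modeled program lies inside the returned interval.
def bounds(prog):
    kind = prog[0]
    if kind == "stmt":                  # ("stmt", bcet, wcet)
        return prog[1], prog[2]
    if kind == "seq":                   # ("seq", p1, p2, ...): intervals add
        lo = hi = 0
        for p in prog[1:]:
            b, w = bounds(p)
            lo, hi = lo + b, hi + w
        return lo, hi
    if kind == "if":                    # ("if", then_p, else_p): either arm may run
        bt, wt = bounds(prog[1])
        be, we = bounds(prog[2])
        return min(bt, be), max(wt, we)
    raise ValueError(kind)
```

For example, a statement costing 1..2 cycles followed by a branch whose arms cost 3..5 and 2..8 cycles yields the safe interval (3, 10).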
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Karl, Holger. „Responsive Execution of Parallel Programs in Distributed Computing Environments“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 1999. http://dx.doi.org/10.18452/14455.

Der volle Inhalt der Quelle
Annotation:
Vernetzte Standardarbeitsplatzrechner (sog. Cluster) sind eine attraktive Umgebung zur Ausführung paralleler Programme; für einige Anwendungsgebiete bestehen jedoch noch immer ungelöste Probleme. Ein solches Problem ist die Verlässlichkeit und Rechtzeitigkeit der Programmausführung: In vielen Anwendungen ist es wichtig, sich auf die rechtzeitige Fertigstellung eines Programms verlassen zu können. Mechanismen zur Kombination dieser Eigenschaften für parallele Programme in verteilten Rechenumgebungen sind das Hauptanliegen dieser Arbeit. Zur Behandlung dieses Anliegens ist eine gemeinsame Metrik für Verlässlichkeit und Rechtzeitigkeit notwendig. Eine solche Metrik ist die Responsivität, die für die Bedürfnisse dieser Arbeit verfeinert wird. Als Fallstudie werden Calypso und Charlotte, zwei Systeme zur parallelen Programmierung, im Hinblick auf Responsivität untersucht und auf mehreren Abstraktionsebenen werden Ansatzpunkte zur Verbesserung ihrer Responsivität identifiziert. Lösungen für diese Ansatzpunkte werden zu allgemeineren Mechanismen für (parallele) responsive Dienste erweitert. Im Einzelnen handelt es sich um 1. eine Analyse der Responsivität von Calypsos "eager scheduling" (ein Verfahren zur Lastbalancierung und Fehlermaskierung), 2. die Behebung eines "single point of failure", zum einen durch eine Responsivitätsanalyse von Checkpointing, zum anderen durch ein auf Standardschnittstellen basierendes System zur Replikation bestehender Software, 3. ein Verfahren zur garantierten Ressourcenzuteilung für parallele Programme und 4. die Einbeziehung semantischer Information über das Kommunikationsmuster eines Programms in dessen Ausführung zur Verbesserung der Leistungsfähigkeit. Die vorgeschlagenen Mechanismen sind kombinierbar und für den Einsatz in Standardsystemen geeignet. Analyse und Experimente zeigen, dass diese Mechanismen die Responsivität passender Anwendungen verbessern.
Clusters of standard workstations have been shown to be an attractive environment for parallel computing. However, there remain unsolved problems to make them suitable to some application scenarios. One of these problems is a dependable and timely program execution: There are many applications in which a program should be successfully completed at a predictable point of time. Mechanisms to combine the properties of both dependable and timely execution of parallel programs in distributed computing environments are the main objective of this dissertation. Addressing these properties requires a joint metric for dependability and timeliness. Responsiveness is such a metric; it is refined for the purposes of this work. As a case study, Calypso and Charlotte, two parallel programming systems, are analyzed and their shortcomings on several abstraction levels with regard to responsiveness are identified. Solutions for them are presented and generalized, resulting in widely applicable mechanisms for (parallel) responsive services. Specifically, these solutions are: 1) a responsiveness analysis of Calypso's eager scheduling (a mechanism for load balancing and fault masking), 2) ameliorating a single point of failure by a responsiveness analysis of checkpointing and by a standard interface-based system for replication of legacy software, 3) managing resources in a way suitable for parallel programs, and 4) using semantical information about the communication pattern of a program to improve its performance. All proposed mechanisms can be combined and are suitable for use in standard environments. It is shown by analysis and experiments that these mechanisms improve the responsiveness of eligible applications.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Torrão, João Nuno Delgado. „Control and execution of incremental forming using parallel kinematics“. Master's thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/11657.

Der volle Inhalt der Quelle
Annotation:
Mestrado em Engenharia Mecânica
O projeto SPIF-A é um verdadeiro desafio de engenharia: desenvolver uma máquina totalmente nova e inovadora para conformação plástica de chapa. Trata-se principalmente de um trabalho de equipa, que abrange várias áreas da engenharia mecânica, desde análise estrutural até automação e controlo, passando pela termodinâmica e cinemática, entre outras. Esta dissertação, sendo mais uma peça no puzzle, vai-se focar no seu desenvolvimento, principalmente no estudo da cinemática inversa e directa da plataforma de Stewart, assim como no desenvolvimento do primeiro sistema de controlo de posição. O referido sistema é um controlador de lógica difusa e será implementado através de software num computador de processamento em tempo real. Durante o desenvolvimento destes componentes também foram optimizados e/ou actualizados os sistemas hidráulicos, eléctricos e mecânicos da máquina, assim como se implementou e calibrou um sistema de medição de forças de trabalho recorrendo ao uso de células de carga.
The SPIF-A project is a true engineering challenge: to develop an entirely new and innovative machine for sheet metal forming. It is mostly a team effort, covering various engineering subjects from structural analysis to automation and control, but also thermodynamics and kinematics, among others. This dissertation, being another piece of that puzzle, will focus on machine development, namely on defining the machine's Stewart platform inverse kinematics, proposing a solution for the forward kinematics, and devising its first position control system. The referred system will be a fuzzy logic controller and will be implemented via software on a real-time targeting machine. During this work, several components of the hydraulic, electrical and mechanical systems were updated, and a force measuring system using load cells was installed and calibrated.
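Stewart-platform inverse kinematics, as studied in this dissertation, reduces to computing each leg length as the distance between a base anchor and the corresponding platform anchor transformed by the platform pose: |p + R·b_i − a_i|. A generic sketch (the anchor coordinates and the yaw-only rotation are illustrative assumptions, not the SPIF-A geometry):

```python
import math

def rot_z(yaw):
    """Rotation matrix about the z axis (illustrative pose parametrization)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def leg_lengths(base_pts, plat_pts, pos, yaw):
    """Inverse kinematics: length of leg i is |pos + R*b_i - a_i|,
    where a_i is a base anchor and b_i a platform anchor."""
    R = rot_z(yaw)
    lengths = []
    for a, b in zip(base_pts, plat_pts):
        world = [sum(R[r][k] * b[k] for k in range(3)) + pos[r] for r in range(3)]
        lengths.append(math.dist(world, a))
    return lengths
```

With the platform directly above the base and no rotation, each leg length is simply the vertical offset, which makes the sketch easy to sanity-check.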
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Phillips-Grafflin, Calder. „Enabling Motion Planning and Execution for Tasks Involving Deformation and Uncertainty“. Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/307.

Der volle Inhalt der Quelle
Annotation:
"A number of outstanding problems in robotic motion and manipulation involve tasks where degrees of freedom (DoF), be they part of the robot, an object being manipulated, or the surrounding environment, cannot be accurately controlled by the actuators of the robot alone. Rather, they are also controlled by physical properties or interactions - contact, robot dynamics, actuator behavior - that are influenced by the actuators of the robot. In particular, we focus on two important areas of poorly controlled robotic manipulation: motion planning for deformable objects and in deformable environments; and manipulation with uncertainty. Many everyday tasks we wish robots to perform, such as cooking and cleaning, require the robot to manipulate deformable objects. The limitations of real robotic actuators and sensors result in uncertainty that we must address to reliably perform fine manipulation. Notably, both areas share a common principle: contact, which is usually prohibited in motion planners, is not only sometimes unavoidable, but often necessary to accurately complete the task at hand. We make four contributions that enable robot manipulation in these poorly controlled tasks: First, an efficient discretized representation of elastic deformable objects and a cost function that assesses a 'cost of deformation' for a specific configuration of a deformable object, which together enable deformable object manipulation tasks to be performed without physical simulation. Second, a method using active learning and inverse-optimal control to build these discretized representations from expert demonstrations. Third, a motion planner and policy-based execution approach to manipulation with uncertainty which incorporates contact with the environment and compliance of the robot to generate motion policies which are then adapted during execution to reflect actual robot behavior. Fourth, work towards the development of an efficient path quality metric for paths executed with actuation uncertainty that can be used inside a motion planner or trajectory optimizer."
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Singh, Abhishek Jeffay Kevin. „Co-scheduling real-time tasks and non real-time tasks using empirical probability distribution of execution time requirements“. Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2724.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Mar. 10, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Ernsund, Tommy, und Ingels Linus Sens. „Load Balancing of Parallel Tasks using Memory Bandwidth Restrictions“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44379.

Der volle Inhalt der Quelle
Annotation:
Shared resource contention is a significant problem in multi-core systems and can have a negative impact on the system. Memory contention occurs when the different cores in a processor access the same memory resource, resulting in a conflict. It is possible to limit memory contention through resource reservation, where a part of the system or an application is reserved a partition of the shared resource. We investigated how applying memory bandwidth restrictions using MemGuard can aid in synchronizing the execution times of parallel tasks, and when such restrictions are applicable. We conducted three experiments. First, we pinpointed when memory bandwidth demand saturates a core. Second, we compared the performance of our adaptive memory partitioning scheme against static partitioning and no partitioning. Finally, we tested how our adaptive partitioning scheme and static partitioning can isolate a workload from an interfering, memory-intensive workload running on a separate core. As the experiments were conducted on only one system, pinpointing a general point of contention was difficult, since it can differ significantly from system to system. Our experiments show that memory bandwidth partitioning can decrease the execution time of feature detection algorithms, which means that it can potentially help threads reach their synchronization points simultaneously.
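MemGuard-style regulation can be pictured as a per-core budget of memory accesses that is replenished every regulation period; a core that exhausts its budget is throttled until the next period. A deliberately simplified simulation (the real MemGuard enforces budgets with per-core performance counters and interrupts, not a software loop):

```python
# Simulate per-core memory-bandwidth budgets over discrete time slots.
# demand[core][t] is how many accesses the core wants in slot t; each
# core may spend at most `budget` accesses per regulation period of
# `period` slots. Returns the accesses actually served per slot.
def regulate(demand, budget, period):
    served = [[0] * len(d) for d in demand]
    for core, d in enumerate(demand):
        remaining = budget
        for t, want in enumerate(d):
            if t % period == 0:
                remaining = budget        # replenish at period start
            grant = min(want, remaining)  # throttle once budget is spent
            served[core][t] = grant
            remaining -= grant
    return served
```

A core demanding 3 accesses per slot under a budget of 4 per 2-slot period is served 3, then 1, then 3, then 1 - the throttling that evens out progress across cores sharing the memory bus.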
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Wächter, Mirko [Verfasser]. „Learning and Execution of Object Manipulation Tasks on Humanoid Robots / Mirko Wächter“. Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Doktor, Eugeniusz. „Organizing the execution of transportation tasks under spatial, temporal and other constraints“. Thesis, University of Strathclyde, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260543.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Pettersson, Andreas. „Parallel Instruction Decoding for DSP Controllers with Decoupled Execution Units“. Thesis, Linköpings universitet, Datorteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157695.

Der volle Inhalt der Quelle
Annotation:
Applications run on embedded processors are constantly evolving. They are for the most part growing more complex, and the processors have to increase their performance to keep up. In this thesis, an embedded DSP SIMT processor with decoupled execution units is under investigation. A SIMT processor exploits the parallelism gained from issuing instructions to functional units or to decoupled execution units. In its basic form, only a single instruction is issued per cycle. If the control of the decoupled execution units becomes too fine-grained, or if the control burden of the master core becomes sufficiently high, the fetching and decoding of instructions can become a bottleneck of the system. This thesis investigates how to parallelize the instruction fetch, decode and issue process. Traditional parallel fetch and decode methods in superscalar and VLIW architectures are investigated, and the benefits and drawbacks of the two are presented and discussed. One superscalar design and one VLIW design are implemented in RTL, and their costs and performances are compared using a benchmark program and synthesis. Both the superscalar and the VLIW designs outperform a baseline scalar processor as expected, with the VLIW design performing slightly better than the superscalar design. The VLIW design is found to achieve a higher clock frequency, with an area comparable to that of the superscalar design. This thesis also investigates how instructions can be encoded to lower the decode complexity and increase the speed of issue to decoupled execution units. A number of possible encodings are proposed and discussed. Simulations show that the encodings can considerably lower the time spent issuing to decoupled execution units.
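A superscalar issue stage must check dependences between candidate instructions before issuing them in the same cycle. A minimal sketch of a dual-issue legality check (the instruction encoding here is made up for illustration; real issue logic also tracks structural hazards and in-flight results):

```python
# Can two instructions issue in the same cycle? Disallow a RAW hazard
# (the younger instruction reads the older one's destination) and a
# WAW hazard (both write the same register). Each instruction is a
# pair (dest_register, list_of_source_registers).
def can_dual_issue(older, younger):
    dest_old, _ = older
    dest_yng, srcs_yng = younger
    raw = dest_old in srcs_yng          # younger consumes older's result
    waw = dest_old == dest_yng          # both write the same register
    return not (raw or waw)
```

For example, `add r1, r2` followed by `mul r3, r1` cannot dual-issue (RAW on r1), while `add r1, r2` and `mul r3, r4` can.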
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Austin, Todd Michael. „Exploiting implicit parallelism in SPARC instruction execution /“. Online version of thesis, 1990. http://hdl.handle.net/1850/11007.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Qamhieh, Manar. „Scheduling of parallel real-time DAG tasks on multiprocessor systems“. Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1030/document.

Der volle Inhalt der Quelle
Annotation:
Les applications temps réel durs sont celles qui doivent exécuter en respectant des contraintes temporelles. L'ordonnancement temps réel a bien été étudié sur mono-processeurs depuis plusieurs années. Récemment, l'utilisation d'architectures multiprocesseurs a augmenté dans les applications industrielles et des architectures parallèles sont proposées pour que le logiciel devienne compatible avec ces plateformes. L'ordonnancement multiprocesseurs de tâches parallèles dépendantes n'est pas une simple généralisation du cas mono-processeur et la problématique d'ordonnancement devient plus complexe et difficile. Dans cette thèse, nous étudions le problème d'ordonnancement temps réel de graphes de tâches parallèles acycliques sur des plateformes multiprocesseurs. Dans ce modèle, un graphe est composé d'un ensemble de sous-tâches dépendantes sous contraintes de précédence qui expriment les relations de précédences entre les sous-tâches. L'ordre d'exécution des sous-tâches est dynamique, c'est-à-dire que les sous-tâches peuvent s'exécuter en parallèle ou séquentiellement par rapport aux décisions de l'ordonnanceur temps réel. Pour traiter les contraintes de précédence, nous proposons deux méthodes pour l'ordonnancement des graphes : par transformation du modèle de graphe de sous tâches parallèles en un modèle de tâches séquentielles indépendantes, plus simple à ordonnancer et par ordonnancement direct des graphes en prenant en compte les relations de dépendance entre les sous-tâches. Nous proposons un ordonnancement des graphes en prenant directement en compte les paramètres temporels des graphes et un ordonnancement au niveau des sous-tâches, par rapport à des paramètres temporels attribués aux sous-tâches par un algorithme spécifique. Enfin, nous prouvons que les deux méthodes d'ordonnancement de graphes ne sont pas comparables. Nous fournissons alors des résultats de simulation pour comparer ces méthodes en utilisant les algorithmes d'ordonnancement globaux EDF et DM. 
Nous avons développé un logiciel nommé YARTISS pour générer des graphes aléatoires et réaliser les simulations.
The interest in multiprocessor systems has recently increased in industrial applications, and parallel programming APIs have been introduced to benefit from the new processing capabilities. The use of multiprocessors for real-time systems, whose execution must respect certain temporal constraints, is now investigated by industry. The real-time scheduling problem becomes more complex and challenging in that context. In multiprocessor systems, a hard real-time scheduler is responsible for allocating ready jobs to the available processors of the system while respecting their timing parameters. In this thesis, we study the problem of real-time scheduling of parallel Directed Acyclic Graph (DAG) tasks on homogeneous multiprocessor systems. In this model, a DAG task consists of a set of subtasks that execute under precedence constraints. At all times, the real-time scheduler is responsible for determining how subtasks execute, either sequentially or in parallel, based on the available processors of the system. We propose two DAG scheduling approaches to determine the execution form of DAG tasks. The first is the DAG Stretching algorithm, from the Model Transformation approach, which forces DAG tasks to execute as sequentially as possible. The second is Direct Scheduling, which aims at scheduling DAG tasks while respecting their internal dependencies. We provide real-time schedulability analyses for Direct Scheduling at DAG level and at subtask level. Due to the incomparability of the DAG scheduling approaches, we use extensive simulations to compare the performance of global EDF with global DM scheduling using our simulation tool YARTISS.
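The core scheduling problem here - subtasks under precedence constraints on m identical processors - can be illustrated with a greedy list scheduler that computes the makespan of a single DAG job. This is a textbook sketch, not the thesis's DAG Stretching or Direct Scheduling algorithms:

```python
import heapq

# Greedy list scheduling of one DAG job on m processors: a subtask
# becomes ready when all its predecessors have finished, and ready
# subtasks run on the earliest-free processor. wcet maps subtask ->
# execution time; preds maps subtask -> set of predecessors.
def makespan(wcet, preds, m):
    indeg = {v: len(preds[v]) for v in wcet}
    ready = [(0, v) for v in wcet if indeg[v] == 0]  # (release time, subtask)
    heapq.heapify(ready)
    procs = [0] * m                     # next free time of each processor
    finish = {}
    while ready:
        release, v = heapq.heappop(ready)
        k = min(range(m), key=lambda i: procs[i])     # earliest-free processor
        start = max(release, procs[k])
        finish[v] = start + wcet[v]
        procs[k] = finish[v]
        for w in wcet:                  # release successors of v
            if v in preds[w]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    heapq.heappush(ready, (max(finish[u] for u in preds[w]), w))
    return max(finish.values())
```

For a fork-join DAG a→{b, c}→d with costs 1, 2, 2, 1, the makespan is 4 on two processors (b and c run in parallel) but 6 on one processor, showing how the available processors determine whether subtasks execute in parallel or sequentially.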
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

van, Paridon Kjell N. „The influence of stress on visual attention and performance execution in aiming tasks“. Thesis, Anglia Ruskin University, 2018. http://arro.anglia.ac.uk/703812/.

Der volle Inhalt der Quelle
Annotation:
This thesis examines the endocrine response in naturalistic sport environments and laboratory based stress manipulations to investigate the role of anxiety and biological stress on visual search behaviour and movement execution in perceptual-motor skills. The first study, a systematic review with meta-analysis, identified that athletes experience a significant cortisol response in anticipation to sport competition. Moderator analysis identified that females and athletes competing at international level do not demonstrate this anticipatory cortisol response. Study two, a validation of a golf putting task with a pressure manipulation including self-presentation and performance contingent motivational stimuli, identified distinct inter-individual differences in HPA-axis reactivity, in contrast to SNS reactivity. Responders demonstrated a significant increase in cortisol, in magnitude comparable to real sport competitions, where this was absent in non-responders. Non-significant correlations were found between endocrine reactivity and self-reported measures of anxiety, supporting previous research of the independence of the biological and emotional stress response. The effects of anxiety and endocrine reactivity on performance, visual attention and movement execution in a golf putting task were examined in study three and four. Study three identified that performance accuracy significantly improved under high pressure compared to low pressure. This improvement in performance was explained by a significant reduction in visual attention towards task-irrelevant stimuli and reduced variability in the club head angle at ball impact. Study 4 explored the effects of inter-individual differences in endocrine reactivity on the underlying processes of golf putting performance. Participants with high levels of cortisol were significantly less accurate in performance outcome compared to participants with low cortisol. 
A significant increase in visual attention towards task-irrelevant stimuli in participants with high cortisol provided support for the influence of cortisol on the stimulus-driven attentional system when executing perceptual-motor skills under pressure. The interdisciplinary approach to the examination of stress and anxiety in sport performance suggests that both anxiety and cortisol reactivity affect sport performance through their influence on visual attention and movement execution. The inter-individual differences in cortisol reactivity, and their effect on movement execution and visual attention, warrant further investigation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Haberland, Valeriia. „Strategies for the execution of long-term continuous and simultaneous tasks in grids“. Thesis, King's College London (University of London), 2015. http://kclpure.kcl.ac.uk/portal/en/theses/strategies-for-the-execution-of-longterm-continuous-and-simultaneous-tasks-in-grids(d3effb24-4899-4f4b-a4f5-b856a48831bb).html.

Der volle Inhalt der Quelle
Annotation:
Increasingly large amounts of computing resources are required to execute resource intensive, continuous and simultaneous tasks. For instance, automated monitoring of temperature within a building is necessary for maintaining comfortable conditions for people, and it has to be continuous and simultaneous for all rooms in the building. Such monitoring may function for months or even years. Continuity means that a task has to produce results in a real-time manner without significant interruptions, while simultaneity means that tasks have to be run at the same time because of data dependencies. Although a Grid environment has a large amount of computational resources, they might be scarce at times due to high demand and resources occasionally may fail. A Grid might be unable or unwilling to commit to providing clients’ tasks with resources for long durations such as years. Therefore, each task will be interrupted sooner or later, and our goal is to reduce the durations and number of interruptions. To find a mutually acceptable compromise, a client and Grid resource allocator (GRA) negotiate over time slots of resource utilisation. Assuming a client is not aware of resource availability changes, it can infer this information from the GRA’s proposals. The resource availability is considered to change near-periodically over time, which can be utilised by a client. We developed a client’s negotiation strategy, which can adapt to the tendencies in resource availability changes using fuzzy control rules. A client might become more generous towards the GRA, if there is a risk of resource exhaustion or the interruption (current or total) is too long. A client may also ask for a shorter task execution, if this execution ends around the maximum resource availability. In addition, a task re-allocation algorithm is introduced for inter-dependent tasks, when one task can donate its resources to another one.
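The negotiation behavior sketched above - the client concedes more when resources risk exhaustion or when interruptions grow too long - can be caricatured with simple graded rules. A hypothetical sketch only, not the dissertation's fuzzy controller (the membership grades, rule base, and scaling factor are invented for illustration):

```python
# Toy concession rule base: map normalized inputs in [0, 1] (risk of
# resource exhaustion, length of the current interruption) to a
# concession step added to the client's next offer. The linear grades
# mimic fuzzy "low/high" memberships; max acts as the rules' OR.
def concession_step(exhaustion_risk, interruption, base_step=0.05):
    high_risk = exhaustion_risk           # grade of "risk is high"
    long_gap = interruption               # grade of "interruption is long"
    # Rule 1: if risk of exhaustion is high, become more generous.
    # Rule 2: if the interruption is too long, become more generous.
    strength = max(high_risk, long_gap)   # OR of the two rules
    return base_step * (1 + 4 * strength) # scale the step up to 5x
```

With no risk and no interruption the client concedes only the base step; as either input approaches 1, the concession grows, mirroring the "more generous towards the GRA" behavior described in the abstract.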
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Sander, Samuel Thomas. „Retargetable compilation for variable-grain data-parallel execution in image processing“. Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13850.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Botadra, Harnish. „iC2mpi a platform for parallel execution of graph-structured iterative computations /“. unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-07252006-165725/.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--Georgia State University, 2006.
Title from title screen. Sushil Prasad, committee chair. Electronic text (106 p. : charts) : digital, PDF file. Description based on contents viewed June 11, 2007. Includes bibliographical references (p. 61-53).
APA, Harvard, Vancouver, ISO, and other citation styles
37

Drakos, Nikos. „Sequential and parallel execution of logic programs with dependency directed backtracking“. Thesis, University of Leeds, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277238.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
38

Gopinath, Prabha Shankar. „Programming and execution of object-based, parallel, hard, real-time applications /“. The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487592050228249.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
39

Pout, Mike. „Performance evaluation of an associative processor array for computer vision tasks“. Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358020.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
40

Xirogiannis, George. „Execution of Prolog by transformations on distributed memory multi-processors“. Thesis, Heriot-Watt University, 1998. http://hdl.handle.net/10399/639.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
41

Chen, Lin. „Process migration and runtime scheduling for parallel tasks in computational grids“. Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38574172.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
42

Chen, Lin, und 陳琳. „Process migration and runtime scheduling for parallel tasks in computational grids“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38574172.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
43

Haugli, Fredrik Bakkevig. „Using online worst-case execution time analysis and alternative tasks in real time systems“. Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-26100.

The full text of the source
Annotation:
As embedded hardware becomes more powerful, it allows for more complex real-time systems running tasks with highly dynamic execution times. This dynamicity makes the already formidable task of producing accurate WCET analysis even more difficult. Since the variation in execution time depends on task input and the state of the system, it is postulated that a more accurate estimate for the WCET can be found online with knowledge about the task parameters. This thesis will explore the concept of online execution time analysis and its potential utilization. Line detection in images through the Hough line transform is found to be a relevant application whose execution time can be estimated by the contrast of the input image. A system for scheduling tasks utilizing their online WCET estimate is then discussed. It dynamically checks for potential deadline misses and degrades tasks, either by running a more efficient alternative task instead or by aborting the task, until timely execution is guaranteed. An experiment is presented, demonstrating a higher throughput of tasks with online WCET estimation. Finally, the work on a framework for more precise simulations and experiments is presented.
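The degradation scheme described above (fall back to a cheaper alternative task, or abort, when the online WCET estimate predicts a deadline miss) can be sketched roughly as follows; all names and numbers are illustrative, not taken from the thesis:

```python
def choose_variant(now, deadline, variants, wcet_estimate):
    """Pick the first (most preferred) task variant whose online WCET
    estimate still fits before the deadline; None means abort.

    `variants` is ordered from the full-quality task to cheaper
    alternatives; `wcet_estimate` maps a variant to an input-dependent
    WCET estimate, e.g. derived from the contrast of the input image.
    """
    for task in variants:
        if now + wcet_estimate(task) <= deadline:
            return task
    return None  # no variant meets the deadline, so the task is aborted
```

For example, with a full Hough transform estimated at 50 ms and a coarse variant at 20 ms, a release 30 ms before the deadline would select the coarse variant.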
APA, Harvard, Vancouver, ISO, and other citation styles
44

Carson, John Anthony. „T-Brave : a transputer based OR-parallel execution model with blackboard support“. Thesis, University of Essex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294672.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
45

Dunlop, Alistair Neil. „Estimating the execution time of Fortran programs on distributed memory, parallel computers“. Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242759.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
46

Ramanujam, Jagannathan. „Compile-time techniques for parallel execution of loops on distributed memory multiprocessors /“. The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487686243819778.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
47

Zahaf, Houssam-Eddine. „Energy efficient scheduling of parallel real-time tasks on heterogeneous multicore systems“. Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10100/document.

The full text of the source
Annotation:
Cyber-physical systems (CPS) and the Internet of Things (IoT) generate an unprecedented volume and variety of data that needs to be collected and stored on the cloud before being processed. By the time the data makes its way to the cloud for analysis, the opportunity to trigger a reply might be lost. One approach to solve this problem is to analyze the most time-sensitive data at the network edge, close to where it is generated, so that only the pre-processed results are sent to the cloud. This computation model is known as *Fog Computing* or *Edge Computing*. Critical CPS applications using the fog computing model may have real-time constraints, because results must be delivered within a pre-determined time window. Furthermore, in many relevant applications of CPS, the processing can be parallelized by applying the same processing to different subsets of the data at the same time by means of parallel programming techniques. This allows a shorter response time to be achieved and, in turn, a larger slack time, which can be used to reduce energy consumption by lowering the processor frequency and/or shutting down parts of the processor. In this thesis we focus on the problem of scheduling a set of parallel real-time tasks on multicore processors, with the goal of reducing the energy consumption while all deadlines are met. We propose several realistic task models on architectures with identical and heterogeneous cores, and we develop algorithms for allocating threads to processors, selecting core frequencies and processor states, and performing schedulability analysis. The proposed task models can be implemented as OpenMP-like directives.
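The core trade-off in the abstract above, using the slack created by parallel execution to lower the core frequency, can be illustrated with a simplified sketch. It assumes execution time scales inversely with frequency, which is a simplification of the models in the thesis, and all names are illustrative:

```python
def lowest_feasible_frequency(wcet_at_fmax, deadline, frequencies):
    """Return the lowest core frequency (in GHz) at which a task still
    meets its deadline, assuming execution time scales as f_max / f.

    Running at a lower frequency saves energy, since dynamic power
    grows roughly cubically with frequency.
    """
    f_max = max(frequencies)
    for f in sorted(frequencies):          # try slowest (cheapest) first
        if wcet_at_fmax * f_max / f <= deadline:
            return f
    return None  # not schedulable even at the maximum frequency
```

A larger slack (deadline minus minimum execution time) lets the loop stop at a lower frequency, which is exactly the opportunity that parallelizing the task creates.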
APA, Harvard, Vancouver, ISO, and other citation styles
48

Babbar, Davender. „On-line hard real-time scheduling of parallel tasks on partitionable multiprocessors /“. The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487858417983777.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
49

Tau, Sethunya Harriet Hlobisa. „An analysis of regulatory mechanisms during sustained task execution in cognitive, motor and sensory tasks“. Thesis, Rhodes University, 2013. http://hdl.handle.net/10962/d1006806.

The full text of the source
Annotation:
Fatigue is a state that, although researched for many years, is still not completely understood. Alongside this lack of a general understanding of fatigue is a lack of knowledge on the processes involved in the regulation of fatigue. The existing theories relating to regulation are focussed on mental effort regulation, suggesting that performance outcomes are co-ordinated by effort regulation that functions by making alterations to physiological processes and strategic adjustments at a cognitive level in response to cognitive demands and goals. Since fatigue is a multi-dimensional construct with psychological, physiological, and behavioural effects that respond to endogenous and exogenous variables, it follows then that fatigue assessment techniques ought to include multi-dimensional measures to acquire a holistic depiction of the fatigue symptom. This study aimed to assess whether or not a mechanism that regulated fatigue during sustained task execution could be identified and whether this mechanism resulted in regulation patterns that were distinct to a specific task. An additional aim of the study was to assess whether the manner in which performance, psychophysical and subjective variables were modified over time followed a similar regulation pattern. The research design was aimed at inducing task-related fatigue twice on two different occasions in the same participants and evaluating the resultant changes in fatigue manifestation. This was done to assess the ability of participants to cope with fatigue as a result of previous experience. The research protocol included three tasks, each executed for an hour and performed twice, aimed at targeting and taxing the sensory, cognitive, and motor resources. Sixty participants were recruited for the current study, with 20 participants (10 males and 10 females) randomly assigned to each of the three tasks.
The cognitive resource task consisted of a memory recall task relying on working memory, intended to evaluate the extent of reductions in memory and attention. The sensory resource task consisted of a reading task measuring visual scanning and perception, designed to evaluate the extent of reduced vigilance. The motor resource task consisted of a modified Fitts’ stimulus response task targeted at monitoring the extent of movement timing disruption. Performance measures comprised: response delay and the number of correctly identified digits during the cognitive resource task, the number of correctly identified errors and reading speed during the sensory resource task, response time during the motor resource task, and responses to simple auditory reaction time tests (RTT) initiated at intervals during the task and then again at the end of each task. Physiological measures included ear temperature, eye blink frequency and duration, heart rate (HR), and heart rate variability (HRV). Subjective measures included the use of the Ratings of Perceived Exertion Category Ratio 10 scale (RPE CR 10) to measure cognitive exertion and the NASA-Task Load Index (NASA-TLX) to index mental workload. Eye blink frequency and duration, HR and HRV were sensitive to the type of task executed, showing differing response patterns both over the different tasks and over the two test sessions. The subjective measures indicated increasing RPE ratings over time in all tasks, while the NASA-TLX indicated that each task elicited different workloads. Differing task performance responses were measured between the 1st test session and the 2nd test session during all tasks; while performance was found to improve during the 2nd test session for the motor and sensory tasks, it declined during the cognitive task.
The findings of this research indicate that there was a regulatory mechanism for fatigue that altered the manner in which performance, psychophysical and subjective variables were modified over time, initiating a unique fatigue regulation pattern for each variable and each task. This regulation mechanism is understood to be a proactive and protective mechanism that functions through reducing a person’s ability to be vigilant, attentive, to exercise discernment, and to direct their level of responsiveness, essentially impacting how the body adapts to and copes with fatigue. The noted overall findings have industry implications; industries should consider accounting for the effects of this regulatory mechanism in their fatigue management interventions, specifically when designing job rotation and work/rest schedules because each cognitive task, having elicited a unique fatigue regulation pattern, ought to also have a different management program.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Große, Philipp, Norman May und Wolfgang Lehner. „A Study of Partitioning and Parallel UDF Execution with the SAP HANA Database“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-144026.

The full text of the source
Annotation:
Large-scale data analysis relies on custom code both for preparing the data for analysis as well as for the core analysis algorithms. The map-reduce framework offers a simple model to parallelize custom code, but it does not integrate well with relational databases. Likewise, the literature on optimizing queries in relational databases has largely ignored user-defined functions (UDFs). In this paper, we discuss annotations for user-defined functions that facilitate optimizations that both consider relational operators and UDFs. We believe this to be the superior approach compared to just linking map-reduce evaluation to a relational database because it enables a broader range of optimizations. In this paper we focus on optimizations that enable the parallel execution of relational operators and UDFs for a number of typical patterns. A study on real-world data investigates the opportunities for parallelization of complex data flows containing both relational operators and UDFs.
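The pattern the paper studies, evaluating a partitionable UDF independently on each partition of its input, can be approximated outside the database with a short sketch. The function names are hypothetical; SAP HANA's actual mechanism is annotation-driven inside the query optimizer rather than explicit code like this:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_udf(rows, udf, partitions=4):
    """Apply a row-wise UDF over round-robin partitions of the input in
    parallel, then concatenate the per-partition results (so the output
    order follows partitions, as with an unordered parallel scan)."""
    parts = [rows[i::partitions] for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        chunks = pool.map(lambda part: [udf(r) for r in part], parts)
    return [out for chunk in chunks for out in chunk]
```

The annotation-based approach in the paper lets the optimizer decide, per UDF, whether such partition-wise evaluation is legal and how to re-partition the data between relational operators and UDFs.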
APA, Harvard, Vancouver, ISO, and other citation styles