Academic literature on the topic 'Parallel execution of tasks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel execution of tasks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parallel execution of tasks"

1

TAN, KIAN-LEE, and HONGJUN LU. "ON PROCESSING MULTI-JOINS IN PARALLEL SYSTEMS." Parallel Processing Letters 01, no. 02 (December 1991): 157–64. http://dx.doi.org/10.1142/s0129626491000112.

Full text
Abstract:
In parallel systems, a number of joins from one or more queries can be executed either serially or in parallel. While serial execution assigns all processors to execute each join one after another, parallel execution distributes the joins to clusters formed by certain numbers of processors and executes them concurrently. However, data skew may result in load imbalance among processors executing the same join and some clusters may be overloaded with more expensive joins. As a result, the completion time will be much longer than what is expected. In this paper, we propose an algorithm to further minimize the completion time of concurrently executed multiple joins. For this algorithm, all the joins to be executed concurrently are decomposed into a set of tasks that are ordered according to decreasing task size. These tasks are dynamically acquired by available processors during execution. Our performance study shows that the proposed algorithm outperforms previously proposed approaches, especially when the number of processors increases, the relations are highly skewed and relation sizes are large.
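The decomposition-and-acquisition scheme this abstract describes, tasks ordered by decreasing size and dynamically grabbed by whichever processor becomes free, behaves like longest-processing-time-first list scheduling. A minimal sketch under that reading (the function name and the uniform per-task cost model are illustrative assumptions, not the paper's algorithm):

```python
import heapq

def schedule_decreasing(task_sizes, num_procs):
    """Greedy self-scheduling sketch: sort tasks by decreasing size,
    then let the first available processor acquire the next task."""
    tasks = sorted(task_sizes, reverse=True)   # decreasing task size
    free_at = [0.0] * num_procs                # time each processor becomes free
    heapq.heapify(free_at)
    for t in tasks:
        earliest = heapq.heappop(free_at)      # first processor to become idle
        heapq.heappush(free_at, earliest + t)  # it dynamically acquires the task
    return max(free_at)                        # completion time (makespan)
```

Sorting largest-first matters under skew: the expensive tasks are placed early, and small tasks fill in the load imbalance at the end.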
APA, Harvard, Vancouver, ISO, and other styles
2

ZOMAYA, ALBERT Y., and GERARD CHAN. "EFFICIENT CLUSTERING FOR PARALLEL TASKS EXECUTION IN DISTRIBUTED SYSTEMS." International Journal of Foundations of Computer Science 16, no. 02 (April 2005): 281–99. http://dx.doi.org/10.1142/s0129054105002991.

Full text
Abstract:
The scheduling problem deals with the optimal assignment of a set of tasks to processing elements in a distributed system such that the total execution time is minimized. One approach for solving the scheduling problem is task clustering. This involves assigning tasks to clusters where each cluster is run on a single processor. This paper aims to show the feasibility of using Genetic Algorithms for task clustering to solve the scheduling problem. Genetic Algorithms are robust optimization and search techniques that are used in this work to solve the task-clustering problem. The proposed approach shows great promise to solve the clustering problem for a wide range of clustering instances.
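As a rough illustration of GA-based task clustering (not the paper's actual encoding or operators), one can evolve chromosomes that map each task to a cluster and score them by the heaviest cluster's total load:

```python
import random

def ga_cluster(task_costs, num_clusters, generations=200, pop_size=30, seed=0):
    """Toy genetic algorithm: a chromosome assigns each task to a cluster;
    fitness is the makespan (max cluster load), lower being better."""
    rng = random.Random(seed)
    n = len(task_costs)

    def makespan(chrom):
        loads = [0] * num_clusters
        for task, cluster in enumerate(chrom):
            loads[cluster] += task_costs[task]
        return max(loads)

    # random initial population of task-to-cluster assignments
    pop = [[rng.randrange(num_clusters) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                  # elitism: fitter half survives
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(num_clusters)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)
```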
APA, Harvard, Vancouver, ISO, and other styles
3

Oz, Isil, Muhammad Khurram Bhatti, Konstantin Popov, and Mats Brorsson. "Regression-Based Prediction for Task-Based Program Performance." Journal of Circuits, Systems and Computers 28, no. 04 (March 31, 2019): 1950060. http://dx.doi.org/10.1142/s0218126619500609.

Full text
Abstract:
As multicore systems evolve by increasing the number of parallel execution units, parallel programming models have been released to exploit parallelism in the applications. Task-based programming model uses task abstractions to specify parallel tasks and schedules tasks onto processors at runtime. In order to increase the efficiency and get the highest performance, it is required to identify which runtime configuration is needed and how processor cores must be shared among tasks. Exploring design space for all possible scheduling and runtime options, especially for large input data, becomes infeasible and requires statistical modeling. Regression-based modeling determines the effects of multiple factors on a response variable, and makes predictions based on statistical analysis. In this work, we propose a regression-based modeling approach to predict the task-based program performance for different scheduling parameters with variable data size. We execute a set of task-based programs by varying the runtime parameters, and conduct a systematic measurement for influencing factors on execution time. Our approach uses executions with different configurations for a set of input data, and derives different regression models to predict execution time for larger input data. Our results show that regression models provide accurate predictions for validation inputs with mean error rate as low as 6.3%, and 14% on average among four task-based programs.
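The core idea, fitting a regression on runs with small inputs and extrapolating execution time to a larger input, can be sketched with a single-factor least-squares model (the paper fits multi-factor models over scheduler parameters; the sample measurements below are made up):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a + b*x (single-factor sketch)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    a = my - b * mx
    return a, b

# hypothetical training runs: input sizes vs. measured execution times (s)
sizes = [1e6, 2e6, 4e6, 8e6]
times = [0.11, 0.20, 0.41, 0.82]
a, b = fit_linear(sizes, times)
predicted = a + b * 16e6   # extrapolate to a larger input size
```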
APA, Harvard, Vancouver, ISO, and other styles
4

Agrawal, Amrit, and Pranay Chaudhuri. "An Algorithm for Task Scheduling in Heterogeneous Distributed Systems Using Task Duplication." International Journal of Grid and High Performance Computing 3, no. 1 (January 2011): 89–97. http://dx.doi.org/10.4018/jghpc.2011010105.

Full text
Abstract:
Task scheduling in heterogeneous parallel and distributed computing environment is a challenging problem. Applications identified by parallel tasks can be represented by directed-acyclic graphs (DAGs). Scheduling refers to the assignment of these parallel tasks on a set of bounded heterogeneous processors connected by high speed networks. Since task assignment is an NP-complete problem, instead of finding an exact solution, scheduling algorithms are developed based on heuristics, with the primary goal of minimizing the overall execution time of the application or schedule length. In this paper, the overall execution time (schedule length) of the tasks is reduced using task duplication on top of the Critical-Path-On-a-Processor (CPOP) algorithm.
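CPOP-style heuristics prioritize DAG tasks by their critical-path distance to the exit task (the "upward rank"). A minimal sketch of that ranking step, with communication costs omitted and task names hypothetical:

```python
def upward_rank(succ, cost):
    """Upward rank of each DAG task: its own cost plus the longest
    path of successor costs down to the exit task."""
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = cost[t] + max((rank(s) for s in succ.get(t, [])), default=0)
        return memo[t]
    return {t: rank(t) for t in cost}
```

Tasks on the critical path share the highest rank; duplication-based schedulers then copy such tasks onto multiple processors to avoid waiting on inter-processor communication.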
APA, Harvard, Vancouver, ISO, and other styles
5

Brecht, Timothy, Xiaotie Deng, and Nian Gu. "Competitive Dynamic Multiprocessor Allocation for Parallel Applications." Parallel Processing Letters 07, no. 01 (March 1997): 89–100. http://dx.doi.org/10.1142/s0129626497000115.

Full text
Abstract:
We study dynamic multiprocessor allocation policies for parallel jobs, which allow the preemption and reallocation of processors to take place at any time. The objective is to minimize the completion time of the last job to finish executing (the makespan). We characterize a parallel job using two parameters: the job's parallelism, Pi, which is the number of tasks being executed in parallel by a job, and its execution time, li, when Pi processors are allocated to the job. The only information available to the scheduler is the parallelism of jobs. The job execution time is not known to the scheduler until the job's execution is completed. We apply the approach of competitive analysis to compare preemptive scheduling policies, and are interested in determining which policy achieves the best competitive ratio (i.e., is within the smallest constant factor of optimal). We devise an optimal competitive scheduling policy for scheduling two parallel jobs on P processors. Then, we apply the method to schedule N parallel jobs on P processors. Finally, we extend our work to incorporate jobs for which the number of parallel tasks changes during execution (i.e., jobs with multiple phases of parallelism).
APA, Harvard, Vancouver, ISO, and other styles
6

González, J. Solano, and D. I. Jonest. "Parallel computation of configuration space." Robotica 14, no. 2 (March 1996): 205–12. http://dx.doi.org/10.1017/s0263574700019111.

Full text
Abstract:
Many motion planning methods use Configuration Space to represent a robot manipulator's range of motion and the obstacles which exist in its environment. The Cartesian to Configuration Space mapping is computationally intensive and this paper describes how the execution time can be decreased by using parallel processing. The natural tree structure of the algorithm is exploited to partition the computation into parallel tasks. An implementation programmed in the occam2 parallel computer language running on a network of INMOS transputers is described. The benefits of dynamically scheduling the tasks onto the processors are explained and verified by means of measured execution times on various processor network topologies. It is concluded that excellent speed-up and efficiency can be achieved provided that proper account is taken of the variable task lengths in the computation.
APA, Harvard, Vancouver, ISO, and other styles
7

Hirata, Hiroaki, and Atsushi Nunome. "Decoupling Computation and Result Write-Back for Thread-Level Parallelization." International Journal of Software Innovation 8, no. 3 (July 2020): 19–34. http://dx.doi.org/10.4018/ijsi.2020070102.

Full text
Abstract:
Thread-level speculation (TLS) is an approach to enhance the opportunity for parallelization of programs. A TLS system enables multiple threads to begin executing tasks in parallel even if there may be dependencies between the tasks. When a dependency violation is detected, the TLS system forces the violating thread to abort and re-execute the task, so the frequency of aborts is one of the factors that damage the performance of speculative execution. This article proposes a new technique named code shelving, which frees threads from the need to abort. It is available not only for TLS but also as a flexible synchronization technique in conventional, non-speculative parallel execution. The authors implemented code shelving on their parallel execution system called Speculative Memory (SM) and verified its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
8

Dümmler, Jörg, Thomas Rauber, and Gudula Rünger. "Combined Scheduling and Mapping for Scalable Computing with Parallel Tasks." Scientific Programming 20, no. 1 (2012): 45–67. http://dx.doi.org/10.1155/2012/514940.

Full text
Abstract:
Recent and future parallel clusters and supercomputers use symmetric multiprocessors (SMPs) and multi-core processors as basic nodes, providing a huge amount of parallel resources. These systems often have hierarchically structured interconnection networks combining computing resources at different levels, starting with the interconnect within multi-core processors up to the interconnection network combining nodes of the cluster or supercomputer. The challenge for the programmer is that these computing resources should be utilized efficiently by exploiting the available degree of parallelism of the application program and by structuring the application in a way which is sensitive to the heterogeneous interconnect. In this article, we pursue a parallel programming method using parallel tasks to structure parallel implementations. A parallel task can be executed by multiple processors or cores and, for each activation of a parallel task, the actual number of executing cores can be adapted to the specific execution situation. In particular, we propose a new combined scheduling and mapping technique for parallel tasks with dependencies that takes the hierarchical structure of modern multi-core clusters into account. An experimental evaluation shows that the presented programming approach can lead to a significantly higher performance compared to standard data parallel implementations.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Guo Quan, Guo Qing Shi, Hao Guang Zhao, and Yong Hong Chen. "A Parallel Test Task Scheduling of Integrated Avionics System Based on the Ant Colony Algorithm." Applied Mechanics and Materials 713-715 (January 2015): 2069–72. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.2069.

Full text
Abstract:
Parallel testing is the key to efficient test execution, and its core is parallel test task scheduling: allocating resources fairly and reasonably to the test tasks and then rearranging their execution order while meeting the priority relationships among tasks and the resource constraints, so that the whole test mission of the system can be completed in the shortest possible time and test efficiency is improved. In this paper, the basic ant colony algorithm is improved to fit parallel test task scheduling and to obtain a task scheduling sequence that completes all test tasks in the shortest test time.
APA, Harvard, Vancouver, ISO, and other styles
10

Dhanesh, Lavanya, and P. Murugesan. "A Novel Approach in Scheduling Of the Real- Time Tasks In Heterogeneous Multicore Processor with Fuzzy Logic Technique For Micro-grid Power Management." International Journal of Power Electronics and Drive Systems (IJPEDS) 9, no. 1 (March 1, 2018): 80. http://dx.doi.org/10.11591/ijpeds.v9.i1.pp80-88.

Full text
Abstract:
Scheduling of tasks based on real-time requirements is a major issue in heterogeneous multicore systems for micro-grid power management. A heterogeneous multicore processor schedules serial tasks on the high-performance core, while parallel tasks are executed on the low-performance cores. The aim of this paper is to implement a fuzzy-logic-based scheduling algorithm for heterogeneous multicore processors for effective micro-grid applications. Real-time tasks generally have different execution times and deadlines. The main idea is to use a two-stage fuzzy-logic-based scheduling algorithm: first, priorities are assigned based on the execution time and deadline of each task; second, the tasks assigned higher priority are allotted for execution on the high-performance core, and the remaining lower-priority tasks are allotted to the low-performance cores. The main objective of this scheduling algorithm is to increase throughput and improve CPU utilization, thereby reducing the overall power consumption of the micro-grid power management system. Test cases with different task execution times and deadlines were generated to evaluate the algorithms using MATLAB software.
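A crisp (non-fuzzy) stand-in for the two-stage scheme in this abstract, ranking tasks by slack and sending the most urgent one to the high-performance core (the tuple layout and names are illustrative assumptions, not the paper's model):

```python
def assign_cores(tasks, now=0):
    """Stage 1: priority grows as slack (deadline - now - exec_time) shrinks.
    Stage 2: highest-priority task goes to the high-performance core,
    the rest to the low-performance cores. Each task is (exec_time, deadline)."""
    def priority(t):
        exec_time, deadline = t
        return -(deadline - now - exec_time)   # tighter slack -> higher priority
    ordered = sorted(tasks, key=priority, reverse=True)
    return {"big_core": ordered[0], "little_cores": ordered[1:]}
```

A real fuzzy scheduler would replace the crisp `priority` function with membership functions over execution time and deadline plus a rule base, but the two-stage structure is the same.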
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Parallel execution of tasks"

1

Mtshali, Progress Q. T. "Minimizing Parallel Virtual Machine [PVM] Tasks Execution Times Through Optimal Host Assignments." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/740.

Full text
Abstract:
PVM is a message passing system that is designed to run parallel programs across a network of workstations. Its default scheduler uses the round-robin scheduling algorithm to assign tasks to hosts on the virtual machine. This task assignment strategy is deficient when it comes to a network of heterogeneous workstations. Heterogeneity could be software related, hardware related or both. The default scheduler does not attempt to assign a PVM task to a node within the virtual machine that best services the PVM task. The problem with this allocation scheme is that resource requirements for each task that must be assigned to a computer vary. The default scheduler could assign a computation intensive parallel task to a computer that has a slow central processing unit (CPU) thus taking longer to execute the task to completion (Ju and Wang, 1996). It is also possible that a communication bound process is assigned to a computer with very low bandwidth. The scheduler needs to assign tasks to computers in such a way that tasks are assigned to computers that will fulfill the resource requirements of a given task: schedule and balance the work load in such a way that the overall execution time is minimized. In this dissertation, a stochastic task assignment method was developed that not only takes into account the heterogeneity of the hosts but also utilizes state information for the tasks to be assigned, and hosts onto which they are assigned. Task assignment methods that exist today, such as used in Condor, PvmJobs, CROP, and Matchmaker tend to be designed for a homogenous environment using the UNIX operating system or its variants. They also tend to use operating system data structures that are sometimes not common across most architectures. This makes it difficult to port them to other architectures. 
The task assignment method that was developed in this dissertation was initially implemented across a PVM environment consisting of a network of Win32® architecture, Linux architecture, and Sun Solaris architecture. A portable prototype implementation that runs across these architectures was developed. A simulator based on the CSIMIS toolkit was developed and utilized to study the behavior of the M/G/1 queueing model on which the task assignment is based. Four PVM programs were run on a PVM environment that consisted of six nodes. The run-times of these programs were measured for both the default scheduler and the method proposed in this dissertation. The analysis of the run-times suggests that the proposed task assignment method works better for PVM programs that are CPU intensive than for those that are small and do not require a lot of CPU time. The reason for this behavior can be attributed to the high overhead required to compute the task assignment probabilities for all the hosts in the PVM environment. However, the proposed method appears to minimize execution times for CPU-intensive tasks. The ease with which external resource managers can be incorporated into the PVM environment at run-time makes this proposed method an alternative to the default scheduler. More work needs to be done before the results obtained in this dissertation could be generalized, if at all possible, across the board. A number of recommendations on the reduction of overhead are given at the end of this report.
APA, Harvard, Vancouver, ISO, and other styles
2

Raghu, Kumbakonam S. "Taskmaster: an interactive, graphical environment for task specification, execution and monitoring." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/43277.

Full text
Abstract:
This thesis presents Taskmaster, an interactive, graphical environment for task specification, execution and monitoring. Taskmaster is an integrated user environment that employs a unique blend of the principles of Visual Programming, Tool Composition, Structured Analysis and Data Flow Computing to support user task specifications. Problem solving in the Taskmaster environment consists of decomposing the problem task into a partially ordered set of high-level subtasks. This decomposition is depicted graphically as a network in which the nodes correspond to the subtasks and the arcs represent the directed data paths between the nodes. The subtasks are successively decomposed into lower level subtasks until, at the lowest level, each network node is “bound” to a pre-existing tool from a tools database. Execution of the resulting network of software tools provides a task solution. Some of the novel features of the Taskmaster environment include 1) guidance to the user in the task specification process through menu-based interaction; 2) facility to interactively monitor the task network execution; 3) support for structured data flow among tools; and 4) enhanced support for reusability by providing operations to functionally abstract and reuse sub-networks of tools.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
3

Grepl, Filip. "Aplikace pro řízení paralelního zpracování dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445490.

Full text
Abstract:
This work deals with the design and implementation of a system for the parallel execution of tasks in the Knowledge Technology Research Group. The goal is to create a web application that makes it possible to control the processing of these tasks and to monitor their runs, including the use of system resources. The work first analyzes the current method of parallel data processing and the shortcomings of that solution. It then describes the existing tools, including the problems their test deployment revealed. Based on this knowledge, the requirements for a new application are defined and the design of the entire system is created. Selected parts of the implementation are then described, together with the way the whole system was tested and a comparison of its efficiency with the original system.
APA, Harvard, Vancouver, ISO, and other styles
4

McGuigan, Brian. "The Effects of Stress and Executive Functions on Decision Making in an Executive Parallel Task." Thesis, Umeå universitet, Institutionen för psykologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124398.

Full text
Abstract:
The aim of this study was to investigate the effects of acute stress on parallel task performance, using the Game of Dice Task (GDT) to measure decision making together with the Stroop test. Two previous studies found that the combination of stress and a parallel task pairing the GDT with an executive-functions task preserved performance on the GDT for a stress group compared to a control group. The purpose of this study was to create and use a new parallel task combining the GDT and the Stroop test, to elucidate more information about the executive-function contributions from the Stroop test and to ensure that this parallel task preserves performance on the GDT for the stress group. Sixteen participants (mean age: 26.88) were randomly assigned to either a stress group with the Trier Social Stress Test (TSST) or a control group with the placebo-TSST. The Positive and Negative Affect Schedule (PANAS) and the State-Trait Anxiety Inventory (STAI) were given before and after the TSST or placebo-TSST and were used as stress indicators. The results showed a trend towards the stress group performing marginally, but not significantly, better than the control group on the GDT. There were no significant differences between the groups in accuracy on the Stroop test trial types. However, the stress group had significantly slower mean response times on the congruent trial type of the Stroop test (p < .05). This study provides further evidence that stress and a parallel task together preserve performance on the GDT.
APA, Harvard, Vancouver, ISO, and other styles
5

Cheese, Andrew B. "Parallel execution of Parlog." Thesis, University of Nottingham, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

King, Andrew. "Distributed parallel symbolic execution." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Narula, Neha. "Parallel execution for conflicting transactions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99779.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-79).
Multicore main-memory databases only obtain parallel performance when transactions do not conflict. Conflicting transactions are executed one at a time in order to ensure that they have serializable effects. Sequential execution on contended data leaves cores idle and reduces throughput. In other parallel programming contexts---not serializable transactions--techniques have been developed that can reduce contention on shared variables using per-core state. This thesis asks the question, can these techniques apply to a general serializable database? This work introduces a new concurrency control technique, phase reconciliation, that uses per-core state to greatly reduce contention on popular database records for many important workloads. Phase reconciliation uses the idea of synchronized phases to amortize the cost of combining per-core data and to extract parallelism. Doppel, our phase reconciliation database, repeatedly cycles through joined and split phases. Joined phases use traditional concurrency control and allow any transaction to execute. When workload contention causes unnecessary sequential execution, Doppel switches to a split phase. During a split phase, commutative operations on popular records act on per-core state, and thus proceed in parallel on different cores. By explicitly using phases, phase reconciliation realizes two important performance benefits: First, it amortizes the potentially high costs of aggregating per-core state over many transactions. Second, it can dynamically split data or not based on observed contention, handling challenging, varying workloads. Doppel achieves higher performance because it parallelizes transactions on popular data that would be run sequentially by conventional concurrency control. Phase reconciliation helps most when there are many updates to a few popular database records. 
On an 80-core machine, its throughput is up to 38x higher than conventional concurrency control protocols on microbenchmarks, and up to 3x on a larger application, at the cost of increased latency for some transactions.
by Neha Narula.
Ph. D.
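The split-phase idea behind Doppel, commutative operations applied to per-core state and later merged back into the global record, can be illustrated with a toy counter (a single-threaded sketch under stated assumptions; the real system additionally handles serializability, dynamic phase switching, and arbitrary record types):

```python
class SplitCounter:
    """Toy illustration of phase reconciliation for a commutative counter."""
    def __init__(self, slots):
        self.per_slot = [0] * slots        # per-core state, updated without contention
        self.merged = 0                    # globally visible value

    def add(self, slot, delta):
        self.per_slot[slot] += delta       # split phase: commutative op on local state

    def reconcile(self):
        self.merged += sum(self.per_slot)  # joined phase: combine per-core state
        self.per_slot = [0] * len(self.per_slot)
        return self.merged
```

Because increments commute, each core can apply them independently during the split phase; the reconciliation cost is paid once per phase rather than once per transaction.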
APA, Harvard, Vancouver, ISO, and other styles
8

Zhao, Zhijia. "Enabling Parallel Execution via Principled Speculation." W&M ScholarWorks, 2015. https://scholarworks.wm.edu/etd/1593092101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ben, Lahmar Imen. "Continuity of user tasks execution in pervasive environments." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00789725.

Full text
Abstract:
The proliferation of small devices and the advancements in various technologies have introduced the concept of pervasive environments. In these environments, user tasks can be executed using the deployed components provided by devices with different capabilities. One appropriate paradigm for building user tasks for pervasive environments is Service-Oriented Architecture (SOA). Using SOA, user tasks are represented as an assembly of abstract components (i.e., services) without specifying their implementations, so they must be resolved into concrete components. The task resolution involves automatic matching and selection of components across various devices. For this purpose, we present an approach that allows, for each service of a user task, the selection of the best device and component by considering the user preferences, device capabilities, service requirements, and component preferences. Due to the dynamicity of pervasive environments, we are interested in the continuity of execution of user tasks. Therefore, we present an approach that allows components to monitor, locally or remotely, changes in the properties on which they depend. We also considered the adaptation of user tasks to cope with the dynamicity of pervasive environments. To overcome detected failures, the adaptation is carried out by a partial reselection of devices and components. However, in case of a mismatch between the abstract user task and the concrete level, we propose a structural adaptation approach that injects defined adaptation patterns, which exhibit an extra-functional behavior. We also propose an architectural design of a middleware enabling task resolution, monitoring of the environment, and task adaptation. We provide implementation details of the middleware's components along with evaluation results.
APA, Harvard, Vancouver, ISO, and other styles
10

Simpson, David John. "Space-efficient execution of parallel functional programs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0028/NQ51922.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Parallel execution of tasks"

1

Parallel execution of Parlog. Berlin: Springer-Verlag, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Adams, Loyce M. Reordering computations for parallel execution. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Parallel execution of logic programs. Boston: Kluwer Academic Publishers, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Conery, John S. Parallel execution of logic programs. Boston, Mass: Kluwer, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Conery, John S. Parallel Execution of Logic Programs. Boston, MA: Springer US, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Beaumont, A., and G. Gupta, eds. Parallel Execution of Logic Programs. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-55038-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Conery, John S. Parallel Execution of Logic Programs. Boston, MA: Springer US, 1987. http://dx.doi.org/10.1007/978-1-4613-1987-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Execution models of Prolog for parallel computers. Cambridge, Mass: MIT Press, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kacsuk, Péter. Execution models of Prolog for parallel computers. London: Pitman, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Xian-He. The reliability of scalability and execution time. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parallel execution of tasks"

1

Laure, Erwin, Hans Zima, Matthew Haines, and Piyush Mehrotra. "Compiling Data Parallel Tasks for Coordinated Execution." In Euro-Par’99 Parallel Processing, 413–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_54.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wiewiura, Piotr, Maciej Malawski, and Monika Piwowar. "Distributed Execution of Dynamically Defined Tasks on Microsoft Azure." In Parallel Processing and Applied Mathematics, 291–301. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-32149-3_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baruah, Sanjoy. "Resource-Efficient Execution of Conditional Parallel Real-Time Tasks." In Euro-Par 2018: Parallel Processing, 218–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96983-1_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dietze, Robert, Michael Hofmann, and Gudula Rünger. "Resource Contention Aware Execution of Multiprocessor Tasks on Heterogeneous Platforms." In Euro-Par 2017: Parallel Processing Workshops, 390–402. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75178-8_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Serrano, Estefania, Javier Garcia Blas, Jesus Carretero, and Monica Abella. "Architecture for the Execution of Tasks in Apache Spark in Heterogeneous Environments." In Euro-Par 2016: Parallel Processing Workshops, 504–15. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58943-5_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ciesko, Jan, Sergi Mateo, Xavier Teruel, Xavier Martorell, Eduard Ayguadé, Jesús Labarta, Alex Duran, et al. "Towards Task-Parallel Reductions in OpenMP." In OpenMP: Heterogenous Execution and Data Movements, 189–201. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24595-9_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hbeika, Jad, and Milind Kulkarni. "Locality-Aware Task-Parallel Execution on GPUs." In Languages and Compilers for Parallel Computing, 250–64. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52709-3_19.

8

Riekki, Jukka, Jussi Pajala, Antti Tikanmäki, and Juha Röning. "CAT Finland: Executing Primitive Tasks in Parallel." In Lecture Notes in Computer Science, 396–401. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48422-1_36.

9

Hoge, Christian, Dan Keith, and Allen D. Malony. "Client-Side Task Support in Matlab for Concurrent Distributed Execution." In Distributed and Parallel Systems, 113–22. Boston, MA: Springer US, 2007. http://dx.doi.org/10.1007/978-0-387-69858-8_12.

10

Shichkina, Yulia, and Mikhail Kupriyanov. "Creating a Schedule for Parallel Execution of Tasks Based on the Adjacency Lists." In Lecture Notes in Computer Science, 102–15. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01168-0_10.


Conference papers on the topic "Parallel execution of tasks"

1

Trevisan, Daniela G., Luciana P. Nedel, and Ariane Boge da Silva. "Evaluation of Multimodal Interaction in Parallel Tasks Execution." In 2011 XIII Symposium on Virtual Reality (SVR). IEEE, 2011. http://dx.doi.org/10.1109/svr.2011.31.

2

Darbha, Sekhar, and Dharma Agrawal. "A Task Duplication Based Optimal Scheduling Algorithm for Variable Execution Time Tasks." In 1994 International Conference on Parallel Processing (ICPP'94). IEEE, 1994. http://dx.doi.org/10.1109/icpp.1994.47.

3

Balasubramanian, Vivekanandan, Antons Treikalis, Ole Weidner, and Shantenu Jha. "Ensemble Toolkit: Scalable and Flexible Execution of Ensembles of Tasks." In 2016 45th International Conference on Parallel Processing (ICPP). IEEE, 2016. http://dx.doi.org/10.1109/icpp.2016.59.

4

Coronato, Antonio, Giuseppe De Pietro, and Luigi Gallo. "Dynamic Distribution and Execution of Tasks in Pervasive Grids." In 15th EUROMICRO International Conference on Parallel, Distributed and Network-Based Processing (PDP'07). IEEE, 2007. http://dx.doi.org/10.1109/pdp.2007.39.

5

Sen, Uddalok, Madhulina Sarkar, and Nandini Mukherjee. "A Heuristic-Based Resource Allocation Approach for Parallel Execution of Interacting Tasks." In 2017 IEEE 7th International Advance Computing Conference (IACC). IEEE, 2017. http://dx.doi.org/10.1109/iacc.2017.0158.

6

Albers, Rob, Eric Suijs, and Peter H. N. de With. "Resource prediction and quality control for parallel execution of heterogeneous medical imaging tasks." In 2009 16th IEEE International Conference on Image Processing (ICIP 2009). IEEE, 2009. http://dx.doi.org/10.1109/icip.2009.5414222.

7

Freire de Souza, Jaime, Hermes Senger, and Fabricio A. B. Silva. "Escalabilidade de Aplicações Bag-of-Tasks em Plataformas Heterogêneas." In XXXVII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sbrc.2019.7394.

Abstract:
Bag-of-Tasks (BoT) applications are parallel applications composed of independent (i.e., embarrassingly parallel) tasks, which do not communicate with each other, may depend upon one or more input files, and can be executed in any order. BoT applications are very frequent in several scientific areas, and they are the ideal application class for execution on large distributed computing systems composed of hundreds to many thousands of computational resources. This paper focuses on the scalability of BoT applications running on large heterogeneous distributed computing systems organized as a master-slave platform. The results demonstrate that heterogeneous master-slave platforms can achieve higher scalability than homogeneous platforms for the execution of BoT applications when the computational power of individual nodes in the homogeneous platform is fixed. However, when individual nodes of the homogeneous platform can scale up, experiments show that master-slave platforms can achieve near-linear speedups.
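The execution model this abstract describes, independent tasks handed out from a master to workers, can be illustrated with a minimal, hedged sketch in Python (function names and the thread-pool choice are illustrative, not taken from the paper):

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(x):
    # Stand-in for an independent BoT task: no communication
    # with other tasks, so any execution order is valid.
    return x * x

def run_bag_of_tasks(inputs, workers=4):
    # The pool plays the role of the master distributing tasks
    # to worker resources; results are gathered in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_task, inputs))
```

Because the tasks never communicate, adding workers scales throughput until the master (here, the pool) becomes the bottleneck, which is exactly the scalability question the paper studies.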
8

Ghafarian-M., T., and H. Deldari. "Task Execution Availability Prediction in the Enterprise Desktop Grid." In Parallel and Distributed Computing and Networks. Calgary, AB, Canada: ACTA Press, 2010. http://dx.doi.org/10.2316/p.2010.676-039.

9

Nesmachnow, S., and A. Tchernykh. "Affinity multiprocessor scheduling considering communications and synchronizations using a Multiobjective Iterated Local Search algorithm." In 1st International Workshop on Advanced Information and Computation Technologies and Systems 2020. Crossref, 2021. http://dx.doi.org/10.47350/aicts.2020.14.

Abstract:
This article studies the affinity scheduling problem in multicore computing systems, considering the minimization of communications and synchronizations. The problem consists of assigning a set of tasks to resources so as to minimize both the overall execution time of the set of tasks and the execution time required to compute the schedule. A Multiobjective Iterated Local Search method is proposed to solve the studied affinity scheduling problem, which considers the different times required for communication and synchronization of tasks executing on different cores of a multicore computer. The experimental evaluation of the proposed scheduling method is performed over realistic instances of the scheduling problem, considering a set of common benchmark applications from the parallel scientific computing field and a modern multicore platform from the National Supercomputing Center, Uruguay. The main results indicate that the proposed Multiobjective Iterated Local Search method improves by up to 21.6% over traditional scheduling techniques (a standard Round Robin and a Greedy scheduler).
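As a rough illustration of the Greedy baseline the abstract compares against, here is a generic least-loaded-core heuristic (a hedged sketch with illustrative names; it does not reproduce the article's actual algorithms):

```python
import heapq

def greedy_schedule(task_times, num_cores):
    # Longest-processing-time-first greedy: consider tasks in
    # decreasing duration and place each one on the currently
    # least-loaded core (tracked with a min-heap of core loads).
    loads = [(0.0, core) for core in range(num_cores)]
    heapq.heapify(loads)
    assignment = {}
    for tid, t in sorted(enumerate(task_times), key=lambda p: -p[1]):
        load, core = heapq.heappop(loads)
        assignment[tid] = core
        heapq.heappush(loads, (load + t, core))
    makespan = max(load for load, _ in loads)
    return makespan, assignment
```

Note that this baseline ignores communication and synchronization costs between cores, which is precisely the information the affinity-aware method in the article exploits.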
10

Wilson, Lucas A., and Jeffery von Ronne. "A Distributed Dataflow Model for Task-Uncoordinated Parallel Program Execution." In 2014 43rd International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2014. http://dx.doi.org/10.1109/icppw.2014.49.


Reports on the topic "Parallel execution of tasks"

1

Subhlok, Jaspal. Automatic Mapping of Task and Data Parallel Programs for Efficient Execution on Multicomputers. Fort Belvoir, VA: Defense Technical Information Center, November 1993. http://dx.doi.org/10.21236/ada274125.

2

Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.

Abstract:
This deliverable focuses on the profiling activities developed in the project with the partners' applications. To perform these profiling activities, a couple of benchmarks were defined in collaboration with WP5. The first benchmark is an embarrassingly parallel benchmark that performs a read and then multiple writes of the same object, with the objective of stressing the memory and storage systems and evaluating the overhead when these reads and writes are performed in parallel. A second benchmark is defined based on the Continuation Multi Level Monte Carlo (C-MLMC) algorithm. While this algorithm is normally executed using multiple levels, for the profiling and performance analysis objectives, the execution of a single level was enough, since the forthcoming levels have similar performance characteristics. Additionally, while the simulation tasks can be executed as parallel (multi-threaded) tasks, in the benchmark, single-threaded tasks were executed to increase the number of simulations to be scheduled and stress the scheduling engines. A set of experiments based on these two benchmarks has been executed on the MareNostrum 4 supercomputer, using PyCOMPSs as the underlying programming model and dynamic scheduler of the tasks involved in the executions. While the first benchmark was executed several times in a single iteration, the second benchmark was executed in an iterative manner, with cycles of 1) execution and trace generation; 2) performance analysis; 3) improvements. This enabled several improvements in the benchmark and in the scheduler of PyCOMPSs. The initial iterations focused on the C-MLMC structure itself, refactoring the code to remove fine-grain and sequential tasks and merge them into larger-granularity tasks. The next iterations focused on improving the PyCOMPSs scheduler, removing existing bottlenecks and increasing its performance by making the scheduler a multithreaded engine.
While the results can still be improved, we are satisfied with them, since the granularity of the simulations run in this evaluation step is much finer than the one that will be used for the real scenarios. The deliverable finishes with some recommendations that should be followed along the project in order to obtain good performance in the execution of the project codes.
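One of the refactorings the abstract describes, merging fine-grain tasks into larger-granularity ones, amounts to batching. A minimal, hypothetical sketch (plain Python, not PyCOMPSs code):

```python
def coarsen(tasks, chunk_size):
    # Group fine-grain work items into batches so the scheduler
    # manages fewer, larger tasks.
    return [tasks[i:i + chunk_size] for i in range(0, len(tasks), chunk_size)]

def run_batch(batch):
    # Each batch of callables runs sequentially inside what would
    # be a single scheduled task.
    return [work() for work in batch]
```

Reducing the number of scheduled units this way trades some load-balancing flexibility for lower per-task scheduling overhead, the bottleneck the deliverable reports.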
3

Gelman, Andrew. Petascale Hierarchical Modeling VIA Parallel Execution. Office of Scientific and Technical Information (OSTI), April 2014. http://dx.doi.org/10.2172/1127434.

4

Smith, Tyler Barratt, and James Thomas Perry. Dual compile strategy for parallel heterogeneous execution. Office of Scientific and Technical Information (OSTI), June 2012. http://dx.doi.org/10.2172/1121974.

5

Ji, Zhengrong, Junlan Zhou, Mineo Takai, Jay Martin, and Rajive Bagrodia. Optimizing Parallel Execution of Detailed Wireless Network Simulation. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada467039.

6

Knoblock, Craig A. Generating Parallel Execution Plans with a Partial Order Planner. Fort Belvoir, VA: Defense Technical Information Center, May 1994. http://dx.doi.org/10.21236/ada285888.

7

Manacero, Aleardo. Performance prediction of parallel programs through execution graph simulation. Office of Scientific and Technical Information (OSTI), September 1997. http://dx.doi.org/10.2172/1421702.

8

Andersson, Bjorn A., Dionisio de Niz, Hyoseung Kim, Mark Klein, and Ragunathan Rajkumar. Scheduling Constrained-Deadline Sporadic Parallel Tasks Considering Memory Contention. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610918.

9

Li, Pey-yun P., and Alain J. Martin. The Sync Model: A Parallel Execution Method for Logic Programming. Fort Belvoir, VA: Defense Technical Information Center, March 1986. http://dx.doi.org/10.21236/ada442971.

10

Ozmen, Ozgur, James J. Nutaro, and Joshua Ryan New. Parallel Execution of Functional Mock-up Units in Buildings Modeling. Office of Scientific and Technical Information (OSTI), June 2016. http://dx.doi.org/10.2172/1257905.

