Selection of scientific literature on the topic "Parallel job"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Parallel job".

Next to every work in the list of references, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Parallel job"

1

Brecht, Timothy, Xiaotie Deng, and Nian Gu. "Competitive Dynamic Multiprocessor Allocation for Parallel Applications". Parallel Processing Letters 07, no. 01 (March 1997): 89–100. http://dx.doi.org/10.1142/s0129626497000115.

Abstract:
We study dynamic multiprocessor allocation policies for parallel jobs, which allow the preemption and reallocation of processors to take place at any time. The objective is to minimize the completion time of the last job to finish executing (the makespan). We characterize a parallel job using two parameters: the job's parallelism, Pi, which is the number of tasks being executed in parallel by a job, and its execution time, li, when Pi processors are allocated to the job. The only information available to the scheduler is the parallelism of jobs. The job execution time is not known to the scheduler until the job's execution is completed. We apply the approach of competitive analysis to compare preemptive scheduling policies, and are interested in determining which policy achieves the best competitive ratio (i.e., is within the smallest constant factor of optimal). We devise an optimal competitive scheduling policy for scheduling two parallel jobs on P processors. Then, we apply the method to schedule N parallel jobs on P processors. Finally, we extend our work to incorporate jobs for which the number of parallel tasks changes during execution (i.e., jobs with multiple phases of parallelism).
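The model above, P processors reallocated at any time among jobs whose execution times are unknown, is easy to experiment with. The following sketch is only an illustration of that setting, not the paper's competitive policy: it implements a plain dynamic equi-partition rule (processors are re-split evenly among the unfinished jobs whenever a job completes) and reports the resulting makespan. The speedup model (progress rate min(allotment, Pi)) and the example jobs are assumptions.

# Illustrative sketch: dynamic equi-partition of P processors among parallel jobs.
# Assumptions (not from the paper): each job i has parallelism p[i] and total work w[i];
# with a processors allotted it progresses at rate min(a, p[i]) work units per time unit.

def equipartition_makespan(P, jobs):
    """jobs: list of (parallelism, total_work). Returns the simulated makespan."""
    remaining = {i: w for i, (_, w) in enumerate(jobs)}
    para = {i: p for i, (p, _) in enumerate(jobs)}
    t = 0.0
    while remaining:
        active = list(remaining)
        share = P / len(active)                      # equal split of processors
        rate = {i: min(share, para[i]) for i in active}
        dt = min(remaining[i] / rate[i] for i in active)  # time until the next completion
        t += dt
        for i in active:
            remaining[i] -= rate[i] * dt
            if remaining[i] <= 1e-9:
                del remaining[i]                     # job done; reallocate on the next pass
    return t

if __name__ == "__main__":
    # Two jobs on P = 8 processors, given as (parallelism, work)
    print(equipartition_makespan(8, [(4, 100.0), (8, 40.0)]))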
2

Shim, Sang Oh, and Seong Woo Choi. "Scheduling Jobs on Dedicated Parallel Machines". Applied Mechanics and Materials 433-435 (October 2013): 2363–66. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.2363.

Abstract:
This paper considers a scheduling problem on dedicated parallel machines, where several types of machines are grouped into one process. Machine dedication means that a job with a specific recipe must be processed on its dedicated machine, even though the job could originally be produced on any other machine. In this process, a setup is required when different jobs are processed consecutively. A scheduling method is developed to minimize the completion time of the last job. Computational experiments are performed on a number of test problems, and the results show that the suggested algorithm gives good solutions in a reasonable amount of computation time.
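The problem structure can be illustrated with a very small sketch (not the authors' algorithm): every job is bound to its dedicated machine, jobs on each machine are grouped by recipe so that a setup is only incurred when the recipe changes, and the makespan is reported. The job data and the constant setup time are assumptions.

# Illustrative sketch (not the paper's method): jobs are pre-assigned ("dedicated")
# to machines; on each machine we simply group jobs by recipe so that a setup is
# only paid when the recipe changes, then report the makespan.
# Assumed data format: job = (machine_id, recipe, processing_time); setup time is a constant.

from collections import defaultdict

SETUP = 5.0  # assumed constant setup time between different recipes

def makespan(jobs):
    by_machine = defaultdict(list)
    for machine, recipe, ptime in jobs:
        by_machine[machine].append((recipe, ptime))
    finish_times = []
    for machine, mjobs in by_machine.items():
        mjobs.sort(key=lambda j: j[0])      # group identical recipes together
        t, last_recipe = 0.0, None
        for recipe, ptime in mjobs:
            if last_recipe is not None and recipe != last_recipe:
                t += SETUP                  # setup only on a recipe change
            t += ptime
            last_recipe = recipe
        finish_times.append(t)
    return max(finish_times)

if __name__ == "__main__":
    jobs = [(0, "A", 10), (0, "B", 7), (0, "A", 3), (1, "B", 12), (1, "B", 4)]
    print(makespan(jobs))  # machine 0: 10+3+SETUP+7 = 25; machine 1: 16 -> 25.0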
3

Kumar, P. R., and J. Walrand. "Individually optimal routing in parallel systems". Journal of Applied Probability 22, no. 4 (December 1985): 989–95. http://dx.doi.org/10.2307/3213970.

Abstract:
Jobs arrive at a buffer from which there are several parallel routes to a destination. A socially optimal policy is one which minimizes the average delay of all jobs, whereas an individually optimal policy is one which, for each job, minimizes its own delay, with route preference given to jobs at the head of the buffer. If there is a socially optimal policy for a system with no arrivals, which can be implemented by each job following a policy γ in such a way that no job ever utilizes a previously declined route, then we show that such a γ is an individually optimal policy for each job. Moreover γ continues to be individually optimal even if the system has an arbitrary arrival process, subject only to the restriction that past arrivals are independent of future route-traversal times. Thus, γ is an individually optimal policy which is insensitive to the nature of the arrival process. In the particular case where the times to traverse the routes are exponentially distributed with a possibly different mean time for each of the parallel routes, then such an insensitive individually optimal policy does in fact exist and is moreover trivially determined by certain threshold numbers. A conjecture is also made about more general situations where such individually optimal policies exist.
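A toy discrete-event simulation conveys the threshold structure of such individually optimal policies: the job at the head of the buffer always takes the fast route when it is free, and accepts the slow route only if enough jobs are queued behind it. The rates, threshold value, and workload below are illustrative assumptions, not quantities derived in the paper.

# Toy simulation of threshold routing to two parallel exponential routes.
# Illustrative only: the threshold, the rates, and the job count are assumptions.
import random

def average_delay(n_jobs, mu_fast, mu_slow, threshold, seed=1):
    rng = random.Random(seed)
    t = 0.0
    waiting = n_jobs                     # jobs in the buffer; no further arrivals
    free_fast = free_slow = True
    in_service = []                      # (finish_time, route)
    departures = []
    while waiting > 0 or in_service:
        if waiting > 0 and free_fast:            # head job always accepts the fast route
            free_fast = False
            in_service.append((t + rng.expovariate(mu_fast), "fast"))
            waiting -= 1
        if waiting > threshold and free_slow:    # slow route only if enough jobs wait behind
            free_slow = False
            in_service.append((t + rng.expovariate(mu_slow), "slow"))
            waiting -= 1
        in_service.sort()
        t, route = in_service.pop(0)             # advance to the next service completion
        departures.append(t)
        if route == "fast":
            free_fast = True
        else:
            free_slow = True
    return sum(departures) / len(departures)     # average delay over all jobs

if __name__ == "__main__":
    print(average_delay(20, mu_fast=1.0, mu_slow=0.4, threshold=2))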
4

Kumar, P. R., and J. Walrand. "Individually optimal routing in parallel systems". Journal of Applied Probability 22, no. 04 (December 1985): 989–95. http://dx.doi.org/10.1017/s0021900200108265.

Abstract:
Jobs arrive at a buffer from which there are several parallel routes to a destination. A socially optimal policy is one which minimizes the average delay of all jobs, whereas an individually optimal policy is one which, for each job, minimizes its own delay, with route preference given to jobs at the head of the buffer. If there is a socially optimal policy for a system with no arrivals, which can be implemented by each job following a policy γ in such a way that no job ever utilizes a previously declined route, then we show that such a γ is an individually optimal policy for each job. Moreover γ continues to be individually optimal even if the system has an arbitrary arrival process, subject only to the restriction that past arrivals are independent of future route-traversal times. Thus, γ is an individually optimal policy which is insensitive to the nature of the arrival process. In the particular case where the times to traverse the routes are exponentially distributed with a possibly different mean time for each of the parallel routes, then such an insensitive individually optimal policy does in fact exist and is moreover trivially determined by certain threshold numbers. A conjecture is also made about more general situations where such individually optimal policies exist.
5

Weng, Wentao, and Weina Wang. "Achieving Zero Asymptotic Queueing Delay for Parallel Jobs". ACM SIGMETRICS Performance Evaluation Review 49, no. 1 (June 22, 2022): 25–26. http://dx.doi.org/10.1145/3543516.3456268.

Abstract:
Zero queueing delay is highly desirable in large-scale computing systems. Existing work has shown that it can be asymptotically achieved by using the celebrated Power-of-d-choices (Pod) policy with a probe overhead d = Ω(log N/(1-λ)), and that it is impossible when d = O(1/(1-λ)), where N is the number of servers and λ is the load of the system. However, these results are based on the model where each job is an indivisible unit, which does not capture the parallel structure of jobs in today's predominant parallel computing paradigm. This paper considers a model where each job consists of a batch of parallel tasks. Under this model, we propose a new notion of zero (asymptotic) queueing delay that requires the job delay under a policy to approach the job delay given by the maximum of its tasks' service times, i.e., the job delay assuming its tasks entered service right upon arrival. This notion quantifies the effect of queueing on a job level for jobs consisting of multiple tasks, and thus deviates from the conventional zero queueing delay for single-task jobs in the literature. We show that zero queueing delay for parallel jobs can be achieved using the batch-filling policy (a variant of the celebrated Pod policy) with a probe overhead d = Ω(1/((1-λ) log k)) in the sub-Halfin-Whitt heavy-traffic regime, where k is the number of tasks in each job and k properly scales with N (the number of servers). This result demonstrates that for parallel jobs, zero queueing delay can be achieved with a smaller probe overhead. We also establish an impossibility result: we show that zero queueing delay cannot be achieved if d = o(log N/log k). Simulation results are provided to demonstrate the consistency between numerical results and theoretical results under reasonable settings, and to investigate gaps in the theoretical analysis.
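The batch-filling policy mentioned above (a batch-sampling variant of the power-of-d-choices policy) is simple to sketch: a job with k tasks probes roughly k·d queues and water-fills its tasks into the least-loaded probed queues. The snippet below shows only that dispatching rule under assumed parameters; it is not the paper's analysis or exact policy.

# Minimal sketch of batch-filling dispatch (a power-of-d-choices variant for parallel jobs):
# probe k*d queues chosen uniformly at random and place the job's k tasks one by one
# into the currently shortest probed queue ("water-filling"). Parameters are illustrative.
import random

def batch_filling_dispatch(queue_lengths, k, d, rng=random):
    """Mutates queue_lengths; returns the indices the k tasks were sent to."""
    n = len(queue_lengths)
    probed = rng.sample(range(n), min(n, k * d))    # probe overhead: d probes per task
    placements = []
    for _ in range(k):
        target = min(probed, key=lambda q: queue_lengths[q])
        queue_lengths[target] += 1                  # task joins the shortest probed queue
        placements.append(target)
    return placements

if __name__ == "__main__":
    random.seed(0)
    queues = [random.randint(0, 5) for _ in range(100)]   # 100 servers with random backlog
    print(batch_filling_dispatch(queues, k=8, d=2))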
6

Liu, Peng Fei, and Shou Bin Dong. "Multi-Objective Scheduling for Parallel Jobs on Grid". Key Engineering Materials 439-440 (June 2010): 1281–86. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.1281.

Abstract:
Focusing on the complexity of parallel job scheduling on a heterogeneous Grid, the paper proposes a multi-objective optimization based scheduling algorithm. The algorithm first splits the parallel job into a series of independent processes with constraints, and then adopts particles to represent the job-to-resource mapping. Multi-objective PSO is employed to simultaneously optimize the scheduling objectives of throughput and average turnaround time. Experimental results indicate that the proposed approach is effective for large-scale parallel job scheduling on a heterogeneous Grid and outperforms other conventional algorithms.
7

DONG, J. M., X. S. WANG, L. L. WANG, and J. L. HU. "PARALLEL MACHINE SCHEDULING WITH JOB DELIVERY COORDINATION". ANZIAM Journal 58, no. 3-4 (April 2017): 306–13. http://dx.doi.org/10.1017/s1446181117000190.

Abstract:
We analyse a parallel (identical) machine scheduling problem with job delivery to a single customer. For this problem, each job needs to be processed on $m$ parallel machines non-pre-emptively and then transported to a customer by one vehicle with a limited physical capacity. The optimization goal is to minimize the makespan, the time at which all the jobs are processed and delivered and the vehicle returns to the machine. We present an approximation algorithm with a tight worst-case performance ratio of $7/3-1/m$ for the general case, $m\geq 3$.
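A rough illustration of the setting, not the authors' 7/3 - 1/m approximation algorithm: jobs are assigned to m identical machines by longest-processing-time-first, completed jobs are shipped in capacity-respecting batches by a single vehicle, and the reported makespan includes the vehicle's final return. All numeric parameters below are assumptions.

# Toy sketch of parallel-machine scheduling with job delivery by one capacitated vehicle.
# Not the paper's approximation algorithm: LPT machine assignment plus shipping completed
# jobs in batches in order of completion. All numeric parameters are illustrative.

def schedule_and_deliver(proc_times, sizes, m, capacity, travel):
    # LPT assignment to m identical machines
    machines = [0.0] * m
    completion = []
    for p, s in sorted(zip(proc_times, sizes), reverse=True):
        i = machines.index(min(machines))
        machines[i] += p
        completion.append((machines[i], s))
    completion.sort()                      # deliver jobs in order of completion
    t_vehicle, load, batch_ready = 0.0, 0, 0.0
    for c, s in completion:
        if load + s > capacity:            # ship the current batch before adding this job
            t_vehicle = max(t_vehicle, batch_ready) + 2 * travel
            load = 0
        load += s
        batch_ready = c
    if load:
        t_vehicle = max(t_vehicle, batch_ready) + 2 * travel
    return t_vehicle                       # time when the vehicle is back after the last trip

if __name__ == "__main__":
    print(schedule_and_deliver([4, 3, 7, 2, 5], [1, 2, 1, 1, 2], m=2, capacity=3, travel=1.0))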
8

Schwiegelshohn, Uwe, and Ramin Yahyapour. "Fairness in parallel job scheduling". Journal of Scheduling 3, no. 5 (2000): 297–320. http://dx.doi.org/10.1002/1099-1425(200009/10)3:5<297::aid-jos50>3.0.co;2-d.

9

Xu, Susan H. "On a job resequencing issue in parallel processor stochastic scheduling". Advances in Applied Probability 24, no. 4 (December 1992): 915–33. http://dx.doi.org/10.2307/1427719.

Abstract:
In flexible assembly systems, it is often necessary to coordinate jobs and materials so that specific jobs are matched with specific materials. This requires that jobs depart from upstream parallel workstations in some predetermined order. One way to satisfy this requirement is to temporarily hold the serviced jobs getting out of order at a resequencing buffer and to release them to downstream workstations as soon as all their predecessors are serviced. In this paper we consider the problem of scheduling a fixed number of non-preemptive jobs on two IHR non-identical processors with the resequencing requirement. We prove that the individually optimal policy, in which each job minimizes its own expected departure time subject to the constraint that available processors are offered to jobs in their departure order, is of a threshold type. The policy is independent of job weights and the jobs residing at the resequencing buffer and possesses the monotonicity property which states that a job will never utilize a processor in the future once it has declined the processor. Most importantly, we prove that the individually optimal policy has the stability property; namely: if at any time a job deviated from the individually optimal policy, then the departure time of every job, including its own, would be prolonged. As a direct consequence of this property, the individually optimal policy is socially optimal in the sense that it minimizes the expected total weighted departure time of the system as a whole. We identify situations under which the individually optimal policy also minimizes the expected makespan of the system.
10

Xu, Susan H. "On a job resequencing issue in parallel processor stochastic scheduling". Advances in Applied Probability 24, no. 04 (December 1992): 915–33. http://dx.doi.org/10.1017/s0001867800025015.

Abstract:
In flexible assembly systems, it is often necessary to coordinate jobs and materials so that specific jobs are matched with specific materials. This requires that jobs depart from upstream parallel workstations in some predetermined order. One way to satisfy this requirement is to temporarily hold the serviced jobs getting out of order at a resequencing buffer and to release them to downstream workstations as soon as all their predecessors are serviced. In this paper we consider the problem of scheduling a fixed number of non-preemptive jobs on two IHR non-identical processors with the resequencing requirement. We prove that the individually optimal policy, in which each job minimizes its own expected departure time subject to the constraint that available processors are offered to jobs in their departure order, is of a threshold type. The policy is independent of job weights and the jobs residing at the resequencing buffer and possesses the monotonicity property which states that a job will never utilize a processor in the future once it has declined the processor. Most importantly, we prove that the individually optimal policy has the stability property; namely: if at any time a job deviated from the individually optimal policy, then the departure time of every job, including its own, would be prolonged. As a direct consequence of this property, the individually optimal policy is socially optimal in the sense that it minimizes the expected total weighted departure time of the system as a whole. We identify situations under which the individually optimal policy also minimizes the expected makespan of the system.

Dissertations on the topic "Parallel job"

1

Sabin, Gerald M. "Unfairness in parallel job scheduling". Columbus, Ohio: Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1164826017.

2

Islam, Mohammad Kamrul. "QoS In Parallel Job Scheduling". The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218566682.

3

Wing, A. J. "Parallel simulation of PCB job shops". Thesis, University of East Anglia, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359342.

4

Lynch, Gerard. "Parallel job scheduling on heterogeneous networks of multiprocessor workstations". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ45952.pdf.
5

Ali, Syed Zeeshan. "An investigation into parallel job scheduling using service level agreements". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/an-investigation-into-parallel-job-scheduling-using-service-level-agreements(f4685321-374e-41c4-86da-d07f09ea4bac).html.

Abstract:
A scheduler, as a central component of a computing site, aggregates computing resources and is responsible for distributing the incoming load (jobs) among the resources. In such an environment, optimum system performance under service level agreement (SLA) based workloads can be achieved by calculating the priority of SLA-bound jobs using an integrated heuristic. The SLA defines the service obligations and expectations for the use of the computational resources. The integrated heuristic combines the different SLA terms, with a specific weight for each term. The weights are computed by applying a parameter-sweep technique in order to obtain the schedule that gives the optimum performance of the system under the workload. Sweeping the parameters of the integrated heuristic was observed to be computationally expensive, and it becomes even more expensive if no value of the computed weights results in a performance improvement for the resulting schedule; in such situations it incurs computation cost instead of delivering optimum performance. Therefore, situations in which the integrated heuristic can be exploited beneficially need to be detected. For that reason, this thesis proposes a metric based on the concept of utilization to evaluate SLA-based parallel workloads of independent jobs and to detect any impact of the integrated heuristic on the workload.
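The "integrated heuristic" described above is essentially a weighted combination of SLA terms used as a job priority, with the weights tuned by a parameter sweep. The sketch below shows that idea in miniature; the particular SLA terms, their normalization, the crude sequential evaluation, and the weight grid are assumptions for illustration, not the thesis's formulation.

# Illustrative sketch of an "integrated heuristic": job priority as a weighted sum of
# normalized SLA terms, with a small parameter sweep over the weights.
# The SLA terms (deadline slack, requested cores, paid price) are assumed for the example.
from itertools import product

def priority(job, w_deadline, w_size, w_price):
    # Smaller slack -> more urgent -> higher priority; higher price -> higher priority.
    return (w_deadline * (1.0 / max(job["slack"], 1.0))
            + w_size * (1.0 / job["cores"])
            + w_price * job["price"])

def sweep(jobs, weight_values):
    """Try every weight combination; return the one whose priority ordering leaves the
    fewest jobs with cumulative runtime above their slack (a crude sequential proxy)."""
    best = None
    for w in product(weight_values, repeat=3):
        order = sorted(jobs, key=lambda j: -priority(j, *w))
        t, missed = 0.0, 0
        for j in order:
            t += j["runtime"]
            missed += t > j["slack"]
        if best is None or missed < best[0]:
            best = (missed, w)
    return best

if __name__ == "__main__":
    jobs = [
        {"slack": 10, "cores": 4, "price": 2.0, "runtime": 3},
        {"slack": 4,  "cores": 8, "price": 1.0, "runtime": 2},
        {"slack": 20, "cores": 2, "price": 3.0, "runtime": 6},
    ]
    print(sweep(jobs, weight_values=[0.0, 0.5, 1.0]))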
6

Li, Jianqing. "A parallel approach for solving a multiple machine job sequencing problem". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60235.pdf.

7

Zhou, Huajun. "The CON Job scheduling problem on a single and parallel machines". Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/zhouh/huajunzhou.pdf.
8

Vélez, Gallego Mario César. „Algorithms for Scheduling Parallel Batch Processing Machines with Non-Identical Job Ready Times“. FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/276.

Der volle Inhalt der Quelle
Annotation:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be simultaneously tested as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and a bottleneck. Consequently, its makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a formulation recently published. The proposed formulation is better in terms of the number of decision variables, linear constraints and run time. A procedure to compute the lower bound is proposed. For sparse problems (i.e. when job ready times are dispersed widely), the lower bounds are close to optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (i.e. simulated annealing (SA) and greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (i.e. column generation) are proposed – especially to solve problem instances which require prohibitively long run times when a commercial solver is used. Extensive experimental study was conducted to evaluate the different solution approaches based on the solution quality and run time. The decomposition approach improved the lower bounds (or linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristic outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement as the run time is very short.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
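A simple constructive heuristic conveys the structure of the problem (it is not one of the dissertation's algorithms): jobs are taken in ready-time order and packed into batches up to the chamber capacity, a batch's processing time is the longest job in it, it cannot start before its latest ready time, and each batch goes to the earliest-available chamber. The capacities and job data below are illustrative.

# Illustrative greedy heuristic for parallel batch processing machines with
# non-identical job ready times (not one of the dissertation's algorithms).
# job = (ready_time, size, processing_time); a batch's processing time is the max
# over its jobs, and it cannot start before its latest ready time.

def greedy_batching(jobs, capacity, n_machines):
    jobs = sorted(jobs)                    # by ready time
    batches, current, load = [], [], 0
    for ready, size, ptime in jobs:
        if load + size > capacity:         # close the batch when capacity would overflow
            batches.append(current)
            current, load = [], 0
        current.append((ready, size, ptime))
        load += size
    if current:
        batches.append(current)

    machines = [0.0] * n_machines          # next free time of each chamber
    for batch in batches:
        ready = max(j[0] for j in batch)
        ptime = max(j[2] for j in batch)
        i = machines.index(min(machines))  # earliest-available chamber
        machines[i] = max(machines[i], ready) + ptime
    return max(machines)                   # makespan

if __name__ == "__main__":
    jobs = [(0, 2, 5), (1, 3, 4), (2, 4, 6), (6, 2, 3), (7, 5, 2)]
    print(greedy_batching(jobs, capacity=6, n_machines=2))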
9

Hulett, Maria. "Analytical Approximations to Predict Performance Measures of Manufacturing Systems with Job Failures and Parallel Processing". FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/167.

Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are: multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial lines and different configurations of parallel processing with multiple product classes, and job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximation and the simulation models. Markovian and general-type manufacturing systems, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing, were studied. In the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, it was observed that the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. In general, the average flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, manufacturing, or other service systems with similar characteristics can estimate different system performance measures and make judicious decisions, especially for setting delivery due dates, capacity planning, and bottleneck mitigation, among others.
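One building block behind fork-join approximations of this kind is the synchronization time of a job that forks into k parallel tasks: if the task times are i.i.d. exponential with rate mu, the expected maximum is H_k/mu, the k-th harmonic number divided by the rate. The snippet below checks that closed form against a quick simulation; the exponential assumption is generic, and the snippet is not one of the dissertation's approximations.

# Expected service time of a fork-join job with k i.i.d. exponential(mu) parallel tasks:
# E[max of k exponentials] = H_k / mu, where H_k is the k-th harmonic number.
# Generic illustration (exponential task times are an assumption), checked by simulation.
import random

def fork_join_mean(k, mu):
    return sum(1.0 / i for i in range(1, k + 1)) / mu

def simulate(k, mu, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += max(rng.expovariate(mu) for _ in range(k))
    return total / n

if __name__ == "__main__":
    k, mu = 8, 1.0
    print(fork_join_mean(k, mu), simulate(k, mu))   # the two numbers should be close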
10

Khan, Mukhtaj. "Hadoop performance modeling and job optimization for big data analytics". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11078.

Abstract:
Big data has gained momentum in both academia and industry. The MapReduce model has emerged as a major computing model in support of big data analytics. Hadoop, an open source implementation of the MapReduce model, has been widely taken up by the community. Cloud service providers such as the Amazon EC2 cloud now support Hadoop user applications. However, a key challenge is that the cloud service providers do not have a resource provisioning mechanism to satisfy user jobs with deadline requirements. Currently, it is solely the user's responsibility to estimate the required amount of resources for a job running in a public cloud. This thesis presents a Hadoop performance model that accurately estimates the execution duration of a job and further provisions the required amount of resources for a job to be completed within a deadline. The proposed model employs a Locally Weighted Linear Regression (LWLR) model to estimate the execution time of a job and a Lagrange Multiplier technique for resource provisioning to satisfy user jobs with a given deadline. The performance of the proposed model is extensively evaluated on both an in-house Hadoop cluster and the Amazon EC2 cloud. Experimental results show that the proposed model is highly accurate in estimating job execution time and that jobs are completed within the required deadlines when the model's resource provisioning scheme is followed. In addition, the Hadoop framework has over 190 configuration parameters, and some of them have significant effects on the performance of a Hadoop job. Manually setting the optimum values for these parameters is a challenging and time-consuming task. This thesis also presents optimization work that enhances the performance of Hadoop by automatically tuning its parameter values. It employs a Gene Expression Programming (GEP) technique to build an objective function that represents the performance of a job and the correlation among the configuration parameters. For the purpose of optimization, Particle Swarm Optimization (PSO) is employed to automatically find optimal or near-optimal configuration settings. The performance of the proposed work is intensively evaluated on a Hadoop cluster, and the experimental results show that it enhances the performance of Hadoop significantly compared with the default settings.
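Locally weighted linear regression, which the thesis uses to estimate job execution time, fits a separate weighted least-squares line around each query point, with weights that decay with distance from it. The sketch below is a generic single-feature LWLR with a Gaussian kernel; the feature (input data size), the bandwidth, and the training points are assumptions for illustration, not the thesis's model.

# Generic locally weighted linear regression (LWLR) sketch with a Gaussian kernel,
# e.g. predicting job execution time from input data size. Training data, the single
# feature, and the bandwidth tau are illustrative assumptions.
import numpy as np

def lwlr_predict(x_query, X, y, tau=1.0):
    """X: (n,) feature vector, y: (n,) targets. Returns the prediction at x_query."""
    A = np.column_stack([np.ones_like(X), X])               # design matrix with intercept
    w = np.exp(-((X - x_query) ** 2) / (2.0 * tau ** 2))    # weights fall off with distance
    W = np.diag(w)
    theta = np.linalg.pinv(A.T @ W @ A) @ (A.T @ W @ y)     # weighted least squares
    return theta[0] + theta[1] * x_query

if __name__ == "__main__":
    sizes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])            # e.g. input size in GB
    runtimes = np.array([30.0, 55.0, 110.0, 230.0, 600.0])  # observed runtimes in seconds
    print(lwlr_predict(6.0, sizes, runtimes, tau=2.0))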

Books on the topic "Parallel job"

1

Fan, Yang. Parallel job scheduling in massively parallel processors. Ottawa: National Library of Canada, 1996.

2

Klusáček, Dalibor, Walfredo Cirne, and Gonzalo P. Rodrigo, eds. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88224-2.

3

Klusáček, Dalibor, Walfredo Cirne, and Narayan Desai, eds. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77398-8.

4

Frachtenberg, Eitan, and Uwe Schwiegelshohn, eds. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16505-4.

5

Feitelson, Dror G., and Larry Rudolph, eds. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-47954-6.

6

Feitelson, Dror G., and Larry Rudolph, eds. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60153-8.

7

Feitelson, Dror G., and Larry Rudolph, eds. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-39997-6.

8

Desai, Narayan, and Walfredo Cirne, eds. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61756-5.

9

Klusáček, Dalibor, Walfredo Cirne, and Narayan Desai, eds. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63171-0.

Book chapters on the topic "Parallel job"

1

Dandamudi, Sivarama. "Parallel Job Scheduling". In Hierarchical Scheduling in Parallel and Cluster Systems, 49–84. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0133-6_3.

2

Tripiccione, Raffaele, Michael Philippsen, Rolf Riesen, and Arthur B. Maccabe. "Job Scheduling". In Encyclopedia of Parallel Computing, 997–1002. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_212.

3

Aida, Kento. "Effect of Job Size Characteristics on Job Scheduling Performance". In Job Scheduling Strategies for Parallel Processing, 1–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-39997-6_1.
4

Sanders, Peter, and Dominik Schreiber. "Decentralized Online Scheduling of Malleable NP-hard Jobs". In Euro-Par 2022: Parallel Processing, 119–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12597-3_8.

Abstract:
In this work, we address an online job scheduling problem in a large distributed computing environment. Each job has a priority and a demand of resources, takes an unknown amount of time, and is malleable, i.e., the number of allotted workers can fluctuate during its execution. We subdivide the problem into (a) determining a fair amount of resources for each job and (b) assigning each job to an according number of processing elements. Our approach is fully decentralized, uses lightweight communication, and arranges each job as a binary tree of workers which can grow and shrink as necessary. Using the NP-complete problem of propositional satisfiability (SAT) as a case study, we experimentally show on up to 128 machines (6144 cores) that our approach leads to near-optimal utilization, imposes minimal computational overhead, and performs fair scheduling of incoming jobs within a few milliseconds.
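The first subproblem named in the abstract, giving each active job a fair number of workers in proportion to its priority and capped by its demand, can be sketched as a simple water-filling computation; the contiguous worker ids printed at the end only hint at the per-job worker trees. This is an illustrative reading of the abstract, not the authors' decentralized protocol.

# Illustrative sketch: priority-proportional fair shares for malleable jobs, capped by
# each job's demand, with leftover capacity redistributed (water-filling).
# Not the paper's decentralized protocol; job names and numbers are assumptions.

def fair_shares(total_workers, jobs):
    """jobs: dict name -> (priority, demand). Returns dict name -> allotted workers."""
    alloc = {name: 0 for name in jobs}
    remaining = total_workers
    active = dict(jobs)
    while remaining > 0 and active:
        prio_sum = sum(p for p, _ in active.values())
        progress = False
        for name, (p, demand) in list(active.items()):
            share = max(1, int(remaining * p / prio_sum))
            give = min(share, demand - alloc[name], remaining)
            if give > 0:
                alloc[name] += give
                remaining -= give
                progress = True
            if alloc[name] >= demand:
                del active[name]           # saturated jobs free capacity for the rest
        if not progress:
            break
    return alloc

if __name__ == "__main__":
    shares = fair_shares(16, {"sat1": (2, 20), "sat2": (1, 4), "sat3": (1, 20)})
    print(shares)
    # contiguous worker ids per job (the paper arranges each job's workers as a binary tree):
    start = 0
    for name, k in shares.items():
        print(name, list(range(start, start + k)))
        start += k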
5

Cirne, Walfredo, and Eitan Frachtenberg. "Web-Scale Job Scheduling". In Job Scheduling Strategies for Parallel Processing, 1–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35867-8_1.

6

Abawajy, J. H. "Fault-Tolerant Dynamic Job Scheduling Policy". In Distributed and Parallel Computing, 165–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621_19.

7

Klusáček, Dalibor, Mehmet Soysal, and Frédéric Suter. "Alea – Complex Job Scheduling Simulator". In Parallel Processing and Applied Mathematics, 217–29. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43222-5_19.

8

Aida, Kento, Hironori Kasahara, and Seinosuke Narita. "Job scheduling scheme for pure space sharing among rigid jobs". In Job Scheduling Strategies for Parallel Processing, 98–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0053983.

9

Pruyne, Jim, and Miron Livny. "Managing checkpoints for parallel programs". In Job Scheduling Strategies for Parallel Processing, 140–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0022292.

10

Lopez, Victor, Ana Jokanovic, Marco D’Amico, Marta Garcia, Raul Sirvent, and Julita Corbalan. "DJSB: Dynamic Job Scheduling Benchmark". In Job Scheduling Strategies for Parallel Processing, 174–88. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77398-8_10.

Conference papers on the topic "Parallel job"

1

Sabin, G., G. Kochhar, and P. Sadayappan. "Job fairness in non-preemptive job scheduling". In International Conference on Parallel Processing, 2004. ICPP 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpp.2004.1327920.

2

Zhou, Longfang, Xiaorong Zhang, Wenxiang Yang, Yongguo Han, Fang Wang, Yadong Wu, and Jie Yu. "PREP: Predicting Job Runtime with Job Running Path on Supercomputers". In ICPP 2021: 50th International Conference on Parallel Processing. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3472456.3473521.

3

Jakobsche, Thomas, Nicolas Lachiche, and Florina M. Ciorba. "Investigating HPC Job Resource Requests and Job Efficiency Reporting". In 2023 22nd International Symposium on Parallel and Distributed Computing (ISPDC). IEEE, 2023. http://dx.doi.org/10.1109/ispdc59212.2023.00024.

4

Zhou, Wei, K. Preston White, and Hongfeng Yu. "Improving Short Job Latency Performance in Hybrid Job Schedulers with Dice". In ICPP 2019: 48th International Conference on Parallel Processing. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3337821.3337851.

5

Shengwei Yi, Zhichao Wang, Shilong Ma, Zhanbin Che, Feng Liang, and Yonggang Huang. "Combinational backfilling for parallel job scheduling". In 2010 2nd International Conference on Education Technology and Computer (ICETC). IEEE, 2010. http://dx.doi.org/10.1109/icetc.2010.5529424.

6

Berenbrink, Petra, Artur Czumaj, Tom Friedetzky, and Nikita D. Vvedenskaya. "Infinite parallel job allocation (extended abstract)". In the twelfth annual ACM symposium. New York, New York, USA: ACM Press, 2000. http://dx.doi.org/10.1145/341800.341813.

7

Berg, Benjamin, Jan-Pieter Dorsman, and Mor Harchol-Balter. "Towards Optimality in Parallel Job Scheduling". In SIGMETRICS '18: ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3219617.3219666.

8

Toporkov, Victor, Anna Toporkova, Alexey Tselishchev, Dmitry Yemelyanov, and Petr Potekhin. "Job Flow Distribution and Ranked Jobs Scheduling in Grid Virtual Organizations". In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.36.

9

Sharma, Debendra, and Dhiraj Pradhan. "Job Scheduling in Mesh Multicomputers". In 1994 International Conference on Parallel Processing (ICPP'94). IEEE, 1994. http://dx.doi.org/10.1109/icpp.1994.119.

Organizational reports on the topic "Parallel job"

1

Leung, Vitus Joseph, Gerald Sabin, and Ponnuswamy Sadayappan. Parallel job scheduling policies to improve fairness: a case study. Office of Scientific and Technical Information (OSTI), February 2008. http://dx.doi.org/10.2172/929521.