Selection of scholarly literature on the topic "Job parallèle"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Job parallèle".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the work's metadata.

Journal articles on the topic "Job parallèle"

1

Brecht, Timothy, Xiaotie Deng und Nian Gu. „Competitive Dynamic Multiprocessor Allocation for Parallel Applications“. Parallel Processing Letters 07, Nr. 01 (März 1997): 89–100. http://dx.doi.org/10.1142/s0129626497000115.

Abstract:
We study dynamic multiprocessor allocation policies for parallel jobs, which allow the preemption and reallocation of processors to take place at any time. The objective is to minimize the completion time of the last job to finish executing (the makespan). We characterize a parallel job by two parameters: its parallelism, Pi, the number of tasks the job executes in parallel, and its execution time, li, when Pi processors are allocated to it. The only information available to the scheduler is the parallelism of the jobs; a job's execution time is not known to the scheduler until its execution is completed. We apply competitive analysis to compare preemptive scheduling policies and are interested in determining which policy achieves the best competitive ratio (i.e., is within the smallest constant factor of optimal). We devise an optimal competitive scheduling policy for scheduling two parallel jobs on P processors. We then apply the method to schedule N parallel jobs on P processors. Finally, we extend our work to incorporate jobs for which the number of parallel tasks changes during execution (i.e., jobs with multiple phases of parallelism).
2

Weng, Wentao, und Weina Wang. „Achieving Zero Asymptotic Queueing Delay for Parallel Jobs“. ACM SIGMETRICS Performance Evaluation Review 49, Nr. 1 (22.06.2022): 25–26. http://dx.doi.org/10.1145/3543516.3456268.

Abstract:
Zero queueing delay is highly desirable in large-scale computing systems. Existing work has shown that it can be asymptotically achieved by using the celebrated Power-of-d-choices (Pod) policy with a probe overhead d = Ω(log N/(1-λ)), and that it is impossible when d = O(1/(1-λ)), where N is the number of servers and λ is the load of the system. However, these results are based on a model where each job is an indivisible unit, which does not capture the parallel structure of jobs in today's predominant parallel computing paradigm. This paper considers a model where each job consists of a batch of parallel tasks. Under this model, we propose a new notion of zero (asymptotic) queueing delay that requires the job delay under a policy to approach the job delay given by the maximum of its tasks' service times, i.e., the job delay assuming its tasks entered service right upon arrival. This notion quantifies the effect of queueing at the job level for jobs consisting of multiple tasks, and thus deviates from the conventional zero queueing delay for single-task jobs in the literature. We show that zero queueing delay for parallel jobs can be achieved using the batch-filling policy (a variant of the Pod policy) with a probe overhead d = Ω(1/(1-λ) log k) in the sub-Halfin-Whitt heavy-traffic regime, where k is the number of tasks in each job and k properly scales with N (the number of servers). This result demonstrates that for parallel jobs, zero queueing delay can be achieved with a smaller probe overhead. We also establish an impossibility result: we show that zero queueing delay cannot be achieved if d = o(log N/log k). Simulation results are provided to demonstrate the consistency between numerical and theoretical results under reasonable settings, and to investigate gaps in the theoretical analysis.
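As a rough illustration of the probe-based dispatching idea summarized above, the following Python sketch spreads the tasks of one arriving job over d randomly probed servers, always filling the currently shortest probed queue. It is a simplified toy under assumed data structures, not the paper's exact batch-filling policy or its analysis.

import random

def dispatch_parallel_job(queue_lengths, num_tasks, d):
    # Probe d servers chosen uniformly at random (the probe overhead),
    # then place the job's tasks one by one on the shortest probed queue.
    probed = random.sample(range(len(queue_lengths)), d)
    for _ in range(num_tasks):
        target = min(probed, key=lambda i: queue_lengths[i])
        queue_lengths[target] += 1
    return queue_lengths

# Toy usage: 100 servers, a job of k = 8 parallel tasks, probe overhead d = 16.
queues = [random.randint(0, 3) for _ in range(100)]
dispatch_parallel_job(queues, num_tasks=8, d=16)
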
3

Xu, Susan H. „On a job resequencing issue in parallel processor stochastic scheduling“. Advances in Applied Probability 24, Nr. 4 (Dezember 1992): 915–33. http://dx.doi.org/10.2307/1427719.

Abstract:
In flexible assembly systems, it is often necessary to coordinate jobs and materials so that specific jobs are matched with specific materials. This requires that jobs depart from upstream parallel workstations in some predetermined order. One way to satisfy this requirement is to temporarily hold the serviced jobs getting out of order at a resequencing buffer and to release them to downstream workstations as soon as all their predecessors are serviced. In this paper we consider the problem of scheduling a fixed number of non-preemptive jobs on two IHR non-identical processors with the resequencing requirement. We prove that the individually optimal policy, in which each job minimizes its own expected departure time subject to the constraint that available processors are offered to jobs in their departure order, is of a threshold type. The policy is independent of job weights and the jobs residing at the resequencing buffer and possesses the monotonicity property which states that a job will never utilize a processor in the future once it has declined the processor. Most importantly, we prove that the individually optimal policy has the stability property; namely: if at any time a job deviated from the individually optimal policy, then the departure time of every job, including its own, would be prolonged. As a direct consequence of this property, the individually optimal policy is socially optimal in the sense that it minimizes the expected total weighted departure time of the system as a whole. We identify situations under which the individually optimal policy also minimizes the expected makespan of the system.
4

Xu, Susan H. „On a job resequencing issue in parallel processor stochastic scheduling“. Advances in Applied Probability 24, Nr. 04 (Dezember 1992): 915–33. http://dx.doi.org/10.1017/s0001867800025015.

Abstract:
In flexible assembly systems, it is often necessary to coordinate jobs and materials so that specific jobs are matched with specific materials. This requires that jobs depart from upstream parallel workstations in some predetermined order. One way to satisfy this requirement is to temporarily hold the serviced jobs getting out of order at a resequencing buffer and to release them to downstream workstations as soon as all their predecessors are serviced. In this paper we consider the problem of scheduling a fixed number of non-preemptive jobs on two IHR non-identical processors with the resequencing requirement. We prove that the individually optimal policy, in which each job minimizes its own expected departure time subject to the constraint that available processors are offered to jobs in their departure order, is of a threshold type. The policy is independent of job weights and the jobs residing at the resequencing buffer and possesses the monotonicity property which states that a job will never utilize a processor in the future once it has declined the processor. Most importantly, we prove that the individually optimal policy has the stability property; namely: if at any time a job deviated from the individually optimal policy, then the departure time of every job, including its own, would be prolonged. As a direct consequence of this property, the individually optimal policy is socially optimal in the sense that it minimizes the expected total weighted departure time of the system as a whole. We identify situations under which the individually optimal policy also minimizes the expected makespan of the system.
5

Kumar, P. R., und J. Walrand. „Individually optimal routing in parallel systems“. Journal of Applied Probability 22, Nr. 4 (Dezember 1985): 989–95. http://dx.doi.org/10.2307/3213970.

Abstract:
Jobs arrive at a buffer from which there are several parallel routes to a destination. A socially optimal policy is one which minimizes the average delay of all jobs, whereas an individually optimal policy is one which, for each job, minimizes its own delay, with route preference given to jobs at the head of the buffer. If there is a socially optimal policy for a system with no arrivals, which can be implemented by each job following a policy γ in such a way that no job ever utilizes a previously declined route, then we show that such a γ is an individually optimal policy for each job. Moreover γ continues to be individually optimal even if the system has an arbitrary arrival process, subject only to the restriction that past arrivals are independent of future route-traversal times. Thus, γ is an individually optimal policy which is insensitive to the nature of the arrival process. In the particular case where the times to traverse the routes are exponentially distributed with a possibly different mean time for each of the parallel routes, then such an insensitive individually optimal policy does in fact exist and is moreover trivially determined by certain threshold numbers. A conjecture is also made about more general situations where such individually optimal policies exist.
6

Kumar, P. R., und J. Walrand. „Individually optimal routing in parallel systems“. Journal of Applied Probability 22, Nr. 04 (Dezember 1985): 989–95. http://dx.doi.org/10.1017/s0021900200108265.

Abstract:
Jobs arrive at a buffer from which there are several parallel routes to a destination. A socially optimal policy is one which minimizes the average delay of all jobs, whereas an individually optimal policy is one which, for each job, minimizes its own delay, with route preference given to jobs at the head of the buffer. If there is a socially optimal policy for a system with no arrivals, which can be implemented by each job following a policy γ in such a way that no job ever utilizes a previously declined route, then we show that such a γ is an individually optimal policy for each job. Moreover γ continues to be individually optimal even if the system has an arbitrary arrival process, subject only to the restriction that past arrivals are independent of future route-traversal times. Thus, γ is an individually optimal policy which is insensitive to the nature of the arrival process. In the particular case where the times to traverse the routes are exponentially distributed with a possibly different mean time for each of the parallel routes, then such an insensitive individually optimal policy does in fact exist and is moreover trivially determined by certain threshold numbers. A conjecture is also made about more general situations where such individually optimal policies exist.
7

Zheng, Feifeng, Ming Liu, Chengbin Chu und Yinfeng Xu. „Online Parallel Machine Scheduling to Maximize the Number of Early Jobs“. Mathematical Problems in Engineering 2012 (2012): 1–7. http://dx.doi.org/10.1155/2012/939717.

Abstract:
We study a maximization problem: online scheduling on m identical machines to maximize the number of early jobs. The problem is online in the sense that all jobs arrive over time, and each job's characteristics, such as processing time and due date, become known at its arrival time. We consider the preemption-restart model, in which preemption is allowed, but once a job is restarted it loses all the progress made on it so far. If in some schedule a job is completed before or at its due date, it is called early (or on time). The objective is to maximize the number of early jobs. For m identical machines, we prove an upper bound of 1 - 1/(2m) on the competitive ratio and show that the ECT (earliest completion time) algorithm is 1/2-competitive.
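The ECT rule mentioned above can be pictured with a small sketch: whenever a machine becomes free, start the released job that would finish earliest. This is only an illustrative fragment under assumed job tuples (job id, processing time, due date), not the authors' preemption-restart algorithm.

def ect_choice(pending, now):
    # pending: list of (job_id, processing_time, due_date) for jobs already
    # released and not yet finished; pick the job whose completion time
    # now + processing_time would be earliest.
    if not pending:
        return None
    completion, job_id = min((now + p, j) for j, p, _due in pending)
    return job_id

# Toy usage: at time 5, job "b" (2 time units) would complete earliest.
print(ect_choice([("a", 4, 12), ("b", 2, 6), ("c", 3, 20)], now=5))
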
8

Shim, Sang Oh, und Seong Woo Choi. „Scheduling Jobs on Dedicated Parallel Machines“. Applied Mechanics and Materials 433-435 (Oktober 2013): 2363–66. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.2363.

Abstract:
This paper considers a scheduling problem on dedicated parallel machines, where several types of machines are grouped into one process. Machine dedication means that a job with a specific recipe must be processed on its dedicated machine, even though the job could originally be produced on any other machine. In this process, a setup is required when different jobs are processed consecutively. A scheduling method is developed to minimize the completion time of the last job. Computational experiments are performed on a number of test problems, and the results show that the suggested algorithm gives good solutions in a reasonable amount of computation time.
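The setting described above can be made concrete with a small sketch that walks a job sequence over its dedicated machines and charges a setup whenever the job type on a machine changes. The data layout and the single setup-time constant are assumptions for illustration; the paper's actual scheduling method is not reproduced here.

def completion_times(job_sequence, setup_time):
    # job_sequence: list of (machine, job_type, processing_time) tuples in the
    # order the jobs are dispatched; each job runs on its dedicated machine.
    finish, last_type = {}, {}
    for machine, job_type, proc in job_sequence:
        start = finish.get(machine, 0.0)
        if machine in last_type and last_type[machine] != job_type:
            start += setup_time  # setup needed between different job types
        finish[machine] = start + proc
        last_type[machine] = job_type
    return finish  # per-machine completion times; the makespan is max(finish.values())

# Toy usage: two dedicated machines, a setup of 1 time unit.
print(completion_times([("M1", "A", 3), ("M1", "B", 2), ("M2", "A", 4)], setup_time=1))
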
9

Liu, Ming, Feifeng Zheng, Zhanguo Zhu und Chengbin Chu. „Optimal Semi-Online Algorithm for Scheduling on Two Parallel Batch Processing Machines“. Asia-Pacific Journal of Operational Research 31, Nr. 05 (Oktober 2014): 1450038. http://dx.doi.org/10.1142/s0217595914500389.

Abstract:
Batch processing machine scheduling in uncertain environments has attracted increasing attention over the last decade. This paper deals with semi-online scheduling on two parallel batch processing machines with non-decreasing job processing times. Jobs arrive over time in the online paradigm, and the processing time of a batch is equal to the processing time of the last job to arrive in the batch. We study the unbounded model, where each processing batch may contain an unlimited number of jobs, and the objective is to minimize the makespan. For any job Jj and its following job Jj+1, it is assumed that their processing times satisfy p_{j+1} ≥ αp_j, where α ≥ 1 is a constant; that is, jobs arrive in non-decreasing order of processing times. We propose an optimal ϕ-competitive online algorithm, where ϕ ≥ 1 is the solution of the equation ϕ^3 + (α-1)ϕ^2 + (α^2-α-1)ϕ - α^2 = 0.
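For a given α, the competitive ratio ϕ in the equation above can be computed numerically. The sketch below uses simple bisection on [1, 2], where the polynomial is negative at 1 and positive at 2 for every α ≥ 1; it is an illustration of the stated formula, not code from the paper.

def competitive_ratio(alpha, tol=1e-9):
    # Root of phi^3 + (alpha-1)*phi^2 + (alpha^2-alpha-1)*phi - alpha^2 = 0
    # with phi >= 1, found by bisection on the bracket [1, 2].
    f = lambda phi: phi**3 + (alpha - 1) * phi**2 + (alpha**2 - alpha - 1) * phi - alpha**2
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

print(competitive_ratio(1.0))  # about 1.3247 for alpha = 1
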
10

Luo, Cheng Xin. „An FPTAS for Uniform Parallel-Machine Scheduling Problem with Deteriorating Jobs and Rejection“. Applied Mechanics and Materials 433-435 (Oktober 2013): 2335–38. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.2335.

Abstract:
This paper studies uniform parallel-machine scheduling problem with deteriorating jobs and rejection. The processing time of each job is a linear nondecreasing function of its starting time. A job can be rejected by paying a penalty cost. The objective is to minimize the sum of the total load of the accepted jobs on all machines and the total rejection penalties of the rejected jobs. We propose a fully polynomial-time approximation scheme (FPTAS) for this problem.

Dissertations on the topic "Job parallèle"

1

Sabin, Gerald M. „Unfairness in parallel job scheduling“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1164826017.

2

Islam, Mohammad Kamrul. „QoS In Parallel Job Scheduling“. The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218566682.

3

Wing, A. J. „Parallel simulation of PCB job shops“. Thesis, University of East Anglia, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359342.

4

Lynch, Gerard. „Parallel job scheduling on heterogeneous networks of multiprocessor workstations“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ45952.pdf.

5

Song, Bin 1970. „Scheduling adaptively parallel jobs“. Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50354.

6

Ali, Syed Zeeshan. „An investigation into parallel job scheduling using service level agreements“. Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/an-investigation-into-parallel-job-scheduling-using-service-level-agreements(f4685321-374e-41c4-86da-d07f09ea4bac).html.

Abstract:
A scheduler, as a central component of a computing site, aggregates computing resources and is responsible for distributing the incoming load (jobs) across those resources. In such an environment, optimum system performance against service level agreement (SLA) based workloads can be achieved by calculating the priority of SLA-bound jobs using an integrated heuristic. The SLA defines the service obligations and expectations for using the computational resources. The integrated heuristic combines the different SLA terms, with a specific weight for each term. The weights are computed by applying a parameter sweep technique in order to obtain the best schedule for optimum system performance under the workload. The sweeping of parameters on the integrated heuristic is observed to be computationally expensive, and the integrated heuristic becomes even more expensive if no value of the computed weights results in a performance improvement with the resulting schedule: instead of obtaining optimum performance, it merely incurs computation cost. Therefore, situations where the integrated heuristic can be exploited beneficially need to be detected. For that reason, this thesis proposes a metric based on the concept of utilization to evaluate SLA-based parallel workloads of independent jobs and to detect any impact of the integrated heuristic on the workload.
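The idea of an integrated heuristic, a weighted combination of SLA terms whose weights are found by a parameter sweep, can be sketched as follows. The SLA term names, the scoring direction, and the evaluation callback are assumptions made for illustration; they are not taken from the thesis.

import itertools

def integrated_priority(job, weights):
    # Weighted sum of hypothetical SLA terms attached to a job.
    return sum(w * job[term] for w, term in zip(weights, ("urgency", "penalty", "demand")))

def sweep_weights(jobs, candidate_values, evaluate):
    # Try every combination of weights, order jobs by descending priority,
    # and keep the weight vector whose schedule scores best under `evaluate`.
    best_score, best_weights = None, None
    for weights in itertools.product(candidate_values, repeat=3):
        order = sorted(jobs, key=lambda job: -integrated_priority(job, weights))
        score = evaluate(order)
        if best_score is None or score > best_score:
            best_score, best_weights = score, weights
    return best_weights, best_score

A caller would supply the evaluate function, for example as the negated total SLA penalty of the resulting job order.
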
7

Li, Jianqing. „A parallel approach for solving a multiple machine job sequencing problem“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60235.pdf.

8

Zhou, Huajun. „The CON Job scheduling problem on a single and parallel machines /“. Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/zhouh/huajunzhou.pdf.

9

Vélez, Gallego Mario César. „Algorithms for Scheduling Parallel Batch Processing Machines with Non-Identical Job Ready Times“. FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/276.

Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, linear constraints, and run time. A procedure to compute the lower bound is proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to optimal. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances which require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time. The decomposition approach improved the lower bounds (or linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to optimal. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as the run times are very short.
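The batch semantics described above (a batch becomes ready when its last PCB arrives and runs for the longest processing time among its PCBs) can be captured in a few lines. The sketch below computes the makespan of a given batching by assigning batches to the earliest-available chamber; it illustrates the problem's mechanics under an assumed job tuple layout and is not one of the dissertation's heuristics.

def batch_makespan(batches, num_chambers):
    # batches: list of batches, each a list of (ready_time, processing_time) jobs;
    # a batch is ready at the latest ready time and runs for the longest
    # processing time among its jobs.
    chamber_free = [0.0] * num_chambers
    for batch in batches:
        ready = max(r for r, p in batch)
        proc = max(p for r, p in batch)
        i = min(range(num_chambers), key=lambda c: chamber_free[c])
        chamber_free[i] = max(chamber_free[i], ready) + proc
    return max(chamber_free)

# Toy usage: two batches tested one after the other in a single chamber.
print(batch_makespan([[(0, 5), (2, 3)], [(4, 2), (6, 4)]], num_chambers=1))
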
10

Georgiou, Yiannis. „Contributions for resource and job management in high performance computing“. Grenoble, 2010. http://www.theses.fr/2010GRENM079.

Abstract:
High Performance Computing is characterized by the latest technological evolutions in computing architectures and by the increasing needs of applications for computing power. A particular middleware, called a Resource and Job Management System (RJMS), is responsible for delivering computing power to applications. The RJMS plays an important role in HPC since it has a strategic place in the whole software stack, standing between the hardware and application layers. However, the latest evolutions in those layers have brought new levels of complexity to this middleware. Issues like scalability, management of topological constraints, energy efficiency, and fault tolerance have to be considered in particular, among others, in order to provide better system exploitation from both the system and the user point of view. This dissertation provides a state of the art of the fundamental concepts and research issues of Resource and Job Management Systems, together with a multi-level comparison (concepts, functionalities, performance) of several such systems used in High Performance Computing. An important metric to evaluate the work of an RJMS on a platform is the observed system utilization. However, studies and logs of production platforms show that HPC systems in general suffer from significant underutilization rates. Our study deals with these clusters' underutilization periods by proposing methods to aggregate otherwise unutilized resources for the benefit of the system or the application. More particularly, this thesis explores RJMS-level mechanisms: 1) for increasing the jobs' valuable computation rates in the highly volatile environments of a lightweight grid context, 2) for improving system utilization with malleability techniques, and 3) for providing energy-efficient system management through the exploitation of idle computing machines. Experimentation and evaluation in this type of context involve significant complexity due to the interdependency of multiple parameters that have to be kept under control. In this thesis we have developed a methodology based upon real-scale controlled experimentation with submission of synthetic or real workload traces.

Books on the topic "Job parallèle"

1

Fan, Yang. Parallel job scheduling in massively parallel processors. Ottawa: National Library of Canada, 1996.

2

Klusáček, Dalibor, Walfredo Cirne und Gonzalo P. Rodrigo, Hrsg. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88224-2.

3

Klusáček, Dalibor, Walfredo Cirne und Narayan Desai, Hrsg. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77398-8.

4

Frachtenberg, Eitan, und Uwe Schwiegelshohn, Hrsg. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16505-4.

5

Feitelson, Dror G., und Larry Rudolph, Hrsg. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-47954-6.

6

Feitelson, Dror G., und Larry Rudolph, Hrsg. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60153-8.

7

Feitelson, Dror G., und Larry Rudolph, Hrsg. Job Scheduling Strategies for Parallel Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-39997-6.

8

Desai, Narayan, und Walfredo Cirne, Hrsg. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61756-5.

9

Klusáček, Dalibor, Walfredo Cirne und Narayan Desai, Hrsg. Job Scheduling Strategies for Parallel Processing. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63171-0.


Book chapters on the topic "Job parallèle"

1

Tripiccione, Raffaele, Michael Philippsen, Rolf Riesen und Arthur B. Maccabe. „Job Scheduling“. In Encyclopedia of Parallel Computing, 997–1002. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_212.

2

Dandamudi, Sivarama. „Parallel Job Scheduling“. In Hierarchical Scheduling in Parallel and Cluster Systems, 49–84. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0133-6_3.

3

Sanders, Peter, und Dominik Schreiber. „Decentralized Online Scheduling of Malleable NP-hard Jobs“. In Euro-Par 2022: Parallel Processing, 119–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12597-3_8.

Abstract:
In this work, we address an online job scheduling problem in a large distributed computing environment. Each job has a priority and a resource demand, takes an unknown amount of time, and is malleable, i.e., the number of allotted workers can fluctuate during its execution. We subdivide the problem into (a) determining a fair amount of resources for each job and (b) assigning each job to a corresponding number of processing elements. Our approach is fully decentralized, uses lightweight communication, and arranges each job as a binary tree of workers which can grow and shrink as necessary. Using the NP-complete problem of propositional satisfiability (SAT) as a case study, we experimentally show on up to 128 machines (6144 cores) that our approach leads to near-optimal utilization, imposes minimal computational overhead, and performs fair scheduling of incoming jobs within a few milliseconds.
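Step (a) of the problem decomposition above, finding a fair volume of workers per job, can be illustrated with a simple proportional-share sketch. The priority-proportional split and the field names are assumptions for illustration; the paper's actual decentralized mechanism and its binary worker trees are not modeled here.

def fair_shares(jobs, total_workers):
    # jobs: list of dicts with hypothetical fields "id", "priority", "demand".
    # Give each job a share proportional to its priority, capped by its demand,
    # then hand out any leftover capacity greedily by descending priority.
    total_priority = sum(j["priority"] for j in jobs) or 1
    shares = {j["id"]: min(j["demand"], int(total_workers * j["priority"] / total_priority))
              for j in jobs}
    leftover = total_workers - sum(shares.values())
    for j in sorted(jobs, key=lambda j: -j["priority"]):
        grant = min(leftover, j["demand"] - shares[j["id"]])
        shares[j["id"]] += grant
        leftover -= grant
    return shares

# Toy usage: 6144 cores split between two jobs of different priority.
print(fair_shares([{"id": 1, "priority": 2, "demand": 5000},
                   {"id": 2, "priority": 1, "demand": 3000}], total_workers=6144))
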
4

Aida, Kento, Hironori Kasahara und Seinosuke Narita. „Job scheduling scheme for pure space sharing among rigid jobs“. In Job Scheduling Strategies for Parallel Processing, 98–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0053983.

5

Aida, Kento. „Effect of Job Size Characteristics on Job Scheduling Performance“. In Job Scheduling Strategies for Parallel Processing, 1–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-39997-6_1.

6

Cirne, Walfredo, und Eitan Frachtenberg. „Web-Scale Job Scheduling“. In Job Scheduling Strategies for Parallel Processing, 1–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35867-8_1.

7

Klusáček, Dalibor, Mehmet Soysal und Frédéric Suter. „Alea – Complex Job Scheduling Simulator“. In Parallel Processing and Applied Mathematics, 217–29. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43222-5_19.

8

Pruyne, Jim, und Miron Livny. „Managing checkpoints for parallel programs“. In Job Scheduling Strategies for Parallel Processing, 140–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0022292.

9

Abawajy, J. H. „Fault-Tolerant Dynamic Job Scheduling Policy“. In Distributed and Parallel Computing, 165–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621_19.

10

Lopez, Victor, Ana Jokanovic, Marco D’Amico, Marta Garcia, Raul Sirvent und Julita Corbalan. „DJSB: Dynamic Job Scheduling Benchmark“. In Job Scheduling Strategies for Parallel Processing, 174–88. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77398-8_10.


Conference papers on the topic "Job parallèle"

1

Toporkov, Victor, Anna Toporkova, Alexey Tselishchev, Dmitry Yemelyanov und Petr Potekhin. „Job Flow Distribution and Ranked Jobs Scheduling in Grid Virtual Organizations“. In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.36.

2

Silva, Fabricio Alves Barbosa da, und Isaac D. Scherson. „Concurrent Gang: Towards a Flexible and Scalable Gang Scheduler“. In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1999. http://dx.doi.org/10.5753/sbac-pad.1999.19796.

Abstract:
Gang scheduling has been widely used as a practical solution to the dynamic parallel job scheduling problem. The parallel tasks of a job are scheduled for simultaneous execution on a partition of a parallel computer. Gang scheduling has many advantages, such as responsiveness, efficient sharing of resources, and ease of programming. However, there are two major problems associated with gang scheduling: scalability and the decision of what to do when a task blocks. In this paper we propose a class of scheduling policies, dubbed Concurrent Gang, which is a generalization of gang scheduling and allows for the flexible simultaneous scheduling of multiple parallel jobs with different characteristics. In addition, scalability in Concurrent Gang is achieved through the use of a global clock that coordinates the gang scheduler across different processors.
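Gang scheduling can be pictured as packing whole jobs into time slices of an Ousterhout-style matrix so that all tasks of a job run in the same slice. The first-fit-decreasing packing below is only a generic illustration of this idea, not the Concurrent Gang policy from the paper.

def pack_gangs(job_sizes, num_processors):
    # job_sizes: mapping of job id -> number of parallel tasks (the gang size).
    # Each time slice holds whole gangs whose combined size fits the machine.
    slices = []
    for job_id, size in sorted(job_sizes.items(), key=lambda kv: -kv[1]):
        for time_slice in slices:
            if time_slice["used"] + size <= num_processors:
                time_slice["jobs"].append(job_id)
                time_slice["used"] += size
                break
        else:
            slices.append({"jobs": [job_id], "used": size})
    return slices

# Toy usage: four gangs on 8 processors are packed into two time slices.
print(pack_gangs({"A": 6, "B": 3, "C": 4, "D": 2}, num_processors=8))
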
3

Sabin, G., G. Kochhar und P. Sadayappan. „Job fairness in non-preemptive job scheduling“. In International Conference on Parallel Processing, 2004. ICPP 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpp.2004.1327920.

4

Souza, Augusto, und Islene Garcia. „A Preemptive Fair Scheduler Policy for Disco MapReduce Framework“. In XV Workshop em Desempenho de Sistemas Computacionais e de Comunicação. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wperformance.2016.9723.

Abstract:
Disco is an open-source MapReduce framework and an alternative to Hadoop. Task preemption is an important feature that helps organizations relying on the MapReduce paradigm handle their heterogeneous workloads, usually made up of research applications (long duration, low priority) and production applications (short duration, high priority). The lack of preemption in Disco affects production jobs when these two kinds of jobs need to be executed in parallel: the high-priority response is delayed because no resources are available to compute it. In this paper we describe the implementation of the Preemptive Fair Scheduler Policy, which greatly improved the execution time of our experimental production job with only a small impact on the research job.
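The effect of adding preemption can be sketched as a policy decision: when a short production job arrives and the cluster is full, enough low-priority research tasks are preempted to free its slots. The dictionaries and priority labels below are assumptions for illustration; this is not the scheduler code contributed to Disco.

def tasks_to_preempt(running_tasks, arriving_job, total_slots):
    # running_tasks: mapping task_id -> {"priority": "research" or "production"}.
    # arriving_job: {"priority": ..., "tasks": number of slots it needs}.
    free = total_slots - len(running_tasks)
    shortfall = arriving_job["tasks"] - free
    if arriving_job["priority"] != "production" or shortfall <= 0:
        return []  # either low priority or enough free slots: no preemption
    research = [t for t, info in running_tasks.items() if info["priority"] == "research"]
    return research[:shortfall]

# Toy usage: a 3-task production job arrives while 4 research tasks fill 4 slots.
running = {t: {"priority": "research"} for t in ("r1", "r2", "r3", "r4")}
print(tasks_to_preempt(running, {"priority": "production", "tasks": 3}, total_slots=4))
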
5

Weber, Daniel H., Wenchao Zhou und Zhenghui Sha. „Job Placement for Cooperative 3D Printing“. In ASME 2023 18th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/msec2023-104613.

Abstract:
Cooperative 3D Printing (C3DP), an additive manufacturing platform consisting of a swarm of mobile printing robots, is an emerging technology designed to address the size and printing speed limitations of conventional, gantry-based 3D printers. A typical C3DP process often involves several interconnected stages, including project/job partitioning, job placement on the floor, task scheduling, path planning, and motion planning. In our previous work on project partitioning, we presented a Z-Chunker, which vertically divides a tall print project into multiple jobs to overcome the physical constraints of printers in the Z direction, and an XY Chunker, which partitions jobs into discrete chunks that are allocated to individual printing robots for parallel printing. These geometry partitioning algorithms determine what is to be printed, but other information, such as when, where, and in what order chunks should be printed, is required to carry out the print physically. This paper introduces the first Job Placement Optimizer for C3DP based on Dynamic Dependency List schedule assignment and Conflict-Based Search path planning. Our algorithm determines the optimal locations for all jobs and chunks (i.e., subtasks of a job) on the factory floor to minimize the makespan for C3DP. To validate the proposed approach, we conduct three case studies: a simple geometry with homogeneous jobs in the Z direction and two complex geometries (one with moderate complexity and one relatively more complex) with non-homogeneous jobs in the Z direction. We also perform simulations to understand the impact of other factors, such as the number of robots, the number of jobs, chunking orientation, and the heterogeneity of prints (e.g., when chunks differ in size and materials), on the effectiveness of this placement optimizer.
6

Zhou, Longfang, Xiaorong Zhang, Wenxiang Yang, Yongguo Han, Fang Wang, Yadong Wu und Jie Yu. „PREP: Predicting Job Runtime with Job Running Path on Supercomputers“. In ICPP 2021: 50th International Conference on Parallel Processing. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3472456.3473521.

7

Jakobsche, Thomas, Nicolas Lachiche und Florina M. Ciorba. „Investigating HPC Job Resource Requests and Job Efficiency Reporting“. In 2023 22nd International Symposium on Parallel and Distributed Computing (ISPDC). IEEE, 2023. http://dx.doi.org/10.1109/ispdc59212.2023.00024.

8

Zhou, Wei, K. Preston White und Hongfeng Yu. „Improving Short Job Latency Performance in Hybrid Job Schedulers with Dice“. In ICPP 2019: 48th International Conference on Parallel Processing. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3337821.3337851.

9

Konovalov, Mikhail, und Rostislav Razumchik. „Minimizing Mean Response Time In Batch-Arrival Non-Observable Systems With Single-Server FIFO Queues Operating In Parallel“. In 35th ECMS International Conference on Modelling and Simulation. ECMS, 2021. http://dx.doi.org/10.7148/2021-0272.

Abstract:
Consideration is given to a dispatching system where jobs, arriving in batches, cannot be stored and thus must be immediately routed to single-server FIFO queues operating in parallel. The dispatcher can memorize its routing decisions but at no time has any information about the system's state. The only information available is the batch/job size and inter-arrival time distributions and the servers' service rates. Under these conditions, one is interested in routing policies that minimize the jobs' long-run mean response time. A single-parameter routing policy is proposed which, according to numerical experiments, outperforms the best routing rules known so far for non-observable dispatching systems: probabilistic and deterministic. Both batch-wise and job-wise assignments are studied. An extension to systems with unreliable servers is also addressed.
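For context, the classical probabilistic baseline that such single-parameter rules are compared against can be sketched in a few lines: a non-observable dispatcher sends each job to queue i with probability proportional to that queue's service rate. This baseline sketch is for illustration only and is not the policy introduced in the paper.

import random

def probabilistic_route(service_rates):
    # Pick a queue index with probability proportional to its service rate;
    # the dispatcher never observes queue lengths, matching the non-observable setting.
    total = sum(service_rates)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, rate in enumerate(service_rates):
        acc += rate
        if r <= acc:
            return i
    return len(service_rates) - 1

# Toy usage: three heterogeneous servers; the fastest receives half the jobs on average.
print(probabilistic_route([2.0, 1.0, 1.0]))
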

Organization reports on the topic "Job parallèle"

1

Leung, Vitus Joseph, Gerald Sabin und Ponnuswamy Sadayappan. Parallel job scheduling policies to improve fairness : a case study. Office of Scientific and Technical Information (OSTI), Februar 2008. http://dx.doi.org/10.2172/929521.
