Dissertations / Theses on the topic 'Job parallèle'

Consult the top 50 dissertations / theses for your research on the topic 'Job parallèle.'

1

Sabin, Gerald M. "Unfairness in parallel job scheduling." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1164826017.

2

Islam, Mohammad Kamrul. "QoS In Parallel Job Scheduling." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218566682.

3

Wing, A. J. "Parallel simulation of PCB job shops." Thesis, University of East Anglia, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359342.

4

Lynch, Gerard. "Parallel job scheduling on heterogeneous networks of multiprocessor workstations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ45952.pdf.

5

Song, Bin, 1970-. "Scheduling adaptively parallel jobs." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50354.

6

Ali, Syed Zeeshan. "An investigation into parallel job scheduling using service level agreements." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/an-investigation-into-parallel-job-scheduling-using-service-level-agreements(f4685321-374e-41c4-86da-d07f09ea4bac).html.

Abstract:
A scheduler, as a central component of a computing site, aggregates computing resources and is responsible for distributing the incoming load (jobs) among those resources. In such an environment, the optimum performance of the system against service level agreement (SLA) based workloads can be achieved by calculating the priority of SLA-bound jobs using an integrated heuristic. The SLA defines the service obligations and expectations attached to the use of the computational resources. The integrated heuristic combines the different SLA terms, assigning a specific weight to each term. The weights are computed with a parameter sweep in order to obtain the schedule that gives the best system performance under the workload. Sweeping the parameters of the integrated heuristic, however, is computationally expensive, and the expense is wasted whenever no value of the computed weights yields a schedule that improves performance; in such situations the sweep incurs computation cost without delivering optimum performance. There is therefore a need to detect the situations in which the integrated heuristic can be exploited beneficially. For that reason, this thesis proposes a metric based on the concept of utilization to evaluate SLA-based parallel workloads of independent jobs and detect whether the integrated heuristic will have an impact on the workload.
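
The weighted combination and parameter sweep described above can be sketched in a few lines. This is a minimal illustration only: the SLA terms ('urgency', 'price'), the single-resource tardiness evaluation and the grid step are assumptions of the sketch, not details from the thesis.

    import itertools

    def priority(job, w):
        # Integrated heuristic: a weighted combination of SLA terms.
        # 'urgency' and 'price' are assumed example terms.
        return w[0] * job["urgency"] + w[1] * job["price"]

    def schedule_cost(jobs, w):
        # Evaluate one weight vector: dispatch by priority on a single
        # resource and return total tardiness against the SLA deadlines.
        t, tardiness = 0.0, 0.0
        for job in sorted(jobs, key=lambda j: -priority(j, w)):
            t += job["runtime"]
            tardiness += max(0.0, t - job["deadline"])
        return tardiness

    def sweep_weights(jobs, step=0.25):
        # The parameter sweep the thesis describes as expensive: the grid
        # grows exponentially with the number of SLA terms.
        grid = [i * step for i in range(int(1 / step) + 1)]
        return min(itertools.product(grid, repeat=2),
                   key=lambda w: schedule_cost(jobs, w))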
7

Li, Jianqing. "A parallel approach for solving a multiple machine job sequencing problem." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60235.pdf.

8

Zhou, Huajun. "The CON Job scheduling problem on a single and parallel machines." Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/zhouh/huajunzhou.pdf.

9

Vélez, Gallego Mario César. "Algorithms for Scheduling Parallel Batch Processing Machines with Non-Identical Job Ready Times." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/276.

Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is also proposed; for sparse problems (i.e. when job ready times are widely dispersed), the lower bounds are close to the optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances which require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time. The decomposition approach improved the lower bounds (or linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature, and for sparse problems almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run times are very short.
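
The batching rules stated in this abstract translate directly into code: a batch's processing time is the longest among its PCBs, its ready time is the latest arrival, and total size is capped by the chamber capacity. The sketch below is a naive fill-in-arrival-order baseline under those rules, not one of the dissertation's five heuristics.

    def form_batches(jobs, capacity):
        # jobs: dicts with 'size', 'ready', 'ptime'. Assumes each PCB fits
        # the chamber on its own. Greedy fill in ready-time order.
        batches, current, used = [], [], 0
        for job in sorted(jobs, key=lambda j: j["ready"]):
            if current and used + job["size"] > capacity:
                batches.append(current)
                current, used = [], 0
            current.append(job)
            used += job["size"]
        if current:
            batches.append(current)
        return batches

    def makespan(batches, n_machines):
        # A batch's processing time is the longest among its PCBs; it may
        # start once its last PCB has arrived and a chamber becomes free.
        free = [0.0] * n_machines
        for batch in batches:
            ready = max(j["ready"] for j in batch)
            ptime = max(j["ptime"] for j in batch)
            i = free.index(min(free))
            free[i] = max(free[i], ready) + ptime
        return max(free)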
10

Georgiou, Yiannis. "Contributions for resource and job management in high performance computing." Grenoble, 2010. http://www.theses.fr/2010GRENM079.

Abstract:
High Performance Computing is characterized by the latest technological evolutions in computing architectures and by the increasing needs of applications for computing power. A particular middleware, the Resource and Job Management System (RJMS), is responsible for delivering computing power to applications. The RJMS plays an important role in HPC since it occupies a strategic place in the software stack, standing between the hardware and the applications. However, the latest evolutions in the hardware and application layers have brought new levels of complexity to this middleware. Issues like scalability, management of topological constraints, energy efficiency and fault tolerance have to be particularly considered, among others, in order to provide better system exploitation from both the system and the user point of view. This dissertation provides a state of the art upon the fundamental concepts and research issues of Resource and Job Management Systems, along with a multi-level comparison (concepts, functionalities, performance) of several RJMSs used in High Performance Computing. An important metric for evaluating the work of an RJMS on a platform is the observed system utilization; however, studies and logs of production platforms show that HPC systems in general suffer from significant un-utilization. Our study deals with these un-utilization periods by proposing methods to aggregate otherwise unused resources for the benefit of the system or of the applications. More particularly, this thesis explores RJMS-level mechanisms 1) for increasing the valuable computation rate of jobs in the highly volatile environment of a lightweight grid, 2) for improving system utilization with malleability techniques, and 3) for providing energy-efficient system management through the exploitation of idle computing machines. Experimentation and evaluation in this type of context involve considerable complexity due to the inter-dependency of the multiple parameters that have to be controlled; in this thesis we have therefore developed a methodology based upon real-scale controlled experiments with submission of synthetic or real workload traces.
11

Ramachandra, Girish. "Scheduling Precedence Related Jobs on Identical Parallel Processors." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-20020121-185145.

Abstract:

The problem of concern to us in this thesis is the scheduling of precedence-related jobs non-preemptively on two identical parallel processors to minimize the sum of the weighted completion times. The problem is known to be NP-hard. We develop, in chapter 2, a binary integer program which is capable of solving only small size problems (no larger than 12 jobs) to optimality at the present time. We also present a linear programming (LP) model adopted from the literature to determine the lower bound on the optimum. This LP stands us in good stead when we perform the optimization via the Genetic Algorithm approach (which is the subject matter of chapter 3). We also present a dynamic programming formulation based on the approach used for solving the "weighted earliness-tardiness" problem. Although DP expands somewhat the size of the problems that can be solved to optimality, its computing burden becomes onerous for more than 25 jobs. In an attempt to solve larger, and more realistic, problems, a Genetic Algorithm (GA) is presented in chapter 3. The salient feature of the GA model is that the "initial population" of trial solutions is not all randomly generated but is constituted from a set of priority rules which are known to be "good" relaxations (in the sense of being "close" to the optimum) of the original problem. Also, generation of infeasible solutions is avoided by the use of post-processing procedures after crossover and mutation operations. Computational results show that the GA approach arrives to within 20% of the lower bound (and hence of the optimum) in very few iterations.
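
For reference, the objective being minimized can be evaluated as follows for a given dispatch order on two identical processors. This toy evaluator only encodes the abstract's definitions (non-preemptive jobs, precedence respected, sum of weighted completion times); it assumes the order is already topological and is not the thesis's integer program or GA.

    def weighted_completion_sum(order, jobs, preds):
        # order: list of (job, machine) pairs in dispatch order, assumed
        # topological with respect to the precedence relation.
        # jobs: {job: (processing_time, weight)}; preds: {job: predecessors}.
        free = [0.0, 0.0]                 # two identical processors
        finish = {}
        total = 0.0
        for job, m in order:
            p, w = jobs[job]
            start = max([free[m]] + [finish[q] for q in preds.get(job, ())])
            finish[job] = start + p       # non-preemptive execution
            free[m] = finish[job]
            total += w * finish[job]
        return total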

12

Khan, Mukhtaj. "Hadoop performance modeling and job optimization for big data analytics." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11078.

Abstract:
Big data has received a momentum from both academia and industry. The MapReduce model has emerged as a major computing model in support of big data analytics, and Hadoop, an open source implementation of MapReduce, has been widely taken up by the community. Cloud service providers such as Amazon EC2 now support Hadoop user applications. However, a key challenge is that the cloud service providers do not have a resource provisioning mechanism to satisfy user jobs with deadline requirements; currently, it is solely the user's responsibility to estimate the required amount of resources for a job running in a public cloud. This thesis presents a Hadoop performance model that accurately estimates the execution duration of a job and further provisions the amount of resources required for the job to be completed within a deadline. The proposed model employs Locally Weighted Linear Regression (LWLR) to estimate the execution time of a job and the Lagrange Multiplier technique for resource provisioning to satisfy user jobs with given deadlines. The performance of the proposed model is extensively evaluated on both an in-house Hadoop cluster and the Amazon EC2 cloud. Experimental results show that the model is highly accurate in job execution estimation and that jobs complete within their required deadlines when the model's resource provisioning scheme is followed. In addition, the Hadoop framework has over 190 configuration parameters, some of which have significant effects on the performance of a Hadoop job. Manually setting optimum values for these parameters is a challenging and time consuming task. This thesis therefore presents optimization work that enhances the performance of Hadoop by automatically tuning its parameter values. It employs Gene Expression Programming (GEP) to build an objective function that represents the performance of a job and the correlation among the configuration parameters, and Particle Swarm Optimization (PSO) to automatically find optimal or near-optimal configuration settings. The performance of the proposed work is intensively evaluated on a Hadoop cluster, and the experimental results show that it enhances the performance of Hadoop significantly compared with the default settings.
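
Locally weighted linear regression, the estimator named above, fits a separate weighted least-squares model around each query point. A minimal numpy sketch with a Gaussian kernel follows; the actual features and bandwidth used in the thesis are not reproduced here.

    import numpy as np

    def lwlr_predict(x_query, X, y, tau=1.0):
        # X: (n, d) features of past jobs (e.g. input data size);
        # y: their observed execution times; tau: kernel bandwidth.
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # intercept column
        xq = np.hstack([1.0, np.atleast_1d(x_query)])
        # Gaussian weights: training jobs near the query dominate the fit.
        w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
        theta = np.linalg.pinv(Xb.T @ (w[:, None] * Xb)) @ (Xb.T @ (w * y))
        return float(xq @ theta)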
13

Hulett, Maria. "Analytical Approximations to Predict Performance Measures of Manufacturing Systems with Job Failures and Parallel Processing." FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/167.

Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this configuration is observed at a manufacturing facility equipped to assemble and test web servers; characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial configurations and different configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, appropriate correction terms were added to the approximations via regression analysis in order to minimize the error gap between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems with multiple product classes, job circulation due to failures, and fork-and-join structures to model parallel processing were studied. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: on average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines and of manufacturing or other service systems with similar characteristics can estimate different system performance measures and make judicious decisions, especially about setting delivery due dates, capacity planning, and bottleneck mitigation.
14

Sen, Siddhartha, 1981-. "Dynamic processor allocation for adaptively parallel work-stealing jobs." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/33355.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 77-82).
by Siddhartha Sen.
M.Eng.
15

Sobalvarro, Patrick G. "Demand-based coscheduling of parallel jobs on multiprogrammed multiprocessors." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43588.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 92-94).
by Patrick Gregory Sobalvarro.
Ph.D.
16

Ahmed, Zubair. "Developing an efficient scheduling template of a chemotherapy treatment unit: simulation and optimization approach." Australasian Medical Journal, 2011. http://hdl.handle.net/1993/5084.

Abstract:
This study was undertaken to improve the performance of a chemotherapy treatment unit by increasing the throughput of the clinic and reducing the average patient waiting time. To achieve this objective, a simulation model of the system was built, and several scenarios that target matching the patients' arrival pattern to resource availability were designed and evaluated. Detailed analysis showed that one scenario provides the best system performance: it determines a rational patient arrival pattern matched to nurse availability and can serve 22.5% more patients daily. Although the simulation study shows how to serve more patients daily, it does not explain how to sequence them to minimize the average waiting time. Therefore, an efficient scheduling algorithm was developed to build a scheduling template that minimizes the total flow time of the system.
17

Lakkimsetti, Praveen Kumar. "A framework for automatic optimization of MapReduce programs based on job parameter configurations." Kansas State University, 2011. http://hdl.handle.net/2097/12011.

Abstract:
Master of Science
Department of Computing and Information Sciences
Mitchell L. Neilsen
Recently, cost-effective and timely processing of large datasets has been playing an important role in the success of many enterprises and of the scientific computing community. Two promising trends ensure that applications will be able to deal with ever increasing data volumes: first, the emergence of cloud computing, which provides transparent access to a large number of processing, storage and networking resources; and second, the development of the MapReduce programming model, which provides a high-level abstraction for data-intensive computing. MapReduce has been widely used for large-scale data analysis in the Cloud [5]. The system is well recognized for its elastic scalability and fine-grained fault tolerance. However, even to run a single program in a MapReduce framework, a number of tuning parameters have to be set by users or system administrators to increase the efficiency of the program. Users often run into performance problems because they are unaware of how to set these parameters, or because they don't even know that these parameters exist. With MapReduce being a relatively new technology, it is not easy to find qualified administrators [4]. The major objective of this project is to provide a framework that optimizes MapReduce programs that run on large datasets. This is done by executing the MapReduce program on a part of the dataset with stored parameter combinations, configuring the program with the most efficient combination found, and then executing the configured program over the full datasets. Many MapReduce programs are used over and over again in applications such as daily weather analysis, log analysis and daily report generation, so once the parameter combination is set, it can be reused efficiently on a number of datasets. This feature can go a long way towards improving the productivity of users who lack the skills to optimize programs themselves, whether due to unfamiliarity with MapReduce or with the data being processed.
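
The core loop of such a framework, namely running the program on a data sample under each stored parameter combination and keeping the fastest, can be sketched generically. The function names and the example parameter grid below are illustrative assumptions, not taken from the project.

    import itertools
    import time

    def best_configuration(run_job, sample_path, grid):
        # grid: {parameter name: candidate values}. run_job(path, conf) is
        # a placeholder that executes the MapReduce program on the data
        # sample with the given settings.
        names = list(grid)
        best_conf, best_time = None, float("inf")
        for values in itertools.product(*(grid[n] for n in names)):
            conf = dict(zip(names, values))
            start = time.time()
            run_job(sample_path, conf)
            elapsed = time.time() - start
            if elapsed < best_time:
                best_conf, best_time = conf, elapsed
        return best_conf

    # Example call (illustrative Hadoop parameter names):
    # best_configuration(run_job, "hdfs:///sample",
    #                    {"mapreduce.job.reduces": [4, 8, 16],
    #                     "mapreduce.task.io.sort.mb": [100, 200]})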
18

Utrera Iglesias, Gladys Miriam. "'Virtual malleability' applied to MPI jobs to improve their execution in a multiprogrammed environment." Doctoral thesis, Universitat Politècnica de Catalunya, 2007. http://hdl.handle.net/10803/6013.

Abstract:
This work focuses on the scheduling of MPI jobs executing on shared-memory multiprocessors (SMPs).
The objective was to obtain the best response-time performance in multiprogrammed multiprocessor systems using batch systems, assuming all jobs have the same priority.
To that end, the benefits of supporting malleability in MPI jobs, in order to reduce fragmentation and consequently improve system performance, were studied.
The contributions made in this work can be summarized as follows:
· Virtual malleability: a mechanism where a job is assigned a dynamic processor partition in which the number of processes is greater than the number of processors. The partition size is modified at runtime according to external requirements, such as the load of the system, by varying the multiprogramming level, making the job contend for resources with itself.
In addition, a mechanism that decides at runtime whether to apply local or global process queues to an application, depending on the load balance among its processes.
· A job scheduling policy that decides how many processes a job starts with and the maximum multiprogramming degree, based on the type and number of applications running and queued. Moreover, as soon as a job finishes execution and there are queued jobs, the algorithm analyzes whether it is better to start another job immediately or to wait until more resources become available.
· A new alternative to backfilling strategies for the problem of the execution window expiring. Virtual malleability is applied to the backfilled job, reducing its partition size, but without aborting or suspending it as in traditional backfilling.

The evaluation of this thesis follows a practical approach. All the proposals were implemented, modifying the three scheduling levels: queuing system, processor scheduler and runtime library.
The impact of the contributions was studied under several types of workloads, varying machine utilization, communication, application balance degree, multiprogramming level, and job size.
Results showed that it is possible to offer malleability for MPI jobs.
An application obtained better performance when contending for resources with itself than with other applications, especially in workloads with high machine utilization. Load imbalance was also taken into account: better performance was obtained by applying the right queue type to each application independently.
The proposed job scheduling policy exploited virtual malleability by choosing, at the beginning of execution, parameters such as the number of processes and the maximum multiprogramming level. It performed well under bursty workloads with low to medium machine utilization.
However, as the load increases, virtual malleability is no longer enough: when the machine is heavily loaded, jobs, once shrunk, are not able to expand, so they run the whole time on a partition smaller than the job size, degrading performance. At that point the job scheduling policy concentrates on moldability instead.
Fragmentation was also alleviated by applying backfilling techniques to the job scheduling algorithm. Virtual malleability proved to be an interesting improvement for the window-expiring problem: backfilled jobs, even on a smaller partition, can continue executing, reducing the memory swapping generated by aborts and suspensions. In this way the queueing system is prevented from reinserting the backfilled job in the queue and re-executing it later.
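
A minimal sketch of the partition-size decision behind virtual malleability, assuming only what the abstract states (a job keeps all its processes but may run on fewer processors, with the multiprogramming level capped); the queuing-system and runtime integration is not shown.

    def partition_size(n_processes, free_processors, max_mpl):
        # Virtual malleability: the job keeps its n_processes processes but
        # may run on fewer processors. max_mpl caps how many processes may
        # share one processor (the multiprogramming level).
        min_processors = -(-n_processes // max_mpl)      # ceil division
        return max(min_processors, min(n_processes, free_processors))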
19

Schlagkamp, Stephan [Verfasser], Uwe [Akademischer Betreuer] Schwiegelshohn, and Andrei [Gutachter] Tchernykh. "User-aware performance evaluation and optimization of parallel job schedulers / Stephan Schlagkamp ; Gutachter: Andrei Tchernykh ; Betreuer: Uwe Schwiegelshohn." Dortmund : Universitätsbibliothek Dortmund, 2017. http://d-nb.info/1139892584/34.

20

Etinski, Maja. "DVFS power management in HPC systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/96192.

Abstract:
Recent increases in the performance of High Performance Computing (HPC) systems have been followed by even higher increases in power consumption. The power draw of modern supercomputers leads to very high operating costs and reliability concerns, and has negative consequences for the environment. Accordingly, over the last decade there have been many works dealing with power and energy management in HPC systems. Since the CPU accounts for a high portion of total system power consumption, our work aims at CPU power reduction. Dynamic Voltage and Frequency Scaling (DVFS) is a widely used technique for CPU power management: running an application at a lower frequency/voltage reduces its power consumption, but frequency scaling must be used carefully since it degrades application performance. We argue that the job scheduler is a good place for power management in an HPC center, since a parallel job scheduler has a global overview of the entire system. In this thesis we propose power-aware parallel job scheduling policies where the scheduler determines the job's CPU frequency in addition to the job execution order. Based on their goal, the proposed policies fall into two groups: energy saving policies and power budgeting policies. The energy saving policies aim to reduce CPU energy consumption with a minimal job performance penalty. The first of these assigns the job frequency based on system utilization, while the other makes job performance predictions. While these policies achieve energy savings for lightly loaded workloads, highly loaded workloads suffer substantial performance degradation, because longer job run times increase the load and hence the job wait times. Our results show higher potential for DVFS when it is applied to power budgeting. The second group of policies targets power-constrained systems. In contrast to systems without a power limitation, under a given power budget the DVFS technique actually improves overall job performance by reducing the average job wait time: lower per-job power consumption allows more jobs to run simultaneously. The first policy of this group assigns CPU frequency using the job's predicted performance and the current power draw of already running jobs. The other power budgeting policy is based on an optimization problem whose solution determines the job execution order as well as the power distribution among jobs selected for execution. This policy fully exploits the available power and leads to further performance improvements. The last contribution of the thesis is an analysis of the potential of DVFS for the energy-performance trade-off in current and future HPC systems. Ongoing changes in technology decrease the applicability of DVFS for energy savings, but the technique still reduces power consumption, making it useful for power-constrained systems. In order to analyze this potential, a model of the impact of frequency scaling on MPI application execution time is proposed and validated against measurements on a large-scale system. This parametric analysis shows for which application and platform characteristics frequency scaling leads to energy savings.
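
To make the power-budgeting idea concrete: running a job at a reduced frequency lowers its CPU power draw, so more jobs fit under a fixed budget at the cost of longer run times. The sketch below uses an invented toy model (cubic power in frequency, highest feasible frequency wins) and is not one of the policies evaluated in the thesis.

    def pick_frequency(nominal_power, freqs, budget_left):
        # Toy model: CPU power roughly follows f^3 when voltage scales with
        # frequency, so the draw at f is nominal_power * (f / f_max) ** 3.
        f_max = max(freqs)
        for f in sorted(freqs, reverse=True):     # prefer higher performance
            if nominal_power * (f / f_max) ** 3 <= budget_left:
                return f
        return None   # no frequency fits: the job waits for budget to free up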
21

Calmels, Dorothea [Verfasser], and Hans [Akademischer Betreuer] Ziegler. "Job Sequencing and Tool Switching Problems with a Generalisation to Non-Identical Parallel Machines / Dorothea Calmels ; Betreuer: Hans Ziegler." Passau : Universität Passau, 2020. http://d-nb.info/1218780703/34.

22

Gama Pinheiro, Vinicius. "The management of multiple submissions in parallel systems : the fair scheduling approach." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM042/document.

Abstract:
We study the problem of scheduling in parallel and distributed systems with multiple users. New platforms for parallel and distributed computing offer very large power, which makes it possible to contemplate the resolution of complex interactive applications; nowadays, it is still difficult to use this power efficiently due to a lack of resource management tools. The work done in this thesis lies in this context: to analyse and develop efficient algorithms for managing computing resources shared among multiple users. We analyze scenarios with many submissions issued by multiple users over time. These submissions contain one or more jobs, and the set of submissions is organized in successive campaigns. No job from a campaign can start until all the jobs from the previous campaign are completed. Each user is interested in minimizing the sum of flow times of his campaigns. In the first part of this work, we define a theoretical model for Campaign Scheduling under restrictive assumptions and we show that, in the general case, it is NP-hard. For the single-user case, we show that a $\rho$-approximation scheduling algorithm for the (classic) parallel job scheduling problem is also a $\rho$-approximation for the Campaign Scheduling problem. For the general case with $k$ users, we establish a fairness criterion inspired by time sharing. Then, we propose FairCamp, a scheduling algorithm which uses campaign deadlines to achieve fairness among users between consecutive campaigns. We prove that FairCamp increases the flow time of each user by a factor of at most $k\rho$ compared with a machine dedicated to the user. We also prove that FairCamp is a $\rho$-approximation algorithm for the maximum stretch. We compare FairCamp to First-Come-First-Served (FCFS) by simulation and show that, compared with FCFS, FairCamp reduces the maximum stretch by up to $3.4$ times. The difference is significant in systems used by many ($k>5$) users. Our results show that campaigns of jobs, rather than just individual, independent jobs, can be handled by the scheduler efficiently and fairly.
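
The metric in this abstract is easy to state: a campaign's flow time is its completion time minus its submission time, and its stretch compares that flow time with the campaign's duration on a machine dedicated to its user. A small helper under exactly those definitions:

    def max_stretch(campaigns):
        # campaigns: dicts with 'submit', 'finish' and 'dedicated', the
        # campaign's duration on a machine dedicated to its user.
        return max((c["finish"] - c["submit"]) / c["dedicated"]
                   for c in campaigns)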
23

Hung, Hui-Chih. "Allocation of jobs and resources to work centers." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141849609.

24

Speck, Jochen Matthias [Verfasser], and P. [Akademischer Betreuer] Sanders. "Theory and Engineering of Scheduling Parallel Jobs / Jochen Matthias Speck ; Betreuer: P. Sanders." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1162540745/34.

25

Ghedjati-Guessoum, Fatima, and Jean-Charles Pomerol. "Résolution par des heuristiques dynamiques et des algorithmes génétiques du problème d'ordonnancement de type job-shop généralisé (à machines non identiques en parallèle et contraintes de précédence)." Paris 6, 1994. http://www.theses.fr/1994PA066581.

Abstract:
The objective of this thesis is the approximate resolution of the generalized job-shop scheduling problem with non-identical parallel machines and precedence constraints (where the job routings are non-linear). The first phase of this work consists in building a schedule generator and a programming environment that make it possible, on the one hand, to quickly test different static and especially dynamic heuristics and, on the other hand, to switch easily and dynamically from one heuristic to another without changing the base algorithm. The strategy used rests on two resolution schemes. The first idea is developed in the second part of this thesis, in which we propose new heuristics that try to make the best use of the versatility and the potential load of the machines. In the third phase of the thesis, we improve the population of solutions obtained by the preceding heuristics using a variety of genetic algorithms designed for this problem. Experiments were carried out on both approaches with various types of data taken from the literature or generated randomly. Our approach makes it possible to handle relatively large problems. The results are promising, and the merits of each approach are discussed.
26

Araújo, Felipe Francisco Bezerra. "Modelos e algoritmos para variações do problema de balanceamento de linhas de produção e designação de trabalhadores." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30112016-141117/.

Abstract:
The assembly line worker assignment and balancing problem is an extension of the simple assembly line balancing problem in which the task execution times depend on the assigned workers. This problem draws its practical motivation from assembly lines with workers with disabilities. In this doctoral thesis, we study two extensions of this problem: the first allows layouts with parallel workstations, while the second allows multiple parallel lines. These extensions were applied to the base problem as well as to the job rotation problem. We present mathematical formulations and exact and heuristic methods for all cases. Computational tests on instances from the literature and on new instances, together with a detailed analysis of the results, are presented.
27

Cincioglu, Derya. "A Rescheduling Problem with Controllable Processing Times: Trade-off between Number of Disrupted Jobs and Rescheduling Costs." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613950/index.pdf.

Abstract:
In this thesis, we consider a rescheduling problem on non-identical parallel machines with controllable processing times. A period of unavailability occurs on one of the machines due to a machine failure, a material shortage or a broken tool. Such disruptions may cause the original schedule to become inefficient and sometimes infeasible. In order to generate a new and feasible schedule, we deal simultaneously with two conflicting measures, efficiency and stability. The efficiency measure evaluates the satisfaction of a desired objective function value, and the stability measure evaluates the amount of change between the schedules before and after the disruption. In this study, we measure stability by the number of disrupted jobs, where a job is called disrupted if it completes processing after its planned completion time in the original schedule. Efficiency is measured by the additional manufacturing cost of jobs. Decreasing the number of disrupted jobs requires compressing the processing time of a job, which increases its additional manufacturing cost, so the two objectives cannot be minimized at the same time. To handle this, we developed a mixed integer programming model for the problem by applying the epsilon-constraint approach, which makes it possible to focus on a single objective while still obtaining efficient solutions. We therefore study the problem of minimizing additional manufacturing cost subject to a limit on the number of disrupted jobs. We also consider a convex compression cost function for each job and solve the cost minimization problem by applying a conic quadratic reformulation of the model. The convexity of the cost functions is a major source of difficulty in finding optimal integer solutions to this problem, but applying the strengthened conic reformulation eliminates this difficulty. In addition, we developed an improvement search algorithm, built on optimality properties we show for a single-machine subproblem, in order to find good solutions in reasonable CPU time. We conducted computational experiments on small and medium scale test problems and compared the improvement search algorithm with the mathematical model in terms of solution quality and run time.
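
The epsilon-constraint approach mentioned above turns the bicriteria problem into a family of single-objective problems: minimize additional manufacturing cost subject to a bound on the number of disrupted jobs, for each bound of interest. A generic sketch follows, with solve_min_cost standing in for the thesis's mixed integer (or conic quadratic) model.

    def epsilon_constraint_frontier(solve_min_cost, n_jobs):
        # solve_min_cost(eps) is a placeholder for the MIP solver: it
        # returns the minimum additional manufacturing cost subject to
        # "number of disrupted jobs <= eps", or None if infeasible.
        frontier = []
        for eps in range(n_jobs + 1):
            cost = solve_min_cost(eps)
            if cost is None:
                continue
            # Keep only points that strictly improve on the previous cost.
            if not frontier or cost < frontier[-1][1]:
                frontier.append((eps, cost))
        return frontier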
28

Delgado, Javier. "Scheduling Medical Application Workloads on Virtualized Computing Systems." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/633.

Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures; for example, a medical imaging service can allow medical professionals to process their data in the cloud, sparing them the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; a performance prediction methodology applicable to the target environment; and a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to two baseline scheduling solutions and outperforms them in terms of both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
29

Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms." PhD thesis, École normale supérieure de Lyon - ENS Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00744247.

Abstract:
In this thesis we have tackled a few scheduling problems under energy constraints, since the energy issue is becoming crucial for both economical and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network, trying to minimize the energy consumption in a dynamic setting. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, this time in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared with the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications to minimize energy consumption.
30

Yeh, Shih-Lun, and 葉士綸. "Parallel Machine Scheduling with Job Splitting." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/36832661505975830611.

Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
Academic year 97
This study focuses on the problem of scheduling jobs on identical parallel machines with a job-splitting property. It is assumed that a job can be split into several sub-jobs, which can be processed on different machines simultaneously. Each job has a processing time, a ready date and a due date. If a sub-job of one job is assigned after a sub-job of another job on a machine, a sequence-dependent setup time is incurred. A two-phase heuristic algorithm is proposed to minimize total tardiness. In phase 1, five efficient heuristic methods are proposed. In phase 2, the sub-jobs of tardy jobs in the schedule produced by phase 1 are split further, and the resulting sub-jobs are rescheduled on the machines to reduce the total tardiness. Computer experiments show that the proposed algorithm outperforms a previously suggested algorithm.
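
Phase 2's split-and-reschedule step can be pictured with a deliberately crude comparison: finish a tardy sub-job whole on the least loaded machine, or halve it across the two least loaded machines with each half paying its own setup. The constant setup below stands in for the sequence-dependent setups of the thesis; all names and simplifications are assumptions of this sketch.

    def split_gain(loads, ptime, setup, due):
        # loads: current completion times of the machines.
        single = min(loads) + setup + ptime
        m1, m2 = sorted(range(len(loads)), key=loads.__getitem__)[:2]
        halves = max(loads[m1], loads[m2]) + setup + ptime / 2
        best = min(single, halves)
        return max(0.0, best - due), halves < single  # (tardiness, split helps?)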
31

Chang, Wen-Ting, and 張文亭. "The job-splitting scheduling of varying-time-window jobs on parallel machines by mixed integer programming." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/23391267698472535574.

Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
Academic year 101
This study is motivated by the production management problem found in many large-volume MTO systems. In such an environment, a decision maker has to evaluate whether the requested due dates are capacity-feasible and, further, to determine how to distribute each confirmed order, with an arbitrary ready date and due date, across parallel machines so that the demand quantity and the due date can be met. Sequence- and machine-dependent setup times on unrelated parallel machines are considered in this study. In order to finish promised orders on time, splitting the production items across parallel machines for simultaneous processing is necessary. Two different approaches are proposed for the two hierarchical levels of production management: a preemptive earliest due date (PEDD) capacity requirements planning (CRP) approach at the order entry level, and a lexicographic goal programming approach at the operational scheduling level. Two priority objectives are considered in the goal programming: the first is to minimize total tardiness and the second is to minimize total setup time, and two different mixed integer programming (MIP) formulations are proposed for these two objectives. Furthermore, two models with different assumptions on job splitting are investigated at the operational scheduling level. Model 1 allows at most one setup for a job on a machine, while Model 2 allows multiple setups. Because of this difference, the two models adopt different binary-variable modeling techniques: immediate-precedence variables in Model 1 and sequence-position variables in Model 2. Experimental results show that the proposed approach solves the problems effectively and efficiently.
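
The two binary-variable schemes named above are standard in the scheduling literature. In rough form (notation mine, simplified to the unsplit case, and not the thesis's full formulations):

    % Model 1, immediate-precedence variables:
    %   x_{ijk} = 1 if job i immediately precedes job j on machine k,
    %   so the sequence-dependent setup s_{ijk} is charged on that arc.
    % Model 2, sequence-position variables:
    %   y_{jkp} = 1 if a piece of job j occupies position p on machine k,
    %   which naturally allows several setups per job and machine.
    \begin{align}
      \sum_{k} \sum_{i \neq j} x_{ijk} &= 1 && \text{(each job has exactly one predecessor arc)} \\
      \sum_{j} y_{jkp} &\le 1 && \text{(each position holds at most one job piece)}
    \end{align}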
APA, Harvard, Vancouver, ISO, and other styles
33

Chu, Liang-Chi, and 朱良琪. "Minimizing total tardiness on uniform parallel machine with job arrival and incompatible job families." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/52814737125486202766.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Industrial and Systems Engineering
101
We consider the problem of scheduling n jobs belonging to f families on m uniform parallel machines. Every job has an arrival time and belongs to exactly one family, and a setup time is incurred whenever a machine processes a job whose family differs from that of the preceding job. The objective is to minimize total tardiness. First, we use a heuristic to assign families to machines so as to reduce setup time; we then identify families with greater overlap and use a greedy algorithm to remove jobs from their machines. Each removed job is then tried at every position for reinsertion. Arrival times must be respected whenever jobs are assigned to machines. Problems with 30 machines, 50 families, 512 jobs, and processing times in [1,00] are solved in an average of 286.79 seconds.
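The reinsertion step can be sketched briefly: a removed job is tried at every position of every machine sequence and kept where total tardiness grows least. The flat family setup, the uniform-speed model, and the tiny instance below are simplified assumptions, not the thesis's exact procedure.

```python
SETUP = 2.0   # flat setup charged when consecutive jobs change family

def tardiness_on(seq, speed, jobs):
    """Total tardiness of one machine's sequence; jobs[j] = (family, p, r, d)."""
    t, last_fam, tard = 0.0, None, 0.0
    for j in seq:
        fam, p, r, d = jobs[j]
        if last_fam is not None and fam != last_fam:
            t += SETUP
        t = max(t, r) + p / speed     # uniform machines: time scaled by speed
        tard += max(0.0, t - d)
        last_fam = fam
    return tard

def reinsert(job, schedule, speeds, jobs):
    """Try `job` at every position on every machine and keep the cheapest."""
    best = None
    for m, seq in enumerate(schedule):
        before = tardiness_on(seq, speeds[m], jobs)
        for pos in range(len(seq) + 1):
            trial = seq[:pos] + [job] + seq[pos:]
            delta = tardiness_on(trial, speeds[m], jobs) - before
            if best is None or delta < best[0]:
                best = (delta, m, pos)
    _, m, pos = best
    schedule[m].insert(pos, job)

jobs = {0: ("F1", 3, 0, 5), 1: ("F1", 4, 0, 7), 2: ("F2", 2, 1, 4)}
schedule, speeds = [[0], [1]], [1.0, 1.5]
reinsert(2, schedule, speeds, jobs)
print(schedule)
```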
APA, Harvard, Vancouver, ISO, and other styles
34

Patterson, Jordan. "Jole: a library for dynamic job-level parallel workloads." Master's thesis, 2009. http://hdl.handle.net/10048/727.

Full text
Abstract:
Problems in scientific computing often consist of a workload of jobs with dependencies between them. Batch schedulers are job-oriented, and are not well-suited to executing these workloads with complex dependencies. We introduce Jole, a Python library created to run these workloads. Jole has three contributions that allow flexibility not possible with a batch scheduler. First, dynamic job execution allows control and monitoring of jobs as they are running. Second, dynamic workload specification allows the creation of workloads that can adjust their execution while running. Lastly, dynamic infrastructure aggregation allows workloads to take advantage of additional resources as they become available. We evaluate Jole using GAFolder, a protein structure prediction tool. We show that our contributions can be used to create GAFolder workloads that use less cluster resources, iterate on global protein structures, and take advantage of additional cluster resources to search more thoroughly.
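The sketch below is not Jole's API; it is only a minimal standard-library illustration of the dynamic job-release idea described here, launching each job as soon as its predecessors finish instead of handing a static DAG to a batch scheduler.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def run_workload(jobs, deps, workers=4):
    """jobs: name -> callable; deps: name -> set of prerequisite names."""
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(jobs):
            # dynamically release every job whose prerequisites are satisfied
            for name, fn in jobs.items():
                if name not in done and name not in running and deps[name] <= done:
                    running[name] = pool.submit(fn)
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in running.items() if f in finished]:
                running.pop(name).result()   # re-raise job failures here
                done.add(name)

jobs = {"prep": lambda: print("prep"),
        "run": lambda: print("run"),
        "report": lambda: print("report")}
deps = {"prep": set(), "run": {"prep"}, "report": {"run"}}
run_workload(jobs, deps)
```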
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Hsin-Chiang, and 黃信強. "A Genetic Algorithm for Job Scheduling on Parallel Machine." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/96291080791312524760.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Information Management
88
In order to make manufacturing scheduling solutions more realistic, we consider earliness/tardiness cost, machine idle-time cost, and machine setup cost when selecting sub-optimal solutions. Parallel-machine scheduling has been proven to be NP-hard, and several recent applications of genetic algorithms to such optimization problems have been shown to obtain better results than other algorithms. This research therefore investigates a new chromosome encoding and defines its evolution process in a genetic algorithm for solving parallel-machine scheduling problems under the factors above. We first propose the new chromosome encoding to cover the search space of the problem more completely. Second, we redesign the evolution process and fitness functions to reach sub-optimal solutions more rapidly. Finally, we design and implement a working scheduling system and compare its results with those of previous studies. Analysis of the experimental results demonstrates that our algorithm has the following advantages: 1. the proposed fitness function and chromosomes perform better than traditional genetic algorithms; 2. the obtained solutions are more stable, in the sense that their deviations are smaller; 3. it considers more factors and is thus more suitable for real manufacturing cases.
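As a rough illustration of this kind of GA (far simpler than the chromosome design the thesis proposes), the toy below encodes a chromosome as a job-to-machine assignment, with fitness mixing earliness and tardiness costs and a flat setup delaying completions as a stand-in for the setup-cost term; the idle-time term is omitted. All operators, weights, and data are assumptions.

```python
import random

P = [4, 3, 5, 2, 6]       # processing times
D = [6, 5, 11, 4, 12]     # due dates
SETUP, W_TARDY, W_EARLY = 1.0, 10.0, 1.0
M, N = 2, len(P)

def cost(chrom):
    """chrom[j] = machine of job j; sequence = order of appearance."""
    total = 0.0
    for m in range(M):
        t, prev = 0.0, None
        for j in (j for j in range(N) if chrom[j] == m):
            if prev is not None:
                t += SETUP                    # setup delays later completions
            t += P[j]
            total += W_TARDY * max(0, t - D[j]) + W_EARLY * max(0, D[j] - t)
            prev = j
    return total

def evolve(pop=30, gens=200, pmut=0.2):
    population = [[random.randrange(M) for _ in range(N)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        elite = population[: pop // 2]        # truncation selection
        while len(elite) < pop:
            a, b = random.sample(elite[: pop // 2], 2)
            child = [random.choice(g) for g in zip(a, b)]   # uniform crossover
            if random.random() < pmut:
                child[random.randrange(N)] = random.randrange(M)  # mutation
            elite.append(child)
        population = elite
    return min(population, key=cost)

best = evolve()
print(best, cost(best))
```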
APA, Harvard, Vancouver, ISO, and other styles
36

Patterson, Jordan Dacey Lee. "Jole a library for dynamic job-level parallel workloads /." 2009. http://hdl.handle.net/10048/727.

Full text
Abstract:
Thesis (M. Sc.)--University of Alberta, 2009.
Title from PDF file main screen (viewed on Nov. 27, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta." Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Tai-Lung, and 陳泰龍. "Optimizing Communications and Job Scheduling in Heterogeneous Parallel Systems." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/70753115420465323988.

Full text
Abstract:
Doctoral dissertation
Chung Hua University
Ph.D. Program in Engineering Science
98
Job scheduling and broadcast strategies are important issues for improving system performance in heterogeneous systems. In grid computing, message broadcasting, job scheduling, resource management, and quality of service have always been the main challenges for high-performance computing. In heterogeneous environments, the design of job-allocation and message-forwarding strategies depends on the network architecture and the structure of the resources. In this study, contention-free broadcasting, task scheduling, job grouping with dynamic dispatching, and QoS-guided job scheduling are proposed. We focus on heterogeneous networks in different settings: irregular networks, the master-slave model, and grid computing. The main extensions of this study are the consideration of heterogeneous communication overheads and system load balance. One significant improvement of our approach is that system throughput can be maximized by decreasing computation and communication overhead; another advantage is that processor utilization can be increased with dynamic scheduling. To evaluate the proposed techniques, we implemented the algorithms and compared them with previous methods. The experimental results show that our techniques outperform other algorithms in terms of higher system throughput, minimum broadcast time, less processor idling, and higher processor utilization.
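One of the simplest strategies in this family can be sketched directly: dispatch each job to the heterogeneous processor that minimizes its estimated completion time, counting both processor speed and a communication (transfer) overhead. The cost model below is an assumption for illustration, not the dissertation's algorithms.

```python
def mct_dispatch(jobs, speeds, comm):
    """jobs: list of work amounts; speeds[i]: processor speed;
    comm[i]: fixed cost to ship a job to processor i."""
    ready = [0.0] * len(speeds)            # next free time per processor
    placement = []
    for w in jobs:
        # estimated completion = current load + transfer + scaled compute
        i = min(range(len(speeds)), key=lambda i: ready[i] + comm[i] + w / speeds[i])
        ready[i] += comm[i] + w / speeds[i]
        placement.append(i)
    return placement, max(ready)

placement, makespan = mct_dispatch([8, 3, 5, 9, 2], speeds=[1.0, 2.0], comm=[0.2, 0.5])
print(placement, makespan)
```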
APA, Harvard, Vancouver, ISO, and other styles
38

Pan, Guo-Cheng, and 潘國丞. "A Study of Multi-Objective Parallel-Machine Job Shop Rescheduling." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/78217015721347891195.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Industrial Engineering and Enterprise Information
94
In a dynamic production environment, unexpected events or disruptions occur frequently, making the production schedule infeasible and requiring it to be updated. This research addresses a job-shop parallel-machine rescheduling problem with multiple objectives. The rescheduling factors considered are machine breakdowns, shortages of materials, and rush orders. A rescheduling method that combines partial rescheduling and complete rescheduling is proposed: partial rescheduling updates the initial schedule while preserving it as much as possible, and complete rescheduling uses a hybrid genetic algorithm to regenerate a schedule of better quality. In addition, we present a hybrid genetic algorithm to find a robust schedule, i.e., one whose objective value remains good when variations occur. Because a manufacturing system is dynamic, a traditionally optimal schedule may be hard to implement due to disruptions or changes; thus, a robust or flexible schedule may be more valuable than an optimal one.
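A minimal sketch of the partial-rescheduling idea, under the simplifying assumptions that preempted work may resume after an outage and that the repair is a right-shift confined to the affected machine (the thesis's method is richer):

```python
def right_shift_repair(schedule, machine, down_from, down_to):
    """schedule[m] = list of (job, start, end); push operations on `machine`
    that overlap or follow the outage so they resume after down_to.
    Assumes a preemptive-resume model for interrupted operations."""
    repaired, t = [], down_to
    for job, start, end in sorted(schedule[machine], key=lambda op: op[1]):
        if end <= down_from:                       # untouched by the breakdown
            repaired.append((job, start, end))
        else:
            done = max(0, down_from - start)       # work finished before outage
            remaining = (end - start) - done
            begin = max(t, start)                  # never move an op earlier
            repaired.append((job, begin, begin + remaining))
            t = begin + remaining
    schedule[machine] = repaired
    return schedule

sched = {0: [("J1", 0, 4), ("J2", 4, 7)], 1: [("J3", 0, 5)]}
print(right_shift_repair(sched, machine=0, down_from=2, down_to=6))
```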
APA, Harvard, Vancouver, ISO, and other styles
39

Chiou, Jian-hao, and 邱建豪. "A Dynamic Worldwide Computing Platform on Job-Parallel Computing System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/04875343504822859207.

Full text
Abstract:
Master's thesis
National Central University
Institute of Computer Science and Information Engineering
97
A job-parallel grid system treats each program to be executed as a job and looks for available computing resources for it. The major advantages of a job-parallel grid system are that (1) job execution can easily be handled by users, and (2) executable files can be submitted to the system without program re-engineering. The disadvantage is that its programmability is not good enough to support advanced communication primitives. By contrast, a worldwide computing grid system uses the power of the internet and virtual machine technology to integrate heterogeneous computing resources as a whole; it provides high-level communication primitives for better programmability, supports numerous coordination approaches for distributed computing, and enables dynamic system reconfiguration for dynamic load balancing. In this paper, we propose a novel, dynamic worldwide computing platform that operates on a task-parallel computing system. The proposed platform uses Condor, a task-parallel computing system, as its fundamental infrastructure, and submits the virtual machines of SALSA, a worldwide computing system, together with the SALSA applications, to Condor for execution. The platform is more flexible and more manageable, and runs as a complete worldwide computing platform, because it is effectively a system with two faces. To construct it, we first use Condor to build a Condor pool (a set of computing resources); next, we devise a mechanism to detect and manage the virtual machines in the Condor pool and the SALSA applications on those virtual machines; we then implement the interfaces needed to close the gap between users and the system. Our goal is to integrate the advantages of job-parallel computing and worldwide computing into a new grid computing platform.
APA, Harvard, Vancouver, ISO, and other styles
40

Chuang, Shan-ping, and 莊尚平. "Simultaneous Job Scheduling and Resource Allocation on Parallel Work Centers." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/87469084469396726792.

Full text
Abstract:
Doctoral dissertation
National Taiwan University of Science and Technology
Department of Industrial Management
96
This study addresses a job scheduling and resource allocation (JSRA) problem with distinct release dates and due dates, minimizing total tardiness in parallel work centers in a multi-processor environment. To solve the problem, this study proposes a hybrid genetic algorithm (HGA) with a release- and due-date-based decomposition heuristic. Five small-sized test problems are used to evaluate the performance of the HGA, a pure GA (PGA), and the optimum obtained using Lingo 7.0. The results show that the percentage deviations between the HGA and Lingo are below 15%, and that the HGA has smaller variance than the PGA. Three other large-sized test problems are used to evaluate the HGA against the PGA, and the computational results show that the HGA performs well on large problems. Additionally, comparing the results obtained with parameter values 0, 0.1, …, 0.5, values between 0.1 and 0.2 give better solution quality. Finally, this study proposes a decision-supporting model, integrating simulation, genetic algorithms, and decision-support tools, for solving the JSRA problem from a practical perspective.
APA, Harvard, Vancouver, ISO, and other styles
41

Lai, Yi-hsiang, and 賴以翔. "Parallel Machine Scheduling with Machine Availability, Eligibility and Job Incompatible Constraint." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/55577676941553418505.

Full text
Abstract:
Master's thesis
National Central University
Institute of Industrial Management
95
In this paper we consider the problem of scheduling n non-preemptive jobs on m identical machines with machine availability, eligibility, and incompatible-job constraints, minimizing the maximum lateness. Each machine is not continuously available, and each job is only allowed to be processed on specific machines. Each job has to be processed in an availability interval with the same or a higher service level. Some jobs belong to families, and only jobs from different families may be processed in the same availability interval. We propose a branch-and-bound algorithm to find the optimal solution. First, we propose an algorithm based on the Most Remaining Jobs Family First/Earliest Due Date First (MJF/EDD) rule to find an upper bound. Then we use a network-flow technique to model the preemptive version of the scheduling problem as a series of maximum-flow problems. We propose an algorithm, extending the work of Lin (2006), that combines the network-flow technique with a binary search to find an optimal solution for the preemptive problem, and we use the result as a lower bound. Finally, we use four dominance rules to increase the efficiency of the branch-and-bound algorithm. Computational analysis shows that the proposed elimination rules are effective and that a very low percentage of nodes is generated by the branch-and-bound algorithm. Among the elimination rules, the percentage of nodes eliminated by the incompatible-job restriction increases as the probability of generating jobs with a family type increases or as the number of families decreases. Our algorithm obtains optimal solutions for problems with up to 15 jobs, 7 machines, and 3 families.
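The network-flow lower-bounding idea can be sketched compactly: with preemption allowed, all jobs fit within the availability intervals iff a maximum flow saturates every job arc. The networkx dependency and the tiny instance are assumptions, and the due-date trimming of intervals used in the actual lower bound is omitted.

```python
import networkx as nx

def feasible(p, eligible, intervals):
    """p[j]: processing time; eligible[j]: allowed machines;
    intervals[m]: list of (start, end) availability windows of machine m."""
    G = nx.DiGraph()
    for j, pj in p.items():
        G.add_edge("s", j, capacity=pj)
        for m in eligible[j]:
            for k, (a, b) in enumerate(intervals[m]):
                G.add_edge(j, (m, k), capacity=b - a)   # at most the window
    for m, wins in intervals.items():
        for k, (a, b) in enumerate(wins):
            G.add_edge((m, k), "t", capacity=b - a)
    flow, _ = nx.maximum_flow(G, "s", "t")
    return flow == sum(p.values())          # every job arc saturated

p = {"J1": 3, "J2": 4}
eligible = {"J1": ["M1"], "J2": ["M1", "M2"]}
intervals = {"M1": [(0, 5)], "M2": [(0, 3)]}
print(feasible(p, eligible, intervals))
```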
APA, Harvard, Vancouver, ISO, and other styles
42

Wei, Wang Chen, and 王甄薇. "On multi-stage parallel-machine job shop scheduling with due windows." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/13681256911560614536.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Graduate Institute of Management
96
Lowering cost is every manufacturer's most important problem. Incorporating due windows into scheduling can reduce not only inventory cost and wasted space but also idle time; in addition, increasing the number of machines can enlarge capacity and achieve economies of scale. This study considers multi-stage parallel-machine job-shop scheduling with due windows and aims to obtain solutions close to the optimum, minimizing the total penalty due to earliness and tardiness. We first construct a mathematical model for the problem and then test it on simulated data. Solutions obtained by an ant colony optimization (ACO) program written in C are compared with the best solutions obtained by LINGO 8.0 to assess efficiency and robustness. The results indicate that ACO performs well on the tests, and its solving time is less than that of LINGO 8.0; ACO thus offers both the effectiveness and the efficiency valued in business management.
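For illustration, here is a toy ACO of the same flavor on a single machine with due windows (the multi-stage parallel-machine shop of the thesis is far richer): ants build sequences with probability proportional to pheromone times a window-urgency heuristic, and the best sequence found reinforces the trail. All parameters and data are assumptions.

```python
import random

P = [3, 2, 4, 2]                              # processing times
WINDOW = [(3, 6), (2, 4), (8, 11), (4, 7)]    # (earliest, latest) due window
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.1, 10, 50
N = len(P)
tau = [[1.0] * N for _ in range(N)]           # pheromone: position x job

def penalty(seq):
    t, pen = 0, 0.0
    for j in seq:
        t += P[j]
        e, l = WINDOW[j]
        pen += max(0, e - t) + max(0, t - l)  # earliness + tardiness penalty
    return pen

def construct():
    seq, left = [], set(range(N))
    while left:
        cands = list(left)
        # desirability: pheromone weighted by urgency of the window's deadline
        w = [tau[len(seq)][j] ** ALPHA * (1.0 / (1 + WINDOW[j][1])) ** BETA
             for j in cands]
        nxt = random.choices(cands, weights=w)[0]
        seq.append(nxt)
        left.remove(nxt)
    return seq

best, best_pen = None, float("inf")
for _ in range(ITERS):
    for seq in (construct() for _ in range(ANTS)):
        pen = penalty(seq)
        if pen < best_pen:
            best, best_pen = seq, pen
    tau = [[v * (1 - RHO) for v in row] for row in tau]     # evaporation
    for pos, j in enumerate(best):                          # reinforce best
        tau[pos][j] += 1.0 / (1 + best_pen)
print(best, best_pen)
```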
APA, Harvard, Vancouver, ISO, and other styles
43

Hsu, Hao-chun, and 徐豪駿. "Parallel Machine Scheduling with Machine Availability,Eligibility and Job Incompatible Constraints." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/24082829472579173289.

Full text
Abstract:
Master's thesis
National Central University
Institute of Industrial Management
100
In this paper we consider the problem of scheduling n preemptive jobs on m identical machines with machine availability, eligibility, and incompatible-job constraints, minimizing the makespan. Each machine is not continuously available for processing, and each job is only allowed to be processed on specific machines. Within the same availability interval, each job belongs to a family and only jobs from different families can be processed. First, we use a network-flow technique to model the preemptive scheduling problem as a series of maximum-flow problems. Second, we propose an algorithm based on the Family With Most Job Remaining Processing Time First/Earliest Release Date First (FRPF/ERD) rule to find an upper bound. Third, we use a branch-and-bound algorithm to handle the incompatibility constraint, with six dominance rules to increase its efficiency. Finally, we modify an algorithm proposed by Liao and Sheen (2008) that combines the network-flow technique with a binary search to find an optimal solution. Computational analysis shows that the elimination rules are effective and that the percentage of nodes generated by the branch-and-bound algorithm is low. More than half of the eliminated nodes are attributed to Proposition 5, which becomes more effective as n and m increase.
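The binary-search component is worth a short sketch: given any monotone feasibility oracle "can all preemptive jobs finish by time C?" (such as a max-flow test like the one sketched two entries above), bisection finds the minimum makespan. The oracle below is a stand-in assumption.

```python
def min_makespan(feasible_by, lo, hi, eps=1e-6):
    """feasible_by(C) -> bool, monotone in C; returns the smallest feasible C."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if feasible_by(mid) else (mid, hi)
    return hi

# stand-in oracle: total work 12 on 2 machines, so C* = 6 with preemption
print(min_makespan(lambda C: 2 * C >= 12, lo=0.0, hi=12.0))
```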
APA, Harvard, Vancouver, ISO, and other styles
44

Chan, Pei-Yun, and 詹珮芸. "Parallel Machine Scheduling with Minimal and Maximal Time Lags for Reentrant Job." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/87573722329193258185.

Full text
Abstract:
Master's thesis
National Central University
Institute of Industrial Management
95
In this paper we study the problem of scheduling recirculating jobs on identical parallel machines with eligibility and availability restrictions, minimizing the makespan: each job may visit a machine more than once and is only allowed to be processed on specific machines, and each machine is not always available for processing. In addition, minimal and maximal time-lag constraints on the starting times of reentrant jobs are considered. We develop two branch-and-bound algorithms to solve the scheduling problem optimally: one deals with job allocation under machine eligibility, and the other schedules the sequence on each allocated machine under the time-lag constraints. We introduce a dummy job for each machine-unavailability interval, and the first branch-and-bound algorithm allocates jobs using a depth-first strategy. The scheduling problem corresponding to each leaf node is transformed into m single-machine problems, each represented by a disjunctive graph. Finally, we use the second branch-and-bound algorithm, adopted from Sheen and Lao (2007), to obtain the optimal solution of each single-machine problem and hence the makespan. Computational analysis shows that the proposed elimination rules are effective, eliminating more than 80% of nodes when the total number of operations exceeds 5 and machine availability is generated with an 80% parameter. Our algorithm obtains optimal solutions for problems with up to 10 operations and 3 machines.
APA, Harvard, Vancouver, ISO, and other styles
45

CHAO, CHIEN-WEN, and 趙謙文. "Minimizing Sum of Job Completion Times on Parallel Machines with Machine Restrictions." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25ywxu.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
95
The thesis studies two parallel-machine problems minimizing the sum of job completion times. The first is to schedule jobs on two identical parallel machines when one machine is available only in a specified time period. We propose an optimal branch-and-bound algorithm employing three powerful elements: an upper-bound algorithm, a lower-bound algorithm, and a fathoming condition. The algorithm was tested on problems of various sizes and parameters, and the results show it is quite efficient on all test problems; in particular, the total computation time for the hardest problem is less than 0.1 second over a set of 100 instances. An important finding is that the upper-bound algorithm actually finds optimal solutions to quite a large number of problems. The second problem classifies machines and jobs into two levels, high and low: a high-level machine can process all jobs, while a low-level machine can process only low-level jobs. This is a special case of the parallel-machine problem with machine-eligibility restrictions, which can be formulated as a weighted bipartite matching problem and solved by the famous Hungarian method. By exploiting the special structure of the problem, we develop a streamlined algorithm that achieves dramatic computational savings; computational experiments show that it outperforms the Hungarian method.
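The bipartite-matching view admits a short sketch: a job placed k-th from last on machine i delays k completions by p(i, j), so assigning jobs to (machine, position) slots at cost k * p(i, j) minimizes the sum of completion times. This is the generic formulation, not the thesis's streamlined algorithm; SciPy's Hungarian-style solver and the data are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

p = np.array([[3, 5],      # p[j][i]: processing time of job j on machine i
              [4, 2],
              [6, 4],
              [2, 3]])
n, m = p.shape
slots = [(i, k) for i in range(m) for k in range(1, n + 1)]  # k = from-last
cost = np.array([[k * p[j, i] for (i, k) in slots] for j in range(n)])

rows, cols = linear_sum_assignment(cost)      # optimal job -> slot matching
print("min total completion time:", cost[rows, cols].sum())
for j, c in zip(rows, cols):
    print(f"job {j} -> machine {slots[c][0]}, position {slots[c][1]} from last")
```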
APA, Harvard, Vancouver, ISO, and other styles
46

Su, Chi-shiang, and 蘇啟祥. "Two parallel machines scheduling problems with job delivery coordination and availability constraint." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/21196232035287077834.

Full text
Abstract:
Doctoral dissertation
National Taiwan University of Science and Technology
Department of Industrial Management
99
In a rapidly changing environment, competition among enterprises tends to occur between supply chain systems rather than between individual companies. Traditional scheduling models, which only address the sequence of jobs at the production stage under some criterion, are no longer suitable and should be extended to cover the distribution stage after production; emphasizing coordination and integration among the members of a supply chain has become one of the vital strategies for modern manufacturers seeking competitive advantage. This paper studies two NP-hard two-stage scheduling problems in which jobs are processed by two identical parallel machines and then delivered to a customer, with the objective of minimizing the makespan. First, it focuses on a class of two-identical-parallel-machine problems in which jobs of different sizes must be delivered to a single customer by one vehicle after their production stages. The problem was first proposed by Chang and Lee (2004), who presented a polynomial-time algorithm with a worst-case ratio of 2; Zhong et al. later presented an improved algorithm, reducing the worst-case ratio to 5/3. We propose a new algorithm that achieves a best-possible worst-case ratio of 63/40, except for two particular cases in which C_MH3/C* ≤ 8/5. Second, this paper studies the variant in which one of the machines has an unavailable interval due to preventive maintenance. This problem was first proposed by Wang and Cheng (2007), who gave an algorithm H2 with a worst-case ratio of 5/3; we propose a new algorithm that achieves the best-possible worst-case ratio of 3/2.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Kuan Ming, and 陳冠名. "An Application of Parallel Metaheuristic Methods for Solving Job Shop Scheduling Problem." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/54019754291739772494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Jui-Ling, and 楊瑞玲. "Scheduling Independent Jobs on Unrelated Parallel Machines." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/28689606830728299653.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Industrial Engineering
84
In this paper, we propose several heuristics to schedule independent jobs in unrelated parallel-machine systems, limiting our scope to the static, deterministic problem. The system we investigate consists of several types of machines; machines of different types are unrelated, while machines of the same type are identical. Both the total-lateness and the total-tardiness problems are formulated as mathematical programming models. Nevertheless, an optimal solution may not be available, and in practice a near-optimal solution may be all that is required. The proposed heuristics have two phases: the first allocates jobs to machines or subsystems, and the second sequences the jobs on each machine or in each subsystem. The first phase consists of two stages: a constructing stage, which builds an initial job list used as a reference for allocation, and an allocating stage, which assigns jobs to machines or subsystems by balancing machine workload or by a criterion that takes the performance measure directly into account. The second phase is the sequencing stage, which determines the final processing order of jobs on each machine: for the total-lateness problem the SPT rule is adopted, and for the total-tardiness problem the TPI rule is adopted.
APA, Harvard, Vancouver, ISO, and other styles
49

WANG, YUE-FANG, and 王月芳. "The effect of job splitting on flow time forecasts for parallel machine scheduling." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/cub4b6.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Master's Program, Department of Logistics Management
105
To improve manufacturing services in the face of global competition, companies often use parallel machines to increase production capacity. However, pursuing utilization often changes the schedule and can degrade the predictive accuracy of flow times. The purpose of this study is therefore to explore parallel-machine operation with setups between different job types and flexible job-splitting rules, seeking a combination of job splitting and scheduling that respects due dates while keeping the production process stable and easy to predict. The study constructs simulation experiments in the simulation software ARENA and collects data from the simulation runs; it then divides the prediction target into processing time and waiting time, builds flow-time prediction models with the statistical software Minitab, and tests the models to assess how changes in the control variables affect due-date and flow-time performance. The experimental results show that ATCS outperforms FIFO and EDD in terms of both delivery and forecast performance, and that combining the ATCS rule with splitting large jobs into two achieves the best of both worlds. In addition, changing the K1 and K2 parameters of the ATCS rule affects delivery and forecast performance: large values of K1 reduce the tardy ratio but hurt the accuracy of the prediction model, while increasing K2 raises both setup frequency and forecast accuracy.
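The ATCS index the study tunes can be sketched from its standard form in the literature (apparent tardiness cost with setups): the next job maximizes a weight-over-processing-time ratio discounted by slack, scaled by k1, and by setup, scaled by k2, which matches the reported sensitivity to K1 and K2. All data values below are assumptions.

```python
import math

def atcs_index(w, p, d, t, setup, p_bar, s_bar, k1, k2):
    """Apparent tardiness cost with setups for one candidate job at time t."""
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k1 * p_bar)) * math.exp(-setup / (k2 * s_bar))

# pick the next of two candidate jobs on a machine that is free at t = 5
jobs = [dict(w=1.0, p=4.0, d=12.0, setup=2.0),   # different family: setup
        dict(w=1.0, p=3.0, d=10.0, setup=0.0)]   # same family: no setup
p_bar = sum(j["p"] for j in jobs) / len(jobs)    # average processing time
s_bar = 1.0                                      # average setup time (assumed)
best = max(jobs, key=lambda j: atcs_index(j["w"], j["p"], j["d"], 5.0,
                                          j["setup"], p_bar, s_bar, k1=2.0, k2=1.0))
print(best)
```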
APA, Harvard, Vancouver, ISO, and other styles
50

Hsueh, Po-Jen, and 薛博仁. "A Research of Multi-Objective Parallel-Machine Drum-Buffer-Rope Job-Shop Scheduling." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/86810114596356011678.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Industrial Engineering and Enterprise Information
102
Drum-Buffer-Rope (DBR) is the application of the Theory of Constraints (TOC) to the field of scheduling. Its main spirit is to exploit bottleneck or constraint resources so as to maximize overall throughput, and its main device is buffer management. This study is based on job-shop parallel-machine scheduling and proposes using genetic algorithms (GA) combined with Drum-Buffer-Rope, prioritizing both quantitative and qualitative bottlenecks, to produce a complete schedule plan serving multiple enterprise business-strategy goals. In reality, companies pursue not just a single goal but several at once, and are typically obstructed by different kinds of bottlenecks, which makes scheduling very difficult. Most goals concern quantitative bottlenecks, for example manufacturing lead time and due dates; qualitative bottlenecks, for example an order's potential profit and previous transaction history, are also important but are usually not considered. This study focuses on job-shop parallel-machine scheduling, develops a DBR method, and uses a GA to maximize the use of finitely constrained resources. Buffer lengths are adjusted largely from past experience to identify and protect the bottleneck. Multi-objective strategies, both quantitative and qualitative, are considered, with schedule planners assigning appropriate weights, and a multi-objective fitness function is constructed accordingly. The results show that the GA combined with DBR outperforms general scheduling methods.
APA, Harvard, Vancouver, ISO, and other styles