Dissertations on the topic "Parallel job"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Parallel job".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Sabin, Gerald M. „Unfairness in parallel job scheduling“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1164826017.
Islam, Mohammad Kamrul. „QoS In Parallel Job Scheduling“. The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218566682.
Wing, A. J. „Parallel simulation of PCB job shops“. Thesis, University of East Anglia, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359342.
Lynch, Gerard. „Parallel job scheduling on heterogeneous networks of multiprocessor workstations“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ45952.pdf.
Ali, Syed Zeeshan. „An investigation into parallel job scheduling using service level agreements“. Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/an-investigation-into-parallel-job-scheduling-using-service-level-agreements(f4685321-374e-41c4-86da-d07f09ea4bac).html.
Li, Jianqing. „A parallel approach for solving a multiple machine job sequencing problem“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60235.pdf.
Zhou, Huajun. „The CON Job scheduling problem on a single and parallel machines /“. Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/zhouh/huajunzhou.pdf.
Vélez, Gallego Mario César. „Algorithms for Scheduling Parallel Batch Processing Machines with Non-Identical Job Ready Times“. FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/276.
Hulett, Maria. „Analytical Approximations to Predict Performance Measures of Manufacturing Systems with Job Failures and Parallel Processing“. FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/167.
Khan, Mukhtaj. „Hadoop performance modeling and job optimization for big data analytics“. Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11078.
Utrera, Iglesias Gladys Miriam. „"Virtual malleability" applied to MPI jobs to improve their execution in a multiprogrammed environment“. Doctoral thesis, Universitat Politècnica de Catalunya, 2007. http://hdl.handle.net/10803/6013.
Der volle Inhalt der QuelleThe objective was to obtain the best performance in response time in multiprogrammed multiprocessors systems using batch systems, assuming all the jobs have the same priority.
To achieve that purpose, the benefits of supporting malleability on MPI jobs to reduce fragmentation and consequently improve the performance of the system were studied.
The contributions made in this work can be summarized as follows:
· Virtual malleability: a mechanism in which a job is assigned a dynamic processor partition smaller than its number of processes. The partition size is modified at runtime according to external requirements, such as the system load, by varying the multiprogramming level, making the job contend for resources with itself.
In addition, a mechanism that decides at runtime whether to apply local or global process queues to an application, depending on the load balance among its processes.
· A job scheduling policy that takes decisions such as how many processes to start with and the maximum multiprogramming degree, based on the type and number of applications running and queued. Moreover, as soon as a job finishes execution and there are queued jobs, this algorithm analyzes whether it is better to start executing another job immediately or to wait until more resources become available.
· A new alternative to backfilling strategies for the problem of the execution window expiring. Virtual malleability is applied to the backfilled job, reducing its partition size without aborting or suspending it as in traditional backfilling.
The evaluation of this thesis has been done using a practical approach. All the proposals were implemented, modifying the three scheduling levels: queuing system, processor scheduler and runtime library.
The impact of the contributions was studied under several types of workloads, varying the machine utilization, the communication and balance degree of the applications, the multiprogramming level, and the job size.
Results showed that it is possible to offer malleability over MPI jobs.
An application obtained better performance when contending for resources with itself than with other applications, especially in workloads with high machine utilization. Load imbalance was also taken into account, and better performance was obtained by applying the right queue type to each application independently.
The job scheduling policy proposed exploited virtual malleability by choosing at the beginning of execution some parameters like the number of processes and maximum multiprogramming level. It performed well under bursty workloads with low to medium machine utilizations.
However, as the load increases, virtual malleability is no longer enough: when the machine is heavily loaded, jobs, once shrunk, are not able to expand again, so they are executed all the time with a partition smaller than the job size, which degrades performance. At this point the job scheduling policy therefore concentrates on moldability only.
Fragmentation was also alleviated by applying backfilling techniques to the job scheduling algorithm. Virtual malleability proved to be an interesting improvement for the window-expiring problem: backfilled jobs, even on a smaller partition, can continue execution, reducing the memory swapping generated by aborts and suspensions. In this way the queueing system is prevented from reinserting the backfilled job in the queue and re-executing it in the future.
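To make the mechanism described in this abstract concrete, the following minimal Python sketch (not taken from the thesis; the folding rule, names and numbers are invented assumptions) illustrates the idea of folding a job's fixed set of MPI processes onto a shrunken partition instead of aborting or suspending a backfilled job.

```python
# Illustrative sketch only: "virtual malleability" as described in the abstract,
# folding a job's fixed set of processes onto a smaller processor partition
# instead of suspending or aborting it. All names and numbers are hypothetical.

def fold_processes(num_processes, partition_size):
    """Map each process rank to a processor in a partition that may be
    smaller than the number of processes (round-robin folding)."""
    return {rank: rank % partition_size for rank in range(num_processes)}

def shrink_backfilled_job(job, reserved_job_start, now, free_cpus):
    """When a backfilled job would collide with a reservation, shrink its
    partition (virtual malleability) rather than aborting it."""
    if now >= reserved_job_start and job["partition"] > free_cpus:
        job["partition"] = max(1, free_cpus)          # shrink, never abort
        job["placement"] = fold_processes(job["processes"], job["partition"])
    return job

job = {"processes": 8, "partition": 8, "placement": fold_processes(8, 8)}
print(shrink_backfilled_job(job, reserved_job_start=100, now=120, free_cpus=3))
```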
Lakkimsetti, Praveen Kumar. „A framework for automatic optimization of MapReduce programs based on job parameter configurations“. Kansas State University, 2011. http://hdl.handle.net/2097/12011.
Department of Computing and Information Sciences
Mitchell L. Neilsen
Recently, cost-effective and timely processing of large datasets has been playing an important role in the success of many enterprises and the scientific computing community. Two promising trends ensure that applications will be able to deal with ever increasing data volumes: first, the emergence of cloud computing, which provides transparent access to a large number of processing, storage and networking resources; and second, the development of the MapReduce programming model, which provides a high-level abstraction for data-intensive computing. MapReduce has been widely used for large-scale data analysis in the Cloud [5]. The system is well recognized for its elastic scalability and fine-grained fault tolerance. However, even to run a single program in a MapReduce framework, a number of tuning parameters have to be set by users or system administrators to increase the efficiency of the program. Users often run into performance problems because they are unaware of how to set these parameters, or because they don't even know that these parameters exist. With MapReduce being a relatively new technology, it is not easy to find qualified administrators [4]. The major objective of this project is to provide a framework that optimizes MapReduce programs that run on large datasets. This is done by executing the MapReduce program on a part of the dataset using stored parameter combinations, configuring the program with the most efficient combination, and then executing the modified program over different datasets. We know that many MapReduce programs are used over and over again in applications like daily weather analysis, log analysis, daily report generation, etc. So, once the parameter combination is set, it can be used on a number of datasets efficiently. This feature can go a long way towards improving the productivity of users who lack the skills to optimize programs themselves due to a lack of familiarity with MapReduce or with the data being processed.
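As a rough illustration of the tuning loop this abstract describes, the sketch below runs a job on a data sample under each stored parameter combination and keeps the fastest one. It is an assumption-laden simplification, not the project's code: the stored combinations, the Hadoop property names and the run_job launcher are placeholders.

```python
# Minimal sketch of the idea described above: try a library of stored Hadoop
# parameter combinations on a small sample of the input and keep the fastest.
# run_job() is a hypothetical stand-in for launching the MapReduce program.

import time

STORED_COMBINATIONS = [
    {"mapreduce.job.reduces": 4,  "mapreduce.task.io.sort.mb": 100},
    {"mapreduce.job.reduces": 8,  "mapreduce.task.io.sort.mb": 200},
    {"mapreduce.job.reduces": 16, "mapreduce.task.io.sort.mb": 400},
]

def pick_best_configuration(run_job, sample_input):
    """Run the job on a data sample under each stored parameter combination
    and return the combination with the shortest observed runtime."""
    best, best_time = None, float("inf")
    for params in STORED_COMBINATIONS:
        start = time.time()
        run_job(sample_input, params)      # hypothetical job launcher
        elapsed = time.time() - start
        if elapsed < best_time:
            best, best_time = params, elapsed
    return best

def fake_run_job(sample, params):          # toy stand-in so the sketch runs end to end
    time.sleep(0.005 / params["mapreduce.job.reduces"])

print(pick_best_configuration(fake_run_job, sample_input="part-00000"))
```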
Ahmed, Zubair. „Developing an efficient scheduling template of a chemotherapy treatment unit: simulation and optimization approach“. Australasian Medical Journal, 2011. http://hdl.handle.net/1993/5084.
Schlagkamp, Stephan [Verfasser], Uwe [Akademischer Betreuer] Schwiegelshohn und Andrei [Gutachter] Tchernykh. „User-aware performance evaluation and optimization of parallel job schedulers / Stephan Schlagkamp ; Gutachter: Andrei Tchernykh ; Betreuer: Uwe Schwiegelshohn“. Dortmund : Universitätsbibliothek Dortmund, 2017. http://d-nb.info/1139892584/34.
Calmels, Dorothea [Verfasser], und Hans [Akademischer Betreuer] Ziegler. „Job Sequencing and Tool Switching Problems with a Generalisation to Non-Identical Parallel Machines / Dorothea Calmels ; Betreuer: Hans Ziegler“. Passau : Universität Passau, 2020. http://d-nb.info/1218780703/34.
Der volle Inhalt der QuelleGama, Pinheiro Vinicius. „The management of multiple submissions in parallel systems : the fair scheduling approach“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM042/document.
We study the problem of scheduling in parallel and distributed systems with multiple users. New platforms for parallel and distributed computing offer very large computing power, which makes it possible to contemplate the resolution of complex interactive applications. Nowadays, it is still difficult to use this power efficiently due to the lack of resource management tools. The work done in this thesis lies in this context: to analyse and develop efficient algorithms for managing computing resources shared among multiple users. We analyze scenarios with many submissions issued by multiple users over time. These submissions contain one or more jobs, and the set of submissions is organized in successive campaigns. No job from a campaign can start until all the jobs from the previous campaign are completed. Each user is interested in minimizing the sum of the flow times of the campaigns. In the first part of this work, we define a theoretical model for Campaign Scheduling under restrictive assumptions and we show that, in the general case, it is NP-hard. For the single-user case, we show that a ρ-approximation scheduling algorithm for the (classic) parallel job scheduling problem is also a ρ-approximation for the Campaign Scheduling problem. For the general case with k users, we establish a fairness criterion inspired by time sharing. Then, we propose FairCamp, a scheduling algorithm which uses campaign deadlines to achieve fairness among users between consecutive campaigns. We prove that FairCamp increases the flow time of each user by a factor of at most kρ compared with a machine dedicated to the user. We also prove that FairCamp is a ρ-approximation algorithm for the maximum stretch. We compare FairCamp to First-Come-First-Served (FCFS) by simulation. We show that, compared with FCFS, FairCamp reduces the maximum stretch by up to 3.4 times. The difference is significant in systems used by many (k > 5) users. Our results show that, rather than just individual, independent jobs, campaigns of jobs can be handled by the scheduler efficiently and fairly.
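A minimal sketch of the campaign constraint described above, under the assumption of identical machines and a single user submitting all campaigns at time 0; the data and the simple list-scheduling rule are invented for illustration and are not FairCamp itself.

```python
# Illustrative sketch of the Campaign Scheduling constraint: a campaign's jobs
# may only start once every job of the previous campaign has completed.
# List scheduling on m identical machines; all numbers are made up.

import heapq

def schedule_campaigns(campaigns, m, release=0.0):
    """campaigns: list of campaigns, each a list of job processing times,
    all submitted at `release` by one user. Returns each campaign's flow time
    (completion time minus submission time)."""
    flow_times = []
    previous_done = release
    for jobs in campaigns:
        machines = [previous_done] * m        # campaign i+1 cannot start before campaign i is done
        heapq.heapify(machines)
        for p in sorted(jobs, reverse=True):  # simple longest-processing-time list scheduling
            start = heapq.heappop(machines)
            heapq.heappush(machines, start + p)
        campaign_done = max(machines)
        flow_times.append(campaign_done - release)
        previous_done = campaign_done
    return flow_times

print(schedule_campaigns([[3, 5, 2], [4, 4], [6]], m=2))
```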
Etinski, Maja. „DVFS power management in HPC systems“. Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/96192.
The performance gains of high-performance computing systems have been accompanied by an even greater increase in energy consumption. The power drawn by current supercomputers implies very high operating costs, with implications not only at the economic level but also for the environment. Given the importance of the problem, significant research effort has recently been devoted to the efficient management of the energy consumed by supercomputing systems. Since the CPU accounts for a large percentage of a system's total consumption, our work focuses on reducing and efficiently managing the energy consumed by the CPU. Specifically, this thesis studies the feasibility of doing so through Dynamic Voltage and Frequency Scaling (DVFS), a technique widely used to reduce CPU energy consumption. However, because it lowers the operating frequency, this technique can degrade the performance of the running applications. Since the context of this thesis is high-performance computing, minimizing the impact on performance is one of our objectives. In this context, however, the performance of a job is determined by two factors, execution time and wait time, so both components have to be considered. Supercomputing systems are usually managed by queueing systems: depending on the policy applied and the state of the system, jobs wait a shorter or longer time before being executed. Given the characteristics of the target systems of this thesis, we consider the job scheduler to be the best system component in which to include energy management, since it is the only point with a global view of the whole system. In this thesis we propose a set of scheduling policies that treat energy consumption as one more resource: they decide which job to run, the number of CPUs assigned and the list of CPUs (and nodes), and also the frequency at which these CPUs will run. These policies target two objectives: reducing the total energy consumed by a set of jobs, and controlling the instantaneous power draw of the system to avoid exceeding the capacity of centres with a (permanent or temporary) power limit. The first group of policies tries to reduce total consumption while minimizing the impact on performance. It contains a first policy that sets the CPU frequency as a function of system utilization, and a second that estimates the penalty the job about to start would suffer in order to decide whether or not to reduce its frequency. These policies showed acceptable results on lightly loaded systems, but significant performance losses when the system is heavily loaded. These losses did not appear as a significant increase in job execution times, but in the performance metrics that include job wait times (the usual metrics in this context).
The second group of policies, aimed at systems with limits on the power they can draw, showed great potential when using DVFS as the management mechanism. In this case, compared with a system without such management, they improved performance because they allow more jobs to run simultaneously, significantly reducing job wait times. In this second group we propose one policy based on the performance of the job about to be executed and a second that treats the allocation of all resources as a linear optimization problem. The latter policy is the most important contribution of the thesis, as it behaves well in all the evaluated cases. The final contribution of the thesis is a study of the potential of DVFS as an energy management technique in the near future, based on an analysis of application characteristics, of the CPU power reduction achievable with DVFS, and of the CPU's share of total system power. This study indicates that the energy-saving capability of DVFS will be limited, but that it still shows great potential for controlling power consumption.
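As an illustration of the first policy family described above (frequency chosen as a function of system utilization), here is a minimal sketch; the frequency levels, thresholds and the direction of the mapping are assumptions made for the example, not the thesis's actual settings.

```python
# Illustrative sketch (not the thesis implementation): pick a DVFS frequency
# for a newly started job as a function of current system utilization.
# P-states and thresholds are invented for the example.

FREQUENCIES_GHZ = [2.0, 1.6, 1.2, 0.8]   # highest to lowest P-state

def utilization(busy_cpus, total_cpus):
    return busy_cpus / float(total_cpus)

def frequency_for_job(busy_cpus, total_cpus):
    """Lightly loaded system -> run slower to save energy;
    heavily loaded system -> run at nominal frequency to protect wait times."""
    u = utilization(busy_cpus, total_cpus)
    if u < 0.25:
        return FREQUENCIES_GHZ[3]
    if u < 0.50:
        return FREQUENCIES_GHZ[2]
    if u < 0.75:
        return FREQUENCIES_GHZ[1]
    return FREQUENCIES_GHZ[0]

print(frequency_for_job(busy_cpus=40, total_cpus=128))   # -> 1.2
```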
Araújo, Felipe Francisco Bezerra. „Modelos e algoritmos para variações do problema de balanceamento de linhas de produção e designação de trabalhadores“. Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30112016-141117/.
The assembly line worker assignment and balancing problem is an extension of the simple assembly line balancing problem in which the task execution times depend on the assigned workers. This problem draws its practical motivation from assembly lines with workers with disabilities. In this doctoral thesis, we study two extensions of this problem: the first allows layouts with parallel workstations, while the second allows multiple parallel lines. These extensions were applied to the base problem as well as to the job rotation problem. We present mathematical formulations and exact and heuristic methods for all cases. Computational tests on instances from the literature and on new instances, together with a detailed analysis of the results, are presented.
Georgiou, Yiannis. „Contributions for resource and job management in high performance computing“. Grenoble, 2010. http://www.theses.fr/2010GRENM079.
High Performance Computing is characterized by the latest technological evolutions in computing architectures and by the increasing needs of applications for computing power. A particular middleware, called the Resource and Job Management System (RJMS), is responsible for delivering computing power to applications. The RJMS plays an important role in HPC since it occupies a strategic place in the whole software stack, standing between the hardware and application layers. However, the latest evolutions in those layers have added new levels of complexity to this middleware. Issues like scalability, management of topological constraints, energy efficiency and fault tolerance have to be considered, among others, in order to provide better system exploitation from both the system and the user point of view. This dissertation provides a state of the art of the fundamental concepts and research issues of Resource and Job Management Systems, together with a multi-level comparison (concepts, functionalities, performance) of several Resource and Job Management Systems in High Performance Computing. An important metric for evaluating the work of an RJMS on a platform is the observed system utilization. However, studies and logs of production platforms show that HPC systems in general suffer from significant un-utilization rates. Our study deals with these clusters' un-utilization periods by proposing methods to aggregate otherwise unused resources for the benefit of the system or the application. More particularly, this thesis explores RJMS-level mechanisms: 1) for increasing the valuable computation rates of jobs in the highly volatile environment of a lightweight grid context, 2) for improving system utilization with malleability techniques, and 3) for providing energy-efficient system management through the exploitation of idle computing machines. Experimentation and evaluation in this type of context involve important complexities due to the inter-dependency of the multiple parameters that have to be kept under control. In this thesis we have developed a methodology based upon real-scale controlled experimentation with the submission of synthetic or real workload traces.
Delgado, Javier. „Scheduling Medical Application Workloads on Virtualized Computing Systems“. FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/633.
Song, Bin 1970. „Scheduling adaptively parallel jobs“. Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50354.
Der volle Inhalt der QuelleRamachandra, Girish. „Scheduling Precedence Related Jobs on Identical Parallel Processors“. NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-20020121-185145.
The problem of concern to us in this thesis is the scheduling of precedence-related jobs non-preemptively on two identical parallel processors to minimize the sum of the weighted completion times. The problem is known to be NP-hard. We develop, in chapter 2, a binary integer program which is capable of solving only small size problems (no larger than 12 jobs) to optimality at the present time. We also present a linear programming (LP) model adopted from the literature to determine the lower bound on the optimum. This LP stands us in good stead when we perform the optimization via the Genetic Algorithm approach (which is the subject matter of chapter 3). We also present a dynamic programming formulation based on the approach used for solving the "weighted earliness-tardiness" problem. Although DP expands somewhat the size of the problems that can be solved to optimality, its computing burden becomes onerous for more than 25 jobs. In an attempt to solve larger, and more realistic, problems, a Genetic Algorithm (GA) is presented in chapter 3. The salient feature of the GA model is that the "initial population" of trial solutions is not all randomly generated but is constituted from a set of priority rules which are known to be "good" relaxations (in the sense of being "close" to the optimum) of the original problem. Also, the generation of infeasible solutions is avoided by the use of post-processing procedures after crossover and mutation operations. Computational results show that the GA approach arrives to within 20% of the lower bound (and hence of the optimum) in very few iterations.
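The seeding idea described in this abstract can be sketched as follows; this is an illustrative simplification that ignores the precedence constraints of the thesis, and the WSPT and largest-weight rules are simply examples of priority-rule seeds, not necessarily the ones used in the GA.

```python
# Sketch of seeding a GA's initial population with priority rules instead of
# fully random permutations. Jobs are (processing_time, weight) pairs; the
# objective evaluated is the sum of weighted completion times on 2 machines.

import random

def wspt_order(jobs):            # weighted shortest processing time first
    return sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])

def lw_order(jobs):              # largest weight first
    return sorted(range(len(jobs)), key=lambda j: -jobs[j][1])

def weighted_completion(sequence, jobs, machines=2):
    """List-schedule the sequence on identical machines; return sum of w_j * C_j."""
    loads = [0.0] * machines
    total = 0.0
    for j in sequence:
        i = loads.index(min(loads))
        loads[i] += jobs[j][0]
        total += jobs[j][1] * loads[i]
    return total

def initial_population(jobs, size=20):
    seeds = [wspt_order(jobs), lw_order(jobs)]   # rule-based seeds
    while len(seeds) < size:                      # fill the rest randomly
        perm = list(range(len(jobs)))
        random.shuffle(perm)
        seeds.append(perm)
    return seeds

jobs = [(3, 1), (2, 4), (7, 2), (4, 3), (1, 1)]
pop = initial_population(jobs)
print(min(weighted_completion(s, jobs) for s in pop))
```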
Sen, Siddhartha 1981. „Dynamic processor allocation for adaptively parallel work-stealing jobs“. Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/33355.
Includes bibliographical references (p. 77-82).
TCP's burstiness is usually regarded as harmful, or at best, inconvenient. Instead, this thesis suggests a new perspective and examines whether TCP's burstiness is useful for certain applications. It claims that burstiness can be harnessed to insulate traffic from packet reordering caused by route change. We introduce the use of flowlets, a new abstraction for a burst of packets from a particular flow followed by an idle interval. We apply flowlets to the routing of traffic along multiple paths and develop a scheme using flowlet-switching to split traffic across multiple parallel paths. Flowlet switching is an ideal technique for load balancing traffic across multiple paths as it achieves the accuracy of packet-switching and the robustness to packet reordering of flow-switching. This research evaluates the accuracy, simplicity, overhead and robustness to reordering flowlet switching entails. Using a combination of trace analysis and network simulation, we demonstrate the feasibility of implementing flowlet-based switching.
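A minimal sketch of flowlet switching as described in the abstract above; the paths, the timeout value and the random path choice are invented for the example and do not reproduce the thesis's scheme.

```python
# Minimal sketch of flowlet switching: packets of a flow that arrive after an
# idle gap longer than `flowlet_timeout` form a new flowlet and may be sent on
# a different parallel path; packets within a flowlet stick to one path, so
# reordering inside a flowlet cannot occur.

import random

class FlowletSwitch:
    def __init__(self, paths, flowlet_timeout):
        self.paths = paths
        self.timeout = flowlet_timeout
        self.state = {}                       # flow_id -> (last_seen, path)

    def route(self, flow_id, now):
        last_seen, path = self.state.get(flow_id, (None, None))
        if last_seen is None or now - last_seen > self.timeout:
            path = random.choice(self.paths)  # new flowlet: re-balance freely
        self.state[flow_id] = (now, path)
        return path

switch = FlowletSwitch(paths=["A", "B", "C"], flowlet_timeout=0.05)
for t in (0.00, 0.01, 0.02, 0.20, 0.21):      # the gap before t=0.20 starts a new flowlet
    print(t, switch.route("tcp:10.0.0.1:80", t))
```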
by Siddhartha Sen.
M.Eng.
Sobalvarro, Patrick G. „Demand-based coscheduling of parallel jobs on multiprogrammed multiprocessors“. Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43588.
Includes bibliographical references (p. 92-94).
by Patrick Gregory Sobalvarro.
Ph.D.
Speck, Jochen Matthias [Verfasser], und P. [Akademischer Betreuer] Sanders. „Theory and Engineering of Scheduling Parallel Jobs / Jochen Matthias Speck ; Betreuer: P. Sanders“. Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1162540745/34.
Hung, Hui-Chih. „Allocation of jobs and resources to work centers“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141849609.
Der volle Inhalt der QuelleCincioglu, Derya. „A Rescheduling Problem With Controllable Processing Times:trade-off Between Number Of Disrupted Jobs And Reschedulingcosts“. Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613950/index.pdf.
GHEDJATI-GUESSOUM, FATIMA, und Jean-Charles Pomerol. „Resolution par des heuristiques dynamiques et des algorithmes genetiques du probleme d'ordonnancement de type job-shop generalise (a machines non identiques en parallele et contraintes de precedence)“. Paris 6, 1994. http://www.theses.fr/1994PA066581.
Renaud-Goud, Paul. „Energy-aware scheduling : complexity and algorithms“. Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.
Lasmezas, Franck. „Vies et mises en récits (auto) biographiques : la collection "l'un et l'autre" de J.B Pontalis (1988 - 2013)“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE2091.
In France, biography was said to be close to its death at the end of the twentieth century, but J.-B. Pontalis and other editors helped promote its renewal. The biographical relationship is explored in L'un et l'autre through the lenses of fatherhood, motherhood and care. These topics are less things than words; they belong to literature and language, and to a peculiar language. L'un et l'autre reflects the literary work of J.-B. Pontalis: one had an impact on the other, and then the other way round. This is true for the human, animal and objectal topics which are spread throughout L'un et l'autre and may be called a variorum of otherness. The other being or the other thing might be a brother, a sister, a place or a time, the last one referring especially to the Holocaust. In L'un et l'autre, as well as in the work of Pontalis, the writer may find his alter ego in a painter or a musician; words can become pictures or tunes. Nevertheless, the best mirror for a writer, and a biographer is indeed a writer, is another writer. More and more, the books edited by J.-B. Pontalis evolved from biography of the other towards biography of the self, following the late work of Pontalis as a writer. During the late life and after the death of J.-B. Pontalis, an « autographic » way of writing was born, the widespread topics of which show that French literature performed and still performs a comeback towards its second self: life.
Yeh, Shih-Lun, und 葉士綸. „Parallel Machine Scheduling with Job Splitting“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/36832661505975830611.
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
97
This study focuses on the problem of scheduling jobs on identical parallel machines with the job-splitting property. In this problem, it is assumed that a job can be split into several sub-jobs, and these sub-jobs can be processed on different machines simultaneously. Each job has a processing time, a ready date and a due date. If a sub-job of one job is assigned after a sub-job of another job on a machine, a sequence-dependent setup time is incurred. A two-phase heuristic algorithm is proposed to minimize the total tardiness. In phase 1, five efficient heuristic methods are proposed. In phase 2, the sub-jobs of tardy jobs in the schedule provided by phase 1 are further split, and the split sub-jobs are rescheduled on the machines to reduce the total tardiness. Computer experiments show that the proposed algorithm outperforms a previously suggested algorithm.
Chu, Liang-Chi, und 朱良琪. „Minimizing total tardiness on uniform parallel machine with job arrival and incompatible job families“. Thesis, 2013. http://ndltd.ncl.edu.tw/handle/52814737125486202766.
Chung Yuan Christian University
Graduate Institute of Industrial and Systems Engineering
101
We consider the problem of scheduling n jobs belonging to f families on m uniform parallel machines. Every job has an arrival time and belongs to one family. A setup time must be added when a machine processes a job whose family differs from that of the preceding job. Our objective is to minimize total tardiness. First, we use a heuristic to assign families to machines in order to reduce setup time; then we identify families with greater overlap and use a greedy algorithm to remove jobs from machines. We then try to assign the removed jobs to every possible position, taking their arrival times into account when assigning jobs to a machine. A problem with 30 machines, 50 families, 512 jobs and processing times in [1,00] can be solved in an average execution time of 286.79 seconds.
Patterson, Jordan. „Jole: a library for dynamic job-level parallel workloads“. Master's thesis, 2009. http://hdl.handle.net/10048/727.
Huang, Hsin-Chiang, und 黃信強. „A Genetic Algorithm for Job Scheduling on Parallel Machine“. Thesis, 2000. http://ndltd.ncl.edu.tw/handle/96291080791312524760.
Der volle Inhalt der Quelle淡江大學
資訊管理學系
88
In order to make the solution of manufacturing scheduling more realistic, we consider earliness/tardiness cost, machine idle time cost, and machine setup cost in the selection of a sub-optimal solution. In the case of parallel-machine scheduling, the problem has been proven to be NP-hard. Several applications of Genetic Algorithms to such optimization problems have been proposed recently and have been shown to obtain better results than other algorithms. Thus, this research investigates a new type of chromosome and defines its evolution processes in a Genetic Algorithm for solving parallel-machine scheduling problems under the above-mentioned factors. In this paper, we first propose a new mode of chromosome to make the search space of the problem more complete. Secondly, we redesign the evolution processes and fitness functions so that a sub-optimal solution is obtained more rapidly. Finally, we design and implement a working scheduling system and compare its results with those of previous studies. The experimental results are analyzed and demonstrate that our algorithm has the following advantages: 1. The proposed fitness function and chromosomes perform better than traditional genetic algorithms. 2. The obtained solutions are more stable, in the sense that the deviations are smaller. 3. It considers more factors, and is more suitable for real manufacturing cases.
Patterson, Jordan Dacey Lee. „Jole a library for dynamic job-level parallel workloads /“. 2009. http://hdl.handle.net/10048/727.
Title from PDF file main screen (viewed on Nov. 27, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta." Includes bibliographical references.
Chen, Tai-Lung, und 陳泰龍. „Optimizing Communications and Job Scheduling in Heterogeneous Parallel Systems“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/70753115420465323988.
Chung Hua University
Ph.D. Program in Engineering Science
98
Job scheduling and broadcasting strategies are important issues for improving system performance in heterogeneous systems. In investigating the problems of grid technologies in high-performance computing, message broadcasting, job scheduling, resource management and quality of service have always been the main challenges. In such heterogeneous environments, the design of job-allocation and message-forwarding strategies depends on the network architecture and the structure of the resources. In this study, contention-free broadcasting, task scheduling, job grouping with dynamic dispatching, and QoS-guided job scheduling are proposed. We focus on heterogeneous networks in the different environments of irregular networks, the master-slave model and grid computing. The main extensions of this study are the consideration of heterogeneous communication overheads and of system load balance. One significant improvement of our approach is that system throughput can be maximized by decreasing the computation and communication overhead. Another advantage of the proposed method is that processor utilization can be increased with dynamic scheduling. To evaluate the performance of the proposed techniques, we implemented the proposed algorithms and compared them with previous methods. The experimental results show that our techniques outperform other algorithms in terms of higher system throughput, minimum broadcasting time, less processor idling, and higher processor utilization.
Pan, Guo-Cheng, und 潘國丞. „A Study of Multi-Objective Parallel-Machine Job Shop Rescheduling“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/78217015721347891195.
Tunghai University
Department of Industrial Engineering and Enterprise Information
94
In a dynamic production environment, unexpected events or disruptions occur frequently, making the production schedule infeasible and requiring it to be updated. This research addresses a job-shop parallel-machine rescheduling problem with multiple objectives. The rescheduling factors included in this research are machine breakdown, shortage of materials, and rush orders. A rescheduling method that uses both partial rescheduling and complete rescheduling is proposed. We use partial rescheduling to update the initial schedule while preserving it as much as possible. In complete rescheduling, we use a Hybrid Genetic Algorithm to regenerate a schedule of better quality. In addition to the rescheduling method, we present a Hybrid Genetic Algorithm to find a robust schedule (solution). A robust solution means that the value of the objective function remains high when some variations occur. Because a manufacturing system is dynamic, a traditionally optimal schedule may be hard to implement due to disruptions or changes; thus, a robust or flexible schedule may be more valuable than an optimal one.
Chiou, Jian-hao, und 邱建豪. „A Dynamic Worldwide Computing Paltform on Job-Parallel Computing System“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/04875343504822859207.
National Central University
Graduate Institute of Computer Science and Information Engineering
97
A job-parallel grid system considers each program to be executed as a job and looks for available computing resources for that job. The major advantages of a job-parallel grid system are: (1) job execution can be easily handled by users, and (2) executable files can be submitted to the system without program re-engineering. The disadvantage is that its programmability is not good enough to support advanced communication primitives. In contrast, a worldwide computing grid system utilizes the power of the internet and virtual machine technology to integrate heterogeneous computing resources as a whole. It provides high-level communication primitives for better programmability, supports numerous coordination approaches for distributed computing, and enables dynamic system reconfiguration for dynamic load balancing. In this paper, we suggest a novel, dynamic worldwide computing platform which operates on a task-parallel computing system. The proposed platform uses Condor, a task-parallel computing system, as its fundamental infrastructure, and it submits the virtual machines of SALSA, a worldwide computing system, along with the SALSA applications, to the Condor system for execution. The proposed platform is more flexible and more manageable, and it runs as a complete worldwide computing platform because it is actually a system with two faces. To construct the proposed platform, we first use Condor to build a Condor pool (a set of computing resources). We then devise a mechanism to detect and manage virtual machines on the Condor pool and SALSA applications on the virtual machines, and implement the necessary interface to close the gap between the users and our system. Our goal is to integrate the advantages of job-parallel computing and worldwide computing and develop a new grid computing platform.
Chuang, Shan-ping, und 莊尚平. „Simultaneous Job Scheduling and Resource Allocaton on Parallel Work Centers“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/87469084469396726792.
National Taiwan University of Science and Technology
Department of Industrial Management
96
This study addresses a job scheduling and resource allocation (JSRA) problem with distinct release dates and due dates, with the objective of minimizing total tardiness in parallel work centers in a multi-processor environment. To solve the problem, this study proposes a hybrid genetic algorithm (HGA) with a release- and due-date-based decomposition heuristic. Five small-sized test problems are used to compare the HGA, a pure GA (PGA), and the optimum solution obtained using Lingo 7.0. The results show that the percentage deviations between the HGA and Lingo are smaller than 15%, and that the HGA has smaller variance than the PGA. Three further large-sized test problems are used to evaluate the HGA and the PGA, and the computational results show that the HGA performs well for large-sized problems. Additionally, comparing the computational results obtained with parameter values 0, 0.1, …, 0.5, values between 0.1 and 0.2 give better solution quality. Finally, this study proposes a decision-supporting model, which integrates simulation, genetic algorithms and decision support tools, for solving the JSRA problem from a practical perspective.
Lai, Yi-hsiang, und 賴以翔. „Parallel Machine Scheduling with Machine Availability, Eligibility and Job Incompatible Constraint“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/55577676941553418505.
National Central University
Graduate Institute of Industrial Management
95
In this paper we consider the problem of scheduling n non-preemptive jobs on m identical machines with machine availability, eligibility and incompatible-job constraints while minimizing the maximum lateness. Each machine is not continuously available at all times, and each job is only allowed to be processed on specific machines. Each job has to be processed in an availability interval with the same service level or higher. Some jobs belong to a family, and only jobs from different families may be processed in the same availability interval. We propose a branch and bound algorithm to find the optimal solution. Firstly, we propose an algorithm based on the Most Remaining Jobs Family First / Earliest Due Date First (MJF/EDD) rule to find an upper bound. Then, we use a network flow technique to model the scheduling problem with job preemption as a series of maximum flow problems. We propose an algorithm, extending the work of Lin (2006), that combines a network flow technique and a binary search procedure to find an optimal solution for the preemptive version of the scheduling problem and uses this result as a lower bound. Finally, we use four dominance rules to increase the efficiency of the branch and bound algorithm. Computational analysis shows that the proposed elimination rules are effective and that a very low percentage of nodes is generated by the branch and bound algorithm. Among these elimination rules, we find that the percentage of nodes eliminated by the incompatible-job restriction increases as the probability of generating jobs with a family type increases or as the number of families decreases. Our algorithm can obtain the optimal solution for problems with up to 15 jobs, 7 machines and 3 families.
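The lower-bound construction mentioned here, a binary search over trial lateness values with a max-flow feasibility test for the preemptive relaxation, can be sketched as follows. This is a textbook-style simplification with invented data, not the thesis's exact model: it assumes all jobs are released at time 0, ignores the family incompatibility, and requires the networkx package.

```python
# Sketch: binary-search a trial maximum lateness L; for each trial, a max-flow
# network checks whether all jobs can be preemptively finished by d_j + L on
# their eligible machines inside the machines' availability windows.

import networkx as nx

jobs = {"J1": {"p": 4, "due": 6, "eligible": {"M1", "M2"}},
        "J2": {"p": 5, "due": 7, "eligible": {"M2"}}}
availability = {"M1": [(0, 5)], "M2": [(0, 3), (4, 9)]}   # (start, end) windows

def feasible(L):
    deadlines = {j: d["due"] + L for j, d in jobs.items()}
    points = sorted({0, *deadlines.values(),
                     *(t for w in availability.values() for se in w for t in se)})
    slots = list(zip(points, points[1:]))                 # elementary intervals
    g, total = nx.DiGraph(), 0
    for j, d in jobs.items():
        g.add_edge("s", j, capacity=d["p"])
        total += d["p"]
        for (a, b) in slots:
            if b > deadlines[j]:
                continue
            g.add_edge(j, (j, a, b), capacity=b - a)      # a job cannot run on two machines at once
            for m in d["eligible"]:
                if any(s <= a and b <= e for s, e in availability[m]):
                    g.add_edge((j, a, b), (m, a, b), capacity=b - a)
    for (a, b) in slots:
        for m in availability:
            g.add_edge((m, a, b), "t", capacity=b - a)
    return nx.maximum_flow_value(g, "s", "t") >= total

lo, hi = 0, 50                                            # search non-negative trial lateness values
while lo < hi:
    mid = (lo + hi) // 2
    lo, hi = (lo, mid) if feasible(mid) else (mid + 1, hi)
print("minimum maximum lateness (preemptive relaxation):", lo)
```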
Wei, Wang Chen, und 王甄薇. „On multi-stage parallel-machine job shop scheduling with due windows“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/13681256911560614536.
Fu Jen Catholic University
Graduate Institute of Management
96
How to lower cost is every manufacturer's most important problem. Taking due windows into account in scheduling reduces not only inventory cost and wasted space but also idle time; moreover, adding machines enlarges capacity and achieves economies of scale. This study considers multi-stage parallel-machine job shop scheduling with due windows, aiming to obtain solutions close to the optimum and to minimize the total penalty due to earliness and tardiness. We first construct a mathematical model for multi-stage parallel-machine job shop scheduling with due windows and then test it on simulated data. The solutions obtained by an ant colony optimization (ACO) program written in C are compared with the best solutions obtained by LINGO 8.0 to test efficiency and robustness. The results show that ACO performs well on the tests, and its solving time is shorter than that of LINGO 8.0, so ACO offers both the effectiveness and the efficiency emphasized in business management.
Hsu, Hao-chun, und 徐豪駿. „Parallel Machine Scheduling with Machine Availability,Eligibility and Job Incompatible Constraints“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/24082829472579173289.
National Central University
Graduate Institute of Industrial Management
100
In this paper we consider the problem of scheduling n preemptive jobs on m identical machines with machine availability, eligibility and incompatible-job constraints while minimizing the makespan. Each machine is not continuously available for processing at all times, and each job is only allowed to be processed on specific machines. Each job belongs to a family, and only jobs from different families can be processed in the same availability interval. Firstly, we use a network flow technique to model the scheduling problem with job preemption as a series of maximum flow problems. Secondly, we propose an algorithm based on the Family With Most Job Remaining Processing Time First / Earliest Release Date First (FRPF/ERD) rule to find an upper bound. Thirdly, we use a branch and bound algorithm to deal with the incompatibility constraint of our problem, together with six dominance rules to increase its efficiency. Finally, we modify an algorithm proposed by Liao and Sheen (2008), which combines a network flow technique and a binary search procedure, to find an optimal solution for the scheduling problem. Computational analysis shows that the elimination rules are effective: the percentage of nodes generated by the branch and bound algorithm is low, and more than half of the eliminated nodes are attributed to Proposition 5, which becomes more effective as n and m increase.
Chang, Wen-Ting, und 張文亭. „The job-splitting scheduling of varying-time-window jobs on parallel machines by mixed integer programming“. Thesis, 2013. http://ndltd.ncl.edu.tw/handle/23391267698472535574.
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
101
This study is motivated by production management problems found in many large-volume MTO systems. In such an environment, a decision maker has to evaluate whether requested due dates are capacity-feasible and, further, to determine how to distribute each confirmed order, with an arbitrary ready date and due date, over parallel machines so that the demanded quantity and the due date can be met. Sequence- and machine-dependent setup times in unrelated parallel machine systems are considered. In order to finish promised orders on time, splitting the production items across parallel machines for simultaneous processing is necessary. Two approaches are proposed for the two hierarchical levels of production management: a preemptive earliest due date (PEDD) CRP approach at the order-entry level, and a lexicographic goal programming approach at the operational scheduling level. Two priority objectives are considered in the goal programming: the first priority is to minimize total tardiness and the second is to minimize total setup time, and two different mixed integer programming (MIP) formulations are proposed for them. Furthermore, two models with different assumptions on job splitting are investigated at the operational scheduling level. Model 1 allows at most one setup for a job on a machine, while Model 2 allows multiple setups. Because of this difference, the two models adopt different binary-variable modeling techniques: the immediate-precedence variable technique is used in Model 1, whereas the sequence-position variable technique is used in Model 2. The experimental results show that the proposed approach solves the problems effectively and efficiently.
Chan, Pei-Yun, und 詹珮芸. „Parallel Machine Scheduling with Minimal and Maximal Time Lags for Reentrant Job“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/87573722329193258185.
National Central University
Graduate Institute of Industrial Management
95
In this paper we study the problem of scheduling recirculating (reentrant) jobs on identical parallel machines with eligibility and availability restrictions while minimizing the makespan. Each job may visit a machine more than once and is only allowed to be processed on specific machines; each machine is not always available for processing. In addition, minimal and maximal time lag constraints on the starting times of each reentrant job are considered. We develop two branch and bound algorithms to solve the scheduling problem optimally: one deals with the job allocation problem under machine eligibility, and the other schedules the sequence on each allocated machine under the minimal and maximal time lag constraints. We introduce a dummy job to represent each machine-unavailability interval and propose a first branch and bound algorithm which uses a depth-first strategy to allocate jobs. We transform the scheduling problem corresponding to each leaf node into m single-machine problems, which can be represented by a disjunctive graph. Finally, we use a second branch and bound algorithm, adopted from Sheen and Lao (2007), to obtain the optimal solution of each single-machine problem and take the maximum to find the makespan. Computational analysis shows that the proposed elimination rules are effective and can eliminate more than 80% of the nodes when the total number of operations is larger than 5 and machine availability is generated at the 80% setting. Our algorithm can obtain the optimal solution for problems with up to 10 operations and 3 machines.
CHAO, CHIEN-WEN, und 趙謙文. „Minimizing Sum of Job Completion Times on Parallel Machines with Machine Restrictions“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25ywxu.
National Taiwan University of Science and Technology
Department of Industrial Management
95
The thesis studies two parallel-machine problems with the objective of minimizing the sum of job completion times. The first problem is to schedule jobs on two identical parallel machines when one machine is available only in a specified time period. This thesis proposes an optimal branch-and-bound algorithm which employs three powerful elements: an algorithm for computing an upper bound, a lower bound algorithm, and a fathoming condition. The branch-and-bound algorithm was tested on problems of various sizes and parameters. The results show that the algorithm is quite efficient in solving all the test problems. In particular, the total computation time for the hardest problem is less than 0.1 second for a set of 100 problem instances. An important finding of the tests is that the upper bound algorithm can actually find optimal solutions to quite a large number of problems. The second problem considered in this thesis is a parallel machine problem in which machines and jobs are classified into two levels: high and low. A high-level machine can process all jobs, while a low-level machine can process only low-level jobs. This problem is a special case of the parallel machine problem with machine eligibility restrictions, which can be formulated as a weighted bipartite matching problem and solved by the famous Hungarian method. By exploiting the special structure of the problem, we develop a streamlined algorithm that achieves dramatic computational savings. Results of computational experiments show that the suggested algorithm outperforms the Hungarian method in solving the problem.
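The assignment formulation mentioned in this abstract can be sketched as follows; the data are invented, and the two-level eligibility is encoded simply as forbidden cells in the cost matrix rather than through the thesis's specialized streamlined algorithm. The sketch requires numpy and scipy.

```python
# Sketch of the classical assignment formulation: minimizing the sum of
# completion times on parallel machines can be written as a weighted bipartite
# matching of jobs to (machine, position-from-last) slots, where a job placed
# k-th from the end of a machine contributes k * p_j to the objective.

import numpy as np
from scipy.optimize import linear_sum_assignment

p = [3, 7, 2, 5, 4]                       # processing times
level = ["H", "L", "L", "H", "L"]         # job levels
machines = [("M1", "H"), ("M2", "L")]     # machine levels: H can do all jobs, L only L-jobs

n = len(p)
slots = [(m, k) for m, _ in machines for k in range(1, n + 1)]   # k = position from last
BIG = 10 ** 6                             # cost for forbidden (ineligible) assignments

cost = np.full((n, len(slots)), BIG, dtype=float)
machine_level = dict(machines)
for j in range(n):
    for s, (m, k) in enumerate(slots):
        if machine_level[m] == "H" or level[j] == "L":           # eligibility check
            cost[j, s] = k * p[j]

rows, cols = linear_sum_assignment(cost)
for j, s in zip(rows, cols):
    print(f"job {j} -> machine {slots[s][0]}, position {slots[s][1]} from the end")
print("sum of completion times:", cost[rows, cols].sum())
```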
Su, Chi-shiang, und 蘇啟祥. „Two parallel machines scheduling problems with job delivery coordination and availability constraint“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/21196232035287077834.
National Taiwan University of Science and Technology
Department of Industrial Management
99
In a rapidly changing environment, competition among enterprises tends to take place between supply chain systems instead of between individual companies. Traditional scheduling models, which only address the sequence of jobs to be processed at the production stage under some criterion, are no longer suitable and should be extended to cover the distribution stage after production. Emphasizing coordination and integration among the members of a supply chain has become one of the vital strategies for modern manufacturers to gain competitive advantages. This thesis studies two NP-hard two-stage scheduling problems in which jobs are processed by two identical parallel machines and then delivered to a customer, with the objective of minimizing the makespan. First, it focuses on a class of two-identical-parallel-machine problems in which jobs of different sizes need to be delivered to a single customer by a vehicle after their production stage. The problem was first proposed by Chang and Lee (2004), who presented a polynomial-time algorithm with a worst-case ratio of 2; Zhong et al. [53] presented an improved algorithm that reduced the worst-case ratio to 5/3. The purpose of this thesis is to propose a new algorithm which leads to a best possible solution with a worst-case ratio of 63/40, except for two particular cases where C_max^{H3}/C* ≤ 8/5. Second, this thesis studies the two-stage scheduling problem in which jobs are processed by two identical parallel machines and delivered to a customer, where one of the machines has an unavailable interval due to preventive maintenance. The problem was first proposed by Wang and Cheng (2007), who gave an algorithm H2 with a worst-case ratio of 5/3. We propose a new algorithm which leads to a best possible solution with a worst-case ratio of 3/2.
Chen, Kuan Ming, und 陳冠名. „An Application of Parallel Metaheuristic Methods for Solving Job Shop Scheduling Problem“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/54019754291739772494.
Der volle Inhalt der QuelleWANG, YUE-FANG, und 王月芳. „The effect of job splitting on flow time forecasts for parallel machine scheduling“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/cub4b6.
National Kaohsiung First University of Science and Technology
Master's Program, Department of Logistics Management
105
In order to improve manufacturing services in the face of global competition, companies usually use parallel machines to increase production capacity. However, pursuing utilization often changes the schedule and can reduce the accuracy of flow time prediction. The purpose of this study is therefore to examine parallel machine operation with setups required between different job types and with flexible job-splitting rules, in the hope of finding a combination of job splitting and scheduling that takes care of both due dates and the stability of the production process, making the production line stable and easy to predict. The study builds simulation experiments with the simulation software ARENA and collects data from the simulation results; it then divides the predictors into processing time and waiting time and uses the statistical software Minitab to build flow time prediction models, which are used to assess how changes in the control variables affect due date and flow time performance. The experimental results show that ATCS outperforms FIFO and EDD in terms of both delivery and forecast performance, and that combining the ATCS rule with splitting large jobs into two jobs achieves the best of both worlds. In addition, changing the K1 and K2 parameters of the ATCS rule affects delivery and forecast performance: this study finds that large values of K1 reduce the tardy ratio but hurt the accuracy of the prediction model, while increasing K2 increases both the setup frequency and the forecast accuracy.
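For reference, the ATCS (Apparent Tardiness Cost with Setups) dispatching index that the K1 and K2 parameters belong to is commonly written as in the sketch below; the weights, processing times, due dates and setup matrix are invented, and the exact parameter handling in the thesis may differ.

```python
# Sketch of the ATCS dispatching index: when a machine frees up at time t after
# finishing a job of family `last_family`, the waiting job with the largest
# index is started next. k1 and k2 are the look-ahead parameters studied above.

import math

def atcs_index(job, t, last_family, k1, k2, p_bar, s_bar, setup):
    """job = (weight, processing_time, due_date, family)."""
    w, p, d, fam = job
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k1 * p_bar)) \
                   * math.exp(-setup[(last_family, fam)] / (k2 * s_bar))

setup = {("A", "A"): 0.0, ("A", "B"): 2.0, ("B", "A"): 2.0, ("B", "B"): 0.0}
queue = [(1.0, 4.0, 20.0, "A"), (1.0, 3.0, 12.0, "B"), (2.0, 5.0, 15.0, "A")]
p_bar = sum(j[1] for j in queue) / len(queue)        # average processing time
s_bar = sum(setup.values()) / len(setup)             # average setup time

best = max(queue, key=lambda j: atcs_index(j, t=6.0, last_family="A",
                                           k1=2.0, k2=1.0,
                                           p_bar=p_bar, s_bar=s_bar, setup=setup))
print("dispatch next:", best)
```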
Hsueh, Po-Jen, und 薛博仁. „A Research of Multi-Objective Parallel-Machine Drum-Buffer-Rope Job-Shop Scheduling“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/86810114596356011678.
Tunghai University
Department of Industrial Engineering and Enterprise Information
102
Drum-Buffer-Rope (DBR) is the application of the Theory of Constraints (TOC) to the scheduling field. Its main spirit is to exploit bottleneck or constraint resources in order to maximize the overall output of the system, and its main tool is buffer management. This study is based on job-shop parallel-machine scheduling and proposes combining a Genetic Algorithm (GA) with Drum-Buffer-Rope, prioritizing both quantitative and qualitative bottlenecks, to produce a complete schedule plan for multiple enterprise business-strategy goals. In reality, companies do not pursue a single goal but several goals at once, and they are usually obstructed by different types of bottlenecks, which makes scheduling very difficult. Most goals concern quantitative bottlenecks, for example manufacturing lead time and due dates, while qualitative bottlenecks, such as the potential profit of an order and previous transaction history, are also very important but are usually not considered. This study focuses on job-shop parallel-machine scheduling, develops a DBR method, and uses a GA to maximize the use of the finite constraint resources. The buffer length, which in practice is mostly determined from past experience in order to protect the bottleneck, is made adjustable. The multi-objective strategy includes both quantitative and qualitative strategies; the schedule planner designs appropriate weights, and a multi-objective fitness function is constructed from these considerations. The results show that the GA combined with DBR outperforms general scheduling methods.
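A minimal sketch of the Drum-Buffer-Rope logic referred to above, with invented data; the thesis combines this with a GA and multi-objective weights, which are not shown here.

```python
# Illustrative Drum-Buffer-Rope sketch: the most heavily loaded work centre is
# taken as the constraint, its schedule is the "drum", and the "rope" releases
# each order one buffer length before its drum start so the constraint is
# never starved. All data are made up.

orders = [  # (order, {work_centre: processing_time})
    ("O1", {"WC1": 2, "WC2": 5}),
    ("O2", {"WC1": 3, "WC2": 4}),
    ("O3", {"WC1": 1, "WC2": 6}),
]
BUFFER = 4   # time buffer protecting the constraint

loads = {}
for _, route in orders:
    for wc, proc in route.items():
        loads[wc] = loads.get(wc, 0) + proc
drum_wc = max(loads, key=loads.get)                  # constraint = most loaded centre

t, drum_schedule, releases = 0, [], {}
for name, route in orders:                           # drum: simple FCFS on the constraint
    start = t
    t += route[drum_wc]
    drum_schedule.append((name, start, t))
    releases[name] = max(0, start - BUFFER)          # rope: release BUFFER before drum start

print("constraint:", drum_wc)
print("drum schedule:", drum_schedule)
print("material release times:", releases)
```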