Journal articles on the topic 'ROBIN PROCESS SCHEDULING'

Consult the top 50 journal articles for your research on the topic 'ROBIN PROCESS SCHEDULING.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Chhugani, Barkha, and Mahima Silvester. "Improving Round Robin Process Scheduling Algorithm." International Journal of Computer Applications 166, no. 6 (May 17, 2017): 12–16. http://dx.doi.org/10.5120/ijca2017914034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Santika, Monica, and Seng Hansun. "Implementasi Algoritma Shortest Job First dan Round Robin pada Sistem Penjadwalan Pengiriman Barang." Jurnal ULTIMATICS 6, no. 2 (November 1, 2014): 94–99. http://dx.doi.org/10.31937/ti.v6i2.336.

Full text
Abstract:
Delivery of goods is normally conducted according to the queuing time of the booking. This can be inefficient and result in delays. To build a better scheduling system, the Shortest Job First and Round Robin algorithms were implemented. Experimental results show that both algorithms were successfully applied to the delivery scheduling application. Shortest Job First outperforms Round Robin for delivery scheduling because it moves processes with short execution times ahead of those with long ones, so it requires less time than the Round Robin algorithm. Index Terms - Round Robin, Scheduling, Shipping, Shortest Job First
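The comparison the abstract draws can be reproduced in a few lines. The sketch below is an illustration with made-up burst times, not the authors' implementation; it assumes all jobs arrive at time zero and contrasts average waiting time under non-preemptive SJF and under Round Robin.

```python
from collections import deque

def sjf_avg_wait(bursts):
    """Non-preemptive Shortest Job First: run jobs in ascending burst order."""
    t = total_wait = 0
    for b in sorted(bursts):
        total_wait += t          # each job waits until all shorter jobs finish
        t += b
    return total_wait / len(bursts)

def rr_avg_wait(bursts, quantum):
    """Round Robin with a fixed quantum; all jobs arrive at t = 0."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_left = [0] * len(bursts)        # when each job last left the CPU
    queue = deque(range(len(bursts)))
    t = 0
    while queue:
        i = queue.popleft()
        waits[i] += t - last_left[i]     # time spent queued since last slice
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        last_left[i] = t
        if remaining[i]:
            queue.append(i)
    return sum(waits) / len(waits)

bursts = [6, 2, 8, 3]                  # example burst times (ms)
print(sjf_avg_wait(bursts))            # 4.5
print(rr_avg_wait(bursts, quantum=2))  # 8.0
```

On this workload SJF's average waiting time is roughly half of Round Robin's, matching the abstract's conclusion for delivery scheduling.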
3

Thirumala Rao, B., M. Susmitha, T. Swathi, and G. Akhil. "Implementation Of Hybrid Scheduler In Hadoop." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 868. http://dx.doi.org/10.14419/ijet.v7i2.7.11084.

Full text
Abstract:
The paper focuses on a priority-based round robin scheduling algorithm for scheduling jobs in a Hadoop environment. The proposed scheduling algorithm reduces the starvation of jobs, while priority scheduling contributes the advantage that the process with the highest priority is executed first. Combining the strategies of round robin and priority scheduling, an optimized algorithm is implemented that works efficiently even when all scheduling parameters are considered. The proposed algorithm is also compared with the existing round robin and priority scheduling algorithms.
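A minimal sketch of the combined strategy (an assumed shape, not the paper's code): processes are grouped by priority, higher-priority groups are served first, and Round Robin is applied within each group so that equal-priority processes cannot starve one another.

```python
from collections import deque

def priority_round_robin(jobs, quantum):
    """jobs maps name -> (burst, priority); a lower number means higher priority.
    Returns the order in which processes complete."""
    levels = {}
    for name, (burst, prio) in jobs.items():
        levels.setdefault(prio, deque()).append([name, burst])
    finished = []
    for prio in sorted(levels):            # highest-priority level first
        queue = levels[prio]
        while queue:                       # plain Round Robin inside the level
            name, rem = queue.popleft()
            if rem > quantum:
                queue.append([name, rem - quantum])  # not done: requeue
            else:
                finished.append(name)
    return finished

jobs = {"A": (4, 1), "B": (2, 1), "C": (3, 2)}   # hypothetical workload
print(priority_round_robin(jobs, quantum=2))      # ['B', 'A', 'C']
```

Both level-1 jobs finish before the level-2 job, and within level 1 the shorter job completes first because RR time-slices them fairly.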
4

Putra, Tri Dharma, and Rakhmat Purnomo. "Simulation of Priority Round Robin Scheduling Algorithm." Sinkron 7, no. 4 (October 3, 2022): 2170–81. http://dx.doi.org/10.33395/sinkron.v7i4.11665.

Full text
Abstract:
This paper presents a simulation of the priority round robin scheduling algorithm. Simulation can be used to imitate the operation of an operating system, using models that represent the characteristics or behaviour of the system. Process scheduling is an important operation in operating systems, and OS-SIM can be used to model and simulate it. Modern operating systems provide several scheduling algorithms, such as First Come First Serve (FCFS), Shortest Job First (SJF), Round Robin (RR), Priority Scheduling, or combinations of these. One important scheduling algorithm for real-time or embedded systems is priority round robin, a preemptive algorithm in which each process is given a time quantum and a priority. Here a time quantum of 3 is used. The smaller the time quantum, the more context switching occurs. With OS-SIM the simulation can be understood easily and thoroughly: the simulator automatically calculates statistics such as the number of context switches, average waiting time, average turnaround time, and average response time. In one example with quantum = 3, the average turnaround time is 18.25 ms, the average waiting time is 12 ms, the average response time is 2.75 ms, and the total burst time is 25 ms.
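The statistics the simulator reports can be computed directly. The sketch below is an independent illustration with a made-up process set (it is not OS-SIM and not the paper's workload); it runs Round Robin with quantum = 3 for jobs arriving at t = 0 and derives average turnaround, waiting, and response times plus the number of context switches.

```python
from collections import deque

def simulate_rr(bursts, quantum):
    n = len(bursts)
    remaining = list(bursts)
    first_run = [None] * n       # time each process first gets the CPU
    finish = [0] * n
    queue = deque(range(n))
    t, dispatches = 0, 0
    while queue:
        i = queue.popleft()
        dispatches += 1
        if first_run[i] is None:
            first_run[i] = t
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)
        else:
            finish[i] = t
    turnaround = [finish[i] for i in range(n)]              # arrival = 0
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    response = first_run
    switches = dispatches - 1    # transitions between dispatched processes
    return (sum(turnaround) / n, sum(waiting) / n, sum(response) / n, switches)

tat, wt, rt, cs = simulate_rr([5, 3, 1], quantum=3)   # example burst times
print(round(tat, 2), round(wt, 2), rt, cs)            # 7.33 4.33 3.0 3
```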
5

Prasad Arya, Govind, Kumar Nilay, and Devendra Prasad. "An Improved Round Robin CPU Scheduling Algorithm based on Priority of Process." International Journal of Engineering & Technology 7, no. 4.5 (September 22, 2018): 238. http://dx.doi.org/10.14419/ijet.v7i4.5.20077.

Full text
Abstract:
The most important and integral part of a computer system is its operating system. Scheduling various resources is one of the most critical tasks an operating system performs. Process scheduling, one of those tasks, involves techniques that define how multiple processes can be executed concurrently. The primary aim is to make the system more efficient and faster. The fundamental scheduling algorithms are First Come First Serve (FCFS), Round Robin, Priority Based Scheduling, and Shortest Job First (SJF). This paper focuses on the Round Robin scheduling algorithm and issues related to it. One major issue in RR scheduling is determining the length of the time quantum: if it is too large, RR scheduling behaves like FCFS; if it is too small, it forces a considerable increase in the number of context switches. Our main objective is to overcome this limitation of the traditional RR scheduling algorithm and maximize CPU utilization, leading to a more efficient and faster system. We propose an algorithm that categorizes available processes into high-priority and low-priority processes. The proposed algorithm reduces the average waiting time of high-priority processes in all cases, and of low-priority processes in some cases; the overall waiting time changes depending on the set of processes considered. The simulation results show that the proposed scheme reduces the overall average waiting time compared to existing schemes.
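The claimed effect, a lower waiting time for the high-priority group, can be demonstrated with a toy experiment (an assumed two-class scheme, not the paper's exact algorithm): run plain RR over all jobs, then run RR over the high-priority jobs first and compare their waiting times.

```python
from collections import deque

def rr_waits(bursts, quantum, start=0):
    """Per-job waiting times under Round Robin; all jobs ready at `start`.
    Returns (waits, finish_time_of_last_job)."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_left = [start] * len(bursts)
    queue = deque(range(len(bursts)))
    t = start
    while queue:
        i = queue.popleft()
        waits[i] += t - last_left[i]
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        last_left[i] = t
        if remaining[i]:
            queue.append(i)
    return waits, t

high, low, q = [4, 2], [6, 6], 2      # hypothetical burst times

# Plain RR over everything: high-priority jobs wait behind low-priority ones.
plain, _ = rr_waits(high + low, q)
# Two-class scheme: serve the high-priority group to completion first.
grouped_high, end = rr_waits(high, q)

print(sum(plain[:2]) / 2)        # 4.0  (high-priority wait, plain RR)
print(sum(grouped_high) / 2)     # 2.0  (high-priority wait, grouped)
```

The high-priority jobs' average wait halves under the grouped scheme, while the low-priority jobs simply start at time `end`.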
6

Faizan, Khaji, Abhijeet Marikal, and Kakelli Anil. "A Hybrid Round Robin Scheduling Mechanism for Process Management." International Journal of Computer Applications 177, no. 36 (February 17, 2020): 14–19. http://dx.doi.org/10.5120/ijca2020919851.

Full text
7

Biswas, Dipto, Md Samsuddoha, Md Rashid Al Asif, and Md Manjur Ahmed. "Optimized Round Robin Scheduling Algorithm Using Dynamic Time Quantum Approach in Cloud Computing Environment." International Journal of Intelligent Systems and Applications 15, no. 1 (February 8, 2023): 22–34. http://dx.doi.org/10.5815/ijisa.2023.01.03.

Full text
Abstract:
Cloud computing refers to a sophisticated technology that manipulates data on internet-based servers dynamically and efficiently. The utilization of cloud computing has increased rapidly because of its scalability, accessibility, and flexibility. Dynamic usage and process-sharing facilities require task scheduling, which is a prominent issue and plays a significant role in developing an optimal cloud computing environment. Round robin is generally an efficient task scheduling algorithm that has a powerful impact on the performance of the cloud computing environment. This paper introduces a new round robin based task scheduling algorithm suited to the cloud computing environment. The proposed algorithm determines the time quantum dynamically for each round, based on the differences among the three largest burst times of tasks in the ready queue. A central element of the proposed method is combining, in an additive manner, these differences and the burst times of the processes when determining the time quantum. The experimental results show that the proposed approach improves the round robin task scheduling algorithm by reducing average turnaround time, average waiting time, and the number of context switches. Moreover, a comparative study shows that the proposed approach outperforms several similar existing round robin approaches. Based on the experiments and the comparative study, the proposed dynamic round robin scheduling algorithm is comparatively better, acceptable, and well suited to the cloud environment.
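The general mechanism, recomputing the quantum at the start of each round from the burst times still in the ready queue, can be sketched as follows. The quantum rule used here (mean of the remaining burst times) is a deliberately simple stand-in assumption; the paper's actual rule combines the differences among the three largest burst times.

```python
def rr_dynamic_quantum(bursts):
    """Round Robin whose quantum is recomputed at the start of every round.
    Quantum rule: mean of remaining burst times (illustrative stand-in only)."""
    remaining = {i: b for i, b in enumerate(bursts)}
    order = []
    while remaining:
        # Recompute the quantum from whatever is still in the ready queue.
        quantum = max(1, sum(remaining.values()) // len(remaining))
        for i in list(remaining):          # one full round over the ready queue
            remaining[i] -= min(quantum, remaining[i])
            if remaining[i] == 0:
                del remaining[i]
                order.append(i)
    return order

print(rr_dynamic_quantum([8, 4, 6]))   # [1, 2, 0]
```

With bursts 8, 4, 6 the first round's quantum is 6, so the two shorter tasks finish in round one and the long task finishes in a short second round, which is how a dynamic quantum cuts context switches relative to a small fixed quantum.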
8

Siva Nageswara Rao, G., N. Srinivasu, S. V. N. Srinivasu, and G. Rama Koteswara Rao. "Dynamic Time Slice Calculation for Round Robin Process Scheduling Using NOC." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 6 (December 1, 2015): 1480. http://dx.doi.org/10.11591/ijece.v5i6.pp1480-1485.

Full text
Abstract:
Process scheduling means allocating a certain amount of CPU time to each of the user processes. One of the popular scheduling algorithms is the "Round Robin" algorithm, which allows each and every process to utilize the CPU for a short time duration. Processes which finish executing during the time slice are removed from the ready queue. Processes which do not complete execution during the specified time slice are removed from the front of the queue and placed at the rear end of the queue. This paper presents an improvement to the traditional round robin scheduling algorithm by proposing a new method that represents the time slice as a function of the burst time of the waiting process in the ready queue. Fixing the time slice for a process is a crucial factor, because it subsequently influences many performance parameters like turnaround time, waiting time, response time and the frequency of context switches. Though the time slot is fixed for each process, this paper explores the fine-tuning of the time slice for processes which do not complete in the stipulated time allotted to them.
9

Putra, Muhammad Taufik Dwi, Haryanto Hidayat, Naziva Septian, and Tiara Afriani. "Analisis Perbandingan Algoritma Penjadwalan CPU First Come First Serve (FCFS) Dan Round Robin." Building of Informatics, Technology and Science (BITS) 3, no. 3 (December 31, 2021): 207–12. http://dx.doi.org/10.47065/bits.v3i3.1047.

Full text
Abstract:
CPU scheduling is important in multitasking and multiprocessing operating systems because of the many processes that need to run on a computer, which requires the operating system to divide resources among running processes. CPU scheduling includes several algorithms, such as First Come First Serve (FCFS), Shortest Job First (SJF), Priority Scheduling, and Round Robin (RR). This study compares the First Come First Serve and Round Robin algorithms on four parameters: average turnaround time, waiting time, throughput, and CPU utilization. The experiment was conducted with the First Come First Serve algorithm and with Round Robin at three different time quanta; the calculations at different quanta aim to determine whether the differences affect the advantages of the Round Robin algorithm over First Come First Serve. The conclusion is that the First Come First Serve (FCFS) algorithm is superior to the Round Robin (RR) algorithm, as indicated by FCFS achieving more effective average turnaround time, waiting time, and throughput values.
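The waiting-time side of this comparison can be reproduced with a small script (arbitrary example burst times, not the paper's data set): FCFS waiting times are prefix sums over the arrival order, while RR is simulated with a fixed quantum. Jobs of similar length, as below, favour FCFS.

```python
from collections import deque

def fcfs_avg_wait(bursts):
    """First Come First Serve: each job waits for everything queued before it."""
    t = total = 0
    for b in bursts:
        total += t
        t += b
    return total / len(bursts)

def rr_avg_wait(bursts, quantum):
    """Round Robin; all jobs arrive at t = 0."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_left = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    while queue:
        i = queue.popleft()
        waits[i] += t - last_left[i]
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        last_left[i] = t
        if remaining[i]:
            queue.append(i)
    return sum(waits) / len(waits)

bursts = [5, 5, 5]                     # similar-length jobs
print(fcfs_avg_wait(bursts))           # 5.0
print(rr_avg_wait(bursts, quantum=2))  # 9.0
```

When burst times are similar, RR's time-slicing only delays every completion, so FCFS wins on average waiting time, consistent with the study's conclusion.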
10

Parinduri, Ikhsan, and Siti Nurhabibah Hutagalung. "Teknik Penjadwalan Prosesor FIFO, SJF Non Preempetive, Round Robin." Prosiding Seminar Nasional Riset Information Science (SENARIS) 1 (September 30, 2019): 864. http://dx.doi.org/10.30645/senaris.v1i0.93.

Full text
Abstract:
Processor scheduling is divided into several methods, including FIFO, non-preemptive SJF, and Round Robin. The implementation shows the performance of the processor in terms of process, waiting time, arrival time, and completion stage. In this case the processor scheduling is implemented in NetBeans IDE 7.0.1, with input on the main menu and a calculation display menu showing the AWT (average waiting time) value in ms, a process table (process and burst time), and a Gantt chart (process, waiting time, start time, and end time).
11

Bisht, Aashna, Mohd Abdul Ahad, and Sielvie Sharma. "Calculating Dynamic Time Quantum for Round Robin Process Scheduling Algorithm." International Journal of Computer Applications 98, no. 21 (July 18, 2014): 20–27. http://dx.doi.org/10.5120/17307-7760.

Full text
12

Mostafa, Samih M., and Hirofumi Amano. "Dynamic Round Robin CPU Scheduling Algorithm Based on K-Means Clustering Technique." Applied Sciences 10, no. 15 (July 26, 2020): 5134. http://dx.doi.org/10.3390/app10155134.

Full text
Abstract:
Minimizing time cost in time-shared operating systems is the main aim of researchers interested in CPU scheduling. CPU scheduling is the basic job within any operating system. Scheduling criteria (e.g., waiting time, turnaround time, and number of context switches (NCS)) are used to compare CPU scheduling algorithms. Round robin (RR) is the most common preemptive scheduling policy used in time-shared operating systems. In this paper, a modified version of the RR algorithm is introduced that combines the advantage of favoring short processes with the low scheduling overhead of RR, for the sake of minimizing average waiting time, turnaround time, and NCS. The proposed work starts by clustering the processes into clusters, where each cluster contains processes that are similar in attributes (e.g., CPU service period, weight, and number of allocations to the CPU). Every process in a cluster is assigned the same time slice, depending on the weight of its cluster and its CPU service period. The authors performed a comparative study of the proposed approach and popular scheduling algorithms on nine groups of processes varying in their attributes. The evaluation was measured in terms of waiting time, turnaround time, and NCS. The experiments showed that the proposed approach gives better results.
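The clustering step can be illustrated with a tiny one-dimensional k-means over burst times (a sketch under stated assumptions, not the authors' code; the per-cluster slice rule below is a plausible stand-in, not the paper's formula).

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: group burst times into k clusters of similar length."""
    # Spread the initial centers across the sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

bursts = [2, 3, 4, 20, 22, 25]        # hypothetical CPU service periods
centers, clusters = kmeans_1d(bursts, k=2)
# Stand-in policy: derive each cluster's time slice from its mean burst.
slices = [max(1, int(c)) for c in centers]
print(clusters)   # [[2, 3, 4], [20, 22, 25]]
print(slices)     # [3, 22]
```

Short jobs land in a cluster with a small slice and long jobs in one with a large slice, which is the intuition behind favoring short processes while keeping RR's overhead low.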
13

Zhang, Xiao Yong, Jun Peng, and Shuo Li. "Clock Compensation Strategy in Train Ethernet." Applied Mechanics and Materials 347-350 (August 2013): 2061–66. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2061.

Full text
Abstract:
Reducing the queuing delay at the gateway under heavy network load in train communication networks is key to guaranteeing the stability of the train. This article models the queue scheduling problem as a reinforcement learning process and puts forward an adaptive-weight polling scheduling algorithm for dynamic scheduling of the vehicle gateway node queue. By comparing the proposed algorithm with the weighted round-robin scheduling algorithm under both adequate and inadequate bandwidth resources, we demonstrate the superiority of the proposed algorithm.
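The baseline the authors compare against, weighted round-robin among gateway queues, can be sketched as follows; the reinforcement-learning weight adaptation itself is not reproduced here.

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Transmit up to weights[i] packets from queue i per cycle,
    until `budget` packets have been sent or all queues are empty."""
    sent = []
    while len(sent) < budget and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):              # queue i's share of the cycle
                if q and len(sent) < budget:
                    sent.append(q.popleft())
    return sent

a = deque(["a1", "a2", "a3", "a4"])   # hypothetical high-weight traffic class
b = deque(["b1", "b2"])               # hypothetical low-weight traffic class
print(weighted_round_robin([a, b], weights=[2, 1], budget=6))
# ['a1', 'a2', 'b1', 'a3', 'a4', 'b2']
```

An adaptive scheme like the paper's would adjust `weights` between cycles based on observed queue state rather than keeping them fixed.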
14

Zakir, Ahmad, Sheila Azhari Dalimunthe, and Dedy Irwan. "PENERAPAN ALGORITMA ROUND ROBIN PADA PENJADWALAN PREVENTIVE MAINTENANCE DI PT. PASIFIK SATELIT NUSANTARA." Jurnal Teknik Informasi dan Komputer (Tekinkom) 3, no. 2 (January 4, 2021): 54. http://dx.doi.org/10.37600/tekinkom.v3i2.142.

Full text
Abstract:
Scheduling is needed when several activities must be processed at a certain time. Good scheduling maximizes the effectiveness of every activity, so scheduling is an important part of planning and controlling activities. At PT. Pasifik Satelit Nusantara, preventive maintenance must be performed on the devices used by customers so that no additional costs occur, and this requires scheduling that minimizes downtime. The schedule must therefore match the actual conditions in the company, so that customer activity is not hampered by damaged equipment. This study utilizes a round robin algorithm to schedule the human resources that carry out preventive maintenance activities. The authors use a website-based medium so that company supervisors can easily schedule technicians or workers.
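For this use case the round-robin idea reduces to cyclic assignment of tasks to workers. A minimal sketch with hypothetical task and technician names, not the company's system:

```python
from itertools import cycle

def assign_round_robin(tasks, technicians):
    """Assign preventive-maintenance tasks to technicians in cyclic order."""
    turn = cycle(technicians)
    return {task: next(turn) for task in tasks}

schedule = assign_round_robin(["site-1", "site-2", "site-3"], ["Ana", "Budi"])
print(schedule)   # {'site-1': 'Ana', 'site-2': 'Budi', 'site-3': 'Ana'}
```

Cycling through the technician list spreads the workload evenly, which is the downtime-minimizing property the abstract is after.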
15

Kumar, Sarvesh, Gaurav Kumar, Komal Jain, and Aditi Jain. "An approach to reduce turn around time and waiting time by the selection of round robin and shortest job first algorithm." International Journal of Engineering & Technology 7, no. 2.8 (March 19, 2018): 667. http://dx.doi.org/10.14419/ijet.v7i2.8.10553.

Full text
Abstract:
This research studies the operating system, its working, and how it serves as the interface between user software and system hardware. Different levels of scheduling are used to provide multiprocessing, with schedulers applied at different stages of a process's life from the ready queue to termination. This paper focuses on the average waiting time and turnaround time of processes. The proposed algorithm yields less waiting time and turnaround time compared to the round robin and shortest job first scheduling algorithms.
16

Srilatha, N., M. Sravani, and Y. Divya. "Optimal Round Robin CPU Scheduling Algorithm Using Manhattan Distance." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 6 (December 1, 2017): 3664. http://dx.doi.org/10.11591/ijece.v7i6.pp3664-3668.

Full text
Abstract:
In Round Robin scheduling the time quantum is fixed, and processes are scheduled such that no process gets CPU time for more than one time quantum in one go. The performance of the Round Robin CPU scheduling algorithm depends entirely on the selected time quantum. If the time quantum is too large, the response time of processes is too long, which may not be tolerable in an interactive environment. If it is too small, it causes unnecessarily frequent context switches, leading to more overhead and less throughput. In this paper a method using Manhattan distance is proposed to decide the quantum value: the time quantum is computed as the distance, or difference, between the highest and lowest burst times. The experimental analysis shows that this algorithm performs better than the RR algorithm by reducing the number of context switches, the average waiting time, and the average turnaround time.
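The quantum rule described, the distance between the highest and lowest burst times in the ready queue, is easy to state in code (a sketch of the stated rule, not the authors' implementation; the floor of 1 is an added safeguard):

```python
def manhattan_quantum(bursts):
    """Time quantum as the Manhattan distance between the largest and
    smallest burst times in the ready queue, per the abstract's rule."""
    return max(1, max(bursts) - min(bursts))   # floor of 1 avoids a zero quantum

print(manhattan_quantum([24, 3, 17, 9]))   # 21
```

A wide spread of burst times yields a large quantum (few context switches); near-identical bursts collapse toward the minimum quantum.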
17

Peng, Chen, and Hongchenyu Yang. "Networked scheduling for decentralized load frequency control." Intelligence & Robotics 2, no. 3 (2022): 298–312. http://dx.doi.org/10.20517/ir.2022.27.

Full text
Abstract:
This paper investigates the scheduling process for multi-area interconnected power systems under shared but band-limited networks and decentralized load frequency controllers. To cope with sub-area information and avoid node collision of large-scale power systems, round-robin and try-once-discard scheduling are used to schedule sampling data among different sub-grids. Different from existing decentralized load frequency control methods, this paper studies multi-packet transmission scheme and introduces scheduling protocols to deal with multi-node collision. Considering the scheduling process and decentralized load frequency controllers, an impulsive power system closed-loop model is well established. Furthermore, sufficient stabilization criteria are derived to obtain decentralized output feedback controller gains and scheduling protocol parameters. Under the designed decentralized output feedback controllers, the prescribed system performances have been achieved. Finally, a three-area power system example is used to verify the effectiveness of the proposed scheduling method.
18

Putri, Raissa Amanda. "Aplikasi Simulasi Algoritma Penjadwalan Sistem Operasi." Jurnal Teknologi Informasi 5, no. 1 (July 1, 2021): 98–102. http://dx.doi.org/10.36294/jurti.v5i1.2215.

Full text
Abstract:
Abstract - In the operating system course, various scheduling algorithms with complex calculations are studied. The scheduling algorithms most often used are FIFO (first in, first out) or FCFS (first come, first served), SJF (Shortest Job First), RR (Round Robin), and SRF (Shortest Remaining First). Unfortunately, the teaching of scheduling algorithms often relies only on the Gantt chart as a calculation aid. For this reason, the researcher designed and built a desktop-based operating system scheduling algorithm simulation application as a learning medium for operating system courses. The application can simulate all four queue types and runs a simulation by calculating the start time, completion time, response time, and waiting time for each process. In addition, the system also produces the average response time and average waiting time, as well as a Gantt chart of the entire process. Keywords - Application, Simulation, Operating System, Scheduling Algorithm.
19

Saeidi, Shahram, and Hakimeh Alemi Baktash. "Determining the Optimum Time Quantum Value in Round Robin Process Scheduling Method." International Journal of Information Technology and Computer Science 4, no. 10 (September 1, 2012): 67–73. http://dx.doi.org/10.5815/ijitcs.2012.10.08.

Full text
20

Nageswara Rao, Siva, Ramkumar Jayaraman, and Dr S.V.N Srinivasu. "Efficient PIMRR Algorithm Based on Scheduling Measures for Improving Real Time Systems." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 275. http://dx.doi.org/10.14419/ijet.v7i2.32.15583.

Full text
Abstract:
Scheduling plays an important role in performing single or multiple process activities, considering scheduling criteria such as waiting time, turnaround time, CPU utilization, and context switches. These criteria mainly depend on the quantum time, which is specific to real-time systems. The challenges faced by real-time systems in scheduling are high waiting time, many context switches, and high turnaround time. All the scheduling criteria are integrated to achieve Quality of Service (QoS) in terms of throughput and delay. To improve waiting time, context switches, and turnaround time, the PIMRR algorithm is proposed. PIMRR is first integrated with a modulo operation to assign a priority to every process, and the quantum time is set equal to the average of all the processes' burst times. A performance analysis compares PIMRR with the existing simple round robin, PRR, and priority-based RR scheduling on these criteria. Our results demonstrate that PIMRR is more efficient than the existing algorithms in terms of waiting time and turnaround time versus quantum time.
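Two ingredients of PIMRR are stated directly in the abstract and can be sketched: the quantum equals the average burst time, and a modulo operation yields priorities. The modulo rule below is a labelled guess at its shape; the abstract does not give the exact formula.

```python
def pimrr_quantum(bursts):
    """PIMRR sets the time quantum to the average burst time (per the abstract)."""
    return sum(bursts) / len(bursts)

def modulo_priority(pid, levels=3):
    """Hypothetical reading of the modulo-based priority, not the paper's
    formula: map a process id onto one of `levels` priority classes."""
    return pid % levels

print(pimrr_quantum([10, 20, 30]))   # 20.0
print(modulo_priority(7, 3))         # 1
```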
21

Mohd Pakhrudin, Nor Syazwani, Murizah Kassim, and Azlina Idris. "Cloud service analysis using round-robin algorithm for quality-of-service aware task placement for internet of things services." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (June 1, 2023): 3464. http://dx.doi.org/10.11591/ijece.v13i3.pp3464-3473.

Full text
Abstract:
Round-robin (RR) is an approach to sharing resources in cloud computing in which each user gets a turn using them in an agreed order. It is suited to time-sharing systems, since it automatically reduces the priority-inversion problem in which low-priority tasks are delayed. The time quantum is limited, and only one time quantum per process is allowed in round-robin scheduling. The objective of this research is to improve the functionality of the current RR method for scheduling actions in the cloud by lowering the average waiting, turnaround, and response times. The CloudAnalyst tool was used to enhance the RR technique by changing parameter values, optimizing for high accuracy and low cost. The results show overall minimum and maximum response times of 36.69 and 650.30 ms for a 300-minute RR run. The cost for the virtual machines (VMs) ranges from $0.5 to $3; the longer the time used, the higher the cost of the data transfer. This research is significant in improving communication and the quality of relationships within groups.
22

Gupta, Amit Kumar, Narendra Singh Yadav, and Dinesh Goyal. "Design and Performance Evaluation of Smart Job First Multilevel Feedback Queue (SJFMLFQ) Scheduling Algorithm with Dynamic Smart Time Quantum." International Journal of Multimedia Data Engineering and Management 8, no. 2 (April 2017): 50–64. http://dx.doi.org/10.4018/ijmdem.2017040106.

Full text
Abstract:
The multilevel feedback queue (MLFQ) scheduling algorithm is based on the concept of several queues between which a process moves. In earlier schemes, three queues are defined: the two higher-level queues run Round Robin scheduling and the last-level queue runs FCFS (First Come First Serve). A fixed time quantum is defined for RR scheduling, and the scheduling of a process depends on its arrival time in the ready queue. A lot of prior work exists on MLFQ. In our proposed algorithm, Smart Job First Multilevel Feedback Queue (SJFMLFQ) with smart time quantum (STQ), the processes are arranged in ascending order of their CPU execution time and a Smart Priority Factor (SPF) is calculated, on which processes are scheduled: the process with the lowest SPF value is scheduled first and the process with the highest SPF value is scheduled last. A smart time quantum (STQ) is then calculated for each queue. As a result, we found decreased turnaround time and average waiting time, and increased throughput, compared to previous approaches, and hence an increase in overall performance.
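The queue structure described, two Round Robin levels followed by an FCFS level, can be sketched as follows (fixed quanta and all jobs arriving at t = 0 are simplifying assumptions; the paper's SPF ordering and smart time quantum are not reproduced):

```python
from collections import deque

def mlfq(bursts, quanta=(4, 8)):
    """Three-level MLFQ: RR with quanta[0], then quanta[1], then FCFS.
    Returns each job's finish time; all jobs arrive at t = 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queues = [deque(range(len(bursts))), deque(), deque()]
    t = 0
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)   # highest non-empty
        i = queues[level].popleft()
        # Last level is FCFS: run to completion; RR levels get their quantum.
        run = remaining[i] if level == 2 else min(quanta[level], remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i]:
            queues[level + 1].append(i)    # demote unfinished jobs
        else:
            finish[i] = t
    return finish

print(mlfq([3, 10, 6]))   # [3, 17, 19]
```

Short jobs finish in the top queue, while long jobs sink toward the FCFS level, which is how MLFQ trades off responsiveness against overhead.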
23

Gakher, Ritika, and Saman Rasool. "A New Approach for Dynamic Time Quantum Allocation in Round Robin Process Scheduling Algorithm." International Journal of Computer Applications 102, no. 14 (September 18, 2014): 27–32. http://dx.doi.org/10.5120/17884-8834.

Full text
24

Sahu, Babuli, Sangram Keshari Swain, Sudheer Mangalampalli, and Satyasis Mishra. "Multiobjective Prioritized Workflow Scheduling in Cloud Computing Using Cuckoo Search Algorithm." Applied Bionics and Biomechanics 2023 (July 7, 2023): 1–13. http://dx.doi.org/10.1155/2023/4350615.

Full text
Abstract:
Effective workflow scheduling in cloud computing is still a challenging problem, as incoming workflows at the cloud console have variable task processing capacities and dependencies, arising from various heterogeneous resources. Ineffective scheduling of workflows onto virtual resources in a cloud environment leads to violations of service level agreements and high energy consumption, which affects the cloud provider's quality of service. Many existing authors have developed workflow scheduling algorithms addressing operational cost and makespan, but there is still room to improve the scheduling process in the cloud paradigm, as it is an NP-hard problem. Therefore, in this research, a task-prioritized multiobjective workflow scheduling algorithm was developed using the cuckoo search algorithm to precisely map incoming workflows onto corresponding virtual resources. Extensive simulations were carried out on WorkflowSim using randomly generated workflows from the simulator. To evaluate the efficacy of our proposed approach, we compared it with existing approaches, i.e., Max-Min, first come first serve, minimum completion time, Min-Min, resource allocation security with efficient task scheduling in cloud computing-hybrid machine learning, and Round Robin. Our proposed approach outperforms them, reducing energy consumption by 15% and service level agreement violations by 22%.
25

Bt Ismail, Shafinaz, Darmawaty Bt Mohd Ali, and Norsuzila Ya’acob. "Performance Analysis of Uplink Scheduling Algorithms in LTE Networks." Indonesian Journal of Electrical Engineering and Computer Science 9, no. 2 (February 1, 2018): 373. http://dx.doi.org/10.11591/ijeecs.v9.i2.pp373-379.

Full text
Abstract:
Scheduling refers to the process of allocating resources to user equipment based on scheduling algorithms located at the LTE base station. Various algorithms have been proposed, and the choice of scheduling algorithm remains an open issue in the Long Term Evolution (LTE) standard. This paper studies and compares the performance of three well-known uplink schedulers: Maximum Throughput (MT), First Maximum Expansion (FME), and Round Robin (RR). The evaluation considers a single cell with interference and three flows (best effort, video, and VoIP) in a pedestrian environment using the LTE-SIM network simulator. Performance is evaluated in terms of system throughput, fairness index, delay, and packet loss ratio (PLR). The simulation results show that the RR algorithm always reaches the lowest PLR and delivers the highest throughput for video and VoIP flows among these strategies. Thus, RR is the most suitable scheduling algorithm for VoIP and video flows, while MT and FME are appropriate for best-effort flows in LTE networks.
26

Raof, R. A. A., S. Sudin, N. Mahrom, and A. N. C. Rosli. "Sport Tournament Automated Scheduling System." MATEC Web of Conferences 150 (2018): 05027. http://dx.doi.org/10.1051/matecconf/201815005027.

Full text
Abstract:
Organizers of sport events often face problems such as miscalculated marks and scores, and find it difficult to create a good and reliable schedule. Questions about the integrity of committee members and about human error frequently come into the picture as well. Therefore, the development of a sport tournament automated scheduling system is proposed. The system automatically generates the tournament schedule and calculates the scores of each tournament. The focus is on scheduling the matches of the round-robin and knock-out phases of a sport league. The problem is defined formally and its computational complexity is noted. A solution algorithm is presented using a two-step approach. The first step is the creation of a tournament pattern, based on a known graph-theoretic method. The second is an assignment problem, solved using a constraint-based depth-first branch and bound procedure that assigns actual teams to numbers in the pattern. As a result, scheduling the round-robin and knock-out phases becomes easy for the tournament organizer while the level of reliability increases.
APA, Harvard, Vancouver, ISO, and other styles
27

Li, Kaibin, Zhiping Peng, Delong Cui, and Qirui Li. "SLA-DQTS: SLA Constrained Adaptive Online Task Scheduling Based on DDQN in Cloud Computing." Applied Sciences 11, no. 20 (October 9, 2021): 9360. http://dx.doi.org/10.3390/app11209360.

Full text
Abstract:
Task scheduling is key to performance optimization and resource management in cloud computing systems. Because of its complexity, it has been defined as an NP-hard problem. We introduce an online scheme to solve the problem of task scheduling under dynamic load in the cloud environment. After analyzing the process, we propose a service level agreement constrained adaptive online task scheduling algorithm based on double deep Q-learning (SLA-DQTS) to reduce the makespan, cost, and average overdue time under the constraints of virtual machine (VM) resources and deadlines. In the algorithm, we prevent the model's input dimension from changing with the number of VMs by taking the Gaussian distribution of related parameters as part of the state space. Through the design of the reward function, the model can be optimized for different goals and task loads. We evaluate the performance of the algorithm by comparing it with three heuristic algorithms (Min-Min, random, and round robin) under different loads. The results show that our algorithm can achieve similar or better results than the comparison algorithms at a lower cost.
APA, Harvard, Vancouver, ISO, and other styles
28

N. Sirhan, Najem, and Manel Martinez-Ramon. "LTE Cellular Networks Packet Scheduling Algorithms in Downlink and Uplink Transmission: A Survey." International Journal of Wireless & Mobile Networks 14, no. 2 (April 30, 2022): 1–15. http://dx.doi.org/10.5121/ijwmn.2022.14201.

Full text
Abstract:
This survey paper provides a detailed explanation of Long Term Evolution (LTE) cellular networks' packet scheduling algorithms in both the downlink and uplink directions. It starts by explaining the difference between Orthogonal Frequency Division Multiple Access (OFDMA), which is used in downlink transmission, and Single Carrier Frequency Division Multiple Access (SC-FDMA), which is used in uplink. It then explains the difference between the LTE scheduling processes in the downlink and uplink by describing the interaction between users and the scheduler. Next, it explains the most commonly used downlink and uplink scheduling algorithms by analyzing their formulas, characteristics, the conditions they are most suited to, and the main differences among them. This explanation covers Max Carrier-to-Interference (C/I), Round Robin (RR), Proportional Fair (PF), Earliest Deadline First (EDF), Modified EDF-PF, Modified-Largest Weighted Delay First (M-LWDF), Exponential Proportional Fairness (EXPPF), the Token Queues Mechanism, Packet Loss Ratio (PLR), Quality Guaranteed (QG), Opportunistic Packet Loss Fair (OPLF), Low Complexity (LC), LC-Delay, PF-Delay, Maximum Throughput (MT), First Maximum Expansion (FME), and Adaptive Resource Allocation Based Packet Scheduling (ARABPS). Lastly, it provides some concluding remarks.
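The metric-based schedulers surveyed above can be illustrated with a toy per-slot selection rule. The MT and PF metric forms below are the standard textbook ones, not necessarily the exact formulas analyzed in the paper, and the user names and channel numbers are invented:

```python
# Illustrative per-user scheduling metrics. MT picks the user with the best
# instantaneous channel; PF normalizes by each user's past average throughput.
def mt_metric(inst_rate, avg_thr):       # Maximum Throughput
    return inst_rate

def pf_metric(inst_rate, avg_thr):       # Proportional Fair
    return inst_rate / max(avg_thr, 1e-9)

# Hypothetical users: a cell-edge user with a poor channel but little past
# service, and a cell-center user with a strong channel and high throughput.
users = {
    "cell-edge": {"inst_rate": 2.0, "avg_thr": 0.5},
    "cell-center": {"inst_rate": 10.0, "avg_thr": 8.0},
}

def pick(metric):
    """Schedule the user maximizing the given metric this slot."""
    return max(users, key=lambda u: metric(users[u]["inst_rate"],
                                           users[u]["avg_thr"]))

print(pick(mt_metric))  # → cell-center (highest instantaneous rate)
print(pick(pf_metric))  # → cell-edge (best rate relative to past service)
```

Round Robin, by contrast, ignores channel state entirely and serves users in fixed rotation, which is why it trades throughput for fairness in the comparisons above.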
APA, Harvard, Vancouver, ISO, and other styles
29

Triangga, Hasta, Ilham Faisal, and Imran Lubis. "Analisis Perbandingan Algoritma Static Round-Robin dengan Least-Connection Terhadap Efisiensi Load Balancing pada Load Balancer Haproxy." InfoTekJar (Jurnal Nasional Informatika dan Teknologi Jaringan) 4, no. 1 (September 25, 2019): 70–75. http://dx.doi.org/10.30743/infotekjar.v4i1.1688.

Full text
Abstract:
In IT networking, load balancing is used to share traffic between backend servers; the idea is to make load sharing effective and efficient. Load balancing uses scheduling algorithms, including the Static round-robin and Least-connection algorithms. Haproxy is a load balancer that performs load balancing and runs on Linux operating systems. In this research, Haproxy uses 4 Nginx web servers as backends. Haproxy acts as a reverse proxy accessed by the client, while the backend servers handle the HTTP requests. The experiment involves 20 client PCs performing HTTP requests simultaneously, using the Static round-robin and Least-connection algorithms on the Haproxy load balancer alternately. With the Static round-robin algorithm, the average CPU usage percentages over 1 minute, 5 minutes, and 15 minutes are 0.1%, 0.25%, and 1.15%, with an average throughput of 14.74 kbps. The average total delay and jitter are 181.3 ms and 11.1 ms, respectively. For the Least-connection algorithm, the averages over 1 minute, 5 minutes, and 15 minutes are 0.1%, 0.3%, and 1.25%, with an average throughput of 14.66 kbps. The average total delay and jitter are 350.3 ms and 24.5 ms, respectively. This means the Static round-robin algorithm is more efficient than the Least-connection algorithm, as it produces greater throughput with less CPU load and less total delay.
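The two Haproxy policies compared here reduce to two backend-selection rules, sketched below in isolation (backend names are made up and no real networking is involved):

```python
from collections import deque
from itertools import cycle

class RoundRobinBalancer:
    """Static round-robin: hand out backends in a fixed rotation."""
    def __init__(self, backends):
        self._cycle = cycle(backends)

    def pick(self):
        return next(self._cycle)

class LeastConnectionBalancer:
    """Least-connection: pick the backend with the fewest active connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def pick(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        """Call when a connection to `backend` finishes."""
        self.active[backend] -= 1

rr = RoundRobinBalancer(["web1", "web2", "web3", "web4"])
print([rr.pick() for _ in range(6)])  # rotation wraps: web1..web4, web1, web2

lc = LeastConnectionBalancer(["web1", "web2", "web3", "web4"])
first = lc.pick()   # all tied at 0 connections
lc.release(first)   # that connection finishes
print(lc.pick())    # the released backend is least-loaded again
```

Static round-robin is stateless per request, which is one reason it can show lower CPU overhead than connection-tracking policies in experiments like the one above.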
APA, Harvard, Vancouver, ISO, and other styles
30

Saurabh, Saurabh, and Rajesh Kumar Dhanaraj. "Improved QoS with Fog computing based on Adaptive Load Balancing Algorithm." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 5 (May 17, 2023): 347–62. http://dx.doi.org/10.17762/ijritcc.v11i5.6623.

Full text
Abstract:
As the number of sensing devices rises, traffic on cloud servers is increasing day by day. When a device connected to the IoT wants access to data, cloud computing encourages the pairing of fog and cloud nodes to provide that information. One of the key needs in a fog-based cloud system is efficient job scheduling to decrease data delay and improve QoS (Quality of Service). Researchers have used a variety of strategies to maintain QoS criteria. However, because of the increased service delay caused by bursty traffic, job scheduling is impacted, which leads to an unbalanced load in the fog environment. The proposed work uses a novel model that combines the features and working style of a genetic algorithm and an optimization algorithm with load-balancing scheduling on the fog nodes. The performance of the proposed hybrid model is contrasted with other well-known algorithms on the fundamental benchmark optimization test functions. The proposed work displays better results in sustaining the task scheduling process when compared to the existing algorithms, which include the Round Robin (RR) method and the Hybrid RR, Hybrid Threshold-based, and Hybrid Predictive-based models, confirming the efficacy of the proposed load balancing model for improving quality of service in the fog environment.
APA, Harvard, Vancouver, ISO, and other styles
31

Et al., Hassan. "PWRR Algorithm for Video Streaming Process Using Fog Computing." Baghdad Science Journal 16, no. 3 (September 1, 2019): 0667. http://dx.doi.org/10.21123/bsj.2019.16.3.0667.

Full text
Abstract:
The most popular medium used by people on the internet nowadays is video streaming. Nevertheless, streaming video consumes much of the internet's traffic: nearly 70% of internet usage goes to video streaming. Some constraints of interactive media, such as increased bandwidth usage and latency, might be removed. The need for real-time transmission of live video streams leads to the employment of fog computing technologies, an intermediary layer between the cloud and the end user. The latter technology has been introduced to alleviate those problems by providing fast real-time response and computational resources near the client at the network boundary. The present research paper proposes a priority weighted round robin (PWRR) algorithm for scheduling streaming operations in the fog architecture. This gives preemption to live video streaming requests so that they are delivered with a very short response time and real-time communication. The results of experimenting with PWRR in the proposed architecture display minimized latency and good quality for live video requests under bandwidth changes, while meeting all other clients' requests at the same time.
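As a rough sketch of a priority weighted round robin queue of the kind the paper describes, one round might serve live requests ahead of other traffic. The class names and weights below are illustrative assumptions; the paper's actual parameters are not given in the abstract:

```python
from collections import deque

# Assumed traffic classes and weights: live streams get 3 service slots per
# round, video-on-demand gets 1. These values are not from the paper.
WEIGHTS = {"live": 3, "vod": 1}
queues = {"live": deque(), "vod": deque()}

def enqueue(kind, request):
    queues[kind].append(request)

def next_round():
    """One weighted round: serve up to `weight` requests from each class,
    higher-weight (higher-priority) class first."""
    served = []
    for kind in sorted(WEIGHTS, key=WEIGHTS.get, reverse=True):
        for _ in range(WEIGHTS[kind]):
            if queues[kind]:
                served.append(queues[kind].popleft())
    return served

for i in range(4):
    enqueue("vod", f"vod-{i}")
enqueue("live", "live-0")
enqueue("live", "live-1")
print(next_round())  # → ['live-0', 'live-1', 'vod-0']: live drains first
```

The weighting keeps lower-priority traffic from starving outright: each round still reserves at least one slot for the non-live queue.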
APA, Harvard, Vancouver, ISO, and other styles
32

Alsmadi, Ahmad Mohammad, Roba Mahmoud Ali Aloglah, Nisrein Jamal sanad Abu-darwish, Ahmad Al Smadi, Muneerah Alshabanah, Daniah Alrajhi, Hanouf Alkhaldi, and Mutasem K. Alsmadi. "Fog computing scheduling algorithm for smart city." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (June 1, 2021): 2219. http://dx.doi.org/10.11591/ijece.v11i3.pp2219-2228.

Full text
Abstract:
With the growing number of smart devices across the globe, the number of users on the Internet is increasing. The main aim of the fog computing (FC) paradigm is to connect a huge number of smart objects (billions of objects), which can make a bright future for smart cities. Due to the large deployments of smart devices, devices are expected to generate huge amounts of data and forward them through the Internet. FC also refers to an edge computing framework that mitigates this issue by applying the process of knowledge discovery, using a data analysis approach, at the edges. Thus, FC approaches can work together with the internet of things (IoT) world to build a sustainable infrastructure for smart cities. In this paper, we propose a scheduling algorithm, namely the weighted round-robin (WRR) scheduling algorithm, to execute tasks from one fog node (FN) to another fog node to the cloud. Firstly, a fog simulator is used with the emergent concept of FC to design IoT infrastructure for smart cities. Then, the spanning-tree protocol (STP) is used for data collection and routing. Further, 5G networks are proposed to establish fast transmission and communication between users. Finally, the performance of our proposed system is evaluated in terms of response time, latency, and amount of data used.
APA, Harvard, Vancouver, ISO, and other styles
33

AL-SAFAR, AHMED. "Hybrid CPU Scheduling algorithm SJF-RR In Static Set of Processes." Journal of Al-Rafidain University College For Sciences ( Print ISSN: 1681-6870 ,Online ISSN: 2790-2293 ), no. 1 (October 20, 2021): 36–60. http://dx.doi.org/10.55562/jrucs.v29i1.377.

Full text
Abstract:
The Round Robin (RR) algorithm is widely used in modern operating systems (OS) as it has better responsiveness, with a periodic quantum occurring at regular intervals, and good features such as low scheduling overhead: scheduling n processes in a ready queue takes constant time, O(1). However, RR has worse features, such as low throughput and long average turnaround and waiting times, and the number of context switches for n processes is n switches. Shortest Job First (SJF), on the other hand, is not itself practical in time-sharing OSs due to its low responsiveness, and its scheduling overhead for n processes in a ready queue is O(n); but the good features of the SJF algorithm are the best average turnaround and waiting times. By considering a static set of n processes, the desirable features of a CPU scheduler, maximizing CPU utilization and response time while minimizing waiting and turnaround times, are obtained by combining the kernel properties of the SJF algorithm with the best features of the RR algorithm to produce a new, original algorithm called "Hybrid CPU Scheduling algorithm SJF-RR in Static Set of Processes", proposed in this research. The proposed algorithm is implemented through an innovative optimal equation that adapts the time quantum for each process in each round as a periodic quantum occurring at irregular intervals. That is, while applying the proposed algorithm, mathematical calculations take place to obtain a particular time quantum for each process. Once a criterion has been selected for comparison, deterministic modeling with the same input numbers proves that the proposed algorithm is the best.
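A minimal sketch of the hybrid idea, RR service over an SJF-ordered static ready queue, can be simulated to compute the usual metrics. The paper's per-round quantum equation is not given in the abstract, so the mean of the remaining bursts is used below purely as a stand-in:

```python
def hybrid_sjf_rr(bursts):
    """Simulate RR over an SJF-ordered static ready queue (all arrivals at 0).
    Returns (average waiting time, average turnaround time).
    The per-round dynamic quantum here (mean of remaining bursts) is an
    assumption; the paper's actual equation is not stated in the abstract."""
    procs = sorted(range(len(bursts)), key=lambda i: bursts[i])  # SJF order
    remaining = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while any(r > 0 for r in remaining):
        alive = [r for r in remaining if r > 0]
        q = max(1, sum(alive) // len(alive))     # assumed dynamic quantum
        for p in procs:                          # one RR round in SJF order
            if remaining[p] > 0:
                run = min(q, remaining[p])
                t += run
                remaining[p] -= run
                if remaining[p] == 0:
                    finish[p] = t
    n = len(bursts)
    turnaround = finish                          # arrival time 0 for all
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n, sum(turnaround) / n

print(hybrid_sjf_rr([24, 3, 3]))  # → (3.0, 13.0)
```

Because short jobs are dispatched first each round, they finish early as in SJF, while the quantum still bounds how long any single process monopolizes the CPU, as in RR.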
APA, Harvard, Vancouver, ISO, and other styles
34

R., Pushpalatha, and Ramesh B. "Taylor CFRO-Based Deep Learning Model for Service-Level Agreement-Aware VM Migration and Workload Prediction-Enabled Power Model in Cloud Computing." International Journal of Swarm Intelligence Research 13, no. 1 (April 13, 2023): 1–31. http://dx.doi.org/10.4018/ijsir.304724.

Full text
Abstract:
In this research, Taylor Chaotic Fruitfly Rider Optimization (TaylorCFRO)-based Deep Belief Network (DBN) approach is designed for workload prediction and Service level agreement (SLA)-aware Virtual Machine (VM) migration in the cloud. In this model, the round robin technique is applied for the task scheduling process. The Chaotic Fruitfly Rider Optimization driven Neural Network (CFRideNN) is also introduced in order to perform workload prediction. The DBN classifier is employed to detect SLA violations, and the DBN is trained using devised optimization model, named the TaylorCFRO technique. Accordingly, the introduced TaylorCFRO approach is newly designed by incorporating the Taylor series, Chaotic Fruitfly Optimization Algorithm (CFOA), and Rider Optimization Algorithm (ROA). The developed TaylorCFRO-based DBN scheme outperformed other workload and SLA Violation (SLAV) detection methods with violation detection rate of 0.8048, power consumption of 0.0132, SLAV of 0.0215, and load of 0.0033.
APA, Harvard, Vancouver, ISO, and other styles
35

Munaye, Yirga Yayeh, Rong-Terng Juang, Hsin-Piao Lin, Getaneh Berie Tarekegn, and Ding-Bing Lin. "Deep Reinforcement Learning Based Resource Management in UAV-Assisted IoT Networks." Applied Sciences 11, no. 5 (March 1, 2021): 2163. http://dx.doi.org/10.3390/app11052163.

Full text
Abstract:
The resource management in wireless networks with massive Internet of Things (IoT) users is one of the most crucial issues for the advancement of fifth-generation networks. The main objective of this study is to optimize the usage of resources for IoT networks. Firstly, the unmanned aerial vehicle is considered to be a base station for air-to-ground communications. Secondly, according to the distribution and fluctuation of signals; the IoT devices are categorized into urban and suburban clusters. This clustering helps to manage the environment easily. Thirdly, real data collection and preprocessing tasks are carried out. Fourthly, the deep reinforcement learning approach is proposed as a main system development scheme for resource management. Fifthly, K-means and round-robin scheduling algorithms are applied for clustering and managing the users’ resource requests, respectively. Then, the TensorFlow (python) programming tool is used to test the overall capability of the proposed method. Finally, this paper evaluates the proposed approach with related works based on different scenarios. According to the experimental findings, our proposed scheme shows promising outcomes. Moreover, on the evaluation tasks, the outcomes show rapid convergence, suitable for heterogeneous IoT networks, and low complexity.
APA, Harvard, Vancouver, ISO, and other styles
36

Christos Liaskos and Kostas Katsalis. "A scheduling framework for performing resource slicing with guarantees in 6G RIS-enabled smart radio environments." ITU Journal on Future and Evolving Technologies 4, no. 1 (February 21, 2023): 33–49. http://dx.doi.org/10.52953/oytf1310.

Full text
Abstract:
Smart Radio Environments (SRE) transform the wireless propagation phenomenon into a programmable process. Leveraging multiple Reconfigurable Intelligent Surfaces (RIS), the wireless waves emitted by a device can be almost freely routed and manipulated, reaching their end destination via improbable paths, with minimized fading and path losses. This work begins with the observation that each such wireless communication customization occupies a certain number of RIS units, e.g., to form a wireless path with consecutive customized reflections. Therefore, SREs can be modeled as a resource of constrained capacity, which needs to be sliced among interested clients. This work provides a foundational model of SRE-as-a-resource, defining Service Level Agreements (SLAs) and Service Level Objectives (SLOs) for the SRE client requests. Employing this model, we study a class of negative drift dynamic weighted round robin policies that is able to guarantee specific SRE resource shares to competing user requests. We provide a general mathematical framework in which the class of policies mapping user requests to resources does not require statistical knowledge regarding the arrival distribution or the duration of each user communication. We study the meaning of work conserving and non-work conserving modes of SRE operation, and also study the convergence properties of our scheduling framework for both cases. Finally, we perform the feasibility space analysis for our framework and we validate our analysis through extensive simulations.
APA, Harvard, Vancouver, ISO, and other styles
37

Kuramata, Michiya, Ryota Katsuki, and Kazuhide Nakata. "Solving large break minimization problems in a mirrored double round-robin tournament using quantum annealing." PLOS ONE 17, no. 4 (April 8, 2022): e0266846. http://dx.doi.org/10.1371/journal.pone.0266846.

Full text
Abstract:
Quantum annealing has gained considerable attention because it can be applied to combinatorial optimization problems, which have numerous applications in logistics, scheduling, and finance. In recent years, with the technical development of quantum annealers, research on solving practical combinatorial optimization problems with them has accelerated. However, researchers struggle to find practical combinatorial optimization problems for which quantum annealers outperform mathematical optimization solvers. Moreover, there are only a few studies that compare the performance of quantum annealers with state-of-the-art solvers such as Gurobi and CPLEX. This study determines that quantum annealing demonstrates better performance than the solvers, in that the solvers take longer to reach the objective function value of the solution obtained by the quantum annealer for the break minimization problem in a mirrored double round-robin tournament. We also explain the desirable performance of quantum annealing for sparse interaction between variables and a problem without constraints. In this process, we demonstrate that this problem can be expressed as a 4-regular graph. Through computational experiments, we solve this problem using our quantum annealing approach and an integer programming approach, performed using the latest quantum annealer, D-Wave Advantage, and Gurobi, respectively. Further, we compare the quality of the solutions and the computational time. Quantum annealing was able to determine the exact solution in 0.05 seconds for problems with 20 teams, which is a practical size. In the case of 36 teams, it took 84.8 s for the integer programming method to reach the objective function value obtained by the quantum annealer in 0.05 s. These results not only present the break minimization problem in a mirrored double round-robin tournament as an example of applying quantum annealing to practical optimization problems, but also contribute to finding problems that can be effectively solved by quantum annealing.
APA, Harvard, Vancouver, ISO, and other styles
38

BOKIYE, Lencho M., and Ilker Ali OZKAN. "HYBRID LOAD BALANCING POLICY TO OPTIMIZE RESOURCE DISTRIBUTION AND RESPONSE TIME IN CLOUD ENVIRONMENT." International Journal of Applied Mathematics Electronics and Computers 10, no. 4 (December 31, 2022): 101–9. http://dx.doi.org/10.18100/ijamec.1158866.

Full text
Abstract:
Load balancing and task scheduling are the main challenges in cloud computing. Existing load balancing algorithms have a drawback in that they do not consider the capacity of virtual machines while distributing loads among them. The proposed algorithm works toward solving existing issues such as fair load distribution, avoiding underloading and overloading, and improving response time. It implements the best practices of the Throttled load balancing algorithm and the Equally Shared Current Execution algorithm. Virtual machines are selected based on the ratio of their bandwidth and load allocation count: requests are sent to a virtual machine with higher bandwidth and a lower load allocation count. The proposed algorithm checks the availability of VMs based on their capacity. This is performed by selecting two VMs and comparing their vmWeight values; the one with the least vmWeight is selected. CloudAnalyst is used for simulation, response time evaluation, and resource utilization evaluation. The simulation results of the proposed algorithm are compared with three well-known load-balancing algorithms: Round Robin, the Throttled load balancing algorithm, and Enhanced Active Monitoring load balancing. The proposed algorithm improves on the other algorithms in load distribution, response time, and resource utilization. All virtual machines in the data centers are loaded with a relatively equal number of tasks according to their capacity. This results in fair resource sharing and load distribution.
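The two-candidate vmWeight comparison can be sketched as follows. The exact weight formula is not spelled out in the abstract, so allocation count divided by bandwidth is assumed here, which makes "least vmWeight" coincide with "higher bandwidth and lower allocation count":

```python
class VM:
    """A virtual machine with a bandwidth capacity and a running count of
    how many tasks have been allocated to it."""
    def __init__(self, name, bandwidth):
        self.name = name
        self.bandwidth = bandwidth
        self.alloc_count = 0

    def vm_weight(self):
        # Assumed formula: lower weight = higher bandwidth, fewer tasks.
        return self.alloc_count / self.bandwidth

def pick(a, b):
    """Compare two candidate VMs and route the request to the one with the
    least vmWeight, incrementing its allocation count."""
    chosen = a if a.vm_weight() <= b.vm_weight() else b
    chosen.alloc_count += 1
    return chosen

fast = VM("fast", bandwidth=1000)
slow = VM("slow", bandwidth=100)
for _ in range(11):
    pick(fast, slow)
print(fast.alloc_count, slow.alloc_count)  # → 10 1
```

Under this weight, the high-bandwidth VM absorbs roughly ten times the load of the low-bandwidth one, which matches the goal of loading VMs in proportion to their capacity.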
APA, Harvard, Vancouver, ISO, and other styles
39

Tareen, Faheem Nawaz, Ahmad Naseem Alvi, Asad Ali Malik, Muhammad Awais Javed, Muhammad Badruddin Khan, Abdul Khader Jilani Saudagar, Mohammed Alkhathami, and Mozaherul Hoque Abul Hasanat. "Efficient Load Balancing for Blockchain-Based Healthcare System in Smart Cities." Applied Sciences 13, no. 4 (February 13, 2023): 2411. http://dx.doi.org/10.3390/app13042411.

Full text
Abstract:
Smart cities are emerging rapidly due to the comfort they provide in the human lifestyle. The healthcare system is an important segment of the smart city. Timely delivery of critical human vital sign data to emergency health centers can save human lives. Blockchain is a secure technology that provides immutable record-keeping of data. Secure data transmission that avoids erroneous data delivery also demands blockchain technology in the healthcare systems of smart cities, where patients' health histories are required for their treatment. The health parameter data of each patient are embedded in a separate block in blockchain technology with SHA-256-based cryptographic hash values. Mining computing nodes are responsible for finding a 32-bit nonce (number only used once) for each data block to compute a valid SHA-256-based hash value in blockchain technology. Computing the nonce for valid hash values is a time-consuming process that may cost lives in a healthcare system. Increasing the number of mining nodes reduces this delay; however, the uniform distribution of mining data blocks to these nodes while accounting for priority data is a challenging task. In this work, an efficient scheme is proposed for scheduling nonce computing tasks at the mining nodes to ensure their timely execution. The proposed scheme consists of two parts: the first provides a load balancing scheme to distribute the nonce execution tasks among the mining nodes such that makespan is minimized, and the second prioritizes more sensitive patient data for quick execution. The results show that the proposed load balancing scheme allocates data blocks to the mining nodes more effectively than round-robin and greedy algorithms and computes hash values of most of the higher-risk patients' data blocks in a reduced amount of time.
APA, Harvard, Vancouver, ISO, and other styles
40

Le, Dan, Charles Henry Lim, Rouhi Fazelzad, and Monika K. Krzyzanowska. "Safety culture interventions in cancer care: A systematic review." Journal of Clinical Oncology 37, no. 27_suppl (September 20, 2019): 247. http://dx.doi.org/10.1200/jco.2019.37.27_suppl.247.

Full text
Abstract:
247 Background: Creation of a culture of safety in healthcare organizations is fundamentally important to patient safety. However, there is limited guidance on how to effectively promote a culture of safety in healthcare settings including in oncology. We performed a systematic review to identify interventions or strategies to promote safety culture in cancer care. Methods: Medical Subject Headings and text words for “safety culture” and “cancer care” were combined to conduct structured searches in MEDLINE, EMBASE, CDSR, CINAHL, Cochrane CENTRAL, Epub Ahead of Print & In-Process, PsycINFO, Scopus, and Web of Science databases, for peer-reviewed articles published between 1999 and 2017. Articles were included if they described an intervention or strategy to promote safety culture in an oncology setting, and quantitative outcomes were reported. Study quality was assessed using the ROBINS-I risk of bias tool. Results: We screened 21,572 studies, of which 46 underwent full-text review, and 19 met the inclusion criteria. Studies described interventions in radiation oncology (15 articles), medical oncology (3), and general oncology (1) settings in either North America (15) or Europe (4). The most common experimental designs were interrupted time series (10) or before-and-after comparisons (6), and were of either moderate (89%) or severe (11%) risk of bias. Interventions varied but could be broadly categorized as incident learning systems (8), quality improvement programs (7), provider education programs (2), a provider scheduling system (1), and a patient safety champion intervention (1). While 89% of studies reported improvement in safety culture, there was substantial heterogeneity in evaluated outcomes. Most assessed provider outcomes such as number of reported adverse events (11) or Agency for Healthcare Research and Quality Safety Culture survey results (7). 
Conclusions: Despite a growing evidence base to identify interventions to promote safety culture in cancer care, definitive recommendations were difficult to make due to heterogeneity in study designs and outcomes. Given the importance of safety culture in cancer care, additional high-quality studies and standardization of outcome measures are needed.
APA, Harvard, Vancouver, ISO, and other styles
41

"Round Robin Scheduling Algorithm based on Dynamic time quantum." International Journal of Engineering and Advanced Technology 8, no. 6 (August 30, 2019): 593–95. http://dx.doi.org/10.35940/ijeat.f8070.088619.

Full text
Abstract:
After studying various CPU scheduling algorithms in operating systems, the Round Robin scheduling algorithm is found to be the most optimal algorithm in time-shared systems because of the static time quantum designated for every process. The efficacy of the Round Robin algorithm depends entirely on the static time quantum selected. After studying and analyzing the Round Robin algorithm, I have proposed a new modified Round Robin algorithm based on the shortest remaining burst time, which results in a dynamic time quantum in place of the static time quantum. This improves the performance of the existing algorithm by reducing average waiting time and turnaround time and minimizing the number of context switches.
APA, Harvard, Vancouver, ISO, and other styles
42

Berliantara, Agung Yudha. "SCHEDULING OPTIMIZATION FOR EXTRACT, TRANSFORM, LOAD (ETL) PROCESS ON DATA WAREHOUSE USING ROUND ROBIN METHOD (CASE STUDY: UNIVERSITY of XYZ)." Journal of Information Technology and Computer Science 2, no. 2 (December 4, 2017). http://dx.doi.org/10.25126/jitecs.20172232.

Full text
Abstract:
ETL scheduling is a challenging and exciting issue to solve. The ETL scheduling problem has many facets, one of which is the cost of time. If it is not handled correctly, it may take a very long time to execute and yield inconsistent data on very large datasets. This study uses the round-robin algorithm, which proved able to produce efficient results in accordance with conventional methods. After the research was conducted, the difference between the two methods lies in execution time. Through this experiment, the round-robin scheduling method gives a more efficient execution time, up to 61% better depending on the amount of data and the number of partitions used.
APA, Harvard, Vancouver, ISO, and other styles
43

Abubakar, Suleiman Ebaiya. "Modified Round Robin with Highest Response Ratio Next CPU Scheduling Algorithm using Dynamic Time Quantum." SLU Journal of Science and Technology, March 31, 2023, 87–99. http://dx.doi.org/10.56471/slujst.v6i.363.

Full text
Abstract:
Background: The most popular time-sharing operating system scheduling technique, whose efficiency depends heavily on time slice selection, is the round robin CPU scheduling algorithm. The time slice makes the algorithm behave like First-Come-First-Serve (FCFS) scheduling if it is too large, or like a processor-sharing algorithm if it is extremely small. Some existing research papers present an algorithm called Improved Round Robin with Highest Response Ratio Next (IRRHRRN), which makes use of the response ratio with a predefined time quantum of 10 ms, with the major aim of avoiding starvation. However, the IRRHRRN algorithm favors processes with shorter burst times over those with longer burst times and gives no regard to process arrival time, thus leading to starvation. Aim: This study tries to improve on the IRRHRRN algorithm by proposing the Modified Round Robin with Highest Response Ratio Next (MRRHRRN) CPU scheduling algorithm using a dynamic time quantum in order to reduce the problem of starvation. Method: A dynamic method of determining the time quantum was adopted. Results: The proposed algorithm was compared with four other existing algorithms, Standard Round Robin (RR), Improved Round Robin (IRR), An Additional Improvement in Round Robin (AAIRR), and the Improved Round Robin with Highest Response Ratio Next (IRRHRRN), and it provided some promising results: an Average Waiting Time of 35407.6 ms, an Average Turnaround Time of 36117.6 ms, an Average Response Time of 10894.8 ms, and 301 context switches for the non-zero arrival time processes.
APA, Harvard, Vancouver, ISO, and other styles
44

"Convoy Effect Elimination in FCFS Scheduling." International Journal of Engineering and Advanced Technology 9, no. 3 (February 29, 2020): 3218–21. http://dx.doi.org/10.35940/ijeat.c6092.029320.

Full text
Abstract:
One of the important activities of an operating system is process scheduling. Many algorithms are available for scheduling, such as First Come First Served, Shortest Job First, Priority Scheduling, and Round Robin. The fundamental algorithm is First Come First Served, which has the drawback of the convoy effect: the convoy effect occurs when small processes wait for a lengthy process to complete. In this paper a novel method is proposed to reduce the convoy effect and make scheduling optimal, reducing average waiting time and turnaround time.
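The convoy effect is easy to quantify for jobs that all arrive at time 0: running a long job first inflates the average waiting time that a shortest-job-first ordering avoids. A small sketch with made-up burst times:

```python
def avg_waiting(bursts):
    """Average waiting time for a non-preemptive schedule run in list order
    (all jobs assumed to arrive at time 0)."""
    t, total = 0, 0
    for b in bursts:
        total += t   # this job waited until everything before it finished
        t += b
    return total / len(bursts)

fcfs = [24, 3, 3]           # long job arrives first: convoy effect
sjf = sorted(fcfs)          # shortest job first order
print(avg_waiting(fcfs))    # → 17.0
print(avg_waiting(sjf))     # → 3.0
```

The two short jobs sit behind the 24-unit job under FCFS, more than quintupling the average wait compared with the SJF ordering of the same workload.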
APA, Harvard, Vancouver, ISO, and other styles
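The convoy effect described in the abstract is easy to demonstrate numerically: under FCFS, the same workload yields a much higher average waiting time when a long job heads the queue. The job lengths below are illustrative (echoing the classic 24/3/3 textbook example); the paper's proposed mitigation is not reproduced here.

```python
def fcfs_waits(bursts):
    """Average waiting time under FCFS for jobs arriving at t=0
    in the given queue order."""
    total_wait, elapsed = 0, 0
    for b in bursts:
        total_wait += elapsed  # each job waits for all jobs before it
        elapsed += b
    return total_wait / len(bursts)

# Convoy effect: identical jobs, long one first vs. last.
convoy = fcfs_waits([24, 3, 3])   # long job heads the queue -> 17.0
better = fcfs_waits([3, 3, 24])   # short jobs run first     -> 3.0
```

Reordering the same three jobs cuts the average wait from 17.0 to 3.0 time units, which is exactly the inefficiency that convoy-effect elimination schemes attack.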
45

"Performance Evaluation of Hybrid Round Robin Algorithm and Modified Round Robin Algorithm in Cloud Computing." International Journal of Recent Technology and Engineering 8, no. 2 (July 30, 2019): 5047–51. http://dx.doi.org/10.35940/ijrte.a9139.078219.

Full text
Abstract:
Round Robin (RR) is an impartial scheduling algorithm that schedules cloud resources by giving a static time quantum to all processes. The choice of time quantum is crucial, as it determines the performance of the algorithm. This research paper suggests an approach to improve the RR scheduling algorithm in cloud computing by setting the quantum equal to the burst time of the first request and varying it dynamically after each execution of a request. In addition, if the remaining CPU burst time of the currently executing process is less than the time quantum, the CPU is allocated to that process again for the rest of its burst. MATLAB was used to implement the proposed algorithm, which was benchmarked against the MRRA available in the literature. The proposed algorithm recorded a lower Average Turnaround Time (ATAT) and a minimal Average Waiting Time (AWT). Based on the simulated results, the proposed algorithm should be preferred over the modified round robin algorithm, as it significantly improves system efficiency. Keywords: Cloud Computing, Throughput, Cloud Services, Response Time, Turnaround Time.
APA, Harvard, Vancouver, ISO, and other styles
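One reading of the described policy can be sketched as follows: the request at the head of the queue sets the quantum for each round, and any process whose remaining burst is at most the quantum runs to completion. The burst values and the exact quantum-update rule are assumptions for illustration, not the paper's precise specification.

```python
def cloud_rr(bursts):
    """RR where each round's quantum equals the remaining burst of the
    request at the head of the queue; a process whose remaining burst is
    at most the quantum runs to completion. All requests arrive at t=0.
    Returns (average_waiting_time, average_turnaround_time)."""
    n = len(bursts)
    remaining = list(bursts)
    queue = list(range(n))
    time, finish = 0, [0] * n
    while queue:
        quantum = remaining[queue[0]]  # head request sets this round's quantum
        next_queue = []
        for i in queue:
            run = remaining[i] if remaining[i] <= quantum else quantum
            time += run
            remaining[i] -= run
            if remaining[i] == 0:
                finish[i] = time
            else:
                next_queue.append(i)   # survives into the next round
        queue = next_queue
    waits = [finish[i] - bursts[i] for i in range(n)]
    return sum(waits) / n, sum(finish) / n
```

Because the head request always finishes within its own round, each round shrinks the queue by at least one process, so the quantum adapts as shorter or longer requests reach the front.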
46

Upanshu Kumar and Shatendra Dubey. "Efficient Load Balancing of Resources for Different Cloud Service Providers in Cloud Computing." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, January 4, 2023, 09–16. http://dx.doi.org/10.32628/cseit239016.

Full text
Abstract:
Cloud computing is an emerging technology in distributed computing that provides a pay-per-use model according to user demand. The cloud has a collection of virtual machines that facilitate both computational and storage requirements. Scheduling and load balancing, on which we focus here, are the main challenges in cloud computing. Scheduling is the process of controlling the order of work performed by a computer system. Load balancing plays an important role in cloud computing performance: better load balancing makes cloud computing more efficient and increases user satisfaction. It provides a way to handle the many requests arriving in a cloud computing environment. Complete balancing comprises two tasks: resource provisioning/resource allocation and task scheduling throughout the system. In this paper, we present a hybrid algorithm built from the FCFS and Round Robin algorithms. Because Round Robin is the simplest algorithm, it is frequently used and is the first choice for implementing simple schedulers; it only requires a list of nodes. In the proposed solution, we eliminate the drawbacks of the simple Round Robin algorithm by assigning time slices to different processes depending on their priorities.
APA, Harvard, Vancouver, ISO, and other styles
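The core idea of the proposal, Round Robin over a FCFS-admitted queue with priority-dependent time slices, could look like the sketch below. The slice formula (`base_slice * priority`) and the task values are assumptions for illustration, not the paper's exact scheme.

```python
from collections import deque

def priority_rr(tasks, base_slice=2):
    """Round robin over a FCFS-ordered queue where each task's time
    slice is base_slice * priority (higher priority -> longer slice).
    tasks: list of (name, burst, priority).
    Returns the order in which tasks complete."""
    remaining = {name: burst for name, burst, prio in tasks}
    prio = {name: p for name, b, p in tasks}
    queue = deque(name for name, b, p in tasks)  # FCFS admission order
    order = []
    while queue:
        name = queue.popleft()
        slice_ = base_slice * prio[name]         # priority-scaled slice
        remaining[name] -= min(slice_, remaining[name])
        if remaining[name] == 0:
            order.append(name)
        else:
            queue.append(name)                    # re-queue the remainder
    return order
```

With equal bursts, a higher-priority task finishes in fewer passes, yet every task keeps receiving CPU time each cycle, so no process starves.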
47

Kusuma, Purba Daru. "Hybrid Make-to-Stock and Make-to-Order (MTS-MTO)Scheduling Model in Multi-Product Production System." International Journal of Integrated Engineering 14, no. 4 (June 21, 2022). http://dx.doi.org/10.30880/ijie.2022.14.04.014.

Full text
Abstract:
In the production process, scheduling plays an important role in meeting orders and reducing cost. This process becomes more complicated when the factory produces various products. Scheduling models in the production process can be divided into two groups: make-to-stock (MTS) and make-to-order (MTO). General constraints in the MTS model are limited production capacity and inventory cost. This work proposes hybrid MTS-MTO models that can improve lead time while maintaining low inventory. We propose three hybrid MTS-MTO scheduling models for the multi-product production system. In these models, we modify several scheduling algorithms used in computer systems, such as shortest remaining time (SRT), shortest processing time (SPT), and Round Robin (RR). The models are the hybrid (s, S)-first-come-first-served (FCFS) model, the hybrid modified (s, Q)-SPT model, and the modified (s, Q)-SRT model. These models are then implemented in a production simulation, where we compare them with the existing item-by-item (s, S) model. Based on the simulation results, the proposed models perform better than the basic MTS model. The hybrid (s, Q)-modified SPT model performs best, creating a high completion ratio, low lead time, and low inventory ratio. Under certain conditions, the proposed model achieves 344 percent in completion ratio, 19.8 percent in lead time, and 3 percent in inventory ratio compared with the existing model.
APA, Harvard, Vancouver, ISO, and other styles
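The SPT rule that the best-performing hybrid builds on sequences jobs by ascending processing time, which minimizes mean flow time when all jobs are available simultaneously. A minimal sketch (job names and processing times are illustrative):

```python
def spt_order(jobs):
    """Shortest processing time first: sequence jobs by ascending
    processing time. jobs: dict name -> processing time.
    Returns (sequence, mean flow time)."""
    order = sorted(jobs, key=jobs.get)   # shortest job first
    t, flows = 0, []
    for name in order:
        t += jobs[name]                  # job completes at cumulative time
        flows.append(t)
    return order, sum(flows) / len(flows)
```

Running the short jobs first keeps many jobs' completion times small, which is why SPT-style dispatching improves lead time in the production setting the paper studies.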
48

"Time Restraint Load Balancing in Cloud Environment." International Journal of Grid and High Performance Computing 14, no. 1 (January 2022): 0. http://dx.doi.org/10.4018/ijghpc.301592.

Full text
Abstract:
The outlook for cloud computing grows day by day. It is a developing field that is evolving and giving new ways to build, manage, and process data. The most difficult task in cloud computing is to provide the best quality parameters, such as meeting deadlines, minimizing makespan time, and increasing utilization of resources. Therefore, a dynamic scheduling algorithm is needed by a service provider to execute tasks within a given time span while reducing makespan time. The proposed algorithm utilizes the merits of max-min, round robin, and min-min and tries to remove their demerits. The proposed algorithm has been simulated in CloudSim by varying the number of tasks, and analysis has been made based on makespan, average utilization of resources, and load balancing. The results show that the proposed technique performs better than heuristic techniques such as min-min, round robin, and max-min.
APA, Harvard, Vancouver, ISO, and other styles
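The min-min heuristic that the proposed hybrid draws on can be sketched as follows: repeatedly pick the (task, VM) pair with the smallest completion time and commit it. The VM speeds and task lengths below are illustrative assumptions.

```python
def min_min(tasks, vm_speeds):
    """Min-min heuristic: repeatedly assign the (task, vm) pair with the
    smallest completion time. tasks: list of task lengths; vm_speeds:
    list of VM speeds. Returns (vm index per task, makespan)."""
    ready = [0.0] * len(vm_speeds)       # time each VM becomes free
    assign = [None] * len(tasks)
    unscheduled = set(range(len(tasks)))
    while unscheduled:
        # completion time = VM ready time + task length / VM speed
        t, v, completion = min(
            ((t, v, ready[v] + tasks[t] / vm_speeds[v])
             for t in unscheduled for v in range(len(vm_speeds))),
            key=lambda x: x[2],
        )
        assign[t] = v
        ready[v] = completion
        unscheduled.remove(t)
    return assign, max(ready)
```

Min-min finishes small tasks early but can pile work onto the fastest VM (in the example below all three tasks land on the faster machine), which is the load-imbalance demerit that hybrid schemes like the one above try to correct.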
49

"Exhaustive Appraisal of Adaptive Hybrid LTE-A Downlink Scheduling Algorithm." International Journal of Engineering and Advanced Technology 9, no. 2 (December 30, 2019): 4765–72. http://dx.doi.org/10.35940/ijeat.b2632.129219.

Full text
Abstract:
Long Term Evolution-Advanced (LTE-A) networks were introduced in the Third Generation Partnership Project (3GPP) Release 10 specifications, with the objectives of obtaining a high data rate for cell-edge users, higher spectral efficiency, and high quality of service for multimedia services in cell-edge/indoor areas. A Heterogeneous Network (HetNet) in LTE-A is a network consisting of high-power macro nodes and low-power micro nodes of different cell coverage capabilities. Because of this, non-desired signals acting as interference exist between the micro and macro nodes and their users. Interference is broadly classified as cross-tier and co-tier interference. Cross-tier interference can be reduced by controlling the base station transmit power, while co-tier interference can be reduced by proper resource allocation among the users. Scheduling is the process of optimally allocating resources to users; for proper resource allocation, scheduling is done at the main base station (eNodeB). Some LTE-A downlink scheduling algorithms are based on transmission channel quality feedback given by user equipment in uplink transmission. Various scheduling algorithms are being developed and evaluated using a network simulator. This paper presents the performance evaluation of the Adaptive Hybrid LTE-A Downlink scheduling algorithm. The evaluation is done in terms of parameters such as user throughput (peak, average, and edge), average user spectral efficiency, and fairness index. The evaluated results of the proposed algorithm are compared with existing downlink scheduling algorithms such as Round Robin, Proportional Fair, and Best Channel Quality Indicator (CQI) using a network simulator. The comparison results show the effectiveness of the proposed Adaptive Hybrid algorithm in improving cell-edge user throughput as well as the fairness index.
APA, Harvard, Vancouver, ISO, and other styles
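Of the baselines compared, Proportional Fair balances throughput against fairness by scheduling the user with the highest ratio of instantaneous achievable rate to historical average throughput. A minimal sketch of that metric and its usual moving-average update (the rates and the smoothing factor are illustrative assumptions):

```python
def pf_schedule(inst_rate, avg_thr):
    """Proportional Fair metric: pick the user maximizing instantaneous
    achievable rate / historical average throughput.
    Both args: dict user -> value."""
    return max(inst_rate, key=lambda u: inst_rate[u] / avg_thr[u])

def pf_update(avg_thr, served, inst_rate, alpha=0.1):
    """Exponential moving average of per-user throughput after one
    scheduling interval: the served user accrues its instantaneous
    rate, the others accrue zero."""
    return {u: (1 - alpha) * avg_thr[u]
               + alpha * (inst_rate[u] if u == served else 0.0)
            for u in avg_thr}
```

A user with a good channel but a low historical average wins the slot even over a user with a higher raw rate, which is how PF lifts cell-edge throughput relative to Best CQI.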
50

Rajab, Hadeel T., and Manal F. Younis. "Dynamic Fault Tolerance Aware Scheduling for Healthcare System on Fog Computing." Iraqi Journal of Science, January 30, 2021, 308–18. http://dx.doi.org/10.24996/ijs.2021.62.1.29.

Full text
Abstract:
The Internet of Things (IoT) contributes to improving the quality of life, as it supports many applications, especially healthcare systems. Data generated from IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. Because of the revolution in IoT devices, the volume of data sent to the CC has been increasing; as a result, increasing congestion on the cloud network was added to the latency problem. Fog Computing (FC) was used to solve these problems because of its proximity to IoT devices, while filtering the data sent to the CC. FC is a middle layer located between the IoT devices and the CC layer. To handle the massive data generated by IoT devices on the FC, the Dynamic Weighted Round Robin (DWRR) algorithm was used; this load balancing (LB) algorithm schedules and distributes data among fog servers by reading the CPU and memory values of those servers in order to improve system performance. The results proved that the DWRR algorithm provides high throughput, reaching 3290 req/sec at 919 users. Much research is concerned with distributing workload using LB techniques without paying much attention to Fault Tolerance (FT), which implies that the system continues to operate even when a fault occurs. Therefore, we proposed a replication FT technique, called primary-backup replication, based on a dynamic checkpoint interval on the FC. The checkpoint replicates new data from the primary server to a backup server dynamically by monitoring the CPU values of the primary fog server, so that checkpointing occurs only when the CPU value is larger than 0.2, reducing overhead. The results showed that the execution time of the data filtering process on the FC with a dynamic checkpoint is less than the time spent with a static checkpoint that is independent of the CPU status.
APA, Harvard, Vancouver, ISO, and other styles
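A minimal sketch of the dynamic weighted round robin idea: per-cycle weights derived from each fog server's CPU load, so lightly loaded servers receive more requests per cycle. The weight formula and load values are illustrative; the paper's DWRR also reads memory values and re-samples the load continuously rather than treating it as fixed.

```python
def dwrr_assign(n_requests, cpu_load):
    """Dynamic weighted round robin: each cycle, a server's weight is
    derived from its CPU load (idler servers get more slots per cycle).
    cpu_load: dict server -> load in [0, 1], treated as fixed for this
    sketch. Returns the server chosen for each request, in order."""
    assignment = []
    while len(assignment) < n_requests:
        for server, load in cpu_load.items():
            weight = max(1, round((1 - load) * 4))  # slots this cycle
            for _ in range(weight):
                if len(assignment) == n_requests:
                    return assignment
                assignment.append(server)
    return assignment
```

With one server at 50% CPU and another idle, the idle server receives twice as many requests per cycle, which is the load-aware skew that plain round robin lacks.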
