
Journal articles on the topic "Dynamic Partitioned Scheduling"

Create accurate citations in APA, MLA, Chicago, Harvard, and many other styles


Check out the 48 best journal articles on the topic "Dynamic Partitioned Scheduling".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Wirawan Wijiutomo, Catur, Bambang Riyanto Trilaksono and Achmad Imam Kistijantoro. "Fault Tolerant Dynamic Scheduling on Real Time Hierarchical System: Proposals for Fault Tolerant Mechanism on Safety-Critical System". International Journal of Engineering & Technology 7, no. 4.44 (December 1, 2018): 99. http://dx.doi.org/10.14419/ijet.v7i4.44.26871.

Abstract:
The paradigm shift from federated to integrated architecture in real-time systems introduces a partitioned system to ensure fault isolation, with hierarchical scheduling at the global level (between partitions) and the local level (within a partition). An integrated architecture based on a partitioned system with hierarchical scheduling is referred to as a real-time hierarchical system and is a solution for increasing efficiency in terms of hardware cost and size. This approach increases the complexity of the integration process, including the handling of faults. In this paper the authors describe a proposal with three components for dealing with fault tolerance in real-time hierarchical systems by handling faults at the task level, the partition level, and the distributed level. The contribution of this proposal is a mechanism for building fault-tolerant real-time hierarchical systems.
2

Baruah, Sanjoy K., and Nathan Wayne Fisher. "The partitioned dynamic-priority scheduling of sporadic task systems". Real-Time Systems 36, no. 3 (April 25, 2007): 199–226. http://dx.doi.org/10.1007/s11241-007-9022-5.

3

Sheikh, Saad Zia, and Muhammad Adeel Pasha. "A Dynamic Cache-Partition Schedulability Analysis for Partitioned Scheduling on Multicore Real-Time Systems". IEEE Letters of the Computer Society 3, no. 2 (July 1, 2020): 46–49. http://dx.doi.org/10.1109/locs.2020.3013660.

4

Mascitti, Agostino, Tommaso Cucinotta, Mauro Marinoni and Luca Abeni. "Dynamic partitioned scheduling of real-time tasks on ARM big.LITTLE architectures". Journal of Systems and Software 173 (March 2021): 110886. http://dx.doi.org/10.1016/j.jss.2020.110886.

5

DIESSEL, OLIVER, and HOSSAM ELGINDY. "ON DYNAMIC TASK SCHEDULING FOR FPGA-BASED SYSTEMS". International Journal of Foundations of Computer Science 12, no. 05 (October 2001): 645–69. http://dx.doi.org/10.1142/s0129054101000709.

Abstract:
The development of FPGAs that can be programmed to implement custom circuits by modifying memory has inspired researchers to investigate how FPGAs can be used as a computational resource in systems designed for high performance applications. When such FPGA-based systems are composed of arrays of chips, or of chips that can be partially reconfigured, the programmable array space can be partitioned among several concurrently executing tasks. If partition sizes are adapted to the needs of tasks, then array resources become fragmented as tasks with varying requirements are processed. Tasks may end up waiting despite there being sufficient, albeit fragmented, resources available. We examine the problem of repartitioning the system (rearranging a subset of the executing tasks) at run-time in order to allow waiting tasks to enter the system sooner. In this paper, we introduce the problems of identifying and scheduling feasible task rearrangements when tasks are moved by reloading. It is shown that both problems are NP-complete. We develop two very different heuristic approaches to finding and scheduling suitable rearrangements. The first method, known as Local Repacking, attempts to minimize the size of the subarray needing rearrangement. Candidate subarrays are repacked using known bin packing algorithms. Task movements are scheduled so as to minimize delays to their execution. The second approach, called Ordered Compaction, constrains the movements of tasks in order to efficiently identify and schedule feasible rearrangements. The heuristics are compared by time complexity and resulting system performance on simulated task sets. The results indicate that considerable scheduling advantages are to be gained for acceptable computational effort. However, the benefits may be jeopardized by delays to moving tasks when the average cost of reloading tasks becomes significant relative to task service periods. We indicate directions for future research to mitigate the cost of moving executing tasks.
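
Local Repacking relies on known bin-packing algorithms to compact the subarray under rearrangement. As a hedged, one-dimensional illustration of that building block (not the authors' two-dimensional implementation; the task widths and column width below are made up), a first-fit-decreasing sketch in Python:

```python
# First-fit-decreasing (FFD) sketch of the kind of bin packing a
# Local-Repacking-style heuristic could use to compact tasks into
# fixed-width column "bins" of an FPGA subarray. Illustrative only.

def first_fit_decreasing(task_widths, bin_width):
    """Pack 1-D task widths into as few bins as possible (FFD heuristic)."""
    bins = []  # each bin is a list of the task widths placed in it
    for w in sorted(task_widths, reverse=True):
        if w > bin_width:
            raise ValueError(f"task width {w} exceeds bin width {bin_width}")
        for b in bins:
            if sum(b) + w <= bin_width:  # first bin with enough free space
                b.append(w)
                break
        else:
            bins.append([w])  # no bin fits: open a new one
    return bins

# Example: repack six tasks into columns that are 10 units wide.
print(first_fit_decreasing([4, 8, 1, 4, 2, 1], bin_width=10))  # [[8, 2], [4, 4, 1, 1]]
```
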
6

Mahmood, Basharat, Naveed Ahmad, Majid Iqbal Khan and Adnan Akhunzada. "Dynamic Priority Real-Time Scheduling on Power Asymmetric Multicore Processors". Symmetry 13, no. 8 (August 13, 2021): 1488. http://dx.doi.org/10.3390/sym13081488.

Abstract:
The use of real-time systems is growing at an increasing rate, which makes power efficiency the main challenge for system designers. Power asymmetric multicore processors provide a power-efficient platform for building complex real-time systems, and the utilization of this platform can be further enhanced by adopting proficient scheduling policies. Unfortunately, research on real-time scheduling for power asymmetric multicore processors is in its infancy. In this work, we address this problem and add new results. We propose a dynamic-priority semi-partitioned algorithm named Earliest-Deadline First with C=D Task Splitting (EDFwC=D-TS) for scheduling real-time applications on power asymmetric multicore processors. EDFwC=D-TS outclasses its counterparts in terms of system utilization. The simulation results show that EDFwC=D-TS schedules up to 67% more tasks with heavy workloads; furthermore, it improves processor utilization by up to 11% and on average uses 14% fewer cores to schedule the given workload.
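
As a rough illustration of the semi-partitioned idea behind EDFwC=D-TS, the sketch below assigns tasks to cores by utilization and splits a task that fits on no single core across several cores. This is a utilization-level simplification under implicit deadlines, not the paper's algorithm: the actual C=D scheme gives the migrating portion a deadline equal to its execution time and performs a full schedulability analysis. All task parameters here are hypothetical.

```python
# Utilization-level sketch of semi-partitioned EDF with task splitting.
# Tasks are (C, T) pairs; a core is EDF-schedulable (implicit deadlines)
# while its total utilization stays <= 1. A task that fits on no single
# core is split across cores.

def semi_partition(tasks, n_cores):
    load = [0.0] * n_cores
    placement = {}  # task index -> [(core, utilization share), ...]
    order = sorted(range(len(tasks)), key=lambda i: -tasks[i][0] / tasks[i][1])
    for i in order:
        c, t = tasks[i]
        u = c / t
        core = min(range(n_cores), key=load.__getitem__)  # least-loaded core
        if load[core] + u <= 1.0:
            load[core] += u
            placement[i] = [(core, u)]
            continue
        shares = []  # spread the leftover utilization over several cores
        for k in sorted(range(n_cores), key=load.__getitem__):
            take = min(1.0 - load[k], u)
            if take > 1e-12:
                load[k] += take
                shares.append((k, take))
                u -= take
            if u <= 1e-12:
                break
        if u > 1e-12:
            raise ValueError("task set exceeds platform capacity")
        placement[i] = shares
    return placement, load

placement, load = semi_partition([(9, 10), (7, 10), (2, 5)], n_cores=2)
print(placement)                    # the lightest task ends up split across cores
print([round(x, 3) for x in load])  # [1.0, 1.0]
```
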
7

Li, Xiao Feng, Peng Fan, Xiao Hua Liu, Xing Chao Wang, Chuan Hu, Chun Xiang Liu and Shi Guang Bie. "Parallel Rendering Strategies for 3D Emulational Scene of Live Working". Applied Mechanics and Materials 457-458 (October 2013): 1021–27. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.1021.

Abstract:
Because 3D emulational scenes of live working contain abundant deep scene nodes, existing three-dimensional scene data organization methods and rendering strategies have many flaws, such as jumpy rendering and delayed interactive response. A real-time rendering method for huge amounts of urban data is presented, using techniques such as model identification based on multi-grid block partition, thread pools, caching, and real-time external memory scheduling algorithms. The whole scene is partitioned into blocks of different sizes, and the blocks are arranged in a multi-grid keyed by model ID and tile ID to accelerate model scheduling. Fast clipping is achieved by pinning the position and direction of the block-based view frustum, and data-downloading tasks are handed off to a thread pool executing in the background, which achieves dynamic data loading and parallel three-dimensional scene rendering. To address the hardware bottleneck, in-out memory scheduling algorithms are adopted to eliminate invisible scene models and recycle dirty data in memory. Experimental results show that the method is efficient and suitable for rendering massive urban models and interactive walkthroughs.
8

BANERJEE, AYAN, and EMMANOUEL (MANOS) VARVARIGOS. "A DYNAMIC SCHEDULING COMMUNICATION PROTOCOL AND ITS ANALYSIS FOR HYPERCUBE NETWORKS". International Journal of Foundations of Computer Science 09, no. 01 (March 1998): 39–56. http://dx.doi.org/10.1142/s0129054198000064.

Abstract:
We propose a new protocol for one-to-one communication in multiprocessor networks, which we call the Dynamic Scheduling Communication (or DSC) protocol. In the DSC protocol, the capacity of a link is partitioned into two channels: a data channel, used to transmit packets, and a control channel used to make reservations. We initially describe the DSC protocol and the data structures needed to implement it for a general network topology. We then analyze the steady-state throughput of the DSC protocol for random node-to-node communication in a hypercube topology. The analytical results obtained are in very close agreement with corresponding simulation results. For the hypercube topology, and under the same set of assumptions on the node architecture and the routing algorithm used, the DSC protocol is found to achieve higher throughput than packet switching, provided that the size of the network is sufficiently large. We also investigate the relationship between the achievable throughput and the fraction of network capacity dedicated to the control channel, and present a method to select this fraction so as to optimize throughput.
9

Zhang, Qiushi, Jian Zhao, Xiaoyu Wang, Li Tong, Hang Jiang and Jinhui Zhou. "Distribution Network Hierarchically Partitioned Optimization Considering Electric Vehicle Orderly Charging with Isolated Bidirectional DC-DC Converter Optimal Efficiency Model". Energies 14, no. 6 (March 14, 2021): 1614. http://dx.doi.org/10.3390/en14061614.

Abstract:
The access of large-scale electric vehicles (EVs) will increase the network loss of the medium-voltage distribution network, which can be alleviated by adjusting the network structure and by orderly charging of EVs. However, it is difficult to accurately evaluate charging efficiency during orderly charging of an electric vehicle (EV), which makes the scheduling model insufficiently accurate. Therefore, this paper proposes an EV double-layer scheduling model based on an isolated bidirectional DC-DC (IBDC) converter optimal efficiency model and establishes a hierarchical, partitioned optimization model with feeder, branch, and load layers. First, based on the actual topology of the medium-voltage distribution network, a dynamic reconfiguration model between switching stations is established with the goal of load balancing. Second, with the goal of minimizing branch-layer network loss, a dynamic reconfiguration model under the switching station is established, and chaotic niche particle swarm optimization is proposed to improve global search capability and iteration speed. Finally, the power transmission loss model of the IBDC converter is established, and the optimal phase-shift parameter is determined to formulate the double-layer collaborative optimization operation strategy for electric vehicles. An example verifies that the above model can improve system load balancing and reduce operating losses of the medium-voltage distribution network.
10

Shaheen, Anwar, and Sundar Kumar. "Efficient Task Scheduling of Virtual Machines using Novel Spectral Partitioning and Differential Evaluation Algorithm". International Journal of Advances in Soft Computing and its Applications 14, no. 1 (March 28, 2022): 160–75. http://dx.doi.org/10.15849/ijasca.220328.11.

Abstract:
Task scheduling is a major challenge in the cloud computing environment and degrades the performance of the system. To enhance system performance, an effective task-scheduling algorithm is needed; hence an effective task-partitioning and task-scheduling algorithm is introduced here. Resources (datacentre, broker, virtual machine (VM), and cloudlet) are created dynamically through the use of CloudSim. This study performs task partitioning and task scheduling using the novel spectral partitioning (SP) and differential evaluation algorithm (DEA). First, the tasks and datacentre are initialized. Subsequently, task partitioning is performed using the novel SP: a Laplacian matrix is computed, and the tasks are partitioned based on the eigenvalues and eigenvectors of the Laplacian matrix. Task scheduling is then performed with the proposed novel DEA, which comprises threshold calculation, mutation, crossover, selection, and knee-solution steps to achieve efficient task partitioning and scheduling. The performance of the proposed system is evaluated by comparing it with other traditional methods and validated in terms of service cost, load balancing, makespan, and energy consumption. The results prove the efficacy of the introduced system, and the comparative analysis reveals that the proposed method outperforms other traditional techniques, thereby accomplishing effective task scheduling of VMs in the cloud computing environment.
Keywords: cloud computing environment, virtual machines, task scheduling, novel spectral partitioning and differential evaluation algorithm.
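
The spectral step described above, partitioning by the eigenvalues and eigenvectors of a Laplacian matrix, can be illustrated with a minimal bisection sketch; the task-affinity graph and the use of plain NumPy here are assumptions for illustration, not the paper's implementation:

```python
# Minimal spectral bisection sketch: build the Laplacian L = D - A of a
# task-affinity graph, take the eigenvector of the second-smallest
# eigenvalue (the Fiedler vector), and split tasks by its sign.
import numpy as np

def spectral_bisect(adjacency):
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                 # second-smallest eigenvalue's vector
    return fiedler >= 0                     # boolean partition label per task

# Hypothetical affinity graph over six tasks: two weakly linked triangles.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
print(spectral_bisect(A))  # e.g. the two triangles land in different halves
```
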
11

Kadri, Walid, and Belabbas Yagoubi. "Optimized Scheduling Approach for Scientific Applications Based on Clustering in Cloud Computing Environment". Scalable Computing: Practice and Experience 20, no. 3 (September 22, 2019): 527–40. http://dx.doi.org/10.12694/scpe.v20i3.1548.

Abstract:
Cloud computing refers to the use of the computing capabilities of remote computers, giving the user considerable computing power without owning powerful units. Scientific applications, usually represented as directed acyclic graphs (DAGs), are an important class of applications that lead to challenging resource-management problems in distributed computing. With the advent of cloud computing, particularly IaaS offers for on-demand virtual machine leasing, the execution of multiple jobs consisting of a large number of DAGs needs elaborate scheduling and resource-provisioning policies for efficient use of resources. Only a few works consider this problem in the context of cloud environments. For optimization and fault tolerance, DAG applications are generally partitioned into multiple parallel DAGs using a clustering algorithm and assigned to VM (virtual machine) resources independently. In this work, we investigate through simulation the impact of clustering, for both provisioning and scheduling policies, on the total makespan and the financial cost of executing a user's application. We implemented four scheduling policies well known in grid computing systems and adapted a clustering algorithm to our resource management policy, which leases and destroys VMs dynamically. We show that dynamic policies can achieve equal or even better performance than static management policies.
12

Song, De-Ning, Yu-Guang Zhong and Jian-Wei Ma. "Look-ahead-window-based interval adaptive feedrate scheduling for long five-axis spline toolpaths under axial drive constraints". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 234, no. 13 (July 20, 2020): 1656–70. http://dx.doi.org/10.1177/0954405420937538.

Abstract:
Scheduling the feedrate of five-axis spline toolpaths is of great significance for high-quality and high-efficiency machining on five-axis machine tools. Because the relationship between the Cartesian space of the cutting tool and the joint space of the five feed axes is nonlinear, it is challenging to schedule the five-axis feedrate under axial drive constraints. Most existing methods address routine short spline toolpaths; feedrate scheduling methods for long spline toolpaths are limited. This article proposes an interval adaptive feedrate scheduling method based on a dynamic moving look-ahead window, so as to generate a smooth feedrate for a long five-axis toolpath in a piecewise manner without using the whole toolpath geometry. First, the length of the look-ahead window, which equals that of the toolpath interval, is determined to avoid abrupt braking at the end of the toolpath. Then, the permissible tangential feed parameters of each interval, in terms of velocity, acceleration, and jerk, are determined according to the axial drive constraints at each toolpath interval. At the same time, the end velocity of the current interval is obtained by looking ahead to the next interval. Using the start and end velocities and the permissible feed parameters of each interval, the five-axis motion feedrate is scheduled in an interval adaptive manner. Thus, the feedrate scheduling task for a long five-axis toolpath is partitioned into a series of very short toolpaths, which realizes efficient feedrate scheduling for long spline toolpaths. Experimental results on two representative five-axis spline toolpaths demonstrate the feasibility of the proposed approach, especially for long toolpaths.
13

Ocampo, Erica, Yen-Chih Huang and Cheng-Chien Kuo. "Feasible Reserve in Day-Ahead Unit Commitment Using Scenario-Based Optimization". Energies 13, no. 20 (October 9, 2020): 5239. http://dx.doi.org/10.3390/en13205239.

Abstract:
This paper investigates the feasible reserve of diesel generators in day-ahead unit commitment (DAUC) in order to handle the uncertainties of renewable energy sources. Unlike other studies that deal with the ramping of generators, this paper extends the ramp rate consideration further, using dynamic limits for the scheduling of available reserves (feasible reserve) to deal with hidden infeasible reserve issues found in the literature. The unit commitment (UC) problem is solved as a two-stage day-ahead robust scenario-based unit commitment using a metaheuristic new variant of particle swarm optimization (PSO) called partitioned step PSO (PSPSO) that can deal with the dynamic system. The PSPSO was pre-optimized and was able to find the solution for the base-case UC problem in a short time. The evaluation of the optimized UC schedules for different degrees of reserve consideration was analyzed. The results reveal that there is a significant advantage in using the feasible reserve formulation, especially for the deterministic approach, over the conventional computation in dealing with uncertainties in on-the-day operations even with the increase in the reserve schedule.
14

Sabeeh, Saif, Krzysztof Wesołowski and Paweł Sroka. "C-V2X Centralized Resource Allocation with Spectrum Re-Partitioning in Highway Scenario". Electronics 11, no. 2 (January 16, 2022): 279. http://dx.doi.org/10.3390/electronics11020279.

Abstract:
Cellular Vehicle-to-Everything communication is an important scenario for 5G technologies. Modes 3 and 4 of the wireless systems introduced in Release 14 of the 3GPP standards are intended to support vehicular communication with and without cellular infrastructure. In the case of Mode 3, dynamic resource selection and semi-persistent resource scheduling algorithms create a signalling-cost problem between vehicles and infrastructure; therefore, we propose a means to decrease it. This paper employs the Re-selection Counter in centralized resource allocation as a decremental counter of new resource requests. Furthermore, two new spectrum re-partitioning and frequency reuse techniques in Roadside Units (RSUs) are considered to avoid resource collisions and diminish the impact of high interference by increasing the frequency reuse distance. The two techniques, full and partial frequency reuse, partition the bandwidth into two sub-bands. Two adjacent RSUs apply these sub-bands with the Full Frequency Reuse (FFR) technique. In the Partial Frequency Reuse (PFR) technique, the sub-bands are further re-partitioned among vehicles located in the central and edge parts of the RSU coverage. The sub-band assignment in the nearest RSUs using the same sub-bands is inverted with respect to the current RSU to increase the frequency reuse distance. The PFR technique shows promising results compared with the FFR technique. Both techniques are compared with a single-band system for different vehicle densities.
15

Günzel, Mario, Christian Hakert, Kuan-Hsun Chen and Jian-Jia Chen. "HEART: Hybrid Memory and Energy-Aware Real-Time Scheduling for Multi-Processor Systems". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–23. http://dx.doi.org/10.1145/3477019.

Abstract:
Dynamic power management (DPM) reduces the power consumption of a computing system when it idles, by switching the system into a low-power state for hibernation. When all processors in the system share the same component, e.g., a shared memory, powering off this component during hibernation is only possible when all processors idle at the same time. For a real-time system, the schedulability property has to be guaranteed on every processor, especially if idle intervals are to be actively introduced. In this work, we consider real-time systems with hybrid shared-memory architectures, which consist of shared volatile memory (VM) and non-volatile memory (NVM). Energy-efficient execution is achieved by applying DPM to turn off all memories during the hibernation mode. Towards this, we first explore the hybrid memory architectures and propose a task model featuring configurable hibernation overheads. We propose a multi-processor procrastination algorithm (HEART), based on partitioned earliest-deadline-first (pEDF) scheduling. Our algorithm facilitates reducing the energy consumption by actively enlarging the hibernation time. It enforces all processors to idle simultaneously without violating the schedulability condition, such that the system can enter the hibernation state, where shared memories are turned off. Through an extensive evaluation of HEART, we demonstrate (1) the increase in potential hibernation time, and correspondingly the decrease in energy consumption, and (2) that our algorithm is not only more general but also performs better than the state of the art with respect to energy efficiency in most cases.
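
A quantity at the heart of any such DPM scheme is the break-even time: turning a shared component off only saves energy when the common idle interval outlasts the point where the transition overhead is amortized. A minimal sketch of that check, with purely hypothetical power and energy figures (the paper's analysis is considerably more involved):

```python
# Break-even check for dynamic power management: sleeping pays off only
# when the idle interval exceeds t_be = E_transition / (P_idle - P_sleep).
# All power/energy figures below are hypothetical placeholders.

def break_even_time(e_transition_j, p_idle_w, p_sleep_w):
    return e_transition_j / (p_idle_w - p_sleep_w)

def idle_energy(idle_s, e_transition_j, p_idle_w, p_sleep_w):
    """Energy for one idle interval, choosing to sleep only past break-even."""
    if idle_s > break_even_time(e_transition_j, p_idle_w, p_sleep_w):
        return e_transition_j + p_sleep_w * idle_s   # hibernate
    return p_idle_w * idle_s                         # stay idle

print(break_even_time(e_transition_j=0.5, p_idle_w=1.2, p_sleep_w=0.1))  # ~0.45 s
print(idle_energy(2.0, 0.5, 1.2, 0.1))  # long interval: hibernating wins (0.7 J)
```
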
16

Mu, Yashuang, Lidong Wang and Xiaodong Liu. "Dynamic programming based fuzzy partition in fuzzy decision tree induction". Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 6757–72. http://dx.doi.org/10.3233/jifs-191497.

Abstract:
Fuzzy decision trees are one of the most popular extensions of decision trees for symbolic knowledge acquisition by fuzzy representation. In the majority of fuzzy decision tree learning methods, the number of fuzzy partitions is given in advance, that is, the same number of fuzzy items is used for each condition attribute. In this study, a dynamic-programming-based partition criterion for fuzzy items is designed in the framework of fuzzy decision tree induction. The proposed criterion applies an improved dynamic programming algorithm, originally used in scheduling problems, to establish an optimal number of fuzzy items for each condition attribute. Then, based on these fuzzy partitions, a fuzzy decision tree is constructed in a top-down recursive way. A comparative analysis against several traditional decision trees verifies the feasibility of the proposed dynamic-programming-based fuzzy partition criterion. Furthermore, within the same fuzzy decision tree framework, the proposed fuzzy partition solution can obtain higher classification accuracy than some cases with the same number of fuzzy items.
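
The underlying mechanism, dynamic programming over an ordered attribute to place partition boundaries optimally, can be sketched with the classic interval-partition recurrence dp[i][k] = min_j (dp[j][k-1] + cost(j, i)). The within-segment variance cost below is an illustrative stand-in for the paper's fuzzy criterion, not its actual objective:

```python
# DP sketch for partitioning a sorted attribute into k intervals that
# minimize total within-interval variance, via
#   dp[i][k] = min_j dp[j][k-1] + cost(j, i).
from itertools import accumulate

def optimal_partition(values, k):
    xs = sorted(values)
    n = len(xs)
    s = [0.0] + list(accumulate(xs))                  # prefix sums
    s2 = [0.0] + list(accumulate(x * x for x in xs))  # prefix sums of squares

    def cost(j, i):  # variance-style cost of segment xs[j:i]
        m = i - j
        return s2[i] - s2[j] - (s[i] - s[j]) ** 2 / m

    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for parts in range(1, k + 1):
        for i in range(parts, n + 1):
            for j in range(parts - 1, i):
                c = dp[j][parts - 1] + cost(j, i)
                if c < dp[i][parts]:
                    dp[i][parts], cut[i][parts] = c, j
    bounds, i = [], n          # recover the segment boundaries
    for parts in range(k, 0, -1):
        j = cut[i][parts]
        bounds.append((j, i))
        i = j
    return dp[n][k], list(reversed(bounds))

print(optimal_partition([1, 2, 2, 8, 9, 9, 20, 21], k=3))  # splits into 3 clusters
```
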
17

Liu, Mengqi, Zhong Shao, Hao Chen, Man-Ki Yoon and Jung-Eun Kim. "Compositional virtual timelines: verifying dynamic-priority partitions with algorithmic temporal isolation". Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 60–88. http://dx.doi.org/10.1145/3563290.

Abstract:
Real-time systems power safety-critical applications that require strong isolation from one another. Such isolation needs to be enforced at two orthogonal levels. On the micro-architectural level, this mainly involves avoiding interference through micro-architectural state, such as cache lines. On the algorithmic level, it is usually achieved by adopting real-time partitions to reserve resources for each application. Implementations of such systems are often complex and require formal verification to guarantee proper isolation. In this paper, we focus on algorithmic isolation, which is mainly related to scheduling-induced interference. We adopt earliest-deadline-first (EDF) partitions to achieve compositionality and utilization, while imposing constraints on tasks' periods and enforcing budgets on these periodic partitions to ensure isolation between them. The formal verification of such a real-time OS kernel is challenging due to the inherent complexity of dynamic priority assignment at the partition level. We tackle this problem by adopting a dynamically constructed abstraction to lift the reasoning about a concrete scheduler into an abstract domain. Using this framework, we verify a real-time operating system kernel with budget-enforcing EDF partitions and prove that it indeed ensures isolation between partitions. All the proofs are mechanized in Coq.
18

Sato, Masa-aki, and Shin Ishii. "On-line EM Algorithm for the Normalized Gaussian Network". Neural Computation 12, no. 2 (February 1, 2000): 407–32. http://dx.doi.org/10.1162/089976600300015853.

Abstract:
A normalized Gaussian network (NGnet) (Moody & Darken, 1989) is a network of local linear regression units. The model softly partitions the input space by normalized Gaussian functions, and each local unit linearly approximates the output within its partition. In this article, we propose a new on-line EM algorithm for the NGnet, which is derived from the batch EM algorithm (Xu, Jordan, & Hinton, 1995) by introducing a discount factor. We show that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific scheduling of the discount factor is employed. In addition, we show that the on-line EM algorithm can be considered a stochastic approximation method for finding the maximum likelihood estimator. A new regularization method is proposed in order to deal with a singular input distribution. In order to manage dynamic environments, where the input-output distribution of data changes over time, unit manipulation mechanisms such as unit production, unit deletion, and unit division are also introduced, based on a probabilistic interpretation. Experimental results show that our approach is suitable for function approximation problems in dynamic environments. We also apply our on-line EM algorithm to robot dynamics problems and compare our algorithm with the mixtures-of-experts family.
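
The discount factor that turns batch EM into the on-line variant acts as a forgetting rate on the sufficient statistics: each statistic becomes an exponentially weighted running average, <f>_t = (1 - eta)<f>_{t-1} + eta * f(x_t). A single-Gaussian sketch of that update follows (the full NGnet applies it per local unit with posterior-weighted statistics; the data and discount value here are made up):

```python
# Discounted running estimate of a Gaussian's sufficient statistics,
# the mechanism by which an on-line EM algorithm forgets old data.
import random

def online_gaussian(stream, discount=0.99):
    m1 = m2 = 0.0   # running <x> and <x^2>
    w = 0.0         # effective weight, to correct start-up bias
    mean = var = None
    for x in stream:
        m1 = discount * m1 + (1 - discount) * x
        m2 = discount * m2 + (1 - discount) * x * x
        w = discount * w + (1 - discount)
        mean = m1 / w
        var = m2 / w - mean ** 2
    return mean, var

random.seed(0)
data = [random.gauss(3.0, 1.5) for _ in range(5000)]
print(online_gaussian(data))  # close to (3.0, 2.25) for stationary data
```
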
19

Kari, Chadi, Alexander Russell and Narasimha Shashidhar. "Work-Competitive Scheduling on Task Dependency Graphs". Parallel Processing Letters 25, no. 02 (June 2015): 1550001. http://dx.doi.org/10.1142/s0129626415500012.

Abstract:
A fundamental problem in distributed computing is the task of cooperatively executing a given set of t tasks by p asynchronous processors where the communication medium is dynamic and subject to failures. Also known as do-all, this problem has been studied extensively in various distributed settings. In [2], the authors consider a partitionable network scenario and analyze the competitive performance of a randomized scheduling algorithm for the case where the tasks to be completed are independent of each other. In this paper, we study a natural extension of this problem where the tasks have dependencies among them. We present a simple randomized algorithm for p processors cooperating to perform t known tasks where the dependencies between them are defined by a k-partite task dependency graph, and additionally these processors are subject to a dynamic communication medium. By virtue of the problem setting, we pursue competitive analysis, where the performance of our algorithm is measured against that of the omniscient offline algorithm, which has complete knowledge of the dynamics of the communication medium. We show that the competitive ratio of our algorithm is tight and depends on the dynamics of the communication medium, viz. the computational width defined in [2], and also on the number of partitions of the task dependency graph.
20

AGUILAR, JOSE, and ERNST LEISS. "PARALLEL LOOP SCHEDULING APPROACHES FOR DISTRIBUTED AND SHARED MEMORY SYSTEMS". Parallel Processing Letters 15, no. 01n02 (March 2005): 131–52. http://dx.doi.org/10.1142/s0129626405002118.

Abstract:
In this paper, we propose different approaches for the parallel loop scheduling problem on distributed as well as shared memory systems. Specifically, we propose adaptive loop scheduling models in order to achieve load balancing, low runtime scheduling, low synchronization overhead and low communication overhead. Our models are based on an adaptive determination of the chunk size and an exploitation of the processor affinity property, and consider different situations (central or local queues, and dynamic or static loop partition).
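
One widely known instance of an adaptive chunk-size rule of the kind studied here is guided self-scheduling, where each processor grabs a chunk proportional to the iterations still remaining. A minimal sketch (the shrink factor is a tunable assumption, not a value from the paper):

```python
# Guided-self-scheduling sketch: each dequeue takes ceil(R / (f * P)) of
# the R remaining loop iterations, so chunks shrink as the loop drains,
# balancing load while keeping scheduling overhead low.
from math import ceil

def guided_chunks(n_iters, n_procs, factor=1):
    remaining, chunks = n_iters, []
    while remaining > 0:
        chunk = max(1, ceil(remaining / (factor * n_procs)))
        chunks.append(chunk)
        remaining -= chunk
    return chunks

print(guided_chunks(100, n_procs=4))  # [25, 19, 14, 11, 8, 6, ...] down to 1
```
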
21

Sun, Hong Fei, Xiao Dang Liu and Wei Hou. "Research of Parallel Machine Scheduling with Flexible Resources Based on Nested Partition Method". Applied Mechanics and Materials 459 (October 2013): 488–93. http://dx.doi.org/10.4028/www.scientific.net/amm.459.488.

Abstract:
In a fiercely competitive market environment, manufacturing enterprises mostly adopt a multi-variety, small-batch, discrete mode of production to maximize production efficiency, and the production process is highly flexible. However, production processes often lack scheduling management, or parallel machine production units adopt a static-quota scheduling method, so the scheduling of flexible resources on parallel machines in the enterprise production system proves inadequate and reduces production efficiency. This paper applies the Nested Partition Method to this problem, establishing mathematical models for dynamic scheduling and developing a corresponding algorithm, in order to improve the utilization of equipment and flexible resources.
22

Ye, Chang, Kan Cao, Haiteng Han, Ziwen Liu, Defu Cai and Dan Liu. "Cluster Partition Method of Large-Scale Grid-Connected Distributed Generations considering Expanded Dynamic Time Scenarios". Mathematical Problems in Engineering 2022 (February 16, 2022): 1–10. http://dx.doi.org/10.1155/2022/1934992.

Abstract:
The reasonable clustering of large-scale distributed generations (DGs) can optimize the scheduling control and operation monitoring of the power grid, which ensures the orderly and efficient integration of DGs into the power system. In this article, the influence of internal and external flexible resources is considered in the DG cluster partition, and the comprehensive performance indexes with expanded dynamic time scenario are proposed to realize the dynamic cluster partition. Firstly, the active and reactive power balance indexes considering the flexible resources are derived, which forms the comprehensive index together with the structure index. Then, the comprehensive index is expanded to the dynamic forms, which reflects the real-time cluster performance, and the cluster partition method is given with the genetic algorithm. Finally, the effectiveness verification of the proposed cluster partition method is carried out with the 14- and 33-bus systems.
23

Zhang, Yong Qiang, and Xu Chao. "Using Dynamic Multi-Resolution Render 3D Terrain". Advanced Materials Research 159 (December 2010): 420–23. http://dx.doi.org/10.4028/www.scientific.net/amr.159.420.

Abstract:
The rendering of three-dimensional terrain data is studied for simulation and game engines. First, a new real-time scheduling algorithm is put forward based on determining the simulation view; in the visible area, the partition area is set up through Level of Detail (LOD). Then an adaptive node evaluation system is established based on distance and a space-error function, estimating the rendered triangle planes layer by layer and thereby improving rendering efficiency.
24

Cao, Jin, Bo Li, Mengni Fan and Huiyu Liu. "Inference Acceleration with Adaptive Distributed DNN Partition over Dynamic Video Stream". Algorithms 15, no. 7 (July 13, 2022): 244. http://dx.doi.org/10.3390/a15070244.

Abstract:
Deep neural network-based computer vision applications have exploded and are widely used in intelligent services for IoT devices. Due to the computationally intensive nature of DNNs, the deployment and execution of intelligent applications in smart scenarios face the challenge of limited device resources. Existing job scheduling strategies are single-focused and have limited support for large-scale end-device scenarios. In this paper, we present ADDP, an adaptive distributed DNN partition method that supports video analysis on large-scale smart cameras. ADDP applies to the DNN models commonly used for computer vision and contains a feature-map layer partition (FLP) module supporting edge-to-end collaborative model partition and a feature-map size partition (FSP) module supporting multidevice parallel inference. Based on the objective of minimizing inference delay, FLP and FSP achieve a tradeoff between the compute and communication resources of different devices. We validate ADDP on heterogeneous devices and show that both the FLP module and the FSP module outperform existing approaches, reducing single-frame response latency by 10–25% compared to pure on-device processing.
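
The feature-map size partition idea, slicing an input so that devices of different speeds finish their parallel inference at roughly the same time, can be sketched as a proportional row split; the device speeds below are hypothetical and this is not the ADDP implementation:

```python
# Proportional feature-map split sketch: give each device a horizontal
# slice of the H input rows sized to its relative speed, so all slices
# finish inference at roughly the same time. Speeds are hypothetical.

def split_rows(height, speeds):
    total = sum(speeds)
    cuts, start, acc = [], 0, 0.0
    for i, s in enumerate(speeds):
        acc += s
        end = height if i == len(speeds) - 1 else round(height * acc / total)
        cuts.append((start, end))  # device i processes rows [start, end)
        start = end
    return cuts

print(split_rows(224, speeds=[3.0, 1.0, 2.0]))  # [(0, 112), (112, 149), (149, 224)]
```
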
25

Himmich, Ilyas, Hatem Ben Amor, Issmail El Hallaoui and François Soumis. "A Primal Adjacency-Based Algorithm for the Shortest Path Problem with Resource Constraints". Transportation Science 54, no. 5 (September 2020): 1153–69. http://dx.doi.org/10.1287/trsc.2019.0941.

Abstract:
The shortest path problem with resource constraints (SPPRC) is often used as a subproblem within a column-generation approach for routing and scheduling problems. It aims to find a least-cost path between the source and the destination nodes in a network while satisfying the resource consumption limitations on every node. The SPPRC is usually solved using dynamic programming. Such approaches are effective in practice, but they can be inefficient when the network is large and especially when the number of resources is high. To cope with this major drawback, we propose a new exact primal algorithm to solve the SPPRC defined on acyclic networks. The proposed algorithm explores the solution space iteratively using a path adjacency–based partition. Numerical experiments for vehicle and crew scheduling problem instances demonstrate that the new approach outperforms both the standard dynamic programming and the multidirectional dynamic programming methods.
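
The standard dynamic-programming treatment that the paper improves upon extends labels of the form (cost, resource) along arcs and prunes dominated labels. A minimal single-resource sketch on an acyclic network (the example graph is made up):

```python
# Label-extension DP sketch for the shortest path problem with one
# resource constraint (SPPRC) on a DAG whose nodes 0..n-1 are assumed
# to be in topological order. A label is (cost, resource); label A
# dominates B if it is no worse in both components. This is the
# simplified DP baseline, not the paper's primal adjacency method.

def spprc(n_nodes, arcs, source, sink, capacity):
    # arcs: {u: [(v, cost, resource_use), ...]}
    labels = {v: [] for v in range(n_nodes)}
    labels[source] = [(0.0, 0.0)]
    for u in range(n_nodes):
        for (v, c, r) in arcs.get(u, []):
            for (cu, ru) in labels[u]:
                cv, rv = cu + c, ru + r
                if rv > capacity:
                    continue  # violates the resource limit
                if any(c2 <= cv and r2 <= rv for (c2, r2) in labels[v]):
                    continue  # dominated by an existing label
                labels[v] = [(c2, r2) for (c2, r2) in labels[v]
                             if not (cv <= c2 and rv <= r2)] + [(cv, rv)]
    return min((c for c, _ in labels[sink]), default=None)

arcs = {0: [(1, 1, 4), (2, 5, 1)], 1: [(3, 1, 4)], 2: [(3, 1, 1)]}
print(spprc(4, arcs, source=0, sink=3, capacity=5))
# 6.0: the cheaper path 0->1->3 consumes too much resource
```
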
26

Niu, X. N., X. J. Zhai, H. Tang and L. X. Wu. "MULTI-SATELLITE SCHEDULING APPROACH FOR DYNAMIC AREAL TASKS TRIGGERED BY EMERGENT DISASTERS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 3, 2016): 475–81. http://dx.doi.org/10.5194/isprsarchives-xli-b1-475-2016.

Abstract:
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters such as earthquakes, allocates observation resources and execution time to a series of imaging tasks by maximizing one or more objectives while satisfying given constraints. In practice, the information obtained about the disaster situation changes dynamically, which in turn makes users' imaging requirements dynamic. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet emergency response requirements by producing an imaging plan that rapidly acquires effective information about the affected area. In the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. Then a dynamic heuristic algorithm embedding a greedy criterion is designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations on the scene of the Wenchuan Earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
27

Niu, X. N., X. J. Zhai, H. Tang and L. X. Wu. "MULTI-SATELLITE SCHEDULING APPROACH FOR DYNAMIC AREAL TASKS TRIGGERED BY EMERGENT DISASTERS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 3, 2016): 475–81. http://dx.doi.org/10.5194/isprs-archives-xli-b1-475-2016.

Abstract:
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters such as earthquakes, allocates observation resources and execution time to a series of imaging tasks by maximizing one or more objectives while satisfying given constraints. In practice, the information obtained about the disaster situation changes dynamically, which in turn makes users' imaging requirements dynamic. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet emergency response requirements by producing an imaging plan that rapidly acquires effective information about the affected area. In the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. Then a dynamic heuristic algorithm embedding a greedy criterion is designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations on the scene of the Wenchuan Earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
28

CAO, Yang-Jie, De-Pei QIAN, Wei-Guo WU and Xiao-She DONG. "Adaptive Scheduling Algorithm Based on Dynamic Core-Resource Partitions for Many-Core Processor Systems". Journal of Software 23, no. 2 (March 6, 2012): 240–52. http://dx.doi.org/10.3724/sp.j.1001.2012.04141.

29

Zhang, Long, Tao Liu, Min Liu and Xionghai Wang. "Scheduling Semiconductor Wafer Fabrication Using a New Fuzzy Association Classification Rules Based on Dynamic Fuzzy Partition". Chinese Journal of Electronics 26, no. 1 (January 1, 2017): 112–17. http://dx.doi.org/10.1049/cje.2016.11.006.

30

Korf, Richard, and Ethan Schreiber. "Optimally Scheduling Small Numbers of Identical Parallel Machines". Proceedings of the International Conference on Automated Planning and Scheduling 23 (June 2, 2013): 144–52. http://dx.doi.org/10.1609/icaps.v23i1.13544.

Abstract:
Given a set of n different jobs, each with an associated running time, and a set of k identical machines, our task is to assign each job to a machine to minimize the time to complete all jobs. In the OR literature, this is called identical parallel machine scheduling, while in AI it is called number partitioning. For eight or more machines, an OR approach based on bin packing appears best, while for fewer machines, a collection of AI search algorithms perform best. We focus here on scheduling up to seven machines, and make several new contributions. One is a new method that significantly reduces duplicate partitions for all values of k, including k = 2. Another is a new version of the Complete-Karmarkar-Karp (CKK) algorithm that minimizes the makespan. A surprising negative result is that dynamic programming is not competitive for this problem, even for k = 2. We also explore the effect of precision of values on the choice of the best algorithm. Despite the simplicity of this problem, a number of different algorithms have been proposed, and the most efficient algorithm depends on the number of jobs, the number of machines, and the precision of the running times.
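
For orientation alongside the exact methods compared in the paper, the textbook greedy baseline for the same problem, longest processing time first (LPT), assigns each job to the currently least-loaded machine. A short sketch (the job times are arbitrary):

```python
# Longest-processing-time (LPT) greedy baseline for scheduling n jobs on
# k identical machines to minimize makespan. The exact methods in the
# paper (e.g. CKK) search for the optimum; LPT is a fast approximation.
import heapq

def lpt_makespan(jobs, k):
    heap = [(0.0, m) for m in range(k)]   # (current load, machine id)
    assignment = [[] for _ in range(k)]
    for job in sorted(jobs, reverse=True):
        load, m = heapq.heappop(heap)     # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(heap, (load + job, m))
    return max(load for load, _ in heap), assignment

print(lpt_makespan([5, 7, 3, 9, 4, 6], k=2))  # makespan 17.0, which is optimal here
```
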
31

Feng, Lv, Gao Chunlin and Ma Kaiyang. "Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph". Open Physics 15, no. 1 (May 4, 2017): 225–32. http://dx.doi.org/10.1515/phys-2017-0024.

Abstract:
With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role on the Internet. The number of P2P network users continues to increase, and the highly dynamic character of the system makes it difficult to obtain the load of other nodes. Therefore, a dynamic load balancing strategy based on hypergraphs is proposed in this article. The scheme develops from the idea of multilevel partitioning in hypergraph theory. It adopts optimized multilevel partitioning algorithms to partition the P2P network into several small areas and assigns each area a supernode for management and load transfer of the nodes in that area. Where global scheduling is difficult to achieve, priority can first be given to load balancing over a number of small ranges. Through node load balance in each small area, the whole network can achieve relative load balance. The experiments indicate that the load distribution of network nodes in our scheme is markedly more compact. It effectively solves the load imbalance problems in P2P networks and improves the scalability and bandwidth utilization of the system.
32

Wang, Na, Yaping Fu and Hongfeng Wang. "A meta-heuristic algorithm for integrated optimization of dynamic resource allocation planning and production scheduling in parallel machine system". Advances in Mechanical Engineering 11, no. 12 (December 2019): 168781401989834. http://dx.doi.org/10.1177/1687814019898347.

Abstract:
With the wide application of advanced information technology and intelligent equipment in manufacturing systems, design and operation decisions have become more interdependent, and their integrated optimization has recently gained considerable attention from the operational research community. This article investigates the problem of integrating dynamic resource allocation and production scheduling in a parallel machine environment. A meta-heuristic algorithm, which employs heuristic-based partition, genetic-based sampling, promising-index calculation, and backtracking strategies, is proposed for solving the investigated integration problem in order to minimize the makespan of the manufacturing system. The experimental results on a set of randomly generated test instances indicate that the presented model is effective and that the proposed algorithm exhibits satisfactory performance, outperforming two state-of-the-art algorithms from the literature.
33

Huang, Weifan, Chin-Chia Wu and Shangchia Liu. "Single-machine batch scheduling problem with job rejection and resource dependent processing times". RAIRO - Operations Research 52, no. 2 (April 2018): 315–34. http://dx.doi.org/10.1051/ro/2017040.

Abstract:
This paper addresses single-machine batch scheduling with job rejection and convex resource allocation. A job is either rejected, in which case a rejection penalty is incurred, or accepted and processed on the machine. The accepted jobs are combined to form batches containing contiguously scheduled jobs. For each batch, a batch-dependent machine setup time, which is a function of the number of batches processed previously, is required before the first job in the batch is processed. Both the setup times and the job processing times are controllable by allocating a continuously divisible nonrenewable resource to the jobs. The objective is to determine an accepted job sequence, a rejected job set, a partition of the accepted job sequence into batches, and a resource allocation that jointly minimize a cost function based on the total delivery dates of the accepted jobs and the job holding, resource consumption, and rejection penalties. A dynamic programming solution algorithm with running time O(n^6) is developed for the problem. It is also shown that the case of the problem with a common setup time can be solved in O(n^5) time.
34

Bloch, Aurelien, Simone Casale-Brunet and Marco Mattavelli. "Dynamic SIMD Parallel Execution on GPU from High-Level Dataflow Synthesis". Journal of Low Power Electronics and Applications 12, no. 3 (July 17, 2022): 40. http://dx.doi.org/10.3390/jlpea12030040.

Abstract:
Developing and fine-tuning software programs for heterogeneous hardware such as CPU/GPU processing platforms is a highly complex endeavor that demands considerable time and effort from software engineers and requires evaluating various fundamental components and features of both the design and the platform to maximize the overall performance. The dataflow programming approach has proven to be an appropriate methodology for reaching such a difficult and complex goal, owing to its intrinsic portability and the possibility of easily decomposing a network of actors onto different processing units of the heterogeneous hardware. Nonetheless, such a design method might not be enough on its own to achieve the desired performance goals, and supporting tools are useful for efficiently exploring the design space so as to optimize the desired performance objectives. This article presents a methodology composed of several stages for enhancing the performance of dataflow software developed in RVC-CAL and generating low-level implementations to be executed on GPU/CPU heterogeneous hardware platforms. The stages comprise a method for the efficient scheduling of parallel CUDA partitions, an optimization of the performance of data transmission tasks across computing kernels, and the exploitation of dynamic programming for introducing SIMD-capable graphics processing unit systems. The methodology is validated on both the quantitative and qualitative side by means of dataflow software application examples running on platforms in various different mapping configurations.
35

Serpen, Gursel, and Jayanta Debnath. "Design and performance evaluation of a parking management system for automated, multi-story and robotic parking structure". International Journal of Intelligent Computing and Cybernetics 12, no. 4 (November 11, 2019): 444–65. http://dx.doi.org/10.1108/ijicc-02-2019-0017.

Abstract:
Purpose: The purpose of this paper is to present the design and performance evaluation, through simulation, of a parking management system (PMS) for a fully automated, multi-story, puzzle-type, robotic parking structure, with the overall objective of minimizing customer wait times while maximizing space utilization.
Design/methodology/approach: The presentation entails the development and integration of a complete suite of path planning, elevator scheduling, and resource allocation algorithms. The PMS aims to manage multiple concurrent requests, in real time and in a dynamic context, for storage and retrieval of vehicles loaded onto robotic carts in a fully automated, multi-story, driving-free parking structure. The algorithm suite employs the incremental informed search algorithm D* Lite with domain-specific heuristics and the uninformed search algorithm Uniform Cost Search for path search and planning. An optimization methodology based on nested partitions and a genetic algorithm is adapted for scheduling a group of elevators. The study considered a typical business day scenario in the center of a metropolis.
Findings: The simulation study indicates that the proposed PMS design can serve concurrent storage-retrieval requests representing a wide range of Poisson-distributed customer arrival rates in real time while requiring reasonable computing resources under realistic scenarios. Customer waiting times for both storage and retrieval requests are within acceptable bounds, set at no more than 5 minutes, even in the presence of up to 100 concurrent storage and retrieval requests. The design accommodates a variety of customer arrival rates and the presence of immobilized vehicles, assumed to be scattered across the floors of the structure, making it deployable in real-time environments.
Originality/value: The intelligent system design is novel, as fully automated robotic parking structures are only now maturing from a technology standpoint.
36

Aydin, Kevin, MohammadHossein Bateni and Vahab Mirrokni. "Distributed Balanced Partitioning via Linear Embedding". Algorithms 12, no. 8 (August 10, 2019): 162. http://dx.doi.org/10.3390/a12080162.

Abstract:
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. For example, in some cases, a big graph can be chopped into pieces that fit on one machine to be processed independently before stitching the results together, leading to certain suboptimality from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communications cost, hence the desire to have a small cut size. We study a distributed balanced-partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks such as MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, for example, via a hierarchical clustering or Hilbert curves. Then we apply four different techniques, including local swaps and minimum cuts on the boundaries of partitions, as well as contraction and dynamic programming. In our empirical study, we compare the above techniques with each other, and also to previous work in distributed graph algorithms, for example, a label-propagation method, FENNEL, and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: for instance, compared to the label-propagation algorithm, we report an improvement of 15–25% in the cut value. We also observe that our algorithms admit scalable distributed implementation for any number of partitions. Finally, we explain three applications of this work at Google: (1) Balanced partitioning is used to route multi-term queries to different replicas in the Google Search backend in a way that reduces cache miss rates by ≈0.5%, which leads to a double-digit gain in the throughput of production clusters. (2) Applied to Google Maps Driving Directions, balanced partitioning minimizes the number of cross-shard queries with the goal of saving CPU usage. This system achieves load balancing by dividing the world graph into several “shards”. Live experiments demonstrate an ≈40% drop in the number of cross-shard queries when compared to a standard geography-based method. (3) In a job scheduling problem for our data centers, we use balanced partitioning to evenly distribute the work while minimizing the amount of communication across geographically distant servers. In fact, the hierarchical nature of our solution goes well with the layering of data center servers, where certain machines are closer to each other and have faster links to one another.
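
The embed-then-cut core of the method can be sketched compactly: order the nodes by a one-dimensional embedding and slice the order into k balanced pieces. The Z-order key below is a simple stand-in for the paper's hierarchical-clustering or Hilbert-curve embeddings, and the coordinates are made up:

```python
# Embed-then-cut sketch of distributed balanced partitioning: sort nodes
# by a one-dimensional embedding key and slice the order into k pieces
# of (nearly) equal size, so nearby nodes tend to share a piece.

def z_order_key(x, y, bits=16):
    key = 0
    for i in range(bits):          # interleave the bits of x and y
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def balanced_partition(coords, k):
    order = sorted(range(len(coords)), key=lambda i: z_order_key(*coords[i]))
    size = -(-len(coords) // k)    # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]

coords = [(0, 0), (1, 0), (8, 9), (9, 8), (0, 1), (9, 9)]
print(balanced_partition(coords, k=2))  # the two spatial clusters separate
```
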
37

Su, Guojun, and Xionghai Wang. "Weighted nested partitions based on differential evolution (WNPDE) algorithm-based scheduling of parallel batching processing machines (BPM) with incompatible families and dynamic lot arrival". International Journal of Computer Integrated Manufacturing 24, no. 6 (June 2011): 552–60. http://dx.doi.org/10.1080/0951192x.2011.562545.

38

Bloch, Aurelien, Simone Casale-Brunet and Marco Mattavelli. "Performance Estimation of High-Level Dataflow Program on Heterogeneous Platforms by Dynamic Network Execution". Journal of Low Power Electronics and Applications 12, no. 3 (June 23, 2022): 36. http://dx.doi.org/10.3390/jlpea12030036.

Pełny tekst źródła
Streszczenie:
The performance of programs executed on heterogeneous parallel platforms largely depends on the design choices regarding how to partition the processing on the various different processing units. In other words, it depends on the assumptions and parameters that define the partitioning, mapping, scheduling, and allocation of data exchanges among the various processing elements of the platform executing the program. The advantage of programs written in languages using the dataflow model of computation (MoC) is that executing the program with different configurations and parameter settings does not require rewriting the application software for each configuration setting, but only requires generating a new synthesis of the execution code corresponding to different parameters. The synthesis stage of dataflow programs is usually supported by automatic code generation tools. Another competitive advantage of dataflow software methodologies is that they are well-suited to support designs on heterogeneous parallel systems as they are inherently free of memory access contention issues and naturally expose the available intrinsic parallelism. So as to fully exploit these advantages and to be able to efficiently search the configuration space to find the design points that better satisfy the desired design constraints, it is necessary to develop tools and associated methodologies capable of evaluating the performance of different configurations and to drive the search for good design configurations, according to the desired performance criteria. The number of possible design assumptions and associated parameter settings is usually so large (i.e., the dimensions and size of the design space) that intuition as well as trial and error are clearly unfeasible, inefficient approaches. This paper describes a method for the clock-accurate profiling of software applications developed using the dataflow programming paradigm such as the formal RVL-CAL language. The profiling can be applied when the application program has been compiled and executed on GPU/CPU heterogeneous hardware platforms utilizing two main methodologies, denoted as static and dynamic. This paper also describes how a method for the qualitative evaluation of the performance of such programs as a function of the supplied configuration parameters can be successfully applied to heterogeneous platforms. The technique was illustrated using two different application software examples and several design points.
39

Petry, M. T., P. Paredes, L. S. Pereira, T. Carlesso, and C. J. Michelon. "Modelling the Soil Water Balance of Maize under No-tillage and Conventional Tillage Systems in Southern Brazil". Agrociencia 19, no. 3 (December 2015): 20. http://dx.doi.org/10.31285/agro.19.249.

Abstract:
No-tillage and crop residue practices could help improve water productivity (WP) in irrigated areas. Mulches increase crop yield and WP by favouring the water status in the root zone and reducing soil evaporation. However, no-till systems are known to change soil physical properties by increasing soil bulk density and reducing soil porosity, which can alter soil water fluxes as well as the soil water dynamics in the soil-plant-atmosphere system. Thus, water balance models combined with field experiments can favour a better understanding of soil water dynamics under different tillage systems and irrigation management. The present study aimed at assessing the performance of the soil water balance model SIMDualKc, which applies the dual crop coefficient approach to partition crop evapotranspiration into its crop transpiration and soil evaporation components, for a maize crop cultivated under no-tillage and conventional tillage systems and under different irrigation management. Two experiments were carried out in the 1999/2000 and 2000/2001 growing seasons in an experimental field of the Agricultural Engineering Department of the Federal University of Santa Maria, Southern Brazil. Treatments consisted of a 2 x 2 factorial scheme, in a completely randomized design, with four replications. The tested treatments were: Factor A, irrigation management (irrigation and terminal water stress, with irrigation ceased after V7); and Factor B, tillage system (no-tillage and conventional tillage). Soil water content was measured three times a week throughout the crop seasons using a neutron probe down to a soil depth of 1.10 m. Irrigation was scheduled using a cumulative crop evapotranspiration of 25 mm as the threshold, with an irrigation depth that raised the soil water content to field capacity. The SIMDualKc model was calibrated for each tillage and irrigation management using data from the first season and validated against data of the 2000/2001 season. Goodness-of-fit indicators were used to assess model performance and included a linear regression through the origin and an ordinary least-squares regression between observed and simulated soil water content, having respectively as indicators the regression coefficient (b0) and the coefficient of determination (R²), together with the Root Mean Square Error (RMSE) and the Nash-Sutcliffe model efficiency (EF). Results show the ability of the model to be further explored to support farm irrigation scheduling and tillage practices in southern Brazil.
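For reference, the goodness-of-fit indicators named above have standard definitions, sketched below in Python for observed and simulated soil water content series (the toy numbers are illustrative only, not data from the study).

# Sketch of the standard goodness-of-fit indicators named in the abstract,
# computed between observed (obs) and simulated (sim) soil water contents.
import math

def goodness_of_fit(obs, sim):
    n = len(obs)
    mean_obs = sum(obs) / n
    # Regression through the origin: b0 = sum(O*S) / sum(O^2)
    b0 = sum(o * s for o, s in zip(obs, sim)) / sum(o * o for o in obs)
    # Root Mean Square Error
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    rmse = math.sqrt(sse / n)
    # Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations
    sst = sum((o - mean_obs) ** 2 for o in obs)
    ef = 1.0 - sse / sst
    return b0, rmse, ef

# Toy series of volumetric soil water content:
obs = [0.30, 0.28, 0.25, 0.27, 0.31]
sim = [0.29, 0.28, 0.26, 0.28, 0.30]
print(goodness_of_fit(obs, sim))  # b0 and EF close to 1 indicate a good fit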
40

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies". Scalable Computing: Practice and Experience 20, no. 2 (2.05.2019): iii–vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Abstract:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies. Cloud computing has been established as the most popular and suitable computing infrastructure, providing on-demand, scalable, pay-as-you-go computing resources and services for state-of-the-art ICT applications that generate massive amounts of data. Though the Cloud is certainly the best-fitting solution for most applications with respect to processing capability and storage, it may not be so for real-time applications. The main problem with the Cloud is latency, as Cloud data centres are typically far from both the data sources and the data consumers. This latency is acceptable for application domains such as enterprise or web applications, but not for modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicles, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart buildings, smart cities, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of applications is that the latency between data generation and consumption be minimal. For that, the generated data need to be processed locally instead of being sent to the Cloud. This approach is known as Edge computing, where the data processing is done at the network edge in edge devices such as set-top boxes, access points, routers, switches, and base stations, which are typically located at the edge of the network. These devices are increasingly being equipped with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures have been proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.), all of which enable IoT and sensor data to be processed closer to the data sources. Among them, Fog computing, a Cisco initiative, has attracted the most attention from both academia and industry and has emerged as a new computing-infrastructure paradigm in recent years. Though Fog computing has been proposed as a computing architecture distinct from the Cloud, it is not meant to replace the Cloud; rather, Fog computing extends Cloud services to network edges, providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) pre-process the data, serve the needs of the associated applications preliminarily, and forward the data to the Cloud if the data need to be stored and analysed further. Fog computing enhances the benefits of smart devices operating not only at the network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and their provisioning and management hold huge potential for enhancing intelligence within computing networks, realizing context-awareness, fast response times, and network traffic offloading. Several applications of Fog computing are already established, for example sustainable smart cities, smart grid, smart logistics, environment monitoring, and video surveillance.
To design and implement Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, the design of efficient algorithms and protocols, availability and reliability, security and privacy, and energy-efficiency and sustainability need to be addressed. Also, to make Fog compatible with the Cloud, several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, and workload balance between Fog and Cloud need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of Scalable Computing: Practice and Experience. We received 20 research papers, of which 14 were selected for publication. The aim of this special issue is to highlight recent trends and the future of Fog and Edge computing, services, and enabling technologies, presenting new dimensions of research to researchers and industry professionals with regard to Fog, Cloud, and Edge computing. Sujata Dash et al. contributed a paper titled “Edge and Fog Computing in Healthcare- A Review”, in which work on fog and mist computing in the area of healthcare informatics is analysed, classified, and discussed. The review focuses on three main aspects: the requirements of IoT-based healthcare models and the services provided by fog computing to address them; the architecture of an IoT-based healthcare system embedding a fog computing layer; and the implementation of fog-computing-layer services, along with their performance and advantages. In addition, the researchers highlight the trade-offs in allocating computational tasks to different levels of the network and elaborate on various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al., in the paper titled “Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware”, proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in hourly billing cycles while providing quality of service to end-users. The proposed technique uses time-series workload forecasting, CPU utilization, and response time in the analysis phase. It is tested using the CloudSim simulator, with R used to implement the prediction model on the ClarkNet web log, and is compared with two baseline approaches, i.e., cost-aware (LRM) and (ARMA). The response time, CPU utilization, and predicted requests are applied in the analysis and planning phases for scaling decisions, and the profit-aware surplus VM selection policy is used in the execution phase to select the appropriate VM for scale-down. The results show that the proposed model for web applications provides fair utilization of resources at minimum cost, thus providing maximum profit to the application provider and QoE to the end users.
Akshi Kumar and Abhilasha Sharma, in the paper titled “Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance”, utilized a semantic knowledge model for investigating public opinion towards the adoption of fog-enabled services for governance and for comprehending the significance of the two s-components (sentic and social) in the aforesaid structure, which together constitute fog-enabled Sentic-Social Governance. Results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology-driven TF-IDF feature extraction to find the opinion mining model with optimal accuracy. The results show that ontology-driven opinion mining for feature extraction in polarity classification outperforms the traditional TF-IDF method, validated over baseline supervised learning algorithms, with an average improvement of 7.3% in accuracy and an approximately 38% reduction in features. Avinash Kaur and Pooja Gupta, in the paper titled “Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing”, proposed a novel hybrid balanced task clustering algorithm that uses the impact factor of workflows along with the workflow structure; with this technique, tasks can be clustered either vertically or horizontally based on the value of the impact factor. The proposed algorithm is tested on WorkflowSim, an extension of CloudSim, executing a DAG model of the workflow. It was evaluated on workflow execution time and performance gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB), and Horizontal Impact Factor Balancing (HIFB); the results show that the proposed algorithm improves workflow makespan by roughly 5-10%, depending on the workflow used. Pijush Kanti Dutta Pramanik et al., in the paper titled “Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges”, presented a comprehensive statistical survey of the various commercial CPUs, GPUs, and SoCs for smartphones, confirming the capability of SCC as an alternative to HPC. An exhaustive survey is presented on the present state and optimistic future of continuous improvement and research on different aspects of smartphone batteries and other alternative power sources, which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam, in the paper titled “The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud”, proposed a novel method to detect slow HTTP DDoS attacks in the cloud, which would otherwise consume all available server resources and make them unavailable to real users. The proposed method is implemented using the OpenStack cloud platform with the slowHTTPTest tool; the results show that the technique detects the attack efficiently. Mandeep Kaur and Rajni Mohana, in the paper titled “Static Load Balancing Technique for Geographically partitioned Public Cloud”, proposed a novel approach focused on load balancing in a partitioned public cloud by combining centralized and decentralized approaches, assuming the presence of a fog layer. A load balancer entity is used for decentralized load balancing at the partitions, and a controller entity is used at the centralized level to balance the overall load across partitions.
Results are compared with the First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms, comparing the waiting time, finish time, and actual run time of tasks. To reduce the number of unhandled jobs, a new load state is introduced that checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and the wastage of resources that may occur due to predefined threshold values. Mukta and Neeraj Gupta, in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space”, propose an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate ABW on a link. The major contributions of the proposed work are: (i) it uses mathematical models based on renewal theory to calculate the collision probability of data packets, which makes the process simple and accurate; and (ii) it considers mobility in 3-D space to predict link failure and provide accurate admission control. The NS-2 simulator was used to compare AABWM with AODV, ABE, IAB, and IBEM on throughput, packet loss ratio, and data delivery; the results show that AABWM performs better than the other approaches. R. Sridharan and S. Domnic, in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment”, proposed a novel heuristic, the IcAPER (Inter-communication Aware Placement for Elastic Requests) algorithm. The proposed algorithm uses a machine in the network neighborhood for placement once the current resource is fully utilized by the application. The performance of the IcAPER algorithm is compared with the First Come First Serve (FCFS), Random, and First Fit Decreasing (FFD) algorithms on (a) resource utilization, (b) resource fragmentation, and (c) the number of requests with intercommunicating tasks placed on the same PM, using the CloudSim simulator. Simulation results show IcAPER maps 34% more tasks onto the same PM and increases resource utilization by 13% while decreasing resource fragmentation by 37.8% when compared to the other algorithms. Velliangiri S. et al., in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment”, proposed a novel security protocol comprising two stages: in the first stage, groups are created using the trust factor and a key distribution security protocol is developed, which performs the communication process among the virtual machine nodes, creating several groups based on cluster and trust-factor methods; in the second stage, an ECC (Elliptic Curve Cryptography)-based key distribution security protocol is developed. The performance of the trust-factor-based key distribution protocol is compared with the existing ECC and Diffie-Hellman key exchange techniques; the results show that the proposed security protocol provides more secure communication and better resource utilization than the ECC and Diffie-Hellman key exchange techniques in the hybrid cloud. Vivek Kumar Prasad et al., in the paper titled “Influence of Monitoring: Fog and Edge Computing”, discussed various techniques for monitoring in edge and fog computing and their advantages, in addition to a case study based on a healthcare monitoring system. Avinash Kaur et al.
elaborated a comprehensive view of existing data placement schemes proposed in the literature for cloud computing, classified them based on their access capabilities and objectives, and compared them. Parminder Singh et al. presented a comprehensive review of auto-scaling techniques for web applications in cloud computing; the complete taxonomy of the reviewed articles covers parameters such as the auto-scaling approach, resources, monitoring tool, experiment, workload, and metric. Simar Preet Singh et al., in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform”, proposed a novel scheme to improve user contentment by improving the cost-to-operation-length ratio, reducing customer churn, and boosting operational revenue. The proposed scheme reduces the queue size by effectively allocating resources, resulting in quicker completion of user workflows. The results are evaluated against a state-of-the-art non-power-aware task scheduling mechanism and analyzed using energy, SLA infringement, and workflow execution delay as parameters. The performance of the proposed scheme was analyzed in experiments specifically designed to examine various aspects of workflow processing on the given fog resources. The LRR (35.85 kWh) model was found to be the most efficient on the basis of average energy consumption in comparison to the LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh), and IQR (47.87 kWh) models. The LRR model was also observed to be the leader when compared on the basis of the number of VM migrations: LRR (2520 VMs) was the best contender on the mean number of VM migrations in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs), and IQR (5352 VMs).
41

Dare-Idowu, Oluwakemi, Lionel Jarlan, Valerie Le-Dantec, Vincent Rivalland, Eric Ceschia, Aaron Boone, and Aurore Brut. "Hydrological Functioning of Maize Crops in Southwest France Using Eddy Covariance Measurements and a Land Surface Model". Water 13, no. 11 (25.05.2021): 1481. http://dx.doi.org/10.3390/w13111481.

Abstract:
The primary objective of this study is to evaluate the representation of the energy budget for irrigated maize crops in soil–vegetation–atmosphere transfer (SVAT) models. To this end, a comparison between the original version of the interactions between the soil–biosphere–atmosphere (ISBA) model, based on a single-surface energy balance, and the new ISBA-multi-energy balance (ISBA-MEB) option was carried out. The second objective is to analyze the intra- and inter-seasonal variability of the crop water budget by implementing ISBA and ISBA-MEB over six irrigated maize seasons between 2008 and 2019 in Lamasquère, southwest France. Seasonal dynamics of the convective fluxes were properly reproduced by both models, with R² ranging between 0.66 and 0.80 (RMSE less than 59 W m⁻²) for the sensible heat flux and between 0.77 and 0.88 (RMSE less than 59 W m⁻²) for the latent heat flux. Statistical metrics also showed that over the six crop seasons, for the turbulent fluxes, ISBA-MEB was consistently in better agreement with the in situ measurements, with RMSE 8–30% lower than ISBA, particularly when the canopy was heterogeneous. The ability of both models to partition the evapotranspiration (ET) term between soil evaporation and plant transpiration was also acceptable, as transpiration predictions compared very well with the available sap flow measurements during the summer of 2015 (ISBA-MEB had slightly better statistics than ISBA, with an R² of 0.91 and an RMSE of 0.07 mm h⁻¹). Finally, the results from the analysis of the inter-annual variability of the crop water budget can be summarized as follows: (1) the partitioning of ET revealed strong year-to-year variability, with transpiration ranging between 40% and 67% of total ET, while soil evaporation was dominant in 2008 and 2010 due to late and poor canopy development; (2) drainage losses were close to zero because of an impervious layer at 60 cm depth; and (3) this very specific condition limited the inter-annual variability of irrigation scheduling, as crops can always extract water stored in the root zone.
42

Hobbs, Clara, Zelin Tong, Joshua Bakita, and James H. Anderson. "Statically optimal dynamic soft real-time semi-partitioned scheduling". Real-Time Systems, 2.01.2021. http://dx.doi.org/10.1007/s11241-020-09359-8.

43

Cucinotta, Tommaso, Alexandre Amory, Gabriele Ara, Francesco Paladino, and Marco Di Natale. "Multi-Criteria Optimization of Real-Time DAGs on Heterogeneous Platforms under P-EDF". ACM Transactions on Embedded Computing Systems, 13.04.2023. http://dx.doi.org/10.1145/3592609.

Abstract:
This paper tackles the problem of optimal placement of complex real-time embedded applications on heterogeneous platforms. Applications are composed of directed acyclic graphs of tasks, with each DAG having a minimum inter-arrival period for its activation requests and an end-to-end deadline within which all of the computations need to terminate after each activation. The platforms of interest are heterogeneous power-aware multi-core platforms with DVFS capabilities, including big.LITTLE Arm architectures, and platforms with GPU or FPGA hardware accelerators with Dynamic Partial Reconfiguration capabilities. Tasks can be deployed on CPUs using partitioned EDF-based scheduling. Additionally, some of the tasks may have an alternate implementation available for one of the accelerators on the target platform, which are assumed to serve requests in non-preemptive FIFO order. The system can be optimized by minimizing power consumption while respecting precise timing constraints; by maximizing the applications’ slack while respecting given power consumption constraints; or by a combination of these, in a multi-objective formulation. We propose an off-line optimization of the mentioned problem based on mixed-integer quadratic constraint programming (MIQCP). The optimization provides the DVFS configuration of all the CPUs (or accelerators) capable of frequency switching and the placement to be followed by each task in the DAGs, including the software-vs-hardware implementation choice for tasks that can be hardware-accelerated. For relatively big problems, we developed heuristic solvers capable of providing suboptimal solutions in significantly reduced time compared to the MIQCP strategy, thus widening the applicability of the proposed framework. We validate the approach by running a set of randomly generated DAGs on Linux under SCHED_DEADLINE, deployed onto two real boards, one with an Arm big.LITTLE architecture and the other with FPGA acceleration, verifying that the experimental runs meet the theoretical expectations in terms of timing and power optimization goals.
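As a minimal illustration of the partitioned EDF setting assumed for the CPU tasks, the sketch below applies the classic first-fit heuristic with the EDF utilization test (a core stays schedulable for implicit-deadline tasks iff its total utilization is at most 1); the paper's MIQCP and heuristics optimize a much richer model with DVFS, accelerators, and end-to-end DAG deadlines.

# Illustrative sketch (not the paper's MIQCP/heuristics): first-fit
# partitioning of implicit-deadline tasks onto cores under partitioned EDF,
# using the classic utilization test sum(C_i/T_i) <= 1 per core.

def first_fit_pedf(tasks, num_cores):
    """tasks: list of (wcet, period). Returns a core index per task, or None
    if the first-fit heuristic fails to place some task."""
    load = [0.0] * num_cores           # utilization already packed on each core
    placement = []
    for wcet, period in tasks:
        u = wcet / period
        for core in range(num_cores):
            if load[core] + u <= 1.0:  # EDF keeps a core schedulable iff U <= 1
                load[core] += u
                placement.append(core)
                break
        else:
            return None                # no core can take this task
    return placement

# Toy task set: (WCET, period) pairs with total utilization 1.6 on 2 cores.
tasks = [(2, 5), (3, 10), (4, 8), (1, 4), (3, 20)]
print(first_fit_pedf(tasks, num_cores=2))  # [0, 0, 1, 0, 1]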
44

Chen, Wei-Ju, Peng Wu, Pei-Chi Huang, Aloysius K. Mok, and Song Han. "Regular Composite Resource Partitioning and Reconfiguration in Open Systems". ACM Transactions on Embedded Computing Systems, 18.07.2023. http://dx.doi.org/10.1145/3609424.

Abstract:
We consider the problem of resource provisioning for real-time cyber-physical applications in an open system environment, where there does not exist a global resource scheduler with complete knowledge of the real-time performance requirements of each individual application that shares the resources with the other applications. The Regularity-based Resource Partition (RRP) model is an effective strategy to hierarchically partition and assign various resource slices among such applications. However, previous work on the RRP model only discusses uniform resource environments, where resources are implicitly assumed to be synchronized and clocked at the same frequency. The challenge is that a task utilizing multiple resources may experience unexpected delays in non-uniform environments, where resources are clocked at different frequencies. This paper extends the RRP model to non-uniform multi-resource open system environments to tackle this problem. It first introduces a novel composite resource partition abstraction and then proposes algorithms to construct and reconfigure composite resource partitions. Specifically, the Acyclic Regular Composite Resource Partition Scheduling (ARCRP-S) algorithm constructs regular composite resource partitions, and the Acyclic Regular Composite Resource Partition Dynamic Reconfiguration (ARCRP-DR) algorithm reconfigures the composite resource partitions at run time upon requests for partition configuration changes. Our experimental results show that, compared with state-of-the-art methods, ARCRP-S can prevent unexpected resource supply shortfalls and improve schedulability by up to 50%. On the other hand, ARCRP-DR can guarantee the resource supply during the reconfiguration with moderate computational overhead.
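As background, here is a minimal sketch of the slot-based resource-partition abstraction that RRP-style models build on: a partition owns a set of time slots within a repeating period, and its supply function counts the slots granted over an interval. This deliberately ignores RRP's regularity constraints and the composite, multi-resource construction that is the paper's actual contribution.

# Minimal sketch of a slot-based resource partition: the partition owns a
# fixed set of slot offsets S within a repeating period P, so its
# availability factor is |S|/P, and supply(t) counts slots granted in [0, t).

class SlotPartition:
    def __init__(self, slots, period):
        self.slots = set(slots)   # slot offsets owned within each period
        self.period = period

    @property
    def availability_factor(self):
        return len(self.slots) / self.period

    def supply(self, t):
        """Time units granted to this partition during [0, t)."""
        full, rem = divmod(t, self.period)
        return full * len(self.slots) + sum(1 for s in self.slots if s < rem)

# A partition holding slots {0, 3} of every 4-slot period: alpha = 0.5.
p = SlotPartition(slots=[0, 3], period=4)
print(p.availability_factor)               # 0.5
print([p.supply(t) for t in range(1, 9)])  # [1, 1, 1, 2, 3, 3, 3, 4]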
45

Lu, SenXing, Mingming Zhao, Chunlin Li, Quanbing Du, and Youlong Luo. "Time-Aware Data Partition Optimization and Heterogeneous Task Scheduling Strategies in Spark Clusters". Computer Journal, 14.03.2023. http://dx.doi.org/10.1093/comjnl/bxad017.

Abstract:
The Spark computing framework provides an efficient solution to address the major requirements of big data processing, but data partitioning and job scheduling in the Spark framework are the two major bottlenecks that limit Spark's performance. In the Spark Shuffle phase, the data skew caused by unbalanced data partitioning increases job completion time. In response to this problem, a balanced partitioning strategy for intermediate data is proposed in this article, which considers the characteristics of the intermediate data, establishes a data skew model, and proposes a dynamic partitioning algorithm. In Spark heterogeneous clusters, because of differences in node performance and task requirements, the default task scheduling algorithm cannot complete scheduling efficiently, which leads to low task processing efficiency. To deal with this problem, an efficient job scheduling strategy is proposed in this article, which integrates node performance and task requirements and uses a task scheduling algorithm based on a greedy strategy. The experimental results prove that the proposed dynamic partitioning algorithm for intermediate data effectively alleviates the drop in task processing efficiency caused by data skew and shortens overall task completion time. The proposed job scheduling strategy efficiently completes job scheduling in heterogeneous clusters, allocates jobs to nodes in a balanced manner, decreases overall job completion time, and increases system resource utilization.
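A hypothetical sketch of the skew-mitigation idea (not the article's exact algorithm): rather than hashing keys blindly, key groups are assigned greedily, largest first, to the currently least-loaded reduce partition.

# Hypothetical sketch of skew-aware balancing of intermediate data: key
# groups are assigned greedily (largest first) to the least-loaded reduce
# partition. Illustrative only, not the article's dynamic algorithm.
import heapq

def balanced_partition(key_sizes, num_partitions):
    """key_sizes: {key: intermediate-data size}. Returns {key: partition}."""
    heap = [(0, p) for p in range(num_partitions)]   # (load, partition id)
    heapq.heapify(heap)
    assignment = {}
    for key, size in sorted(key_sizes.items(), key=lambda kv: -kv[1]):
        load, part = heapq.heappop(heap)             # least-loaded partition
        assignment[key] = part
        heapq.heappush(heap, (load + size, part))
    return assignment

# Skewed key distribution: one hot key and several small ones.
sizes = {"hot": 100, "a": 20, "b": 15, "c": 10, "d": 5}
print(balanced_partition(sizes, num_partitions=2))
# {'hot': 0, 'a': 1, 'b': 1, 'c': 1, 'd': 1} -> loads 100 vs 50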
46

Maleki, Neda, Hamid Reza Faragardi, Amir Masoud Rahmani, Mauro Conti, and Jay Lofstead. "TMaR: a two-stage MapReduce scheduler for heterogeneous environments". Human-centric Computing and Information Sciences 10, no. 1 (7.10.2020). http://dx.doi.org/10.1186/s13673-020-00247-5.

Abstract:
In the context of MapReduce task scheduling, many algorithms focus mainly on the scheduling of Reduce tasks, assuming that the scheduling of Map tasks is already done. However, in cloud deployments of MapReduce, the input data are located on remote storage, which makes the scheduling of Map tasks important as well. In this paper, we propose a two-stage Map and Reduce task scheduler for heterogeneous environments, called TMaR. TMaR schedules Map and Reduce tasks on the servers that minimize the task finish time in each stage, respectively. We employ a dynamic partition binder for Reduce tasks in the Reduce stage to lighten the shuffling traffic. Indeed, TMaR minimizes the makespan of a batch of tasks in heterogeneous environments while considering network traffic. The simulation results demonstrate that TMaR outperforms Hadoop-stock and Hadoop-A in terms of makespan and network traffic, improving performance by an average of 29%, 36%, and 14% on the Wordcount, Sort, and Grep benchmarks, respectively. Besides, TMaR reduces power consumption by up to 12%.
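The per-stage placement idea can be sketched as a greedy minimum-completion-time assignment over heterogeneous servers; the server speeds, task sizes, and cost model below are invented for illustration and simplify away the shuffle-traffic term TMaR also considers.

# Illustrative sketch of per-stage placement: each task goes to the
# heterogeneous server that minimizes its finish time, given the server's
# speed and the work already queued on it. Values are made up.

def schedule_stage(task_sizes, server_speeds):
    """Greedy minimum-completion-time assignment for one stage (Map or Reduce).
    Returns (assignment list, per-server finish times)."""
    finish = [0.0] * len(server_speeds)
    assignment = []
    for size in task_sizes:
        # Finish time if this task were appended to each server's queue.
        candidates = [finish[s] + size / server_speeds[s]
                      for s in range(len(server_speeds))]
        best = min(range(len(server_speeds)), key=lambda s: candidates[s])
        finish[best] = candidates[best]
        assignment.append(best)
    return assignment, finish

# Two servers, the second twice as fast; four Map tasks of varying size.
assignment, finish = schedule_stage([4, 4, 2, 6], server_speeds=[1.0, 2.0])
print(assignment, max(finish))  # [1, 0, 1, 1] 6.0 -> makespan of the stage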
47

Rekha, S., and C. Kalaiselvi. "Load Balancing Using SJF-MMBF and SJF-ELM Approach". International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 10.01.2021, 74–86. http://dx.doi.org/10.32628/cseit21714.

Abstract:
This paper studies the delay-optimal virtual machine (VM) scheduling problem in cloud computing systems, which have a constant amount of infrastructure resources such as CPU, memory, and storage in the resource pool. The cloud computing system provides VMs as services to users. Cloud users request various types of VMs randomly over time, and the requested VM-hosting durations vary vastly. A multi-level queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type, and each queue has its own scheduling algorithm. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Multi-level queue scheduling is performed using a Particle Swarm Optimization algorithm (MQPSO). A scheme that combines Shortest-Job-First (SJF) buffering and Min-Min Best Fit (MMBF) scheduling, i.e., SJF-MMBF, is proposed to determine the solutions. Another scheme that combines SJF buffering and Extreme Learning Machine (ELM)-based scheduling, i.e., SJF-ELM, is further proposed to avoid the potential for job starvation in SJF-MMBF. In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. The simulation results also illustrate that SJF-ELM is optimal in a heavily loaded and highly dynamic environment and is efficient in provisioning the average job hosting rate.
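A minimal sketch of the SJF buffering component: arriving jobs wait in a buffer keyed by service demand, and the shortest job is dispatched first whenever capacity frees up (this illustrates only the SJF part, not MMBF or the ELM-based scheduler).

# Minimal sketch of SJF buffering: a priority queue keyed by service time,
# so the shortest waiting job is always dispatched first. Illustrative only.
import heapq

class SJFBuffer:
    def __init__(self):
        self._heap = []

    def submit(self, job_id, service_time):
        heapq.heappush(self._heap, (service_time, job_id))

    def dispatch(self):
        """Pop the job with the shortest service time, or None if empty."""
        return heapq.heappop(self._heap)[1] if self._heap else None

buf = SJFBuffer()
for job, t in [("backup", 120), ("thumbnail", 3), ("report", 45)]:
    buf.submit(job, t)
print([buf.dispatch() for _ in range(3)])  # ['thumbnail', 'report', 'backup']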
48

Monniot, Julien, François Tessier, Matthieu Robert, and Gabriel Antoniu. "Supporting dynamic allocation of heterogeneous storage resources on HPC systems". Concurrency and Computation: Practice and Experience, 16.08.2023. http://dx.doi.org/10.1002/cpe.7890.

Abstract:
Scaling up large-scale scientific applications on supercomputing facilities largely depends on the ability to scale up data storage and retrieval efficiently. However, there is an ever-widening gap between I/O and computing performance. To address this gap, an increasingly popular approach consists in introducing new intermediate storage tiers (node-local storage, burst buffers, etc.) between the compute nodes and the traditional global shared parallel file system. Unfortunately, without advanced techniques to allocate and size these resources, they remain underutilized. In this article, we investigate how heterogeneous storage resources can be allocated on a high-performance computing platform, just like compute resources. To this purpose, we introduce StorAlloc, a simulator used as a testbed for assessing storage-aware job scheduling algorithms and evaluating various storage infrastructures. We illustrate its usefulness by showing, through a large series of experiments, how this tool can be used to size a burst-buffer partition on a top-tier supercomputer using the job history of a production year.
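A hypothetical sketch of the kind of storage-aware allocation policy such a simulator can evaluate: jobs request burst-buffer capacity and are placed first-fit onto storage nodes; the node sizes and API are invented for illustration and do not reflect StorAlloc's interface.

# Hypothetical sketch: first-fit allocation of job storage requests against
# a partitioned burst buffer. Node capacities and API are illustrative only.

class BurstBuffer:
    def __init__(self, node_capacities_gb):
        self.free = list(node_capacities_gb)
        self.alloc = {}  # job id -> (node index, gigabytes held)

    def allocate(self, job_id, request_gb):
        """First-fit: return the node index serving the request, or None."""
        for node, free_gb in enumerate(self.free):
            if free_gb >= request_gb:
                self.free[node] -= request_gb
                self.alloc[job_id] = (node, request_gb)
                return node
        return None  # a real scheduler would queue the job or fall back to the PFS

    def release(self, job_id):
        node, gb = self.alloc.pop(job_id)
        self.free[node] += gb

bb = BurstBuffer([512, 512, 1024])
print(bb.allocate("job-1", 400))  # 0
print(bb.allocate("job-2", 300))  # 1 (node 0 has only 112 GB left)
bb.release("job-1")
print(bb.free)                    # [512, 212, 1024]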