Academic literature on the topic 'Job Replication'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Job Replication.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Job Replication"

1

Kim, Yusik, Rhonda Righter, and Ronald Wolff. "Job replication on multiserver systems." Advances in Applied Probability 41, no. 2 (2009): 546–75. http://dx.doi.org/10.1239/aap/1246886623.

Full text
Abstract:
Parallel processing is a way to use resources efficiently by processing several jobs simultaneously on different servers. In a well-controlled environment where the status of the servers and the jobs are well known, everything is nearly deterministic and replicating jobs on different servers is obviously a waste of resources. However, in a poorly controlled environment where the servers are unreliable and/or their capacity is highly variable, it is desirable to design a system that is robust in the sense that it is not affected by the poorly performing servers. By replicating jobs and assigning them to several different servers simultaneously, we not only achieve robustness but we can also make the system more efficient under certain conditions so that the jobs are processed at a faster rate overall. In this paper we consider the option of replicating jobs and study how the performance of different ‘degrees’ of replication, ranging from no replication to full replication, affects the performance of a system of parallel servers.
APA, Harvard, Vancouver, ISO, and other styles
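The trade-off this abstract describes (replication wastes capacity when servers are predictable, but cuts latency when service rates are highly variable) can be illustrated with a toy Monte Carlo sketch. All rates and parameters below are invented for illustration and are not taken from the paper:

```python
import random

def sim_completion_time(n_servers, degree, n_jobs=1000, seed=0):
    """Crude Monte Carlo sketch: each job's service time on a server is
    exponential with a server-specific rate; a job replicated to `degree`
    servers finishes as soon as its fastest copy does."""
    rng = random.Random(seed)
    # Heterogeneous server speeds: some servers are much slower than others.
    rates = [rng.choice([0.2, 1.0, 5.0]) for _ in range(n_servers)]
    total = 0.0
    for _ in range(n_jobs):
        servers = rng.sample(range(n_servers), degree)
        total += min(rng.expovariate(rates[s]) for s in servers)
    return total / n_jobs

# Mean per-job latency shrinks as the replication degree grows, at the
# cost of occupying more servers per job.
for d in (1, 2, 4):
    print(d, sim_completion_time(8, d))
```

Under these assumptions the higher degrees win on latency precisely because the speed variability is large; with near-identical rates the gain shrinks, matching the abstract's point about well-controlled environments.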
2

Kim, Yusik, Rhonda Righter, and Ronald Wolff. "Job replication on multiserver systems." Advances in Applied Probability 41, no. 2 (2009): 546–75. http://dx.doi.org/10.1017/s0001867800003414.

Full text
3

Lynch, Beverly P. "Job Satisfaction in Libraries: A Replication." Library Quarterly 57, no. 2 (1987): 190–202. http://dx.doi.org/10.1086/601871.

Full text
4

Raaijmakers, Youri, and Sem Borst. "Achievable Stability in Redundancy Systems." ACM SIGMETRICS Performance Evaluation Review 49, no. 1 (2022): 27–28. http://dx.doi.org/10.1145/3543516.3456267.

Full text
Abstract:
We investigate the achievable stability region for redundancy systems and a quite general workload model with different job types and heterogeneous servers, reflecting job-server affinity relations which may arise from data locality issues and soft compatibility constraints. Under the assumption that job types are known beforehand we establish for New-Better-than-Used (NBU) distributed speed variations that no replication gives a strictly larger stability region than replication. Strikingly, this does not depend on the underlying distribution of the intrinsic job sizes, but observing the job types is essential for this statement to hold. In case of non-observable job types we show that for New-Worse-than-Used (NWU) distributed speed variations full replication gives a larger stability region than no replication.
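The NBU/NWU distinction that drives this result is standard: a lifetime distribution is New-Better-than-Used (NBU) when P(X > s + t) <= P(X > s)P(X > t) for all s, t >= 0, New-Worse-than-Used (NWU) when the inequality is reversed, and the memoryless exponential sits exactly on the boundary, satisfying both with equality. A quick numeric check (illustrative only, using the Weibull family as the textbook example of each regime):

```python
import math

def survival_exp(x, rate=1.0):
    # Exponential survival function P(X > x)
    return math.exp(-rate * x)

def survival_weibull(x, shape, scale=1.0):
    # Weibull survival: exp(-(x/scale)^shape); shape > 1 is NBU, shape < 1 is NWU.
    return math.exp(-((x / scale) ** shape))

def nbu_gap(surv, s, t):
    """Positive when P(X>s)P(X>t) - P(X>s+t) > 0, i.e. NBU at (s, t)."""
    return surv(s) * surv(t) - surv(s + t)

print(nbu_gap(survival_exp, 1.0, 2.0))                        # boundary: ~0
print(nbu_gap(lambda x: survival_weibull(x, 2.0), 1.0, 2.0))  # NBU: positive
print(nbu_gap(lambda x: survival_weibull(x, 0.5), 1.0, 2.0))  # NWU: negative
```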
5

Saadat, Nazanin, and Amir Masoud Rahmani. "A Two-Level Fuzzy Value-Based Replica Replacement Algorithm in Data Grids." International Journal of Grid and High Performance Computing 8, no. 4 (2016): 78–99. http://dx.doi.org/10.4018/ijghpc.2016100105.

Full text
Abstract:
One of the challenges of data grids is to access widely distributed data quickly and efficiently while providing maximum data availability with minimum latency. Data replication is an efficient way to address this challenge: storing replicas makes the same data accessible in different locations of the data grid and can shorten the time needed to fetch files. However, since the number and storage capacity of grid sites are limited, an optimized and effective replacement algorithm is needed to improve the efficiency of replication. In this paper, the authors propose a novel two-level replacement algorithm which uses a Fuzzy Replica Preserving Value Evaluator System (FRPVES) to evaluate the value of each replica. The algorithm was tested using OptorSim, a grid simulator developed by the European Data Grid project. Simulation results show that the proposed algorithm outperforms other algorithms in terms of job execution time, total number of replications, and effective network usage.
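The core replacement idea, scoring each replica with a preserving value and evicting the least valuable one when storage is full, can be sketched roughly as follows. The scoring function below is a crude stand-in for the paper's fuzzy FRPVES, invented purely for illustration:

```python
class ReplicaStore:
    """Illustrative value-based replica replacement (not the paper's exact
    FRPVES): each replica's preserving value combines access count and
    recency; the lowest-valued replica is evicted when the store is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.replicas = {}  # name -> (access_count, last_access_time)

    def value(self, name, now):
        count, last = self.replicas[name]
        age = now - last
        return count / (1.0 + age)  # frequent and recent => high value

    def access(self, name, now):
        """Record an access; return the evicted replica's name, if any."""
        if name in self.replicas:
            count, _ = self.replicas[name]
            self.replicas[name] = (count + 1, now)
            return None
        victim = None
        if len(self.replicas) >= self.capacity:
            victim = min(self.replicas, key=lambda n: self.value(n, now))
            del self.replicas[victim]
        self.replicas[name] = (1, now)
        return victim

store = ReplicaStore(capacity=2)
store.access("a", 0)
store.access("a", 1)
store.access("b", 2)
print(store.access("c", 3))  # evicts the lower-valued replica "b"
```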
6

Levinson, Edward M. "Job Satisfaction among School Psychologists: A Replication Study." Psychological Reports 65, no. 2 (1989): 579–84. http://dx.doi.org/10.2466/pr0.1989.65.2.579.

Full text
Abstract:
The purpose of this study was to determine whether the results of Anderson, et al.'s 1984 national study of the job satisfaction of NASP affiliated school psychologists could be replicated in one state and with a sample that comprised both NASP-affiliated and nonaffiliated school psychologists. The job satisfaction of 362 school psychologists in Pennsylvania was analyzed using demographic data and the Minnesota Satisfaction Questionnaire and other procedures nearly identical to those employed by Anderson, et al. Current results paralleled the results of Anderson, et al. both in the percentages of school psychologists who showed various levels of job satisfaction and in regard to sources of satisfaction and dissatisfaction.
7

Chang, Ruay-Shiung, Jih-Sheng Chang, and Shin-Yi Lin. "Job scheduling and data replication on data grids." Future Generation Computer Systems 23, no. 7 (2007): 846–60. http://dx.doi.org/10.1016/j.future.2007.02.008.

Full text
8

Shwe, Thanda, and Masayoshi Aritsugi. "A Data Re-Replication Scheme and Its Improvement toward Proactive Approach." ASEAN Engineering Journal 8, no. 1 (2018): 36–52. http://dx.doi.org/10.11113/aej.v8.15497.

Full text
Abstract:
With increasing demand for cloud computing technology, cloud infrastructures are utilized to their maximum limits. There is a high possibility that the commodity servers used in Hadoop Distributed File System (HDFS) based cloud data centers will fail often. However, the selection of source and destination data nodes for re-replication of data has so far not been adequately addressed. To balance the workload among nodes during the re-replication phase and reduce the impact on the cluster's normal jobs, we develop a re-replication scheme that takes both performance and reliability perspectives into consideration. The appropriate nodes for re-replication are selected with the Analytic Hierarchy Process (AHP), taking into account the resources currently utilized by the cluster's normal jobs. Toward effective data re-replication, we investigate the feasibility of using linear regression and local regression methods to predict resource utilization. Simulation results show that our proposed approach can reduce re-replication time, total job execution time, and top-of-rack network traffic compared to baseline HDFS, consequently increasing the reliability of the system and reducing the performance impact on users' jobs. Regarding the feasibility of the prediction methods, both regression methods are good enough to predict short-term future resource utilization for re-replication.
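The linear-regression part of such a scheme, forecasting a node's near-future utilization from its recent samples so that re-replication targets lightly loaded nodes, can be sketched as an ordinary least-squares extrapolation. The node names and utilization figures below are hypothetical:

```python
def predict_next(samples):
    """Fit a least-squares line through (0, s0), (1, s1), ... and
    extrapolate one step ahead: a short-horizon utilization forecast."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope = sxy / sxx if sxx else 0.0
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # forecast for the next interval

# Pick the node whose predicted CPU utilization is lowest (names hypothetical):
history = {"node-a": [0.9, 0.8, 0.85, 0.9], "node-b": [0.3, 0.35, 0.3, 0.4]}
target = min(history, key=lambda n: predict_next(history[n]))
print(target)  # node-b
```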
9

Beigrezaei, Mahsa, Abolfazel Toroghi Haghighat, and Seyedeh Leili Mirtaheri. "Improve Performance by a Fuzzy-Based Dynamic Replication Algorithm in Grid, Cloud, and Fog." Mathematical Problems in Engineering 2021 (June 21, 2021): 1–14. http://dx.doi.org/10.1155/2021/5522026.

Full text
Abstract:
The efficiency of data-intensive applications in distributed environments such as Cloud, Fog, and Grid is directly related to data access delay. Delays caused by queue workload and delays caused by failure can decrease data access efficiency. Data replication is a critical technique in reducing access latency. In this paper, a fuzzy-based replication algorithm is proposed, which avoids the mentioned imposed delays by considering a comprehensive set of significant parameters to improve performance. The proposed algorithm selects the appropriate replica using a hierarchical method, taking into account the transmission cost, queue delay, and failure probability. The algorithm determines the best place for replication using a fuzzy inference system considering the queue workload, number of accesses in the future, last access time, and communication capacity. It uses the Simple Exponential Smoothing method to predict future file popularity. The OptorSim simulator evaluates the proposed algorithm in different access patterns. The results show that the algorithm improves performance in terms of the number of replications, the percentage of storage filled, and the mean job execution time. The proposed algorithm has the highest efficiency in random access patterns, especially random Zipf access patterns. It also has good performance when the number of jobs and file size are increased.
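The Simple Exponential Smoothing step this abstract mentions is a standard recursion, level_t = alpha * x_t + (1 - alpha) * level_{t-1}, whose final level serves as the one-step-ahead forecast of file popularity. A minimal sketch (the access counts and alpha are invented for illustration):

```python
def ses_forecast(observations, alpha=0.5):
    """Simple Exponential Smoothing: the smoothed level gives more weight
    to recent observations; the final level is the next-period forecast
    (here: predicted file popularity, e.g. accesses per interval)."""
    level = observations[0]
    for x in observations[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

accesses = [10, 12, 8, 14]  # accesses per interval for one file
print(ses_forecast(accesses, alpha=0.5))  # 11.75
```

A larger alpha tracks recent swings in popularity more aggressively; a smaller alpha smooths them out, which is the usual tuning knob in such predictors.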
10

Kapur, Kanika. "The Impact of Health on Job Mobility: A Measure of Job Lock." ILR Review 51, no. 2 (1998): 282–98. http://dx.doi.org/10.1177/001979399805100208.

Full text
Abstract:
The author analyzes data from the National Medical Expenditure Survey of 1987 to measure the importance of “job lock”—the reduction in job mobility due to the non-portability of employer-provided health insurance. Refining the approach commonly used by other researchers investigating the same question, the author finds insignificant estimates of job lock; moreover, the confidence intervals of these estimates exclude large levels of job lock. A replication of an influential previous study that used the same data source shows large and significant job lock, as did that study, but when methodological problems are corrected and improved data are used to construct the job lock variables, job lock is found to be small and statistically insignificant.