Academic literature on the topic 'Job Replication'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Job Replication.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Job Replication"

1

Kim, Yusik, Rhonda Righter, and Ronald Wolff. "Job replication on multiserver systems." Advances in Applied Probability 41, no. 2 (June 2009): 546–75. http://dx.doi.org/10.1239/aap/1246886623.

Full text
Abstract:
Parallel processing is a way to use resources efficiently by processing several jobs simultaneously on different servers. In a well-controlled environment where the status of the servers and the jobs are well known, everything is nearly deterministic and replicating jobs on different servers is obviously a waste of resources. However, in a poorly controlled environment where the servers are unreliable and/or their capacity is highly variable, it is desirable to design a system that is robust in the sense that it is not affected by the poorly performing servers. By replicating jobs and assigning them to several different servers simultaneously, we not only achieve robustness but we can also make the system more efficient under certain conditions so that the jobs are processed at a faster rate overall. In this paper we consider the option of replicating jobs and study how the performance of different ‘degrees’ of replication, ranging from no replication to full replication, affects the performance of a system of parallel servers.
APA, Harvard, Vancouver, ISO, and other styles
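To make the trade-off described in the abstract above concrete, here is a minimal Monte Carlo sketch in Python (an illustration, not the paper's analytical model). It assumes exponentially distributed intrinsic job sizes and a hypothetical two-point server-slowdown distribution, and it only measures when the fastest of a job's d copies finishes; the extra load that replicas place on the other servers, which is the cost the paper weighs against this benefit, is deliberately ignored.

import random

def mean_completion_time(d, trials=100_000):
    """Mean time until the fastest of d replicated copies of a job finishes."""
    total = 0.0
    for _ in range(trials):
        size = random.expovariate(1.0)                 # intrinsic job size (assumed exponential)
        slowdowns = [random.choice([1.0, 10.0]) for _ in range(d)]  # variable server speeds (assumed)
        total += min(size * s for s in slowdowns)      # job completes when its fastest copy does
    return total / trials

for d in (1, 2, 4):
    print(f"replication degree {d}: mean completion time ~ {mean_completion_time(d):.2f}")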
2

Kim, Yusik, Rhonda Righter, and Ronald Wolff. "Job replication on multiserver systems." Advances in Applied Probability 41, no. 2 (June 2009): 546–75. http://dx.doi.org/10.1017/s0001867800003414.

Full text
Abstract:
Parallel processing is a way to use resources efficiently by processing several jobs simultaneously on different servers. In a well-controlled environment where the status of the servers and the jobs are well known, everything is nearly deterministic and replicating jobs on different servers is obviously a waste of resources. However, in a poorly controlled environment where the servers are unreliable and/or their capacity is highly variable, it is desirable to design a system that is robust in the sense that it is not affected by the poorly performing servers. By replicating jobs and assigning them to several different servers simultaneously, we not only achieve robustness but we can also make the system more efficient under certain conditions so that the jobs are processed at a faster rate overall. In this paper we consider the option of replicating jobs and study how the performance of different ‘degrees’ of replication, ranging from no replication to full replication, affects the performance of a system of parallel servers.
APA, Harvard, Vancouver, ISO, and other styles
3

Lynch, Beverly P. "Job Satisfaction in Libraries: A Replication." Library Quarterly 57, no. 2 (April 1987): 190–202. http://dx.doi.org/10.1086/601871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Raaijmakers, Youri, and Sem Borst. "Achievable Stability in Redundancy Systems." ACM SIGMETRICS Performance Evaluation Review 49, no. 1 (June 22, 2022): 27–28. http://dx.doi.org/10.1145/3543516.3456267.

Full text
Abstract:
We investigate the achievable stability region for redundancy systems and a quite general workload model with different job types and heterogeneous servers, reflecting job-server affinity relations which may arise from data locality issues and soft compatibility constraints. Under the assumption that job types are known beforehand we establish for New-Better-than-Used (NBU) distributed speed variations that no replication gives a strictly larger stability region than replication. Strikingly, this does not depend on the underlying distribution of the intrinsic job sizes, but observing the job types is essential for this statement to hold. In case of non-observable job types we show that for New-Worse-than-Used (NWU) distributed speed variations full replication gives a larger stability region than no replication.
APA, Harvard, Vancouver, ISO, and other styles
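The following Python sketch is only an illustration of the NBU/NWU dichotomy mentioned in the abstract, not the paper's stability analysis. Under cancel-on-completion replication, the server time consumed per job is the number of copies times the fastest copy's completion time: with a deterministic (NBU) slowdown, replication simply doubles the work, while with a highly variable hyperexponential (NWU-like) slowdown it can reduce it. The specific distributions are assumptions chosen for illustration.

import random

def work_per_job(replicas, draw_slowdown, trials=200_000):
    """Mean server time consumed per unit-size job when the first finished copy cancels the rest."""
    total = 0.0
    for _ in range(trials):
        fastest = min(draw_slowdown() for _ in range(replicas))  # random slowdown per copy
        total += fastest * replicas                              # every copy runs until the fastest finishes
    return total / trials

deterministic = lambda: 1.0                                                    # NBU example
hyperexp = lambda: random.expovariate(5.0 if random.random() < 0.9 else 0.2)   # NWU-like example

for name, dist in [("NBU (deterministic)", deterministic), ("NWU-like (hyperexponential)", hyperexp)]:
    print(f"{name}: work per job without replication {work_per_job(1, dist):.2f}, "
          f"with 2 replicas {work_per_job(2, dist):.2f}")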
5

Saadat, Nazanin, and Amir Masoud Rahmani. "A Two-Level Fuzzy Value-Based Replica Replacement Algorithm in Data Grids." International Journal of Grid and High Performance Computing 8, no. 4 (October 2016): 78–99. http://dx.doi.org/10.4018/ijghpc.2016100105.

Full text
Abstract:
One of the challenges of a data grid is to access widely distributed data quickly and efficiently while providing maximum data availability with minimum latency. Data replication is an efficient technique used to address this challenge: by creating and storing replicas, it makes it possible to access the same data at different locations of the data grid and shortens the time needed to retrieve files. However, as the number and storage size of grid sites are limited and restricted, an optimized and effective replacement algorithm is needed to improve the efficiency of replication. In this paper, the authors propose a novel two-level replacement algorithm which uses a Fuzzy Replica Preserving Value Evaluator System (FRPVES) for evaluating the value of each replica. The algorithm was tested using OptorSim, a grid simulator developed by the European Data Grid project. Results from the simulation procedure show that the authors' proposed algorithm has better performance in comparison with other algorithms in terms of job execution time, total number of replications and effective network usage.
APA, Harvard, Vancouver, ISO, and other styles
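As a rough illustration of value-based replica replacement, the sketch below ranks replicas by a simple crisp score and evicts the lowest-valued ones until enough space is freed. The paper's actual FRPVES computes this preserving value with a two-level fuzzy inference system over a richer set of parameters; the attributes and weighting here are invented for the example.

from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    size_mb: float
    access_count: int       # accesses in a recent window
    last_access_age: float  # seconds since last access

def preserving_value(r: Replica) -> float:
    # Higher value = more worth keeping. Hypothetical weighting for illustration.
    return r.access_count / (1.0 + r.last_access_age / 3600.0)

def make_room(replicas, needed_mb):
    """Evict lowest-value replicas until needed_mb of space is freed."""
    evicted, freed = [], 0.0
    for r in sorted(replicas, key=preserving_value):
        if freed >= needed_mb:
            break
        evicted.append(r)
        freed += r.size_mb
    return evicted

store = [Replica("f1", 500, 40, 120), Replica("f2", 800, 2, 86400), Replica("f3", 300, 15, 600)]
print([r.name for r in make_room(store, 900)])   # evicts the least valuable replicas first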
6

Levinson, Edward M. "Job Satisfaction among School Psychologists: A Replication Study." Psychological Reports 65, no. 2 (October 1989): 579–84. http://dx.doi.org/10.2466/pr0.1989.65.2.579.

Full text
Abstract:
The purpose of this study was to determine whether the results of Anderson, et al.'s 1984 national study of the job satisfaction of NASP affiliated school psychologists could be replicated in one state and with a sample that comprised both NASP-affiliated and nonaffiliated school psychologists. The job satisfaction of 362 school psychologists in Pennsylvania was analyzed using demographic data and the Minnesota Satisfaction Questionnaire and other procedures nearly identical to those employed by Anderson, et al. Current results paralleled the results of Anderson, et al. both in the percentages of school psychologists who showed various levels of job satisfaction and in regard to sources of satisfaction and dissatisfaction.
APA, Harvard, Vancouver, ISO, and other styles
7

Chang, Ruay-Shiung, Jih-Sheng Chang, and Shin-Yi Lin. "Job scheduling and data replication on data grids." Future Generation Computer Systems 23, no. 7 (August 2007): 846–60. http://dx.doi.org/10.1016/j.future.2007.02.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shwe, Thanda, and Masayoshi Aritsugi. "A Data Re-Replication Scheme and Its Improvement toward Proactive Approach." ASEAN Engineering Journal 8, no. 1 (June 1, 2018): 36–52. http://dx.doi.org/10.11113/aej.v8.15497.

Full text
Abstract:
With increasing demand for cloud computing technology, cloud infrastructures are utilized to their maximum limits. There is a high possibility that the commodity servers used in a Hadoop Distributed File System (HDFS) based cloud data center will fail often. However, the selection of source and destination data nodes for re-replication of data has so far not been adequately addressed. In order to balance the workload among nodes during the re-replication phase and reduce the impact on the cluster's normal jobs, we develop a re-replication scheme that takes both performance and reliability perspectives into consideration. The appropriate nodes for re-replication are selected based on the Analytic Hierarchy Process (AHP), taking into account the current utilization of resources by the cluster's normal jobs. Toward effective data re-replication, we investigate the feasibility of using linear regression and local regression methods to predict resource utilization. Simulation results show that our proposed approach can reduce re-replication time, total job execution time and top-of-rack network traffic compared to baseline HDFS, and consequently increases the reliability of the system and reduces performance impacts on users' jobs. Regarding the feasibility study of prediction methods, both regression methods are good enough to predict short-term future resource utilization for re-replication.
APA, Harvard, Vancouver, ISO, and other styles
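The sketch below illustrates only the prediction step mentioned in the abstract: fit a least-squares line to each node's recent utilization samples and pick the nodes with the lowest predicted near-future utilization as re-replication destinations. The AHP-based multi-criteria node ranking that the paper combines with this prediction is omitted, and the node names and utilization figures are made up.

def predict_next(samples):
    """Least-squares linear fit over time steps 0..n-1, evaluated at the next step n."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / denom
    intercept = y_mean - slope * x_mean
    return intercept + slope * n

recent_util = {                       # % CPU utilization over the last 5 intervals (invented)
    "node-a": [55, 60, 62, 70, 75],
    "node-b": [40, 38, 35, 33, 30],
    "node-c": [20, 45, 25, 50, 30],
}
predictions = {node: predict_next(u) for node, u in recent_util.items()}
destinations = sorted(predictions, key=predictions.get)[:2]   # least-loaded predicted nodes
print(predictions, destinations)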
9

Beigrezaei, Mahsa, Abolfazel Toroghi Haghighat, and Seyedeh Leili Mirtaheri. "Improve Performance by a Fuzzy-Based Dynamic Replication Algorithm in Grid, Cloud, and Fog." Mathematical Problems in Engineering 2021 (June 21, 2021): 1–14. http://dx.doi.org/10.1155/2021/5522026.

Full text
Abstract:
The efficiency of data-intensive applications in distributed environments such as Cloud, Fog, and Grid is directly related to data access delay. Delays caused by queue workload and delays caused by failure can decrease data access efficiency. Data replication is a critical technique in reducing access latency. In this paper, a fuzzy-based replication algorithm is proposed, which avoids the mentioned imposed delays by considering a comprehensive set of significant parameters to improve performance. The proposed algorithm selects the appropriate replica using a hierarchical method, taking into account the transmission cost, queue delay, and failure probability. The algorithm determines the best place for replication using a fuzzy inference system considering the queue workload, number of accesses in the future, last access time, and communication capacity. It uses the Simple Exponential Smoothing method to predict future file popularity. The OptorSim simulator evaluates the proposed algorithm in different access patterns. The results show that the algorithm improves performance in terms of the number of replications, the percentage of storage filled, and the mean job execution time. The proposed algorithm has the highest efficiency in random access patterns, especially random Zipf access patterns. It also has good performance when the number of jobs and file size are increased.
APA, Harvard, Vancouver, ISO, and other styles
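The abstract mentions Simple Exponential Smoothing for predicting future file popularity. A minimal sketch of that step is shown below; the smoothing constant alpha = 0.3 and the access history are arbitrary choices for illustration, and files with the highest forecast would be the natural candidates for replication.

def ses_forecast(access_counts, alpha=0.3):
    """Simple Exponential Smoothing: forecast the next period's access count."""
    level = float(access_counts[0])
    for x in access_counts[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

history = [12, 15, 9, 20, 22, 18]     # accesses per interval for one file (invented)
print(f"predicted popularity next interval: {ses_forecast(history):.1f}")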
10

Kapur, Kanika. "The Impact of Health on Job Mobility: A Measure of Job Lock." ILR Review 51, no. 2 (January 1998): 282–98. http://dx.doi.org/10.1177/001979399805100208.

Full text
Abstract:
The author analyzes data from the National Medical Expenditure Survey of 1987 to measure the importance of “job lock”—the reduction in job mobility due to the non-portability of employer-provided health insurance. Refining the approach commonly used by other researchers investigating the same question, the author finds insignificant estimates of job lock; moreover, the confidence intervals of these estimates exclude large levels of job lock. A replication of an influential previous study that used the same data source shows large and significant job lock, as did that study, but when methodological problems are corrected and improved data are used to construct the job lock variables, job lock is found to be small and statistically insignificant.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Job Replication"

1

Lynn, Priscilla P. "The effect of job stress and social interactions on nursing job performance: a replication study." Muncie, Ind.: Ball State University, 2008. http://cardinalscholar.bsu.edu/362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pinner, Relaine. "A replication study of neonatal intensive care unit nurses' participation in ethical decision making." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917042.

Full text
Abstract:
The purpose of this study was to determine the extent to which Neonatal Intensive Care Unit (NICU) nurses participate in ethical decision making, and to describe the role NICU nurses have in the ethical decision making process. This study replicated a 1991 study conducted by Elizondo. According to Lowe (1991), replication research is the repeating of a study for the purposes of validating the findings of the original investigation. The traditional theory of utilitarianism provides the theoretical framework for this study, a goal-based approach to ethical decision making that focuses on the consequences of actions. Findings provide information about satisfaction and conflicts related to nurse participation in ethical decision making in the NICU. The Nurse Participation in Ethical Decision Making (NPEDM) questionnaire (Elizondo, 1991) was used for data collection. Of fifty NICU nurses, seventeen (34%) of the sample completed the questionnaire. Confidentiality was maintained. Results showed that all respondents were able to identify methods that are used for participation in ethical decision making. Informal conversation with physicians was identified as the primary method of participation. Forty-one percent of respondents were satisfied with the nurse's role in ethical decision making; forty-seven percent were only somewhat satisfied. An indication of satisfaction demonstrated by 100% of the study sample was that nurses' ideas are respected by other health care professionals. Findings indicated that a significant positive relationship exists between role satisfaction and the study variables. Eighty-eight percent of respondents stated that conflicts related to participation were experienced. Overwhelmingly, respondents felt that the primary source of conflict was with physicians. These findings are consistent with results reported in the original study. When asked what factors impact how decisions are made, 40% of respondents indicated that ethical decisions are often impacted by generalized decisions based on the viability of the neonate as determined by gestational age and "quality of life." Seventy-six percent of respondents believed nurses should be more involved in ethical decision making; conferences with physicians and parents were identified by 69% of the study sample. This study found that the older the nurse, the more satisfied the nurse was with the role in the ethical decision making process. Length of employment also contributed positively to satisfaction with ethical decision making, and the more educated the nurse, the more satisfied the nurse was with the role in the ethical decision making process. Nurses were less satisfied if conflicts were experienced or identified. Findings suggest that collaborative relationships exist between nurses and other health team members and that nurses feel some sense of fulfillment with their role in the ethical decision making process. It was concluded that many issues remain unresolved and need to be discussed.
School of Nursing
APA, Harvard, Vancouver, ISO, and other styles
3

Lai, Siu-yu Kriss, and 黎兆宇. "Stress among prison officers: a replication study in a Hong Kong prison." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31978952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gholmi, Sara, and Josilda Kola. "Replication Study on the Mediating Effect of Work Engagement between Self-efficacy and Job Satisfaction." Thesis, Linnéuniversitetet, Institutionen för psykologi (PSY), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-95742.

Full text
Abstract:
The aim of this study was to test the applicability of a replication through a published research article, choosing constructs that were of interest to the authors. The one chosen was a 2020 article from Orgambídez et al., which investigated a model derived from the job demands and resources model (JD-R) and Quality of Working Life (QWL) in Spain with a sample of nurses. Our study followed the principles of Orgambídez et al. (2020) of a cross-sectional correlational design but with a Swedish sample in the IT, software and technology field, with 101 participants. Correlational analysis, confirmatory factor analysis and path analysis were utilized to test four hypotheses and the mediation effect of work engagement between self-efficacy and job satisfaction. Results confirmed all four hypotheses including the mediation effect. Opposite to the findings of Orgambídez et al. (2020), though, there was no direct effect found of self-efficacy on job satisfaction.
APA, Harvard, Vancouver, ISO, and other styles
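For readers unfamiliar with the mediation logic being replicated, the sketch below shows the product-of-coefficients idea (indirect effect = a * b) with plain ordinary-least-squares fits on synthetic data. The thesis itself uses confirmatory factor analysis and path analysis on real questionnaire data; everything below, including the effect sizes, is invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 101                                            # sample size matching the study
self_eff = rng.normal(size=n)
engagement = 0.6 * self_eff + rng.normal(scale=0.8, size=n)                # path a (synthetic)
satisfaction = 0.5 * engagement + 0.0 * self_eff + rng.normal(scale=0.8, size=n)

def ols(y, X):
    """Return OLS slopes of y on the columns of X (intercept dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

a = ols(engagement, self_eff)[0]                   # self-efficacy -> work engagement
b, c_prime = ols(satisfaction, np.column_stack([engagement, self_eff]))
print(f"a={a:.2f}, b={b:.2f}, indirect effect a*b={a*b:.2f}, direct effect c'={c_prime:.2f}")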
5

Jackson, Angela DeCarla. "A Survey of the Occupational Stress, Psychological Strain, and Coping Resources of Licensed Professional Counselors in Virginia: A Replication Study." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/30206.

Full text
Abstract:
The Occupational Stress Inventory Revised Edition (OSI-R) and an Individual Data Form (IDF) were used to examine the current levels of occupational stress, psychological strain, and coping resources for a random sample of 360 licensed professional counselors (LPCs) in Virginia. Using the OSI-R (Osipow, 1998), a comparison was made between the results of this study and those of Ryan (1996), who used the Occupational Stress Inventory (OSI; Osipow & Spokane, 1987). Replicating Ryan's study was needed to determine whether significant differences in the levels of occupational stress, psychological strain, and coping resources exist over time, which would emphasize the importance of occupational stress research for this population. The OSI-R is a concise measure of three dimensions of occupational adjustment: occupational stress, psychological strain, and coping resources. Demographic variables, such as age, gender, ethnicity, marital and parental status, primary work setting, years of experience, stress-related treatment, and years licensed were examined within the three dimensions of stress, strain, and coping. Data were collected via a first mailing of 360 surveys with a final response rate of 63.52%. The number of responses used for analysis was 183. The majority of the participants were white (93.4%), female (65%), parents (69.9%) of two children (33.9%), and adults averaging 49 years old. There were 120 females (65.6%) and 63 males (34.4%). Private practice, either individual (21.9%) or group affiliation (18.6%), was identified as the primary work setting. The majority (86.3%) of the LPCs worked with clients and averaged 19.79 hours per four-day week counseling clients. The average number of daily client sessions was 4.76 and the maximum number of daily client sessions was 6.52. Most (49.2%) of the clients were legally mandated referrals. Overall, the T-scores on the OSI-R fell in the average range for stress, strain, and coping. Variables that had no significant differences in level of stress, strain, or coping were marital and parental status, number of children, years of experience, average daily client sessions, and stress-related treatment. Demographic variables that contributed to differences in levels of stress only included ethnicity and weekly work hours. Demographic variables that contributed to differences in scores of strain only included age and years licensed. Demographic variables that contributed to differences in scores of coping were weekly work hours and the number of days per week clients were seen. Variables that had significant differences on the levels of stress, strain, and coping were gender, primary work setting, number of work settings, maximum daily client sessions, and referral source of clients. Thus, future research in the counseling profession on occupational stress, psychological strain, and coping resources is warranted. Implications for the profession and recommendations for future research were made.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
6

Benini, Mario. "Improving Decision Making in Real-world Applications by Solving Combinatorial Optimization Problems." Doctoral thesis, Università di Siena, 2022. https://hdl.handle.net/11365/1221594.

Full text
Abstract:
The motivation for this work is to study complex real-world scenarios and provide tools that can actually improve decision-making in those problems. To do so, we mainly adopt techniques from the fields of Operations Research and Combinatorial Optimization. In this dissertation, we focus on three real-world applications from different industries that can be modeled as combinatorial optimization problems and address them with operations research techniques. The dissertation is divided in chapters, each of which is related to a different topic. In Chapter 1, a problem concerning the transportation of biological samples from draw centers to a main laboratory for analysis is presented. The problem arises from a healthcare application in Bologna, Italy, where the healthcare authority decided to centralize the analysis of all biological samples of the area to a main laboratory, in order to exploit economies of scales and reduce the costs for samples’ analysis. Of course, such an improvement goal also created a new complex problem: all the samples must be transported from draw centers to the main lab. A fleet of vehicle is available for the transportation and must collect the samples from draw centers during given times of the day and deliver them within a certain time, since samples are perishable. Vehicles can also exploit the existence of dedicated centers that can extend the lifespan of the samples and where samples can be transferred from one vehicle to another. It is clear from this brief description how hard it could be to decide which is the routing of all the vehicles which minimizes the traveling costs while delivering all samples on time. For this problem we developed different mixed integer linear programming models, metaheuristic algorithms, and grouping policies for the samples that are able to tackle the complexity of the problem and improve routing decisions. All methods have been tested through an extensive computational campaign using real-world data, showing the effectiveness of the proposed approaches. In Chapter 2, a problem related to the agricultural industry is presented. The problem arises from a real-world application in Italy and it is that of planning the use of the available land of a farm for a given number of years, given a set of crops that can be grown. The objective is to maximize the farmer’s profit, but the farmer is subject to several rules both from an agronomic and from a regulation point of view. In fact, many constraints exist regarding agronomic principles, such as maximum replanting, botanical family constraints and crop rotation issues. One of the goals of this work is indeed that of evaluating the risks and benefits of following or not the best practices regarding crop rotation issues in the Mediterranean pedo-climatic context. Furthermore, we want to evaluate the effectiveness of public and private initiatives regarding sustainable agriculture. In fact, it is more and more important nowadays to face these challenges in the food supply chain, which is one of the most discussed industries when it comes to sustainability. In particular, we analyze two different initiatives, namely the Common Agricultural Policy by the European Union and “La Carta del Mulino” by Barilla Group S.p.A.. Both initiatives introduce economic incentives for the farmers following virtuous behaviors from a sustainability point of view. Practically, these behaviors are constraints increasing the complexity of the problem and the difficulty in the decision-making process. 
For this problem, we will give a formal characterization and study its complexity, also analyzing special cases. We will also present a network-flow based model to solve a special case of the problem and integer linear programming models developed to solve three variants accounting for different sustainability scenarios. Real-world data from 23 Italian farms were used in an extensive computational campaign. The analysis of the results shows that the models can be helpful tools for farmers to plan their production and for authorities to evaluate the effectiveness (and efficiency) of their sustainability initiatives. In Chapter 3, we discuss a problem concerning the sequencing of unreliable jobs on parallel machines. Even if the problem is not taken from a specific application, it may have several applications in real-world scenarios, such as in manufacturing and planning of complex computations on multi-processors computers. In this problem, we have n unreliable jobs providing a reward when successfully completed, but each job has a probability of not being carried out. We have m parallel identical machines at our disposal, and we want to schedule the jobs on the machines in order to maximize the total expected reward. To increase the probability of completing the jobs, we create m copies of each job and schedule each copy on a different machine. For this problem, we will present a complexity analysis showing that the problem is NP-complete for two machines. For the problem with two machines, we derived some theoretical properties and developed a quadratic integer programming model, a tabu search algorithm, and an upper bound based on the Three-Dimensional Assignment problem. A computational campaign on different sets of instances shows that the tabu search outperforms the model. Then we focused on the general case with m machines. In particular, we developed several heuristics and proved some theoretical results, including the worst case performance guarantee of two heuristics. We also devised a generalized tabu search algorithm and a new, improved, upper bounding scheme based on a relaxation of the problem. Computational experiments are performed for the new methods on the problems with two and three machines. The results show that good optimality gaps are reached on all the instances.
APA, Harvard, Vancouver, ISO, and other styles
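The third chapter summarized above concerns unreliable jobs replicated across m parallel machines. The toy calculation below only shows why replication raises the total expected reward, under the simplifying assumption that a job earns its reward if at least one of its independent copies is carried out; it ignores the sequencing decisions that make the dissertation's problem NP-complete, and all numbers are made up.

jobs = [  # (reward, probability that a single copy fails)
    (10.0, 0.5),
    (4.0, 0.2),
    (7.0, 0.8),
]

def expected_reward(m):
    """Expected total reward when each job has one copy on each of m machines."""
    return sum(r * (1 - p_fail ** m) for r, p_fail in jobs)

for m in (1, 2, 3):
    print(f"m={m} machines, full replication: expected reward {expected_reward(m):.2f}")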
7

Klang, Andreas. "The Relationship between Personality and Job Performance in Sales: A Replication of Past Research and an Extension to a Swedish Context." Thesis, Stockholms universitet, Psykologiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-78637.

Full text
Abstract:
This study examined the relationship between personality dimensions and supervisory ratings of job performance in a sales context in Sweden. A sample of 34 telesales workers, employed at two major telecom companies, completed the NEO PI-3 (McCrae & Costa, 2010). As hypothesized, it was found that Extroversion, Conscientiousness, and Neuroticism correlated moderately with job performance. In line with past research, this suggests that individuals who display high levels of Extroversion and Conscientiousness, as well as low levels of Neuroticism, perform better in sales-related occupations. Contrary to what was hypothesized, no correlation was found between job performance and Agreeableness or Openness to Experience. Additional computations indicated the importance of specific sub-dimensions of Extroversion and Conscientiousness with respect to job performance. Practical implications for recruitment and directions for future research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Ramesh, Anuradha. "Replicating and extending job embeddedness across cultures: employee turnover in India and the United States." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/6841.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Psychology. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
9

Jo, Hanju [Author], and Patrick Theato [Academic Supervisor]. "Fabrication of Stimuli-responsive, Chemically Tunable Nanostructures by Template-assisted Replication Method." Hamburg: Staats- und Universitätsbibliothek Hamburg, 2017. http://d-nb.info/114386879X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lin, Shin-Yi, and 林欣儀. "Data Replication and Job Scheduling on Cluster Grid for Data-Intensive Applications." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/56713144432171572048.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 93 (2004–05)
In a data grid, distributed scientific and engineering applications often require access to large amounts of data (terabytes or petabytes). Data access time depends on bandwidth, especially in a hierarchical network structure. The simplest hierarchical form of a grid system, called a cluster grid, provides a compute service at the group level. Network bandwidth within a cluster is broader than across clusters. In such a communications environment, the major bottleneck to supporting fast data access in a grid is the high latency of Wide Area Networks (WANs) and the Internet. Effective scheduling in such a network architecture can reduce the amount of data transferred across the Internet by dispatching a job to where its data is present. In addition, a data replication mechanism generates multiple copies of existing data to reduce the need to access remote sites. To avoid the WAN bandwidth bottleneck in cluster grids, we develop a job scheduling policy, called HCS, and a dynamic data replication strategy, called HRS, and use simulation studies to evaluate various combinations. The simulation results show that HCS and HRS successfully reduce data access time and the amount of inter-cluster communication compared with other combinations in the cluster grid.
APA, Harvard, Vancouver, ISO, and other styles
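In the spirit of the summary above, the sketch below dispatches a job to the cluster that already stores the largest volume of its input data and then lists the files that would still have to be replicated into that cluster over the WAN. It is an illustration of data-aware dispatching in a cluster grid, not the thesis's actual HCS and HRS policies; all names and sizes are invented.

def choose_cluster(job_files, cluster_catalogs, file_sizes):
    """cluster_catalogs maps each cluster name to the set of files it already stores."""
    def local_bytes(cluster):
        return sum(file_sizes[f] for f in job_files if f in cluster_catalogs[cluster])
    return max(cluster_catalogs, key=local_bytes)

def missing_files(job_files, cluster, cluster_catalogs):
    return [f for f in job_files if f not in cluster_catalogs[cluster]]

file_sizes = {"a": 10, "b": 200, "c": 50}                  # MB, invented
catalogs = {"cluster1": {"a", "c"}, "cluster2": {"b"}}
job = ["a", "b", "c"]
target = choose_cluster(job, catalogs, file_sizes)
print(target, missing_files(job, target, catalogs))        # replicate these into the target over the WAN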

Books on the topic "Job Replication"

1

Replicating Jobs in Business and Industry for Persons With Disabilities. Wisconsin Center on Education, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Job Replication"

1

Santos-Neto, Elizeu, Walfredo Cirne, Francisco Brasileiro, and Aliandro Lima. "Exploiting Replication and Data Reuse to Efficiently Schedule Data-Intensive Applications on Grids." In Job Scheduling Strategies for Parallel Processing, 210–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11407522_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Phan, Thomas, Kavitha Ranganathan, and Radu Sion. "Evolving Toward the Perfect Schedule: Co-scheduling Job Assignments and Data Replication in Wide-Area Systems Using a Genetic Algorithm." In Job Scheduling Strategies for Parallel Processing, 173–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11605300_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tang, Ming, Bu-Sung Lee, Xueyan Tang, and Chai-Kiat Yeo. "Combining Data Replication Algorithms and Job Scheduling Heuristics in the Data Grid." In Euro-Par 2005 Parallel Processing, 381–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11549468_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zarina, M., M. Mat Deris, ANM M. Rose, and A. M. Isa. "Job Scheduling for Dynamic Data Replication Strategy based on Federation Data Grid Systems." In Advances in Wireless, Mobile Networks and Applications, 283–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21153-9_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jung, Daeyong, SungHo Chin, KwangSik Chung, Taeweon Suh, HeonChang Yu, and JoonMin Gil. "An Effective Job Replication Technique Based on Reliability and Performance in Mobile Grids." In Advances in Grid and Pervasive Computing, 47–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13067-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ghare, Gaurav D., and Scott T. Leutenegger. "Improving Speedup and Response Times by Replicating Parallel Programs on a SNOW." In Job Scheduling Strategies for Parallel Processing, 264–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11407522_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Saadat, Nazanin, and Amir Masoud Rahmani. "A Two-Level Fuzzy Value-Based Replica Replacement Algorithm in Data Grids." In Fuzzy Systems, 516–39. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1908-9.ch023.

Full text
Abstract:
One of the challenges of a data grid is to access widely distributed data quickly and efficiently while providing maximum data availability with minimum latency. Data replication is an efficient technique used to address this challenge: by creating and storing replicas, it makes it possible to access the same data at different locations of the data grid and shortens the time needed to retrieve files. However, as the number and storage size of grid sites are limited and restricted, an optimized and effective replacement algorithm is needed to improve the efficiency of replication. In this paper, the authors propose a novel two-level replacement algorithm which uses a Fuzzy Replica Preserving Value Evaluator System (FRPVES) for evaluating the value of each replica. The algorithm was tested using OptorSim, a grid simulator developed by the European Data Grid project. Results from the simulation procedure show that the authors' proposed algorithm has better performance in comparison with other algorithms in terms of job execution time, total number of replications and effective network usage.
APA, Harvard, Vancouver, ISO, and other styles
8

Liao, ChenHan, Na Helian, Sining Wu, and Mamunur M. Rashid. "Predictive File Replication on the Data Grids." In Evolving Developments in Grid and Cloud Computing, 67–83. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0056-0.ch005.

Full text
Abstract:
Most replication methods either monitor the popularity of files or use complicated functions to calculate the overall cost of whether or not a replication decision or a deletion decision should be issued. However, once the replication decision is issued, the popularity of the files has changed and may have already impacted access latency and resource usage. This article proposes a decision-tree-based predictive file replication strategy that forecasts files' future popularity based on their characteristics on the Grids. The proposed strategy has shown superb performance in terms of mean job time and effective network usage compared with the other two replication strategies, LRU and Economic, under the OptorSim simulation environment.
APA, Harvard, Vancouver, ISO, and other styles
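As a hedged illustration of the decision-tree idea in the abstract above, the sketch below trains scikit-learn's DecisionTreeClassifier on a few invented file characteristics (size, recent accesses, age) and replicates the files predicted to become popular. The article's actual feature set, labels and training procedure may differ.

from sklearn.tree import DecisionTreeClassifier

# rows: [size_mb, accesses_last_week, age_days]; label: 1 = will be popular (all invented)
X = [[100, 50, 2], [2000, 3, 300], [500, 40, 10], [1500, 1, 400], [80, 60, 5], [900, 5, 200]]
y = [1, 0, 1, 0, 1, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

candidates = {"datasetA": [300, 45, 7], "datasetB": [1800, 2, 350]}
to_replicate = [name for name, feats in candidates.items() if clf.predict([feats])[0] == 1]
print(to_replicate)    # files predicted to become popular are replicated ahead of demand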
9

Khatter, Harsh, and Prabhat Singh. "Role Coordination in Large-Scale and Highly-Dense Internet of Things." In Advances in Wireless Technologies and Telecommunication, 66–79. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4685-7.ch004.

Full text
Abstract:
Large-scale, highly dense networks are integrated with different application domains of the Internet of Things for precise event detection and monitoring. Because of the high density and huge scale, the nodes in these networks must perform some basic communication roles, in particular sensing, relaying, data fusion, and data control (aggregation and replication). Since energy consumption and reliable communication quality are among the major challenges in large-scale, highly dense networks, the communication roles ought to be coordinated so as to use the energy resources efficiently and to meet a satisfactory level of communication reliability. In this chapter, the authors propose an on-demand and fully distributed framework for role coordination that is designed to detect events with different levels of criticality, adjusting data aggregation and data replication according to the urgency level of the detected event.
APA, Harvard, Vancouver, ISO, and other styles
10

Shorfuzzaman, Mohammad, Rasit Eskicioglu, and Peter Graham. "The State of the Art and Open Problems in Data Replication in Grid Environments." In Handbook of Research on Scalable Computing Technologies, 486–516. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-661-7.ch022.

Full text
Abstract:
Data Grids provide services and infrastructure for distributed data-intensive applications that need to access, transfer and modify massive datasets stored at distributed locations around the world. For example, the next generation of scientific applications such as many in high-energy physics, molecular modeling, and earth sciences will involve large collections of data created from simulations or experiments. The size of these data collections is expected to be of multi-terabyte or even petabyte scale in many applications. Ensuring efficient, reliable, secure and fast access to such large data is hindered by the high latencies of the Internet. The need to manage and access multiple petabytes of data in Grid environments, as well as to ensure data availability and access optimization, are challenges that must be addressed. To improve data access efficiency, data can be replicated at multiple locations so that a user can access the data from a site near where it will be processed. In addition to the reduction of data access time, replication in Data Grids also uses network and storage resources more efficiently. In this chapter, the state of current research on data replication and arising challenges for the new generation of data-intensive grid environments are reviewed and open problems are identified. First, fundamental data replication strategies are reviewed which offer high data availability, low bandwidth consumption, increased fault tolerance, and improved scalability of the overall system. Then, specific algorithms for selecting appropriate replicas and maintaining replica consistency are discussed. The impact of data replication on job scheduling performance in Data Grids is also analyzed. A set of appropriate metrics including access latency, bandwidth savings, server load, and storage overhead for use in making critical comparisons of various data replication techniques is also discussed. Overall, this chapter provides a comprehensive study of replication techniques in Data Grids that not only serves as a tool for understanding this evolving research area but also provides a reference to which future efforts may be mapped.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Job Replication"

1

Kurnosov, Mikhail, and Alexey Paznikov. "Efficiency analysis of decentralized grid scheduling with job migration and replication." In the 7th International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2448556.2448600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Raaijmakers, Youri, Sem Borst, and Onno Boxma. "Threshold-based rerouting and replication for resolving job-server affinity relations." In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications. IEEE, 2021. http://dx.doi.org/10.1109/infocom42981.2021.9488909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ebenezer, A. Shamila, and Dr Baskaran. "Job replication techniques for improving fault tolerance in Computational Grid: A Survey." In Annual International Conference on Advances in Distributed and Parallel Computing (ADPC 2010). Global Science and Technology Forum, 2010. http://dx.doi.org/10.5176/978-981-08-7656-2_a-40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jiang, Congfeng, Cheng Wang, Xiaohu Liu, and Yinghui Zhao. "Adaptive Replication Based Security Aware and Fault Tolerant Job Scheduling for Grids." In Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007). IEEE, 2007. http://dx.doi.org/10.1109/snpd.2007.292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Uhlemann, K., C. Engelmann, and S. Scott. "JOSHUA: Symmetric Active/Active Replication for Highly Available HPC Job and Resource Management." In 2006 IEEE International Conference on Cluster Computing. IEEE, 2006. http://dx.doi.org/10.1109/clustr.2006.311855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zarina, M., Fadhilah Ahmad, Anm, M. Nordin, and M. Mat Deris. "Job scheduling for dynamic data replication strategy in heterogeneous federation data grid systems." In 2013 Second International Conference on Informatics & Applications (ICIA 2013). IEEE, 2013. http://dx.doi.org/10.1109/icoia.2013.6650256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dang, Nhan Nguyen, Soonwook Hwang, and Sang Boem Lim. "Improvement of Data Grid's Performance by Combining Job Scheduling with Dynamic Replication Strategy." In Sixth International Conference on Grid and Cooperative Computing (GCC 2007). IEEE, 2007. http://dx.doi.org/10.1109/gcc.2007.79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Marbukh, Vladimir. "Dynamic Job Replication for Balancing Fault Tolerance, Latency, and Economic Efficiency: Work in Progress." In 2018 IEEE International Conference on Services Computing (SCC). IEEE, 2018. http://dx.doi.org/10.1109/scc.2018.00043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Suri, P. K., and Manpreet Singh. "JS2DR2: An Effective Two-Level Job Scheduling Algorithm and Two-Phase Dynamic Replication Strategy for Data Grid." In 2009 International Conference on Advances in Computing, Control, & Telecommunication Technologies (ACT 2009). IEEE, 2009. http://dx.doi.org/10.1109/act.2009.65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mauleón, Begoña Sáiz, Lenin Guillermo Lemus Zuñiga, Jorge E. Luzuriaga, Miguel Angel Mateo Pla, Jose Vicente Benlloch Dualde, Olga Ampuero Canellas, Jimena González-del Río Cogorno, and Nereida Tarazona Berenguer. "Empowering Youth Employment through European Digital Bootcamps (EDIBO)." In CARPE Conference 2019: Horizon Europe and beyond. Valencia: Universitat Politècnica València, 2019. http://dx.doi.org/10.4995/carpe2019.2019.10207.

Full text
Abstract:
Information and Communication Technologies (ICT) are transforming every area of economic and social life all around the world. New types of jobs different from the traditional ones are created rapidly. The demand for highly skilled staff who use technology effectively has become a requirement for the success of companies and the growing industry. However, the number of IT graduates is not keeping up with the current demand. In addition, companies have little or no training programs to develop ICT skills. Initiatives from the European Economic Area (EEA) and Norway Grants to support transnational projects for Youth Employment, including European Digital Bootcamps (EDIBO), contribute to increasing job opportunities for young people outside of the labour market. In this way, Sustainable Development Goal 8, which aims to "promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all", could be fulfilled. EDIBO is currently running different training labs in order to arrive at a successful model of all the processes involved in their organization, execution and evaluation. The goal of this document is to allow a rapid replication of the intensive ICT training among the partners of the project as well as in the social innovation community in general.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Job Replication"

1

Lance, Charles E. Replication and Extension of Models of Job Performance Ratings. Fort Belvoir, VA: Defense Technical Information Center, June 1998. http://dx.doi.org/10.21236/ada368462.

Full text
APA, Harvard, Vancouver, ISO, and other styles