Journal articles on the topic "Replication of computing experiment"


Consult the top 50 journal articles for your research on the topic "Replication of computing experiment".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Beermann, Thomas, Olga Chuchuk, Alessandro Di Girolamo, Maria Grigorieva, Alexei Klimentov, Mario Lassnig, Markus Schulz, Andrea Sciaba, and Eugeny Tretyakov. "Methods of Data Popularity Evaluation in the ATLAS Experiment at the LHC." EPJ Web of Conferences 251 (2021): 02013. http://dx.doi.org/10.1051/epjconf/202125102013.

Abstract:
The ATLAS Experiment at the LHC generates petabytes of data that are distributed among 160 computing sites all over the world and are processed continuously by various central production and user analysis tasks. The popularity of data is typically measured as the number of accesses and plays an important role in resolving data management issues: deleting, replicating, and moving data between tapes, disks, and caches. These data management procedures have until now been carried out in a semi-manual mode, and we have focused our efforts on automating them, making use of historical knowledge about existing data management strategies. In this study we describe sources of information about data popularity and demonstrate their consistency. Based on the calculated popularity measurements, various distributions were obtained. Auxiliary information about replication and task processing allowed us to evaluate the correspondence between the number of tasks with popular data executed per site and the number of replicas per site. We also examine the popularity of user analysis data, which is much less predictable than that of central production and requires more indicators than just the number of accesses.
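The access-count notion of popularity described above can be sketched as a simple windowed counter. The window length, log format, and function name here are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def popularity(access_log, window_days, now):
    """Count accesses per dataset within the last `window_days` days.

    `access_log` is an iterable of (dataset, day) pairs; `now` is the
    current day as an integer. Returns a Counter mapping dataset -> accesses.
    """
    counts = Counter()
    for dataset, day in access_log:
        if now - day <= window_days:
            counts[dataset] += 1
    return counts

# Hypothetical access log: dataset name and day of access.
log = [("dsA", 1), ("dsA", 9), ("dsB", 10), ("dsA", 10)]
recent = popularity(log, window_days=2, now=10)
```

A data manager could then rank datasets by `recent` to decide which replicas to add and which to evict.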
2

Liu, Liang, and Tian Yu Wo. "A Scalable Data Platform for Cloud Computing Systems." Applied Mechanics and Materials 577 (July 2014): 860–64. http://dx.doi.org/10.4028/www.scientific.net/amm.577.860.

Abstract:
With cloud computing systems becoming popular, designing a scalable, highly available, and cost-effective data platform has become a hot topic. This paper proposes such a data platform built from MySQL DBMS blocks. For scalability, a three-level (system, super-cluster, cluster) architecture is applied, making the platform scalable to thousands of applications. For availability, we use asynchronous replication across geographically dispersed super-clusters to provide disaster recovery, synchronous replication within a cluster to perform failure recovery, and a hot-standby or even process-pair mechanism for controllers to enhance fault tolerance. For resource utilization, we design a novel load-balancing strategy by exploiting the key property that the throughput requirement of web applications fluctuates over time. Experiments with the NLPIR dataset indicate that the system can scale to a large number of web applications and make good use of the resources provided.
3

Vashisht, Priyanka, Rajesh Kumar, and Anju Sharma. "Efficient Dynamic Replication Algorithm Using Agent for Data Grid." Scientific World Journal 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/767016.

Abstract:
In data grids, scientific and business applications produce huge volumes of data which need to be transferred among the distributed and heterogeneous nodes of the grid. Data replication provides a solution for managing data files efficiently in large grids. Replication enhances data availability, which reduces the overall access time of a file. In this paper an agent-based algorithm for data grids, named EDRA, is proposed and implemented. EDRA performs dynamic replication over a hierarchical structure, which is taken into account when selecting the best replica. The decision for selecting the best replica is based on scheduling parameters: the bandwidth, load gauge, and computing capacity of the node. Scheduling in the data grid helps reduce data access time, and the load is distributed evenly across the nodes of the grid by considering these parameters. EDRA is implemented using the data grid simulator OptorSim, with the European Data Grid CMS testbed topology used in the experiment. The simulation results compare BHR, LRU, No Replication, and EDRA, and show the efficiency of the EDRA algorithm in terms of mean job execution time, network usage, and storage usage per node.
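Replica selection driven by bandwidth, load, and computing capacity, as described above, can be sketched as a weighted score. The weights and the scoring formula are illustrative assumptions; the abstract does not give EDRA's actual formula.

```python
def best_replica(nodes, weights=(0.5, 0.3, 0.2)):
    """Pick the node holding the 'best' replica by a weighted score of
    bandwidth (higher is better), load (lower is better), and computing
    capacity (higher is better). Weights are illustrative, not from the paper.

    `nodes` maps node name -> (bandwidth, load, capacity), each normalised
    to [0, 1].
    """
    wb, wl, wc = weights
    def score(metrics):
        bandwidth, load, capacity = metrics
        return wb * bandwidth - wl * load + wc * capacity
    return max(nodes, key=lambda n: score(nodes[n]))

nodes = {
    "siteA": (0.9, 0.8, 0.5),  # fast link but heavily loaded
    "siteB": (0.7, 0.2, 0.6),  # moderate link, lightly loaded
}
choice = best_replica(nodes)
```

Here the lightly loaded `siteB` wins despite its slower link, which mirrors how load-aware selection spreads work across the grid.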
4

Kolokoltsev, Yevgeniy, Evgeny Ivashko, and Carlos Gershenson. "Improving “tail” computations in a BOINC-based Desktop Grid." Open Engineering 7, no. 1 (December 29, 2017): 371–78. http://dx.doi.org/10.1515/eng-2017-0044.

Abstract:
A regular Desktop Grid bag-of-tasks project can take a long time to complete its computations. An important part of the process is the tail phase: when the number of tasks left to perform becomes less than the number of computing nodes. At this stage, dynamic replication can be used to reduce the time needed to complete the computations. In this paper, we propose a mathematical model and a strategy for dynamic replication at the tail stage. The results of numerical experiments are given.
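The tail stage described above, where idle nodes outnumber remaining tasks, can be illustrated with a minimal replication plan. The round-robin policy below is an illustrative stand-in, not the paper's actual replication strategy.

```python
def tail_replicas(remaining_tasks, idle_nodes):
    """At the tail stage (fewer tasks than free nodes), assign each
    unfinished task plus extra replicas round-robin across idle nodes.
    Returns a dict task -> list of nodes running a copy of it.
    """
    assignment = {t: [] for t in remaining_tasks}
    if not remaining_tasks:
        return assignment
    for i, node in enumerate(idle_nodes):
        task = remaining_tasks[i % len(remaining_tasks)]
        assignment[task].append(node)
    return assignment

# Two tasks left, five idle nodes: each task gets replicated.
plan = tail_replicas(["t1", "t2"], ["n1", "n2", "n3", "n4", "n5"])
```

Whichever replica finishes first determines the task's completion time, so replication shortens the tail at the cost of redundant work that is otherwise wasted on idle nodes.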
5

Fazlina, M. A., Rohaya Latip, Azizol Abdullah, Hamidah Ibrahim, and Mohamed A. Alrshah. "Crucial File Selection Strategy (CFSS) for Enhanced Download Response Time in Cloud Replication Environments." Baghdad Science Journal 18, no. 4(Suppl.) (December 20, 2021): 1356. http://dx.doi.org/10.21123/bsj.2021.18.4(suppl.).1356.

Abstract:
Cloud computing is a mass platform serving high volumes of data from multiple devices and numerous technologies. Cloud tenants demand fast, uninterrupted access to their data, so cloud providers strive to keep every piece of data secure and always accessible. Hence, an appropriate replication strategy capable of selecting essential data is required in cloud replication environments. This paper proposes a Crucial File Selection Strategy (CFSS) to address poor response time in a cloud replication environment. The cloud simulator CloudSim is used to conduct the necessary experiments, and results are presented to demonstrate the improvement in replication performance. The analytical graphs obtained are discussed thoroughly, and the proposed CFSS algorithm outperformed an existing algorithm with a 10.47% improvement in average response time for multiple jobs per round.
6

Baldi-Boleda, Tomás, Ehsan Sadeghi, Carles Colominas, and Andrés García-Granada. "Simulation Approach for Hydrophobicity Replication via Injection Molding." Polymers 13, no. 13 (June 23, 2021): 2069. http://dx.doi.org/10.3390/polym13132069.

Abstract:
Nanopattern replication of complex structures by plastic injection is a challenge that requires simulations to define the right processing parameters. Previous work managed to simulate replication for single cavities in 2D and 3D, showing high performance requirements of CPU to simulate periodic trenches in 2D. This paper presents two ways to approach the simulation of replication of complex 3D hydrophobic surfaces. The first approach is based on previous CFD Ansys Fluent and compared to FE based CFD Polyflow software for the analysis of laminar flows typical in polymer processing and glass forming as well as other applications. The results showed that Polyflow was able to reduce computing time from 72 h to only 5 min as desired in the project. Furthermore, simulations carried out with Polyflow showed that higher injection and mold temperature lead to better replication of hydrophobicity in agreement with the experiments. Polyflow simulations are proved to be a good tool to define process parameters such as temperature and cycle times for nanopattern replication.
7

Mehner, Philipp J., Merle Allerdißen, Sebastian Haefner, Andreas Voigt, Uwe Marschner, and Andreas Richter. "Modeling the closing behavior of a smart hydrogel micro-valve." Journal of Intelligent Material Systems and Structures 30, no. 9 (December 4, 2017): 1409–18. http://dx.doi.org/10.1177/1045389x17742726.

Abstract:
Smart hydrogel micro-valves are essential components of micro-chemo-mechanical fluid systems. These valves are based on phase-changeable polymers. They can open and close micro-fluidic channels depending on the chemical concentration or the temperature in the fluid. A concept of finite element–based modeling in combination with network methods to simulate concentration-triggered, phase-changeable hydrogels is proposed. We introduce a temperature domain as a replication domain to substitute insufficiently implemented domains. With the used simulation tools, problems are highlighted and their solutions are presented. The computed parameters of such valves are included in a circuit representation, which is capable of efficiently computing large-scale micro-fluidic systems. These methods will help predict, visualize, and understand polymeric swelling behavior as well as the performance of large-scale chip applications before any complex experiment is performed.
8

Alshammari, Mohammad M., Ali A. Alwan, Azlin Nordin, and Abedallah Zaid Abualkishik. "Data Backup and Recovery With a Minimum Replica Plan in a Multi-Cloud Environment." International Journal of Grid and High Performance Computing 12, no. 2 (April 2020): 102–20. http://dx.doi.org/10.4018/ijghpc.2020040106.

Abstract:
Cloud computing has become a desirable choice for storing and sharing large amounts of data among several users. The two main concerns with cloud storage are data recovery and the cost of storage. This article discusses the issue of data recovery in case of a disaster in a multi-cloud environment. It proposes a preventive approach for data backup and recovery that aims at minimizing the number of replicas while ensuring high data reliability during disasters. The approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), reduces the number of replications in the cloud without compromising data reliability: it takes preventive action by checking replica availability and monitoring denial-of-service attacks. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that the storage space used was one-third to two-thirds of that required by typical 3-replica replication strategies.
9

Blomer, Jakob, Gerardo Ganis, Simone Mosciatti, and Radu Popescu. "Towards a serverless CernVM-FS." EPJ Web of Conferences 214 (2019): 09007. http://dx.doi.org/10.1051/epjconf/201921409007.

Abstract:
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution and, to some extent, a data distribution service. It gives POSIX access to more than a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. Increasingly, CernVM-FS also provides access to certain classes of data, such as detector conditions data, genomics reference sets, or gravitational wave detector experiment data. For most of the high-energy physics experiments, an underlying HTTP content distribution infrastructure is jointly provided by universities and research institutes around the world. In this contribution, we will present recent developments and future plans. For future developments, we put a focus on evolving the content distribution infrastructure and on lowering the barrier for publishing into CernVM-FS. Through so-called serverless computing, we envision cloud-hosted CernVM-FS repositories without the need to operate dedicated servers or virtual machines. An S3-compatible service in conjunction with a content delivery network takes on data provisioning, replication, and caching. A chain of time-limited and resource-limited functions (so-called “lambda functions” or “function-as-a-service”) operates on the repository and stages the updates. As a result, any CernVM-FS client should be able to turn into a writer, provided it possesses suitable keys. For repository owners, we aim at providing cost transparency and seamless scalability from very small to very large CernVM-FS installations.
10

Panatula, Ganesh, K. Sailaja Kumar, D. Evangelin Geetha, and T. V. Suresh Kumar. "Performance evaluation of cloud service with hadoop for twitter data." Indonesian Journal of Electrical Engineering and Computer Science 13, no. 1 (January 1, 2019): 392. http://dx.doi.org/10.11591/ijeecs.v13.i1.pp392-404.

Abstract:
In the era of rapid growth of cloud computing, measuring the performance of a cloud service is an essential criterion for assuring quality of service. Nevertheless, it is a perplexing task to effectively analyze the performance of a cloud service due to the complexity of cloud resources and the diversity of Big Data applications. Hence, we propose to examine the performance of Big Data applications with Hadoop and thus to characterize performance in a cloud cluster. Hadoop is built on MapReduce, one of the most widely used programming models in Big Data. In this paper, the performance analysis of the Hadoop MapReduce WordCount application on Twitter data is presented. A 4-node in-house Hadoop cluster was set up and an experiment was carried out to analyze the performance. Through this work, it was concluded that Hadoop is efficient for Big Data applications with 3 or more nodes and a replication factor of 3. It was also observed that system time was relatively higher than user time for Big Data applications beyond 80 GB. The experiment also revealed certain patterns in the actual data blocks used to process the WordCount application.
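The WordCount computation analyzed above follows the classic MapReduce pattern, which can be sketched in pure Python; Hadoop itself distributes these same two phases across a cluster, with the shuffle handled by the framework.

```python
from collections import defaultdict

def map_phase(lines):
    """Map step: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

# Tiny stand-in for the Twitter input split across mappers.
tweets = ["hadoop runs wordcount", "wordcount counts words"]
counts = reduce_phase(map_phase(tweets))
```

In a real cluster each HDFS block (replicated 3 times by default, matching the replication factor studied in the paper) feeds one map task, and reducers aggregate the shuffled pairs.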
11

Wang, Han Ning, Wei Xiang Xu, and Chao Long Jia. "A High-Speed Railway Data Placement Strategy Based on Cloud Computing." Applied Mechanics and Materials 135-136 (October 2011): 43–49. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.43.

Abstract:
The application of high-speed railway data, an important component of China's transportation science data sharing, embodies the typical characteristics of data-intensive computing. A reasonable and effective data placement strategy is needed to deploy and execute data-intensive applications in the cloud computing environment. Study results of current data placement approaches are analyzed and compared in this paper. Combining the semi-definite programming algorithm with the dynamic interval mapping algorithm, a hierarchical data placement strategy is proposed. The semi-definite programming algorithm is suitable for the placement of files with multiple replicas, ensuring that different replicas of a file are placed on different storage devices, while the dynamic interval mapping algorithm guarantees better self-adaptability of the data storage system. Both theoretical analysis and experiments demonstrate that a hierarchical data placement strategy can guarantee self-adaptability, data reliability, and high-speed data access for large-scale networks.
12

Črepinšek, Matej, Shih-Hsi Liu, and Marjan Mernik. "Replication and comparison of computational experiments in applied evolutionary computing: Common pitfalls and guidelines to avoid them." Applied Soft Computing 19 (June 2014): 161–70. http://dx.doi.org/10.1016/j.asoc.2014.02.009.

13

Sun, Pan Jun. "Research on the Optimization Management of Cloud Privacy Strategy Based on Evolution Game." Security and Communication Networks 2020 (August 11, 2020): 1–18. http://dx.doi.org/10.1155/2020/6515328.

Abstract:
Cloud computing services offer great convenience, but privacy security is a big obstacle to their popularity. In the process of protecting privacy in cloud computing, it is difficult to choose the optimal strategy. In order to solve this problem, we propose a quantitative weight model of privacy information, use evolutionary game theory to establish a game model of attack and protection, design an optimal protection strategy selection algorithm, and derive the evolutionarily stable equilibrium under the bounded-rationality constraint. In order to study the strategic dependence within the same game group, the classical replicator dynamic equation is improved using an incentive coefficient, an improved evolutionary game model of attack and protection is constructed, the stability of the equilibrium points is further analyzed by the Jacobian matrix method, and the optimal selection strategy is obtained under different conditions. Finally, the correctness and validity of the model are verified by experiments: different strategies within the same group have the dual effects of promotion and inhibition, and the advantages of this work are shown by comparison with other articles.
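The replicator dynamic mentioned above can be sketched for a two-strategy game, with an incentive coefficient folded into the payoff difference. The additive form of the incentive term and all parameter values are illustrative assumptions; the paper's actual modified equation is not given in the abstract.

```python
def replicator_step(x, payoff_a, payoff_b, incentive=0.0, dt=0.01):
    """One Euler step of the replicator equation
        dx/dt = x * (1 - x) * (f_A - f_B + incentive)
    for the population share x of strategy A. The additive `incentive`
    term is an illustrative stand-in for the paper's incentive coefficient.
    """
    return x + dt * x * (1 - x) * (payoff_a - payoff_b + incentive)

# Start with an even split; strategy A has a payoff advantage.
x = 0.5
for _ in range(1000):
    x = replicator_step(x, payoff_a=1.2, payoff_b=1.0, incentive=0.1)
```

With a positive payoff advantage the share of strategy A grows toward 1, which is the evolutionarily stable outcome in this toy setting; flipping the sign of the incentive can reverse or slow that convergence.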
14

Stehman, S. V., and M. P. Meredith. "Practical analysis of factorial experiments in forestry." Canadian Journal of Forest Research 25, no. 3 (March 1, 1995): 446–61. http://dx.doi.org/10.1139/x95-050.

Abstract:
Factorial designs are among the most frequently employed for arranging treatments in forestry experiments. Yet researchers often fail either to recognize the factorial treatment structure or to take full advantage of the structure for interpreting treatment effects. The analysis of factorial experiments should focus on comparisons of means of research interest specified by the investigator. Reliance on default options of computing packages or routine application of multiple comparison procedures often fails to address research hypotheses directly. A two-step strategy for the analysis of factorial experiments entails a check for interaction followed by estimation of either main effects or simple effects. This strategy emphasizes sensible mean comparisons through estimation of contrasts and their standard errors. The strategy also applies to the analysis of factorial experiments in which unequal replication or empty cells complicate the analysis. We summarize a practical approach for use by forest scientists and applied statisticians consulting with such scientists so that they may analyze and interpret their experiments more effectively.
15

Ramakrishnan, Nithya, Sibi Raj B. Pillai, and Ranjith Padinhateeri. "High fidelity epigenetic inheritance: Information theoretic model predicts threshold filling of histone modifications post replication." PLOS Computational Biology 18, no. 2 (February 17, 2022): e1009861. http://dx.doi.org/10.1371/journal.pcbi.1009861.

Abstract:
During cell division, maintaining the epigenetic information encoded in histone modification patterns is crucial for the survival and identity of cells. The faithful inheritance of histone marks from the parental to the daughter strands is a puzzle, given that each strand gets only half of the parental nucleosomes. Mapping DNA replication and reconstruction of modifications to equivalent problems in communication of information, we ask how well enzymes could recover the parental modifications if they were ideal computing machines. Studying a parameter regime where realistic enzymes can function, our analysis predicts that enzymes may implement a critical threshold filling algorithm which fills unmodified regions of length at most k. This algorithm, motivated by communication theory, is derived from maximum a posteriori (MAP) decoding, which identifies the most probable modification sequence based on the available observations. Simulations using our method produce modification patterns similar to what has been observed in recent experiments. We also show that our results can be naturally extended to explain the inheritance of spatially distinct antagonistic modifications.
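The threshold filling rule described above can be sketched directly: fill every interior run of unmodified nucleosomes whose length is at most k. The boundary handling is a minimal reading of the abstract, not the paper's full MAP derivation.

```python
def threshold_fill(marks, k):
    """Fill runs of unmodified nucleosomes (0s) of length at most k,
    provided the run is flanked by modified nucleosomes (1s) on both
    sides. Longer gaps, and gaps touching the ends, are left as-is.
    """
    marks = list(marks)
    n = len(marks)
    i = 0
    while i < n:
        if marks[i] == 0:
            j = i
            while j < n and marks[j] == 0:
                j += 1
            # fill only interior gaps no longer than k
            if i > 0 and j < n and (j - i) <= k:
                for p in range(i, j):
                    marks[p] = 1
            i = j
        else:
            i += 1
    return marks

# Gap of length 2 is filled; gap of length 3 exceeds k and survives.
filled = threshold_fill([1, 0, 0, 1, 0, 0, 0, 1], k=2)
```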
16

Li, Jing, Yidong Cui, and Yan Ma. "Modeling Message Queueing Services with Reliability Guarantee in Cloud Computing Environment Using Colored Petri Nets." Mathematical Problems in Engineering 2015 (2015): 1–20. http://dx.doi.org/10.1155/2015/383846.

Abstract:
Motivated by the need for loosely coupled and asynchronous dissemination of information, message queues are widely used in large-scale application areas. With the advent of virtualization technology, cloud-based message queueing services (CMQSs) with distributed computing and storage are widely adopted to improve availability, scalability, and reliability; however, a critical issue is its performance and the quality of service (QoS). While numerous approaches evaluating system performance are available, there is no modeling approach for estimating and analyzing the performance of CMQSs. In this paper, we employ both the analytical and simulation modeling to address the performance of CMQSs with reliability guarantee. We present a visibility-based modeling approach (VMA) for simulation model using colored Petri nets (CPN). Our model incorporates the important features of message queueing services in the cloud such as replication, message consistency, resource virtualization, and especially the mechanism named visibility timeout which is adopted in the services to guarantee system reliability. Finally, we evaluate our model through different experiments under varied scenarios to obtain important performance metrics such as total message delivery time, waiting number, and components utilization. Our results reveal considerable insights into resource scheduling and system configuration for service providers to estimate and gain performance optimization.
17

Spiga, Daniele, Enol Fernandez, Vincenzo Spinoso, Diego Ciangottini, Mirco Tracolli, Giacinto Donvito, Marica Antonacci, et al. "The DODAS Experience on the EGI Federated Cloud." EPJ Web of Conferences 245 (2020): 07033. http://dx.doi.org/10.1051/epjconf/202024507033.

Abstract:
The EGI Cloud Compute service offers a multi-cloud IaaS federation that brings together research clouds as a scalable computing platform for research, accessible with OpenID Connect Federated Identity. The federation is not limited to single sign-on; it also introduces features to facilitate the portability of applications across providers: i) a common VM image catalogue with VM image replication to ensure these images will be available at providers whenever needed; ii) a GraphQL information discovery API to understand the capacities and capabilities available at each provider; and iii) integration with orchestration tools (such as Infrastructure Manager) to abstract the federation and facilitate using heterogeneous providers. EGI also monitors the correct functioning of every provider and collects usage information across the whole infrastructure. DODAS (Dynamic On Demand Analysis Service) is an open-source Platform-as-a-Service tool which allows deploying software applications over heterogeneous and hybrid clouds. DODAS is one of the so-called Thematic Services of the EOSC-hub project; it instantiates on-demand container-based clusters offering a high level of abstraction to users, allowing them to exploit distributed cloud infrastructures with very limited knowledge of the underlying technologies. This work presents a comprehensive overview of the DODAS integration with the EGI Cloud Federation, reporting the experience of integrating with the CMS Experiment submission infrastructure system.
18

Fishman, George S. "Sensitivity Analysis for the System Reliability Function." Probability in the Engineering and Informational Sciences 5, no. 2 (April 1991): 185–213. http://dx.doi.org/10.1017/s0269964800002011.

Abstract:
Sensitivity analysis is an integral part of virtually every study of system reliability. This paper describes a Monte Carlo sampling plan for estimating the sensitivity of system reliability to changes in component reliabilities. The unique feature of the approach is that sample data collected on K independent replications using a specified component reliability vector p are transformed by an importance function into unbiased estimates of system reliability for each component reliability vector q in a set of vectors Q. Moreover, this importance function, together with available prior information about the given system, enables one to produce estimates that require considerably less computing time to achieve a specified accuracy for all |Q| reliability estimates than a set of |Q| crude Monte Carlo sampling experiments would require to estimate each of the |Q| system reliabilities separately. As the number of components in the system grows, the relative efficiency continues to favor the proposed method. The paper shows the intimate relationship between the proposal and the method of control variates. Next, it relates the proposal to the estimation of coefficients in a reliability polynomial and indicates how this concept can be used to improve computing efficiency in certain cases. It also describes a procedure that determines the p vector, to be used in the sampling experiment, that minimizes a bound on the worst-case variance. The paper also derives individual and simultaneous confidence intervals that hold for every fixed sample size K. An example illustrates how the proposal works in an s-t connectedness problem.
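The reweighting idea described above, one sampling experiment at p reused for every q in Q, can be sketched with a likelihood-ratio estimator. The example system, sample size, and function names are illustrative; the paper's importance function and variance-reduction machinery are richer than this.

```python
import random

def reliability_at_q(p, q, structure, n_samples=20000, seed=1):
    """Estimate system reliability at component reliabilities `q` from a
    single Monte Carlo experiment run at `p`, reweighting each replication
    by the likelihood ratio prod(q_i/p_i) over up components times
    prod((1-q_i)/(1-p_i)) over down components. `structure` maps a tuple
    of component states to 1.0 (system up) or 0.0 (system down).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        states = tuple(rng.random() < pi for pi in p)
        weight = 1.0
        for up, pi, qi in zip(states, p, q):
            weight *= qi / pi if up else (1 - qi) / (1 - pi)
        total += weight * structure(states)
    return total / n_samples

def series(states):
    # 2-component series system: up only if every component is up
    return 1.0 if all(states) else 0.0

# Sample once at p = (0.9, 0.9), estimate reliability at q = (0.8, 0.8);
# the true value there is 0.8 * 0.8 = 0.64.
est = reliability_at_q(p=[0.9, 0.9], q=[0.8, 0.8], structure=series)
```

The same sample can be reweighted for any number of q vectors, which is the source of the computing-time savings the paper quantifies.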
19

Cortés, Mariela I., Sávio Freire, Lucas Vieira Alves, Matheus Lima Chagas, Marx Haron Barbosa, Adson Damasceno, Andressa Ferreira, Eliakim Gama, and José Paulo Rodrigues Moraes. "On the Adoption of Empirical Methods and Systematic Reviews in the Brazilian Symposium on Human Factors in Computing Systems." Journal on Interactive Systems 12, no. 1 (November 4, 2021): 125–44. http://dx.doi.org/10.5753/jis.2021.981.

Abstract:
Context: Empirical studies (ES) and systematic reviews (SR) play an essential role in the Human-Computer Interaction (HCI) field, whose focus is on evaluating end users and the usability of software solutions and on synthesizing the evidence found by the HCI community. Even though the adoption of empirical evaluation techniques and SR has gained popularity in recent years, the consistent use of a methodology is still maturing. Goal: This study aims to provide a qualitative and quantitative assessment of the current status of ES and SR presented in the research papers published in the proceedings of the Brazilian Symposium on Human Factors in Computing Systems (IHC Symposium). Method: We conduct an empirical study of the papers across the 18 editions of the IHC Symposium to answer four research questions. Our study proposes a protocol to identify and assess the ES and SR reported in the papers published at the IHC Symposium. Results: From the sample of 259 studies, we find 131 ES and SR (~51%). We have characterized and categorized the ES into case studies, experiments, and surveys. Further, we found evidence that the quantity and quality of these studies have increased across IHC Symposium editions, and that almost half of them give detailed information that makes their replication possible. Conclusion: We hope that the characterization of each study can support the conduction of new ES and SR by the Brazilian HCI community, producing more reliable results and reducing or eliminating biases.
20

Jiang, Chunmao, and Peng Wu. "A Fine-Grained Horizontal Scaling Method for Container-Based Cloud." Scientific Programming 2021 (November 27, 2021): 1–10. http://dx.doi.org/10.1155/2021/6397786.

Abstract:
The container scaling mechanism, or elastic scaling, means the cluster can be dynamically adjusted based on the workload. As a typical container orchestration tool in cloud computing, the Horizontal Pod Autoscaler (HPA) automatically adjusts the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. There are several concerns with the current HPA technology. The first is that it can easily lead to untimely and insufficient scaling for burst traffic. The second is that the anti-jitter mechanism of HPA may cause an inadequate number of one-time scale-outs and, thus, an inability to satisfy subsequent service requests. The third is that the fixed data-sampling time means the interval for data reporting is the same for average and high loads, leading to untimely and insufficient scaling at high-load times. In this study, we propose a Double Threshold Horizontal Pod Autoscaler (DHPA) algorithm, which divides scaling events at fine granularity into three categories: scale-out, no scale, and scale-in. On scaling strength, we employ two thresholds that further subdivide each of the three cases into no scaling (anti-jitter), regular scaling, and fast scaling. The DHPA algorithm determines the scaling strategy using the average of the growth rates of CPU utilization, and thus different scheduling policies are adopted. We compare DHPA with the HPA algorithm under low, medium, and high loads. The experiments show that the DHPA algorithm has better anti-jitter and anti-load characteristics when increasing and reducing containers while ensuring service and cluster security.
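The double-threshold decision rule described above can be sketched as a mapping from the average CPU-utilization growth rate to one of five actions. The threshold values here are illustrative placeholders, not taken from the paper.

```python
def dhpa_decision(cpu_growth_rate, low=0.05, high=0.20):
    """Map the average CPU-utilisation growth rate to a scaling action
    using two thresholds, mirroring the double-threshold idea: rates
    within (-low, low) trigger no action (anti-jitter), rates beyond
    +/-high trigger fast scaling, and the band between them triggers
    regular scaling. Thresholds are illustrative, not from the paper.
    """
    if cpu_growth_rate >= high:
        return "fast scale-out"
    if cpu_growth_rate >= low:
        return "regular scale-out"
    if cpu_growth_rate <= -high:
        return "fast scale-in"
    if cpu_growth_rate <= -low:
        return "regular scale-in"
    return "no scaling"
```

Compared with plain HPA's single target, the inner dead band suppresses jitter while the outer band lets burst traffic trigger a larger one-time adjustment.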
21

Jurado Pérez, Luis, and Joaquín Salvachúa. "Simulation of Scalability in Cloud-Based IoT Reactive Systems Leveraged on a WSAN Simulator and Cloud Computing Technologies." Applied Sciences 11, no. 4 (February 18, 2021): 1804. http://dx.doi.org/10.3390/app11041804.

Abstract:
Implementing a wireless sensor and actuator network (WSAN) in Internet of Things (IoT) applications is a complex task. Establishing the number of nodes, sensors, and actuators, together with their locations and characteristics, requires a tool that can determine this information in advance. Additionally, in IoT scenarios where a large number of sensors and actuators are present, such as in a smart city, it is necessary to analyze the scalability of these systems. Modeling and simulation can support such an early study and reduce development and deployment times in environments such as a smart city. Design-time verification of the system through a network simulation tool is useful for the most complex and expensive part of the system, the WSAN. However, using real components for other parts of the IoT system is feasible with cloud computing infrastructure. Although cloud computing simulators exist, their cloud layer is poorly developed for the requirements of IoT applications. Technologies around cloud computing can be used for the rapid deployment of some parts of the IoT application and of software services using containers. With this framework, it is possible to accelerate the development of the real system, facilitate the rapid deployment of a prototype, and provide more realistic simulations. This article proposes an approach for the modeling and simulation of IoT systems and services in a smart city, leveraging a WSAN simulator and cloud computing technologies. Our approach was verified through experiments with two use cases. (1) A model of sensor and actuator networks as an integral part of an IoT application to monitor and control parks in a city; through this use case, we analyze the scalability of a system whose sensors constantly emit data. (2) A model for cloud-based reactive parking lot systems for a city, for which we created an IoT parking system simulation model. The model contains an M/M/c/N queuing system to simulate service requests from users. In this use case, we evaluated model replication through hierarchical modeling and the scalability of a distributed parking reservation service. This last use case showed how the simulation model can provide information to size the system through probability-distribution variables related to the queuing system. The experimental results show that the use of simulation techniques for this type of application makes it possible to analyze scalability in a more realistic way.
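The M/M/c/N queue mentioned in this abstract has a standard closed-form steady state. As a rough illustration (not taken from the paper), a short Python sketch computes the state probabilities and the probability that an arriving service request is rejected because the system is full:

```python
from math import factorial

def mmcn_state_probs(lam, mu, c, N):
    """Steady-state probabilities of an M/M/c/N queue.

    lam: arrival rate, mu: per-server service rate,
    c: number of servers, N: total capacity (in service + waiting).
    """
    a = lam / mu  # offered load in Erlangs
    unnorm = []
    for n in range(N + 1):
        if n <= c:
            # all n customers are in service
            unnorm.append(a ** n / factorial(n))
        else:
            # c busy servers, n - c customers waiting
            unnorm.append(a ** n / (factorial(c) * c ** (n - c)))
    total = sum(unnorm)
    return [u / total for u in unnorm]

def blocking_probability(lam, mu, c, N):
    """Probability that an arriving request finds the system full."""
    return mmcn_state_probs(lam, mu, c, N)[N]
```

For example, with an arrival rate of 2 requests per unit time, unit service rate, 3 servers, and capacity 10, `blocking_probability(2, 1, 3, 10)` gives the fraction of requests lost, one of the sizing quantities such a model can report.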
22

SENKEVICH, Lyudmila B., and Marat A. SABITOV. "SIMULATION MODELING AND OPTIMIZATION OF THE OPERATION OF A PARALLEL SERVER WITH FAILURES IN ANYLOGIC." Tyumen State University Herald. Physical and Mathematical Modeling. Oil, Gas, Energy 8, no. 1 (2022): 126–43. http://dx.doi.org/10.21684/2411-7978-2022-8-1-126-143.

Abstract:
Modern scientific research increasingly involves processing large amounts of data. The widespread use of client-server interaction and cloud computing raises questions about the efficiency of a parallel server, as well as about the ability to predict results depending on the degree of load and the characteristics of the equipment. This article simulates a parallel server with failures in the AnyLogic environment and then performs multidimensional optimization by the weighted sum method. As part of the study, a simulation model of a queuing system with failures was built. It contains a server simulator, terminals, a failure simulator, and statistics-collection segments. The parallel server model used is abstract and fairly general, and can be made concrete by introducing additional dependencies and refining characteristics. The experiment with optimal parameters yielded the following gains in system efficiency indicators: processor load by memory, a gain of 7%; processor load by load factor, a gain of 8%; probability of terminal downtime, a gain of 5.7%; the failure rate of the main computer, 36 times lower than in the initial configuration; and the number of interrupted programs, seven fewer. The total number of completed requests remained at the same level (462-465) because the intensity of the terminals did not vary. Since the results of replications ("runs") are unique and the values of the optimized function vary between replications, the built-in facility for a variable number of replications (from 5 to 10) with a confidence probability of 95% and an error level of 0.5 was used. The obtained results suggest the possibility of further research on the model and its development in the AnyLogic environment.
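The weighted sum method used here for multidimensional optimization collapses several objectives into one scalar score. The following illustrative Python sketch (function and parameter names are ours, not from the paper) assumes the objectives are normalized to a common scale and oriented so that smaller is better:

```python
def weighted_sum(objectives, weights):
    """Scalarize several normalized minimization objectives.

    weights: non-negative importance weights that must sum to 1.
    """
    if len(objectives) != len(weights):
        raise ValueError("one weight per objective is required")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * f for w, f in zip(weights, objectives))

def best_configuration(candidates, weights):
    """Pick the (name, objective-vector) candidate with the lowest score."""
    return min(candidates, key=lambda c: weighted_sum(c[1], weights))
```

In a parameter sweep, each simulated configuration (for example, a combination of memory load, load factor, and downtime probability) would contribute one objective vector, and the configuration minimizing the weighted sum is reported as optimal.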
23

Garrett, K. A., L. V. Madden, G. Hughes, and W. F. Pfender. "New Applications of Statistical Tools in Plant Pathology." Phytopathology® 94, no. 9 (September 2004): 999–1003. http://dx.doi.org/10.1094/phyto.2004.94.9.999.

Abstract:
The series of papers introduced by this one address a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to appropriately deal with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and discrete data with generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication, the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.
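Model selection via Akaike's information criterion, mentioned in this abstract, reduces to a simple penalized-likelihood comparison. A minimal Python sketch (illustrative only, not tied to any package named in the paper) makes the computation concrete:

```python
def aic(log_likelihood, n_params):
    """Akaike's information criterion: AIC = 2k - 2 ln L.

    Lower values indicate a better trade-off between goodness of fit
    and model complexity.
    """
    return 2 * n_params - 2 * log_likelihood

def select_model(models):
    """models: list of (name, log_likelihood, n_params) tuples.

    Returns the name of the model with the smallest AIC.
    """
    return min(models, key=lambda m: aic(m[1], m[2]))[0]
```

Here a fuller model with a higher log-likelihood can still lose to a simpler one if the extra parameters do not buy enough fit, which is exactly the penalty AIC encodes.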
24

Alves, Givago Lopes, Raimundo Nonato Viana Santos, Antônia Alice Costa Rodrigues, Maria José Pinheiro Correa, Mário Luiz Ribeiro Mesquita, and Maria Rosângela Malheiros Silva. "Effects of mulching on the weed community and grain yield of upland rice cultivars." December 2021, no. 15(12):2021 (December 12, 2021): 1478–84. http://dx.doi.org/10.21475/ajcs.21.15.12.p3425.

Abstract:
This study evaluated the effects of mulching on the upland rice cultivars Comecru and Cambará and on the weed community under four amounts of babassou (Attalea speciosa Mart. ex Spreng.) straw mulching, namely 0, 15, 20, and 25 t ha-1, with a view to controlling weeds. The experiment was laid out in a randomized complete block design in a 2 x 4 factorial scheme with four replications. Rice plant height, percentage of fertile panicles, number of spikelets per panicle, weight of 100 grains, and grain yield were assessed. We also assessed the weed community by computing the following phytosociological parameters: density, frequency, and the importance value index (IVI) of each species. Babassou straw mulching reduced weed density and dry mass between rows of the upland rice cultivars and increased rice yield. The weed species with the highest IVI in the treatments without mulching were Cyperus iria, Fimbristylis dichotoma, and Digitaria ciliaris. Rice grain yield increased with the amount of straw. Comecru cv. had the strongest suppressive effect on weeds, with significantly higher grain yield (1,214.85 kg ha-1) than Cambará cv. (878.72 kg ha-1). We conclude that the higher amounts (20 and 25 t ha-1) of babassou straw mulching suppressed weeds, reducing competition with the rice cultivars, which resulted in an increase in the number of rice panicles, weight of 100 grains, spikelet fertility, and grain yield in both rice cultivars.
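The importance value index comes in several variants; assuming the common two-component form (relative density plus relative frequency), which is our assumption since the abstract does not spell out its formula, the computation can be sketched in Python as:

```python
def importance_value_index(counts, plot_presence):
    """Two-component IVI = relative density + relative frequency.

    counts: {species: total individuals over all plots}
    plot_presence: {species: number of plots where the species occurs}
    Both components are expressed as percentages, so with two
    components the IVIs across all species sum to 200.
    """
    total_count = sum(counts.values())
    total_presence = sum(plot_presence.values())
    ivi = {}
    for sp in counts:
        rel_density = 100.0 * counts[sp] / total_count
        rel_frequency = 100.0 * plot_presence[sp] / total_presence
        ivi[sp] = rel_density + rel_frequency
    return ivi
```

Ranking species by this index is what identifies, for example, Cyperus iria as dominant in the unmulched treatments.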
25

Hema S and Dr. Kangaiammal A. "A Secure Method for Managing Data in Cloud Storage using Deduplication and Enhanced Fuzzy Based Intrusion Detection Framework." November 2020 6, no. 11 (November 30, 2020): 165–73. http://dx.doi.org/10.46501/ijmtst061131.

Abstract:
Cloud services increase data availability in order to offer seamless service to the client. With increasing data availability, more redundancy and more memory space are required to store such data. Cloud computing requires substantial storage and efficient protection for all types of data. With the amount of data produced increasing exponentially over time, storing replicated data contents is inevitable, so storage optimization approaches become an important prerequisite for enormous storage domains such as cloud storage. Data deduplication is a technique that compresses data by eliminating replicated copies of identical data; it is widely utilized in cloud storage to conserve bandwidth and minimize storage space. Although data deduplication eliminates data redundancy and replication, it also introduces significant data privacy and security problems for the end user. Considering this, a novel security-based deduplication model is proposed in this work to reduce the hash value of a given file and to provide additional security for cloud storage. In the proposed method, the hash value of a given file is reduced using the Distributed Storage Hash Algorithm (DSHA), and for security the file is encrypted using an Improved Blowfish Encryption Algorithm (IBEA). The framework also proposes an enhanced fuzzy-based intrusion detection system (EFIDS) that defines rules for the major attacks and thereby alerts the system automatically. Finally, the combination of data exclusion and encryption allows cloud users to manage their cloud storage effectively by avoiding repeated data encroachment; it also saves bandwidth and alerts the system to attackers. The experimental results reveal that the discussed algorithm yields improved throughput and bytes saved per second in comparison with other chunking algorithms.
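DSHA and IBEA are the paper's own algorithms; as a generic stand-in, the core idea of chunk-level deduplication can be sketched with standard SHA-256 hashing over fixed-size chunks (illustrative only, not the paper's method):

```python
import hashlib

class DedupStore:
    """Store file data as unique chunks keyed by hash, so identical
    chunks are written only once regardless of how many files share them."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # chunk hash -> chunk bytes (unique storage)
        self.files = {}    # file name -> ordered list of chunk hashes

    def put(self, name, data):
        hashes = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # skip already-stored chunks
            hashes.append(h)
        self.files[name] = hashes

    def get(self, name):
        """Reassemble a file from its chunk hashes."""
        return b"".join(self.chunks[h] for h in self.files[name])
```

Storing the same content under two names consumes the space of one copy plus two small hash lists, which is the bandwidth and storage saving the abstract describes; a real system would add encryption on top, as the paper does with IBEA.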
26

Kumar, P. J., and P. Ilango. "Data Replication in Conventional Computing Environment." International Journal of Computer Trends and Technology 45, no. 2 (March 25, 2017): 67–74. http://dx.doi.org/10.14445/22312803/ijctt-v45p114.

27

Chang, Wan-Chi, and Pi-Chung Wang. "Adaptive Replication for Mobile Edge Computing." IEEE Journal on Selected Areas in Communications 36, no. 11 (November 2018): 2422–32. http://dx.doi.org/10.1109/jsac.2018.2874140.

28

Khan, Nawsher, Noraziah Ahmad, Tutut Herawan, and Zakira Inayat. "Cloud Computing." International Journal of Cloud Applications and Computing 2, no. 3 (July 2012): 68–85. http://dx.doi.org/10.4018/ijcac.2012070103.

Abstract:
Efficiency (in terms of time consumption) and effectiveness in resource utilization are the desired quality attributes in cloud service provision, the main purpose of which is to execute jobs optimally, i.e., with minimum average waiting, turnaround, and response times, by using an effective scheduling technique. Replication provides improved availability and scalability, decreases bandwidth use, and increases fault tolerance. To speed up access, a file can be replicated so that a user can access a nearby replica. This paper proposes an architecture to convert Globally One Cloud to Locally Many Clouds. By combining replication and scheduling, this architecture improves efficiency and ease of access. In the case of failure of one sub-cloud or one cloud service, clients can start using another cloud under "failover" techniques. As a result, no single cloud service will go down.
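The "failover" behaviour described here, where clients switch to another sub-cloud when one fails, can be sketched generically (the function and its parameters are our illustration, not the paper's architecture):

```python
def read_with_failover(replica_reads, key):
    """Try each replica read function in turn; return the first success.

    replica_reads: ordered list of callables taking a key and raising
    an exception when that replica (sub-cloud) is unavailable. The
    ordering encodes preference, e.g. nearest replica first.
    """
    last_error = None
    for read in replica_reads:
        try:
            return read(key)
        except Exception as err:
            last_error = err  # this replica is down; fail over to the next
    raise RuntimeError("all replicas failed") from last_error
```

A client keeps service continuity as long as at least one replica is reachable, which is the availability property the abstract claims for the Locally Many Clouds design.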
29

Norton, John D. "Replicability of Experiment." THEORIA. An International Journal for Theory, History and Foundations of Science 30, no. 2 (June 20, 2015): 229. http://dx.doi.org/10.1387/theoria.12691.

Abstract:
The replicability of experiment is routinely offered as the gold standard of evidence. I argue that it is not supported by a universal principle of replicability in inductive logic. A failure of replication may not impugn a credible experimental result; and a successful replication can fail to vindicate an incredible experimental result. Rather, employing a material approach to inductive inference, the evidential import of successful replication of an experiment is determined by the prevailing background facts. Commonly, these background facts do support successful replication as a good evidential guide and this has fostered the illusion of a deeper, exceptionless principle.
30

Dominik, Tomáš, Daniel Dostál, Martin Zielina, Jan Šmahaj, Zuzana Sedláčková, and Roman Procházka. "Libet’s experiment: A complex replication." Consciousness and Cognition 65 (October 2018): 1–26. http://dx.doi.org/10.1016/j.concog.2018.07.004.

31

Wang, Hanning, Weixiang Xu, Futian Wang, and Chaolong Jia. "A Cloud-Computing-Based Data Placement Strategy in High-Speed Railway." Discrete Dynamics in Nature and Society 2012 (2012): 1–15. http://dx.doi.org/10.1155/2012/396387.

Abstract:
As an important component of China’s transportation data sharing system, high-speed railway data sharing is a typical application of data-intensive computing. Currently, most high-speed railway data is shared in a cloud computing environment, so there is an urgent need for an effective cloud-computing-based data placement strategy for high-speed railway. In this paper, a new data placement strategy, named the hierarchical structure data placement strategy, is proposed. The proposed method combines a semidefinite programming algorithm with a dynamic interval mapping algorithm. The semidefinite programming algorithm is suitable for the placement of files with multiple replicas, ensuring that different replicas of a file are placed on different storage devices, while the dynamic interval mapping algorithm ensures better self-adaptability of the data storage system. A hierarchical data placement strategy is proposed for large-scale networks. A new theoretical analysis is also provided and compared with several previous data placement approaches, and several experiments show its efficacy.
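The paper's semidefinite programming and dynamic interval mapping algorithms are beyond a short sketch, but the core constraint, placing different replicas of a file on different storage devices, can be illustrated with a simpler rendezvous-hashing placement (our stand-in, not the paper's method):

```python
import hashlib

def place_replicas(file_id, devices, n_replicas):
    """Deterministically map each replica of a file to a distinct device.

    Ranks devices by a per-file hash score (rendezvous hashing) and
    takes the top n_replicas, so no two replicas share a device and
    the same file always maps to the same device set.
    """
    if n_replicas > len(devices):
        raise ValueError("more replicas requested than devices available")

    def score(dev):
        # Hash the (file, device) pair so rankings differ per file.
        return hashlib.sha256(f"{file_id}:{dev}".encode()).hexdigest()

    ranked = sorted(devices, key=score)
    return ranked[:n_replicas]
```

Because each file induces its own device ranking, load spreads across devices while the distinct-device guarantee for replicas is kept by construction.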
32

Kumar, P. J., and P. Ilango. "Data Replication in Current Generation Computing Environment." International Journal of Engineering Trends and Technology 45, no. 10 (March 25, 2017): 488–92. http://dx.doi.org/10.14445/22315381/ijett-v45p292.

33

O'Connor, K., R. Hallam, and S. Rachman. "Fearlessness and courage: A replication experiment." British Journal of Psychology 76, no. 2 (May 1985): 187–97. http://dx.doi.org/10.1111/j.2044-8295.1985.tb01942.x.

34

Martínez, Alberto A. "Replication of Coulomb's Torsion Balance Experiment." Archive for History of Exact Sciences 60, no. 6 (September 5, 2006): 517–63. http://dx.doi.org/10.1007/s00407-006-0113-9.

35

Shahapure, Nagamani H., and P. Jayarekha. "Replication: A Technique for Scalability in Cloud Computing." International Journal of Computer Applications 122, no. 5 (July 18, 2015): 13–18. http://dx.doi.org/10.5120/21695-4799.

36

Garg, Rajni. "CLOUD COMPUTING ENERGY DISSIPATION REDUCTION USING SHADOW REPLICATION." International Journal of Advanced Research in Computer Science 9, no. 2 (April 20, 2018): 93–98. http://dx.doi.org/10.26483/ijarcs.v9i2.5544.

37

Wang, Da, Gauri Joshi, and Gregory W. Wornell. "Efficient Straggler Replication in Large-Scale Parallel Computing." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 4, no. 2 (June 14, 2019): 1–23. http://dx.doi.org/10.1145/3310336.

38

Boru, Dejene, Dzmitry Kliazovich, Fabrizio Granelli, Pascal Bouvry, and Albert Y. Zomaya. "Energy-efficient data replication in cloud computing datacenters." Cluster Computing 18, no. 1 (January 10, 2015): 385–402. http://dx.doi.org/10.1007/s10586-014-0404-x.

39

Shakarami, Ali, Mostafa Ghobaei-Arani, Ali Shahidinejad, Mohammad Masdari, and Hamid Shakarami. "Data replication schemes in cloud computing: a survey." Cluster Computing 24, no. 3 (April 16, 2021): 2545–79. http://dx.doi.org/10.1007/s10586-021-03283-7.

40

Krammer, N., and D. Liko. "Computing challenges of the CMS experiment." Journal of Instrumentation 12, no. 06 (June 26, 2017): C06039. http://dx.doi.org/10.1088/1748-0221/12/06/c06039.

41

Nakajima, T., Y. Sakai, and A. Suyama. "Experiment of DNA computing with robot." Seibutsu Butsuri 40, supplement (2000): S152. http://dx.doi.org/10.2142/biophys.40.s152_3.

42

HARA, Takanori. "Computing at the Belle II experiment." Journal of Physics: Conference Series 664, no. 1 (December 23, 2015): 012002. http://dx.doi.org/10.1088/1742-6596/664/1/012002.

43

Choutko, V., A. Egorov, A. Eline, and B. S. Shan. "Computing Strategy of the AMS Experiment." Journal of Physics: Conference Series 664, no. 3 (December 23, 2015): 032029. http://dx.doi.org/10.1088/1742-6596/664/3/032029.

44

Tedre, Matti, and Nella Moisseinen. "Experiments in Computing: A Survey." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/549398.

Abstract:
Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing’s centrality in other fields, to promote understanding of experiments in modern science in general.
45

Mrnjavac, Teo, Konstantinos Alexopoulos, Vasco Chibante Barroso, and George Raduta. "AliECS: a New Experiment Control System for the ALICE Experiment." EPJ Web of Conferences 245 (2020): 01033. http://dx.doi.org/10.1051/epjconf/202024501033.

Abstract:
The ALICE Experiment at CERN’s Large Hadron Collider (LHC) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2021, which includes a new computing system called O2 (Online-Offline). To ensure the efficient operation of the upgraded experiment and of its newly designed computing system, a reliable, high-performance, and automated experiment control system is being developed. The ALICE Experiment Control System (AliECS) is a distributed system based on state-of-the-art cluster management and microservices that have recently emerged in the distributed computing ecosystem. Such technologies will allow the ALICE collaboration to benefit from a vibrant and innovative open source community. This communication describes the AliECS architecture. It provides an in-depth overview of the system’s components, features, and design elements, as well as its performance. It also reports on the experience with AliECS as part of ALICE Run 3 detector commissioning setups.
46

Kobayashi, Tetsuro, Asako Miura, and Kazunori Inamasu. "Media Priming Effect: A Preregistered Replication Experiment." Journal of Experimental Political Science 4, no. 1 (2017): 81–94. http://dx.doi.org/10.1017/xps.2017.8.

Abstract:
Iyengar et al. (1984, The Evening News and Presidential Evaluations. Journal of Personality and Social Psychology 46(4): 778–87) discovered the media priming effect, positing that by drawing attention to certain issues while ignoring others, television news programs help define the standards by which presidents are evaluated. We conducted a direct replication of Experiment 1 by Iyengar et al. (1984, The Evening News and Presidential Evaluations. Journal of Personality and Social Psychology 46(4): 778–87) with some changes. Specifically, we (a) collected data from Japanese undergraduates; (b) reduced the number of conditions to two; (c) used news coverage of the issue of relocating US bases in Okinawa as the treatment; (d) measured issue-specific evaluations of the Japanese Prime Minister in the pre-treatment questionnaire; and (e) performed statistical analyses that are more appropriate for testing heterogeneity in the treatment effect. We did not find statistically significant evidence of media priming. Overall, the results suggest that the effects of media priming may be quite sensitive either to the media environment or to differences in populations in which the effect has been examined.
47

Sasikumar, K., and Madiajagan Muthaiyan. "Literature Survey of Dynamic Data Replication in Cloud Computing." Research Journal of Applied Sciences, Engineering and Technology 13, no. 2 (July 15, 2016): 158–72. http://dx.doi.org/10.19026/rjaset.13.2927.

48

Shahzad, Haroon, Xiang Li, and Muhammad Irfan. "Review of Data Replication Techniques for Mobile Computing Environment." Research Journal of Applied Sciences, Engineering and Technology 6, no. 9 (July 15, 2013): 1639–48. http://dx.doi.org/10.19026/rjaset.6.3883.

49

Shao, Yanling, Chunlin Li, Zhao Fu, Leyue Jia, and Youlong Luo. "Cost-effective replication management and scheduling in edge computing." Journal of Network and Computer Applications 129 (March 2019): 46–61. http://dx.doi.org/10.1016/j.jnca.2019.01.001.

50

Yang, Chaofei, Beiye Liu, Hai Li, Yiran Chen, Mark Barnell, Qing Wu, Wujie Wen, and Jeyavijayan Rajendran. "Thwarting Replication Attack Against Memristor-Based Neuromorphic Computing System." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 39, no. 10 (October 2020): 2192–205. http://dx.doi.org/10.1109/tcad.2019.2937817.
