Journal articles on the topic 'Distributed Read-Write System'

To see the other types of publications on this topic, follow the link: Distributed Read-Write System.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Distributed Read-Write System.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lin, Ji-Cherng, Tetz C. Huang, Cheng-Zen Yang, and Nathan Mou. "Quasi-self-stabilization of a distributed system assuming read/write atomicity." Computers & Mathematics with Applications 57, no. 2 (January 2009): 184–94. http://dx.doi.org/10.1016/j.camwa.2008.02.052.

2

Fan, Li Xian, Yong Zhao Xu, and Hong Tao Li. "Design and Realization of E-Learning Resource Storage System." Advanced Materials Research 271-273 (July 2011): 1307–12. http://dx.doi.org/10.4028/www.scientific.net/amr.271-273.1307.

Abstract:
The datacenter of a university owns massive e-learning resources, and supporting safe, efficient access under large-scale concurrent reads and writes has long troubled campus datacenter administrators. In this paper, we present our e-learning resource storage system. Resources are stored in an HDFS (Hadoop Distributed File System)-based storage system, which effectively supports the typical e-learning workload of small write-once files with large-scale concurrent reads. Within the system, e-learning data can be deployed redundantly across multiple distributed storage nodes, improving read and write speed. Furthermore, when a single data node fails, the system can be recovered from the redundant copies on the remaining nodes, greatly improving reliability. Desktop systems, mobile platforms, and web browsers can all access the system through a cross-platform mechanism. The system adopts an encryption mechanism based on a secure storage directory, and the user terminal can encrypt and decrypt user files offline to guarantee the safety of user data.
3

Gao, Jintao, Wenjie Liu, and Zhanhuai Li. "A Strategy of Data Synchronization in Distributed System with Read Separating from Write." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 1 (February 2020): 209–15. http://dx.doi.org/10.1051/jnwpu/20203810209.

Abstract:
Read separating from write is a strategy that NewSQL systems adopt to combine the advantages of traditional relational databases and NoSQL databases. Under this architecture, baseline data is split into multiple partitions stored at distributed physical nodes, while delta data is stored at a single transaction node. To reduce the pressure on the transaction node and improve query performance, delta data needs to be synchronized into the storage nodes. Current strategies trigger the data synchronization procedure per partition, meaning that unchanged partitions also participate in synchronization, which consumes extra network bandwidth, local I/O, and space. To improve the efficiency of data synchronization while reducing space usage, a fine-grained data synchronization strategy is proposed. Its main ideas are: fine-grained logical partitions are established on top of the original coarse-grained partitions, providing a more precise synchronization unit; a delta-data sensing strategy is introduced, which records the mapping between changed partitions and their delta data; and, instead of being partition-driven, data synchronization is driven by a delta-broadcasting mechanism, so that only changed partitions participate. The fine-grained strategy is implemented on OceanBase, a distributed database with read separating from write, and the results show that it outperforms other strategies in synchronization efficiency and space utilization.
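The delta-data sensing idea at the heart of this strategy — remember which fine-grained partitions actually received deltas, and synchronize only those — can be sketched as follows (a toy Python illustration under our own naming, such as `DeltaMap`; it is not OceanBase code):

```python
from collections import defaultdict

class DeltaMap:
    """Toy sketch of delta-data sensing: remember which fine-grained
    logical partitions were changed so that only those synchronize."""

    def __init__(self, num_logical_partitions):
        self.num_parts = num_logical_partitions
        self.deltas = defaultdict(list)   # partition id -> pending delta rows

    def record_write(self, key, row):
        # Fine-grained logical partition layered over coarser physical ones.
        part = hash(key) % self.num_parts
        self.deltas[part].append(row)

    def synchronize(self, apply_fn):
        # Only changed partitions participate; unchanged ones cost nothing.
        for part, rows in list(self.deltas.items()):
            apply_fn(part, rows)          # e.g., broadcast deltas to storage nodes
            del self.deltas[part]         # delta applied, space reclaimed

dm = DeltaMap(num_logical_partitions=1024)
dm.record_write("user:42", {"balance": 10})
dm.synchronize(lambda p, rows: print(f"sync partition {p}: {rows}"))
```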
4

Mardedi, Lalu Zazuli Azhar. "Analisa Kinerja System Gluster FS pada Proxmox VE untuk Menyediakan High Availability [Performance Analysis of the GlusterFS System on Proxmox VE to Provide High Availability]." MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 19, no. 1 (November 5, 2019): 173–85. http://dx.doi.org/10.30812/matrik.v19i1.473.

Abstract:
Virtualization is used as a means to improve the scalability of existing hardware. Proxmox Virtual Environment (PVE) is an open-source hypervisor. PVE can use network-attached storage as a network-based storage location in GlusterFS, a distributed file system. The research methodology uses the Network Development Life Cycle (NDLC), which has three stages: analysis, design, and simulation prototyping. The analysis phase collects data through a literature study and data analysis. The design phase produces the network system design and trials. The simulation prototyping stage tests various scenarios and analyzes read, write, and re-write performance. The PVE system is integrated with GlusterFS storage built on two servers, PVE 1 and PVE 2. The GlusterFS performance results show that read throughput is higher than write and re-write throughput across file-size variations of 1 MB, 10 MB, and 100 MB. In conclusion, GlusterFS can be clustered on the same nodes as a PVE cluster, which can use GlusterFS as additional storage. GlusterFS also supports live migration, so an LXC container or VM can migrate from one node to another while online. Write, re-write, and read performance on GlusterFS is also relatively stable across the tested file sizes.
5

Ahmad. "Data Replication Using Read-One-Write-All Monitoring Synchronization Transaction System in Distributed Environment." Journal of Computer Science 6, no. 10 (October 1, 2010): 1095–98. http://dx.doi.org/10.3844/jcssp.2010.1095.1098.

6

Li, Jun, Changsen Pan, and Menghan Lu. "A Seismic Data Processing System based on Fast Distributed File System." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 5 (April 6, 2015): 5779–88. http://dx.doi.org/10.24297/ijct.v14i5.3986.

Abstract:
Big data has attracted increasing attention with the advent of the cloud era, and in the field of seismic exploration the amount of data created has grown enormously to satisfy social needs. It is therefore necessary to build a highly effective system for data storage and processing. In this paper, we target the properties of seismic data and its I/O performance requirements, and establish a distributed file system for processing seismic data based on the Fast Distributed File System (FastDFS). We then test our system through a series of operations such as file writes and reads, and the results show that our file system is well suited and effective for processing seismic data.
7

Mat Deris, Mustafa, Ali Mamat, Pua Chai Seng, and Mohd Yazid Saman. "Three Dimensional Grid Structure for Efficient Access of Replicated Data." Journal of Interconnection Networks 02, no. 03 (September 2001): 317–29. http://dx.doi.org/10.1142/s0219265901000415.

Abstract:
This article addresses the performance of data replication protocols in terms of data availability and communication cost. Specifically, we present a new protocol, the Three Dimensional Grid Structure (TDGS) protocol, to manage data replication in distributed systems. The protocol provides high availability for read and write operations with limited fault tolerance at low communication cost. With the TDGS protocol, a read operation needs only two data copies, while a write operation requires a minimal number of copies. In comparison to other protocols, TDGS requires lower communication cost per operation while providing higher data availability.
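The headline numbers (reads limited to two copies, writes to a minimal set) are instances of the general quorum pattern sketched below. The exact TDGS quorum geometry is defined in the paper; the replica count and quorum sizes here are illustrative assumptions only:

```python
def assemble_quorum(replica_up, need):
    """Return `need` live replicas for an operation, or None if unavailable."""
    live = [r for r, up in replica_up.items() if up]
    return live[:need] if len(live) >= need else None

# Eight replicas (e.g., vertices of a three-dimensional structure), one failed.
state = {f"r{i}": True for i in range(8)}
state["r3"] = False

read_set = assemble_quorum(state, need=2)    # reads capped at two copies
write_set = assemble_quorum(state, need=4)   # writes contact a larger, minimal set
print("read via", read_set, "| write via", write_set)
```

Capping reads at two copies is what keeps the per-operation communication cost low while the overlapping write set preserves availability.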
8

Li, Zijian, and Chuqiao Xiao. "ER-Store: A Hybrid Storage Mechanism with Erasure Coding and Replication in Distributed Database Systems." Scientific Programming 2021 (September 10, 2021): 1–13. http://dx.doi.org/10.1155/2021/9910942.

Abstract:
In distributed database systems, as cluster scales grow, efficiency and availability become critical considerations. In a cluster, a common approach to high availability is replication, but this is inefficient due to its low storage utilization. Erasure coding can provide data reliability while ensuring high storage utilization; however, because of the large number of coding and decoding operations required of the CPU, it is not suitable for frequently updated data. To optimize the storage efficiency of the data in the distributed system without affecting its availability, this paper proposes a data temperature recognition algorithm that divides data tablets into three types, cold, warm, and hot, according to access frequency. Combining three-way replication with erasure coding, we propose ER-Store, a hybrid storage mechanism for the different data types. We also use the read-write separation architecture of the distributed database system to design the data temperature conversion cycle, which reduces the computational overhead caused by frequent updates under erasure coding. We implemented this design on the CBase database system, which is based on the read-write separation architecture, and the experimental results show that it saves 14.6%–18.3% of storage space while meeting the system's access performance requirements.
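A minimal sketch of the temperature-based split follows; the thresholds, names, and the mapping to storage formats are our assumptions for illustration, not ER-Store's published algorithm:

```python
def classify(access_count, hot=100, warm=10):
    """Toy data-temperature recognition: bucket a tablet by access frequency.
    Thresholds are illustrative placeholders."""
    return "hot" if access_count >= hot else "warm" if access_count >= warm else "cold"

def storage_plan(tablets):
    # Hot, frequently updated tablets stay 3-way replicated (cheap updates);
    # cold tablets are erasure-coded (high storage utilization).
    scheme = {"hot": "3-replica", "warm": "transitioning", "cold": "erasure-coded"}
    return {t: scheme[classify(n)] for t, n in tablets.items()}

print(storage_plan({"t1": 500, "t2": 40, "t3": 2}))
# {'t1': '3-replica', 't2': 'transitioning', 't3': 'erasure-coded'}
```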
9

Ali, Zain. "The Communication Mechanism in a Distributed System." International Journal for Electronic Crime Investigation 5, no. 3 (April 6, 2022): 17–22. http://dx.doi.org/10.54692/ijeci.2022.050385.

Abstract:
This research discusses problems in dynamic distributed systems that relate to data sharing and communication from one system to another over the network. A distributed system communicates with its related systems by sending and receiving messages over the Internet, and in this way it accomplishes its work. A dynamic distributed system spans many changeable kinds of networks, different operating systems such as Android, macOS, and Windows, software portability across processors, WAN outages, and inter-process communication errors. Another problem that occurs in distributed systems is latency, so it is very difficult to develop software for these environments. The proposed work aims to make message communication in distributed systems easy, reliable, and efficient; coherence is responsible for the sharing of data. Every problem can be solved provided appropriate methods and algorithms are used. We create a new method, a dynamic atomic shared memory for message communication. The method is formally stated and then implemented. Under this method, owners can change dynamically, and their read and write access changes accordingly.
10

Noor, Ahmad Shukri Mohd, Nur Farhah Mat Zian, and Fatin Nurhanani M. Shaiful Bahri. "Survey on replication techniques for distributed system." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 2 (April 1, 2019): 1298. http://dx.doi.org/10.11591/ijece.v9i2.pp1298-1303.

Abstract:
Distributed systems mainly provide access to a large amount of data and computational resources through a wide range of interfaces. Besides their dynamic nature, which means that resources may enter and leave the environment at any time, many distributed applications run in environments where faults are more likely to occur because of ever-increasing scale and complexity. Given such diverse fault and failure conditions, fault tolerance has become a critical element of distributed computing, allowing a system to perform its function correctly even in the presence of faults. Replication techniques primarily concentrate on two fault-tolerance manners: masking failures and reconfiguring the system in response. This paper presents a brief survey of replication techniques, including Read One Write All (ROWA), Quorum Consensus (QC), the Tree Quorum (TQ) protocol, the Grid Configuration (GC) protocol, Two-Replica Distribution Techniques (TRDT), Neighbour Replica Triangular Grid (NRTG), and Neighbour Replication Distributed Techniques (NRDT). Each technique has its own redeeming features and shortcomings, which form the subject matter of this survey.
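Of the surveyed protocols, Read One Write All (ROWA) is the simplest to state: any single replica can serve a read, but a write must be applied to every replica. A toy sketch (ours, not from the paper):

```python
class ROWA:
    """Read-One-Write-All over an in-memory replica set (toy model)."""

    def __init__(self, n):
        self.replicas = [dict() for _ in range(n)]

    def read(self, key):
        return self.replicas[0].get(key)      # any one replica suffices

    def write(self, key, value):
        for r in self.replicas:               # all replicas must apply the write;
            r[key] = value                    # one unreachable replica blocks writes

store = ROWA(n=3)
store.write("x", 1)
print(store.read("x"))  # -> 1
```

The asymmetry is the protocol's trade-off: reads are maximally available and cheap, while write availability drops as soon as any replica fails — which is what the quorum-based variants in the survey address.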
11

Wilczkiewicz, Bartłomiej, Piotr Jankowski-Mihułowicz, and Mariusz Węglarski. "Test Platform for Developing Processes of Autonomous Identification in RFID Systems with Proximity-Range Read/Write Devices." Electronics 12, no. 3 (January 26, 2023): 617. http://dx.doi.org/10.3390/electronics12030617.

Abstract:
The subject of a distributed RFID system with proximity-range read/write devices (RWDs) is considered in this paper. Possible work scenarios were presented in the scope of industrial implementations and were then tested in a dedicated laboratory set. The development system is based on a high-frequency RWD integrated with a Wi-Fi microcontroller unit to create an Internet of Things node connected to a server (for data exchange, user interface, etc.) via a wireless local area network. In practical applications, in order to increase the interrogation zone (IZ), there is a tendency to use one RWD with significant output power equipped with a multiplexer managing several antennas located in the operational space. Such a solution is often economically unprofitable and even impossible to implement, especially when a large IZ must be created. Responding to market demand, the authors propose a distributed system built from several cheap RFID reader modules and a few freely available hardware/software tools. They created a fully functional RFID platform and confirmed its usefulness in static and dynamic object-identification systems.
12

Nalajala, Anusha, T. Ragunathan, Ranesh Naha, and Sudheer Kumar Battula. "HRFP: Highly Relevant Frequent Patterns-Based Prefetching and Caching Algorithms for Distributed File Systems." Electronics 12, no. 5 (March 1, 2023): 1183. http://dx.doi.org/10.3390/electronics12051183.

Abstract:
Data-intensive applications are generating massive amounts of data which is stored on cloud computing platforms where distributed file systems are utilized for storage at the back end. Most users of those applications deployed on cloud computing systems read data more often than they write. Hence, enhancing the performance of read operations is an important research issue. Prefetching and caching are used as important techniques in the context of distributed file systems to improve the performance of read operations. In this research, we introduced a novel highly relevant frequent patterns (HRFP)-based algorithm that prefetches content from the distributed file system environment and stores it in the client-side caches that are present in the same environment. We have also introduced a new replacement policy and an efficient migration technique for moving the patterns from the main memory caches to the caches present in the solid-state devices based on a new metric namely the relevancy of the patterns. According to the simulation results, the proposed approach outperformed other algorithms that have been suggested in the literature by a minimum of 15% and a maximum of 53%.
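The relevancy-driven replacement policy can be illustrated with a small sketch; the scoring function and class names below are our placeholders, not HRFP's actual metric:

```python
class RelevancyCache:
    """Toy client-side cache that evicts the pattern with the lowest
    relevancy score first; the score itself is a placeholder here."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                    # pattern -> (relevancy, payload)

    def put(self, pattern, payload, relevancy):
        if pattern not in self.items and len(self.items) >= self.capacity:
            victim = min(self.items, key=lambda p: self.items[p][0])
            self.items.pop(victim)         # evict (or migrate to an SSD tier)
        self.items[pattern] = (relevancy, payload)

    def get(self, pattern):
        hit = self.items.get(pattern)
        return hit[1] if hit else None

cache = RelevancyCache(capacity=2)
cache.put("A,B,C", b"prefetched blocks", relevancy=0.9)
cache.put("D,E", b"prefetched blocks", relevancy=0.4)
cache.put("F,G", b"prefetched blocks", relevancy=0.7)  # evicts "D,E"
print(sorted(cache.items))                              # ['A,B,C', 'F,G']
```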
13

Jayakumar, N., and A. M. Kulkarni. "A Simple Measuring Model for Evaluating the Performance of Small Block Size Accesses in Lustre File System." Engineering, Technology & Applied Science Research 7, no. 6 (December 18, 2017): 2313–18. http://dx.doi.org/10.48084/etasr.1557.

Abstract:
Storage performance is one of the vital characteristics of a big data environment. Data throughput can be increased to some extent using storage virtualization and parallel data paths. Technology has enhanced various SANs and storage topologies to be adaptable for diverse applications that improve end-to-end performance. In big data environments the most used file systems are HDFS (Hadoop Distributed File System) and Lustre. There are environments in which both HDFS and Lustre are connected, and the applications work directly on Lustre. In the Lustre architecture with an out-of-band storage virtualization system, the separation of the data path from the metadata path is acceptable (and even desirable) for large files, since one MDT (Metadata Target) open RPC is typically a small fraction of the total number of read or write RPCs. This hurts small-file performance significantly when there is only a single read or write RPC for the file data. Since applications require data for processing, and considering in-situ architectures that bring data or metadata close to applications, how in-situ processing can be exploited in Lustre is the domain of this work. Earlier research exploited Lustre's support for in-situ processing when Hadoop/MapReduce is integrated with Lustre, but scope for performance improvement remained. The aim of this research is to check whether it is feasible and beneficial to move small files to the MDT so that additional RPCs and I/O overhead can be eliminated and the read/write performance of the Lustre file system can be improved.
14

Fornari, Federico, Alessandro Cavalli, Daniele Cesini, Antonio Falabella, Enrico Fattibene, Lucia Morganti, Andrea Prosperini, and Vladimir Sapunenko. "Distributed file systems performance tests on Kubernetes/Docker clusters." Journal of Physics: Conference Series 2438, no. 1 (February 1, 2023): 012030. http://dx.doi.org/10.1088/1742-6596/2438/1/012030.

Abstract:
Modern data centers need distributed file systems to provide user applications with access to data stored on a large number of nodes. The ability to mount a distributed file system and leverage its native application programming interfaces in a Docker container, combined with the advanced orchestration features provided by Kubernetes, can improve flexibility in installing, monitoring, and recovering data management and transfer services. At INFN-CNAF, deployment tests of several distributed file systems (i.e., IBM Spectrum Scale, CephFS, and Lustre-ZFS) with Kubernetes and Docker have recently been conducted with positive results. The purpose of this paper is to show the throughput scores of the previously mentioned file systems when their servers are containerized and run on bare metal machines using a container orchestration framework. This is a preliminary study: for the time being, only sequential read/write tests have been considered.
15

Chen, Wei, Songping Yu, and Zhiying Wang. "Fast In-Memory Key–Value Cache System with RDMA." Journal of Circuits, Systems and Computers 28, no. 05 (May 2019): 1950074. http://dx.doi.org/10.1142/s0218126619500749.

Abstract:
The quick advances of Cloud and the advent of Fog computing impose more and more critical demand for low-latency computing and data transfer onto the underlying distributed computing infrastructure. Remote direct memory access (RDMA) technology has been widely applied for its low latency of remote data access. However, RDMA gives rise to a host of challenges in accelerating in-memory key–value stores, such as direct remote memory writes, which make the remote system more vulnerable. This study presents an in-memory key–value system based on RDMA, named Craftscached, which enables: (1) buffering remote memory writes into a communication cache memory to eliminate direct remote memory writes to the data memory area; (2) dividing the communication cache memory into RDMA-writable and RDMA-readable memory zones to reduce the possibility of data corruption due to stray memory writes, and caching data into an RDMA-readable memory zone to improve remote memory read performance; and (3) adopting remote out-of-place direct memory write to achieve high performance of remote reads and writes. Experimental results in comparison with Memcached indicate that Craftscached provides far better performance: (1) in the case of read-intensive workloads, the data access of Craftscached is about 7–43× and 18–72.4% better than those of TCP/IP-based and RDMA-based Memcached, respectively; (2) the memory utilization of small objects is more efficient, with only about 3.8% memory compaction overhead.
16

Fukuda, Hiroaki, Ryota Gunji, Tadahiro Hasegawa, Paul Leger, and Ismael Figueroa. "DSSM: Distributed Streaming Data Sharing Manager." Sensors 21, no. 4 (February 14, 2021): 1344. http://dx.doi.org/10.3390/s21041344.

Abstract:
Developing robot control software is difficult because of a wide variety of requirements, including hardware systems and sensors, even as demand for robots keeps growing. Middleware systems such as the Robot Operating System (ROS) have been developed and are widely used to tackle this difficulty. The Streaming data Sharing Manager (SSM) is one such middleware system; it allows developers to write and read sensor data with timestamps on a personal computer (PC). The timestamp feature is essential for robot control because a robot usually uses multiple sensors, each with its own measurement cycle, and sensor values measured at different timestamps are useless for control. Using SSM allows developers to use sensor values measured at the same timestamps; however, SSM assumes that only one PC is used. Consequently, if one process consumes CPU resources intensively, other processes cannot meet their deadlines, leading to unexpected robot behavior. This paper proposes an SSM middleware, named Distributed Streaming data Sharing Manager (DSSM), that enables SSM processes to be distributed across different PCs. We have developed a prototype of DSSM and confirmed its behavior. In addition, we apply DSSM to an existing SSM-based robot control system that autonomously controls an unmanned vehicle robot, and we reveal its advantages and disadvantages through several experiments measuring resource usage.
17

Wang, Peng, and Yan Qi. "Research of Load Balancing Based on NOSQL Database." Applied Mechanics and Materials 602-605 (August 2014): 3371–74. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3371.

Abstract:
NoSQL databases support high-concurrency reads and writes, scalability, and high availability, and have been applied widely in distributed storage systems. This paper studies load balancing in distributed storage systems and proposes a consistent hashing algorithm with a virtual-node strategy in order to improve the load balance of the system and increase the cache hit ratio. The load-balancing behavior of NoSQL and SQL Server is analyzed and compared on experimental data. The results show that, as the number of virtual nodes increases, the cache hit ratio of the NoSQL system exceeds that of SQL Server.
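Consistent hashing with virtual nodes, the core of the proposed strategy, fits in a few lines of generic code (an illustration of the standard technique, not the paper's implementation):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Each physical node is hashed to `vnodes` points on a ring;
    a key is served by the first virtual node clockwise from its hash."""

    def __init__(self, nodes, vnodes=100):
        self.ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        i = bisect.bisect(self.keys, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["n1", "n2", "n3"], vnodes=200)
print(ring.node_for("session:abc"))
```

More virtual nodes smooth the load distribution across physical nodes, which is exactly the effect the paper measures through the cache hit ratio.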
18

Gorbenko, Anatoliy, Andrii Karpenko, and Olga Tarasyuk. "Performance evaluation of various deployment scenarios of the 3-replicated Cassandra NoSQL cluster on AWS." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 4 (November 29, 2021): 157–65. http://dx.doi.org/10.32620/reks.2021.4.13.

Abstract:
Distributed replicated NoSQL data stores such as Cassandra, HBase, and MongoDB have been proposed to effectively manage big data sets whose volume, velocity, and variability are difficult to handle with traditional relational database management systems. Trade-offs between consistency, availability, partition tolerance, and latency are intrinsic to such systems. Although the relations between these properties have been identified qualitatively by the well-known CAP and PACELC theorems, it is still necessary to quantify how different consistency settings, deployment patterns, and other properties affect system performance. This experience report analyzes the performance of a Cassandra NoSQL cluster and studies the trade-off between data consistency guarantees and performance in distributed data stores. The primary focus is the quantitative interplay between Cassandra's response time, throughput, and consistency settings, considering different single- and multi-region deployment scenarios. The study uses the YCSB benchmarking framework and reports the results of read and write performance tests of a three-replicated Cassandra cluster deployed on Amazon AWS. We also put forward a notation that can formally describe a distributed Cassandra deployment and the placement of its nodes relative to each other and to a client application. We present quantitative results showing how different consistency settings and deployment patterns affect Cassandra's performance under different workloads. In particular, our experiments show that strong consistency costs up to 22% of performance for a centralized Cassandra deployment and can cause a 600% increase in read/write request latency if Cassandra replicas and their clients are globally distributed across different AWS Regions.
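The consistency settings being tuned here follow Cassandra's standard quorum rule: with replication factor N, a read quorum R and a write quorum W overlap — and thus guarantee that a read observes the latest write — exactly when R + W > N. A minimal sketch:

```python
def is_strongly_consistent(n, r, w):
    """Cassandra-style tunable consistency: overlapping read/write
    quorums (R + W > N) guarantee a read sees the latest write."""
    return r + w > n

N = 3  # replication factor, as in the three-replicated cluster of the paper
for r, w in [(1, 1), (2, 2), (1, 3)]:
    print(f"R={r}, W={w}: strong={is_strongly_consistent(N, r, w)}")
# R=1,W=1 -> eventual consistency (fast); QUORUM/QUORUM or ONE/ALL -> strong
```

The latency cost reported in the paper is the price of assembling those larger, possibly cross-region quorums on every request.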
19

Chu, Kai-Chun, Kuo-Chi Chang, Hsiao-Chuan Wang, Yuh-Chung Lin, and Tsui-Lien Hsu. "Field-Programmable Gate Array-Based Hardware Design of Optical Fiber Transducer Integrated Platform." Journal of Nanoelectronics and Optoelectronics 15, no. 5 (May 1, 2020): 663–71. http://dx.doi.org/10.1166/jno.2020.2835.

Abstract:
This study focuses on the hardware architecture of a Raman-scattering distributed optical fiber transducer platform. The principles of Raman scattering are analyzed, and the two analog output electrical signals are converted to 16-bit digital signals by an analog-to-digital converter (ADC). The system is implemented on an FPGA, whose integrated circuit is responsible for controlling the data acquisition process. The differential amplifier circuit, the FPGA peripheral circuit, and the CPU subsystem circuit, which is built around an ARM core, are separately designed. The software comprises a DDR2 (Double Data Rate 2) driver and central control logic. The optical fiber transducer platform has been tested: the CPU's DDR2 is read and written by the test program, and the program passes the read/write test. The NAND flash is also tested, and the results show that all operations return successfully. Timing tests of the DDR2 interface and data latching are conducted; the results show that the read/write operations keep the clock and data curves aligned. The optical fiber transducer integrated platform designed in this study is therefore effective.
20

Patra, Prashant Kumar, and Padma Lochan Pradhan. "Distributed HPC UFS ACM Optimizing the Risk for All the Time on Every Time." International Journal of Advanced Pervasive and Ubiquitous Computing 6, no. 3 (July 2014): 15–34. http://dx.doi.org/10.4018/ijapuc.2014070102.

Abstract:
The access control mechanism is one of the most advanced always-on controls in recent pervasive computing for protecting data and services from hackers, theft, and unauthorized users. This paper contributes an optimization model that aims to determine the optimal cost of implementing DOOS security mechanisms on the measured components of UFS attributes. The objective is to design the Read, Write, and Execute permissions so that they automatically protect web services on DOOS. We simplify, unify, and normalize step by step by implementing a UFS ACM mechanism based on a distributed object-oriented system on an N-dimensional hypercube model. Finally, we aim to maximize the quality of services and minimize the cost and time of business, resources, and technology. Subjects and objects communicate through read, write, and execute operations over a UFS on an N-dimensional HPC system. We apply these ACM utilities to anti-fragile technology to make it robust and highly secure at all times. The objective is to resolve the unstable, uncertain, un-ordered, unsafe, and un-set-up (U^4) problems of complex technology at the right time and place anywhere in the globe, taking care of accountability, actionability, and manageability. The mechanism is also accountable for performance, fault tolerance, throughput, benchmarking, and risk optimization for any web service.
21

Su, Jing, and Xiao Jing Li. "Production Plan of Information Management System for Virtual Industry." Advanced Materials Research 765-767 (September 2013): 1271–74. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1271.

Abstract:
Information management is a crucial mission for a virtual industry in such a competitive market environment. The typical characteristics of this information management are distribution, autonomy, and cooperation. Based on an on-going ESPRIT project (X-CITTIC), the author presents a distributed information management architecture for production planning and control in a virtual enterprise of semiconductor manufacturing. Object technologies are widely used in its design and implementation. A detailed structure of the components in the architecture, called information managers, is also suggested and introduced. Each information manager has three elements: a data object server, a database, and a group of meta-objects. The information management system provides not only basic services (e.g., read and write) but also advanced services (e.g., notification, security control, subscription, and data sending). Finally, the present X-CITTIC information management system is introduced in detail.
22

Ahmed, Jaafar, Andrii Karpenko, Olga Tarasyuk, Anatoliy Gorbenko, and Akbar Sheikh-Akbari. "Consistency issue and related trade-offs in distributed replicated systems and databases: a review." Radioelectronic and Computer Systems, no. 2 (May 25, 2023): 171–79. http://dx.doi.org/10.32620/reks.2023.2.14.

Abstract:
Distributed replicated databases play a crucial role in modern computer systems, enabling scalable, fault-tolerant, and high-performance data management. However, achieving these qualities requires resolving a number of trade-offs between various properties during system design and operation. This paper reviews trade-offs in distributed replicated databases and provides a survey of recent research papers studying distributed data storage. The paper first discusses the compromise between consistency and latency that appears in distributed replicated data stores and follows directly from the CAP and PACELC theorems. Consistency refers to the guarantee that all clients in a distributed system observe the same data at the same time. To ensure strong consistency, distributed systems typically employ coordination mechanisms and synchronization protocols that involve communication and agreement among distributed replicas. These mechanisms introduce additional overhead and latency and can dramatically increase the time taken to complete operations when replicas are globally distributed across the Internet. In addition, we study trade-offs between other system properties, including availability, durability, cost, energy consumption, and read and write latency. The paper also provides a comprehensive review and classification of recent research in distributed replicated databases. The reviewed papers showcase several major areas of research, ranging from performance evaluation and comparison of various NoSQL databases to suggesting new strategies for data replication and putting forward new consistency models. In particular, we observed a shift towards exploring hybrid models of causal consistency and eventual consistency with causal ordering, due to their ability to strike a balance between operation-ordering guarantees and high performance. Researchers have also proposed various consistency control algorithms and consensus quorum protocols to coordinate distributed replicas. Insights from this review can empower practitioners to make informed decisions in designing and managing distributed data storage systems, as well as help identify gaps in the body of knowledge and suggest further research directions.
23

Ivanov, Ievgen, Mykola Nikitchenko, and Uri Abraham. "Event-Based Proof of the Mutual Exclusion Property of Peterson’s Algorithm." Formalized Mathematics 23, no. 4 (December 1, 2015): 325–31. http://dx.doi.org/10.1515/forma-2015-0026.

Abstract:
Proving properties of distributed algorithms is still a highly challenging problem, and the various approaches that have been proposed to tackle it [1] can be roughly divided into state-based and event-based proofs. Informally speaking, state-based approaches define the behavior of a distributed algorithm as a set of sequences of memory states during its executions, while event-based approaches treat the behaviors by means of events which are produced by the executions of an algorithm. Of course, combined approaches are also possible. Analysis of the literature [1], [7], [12], [9], [13], [14], [15] shows that state-based approaches are more widely used than event-based approaches for proving properties of algorithms, and the difficulties in the event-based approach are often emphasized. We believe, however, that there is a certain naturalness and intuitive content in event-based proofs of correctness of distributed algorithms that makes this approach worthwhile. Besides, state-based proofs of correctness of distributed algorithms are usually applicable only to discrete-time models of distributed systems and cannot be easily adapted to the continuous-time case, which is important in the domain of cyber-physical systems. On the other hand, event-based proofs can be readily applied to continuous-time / hybrid models of distributed systems. In the paper [2] we presented a compositional approach to reasoning about the behavior of distributed systems in terms of events. Compositionality here means (informally) that the semantics and properties of a program are determined by the semantics of processes and process communication mechanisms. We demonstrated the proposed approach on a proof of the mutual exclusion property of Peterson's algorithm [11]. We have also demonstrated an application of this approach for proving the mutual exclusion property in the setting of continuous-time models of cyber-physical systems in [8]. In this paper we give a formal proof of the mutual exclusion property of Peterson's algorithm in Mizar [3] on the basis of the event-based approach proposed in [2]. Firstly, we define an event-based model of a shared-memory distributed system as a multi-sorted algebraic structure in which the sorts are events, processes, locations (i.e. addresses in the shared memory), and traces (of the system). The operations of this structure include a binary precedence relation ⩽ on the set of events which turns it into a linear preorder (events are considered simultaneous if e1 ⩽ e2 and e2 ⩽ e1), special predicates which check whether an event occurs in a given process or trace, predicates which check whether an event causes the system to read from or write to a given memory location, and a special partial function "val of" on events which gives the value associated with a memory read or write event (i.e. the value which is written or read in this event) [2]. Then we define several natural consistency requirements (axioms) for this structure which must hold in every distributed system, e.g. each event occurs in some process, etc. (details are given in [2]). After this we formulate and prove the main theorem about the mutual exclusion property of Peterson's algorithm in an arbitrary consistent algebraic structure of events.
Informally, the main theorem states that if a system consists of two processes, and in some trace there occur two events e1 and e2 in different processes, and each of these events is preceded by a series of three special events (in the same process) guaranteed by execution of Peterson's algorithm (setting the flag of the current process, writing the identifier of the opposite process to the "turn" shared variable, and reading zero from the flag of the opposite process or reading the identifier of the current process from the "turn" variable), and moreover, if neither process writes to the flag of the opposite process or writes its own identifier to the "turn" variable, then either the events e1 and e2 coincide, or they are not simultaneous (the mutual exclusion property).
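For reference, the algorithm being verified is short; below is the standard two-process form of Peterson's algorithm as a runnable Python sketch (a textbook rendering, not the Mizar formalization; it assumes sequentially consistent memory, which CPython's GIL approximates well enough for this demo — on weak hardware memory models, fences would be required):

```python
import threading

# Shared state of the standard two-process Peterson's algorithm.
flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # index of the process that must yield
counter = 0             # shared resource protected by the algorithm

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(10_000):
        flag[i] = True                       # 1) announce intent
        turn = other                         # 2) give priority to the other process
        while flag[other] and turn == other:
            pass                             # 3) busy-wait while the other has priority
        counter += 1                         # critical section
        flag[i] = False                      # exit protocol

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: no increment is lost, illustrating mutual exclusion
```

The three "special events" named in the theorem correspond exactly to steps 1–3 in the entry protocol above.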
24

Zhou, Bao, Junsan Zhao, Guoping Chen, and Ying Yin. "Research on Secure Storage Technology of Spatiotemporal Big Data Based on Blockchain." Applied Sciences 13, no. 13 (July 6, 2023): 7911. http://dx.doi.org/10.3390/app13137911.

Abstract:
With the popularity of spatiotemporal big data applications, more and more sensitive data are generated by users, and the sharing and secure storage of spatiotemporal big data are faced with many challenges. In response to these challenges, the present paper puts forward a new technology called CSSoB (Classified Secure Storage Technology over Blockchain) that leverages blockchain technology to enable classified secure storage of spatiotemporal big data. This paper introduces a twofold approach to tackle challenges associated with spatiotemporal big data. First, the paper proposes a strategy to fragment and distribute space–time big data while enabling both encryption and nonencryption operations based on different data types. The sharing of sensitive data is enabled via smart contract technology. Second, CSSoB’s single-node storage performance was assessed under local and local area network (LAN) conditions, and results indicate that the read performance of CSSoB surpasses its write performance. In addition, read and write performance were observed to increase significantly as the file size increased. Finally, the transactions per second (TPS) of CSSoB and the Hadoop Distributed File System (HDFS) were compared under varying thread numbers. In particular, when the thread number was set to 100, CSSoB demonstrated a TPS improvement of 7.8% in comparison with HDFS. Given the remarkable performance of CSSoB, its adoption can not only enhance storage performance, but also improve storage security to a great extent. Moreover, the fragmentation processing technology employed in this study enables secure storage and rapid data querying while greatly improving spatiotemporal data processing capabilities.
25

Stetsyk, Oleksii, and Svitlana Terenchuk. "Comparative Analysis of NoSQL Databases Architecture." Management of Development of Complex Systems, no. 47 (September 27, 2021): 78–82. http://dx.doi.org/10.32347/2412-9933.2021.47.78-82.

Abstract:
This article is devoted to problems arising from the growing scale of, and requirements for, modern high-load distributed systems. The relevance of the work stems from the fact that an important component of every such system is a database. The paper highlights the main problems associated with using relational databases in many high-load distributed systems. The main focus is on properties such as data consistency, availability, and system stability. Basic information about the architecture and purpose of wide-column, key-value, and document-oriented non-relational databases is provided. The advantages and disadvantages of the different types of non-relational databases are shown; these manifest themselves in different problems depending on the purpose and features of the system. The choice of non-relational databases of different types for the comparative analysis is substantiated. Databases such as Cassandra, Redis, and Mongo, which have long been used in high-load distributed systems and have proven themselves among users, are studied in detail. The main task addressed in this article is to answer the question of when it is appropriate to use non-relational databases with the architectures of Cassandra, Redis, or Mongo, depending on the characteristics of the system and how it reads or records information. Based on the analysis, options for using these databases in systems with a high number of requests to read or write information are proposed.
26

Hong, Feng, Jianquan Zhang, Shigui Qi, and Zheng Li. "PCM-2R: Accelerating MLC PCM Writes via Data Reshaping and Remapping." Mobile Information Systems 2022 (July 16, 2022): 1–19. http://dx.doi.org/10.1155/2022/9552517.

Abstract:
Multilevel cell (MLC) phase change memory (PCM) shows great potential in capacity and cost compared with single-level cell (SLC) PCM by storing multiple bits in one physical PCM cell. However, poor write performance is a huge challenge for MLC PCM: in general, the write latency of MLC PCM is 10 to 100X longer than that of DRAM. Considerable write latency greatly degrades overall system performance and restricts the application of MLC PCM. In practice, several chips compose a memory DIMM to match the wide interface of the data bus, and the data of a write request, i.e., a cache line block, are distributed to multiple PCM chips. As a result, the write service time is determined by the chip carrying the most data. Conventional PCM write schemes do not consider the modified-byte distribution among PCM chips; they simply wait for the completion of the chip with the most data. However, it is observed that (1) the conventional PCM write scheme suffers from an unbalanced modified-byte distribution, in which some PCM chips bear too many modified bytes while other chips are kept idle for long times; (2) the modified-byte distribution shows unique patterns, with some bytes changed more frequently than others; and (3) MLC PCM shows significant asymmetry when considering only MSB or LSB transitions. Based on these observations, in order to solve the poor write performance of PCM, this article presents a novel PCM write scheme called PCM-2R. The key ideas are to reshape the data to evenly distribute cache line blocks among all chips based on their modified-byte distribution pattern, avoiding unbalanced distribution, and then to remap modified bytes to a fast region after decoupling MLC PCM cells, exploiting the state-transition asymmetries. The evaluation results show that PCM-2R achieves a 51% read latency reduction, a 37% write latency reduction, a 1.9X IPC improvement, a 41% running time reduction, a 2.2X throughput improvement, and a 52% energy reduction compared with the baseline. Moreover, compared with state-of-the-art write schemes, PCM-2R achieves 0.2X more IPC improvement and 0.2X more throughput improvement.
27

Cheng, Audrey, Xiao Shi, Lu Pan, Anthony Simpson, Neil Wheaton, Shilpa Lawande, Nathan Bronson, Peter Bailis, Natacha Crooks, and Ion Stoica. "RAMP-TAO." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3014–27. http://dx.doi.org/10.14778/3476311.3476379.

Abstract:
Facebook's graph store TAO, like many other distributed data stores, traditionally prioritizes availability, efficiency, and scalability over strong consistency or isolation guarantees to serve its large, read-dominant workloads. As product developers build diverse applications on top of this system, they increasingly seek transactional semantics. However, providing advanced features for select applications while preserving the system's overall reliability and performance is a continual challenge. In this paper, we first characterize developer desires for transactions that have emerged over the years and describe the current failure-atomic (i.e., write) transactions offered by TAO. We then explore how to introduce an intuitive read transaction API. We highlight the need for atomic visibility guarantees in this API with a measurement study on potential anomalies that occur without stronger isolation for reads. Our analysis shows that 1 in 1,500 batched reads reflects partial transactional updates, which complicate the developer experience and lead to unexpected results. In response to our findings, we present the RAMP-TAO protocol, a variation based on the Read Atomic Multi-Partition (RAMP) protocols that can be feasibly deployed in production with minimal overhead while ensuring atomic visibility for a read-optimized workload at scale.
28

Shin, Dong-Jin, and Jeong-Joon Kim. "Cache-Based Matrix Technology for Efficient Write and Recovery in Erasure Coding Distributed File Systems." Symmetry 15, no. 4 (April 6, 2023): 872. http://dx.doi.org/10.3390/sym15040872.

Abstract:
With the development of various information and communication technologies, the amount of big data has increased, and distributed file systems have emerged to store them stably. The replication technique divides the original data into blocks and writes them on multiple servers for redundancy and fault tolerance. However, there is a symmetrical space efficiency problem that arises from the need to store blocks larger than the original data. When storing data, the Erasure Coding (EC) technique generates parity blocks through encoding calculations and writes them separately on each server for fault tolerance and data recovery purposes. Even if a specific server fails, original data can still be recovered through decoding calculations using the parity blocks stored on the remaining servers. However, matrices generated during encoding and decoding are redundantly generated during data writing and recovery, which leads to unnecessary overhead in distributed file systems. This paper proposes a cache-based matrix technique that uploads the matrices generated during encoding and decoding to cache memory and reuses them, rather than generating new matrices each time encoding or decoding occurs. The design of the cache memory applies the Weighting Size and Cost Replacement Policy (WSCRP) algorithm to efficiently upload and reuse matrices to cache memory using parameters known as weights and costs. Furthermore, the cache memory table can be managed efficiently because the weight–cost model sorts and updates matrices using specific parameters, which reduces replacement cost. The experiment utilized the Hadoop Distributed File System (HDFS) as the distributed file system, and the EC volume was composed of Reed–Solomon code with parameters (6, 3). As a result of the experiment, it was possible to reduce the write, read, and recovery times associated with encoding and decoding. In particular, for up to three node failures, systems using WSCRP were able to reduce recovery time by about 30 s compared to regular HDFS systems.
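The core optimization — reuse encoding/decoding matrices rather than regenerating them per operation — can be sketched with a memoized constructor. Two caveats: `lru_cache` below merely stands in for the WSCRP-managed cache table, and the toy Vandermonde matrix ignores the GF(2^8) arithmetic real Reed–Solomon codes use:

```python
from functools import lru_cache

@lru_cache(maxsize=64)            # stands in for the WSCRP-managed cache table
def vandermonde_matrix(k, m):
    """Illustrative (k+m) x k Vandermonde generator matrix for an RS-like code,
    built once and then served from cache on every write/recovery."""
    return tuple(tuple(i ** j for j in range(k)) for i in range(1, k + m + 1))

# First call computes the matrix; later writes/recoveries reuse it.
G1 = vandermonde_matrix(6, 3)     # RS(6, 3), as in the paper's HDFS EC volume
G2 = vandermonde_matrix(6, 3)
print(G1 is G2, vandermonde_matrix.cache_info().hits)  # True 1
```

What WSCRP adds on top of this plain memoization is the weight-cost model for deciding which matrices are worth keeping when the cache is full.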
29

Lee, Jun-Yeong, Moon-Hyun Kim, Syed Asif Raza Shah, Sang-Un Ahn, Heejun Yoon, and Seo-Young Noh. "Performance Evaluations of Distributed File Systems for Scientific Big Data in FUSE Environment." Electronics 10, no. 12 (June 18, 2021): 1471. http://dx.doi.org/10.3390/electronics10121471.

Abstract:
Data are important and ever growing in data-intensive scientific environments. Such research data growth requires data storage systems that play pivotal roles in data management and analysis for scientific discoveries. Redundant Array of Independent Disks (RAID), a well-known storage technology combining multiple disks into a single large logical volume, has been widely used for the purpose of data redundancy and performance improvement. However, this requires RAID-capable hardware or software to build up a RAID-enabled disk array. In addition, it is difficult to scale up the RAID-based storage. In order to mitigate such a problem, many distributed file systems have been developed and are being actively used in various environments, especially in data-intensive computing facilities, where a tremendous amount of data have to be handled. In this study, we investigated and benchmarked various distributed file systems, such as Ceph, GlusterFS, Lustre and EOS for data-intensive environments. In our experiment, we configured the distributed file systems under a Reliable Array of Independent Nodes (RAIN) structure and a Filesystem in Userspace (FUSE) environment. Our results identify the characteristics of each file system that affect the read and write performance depending on the features of data, which have to be considered in data-intensive computing environments.
30

Edwards, Nicholas Jain, David Tonny Brain, Stephen Carinna Joly, and Mariana Karry Masucato. "Hadoop distributed file system mechanism for processing of large datasets across computers cluster using programming techniques." International research journal of management, IT and social sciences 6, no. 6 (September 7, 2019): 1–16. http://dx.doi.org/10.21744/irjmis.v6n6.739.

Abstract:
In this paper, we show that HDFS I/O performance can be increased by introducing set associativity into the cache design and by changing the pipeline topology to a fully connected digraph network topology. For read operations, since there is a huge number of locations (words) in the cache compared with direct mapping, the miss ratio is very low, reducing the swapping of data between main memory and cache memory and thus increasing memory I/O performance. For write operations, instead of using the sequential pipeline, we construct a fully connected graph over the data blocks listed in the NameNode metadata. In a sequential pipeline, the data is first copied to the source node in the pipeline, which copies it to the next data block, and the same copy process continues until the last data block in the pipeline; the acknowledgment process must then follow the same path from the last block back to the source block. The time required to transfer the data to all the data blocks in the pipeline and to complete the acknowledgment process is almost 2n times the copy time between two data blocks (if the replication factor is n).
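The 2n claim is easy to model: in the sequential pipeline the block crosses n hops and the acknowledgments cross n hops back, whereas a fully connected fan-out pays roughly one hop each way. A back-of-the-envelope sketch (assuming every hop costs the same time t and ignoring source-uplink contention, which in practice limits fan-out):

```python
def sequential_pipeline_time(n, t=1.0):
    # Block travels node-to-node n times, then acks travel back n times.
    return 2 * n * t

def fanout_time(n, t=1.0):
    # Fully connected digraph: the source sends to all n replicas in
    # parallel and acks return in parallel (uplink contention ignored).
    return 2 * t

for n in (3, 5, 10):
    print(n, sequential_pipeline_time(n), fanout_time(n))
```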
31

Majdoubi, Anass, Abdellatif El Abderrahmani, and Rafik Lasri. "Smart environmental data management system into a cattle building." E3S Web of Conferences 234 (2021): 00033. http://dx.doi.org/10.1051/e3sconf/202123400033.

Abstract:
The climatic atmosphere in which cattle live is an essential parameter of their environment because of its critical role in their productivity. An adapted cattle building must help mitigate the effects of climatic stress and allow the farmer to properly control the climatic atmosphere during the production cycle. The most important factors influencing the climatic atmosphere inside a cattle building are temperature, humidity, and greenhouse gas emissions. We propose a case study of a wireless sensor network model placed on a cattle farm, in which each measurement node ("mote") collects environmental data (temperature, humidity, and gas emissions) in order to monitor the building's climate; this data is stored and managed in a remote database. We present HBase, a NoSQL database management system based on the concept of distributed storage: a column-oriented database that provides real-time read/write access to data on the Hadoop HDFS file system. The storage results presented in this paper are obtained via Java code that connects to the HBase database in order to store the data received every second from each node of the measurement system via HTTP requests.
32

Thalij, Saadi Hamad, and Veli Hakkoymaz. "Multiobjective Glowworm Swarm Optimization-Based Dynamic Replication Algorithm for Real-Time Distributed Databases." Scientific Programming 2018 (December 4, 2018): 1–16. http://dx.doi.org/10.1155/2018/2724692.

Abstract:
Distributed systems offer resources that can be accessed geographically to serve large-scale data requests from different users. In many cases, replicating vital data files and storing their replicas in multiple locations accessible to requesting clients is vital to improving data availability, reliability, and security and to reducing execution time. It is important that real-time distributed databases maintain consistency constraints and also guarantee the time constraints required by client requests. However, as the size of the distributed system increases, user access time also tends to increase, which in turn increases the importance of replica placement. Thus, the primary issues that emerge are deciding on an optimal replication number and identifying the best locations to store the replicated data. These open challenges are considered in this study, which develops a dynamic data replication algorithm for real-time distributed databases using a multiobjective glowworm swarm optimization (MGSO) strategy. The proposed algorithm adapts to the random patterns of read-write requests and employs a dynamic window mechanism for replication. It also models the replica number and placement problem as a multiobjective optimization problem and utilizes MGSO to resolve it. Cost models are presented to ensure time-constraint satisfaction in servicing user requests. The performance of the MGSO dynamic data replication algorithm has been studied using competitive analysis, and the results show the efficiency of the proposed algorithm for distributed databases.
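A toy version of the underlying optimization problem — choose replica locations minimizing a weighted read/write cost — shows the kind of search space MGSO explores heuristically. The linear "distance" metric, rates, and brute-force search below are our illustrative assumptions, not the paper's cost models:

```python
import itertools

def placement_cost(sites, replicas, read_rate=0.8, write_rate=0.2):
    """Toy objective: reads pay the distance to the nearest replica,
    writes pay the sum of distances to all replicas (update propagation)."""
    read_cost = sum(min(abs(s - r) for r in replicas) for s in sites)
    write_cost = sum(abs(s - r) for s in sites for r in replicas)
    return read_rate * read_cost + write_rate * write_cost

sites = range(10)    # client locations on a line (toy metric)
best = min(itertools.combinations(sites, 3),
           key=lambda rs: placement_cost(sites, rs))
print(best)          # a swarm heuristic searches this space instead of enumerating it
```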
33

Korneev, V. V. "Routing in a Communication Fabric of a Computing System with Distributed Shared Memory and Synchronization based on FE-Bits." Programmnaya Ingeneria 13, no. 10 (December 14, 2022): 471–82. http://dx.doi.org/10.17587/prin.13.471-482.

Abstract:
This article discusses an architecture based on the paradigm of exploiting all available process parallelism. The user should only specify which calculations can be performed in parallel threads over shared memory, conforming only to the selected algorithm. This makes it possible to create the maximum flow of memory accesses inherent in the algorithm. When a thread must read a shared memory cell and only then write a new value back to it, the user relies on a conflict-resolution mechanism implemented by hardware memory-access control. In general, the proposed architecture is aimed at solving the same problems as the EMU and PIUMA architectures, but it uses "smart" controllers of shared memory blocks to synchronize threads and implement atomic operations. A large flow of accesses to distributed shared memory requires energy-efficient routing. This paper proposes arithmetic routing, which is applicable to any communication fabric, including Dragonfly graphs and graphs with the minimum possible average path length for a given number of vertices N and vertex degree v. An addressing and routing algorithm is proposed that provides energy-efficient access to distributed shared memory. The routing enables fault-tolerant operation based on the choice of alternative routes.
APA, Harvard, Vancouver, ISO, and other styles
34

Ünver, Mahmut, Atilla Ergüzen, and Erdal Erdal. "Design of a DFS to Manage Big Data in Distance Education Environments." JUCS - Journal of Universal Computer Science 28, no. 2 (February 28, 2022): 202–24. http://dx.doi.org/10.3897/jucs.69069.

Full text
Abstract:
Information technologies have invaded every aspect of our lives. Distance education was also affected by this shift and became an accepted model of education. The evolution of education onto digital platforms has also brought unexpected problems, such as the increase in internet usage and the need for new software and devices that can connect to the Internet. Perhaps the most important of these problems is the management of the large amounts of data generated when all training activities are conducted remotely. Over the past decade, studies have provided important information about the quality of training and the benefits of distance learning. However, Big Data in distance education has been studied only to a limited extent, and to date no clear single solution has been found. In this study, a Distributed File System (DFS) is proposed and implemented to manage big data in distance education. The implemented ecosystem mainly comprises Dynamic Link Library (DLL) components, Windows service routines, and distributed data nodes. The DLL code is required to connect the Learning Management System (LMS) with the developed system. 67.72% of the files in the distance education system have a small file size (<=16 MB) and 53.10% of the files are smaller than 1 MB. Therefore, a dedicated Big Data management platform was needed to manage and archive small files, and the proposed system was designed with a dynamic block structure to address this shortcoming. A serverless architecture was chosen and implemented to make the platform more robust. Moreover, the developed platform also provides compression and encryption features. According to system statistics, each written file was read 8.47 times, and for video archive files this value was 20.95. In this way, a framework was developed following the Write Once Read Many architecture. A comprehensive performance analysis was conducted against the operating system, NoSQL, RDBMS and Hadoop. For file sizes of 1 MB and 50 MB, the developed system achieves response times of 0.95 ms and 22.35 ms, respectively, while Hadoop, a popular DFS, requires 4.01 ms and 47.88 ms.
APA, Harvard, Vancouver, ISO, and other styles
35

Noor, Ahmad Shukri Mohd, Nur Farhah Mat Zian, Noor Hafhizah Abd Rahim, Rabiei Mamat, and Wan Nur Amira Wan Azman. "Novelty circular neighboring technique using reactive fault tolerance method." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 6 (December 1, 2019): 5211. http://dx.doi.org/10.11591/ijece.v9i6.pp5211-5217.

Full text
Abstract:
The availability of data in a distributed system can be increased by implementing a fault tolerance mechanism. Reactive fault tolerance methods deal with restarting failed services, placing redundant copies of data on multiple nodes across the network (in other words, data replication), and migrating data for recovery. Even though the idea of data replication is solid, the challenge is to choose the right replication technique, one able to provide better data availability as well as consistency for the read and write operations on the redundant copies. The Circular Neighboring Replication (CNR) technique exploits a neighboring policy when replicating data items and performs well, requiring fewer copies to keep system availability at its highest. In a performance analysis against existing techniques, results show that CNR improves system availability by an average of 37% while needing only two replicas to maintain data availability and consistency. The study demonstrates the feasibility of the proposed technique and its potential for deployment in larger and more complex environments.
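A neighboring placement policy of this kind can be sketched in a few lines; the clockwise-neighbor rule below is an assumption standing in for CNR's precise policy, which the paper defines:

```python
def cnr_placement(data_items, nodes):
    """Each item keeps exactly two copies: a primary node and its circular
    (clockwise) neighbour -- a sketch of the neighboring policy, not CNR itself."""
    placement = {}
    n = len(nodes)
    for k, item in enumerate(data_items):
        primary = k % n
        neighbour = (primary + 1) % n   # wrap around the ring of nodes
        placement[item] = [nodes[primary], nodes[neighbour]]
    return placement

# Example: four nodes, two replicas per item, load spread around the ring.
print(cnr_placement(["a", "b", "c"], ["n0", "n1", "n2", "n3"]))
```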
APA, Harvard, Vancouver, ISO, and other styles
36

Costa, Thiago Bulhões da Silva, Lucas Shinoda, Ramon Alfredo Moreno, Jose E. Krieger, and Marco Gutierrez. "Blockchain-Based Architecture Design for Personal Health Record: Development and Usability Study." Journal of Medical Internet Research 24, no. 4 (April 13, 2022): e35013. http://dx.doi.org/10.2196/35013.

Full text
Abstract:
Background: The importance of blockchain-based architectures for personal health records (PHR) lies in the fact that they are designed to allow patients to control and at least partly collect their health data. Ideally, these systems should give the owner full control of such data. In spite of this importance, most works focus more on describing how blockchain models can be used in a PHR scenario than on whether these models are in fact feasible and robust enough to support a large number of users. Objective: To achieve a consistent, reproducible, and comparable PHR system, we build a novel ledger-oriented architecture out of a permissioned distributed network, providing patients with a way to securely collect, store, share, and manage their health data. We also emphasize the importance of suitable ledgers and smart contracts for operating the blockchain network, and discuss the necessity of standardizing evaluation metrics to compare related (net)works. Methods: We adopted the Hyperledger Fabric platform to implement our blockchain-based architecture design and the Hyperledger Caliper framework to provide a detailed assessment of our system: first under workload, ranging from 100 to 2500 simultaneous record submissions, and second while increasing the network size from 3 to 13 peers. In both experiments, we used throughput and average latency as the primary metrics. We also created a health database, a cryptographic unit, and a server to complement the blockchain network. Results: With a 3-peer network, smart contracts that write on the ledger have throughputs, measured in transactions per second (tps), in an order of magnitude close to 10² tps, while contracts that only read have rates close to 10³ tps. Smart contracts that write also have latencies, measured in seconds, in an order of magnitude close to 10¹ seconds, while those that only read have delays close to 10⁰ seconds. In particular, smart contracts that retrieve, list, and view history have throughputs varying, respectively, from 1100 tps to 1300 tps, 650 tps to 750 tps, and 850 tps to 950 tps, impacting the overall system response if they are equally requested under the same workload. Varying the network size under an equal fixed load, writing throughputs drop from 10² tps to 10¹ tps and latencies grow from 10¹ seconds to 10² seconds, while reading values remain similar. Conclusions: To the best of our knowledge, we are the first to evaluate, using Hyperledger Caliper, the performance of a PHR blockchain architecture and the first to evaluate each smart contract separately. Nevertheless, blockchain systems achieve performance far below what traditional distributed databases achieve, indicating that the assessment of blockchain solutions for PHR is a major concern to be addressed before putting them into real production.
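The throughput and latency metrics used above can be reproduced with a simple closed-loop harness; this is a simplification of Caliper-style measurement (Caliper drives an open-loop, rate-controlled load), and `submit_tx` is a hypothetical function that sends one transaction and blocks until it commits:

```python
import time

def benchmark(submit_tx, n_tx):
    """Measure throughput (tps) and average latency (s) over n_tx transactions."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_tx):
        t0 = time.perf_counter()
        submit_tx()                                  # one blocking transaction
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {"throughput_tps": n_tx / elapsed,
            "avg_latency_s": sum(latencies) / n_tx}
```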
APA, Harvard, Vancouver, ISO, and other styles
37

Nur Afifah, Rizqi, Tuti Simanullang, and R. Madhakomala. "VARIOUS ADVANTAGES IN EDUCATION." International Journal of Business, Law, and Education 3, no. 3 (July 1, 2022): 145–56. http://dx.doi.org/10.56442/ijble.v3i3.65.

Full text
Abstract:
Education is the main factor that lets any country or nation excel in global competition; it is also a profitable investment in the future. Various improvements to the education system have been implemented and are close to achieving their goals. A quality education system rests on the spirit of the teachers who deliver it: teachers are the key to quality education and the most important factor in the teaching and learning process. This study provides an overview of various problems, advantages and disadvantages of education, and of the differences and similarities between Indonesia's education system and those of foreign countries. A literature review is the method used in this research. The results show that the advantages of education in Indonesia are (1) a diverse education system; (2) a transparent education system; (3) a curriculum prepared directly by experts. Its weaknesses are (1) unevenly distributed educators; (2) unevenly distributed educational facilities; (3) a curriculum that is still theoretical. Education in Indonesia places more emphasis on learning to read, write, and count in early childhood; learning time is very full, and assignments are always given to take home. Abroad, early childhood education places more emphasis on playing and interacting to explore the environment: students study in class only about 30-40% of the time and spend the rest playing and interacting with their friends, with no assignments or homework. The similarity is the focus on sharpening students' character and intelligence and increasing the practical abilities they will need in the future.
APA, Harvard, Vancouver, ISO, and other styles
38

Dwivedi, Sanjeev Kumar, Ruhul Amin, Jegatha Deborah Lazarus, and Vijayakumar Pandi. "Blockchain-Based Electronic Medical Records System with Smart Contract and Consensus Algorithm in Cloud Environment." Security and Communication Networks 2022 (September 15, 2022): 1–10. http://dx.doi.org/10.1155/2022/4645585.

Full text
Abstract:
The blockchain is a peer-to-peer distributed ledger technology that works on the principle of "write once, read only." In a blockchain, pieces of information are arranged in blocks, and these blocks are linked together using the hash value of the previous block. Blocks in a blockchain are append-only, which means that once information is stored in a block it cannot be changed; no one can tamper with the block's content. Traditional electronic medical record (EMR) systems store patients' information in a local database or server, which centralizes the information, and traditional EMRs are centered on the health providers. Security and sharing of patients' information are therefore difficult tasks in a traditional EMR system. The blockchain mechanism has the potential to resolve these existing problems: due to the append-only-ledger principle and the decentralization of blocks among the network participants, blockchain technology is well suited to an EMR system. In this article, we first discuss the existing EMR systems and their drawbacks. Keeping these drawbacks in mind, we propose a blockchain-based medical record system that utilizes cloud technology for storage. Furthermore, we design a smart contract and a consensus algorithm for the proposed EMR system. Our system uses a permissioned blockchain model so that only verified and authenticated users can generate data and participate in the data-sharing system.
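The hash-linking that makes the ledger append-only can be shown in a few lines; this is a generic toy chain, not the proposed EMR contract:

```python
import hashlib, json, time

def new_block(prev_hash, records):
    """Each block carries the hash of its predecessor, so altering any earlier
    block invalidates every later hash."""
    block = {"time": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = new_block("0" * 64, ["genesis"])
b1 = new_block(genesis["hash"], ["patient A: prescription"])
b2 = new_block(b1["hash"], ["patient A: lab result"])
```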
APA, Harvard, Vancouver, ISO, and other styles
39

Miao, Xupeng, Hailin Zhang, Yining Shi, Xiaonan Nie, Zhi Yang, Yangyu Tao, and Bin Cui. "HET." Proceedings of the VLDB Endowment 15, no. 2 (October 2021): 312–20. http://dx.doi.org/10.14778/3489496.3489511.

Full text
Abstract:
Embedding models have been an effective learning paradigm for high-dimensional data. However, one open issue of embedding models is that their representations (latent factors) often result in a large parameter space. We observe that existing distributed training frameworks face a scalability issue with embedding models, since updating and retrieving the shared embedding parameters from servers usually dominates the training cycle. In this paper, we propose HET, a new system framework that significantly improves the scalability of huge embedding model training. We embrace skewed popularity distributions of embeddings as a performance opportunity and leverage it to address the communication bottleneck with an embedding cache. To ensure consistency across the caches, we incorporate a new consistency model into the HET design, which provides fine-grained consistency guarantees on a per-embedding basis. Compared to previous work that only allows staleness for read operations, HET also utilizes staleness for write operations. Evaluations on six representative tasks show that HET achieves up to 88% embedding communication reductions and up to 20.68× performance speedup over the state-of-the-art baselines.
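The per-embedding staleness idea can be sketched as a bounded cache; the class below is a rough illustration, assuming a hypothetical parameter-server interface (`pull`, `clock`) and a simple update counter, none of which come from the HET paper:

```python
class EmbeddingCache:
    """Toy cache: a row may be read and updated locally until its version lags
    the server by more than `bound` updates, then it is refreshed."""
    def __init__(self, server, bound=8):
        self.server, self.bound = server, bound
        self.rows = {}                        # key -> [vector, local_version]

    def lookup(self, key):
        row = self.rows.get(key)
        if row is None or self.server.clock(key) - row[1] > self.bound:
            # Too stale: pull the fresh embedding row from the server.
            self.rows[key] = row = [self.server.pull(key), self.server.clock(key)]
        return row[0]

    def update(self, key, grad, lr=0.01):
        vec = self.lookup(key)
        # Buffered local write (staleness for writes); flushed to the server lazily.
        self.rows[key][0] = [w - lr * g for w, g in zip(vec, grad)]
        self.rows[key][1] += 1
```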
APA, Harvard, Vancouver, ISO, and other styles
40

Saha, Sujan, and Sukumar Mandal. "Application of tools to support Linked Open Data." Library Hi Tech News 38, no. 6 (October 18, 2021): 21–24. http://dx.doi.org/10.1108/lhtn-09-2021-0060.

Full text
Abstract:
Purpose: These projects aim to improve library services for users in the future by combining Linked Open Data (LOD) technology with data visualization, displaying and analysing search results in an intuitive manner. These services are enhanced by integrating various LOD technologies into the authority control system. Design/methodology/approach: LOD technology is used to access, recycle, share, exchange and disseminate information, among other things. The applicability of Linked Data technologies for the development of library information services is evaluated in this study. Findings: Apache Hadoop is used for rapidly storing and processing massive Linked Data sets. Apache Spark is a free and open-source data processing tool. Hive is a SQL-based data warehouse that enables data scientists to write, read and manage petabytes of data. Originality/value: Apache HBase, a distributed large-data storage system, does not use SQL. This study's goal is to search the geographic, authority and bibliographic databases for relevant links found on various websites. When data items are linked together, all of the data bits are linked together as well. The study observed and evaluated the tools and processes and recorded each data item's URL. As a result, data can be combined across silos, enhanced by third-party data sources and contextualized.
APA, Harvard, Vancouver, ISO, and other styles
41

Patra, Prashant Kumar, and Padma Lochan Pradhan. "Dynamic FCFS ACM Model for Risk Assessment on Real Time Unix File System." International Journal of Advanced Pervasive and Ubiquitous Computing 5, no. 4 (October 2013): 41–62. http://dx.doi.org/10.4018/ijapuc.2013100104.

Full text
Abstract:
Access control is the mechanism by which a system grants or revokes the right to access an object. Subjects and objects integrate, synchronize, communicate and are optimized through read, write and execute operations over a Unix file system (UFS). The access control mechanism mediates each and every request to the system resources, applications and data maintained by an operating system, determining whether the request should be approved, created, granted or denied according to top management policy. The access control mechanism, its management and its decisions are enforced by implementing regulations established by a security policy. Management has to investigate the basic concepts behind access control design and enforcement and point out the different security requirements that may need to be taken into consideration. The authors formulate and implement several access control mechanisms, normalizing and optimizing them step by step, as highlighted in the model proposed for development and production purposes. This research paper contributes an optimization model whose objective is to determine the optimal cost and time and to maximize the quality of service invested in the security model and mechanisms, deciding on the major components of the UFS. The model is applied to access control utilities over a web portal server in an object-oriented, distributed environment. This mechanism resolves the uncertain, unordered, unformalized and unconfigured (U^4) problems of a web portal at the right time and place, anywhere and at any time around the globe. It is thereby more measurable and accountable for performance, fault tolerance, throughput, benchmarking and risk assessment in any application.
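The read/write/execute mediation over a UFS that the abstract describes boils down to the classic Unix permission-bit check, sketched here as a minimal illustration (not the paper's FCFS ACM model):

```python
def may_access(mode, uid, gid, file_uid, file_gid, want):
    """Classic UFS rwx check: select the owner, group, or other bit triplet,
    then test each requested right ('r', 'w', 'x')."""
    shift = 6 if uid == file_uid else 3 if gid == file_gid else 0
    bits = (mode >> shift) & 0o7
    need = {"r": 4, "w": 2, "x": 1}
    return all(bits & need[c] for c in want)

# mode 0o640: the owner may read and write, the group may only read.
assert may_access(0o640, uid=1000, gid=100, file_uid=1000, file_gid=100, want="rw")
assert not may_access(0o640, uid=1001, gid=100, file_uid=1000, file_gid=100, want="w")
```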
APA, Harvard, Vancouver, ISO, and other styles
42

Widiyati, Dewi Nur, Risa Arroyyani, and Risa Arroyyani. "DIGGING UP NON-ENGLISH TEACHERS’ PERSPECTIVES ON THEIR STUDENTS’ ESP NEEDS." INOVISH JOURNAL 7, no. 2 (December 28, 2022): 146. http://dx.doi.org/10.35314/inovish.v7i2.2883.

Full text
Abstract:
This study aims to identify and explore non-English teachers' perspectives on their students' ESP needs. This study is a descriptive study with a qualitative approach. The participants in this study were 16 public health lecturers at STIKes Surya Global Yogyakarta. A semi-structured questionnaire was distributed to participants to gather information related to their needs. The study analyzes the data from closed questions using simple statistics and from open questions using interpretations. The results showed that the importance of learning English is to support students' academic fields and their communication skills. Reading is a basic skill that needs to be mastered. The students found it difficult to write English journal abstracts, practice oral communication, read, comprehend, and review academic sources in English. Teachers sometimes use English in the form of resources including books written in English and international journals and websites. Students need to learn English during their study period and after graduation. The topics needed to learn to support students' backgrounds were health technology, health business service management, web causation of disease, health insurance, abstract, environmental health, health information system, safety, health promotion, epidemiology triangle, iceberg phenomenon of disease, research design epidemiology, epidemic, and infectious diseases, wastewater treatment, hygiene, and sanitation. Keywords: Students' needs, ESP, public health students
APA, Harvard, Vancouver, ISO, and other styles
43

Mkrtchyan, Tigran, Olufemi Adeyemi, Patrick Fuhrmann, Vincent Garonne, Dmitry Litvintsev, Paul Millar, Albert Rossi, et al. "dCache - joining the noWORM storage club." EPJ Web of Conferences 214 (2019): 04048. http://dx.doi.org/10.1051/epjconf/201921404048.

Full text
Abstract:
For over a decade, dCache.ORG has provided robust software, called dCache, that is used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments and many other scientific communities. The flexible architecture of dCache allows it to run in a wide variety of configurations and platforms - from an all-in-one Raspberry Pi up to hundreds of nodes in multi-petabyte infrastructures. The life cycle of scientific data is well defined - collected, processed, archived and finally deleted when it is no longer needed. Moreover, during all those stages the data is never modified: either the original data is used, or new derived data is produced. With this knowledge, dCache was designed to handle immutable files as efficiently as possible. Data replication, HSM connectivity and data-server independent operations are only possible due to the immutable nature of stored data. Nowadays many commercial vendors provide such write-once-read-many (WORM) storage systems, which are in growing demand as audio, photo and video content on the web expands. On the other hand, by providing a standard NFSv4.1 interface, dCache is often used as a general-purpose file system, especially by new communities such as photon scientists and microbiologists. Although many users are aware of data immutability, some applications and use cases still require in-place updates of stored files. To satisfy these new requirements, some fundamental changes have to be applied to dCache's core design. However, new developments must not compromise any aspect of existing functionality. In this presentation we will show new developments that turn dCache into a regular file system. We will discuss the challenges of building a distributed storage system, 'life' with POSIX compliance, the handling of multiple replicas, and backward compatibility achieved by providing WORM and noWORM capabilities within the same storage system.
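The WORM discipline described above - create once, never update in place - can be sketched as a simple open-guard; this is a generic illustration of the policy, not dCache's implementation:

```python
import os

def worm_open(path, mode="r"):
    """Refuse to reopen an existing file for writing; new files may be created."""
    if any(m in mode for m in ("w", "a", "+")) and os.path.exists(path):
        raise PermissionError(f"{path} is write-once: in-place updates are not allowed")
    return open(path, mode)

# First write succeeds; any later write to the same path is rejected.
with worm_open("raw_event_data.dat", "wb") as f:
    f.write(b"detector readout")
```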
APA, Harvard, Vancouver, ISO, and other styles
44

Awad, Khaled M., Mustafa ElNainay, Mohammad Abdeen, Marwan Torki, Omar Saif, and Emad Nabil. "A Secure Blockchain Framework for Storing Historical Text: A Case Study of the Holy Hadith." Computers 11, no. 3 (March 14, 2022): 42. http://dx.doi.org/10.3390/computers11030042.

Full text
Abstract:
Historical texts are one of the main pillars for understanding current civilization and are used as references for many different purposes. Hadiths are an example of historical texts that should be securely preserved. With the expansion of online resources, the fabrication and alteration of fake Hadiths have become easy, so it is ever more challenging to authenticate the Hadith content available online and much harder to keep the authenticated results secure and unmanipulated. In this research, we use the capabilities of distributed blockchain technology to securely archive each Hadith and its level of authenticity in a blockchain. We selected a customized permissioned blockchain model in which the main entities approving the level of authenticity of a Hadith are well-established, specialized institutions in the main Islamic countries, each of which can apply its own Hadith validation model. The proposed solution guarantees integrity using the crowd wisdom represented by the selected nodes in the blockchain, which use voting algorithms to decide on the insertion of any new Hadith into the database. This technique secures data integrity at any given time: if any organization's credentials are compromised and used to update the data maliciously, approval from 50% + 1 of all network nodes is still required. In case of malicious or misguided information while reaching consensus, the system self-heals using practical Byzantine Fault Tolerance (pBFT). We evaluated the proposed framework's read/write performance and found it adequate for the operational requirements.
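The 50% + 1 majority rule described above is easy to state precisely; the snippet below is a minimal sketch with made-up organization names:

```python
def approved(votes, n_nodes):
    """An insertion commits only with 50% + 1 of *all* network nodes, so a
    single compromised organization cannot force a write on its own."""
    yes = sum(1 for v in votes.values() if v)
    return yes >= n_nodes // 2 + 1

# 7-node network: 4 approvals commit the new entry, 3 do not.
assert approved({f"org{i}": i < 4 for i in range(7)}, 7)
assert not approved({f"org{i}": i < 3 for i in range(7)}, 7)
```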
APA, Harvard, Vancouver, ISO, and other styles
45

Butt, Ghulam Qadar, Toqeer Ali Sayed, Rabia Riaz, Sanam Shahla Rizvi, and Anand Paul. "Secure Healthcare Record Sharing Mechanism with Blockchain." Applied Sciences 12, no. 5 (February 23, 2022): 2307. http://dx.doi.org/10.3390/app12052307.

Full text
Abstract:
The transfer of information is a demanding issue, particularly due to the presence of a large number of eavesdroppers on communication channels, and sharing medical service records between different clinical roles is a basic and challenging research topic. The particular characteristics of blockchains have attracted a large amount of attention and resulted in revolutionary changes to various business applications, including medical care. A blockchain is based on a distributed ledger, which tends to improve cyber security. A number of proposals have been made for sharing basic medical records using a blockchain without needing prior information about, or the trust of, patients. Specialist service providers and insurance agencies are not secure against data breaches. The safe sharing of clinical records between different countries, to ensure integrated and universal medical service, is also a significant issue for patients who travel. The medical data of patients normally reside in different healthcare units around the world, which raises many concerns. Firstly, a patient's history of treatment by different physicians is not accessible to a doctor in a single location. Secondly, it is very difficult to secure widespread data residing in different locations. This study proposes record sharing in a chain-like structure, in which every record is globally connected to the others, based on a blockchain following the suggestions and recommendations of the HL7 standards. The study focuses on making medical data available, especially for patients who travel between countries, for a specific period of time after the required authentication is validated. Authorization and authentication are performed on the Shibboleth identity management system, with the patient involved in the sanction process, so that patient data are revealed only for the specified period. The proposed approach improves performance with respect to other record sharing systems; for example, it reduces the time to read, write, delete, and revoke a record by a noticeable margin. The proposed system takes around three seconds to upload and 7.5 s to download 250 MB of data, which can contain up to sixteen documents, over a stable network connection. The system has a latency of 413.76 ms when retrieving 100 records, compared to 447.9 and 459.3 ms in previous systems. Thus, the proposed system improves performance and ensures seclusion by using a blockchain.
APA, Harvard, Vancouver, ISO, and other styles
46

Al-Masadeh, Mohammad Bahjat, Mohad Sanusi Azmi, and Sharifah Sakinah Syed Ahmad. "Tiny datablock in saving Hadoop distributed file system wasted memory." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 2 (April 1, 2023): 1757. http://dx.doi.org/10.11591/ijece.v13i2.pp1757-1772.

Full text
Abstract:
Hadoop distributed file system (HDFS) is the file system Hadoop uses to store all incoming data. Since its introduction, HDFS has consumed a huge amount of memory to serve a normal dataset, because the current file-saving mechanism in HDFS stores only one file per datablock. Thus, a file of just 5 MB will take up a whole datablock, leaving the rest of that block's capacity unavailable for other incoming files - a huge waste of memory when serving a normal-size dataset. This paper proposes a method called tiny datablock-HDFS (TD-HDFS) to increase the usability of HDFS memory and increase file-hosting capability by reducing the datablock size to a minimum capacity and then merging all related datablocks into one master datablock. This master datablock consists of tiny virtual datablocks that hold related small files together and exploit the full memory of the master datablock. The result of this study is a running HDFS with a minimal amount of wasted memory and the same read/write performance. The results were examined through a comparison between standard HDFS file hosting and the proposed solution.
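The core packing idea - many small files sharing one large block through virtual sub-blocks - can be sketched as follows; the class names, index layout, and 128 MB capacity are assumptions for illustration, not TD-HDFS's on-disk format:

```python
class MasterDatablock:
    """Pack small files into one master block as virtual (offset, length) sub-blocks."""
    def __init__(self, capacity=128 * 1024 * 1024):   # e.g. one 128 MB HDFS block
        self.capacity, self.buf, self.index = capacity, bytearray(), {}

    def add(self, name, data):
        if len(self.buf) + len(data) > self.capacity:
            raise IOError("master datablock full")
        self.index[name] = (len(self.buf), len(data))  # the virtual tiny datablock
        self.buf.extend(data)

    def read(self, name):
        off, length = self.index[name]
        return bytes(self.buf[off:off + length])

# Two 5 MB files now share one block instead of wasting two.
block = MasterDatablock()
block.add("lecture1.pdf", b"\x00" * (5 * 1024 * 1024))
block.add("lecture2.pdf", b"\x01" * (5 * 1024 * 1024))
```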
APA, Harvard, Vancouver, ISO, and other styles
47

WATANABE, KENICHI, YOUSUKE SUGIYAMA, TOMOYA ENOKIDO, and MAKOTO TAKIZAWA. "MODERATE CONCURRENCY CONTROL IN DISTRIBUTED OBJECT SYSTEMS." Journal of Interconnection Networks 05, no. 02 (June 2004): 181–91. http://dx.doi.org/10.1142/s021926590400109x.

Full text
Abstract:
In object-based systems, objects are manipulated concurrently only through methods issued by multiple transactions. We first extend the traditional read and write lock modes to methods on objects, and introduce availability and exclusion types of conflicting relations among methods. We then define a partially ordered relation on lock modes showing which modes are stronger than others, and propose a moderate concurrency control algorithm. Before an object is manipulated through a method, it is locked in a mode weaker than the intrinsic mode of the method; the lock mode is then escalated to the method mode. The weaker the initial mode is, the more concurrency is obtained, but the more frequently deadlock occurs.
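The acquire-weak-then-escalate pattern can be sketched with a two-mode lattice; the paper's method-level modes and conflict relations are richer than this simplified read/write table:

```python
CONFLICTS = {("read", "write"), ("write", "read"), ("write", "write")}

class ModerateLock:
    """Acquire a weaker mode first, escalate to the method's intrinsic mode
    only when the method actually runs (a sketch of moderate locking)."""
    def __init__(self):
        self.held = {}                       # transaction -> current mode

    def _compatible(self, txn, mode):
        return all((m, mode) not in CONFLICTS
                   for t, m in self.held.items() if t != txn)

    def acquire(self, txn):
        if not self._compatible(txn, "read"):
            return False                     # caller blocks or retries
        self.held[txn] = "read"              # weak initial lock: more concurrency
        return True

    def escalate(self, txn, method_mode):
        if not self._compatible(txn, method_mode):
            return False                     # competing upgrades: the deadlock point
        self.held[txn] = method_mode
        return True
```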
APA, Harvard, Vancouver, ISO, and other styles
48

Karpenko, Andrii, Olga Tarasyuk, and Anatoliy Gorbenko. "Research consistency and perfomance of nosql replicated databases." Advanced Information Systems 5, no. 3 (October 18, 2021): 66–75. http://dx.doi.org/10.20998/2522-9052.2021.3.09.

Full text
Abstract:
This paper evaluates the performance of distributed fault-tolerant computer systems and replicated NoSQL databases, and studies the impact of data consistency on performance and throughput using the example of a three-replica Cassandra cluster. The paper presents the results of heavy-load testing (benchmarking) of a Cassandra cluster's read and write performance, with replicas deployed on the Amazon EC2 cloud. The quantitative results show how different consistency settings affect the performance of a Cassandra cluster under different workloads, considering two deployment scenarios: when all cluster replicas are located in the same data center, and when they are geographically distributed across different data centers (i.e. Amazon availability zones). We propose a new method of minimizing Cassandra response time while ensuring strong data consistency, based on optimizing consistency settings depending on the current workload and the proportion of read to write operations.
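The consistency/latency trade-off being tuned here follows the standard quorum-overlap rule, which can be stated in one line:

```python
def is_strong(n_replicas, read_cl, write_cl):
    """Reads observe the latest write whenever read and write quorums overlap,
    i.e. R + W > N (the rule behind Cassandra consistency levels)."""
    return read_cl + write_cl > n_replicas

# Three replicas, as in the benchmarked cluster: QUORUM/QUORUM (2+2) is strong;
# ONE/ONE (1+1) trades consistency for the lower latency measured under load.
assert is_strong(3, 2, 2)
assert not is_strong(3, 1, 1)
```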
APA, Harvard, Vancouver, ISO, and other styles
49

An, Byoung Chul, and Hanul Sung. "Efficient I/O Merging Scheme for Distributed File Systems." Symmetry 15, no. 2 (February 5, 2023): 423. http://dx.doi.org/10.3390/sym15020423.

Full text
Abstract:
Decentralized file systems have recently been widely used to overcome centralized file systems' load asymmetry between nodes and their scalability problems. Lacking a metadata server, decentralized systems require more RPC requests to coordinate metadata processing between clients and servers, which degrades I/O performance and worsens traffic imbalance by increasing RPC latency. In this paper, we propose an efficient I/O scheme to reduce the RPC overhead in decentralized file systems. Instead of sending a single RPC request at a time, we enqueue RPCs in a global queue and merge them into larger RPC requests, thus avoiding excessive RPC latency overheads. The experimental results show that our scheme improves write and read performance by up to 13% and 16%, respectively, compared with the original scheme.
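The enqueue-and-merge idea can be sketched as a size-triggered batcher; the batch size, class names, and `send_batch` transport function are assumptions for illustration, not the paper's implementation:

```python
import threading

class RpcMerger:
    """Callers enqueue RPCs into a global queue; a full batch is shipped as one
    merged request, amortizing per-RPC latency over many operations."""
    def __init__(self, send_batch, max_batch=32):
        self.send_batch = send_batch          # hypothetical batched transport
        self.max_batch = max_batch
        self.queue, self.lock = [], threading.Lock()

    def submit(self, rpc):
        with self.lock:
            self.queue.append(rpc)
            if len(self.queue) < self.max_batch:
                return                        # keep accumulating
            batch, self.queue = self.queue, []
        self.send_batch(batch)                # one round trip instead of many
```

A production version would also flush on a timer so a partially filled batch never waits indefinitely.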
APA, Harvard, Vancouver, ISO, and other styles
50

A.Shihata, Reham. "Formal Specification for Implementing Atomic Read/Write Shared Memory in Mobile Ad Hoc Networks Using the Mobile Unity." International Journal of Computer Science and Information Technology 14, no. 03 (June 30, 2022): 1–17. http://dx.doi.org/10.5121/ijcsit.2022.14301.

Full text
Abstract:
This paper revisits the Geoquorum approach for implementing atomic read/write shared memory in mobile ad hoc networks, a classic problem in distributed computing, in the new setting provided by emerging mobile computing technology. A simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal requirements and design strategies to the new field of mobile computing. The approach of this paper is based on well understood techniques in specification refinement, but the methodology is tailored to mobile applications and helps designers address novel concerns such as logical mobility, invocations, and specific condition constructs. The proof logic and programming notation of Mobile UNITY provide the intellectual tools required to carry out this task. Quorum systems are also investigated in highly mobile networks in order to reduce the communication cost associated with each distributed operation.
APA, Harvard, Vancouver, ISO, and other styles