Journal articles on the topic "In-memory key-value store"

To see the other types of publications on this topic, follow the link: In-memory key-value store.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.


Below are the top 50 journal articles for your research on the topic "In-memory key-value store".

Next to each work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Cai, Tao, Qingjian He, Dejiao Niu, Fuli Chen, Jie Wang, and Lei Li. "A New Embedded Key–Value Store for NVM Device Simulator." Micromachines 11, no. 12 (December 2, 2020): 1075. http://dx.doi.org/10.3390/mi11121075.

Abstract:
The non-volatile memory (NVM) device is a useful way to overcome the memory wall in computers. However, the current I/O software stack in operating systems becomes a performance bottleneck for applications based on NVM devices, especially for key–value stores. We analyzed the characteristics of key–value stores and NVM devices and designed PMEKV, a new embedded key–value store for an NVM device simulator. The embedded processor in the NVM device is used to manage key–value pairs, reducing data transfer between NVM devices and key–value applications. It also cuts down data copying between user space and kernel space in the operating system, alleviating the impact of the I/O software stack on the efficiency of key–value stores. The architecture, data layout, management strategy, new interface, and log strategy of PMEKV are given. Finally, a prototype of PMEKV was implemented based on PMEM. We used YCSB to test it and compare it with Redis, MongoDB, and Memcached. The Redis for PMEM, named PMEM-Redis, and PMEM-KV were also tested and compared with PMEKV. The results show that PMEKV has an advantage in throughput and adaptability over current key–value stores.
2

Kusuma, Mandahadi. "Metode Optimasi Memcached sebagai NoSQL Key-value Memory Cache." JISKA (Jurnal Informatika Sunan Kalijaga) 3, no. 3 (August 30, 2019): 14. http://dx.doi.org/10.14421/jiska.2019.33-02.

Abstract:
Memcached is an application used to store client query results from the web in server memory as temporary storage (a cache). The goal is for the web application to remain responsive even under heavy access. Memcached uses key-value pairs and the LRU (Least Recently Used) algorithm to store data. In its default configuration Memcached can handle web-based applications properly, but in practice, where the volume of transferred data and cached objects swells to thousands or millions of items, optimization steps are needed so that the Memcached service stays optimal, avoids input/output (I/O) overhead, and keeps latency low. In this paper, we review some of the latest research on Memcached optimization. Methods that can be used include memory partitioning, GPU-based hashing, User Datagram Protocol (UDP) transmission, solid-state-drive hybrid memory, and Memcached over the Hadoop Distributed File System (HDFS).
Keywords: memcached, optimization, web-app, overhead, latency
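The key-value-plus-LRU behaviour described above can be sketched with Python's `OrderedDict`; the class and method names below are illustrative, not Memcached's actual protocol or API.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal key-value cache with least-recently-used eviction,
    in the spirit of Memcached's LRU policy (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None              # cache miss
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a", so "b" becomes the LRU entry
cache.set("c", 3)  # exceeds capacity: evicts "b"
```

The optimizations surveyed in the paper (memory partitioning, GPU hashing, and so on) all sit underneath this same get/set interface.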
3

Zhang, Yiming, Dongsheng Li, Chuanxiong Guo, Haitao Wu, Yongqiang Xiong, and Xicheng Lu. "CubicRing: Exploiting Network Proximity for Distributed In-Memory Key-Value Store." IEEE/ACM Transactions on Networking 25, no. 4 (August 2017): 2040–53. http://dx.doi.org/10.1109/tnet.2017.2669215.

4

Wei, Xingda, Rong Chen, Haibo Chen, and Binyu Zang. "XStore: Fast RDMA-Based Ordered Key-Value Store Using Remote Learned Cache." ACM Transactions on Storage 17, no. 3 (August 31, 2021): 1–32. http://dx.doi.org/10.1145/3468520.

Abstract:
RDMA (Remote Direct Memory Access) has gained considerable interest in network-attached in-memory key-value stores. However, traversing the remote tree-based index in ordered key-value stores with RDMA becomes a critical obstacle, causing an order-of-magnitude slowdown and limited scalability due to multiple round trips. Using an index cache with conventional wisdom (caching partial data and traversing it locally) usually has limited effect because of unavoidable capacity misses, massive random accesses, and costly cache invalidations. We argue that a machine learning (ML) model is a perfect cache structure for the tree-based index, termed a learned cache. Based on it, we design and implement XStore, an RDMA-based ordered key-value store with a new hybrid architecture that retains a tree-based index at the server to handle dynamic workloads (e.g., inserts) and leverages a learned cache at the client to handle static workloads (e.g., gets and scans). The key idea is to decouple ML model retraining from index updating by maintaining a layer of indirection from logical to actual positions of key-value pairs. This allows a stale learned cache to continue predicting a correct position for a lookup key. XStore ensures correctness using a validation mechanism with a fallback path and further uses speculative execution to minimize the cost of cache misses. Evaluations with YCSB benchmarks and production workloads show that a single XStore server can achieve over 80 million read-only requests per second. This outperforms state-of-the-art RDMA-based ordered key-value stores (namely, DrTM-Tree, Cell, and eRPC+Masstree) by up to 5.9× (from 3.7×). For workloads with inserts, XStore still provides up to 3.5× (from 2.7×) throughput speedup, achieving 53M reqs/s. The learned cache also reduces client-side memory usage and provides an efficient memory-performance tradeoff, e.g., saving 99% memory at the cost of 20% peak throughput.
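The learned-cache idea can be illustrated in a few lines: fit a model from keys to positions in a sorted array, remember its worst-case error, and replace a full index traversal with a search bounded by that error. The toy below uses a linear model over integer keys and elides XStore's logical-to-physical indirection; all names are illustrative.

```python
import bisect

class LearnedCache:
    """Toy 'learned cache': a linear model predicts the position of a key
    in a sorted array; a search bounded by the model's worst-case error
    replaces a full tree traversal."""

    def __init__(self, keys):
        self.keys = keys              # sorted list of int keys
        n = len(keys)
        mean_k = sum(keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in keys)
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
        self.a = cov / var if var else 0.0  # least-squares slope
        self.b = mean_p - self.a * mean_k
        # worst-case error of the rounded prediction bounds the search window
        self.err = max(abs(self._guess(k) - i) for i, k in enumerate(keys))

    def _guess(self, key):
        return int(round(self.a * key + self.b))

    def lookup(self, key):
        guess = self._guess(key)
        lo = max(0, guess - self.err)
        hi = min(len(self.keys), guess + self.err + 1)
        # bounded local search corrects the prediction
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return None

keys = [3, 10, 11, 25, 40, 41, 57, 70, 88, 100]
cache = LearnedCache(keys)
cache.lookup(25)  # -> 3
```

In XStore the analogous model lives at the client and predicts positions on the server, so a lookup costs one bounded RDMA read instead of one round trip per tree level.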
5

Ma, Wenlong, Yuqing Zhu, Cheng Li, Mengying Guo, and Yungang Bao. "BiloKey: A Scalable Bi-Index Locality-Aware In-Memory Key-Value Store." IEEE Transactions on Parallel and Distributed Systems 30, no. 7 (July 1, 2019): 1528–40. http://dx.doi.org/10.1109/tpds.2019.2891599.

6

Zhang, Baoquan, and David H. C. Du. "NVLSM: A Persistent Memory Key-Value Store Using Log-Structured Merge Tree with Accumulative Compaction." ACM Transactions on Storage 17, no. 3 (August 31, 2021): 1–26. http://dx.doi.org/10.1145/3453300.

Abstract:
Computer systems utilizing byte-addressable Non-Volatile Memory (NVM) as memory/storage can provide low-latency data persistence. The widely used key-value stores based on the Log-Structured Merge Tree (LSM-Tree) remain beneficial for NVM systems in terms of space and write efficiency. However, the significant write amplification introduced by the leveled compaction of the LSM-Tree degrades the write performance of the key-value store and shortens the lifetime of NVM devices. Existing studies propose new compaction methods to reduce write amplification; unfortunately, they result in relatively large read amplification. In this article, we propose NVLSM, a key-value store for NVM systems using an LSM-Tree with new accumulative compaction. By fully utilizing the byte-addressability of NVM, accumulative compaction uses pointers to accumulate data into multiple floors in a logically sorted run, reducing the number of compactions required. We also propose a cascading search scheme for reads among the multiple floors to reduce read amplification. NVLSM therefore reduces write amplification with only small increases in read amplification. We compare NVLSM with key-value stores using the LSM-Tree with two other compaction methods: leveled compaction and fragmented compaction. Our evaluations show that NVLSM reduces write amplification by up to 67% compared with an LSM-Tree using leveled compaction, without significantly increasing read amplification. In write-intensive workloads, NVLSM reduces average latency by 15.73%–41.2% compared to other key-value stores.
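A toy LSM-Tree helps make "write amplification under compaction" concrete: every compaction rewrites previously written entries, so total writes exceed the number of entries inserted. The class below is an illustrative in-memory sketch, not NVLSM's design; the memtable limit and the compaction trigger are arbitrary.

```python
class TinyLSM:
    """Toy LSM-Tree: a mutable memtable plus immutable sorted runs.
    Compaction merges all runs into one, illustrating the data
    rewriting behind write amplification."""

    def __init__(self, memtable_limit=4):
        self.memtable_limit = memtable_limit
        self.memtable = {}
        self.runs = []      # newest run first
        self.writes = 0     # entries written to "storage", counting rewrites

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        run = sorted(self.memtable.items())
        self.writes += len(run)
        self.runs.insert(0, run)
        self.memtable = {}
        if len(self.runs) > 2:
            self._compact()

    def _compact(self):
        merged = {}
        for run in reversed(self.runs):   # older first, newer overwrites
            merged.update(run)
        level = sorted(merged.items())
        self.writes += len(level)         # rewrite = write amplification
        self.runs = [level]

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:
            d = dict(run)
            if key in d:
                return d[key]
        return None
```

Accumulative compaction, as the abstract describes it, avoids part of this rewriting by linking data into floors with pointers instead of merging whole runs.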
7

Ha, Minjong, and Sang-Hoon Kim. "InK: In-Kernel Key-Value Storage with Persistent Memory." Electronics 9, no. 11 (November 13, 2020): 1913. http://dx.doi.org/10.3390/electronics9111913.

Abstract:
Block-based storage devices exhibit different characteristics from main memory, and applications and systems have long been optimized with those characteristics in mind. However, emerging non-volatile memory technologies are about to change the situation. Persistent Memory (PM) provides a huge, persistent, and byte-addressable address space to the system, enabling new opportunities for systems software. Existing applications, however, usually utilize PM indirectly, as a storage device on top of file systems. This makes applications and file systems perform unnecessary operations and amplifies I/O traffic, under-utilizing the high performance of PM. In this paper, we make the case for an in-kernel key-value storage service optimized for PM, called InK. While providing data persistence at high performance, InK takes the characteristics of PM into account to guarantee crash consistency. To this end, InK indexes key-value pairs with a B+ tree, which is more efficient on PM. We implemented InK in the Linux kernel and evaluated its performance with the Yahoo Cloud Serving Benchmark (YCSB) and RocksDB. Evaluation results confirm that InK has advantages over LSM-tree-based key-value store systems in terms of throughput and tail latency.
8

Han, Youil, and Eunji Lee. "CRAST: Crash-resilient data management for a key-value store in persistent memory." IEICE Electronics Express 15, no. 23 (2018): 20180919. http://dx.doi.org/10.1587/elex.15.20180919.

9

Zhang, Kai, Kaibo Wang, Yuan Yuan, Lei Guo, Rubao Li, Xiaodong Zhang, Bingsheng He, Jiayu Hu, and Bei Hua. "A distributed in-memory key-value store system on heterogeneous CPU–GPU cluster." VLDB Journal 26, no. 5 (August 21, 2017): 729–50. http://dx.doi.org/10.1007/s00778-017-0479-0.

10

Kim, Jungwon, and Jeffrey S. Vetter. "Implementing efficient data compression and encryption in a persistent key-value store for HPC." International Journal of High Performance Computing Applications 33, no. 6 (May 23, 2019): 1098–112. http://dx.doi.org/10.1177/1094342019847264.

Abstract:
Recently, persistent data structures, like key-value stores (KVSs), which are stored in a high-performance computing (HPC) system’s nonvolatile memory, provide an attractive solution for a number of emerging challenges like limited I/O performance. Data compression and encryption are two well-known techniques for improving several properties of such data-oriented systems. This article investigates how to efficiently integrate data compression and encryption into persistent KVSs for HPC with the ultimate goal of hiding their costs and complexity in terms of performance and ease of use. Our compression technique exploits deep memory hierarchy in an HPC system to achieve both storage reduction and performance improvement. Our encryption technique provides a practical level of security and enables sharing of sensitive data securely in complex scientific workflows with nearly imperceptible cost. We implement the proposed techniques on top of a distributed embedded KVS to evaluate the benefits and costs of incorporating these capabilities along different points in the dataflow path, illustrating differences in effective bandwidth, latency, and additional computational expense on Swiss National Supercomputing Centre’s Grand Tavé and National Energy Research Scientific Computing Center’s Cori.
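The dataflow the authors describe (compress, then encrypt, on the way into the store) can be sketched as below. zlib stands in for whichever codec the system uses, and the SHA-256 keystream "cipher" is a placeholder only, not a secure construction; the class and method names are invented for illustration.

```python
import zlib
import hashlib

def _keystream(secret: bytes, n: int) -> bytes:
    # Placeholder keystream (NOT secure): stretch the secret with
    # SHA-256 in counter mode, purely to show where ciphering sits.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

class CompressedEncryptedKVS:
    """Toy KVS that compresses, then 'encrypts', values on put and
    reverses both stages on get. Illustrates stage ordering only."""

    def __init__(self, secret: bytes):
        self.secret = secret
        self.store = {}

    def put(self, key: str, value: bytes):
        compressed = zlib.compress(value)  # shrink before ciphering
        ks = _keystream(self.secret, len(compressed))
        self.store[key] = bytes(a ^ b for a, b in zip(compressed, ks))

    def get(self, key: str) -> bytes:
        blob = self.store[key]
        ks = _keystream(self.secret, len(blob))
        return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))
```

Compressing before encrypting matters: ciphertext is effectively incompressible, so reversing the order would forfeit the storage reduction the paper is after.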
11

Ding, Chen, Jiguang Wan, and Rui Yan. "HybridKV: An Efficient Key-Value Store with HybridTree Index Structure Based on Non-Volatile Memory." Journal of Physics: Conference Series 2025, no. 1 (September 1, 2021): 012093. http://dx.doi.org/10.1088/1742-6596/2025/1/012093.

Abstract:
Non-Volatile Memory (NVM) is a new type of storage medium offering non-volatility, higher storage density, and better performance and concurrency. Persistent key-value stores designed for earlier storage devices, using the Log-Structured Merge Tree (LSM-Tree), have serious read/write amplification problems and do not take full advantage of these new devices. Existing NVM index structures are mostly based on the Radix-Tree or the B+-Tree; Radix-Tree-based index structures perform better but take up more space. In this paper, we present a new index structure on NVM named HybridTree, which combines the characteristics of the Radix-Tree and the B+-Tree. The upper layer is composed of prefix index nodes similar to a Radix-Tree, which use the key prefix to speed up data location and provide multi-thread support. The lower layer consists of variable-length adaptive B+-Tree nodes organizing key-value data to reduce the space waste caused by node sparseness. We evaluate HybridTree on a real NVM device (Intel Optane DC Persistent Memory). Evaluation results show that HybridTree's random write performance is 1.2x to 1.62x that of Fast & Fair and 1.11x to 1.52x that of NV-Tree, while reducing space usage by 54% compared to WORT. We further integrate HybridTree into LevelDB to build HybridKV, a high-performance key-value store. By storing HybridTree directly on NVM, the read and write amplification of the LSM-Tree is avoided. We evaluate HybridKV on a hybrid DRAM/NVM system; according to the results, HybridKV improves random write performance by 7.5x compared to LevelDB and 3.23x compared to RocksDB. In addition, the random read performance of HybridKV is 7x that of NoveLSM.
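A rough software analogue of the two-layer idea, with a dict keyed by the key's first byte standing in for the Radix-Tree prefix layer and sorted arrays standing in for B+-Tree leaf nodes (names and granularity are invented for illustration):

```python
import bisect

class HybridIndex:
    """Toy two-layer index in the spirit of HybridTree: a radix-like
    upper layer keyed by the key's first byte, over sorted (key, value)
    leaf arrays standing in for B+-Tree nodes."""

    def __init__(self):
        self.upper = {}  # prefix byte -> sorted list of (key, value)

    def insert(self, key: bytes, value):
        leaf = self.upper.setdefault(key[:1], [])
        i = bisect.bisect_left(leaf, (key,))
        if i < len(leaf) and leaf[i][0] == key:
            leaf[i] = (key, value)        # update in place
        else:
            leaf.insert(i, (key, value))  # keep the leaf sorted

    def lookup(self, key: bytes):
        leaf = self.upper.get(key[:1])    # prefix layer narrows the search
        if not leaf:
            return None
        i = bisect.bisect_left(leaf, (key,))
        if i < len(leaf) and leaf[i][0] == key:
            return leaf[i][1]
        return None
```

The prefix layer narrows every lookup to one leaf region, which is the same locality argument HybridTree makes for its prefix index nodes.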
12

Puranik, Sunil, Mahesh Barve, Swapnil Rodi, and Rajendra Patrikar. "FPGA-Based High-Throughput Key-Value Store Using Hashing and B-Tree for Securities Trading System." Electronics 12, no. 1 (December 30, 2022): 183. http://dx.doi.org/10.3390/electronics12010183.

Abstract:
Field-Programmable Gate Array (FPGA) technology is extensively used in finance. This paper describes a high-throughput key-value store (KVS) for securities trading system applications using an FPGA. The design uses a combination of hashing and B-Tree techniques and supports a large number of keys (40 million) as required by the trading system. We use a novel technique of buckets of different capacities to reduce the amount of Block RAM (BRAM) and perform high-speed lookup. The design uses high-bandwidth memory (HBM), an on-chip memory available in Virtex UltraScale+ FPGAs, to support the large number of keys. Another feature of this design is the replication of the database and lookup logic to increase overall throughput. By implementing multiple lookup engines in parallel and replicating the database, we achieved high throughput (up to 6.32 million search operations/second) as specified by our client, a major stock exchange. The design was implemented with a combination of Verilog and high-level synthesis (HLS) flow to reduce implementation time.
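The bucket scheme can be imitated in software: keys hash into fixed-capacity buckets, and entries that do not fit fall back to a second tier that stands in for the paper's B-Tree path. The class, the capacities, and the fallback dict are all illustrative assumptions, not the FPGA design itself.

```python
import hashlib

class BucketHashKVS:
    """Toy hash table with fixed-capacity buckets and an overflow tier
    standing in for the B-Tree side of a hash-plus-tree design."""

    def __init__(self, n_buckets=256, capacity=4):
        self.n_buckets = n_buckets
        self.capacity = capacity
        self.buckets = [[] for _ in range(n_buckets)]
        self.overflow = {}  # stand-in for the slower overflow structure

    def _bucket(self, key: str):
        h = hashlib.sha256(key.encode()).digest()
        return self.buckets[int.from_bytes(h[:4], "big") % self.n_buckets]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # update in place
                return
        if key in self.overflow:
            self.overflow[key] = value
            return
        if len(bucket) < self.capacity:
            bucket.append((key, value))
        else:
            self.overflow[key] = value    # bucket full: fall back

    def get(self, key):
        for k, v in self._bucket(key):    # fast path: fixed-size bucket scan
            if k == key:
                return v
        return self.overflow.get(key)     # slow path: overflow tier
```

In hardware the fast path is what fits in BRAM; sizing buckets differently trades BRAM for how often the slow path is taken.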
13

Qiu, Yunhui, Jinyu Xie, Hankun Lv, Wenbo Yin, Wai-Shing Luk, Lingli Wang, Bowei Yu, et al. "FULL-KV: Flexible and Ultra-Low-Latency In-Memory Key-Value Store System Design on CPU-FPGA." IEEE Transactions on Parallel and Distributed Systems 31, no. 8 (August 1, 2020): 1828–444. http://dx.doi.org/10.1109/tpds.2020.2973965.

14

Kulkarni, Chinmay, Badrish Chandramouli, and Ryan Stutsman. "Achieving high throughput and elasticity in a larger-than-memory store." Proceedings of the VLDB Endowment 14, no. 8 (April 2021): 1427–40. http://dx.doi.org/10.14778/3457390.3457406.

Abstract:
Millions of sensors, mobile applications and machines now generate billions of events. Specialized many-core key-value stores (KVSs) can ingest and index these events at high rates (over 100 Mops/s on one machine) if events are generated on the same machine; however, to be practical and cost-effective they must ingest events over the network and scale across cloud resources elastically. We present Shadowfax, a new distributed KVS based on FASTER, that transparently spans DRAM, SSDs, and cloud blob storage while serving 130 Mops/s/VM over commodity Azure VMs using conventional Linux TCP. Beyond high single-VM performance, Shadowfax uses a unique approach to distributed reconfiguration that avoids any server-side key ownership checks or cross-core coordination both during normal operation and migration. Hence, Shadowfax can shift load in 17 s to improve system throughput by 10 Mops/s with little disruption. Compared to the state-of-the-art, it has 8x better throughput (than Seastar+memcached) and avoids costly I/O to move cold data during migration. On 12 machines, Shadowfax retains its high throughput to perform 930 Mops/s, which, to the best of our knowledge, is the highest reported throughput for a distributed KVS used for large-scale data ingestion and indexing.
15

Papagiannis, Anastasios, Giorgos Saloustros, Giorgos Xanthakis, Giorgos Kalaentzis, Pilar Gonzalez-Ferez, and Angelos Bilas. "Kreon." ACM Transactions on Storage 17, no. 1 (February 2, 2021): 1–32. http://dx.doi.org/10.1145/3418414.

Abstract:
Persistent key-value stores have emerged as a main component in the data access path of modern data processing systems. However, they exhibit high CPU and I/O overhead. Nowadays, due to power limitations, it is important to reduce CPU overheads for data processing. In this article, we propose Kreon, a key-value store that targets servers with flash-based storage, where CPU overhead and I/O amplification are more significant bottlenecks compared to I/O randomness. We first observe that two significant sources of overhead in key-value stores are: (a) the use of compaction in Log-Structured Merge-Trees (LSM-Tree) that constantly perform merging and sorting of large data segments and (b) the use of an I/O cache to access devices, which incurs overhead even for data that reside in memory. To avoid these, Kreon performs data movement from level to level by using partial reorganization instead of full data reorganization via the use of a full index per level. Kreon uses memory-mapped I/O via a custom kernel path to avoid a user-space cache. For a large dataset, Kreon reduces CPU cycles/op by up to 5.8×, reduces I/O amplification for inserts by up to 4.61×, and increases insert ops/s by up to 5.3×, compared to RocksDB.
16

Abed Abud, Adam, Danilo Cicalese, Grzegorz Jereczek, Fabrice Le Goff, Giovanna Lehmann Miotto, Jeremy Love, Maciej Maciejewski, et al. "Let’s get our hands dirty: a comprehensive evaluation of DAQDB, key-value store for petascale hot storage." EPJ Web of Conferences 245 (2020): 10004. http://dx.doi.org/10.1051/epjconf/202024510004.

Abstract:
Data acquisition systems are a key component for successful data taking in any experiment. The DAQ is a complex distributed computing system and coordinates all operations, from the data selection stage of interesting events to storage elements. For the High Luminosity upgrade of the Large Hadron Collider, the experiments at CERN need to meet challenging requirements to record data with a much higher occupancy in the detectors. The DAQ system will receive and deliver data with a significantly increased trigger rate, one million events per second, and capacity, terabytes of data per second. An effective way to meet these requirements is to decouple real-time data acquisition from event selection. Data fragments can be temporarily stored in a large distributed key-value store. Fragments belonging to the same event can be then queried on demand, by the data selection processes. Implementing such a model relies on a proper combination of emerging technologies, such as persistent memory, NVMe SSDs, scalable networking, and data structures, as well as high performance, scalable software. In this paper, we present DAQDB (Data Acquisition Database) — an open source implementation of this design that was presented earlier, with an extensive evaluation of this approach, from the single node to the distributed performance. Furthermore, we complement our study with a description of the challenges faced and the lessons learned while integrating DAQDB with the existing software framework of the ATLAS experiment.
17

Zhang, Qizhen, Philip A. Bernstein, Daniel S. Berger, and Badrish Chandramouli. "Redy." Proceedings of the VLDB Endowment 15, no. 4 (December 2021): 766–79. http://dx.doi.org/10.14778/3503585.3503587.

Abstract:
Redy is a cloud service that provides high performance caches using RDMA-accessible remote memory. An application can customize the performance of each cache with a service level objective (SLO) for latency and throughput. By using remote memory, it can leverage stranded memory and spot VM instances to reduce the cost of its caches and improve data center resource utilization. Redy automatically customizes the resource configuration for the given SLO, handles the dynamics of remote memory regions, and recovers from failures. The experimental evaluation shows that Redy can deliver its promised performance and robustness under remote memory dynamics in the cloud. We augment a production key-value store, FASTER, with a Redy cache. When the working set exceeds local memory, using Redy is significantly faster than spilling to SSDs.
18

Al-Allawee, Ali, Pascal Lorenz, Abdelhafid Abouaissa, and Mosleh Abualhaj. "A Performance Evaluation of In-Memory Databases Operations in Session Initiation Protocol." Network 3, no. 1 (December 28, 2022): 1–14. http://dx.doi.org/10.3390/network3010001.

Abstract:
Real-time communication has seen a dramatic increase in daily usage in recent years. In this domain, the Session Initiation Protocol (SIP) is a well-known protocol that provides trusted services (voice or video) to end users along with efficiency, scalability, and interoperability. Like other Internet technologies, SIP stores its related data in databases with a predefined structure. Recently, SIP technologies have adopted in-memory databases as cache systems to ensure fast database operations during real-time communication. In industry, several in-memory databases have been implemented with different structures (e.g., query types, data structures, persistency, and key/value sizes). However, there is limited guidance on how to select a proper in-memory database for SIP communications. This paper recommends efficient in-memory databases best suited to SIP servers by evaluating three databases: Memcached, Redis, and Local (built into OpenSIPS). The evaluation was conducted by measuring the impact of in-memory operations (store and fetch) on the SIP server under heavy traffic in different scenarios. In summary, the results show that the Local database consumed less memory than Memcached and Redis for read and write operations, while when persistency is considered, Memcached is the preferable choice due to its 25.20 KB/s throughput and 0.763 s call-response time.
19

Tang, Xiaochun, and Jiawen Zhou. "Better Big Data Store for Real-Time Processing of Modern MRO System Via Transaction Key Groups." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 6 (December 2018): 1139–44. http://dx.doi.org/10.1051/jnwpu/20183661139.

Abstract:
Aim. The MRO2 system is a data management platform with the ability to manage and store all kinds of data across a product's lifecycle, which requires both mass storage capacity and scalability. Existing big-data stores for the MRO2 system either address only the storage problem or only the scalability issue. In this paper, a two-layer data management model is proposed, in which the top layer uses memory storage for scalability and the bottom layer uses distributed key-value storage for mass storage. By adding a middle layer of key groups between the application and the KV storage system, the keys needed for real-time processing are combined and cached on one node. This satisfies the characteristics of real-time applications and improves dynamic scalability. The protocol of dynamic key groups for real-time distributed computation in the MRO2 system is explained in detail, followed by the protocol for creating and deleting key groups. We then describe an implementation of a big-data store supporting the MRO2 system, using the delay times for creating and deleting dynamic transaction groups for estimation. Finally, experiments to appraise the proposed method were conducted. The response time of the proposed method is quite efficient in comparison with other methods used in big-data storage systems.
20

Byun, Hayoung, and Hyesook Lim. "Comparison on Search Failure between Hash Tables and a Functional Bloom Filter." Applied Sciences 10, no. 15 (July 29, 2020): 5218. http://dx.doi.org/10.3390/app10155218.

Abstract:
Hash-based data structures have been widely used in many applications. An intrinsic problem of hashing is collision, in which two or more elements are hashed to the same value. If a hash table is heavily loaded, more collisions occur. Elements that cannot be stored in a hash table because of collisions cause search failures. Many variant structures have been studied to reduce the number of collisions, but none of them completely solves the collision problem. In this paper, we claim that a functional Bloom filter (FBF) provides a lower search failure rate than hash tables when a hash table is heavily loaded. In other words, a hash table can be replaced with an FBF because the FBF achieves a lower search failure rate when storing a large amount of data in a limited amount of memory. While hash tables must store each input key in addition to its return value, a functional Bloom filter stores return values without input keys, because the distinct index combinations derived from each input key can be used to identify it. We theoretically compare the search failure rates of the FBF and hash-based data structures, such as the multi-hash table, cuckoo hash table, and d-left hash table, and provide simulation results to validate the theoretical results. The simulations show that the search failure rates of hash tables are larger than that of the functional Bloom filter when the load factor is larger than 0.6.
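A minimal sketch of a functional Bloom filter may help: each key maps to k cells that store its return value rather than a bit, a cell written with two different values becomes a conflict, and a lookup succeeds while at least one of its cells still holds one consistent value. The hash function and sizes below are illustrative assumptions, not the paper's construction.

```python
CONFLICT = object()  # sentinel: cell was written with two different values

class FunctionalBloomFilter:
    """Toy functional Bloom filter: k cells per key hold the key's return
    value instead of a bit, so no input keys are stored at all."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.cells = [None] * m

    def _indices(self, key: str):
        h = 0
        for byte in key.encode():      # simple polynomial hash (illustrative)
            h = (h * 131 + byte) % (1 << 32)
        for i in range(self.k):
            yield (h + i * i + i * 441) % self.m

    def insert(self, key, value):
        for idx in self._indices(key):
            if self.cells[idx] is None:
                self.cells[idx] = value
            elif self.cells[idx] != value:
                self.cells[idx] = CONFLICT

    def lookup(self, key):
        values = [self.cells[idx] for idx in self._indices(key)]
        if any(v is None for v in values):
            return None  # key was certainly never inserted
        survivors = {v for v in values if v is not CONFLICT}
        # search failure only when every cell conflicted or is ambiguous
        return survivors.pop() if len(survivors) == 1 else None
```

A lookup fails only when all k of its cells have been conflicted, which is the failure mode the paper compares against bucket overflow in loaded hash tables.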
21

Izydorczyk, Lucas, Nadia Oudjane, and Francesco Russo. "A fully backward representation of semilinear PDEs applied to the control of thermostatic loads in power systems." Monte Carlo Methods and Applications 27, no. 4 (October 21, 2021): 347–71. http://dx.doi.org/10.1515/mcma-2021-2095.

Abstract:
We propose a fully backward representation of semilinear PDEs with application to stochastic control. Based on this, we develop a fully backward Monte Carlo scheme that generates the regression grid, backwardly in time, as the value function is computed. This offers two key advantages in terms of computational efficiency and memory: first, the grid is generated adaptively in the areas of interest, and second, there is no need to store the entire grid. The performance of this technique is compared in simulations to the traditional forward-backward Monte Carlo approach on a control problem for thermostatic loads.
22

Vdovychenko, Ruslan, and Vadim Tulchinsky. "Parallel Implementation of Sparse Distributed Memory for Semantic Storage." Cybernetics and Computer Technologies, no. 2 (September 30, 2022): 58–66. http://dx.doi.org/10.34229/2707-451x.22.2.6.

Abstract:
Introduction. Sparse Distributed Memory (SDM) and Binary Sparse Distributed Representations (BSDR), as two phenomenological approaches to biological memory modelling, have many similarities. The idea of their integration into a hybrid semantic storage model is natural, with SDM as low-level cleanup memory (brain cells) for BSDR, which serves as an encoder of high-level symbolic information. A hybrid semantic store should be able to store holistic data (for example, structures of interconnected and sequential key-value pairs) in a neural network. Similar designs have been proposed several times since the 1990s. However, the previously proposed models are impractical due to insufficient scalability and/or low storage density. The gap between SDM and BSDR can be bridged by the results of a third theory related to sparse signals: Compressive Sensing or Sampling (CS). In this article, we focus on a highly efficient parallel implementation of the CS-SDM hybrid memory model for graphics processing units on the NVIDIA CUDA platform, analyze the computational complexity of CS-SDM operations in the parallel setting, and offer optimization techniques for experiments with large sequential batches of vectors. The purpose of the paper is to propose an efficient software implementation of sparse distributed memory for preserving semantics on modern graphics processing units. Results. Parallel algorithms for CS-SDM operations are proposed, their computational complexity is estimated, and a parallel implementation of the CS-SDM hybrid semantic store is given. An optimization of vector reconstruction for experiments with sequential data batches is proposed. Conclusions. The obtained results show that the design of CS-SDM is naturally parallel and that its algorithms are by design compatible with the architecture of systems with massive parallelism. The conducted experiments showed high performance of the developed implementation of the SDM memory block.
Keywords: GPU, CUDA, neural network, Sparse Distributed Memory, associative memory, Compressive Sensing.
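As background, the SDM substrate such a store builds on can be sketched on a CPU in a few lines: random hard locations, counter updates at every location within a Hamming radius of the target address on write, and summing plus thresholding on read. Dimensions here are toy-sized, and the CUDA parallelism the paper targets is elided; all parameters are illustrative.

```python
import random

class SparseDistributedMemory:
    """Tiny CPU sketch of Kanerva-style SDM: a write updates bit counters
    at every random 'hard location' within a Hamming radius of the target
    address; a read sums those counters and thresholds them back to bits."""

    def __init__(self, n_locations=200, dim=64, radius=28, seed=1):
        rng = random.Random(seed)
        self.dim, self.radius = dim, radius
        self.addresses = [rng.getrandbits(dim) for _ in range(n_locations)]
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _activated(self, address):
        # hard locations within the Hamming radius of the address
        for loc, a in enumerate(self.addresses):
            if bin(a ^ address).count("1") <= self.radius:
                yield loc

    def write(self, address, data):
        for loc in self._activated(address):
            for i in range(self.dim):
                self.counters[loc][i] += 1 if (data >> i) & 1 else -1

    def read(self, address):
        sums = [0] * self.dim
        for loc in self._activated(address):
            for i in range(self.dim):
                sums[i] += self.counters[loc][i]
        # threshold the summed counters back to a bit pattern
        return sum(1 << i for i, s in enumerate(sums) if s > 0)
```

Both loops (distance tests across locations, counter updates across bits) are embarrassingly parallel, which is the property the paper's GPU implementation exploits.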
23

Chen, Guangsheng, Pei Nie, and Weipeng Jing. "A Novel Query Method for Spatial Data in Mobile Cloud Computing Environment." Wireless Communications and Mobile Computing 2018 (2018): 1–11. http://dx.doi.org/10.1155/2018/1059231.

Abstract:
With the development of network communication and a 1000-fold increase in traffic demand from 4G to 5G, it is critical to provide an efficient and fast spatial data access interface for applications in mobile environments. In view of the low I/O efficiency and high latency of existing methods, this paper presents a memory-based spatial data query method that uses the distributed memory file system Alluxio to store data and builds a two-level index based on the Alluxio key-value structure, aiming to overcome the low efficiency of traditional methods. In line with the characteristics of the Spark computing framework, a data input format for spatial data queries is proposed, which can selectively read file data and reduce data I/O. Comparative experiments show that the memory-based file system Alluxio has better I/O performance than a disk file system; compared with the traditional distributed query method, the proposed method greatly reduces retrieval time.
24

Wu, Chenyuan, Mohammad Javad Amiri, Jared Asch, Heena Nagda, Qizhen Zhang, and Boon Thau Loo. "FlexChain." Proceedings of the VLDB Endowment 16, no. 1 (September 2022): 23–36. http://dx.doi.org/10.14778/3561261.3561264.

Full text available
Abstract:
While permissioned blockchains enable a family of data center applications, existing systems suffer from imbalanced loads across compute and memory, exacerbating the underutilization of cloud resources. This paper presents FlexChain, a novel permissioned blockchain system that addresses this challenge by physically disaggregating CPUs, DRAM, and storage devices to process different blockchain workloads efficiently. Disaggregation allows blockchain service providers to upgrade and expand hardware resources independently to support a wide range of smart contracts with diverse CPU and memory demands. Moreover, it ensures efficient resource utilization and hence prevents resource fragmentation in a data center. We have explored the design of XOV blockchain systems in a disaggregated fashion and developed a tiered key-value store that can elastically scale its memory and storage. Our design significantly speeds up the execution stage. We have also leveraged several techniques to parallelize the validation stage in FlexChain to further improve the overall blockchain performance. Our evaluation results show that FlexChain can provide independent compute and memory scalability, while incurring at most 12.8% disaggregation overhead. FlexChain achieves almost identical throughput as the state-of-the-art distributed approaches with significantly lower memory and CPU consumption for compute-intensive and memory-intensive workloads, respectively.
25

Pino, Juan, Aurelien Waite, and William Byrne. "Simple and Efficient Model Filtering in Statistical Machine Translation." Prague Bulletin of Mathematical Linguistics 98, no. 1 (October 1, 2012): 5–24. http://dx.doi.org/10.2478/v10108-012-0005-x.

Full text available
Abstract:
Data availability and distributed computing techniques have allowed statistical machine translation (SMT) researchers to build larger models. However, decoders need to retrieve information from these models efficiently in order to translate an input sentence or a set of input sentences. We introduce an easy-to-implement, general-purpose solution to this problem: we store SMT models as sets of key-value pairs in an HFile. We apply this strategy to two specific tasks: test-set hierarchical phrase-based rule filtering and n-gram count filtering for language model lattice rescoring. We compare our approach to alternative strategies and show that its trade-offs in terms of speed, memory and simplicity are competitive.
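The key-value filtering strategy, storing grammar rules keyed by their source side and keeping only entries whose key occurs as an n-gram in the test set, can be sketched as follows. The toy grammar and probabilities are invented for illustration; the paper stores the pairs in an HFile rather than a Python dict.

```python
# Grammar stored as key-value pairs: source phrase -> list of (target, prob).
# These entries are illustrative, not from the paper.
grammar = {
    "la maison": [("the house", 0.7), ("the home", 0.3)],
    "le chat": [("the cat", 0.9)],
    "un arbre": [("a tree", 0.8)],
}

def filter_model(grammar, test_sentences, max_len=2):
    """Keep only entries whose key occurs as an n-gram in the test set."""
    needed = set()
    for sent in test_sentences:
        toks = sent.split()
        for n in range(1, max_len + 1):
            for i in range(len(toks) - n + 1):
                needed.add(" ".join(toks[i:i + n]))
    return {k: v for k, v in grammar.items() if k in needed}
```

The decoder then only loads the filtered subset, which is the memory/speed trade-off the abstract refers to.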
26

Li, Cheng, Hao Chen, Chaoyi Ruan, Xiaosong Ma, and Yinlong Xu. "Leveraging NVMe SSDs for Building a Fast, Cost-effective, LSM-tree-based KV Store." ACM Transactions on Storage 17, no. 4 (November 30, 2021): 1–29. http://dx.doi.org/10.1145/3480963.

Full text available
Abstract:
Key-value (KV) stores support many crucial applications and services. They perform fast in-memory processing but are still often limited by I/O performance. The recent emergence of high-speed commodity non-volatile memory express solid-state drives (NVMe SSDs) has propelled new KV system designs that take advantage of their ultra-low latency and high bandwidth. Meanwhile, switching to entirely new data layouts and scaling entire databases up to high-end SSDs requires considerable investment. As a compromise, we propose SpanDB, an LSM-tree-based KV store that adapts the popular RocksDB system to utilize selective deployment of high-speed SSDs. SpanDB allows users to host the bulk of their data on cheaper and larger SSDs (and even hard disk drives with certain workloads), while relocating write-ahead logs (WAL) and the top levels of the LSM-tree to a much smaller and faster NVMe SSD. To better utilize this fast disk, SpanDB provides high-speed, parallel WAL writes via SPDK, and enables asynchronous request processing to mitigate inter-thread synchronization overhead and work efficiently with polling-based I/O. To ease live data migration between fast and slow disks, we introduce TopFS, a stripped-down file system providing familiar file interface wrappers on top of SPDK I/O. Our evaluation shows that SpanDB simultaneously improves RocksDB's throughput by up to 8.8× and reduces its latency by 9.5–58.3%. Compared with KVell, a system designed for high-end SSDs, SpanDB achieves 96–140% of its throughput, with a 2.3–21.6× lower latency, at a cheaper storage configuration.
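SpanDB's central placement idea, WAL and the topmost LSM-tree levels on the fast NVMe SSD while the bulk of the data stays on cheaper storage, can be caricatured as a placement rule. The level cutoff and device names below are illustrative assumptions, not values from the paper.

```python
def place(component, level=None):
    """Toy placement rule in the spirit of SpanDB: the WAL and the top LSM
    levels go to the fast NVMe SSD; deeper levels stay on the cheap SSD."""
    FAST_LEVELS = 2  # assumed cutoff, not from the paper
    if component == "wal":
        return "nvme"
    if component == "sst" and level is not None and level < FAST_LEVELS:
        return "nvme"
    return "capacity_ssd"
```

The rationale is that the WAL and the top levels absorb most small, latency-critical writes, so a small fast device captures most of the benefit.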
27

Williams, Cassondra L., and Paul J. Ponganis. "Diving physiology of marine mammals and birds: the development of biologging techniques." Philosophical Transactions of the Royal Society B: Biological Sciences 376, no. 1830 (June 14, 2021): 20200211. http://dx.doi.org/10.1098/rstb.2020.0211.

Full text available
Abstract:
In the 1940s, Scholander and Irving revealed fundamental physiological responses to forced diving of marine mammals and birds, setting the stage for the study of diving physiology. Since then, diving physiology research has moved from the laboratory to the field. Modern biologging, with the development of microprocessor technology, recorder memory capacity and battery life, has advanced and expanded investigations of the diving physiology of marine mammals and birds. This review describes a brief history of the start of field diving physiology investigations, including the invention of the time depth recorder, and then tracks the use of biologging studies in four key diving physiology topics: heart rate, blood flow, body temperature and oxygen store management. Investigations of diving heart rates in cetaceans and O2 store management in diving emperor penguins are highlighted to emphasize the value of diving physiology biologging research. The review concludes with current challenges, remaining diving physiology questions and what technologies are needed to advance the field. This article is part of the theme issue ‘Measuring physiology in free-living animals (Part I)’.
28

Vasavi, S., V.N. Priyanka G, and Anu A. Gokhale. "Framework for Visualization of GeoSpatial Query Processing by Integrating Redis With Spark." International Journal of Natural Computing Research 8, no. 3 (July 2019): 1–25. http://dx.doi.org/10.4018/ijncr.2019070101.

Full text available
Abstract:
Nowadays, we are moving towards digitization, and all our devices produce a variety of data; this has paved the way for the emergence of NoSQL databases such as Cassandra, MongoDB, and Redis. Big data such as geospatial data allows for geospatial analytics in applications such as tourism, marketing, and rural development. The Spark framework provides operators for the storage and processing of distributed data. This article proposes "GeoRediSpark" to integrate Redis with Spark. Redis is a key-value store that keeps data in memory, so integrating Redis with Spark can extend real-time processing of geospatial data. The article investigates storage and retrieval with the Redis built-in geospatial queries and adds two new geospatial operators, GeoWithin and GeoIntersect, to enhance the capabilities of Redis. Hashed indexing is used to improve processing performance. A comparison of Redis metrics on three benchmark datasets is made. A hashset is used to display geographic data. The output of geospatial queries is visualized according to the type of place and the nature of the query using Tableau.
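A GeoWithin-style operator over a key-value mapping of members to coordinates reduces to a great-circle distance test. The sketch below uses the haversine formula over a plain dict; the actual GeoRediSpark operator works against Redis structures, so this only approximates the intended semantics.

```python
import math

EARTH_R_KM = 6371.0  # mean Earth radius

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance between two (lon, lat) points in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_R_KM * math.asin(math.sqrt(a))

def geo_within(store, lon, lat, radius_km):
    """Hypothetical GeoWithin: member names whose point lies inside the circle."""
    return [name for name, (mlon, mlat) in store.items()
            if haversine_km(lon, lat, mlon, mlat) <= radius_km]
```

A production version would pair this refinement step with a geohash-style index so only candidate members near the query circle are examined.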
29

Zhou, Jingyu, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, et al. "FoundationDB." ACM SIGMOD Record 51, no. 1 (May 31, 2022): 24–31. http://dx.doi.org/10.1145/3542700.3542707.

Full text available
Abstract:
FoundationDB is an open-source transactional key-value store created more than ten years ago. It is one of the first systems to combine the flexibility and scalability of NoSQL architectures with the power of ACID transactions. FoundationDB adopts an unbundled architecture that decouples an in-memory transaction management system, a distributed storage system, and a built-in distributed configuration system. Each sub-system can be independently provisioned and configured to achieve scalability, high availability and fault tolerance. FoundationDB includes a deterministic simulation framework, used to test every new feature under a myriad of possible faults. FoundationDB offers a minimal and carefully chosen feature set, which has enabled a range of disparate systems to be built as layers on top. FoundationDB is the underpinning of cloud infrastructure at Apple, Snowflake and other companies.
30

Mikhav, Volodymyr, Yelyzaveta Meleshko, and Mykola Yakymenko. "Development of a Database Management System of Recommendation Systems for Computer Networks and Computer-integrated Systems." Central Ukrainian Scientific Bulletin. Technical Sciences 2, no. 5(36) (2022): 130–36. http://dx.doi.org/10.32515/2664-262x.2022.5(36).2.130-136.

Full text available
Abstract:
The goal of this work is to develop a database management system for recommendation systems in computer networks and computer-integrated systems, and to compare the quality of its work with that of existing systems. Today, recommendation systems are widely used in computer networks, in particular in social networks, Internet commerce, media content distribution and advertising, as well as in computer-integrated systems such as the Internet of Things and smart homes. An effective way of representing the data required by a recommendation system can reduce the resources needed and facilitate the development and use of more sophisticated algorithms for compiling lists of recommendations. When storing recommendation system data, important database parameters include the speed of reading/writing information and the amount of memory required to store the data in one format or another; it is therefore advisable to use simple data models. This paper investigated the feasibility and effectiveness of using open linear lists to store recommendation system data in computer networks and computer-integrated systems. To test the effectiveness of the proposed method of data representation, comparative experiments were conducted against the following software: the relational database management system PostgreSQL, the in-memory key-value store Redis, and the graph database Neo4j. Each method of data representation was tested on the following indicators: the time to fill the repository with test data; the amount of memory occupied by the repository after filling; and the recommendation generation time. The MovieLens dataset was used as test data. The developed database management system based on linear lists is significantly ahead of the existing tools in terms of both speed and efficiency of memory use.
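A minimal version of the linear-list layout evaluated above, one open singly linked list per user holding (item, rating) pairs, might look like this. The field names and newest-first ordering are invented for illustration; the paper's implementation details may differ.

```python
class Node:
    """One rating record in an open (singly linked) list."""
    __slots__ = ("item", "rating", "next")

    def __init__(self, item, rating, nxt=None):
        self.item, self.rating, self.next = item, rating, nxt

class UserList:
    """A user's ratings as a linked list, newest first (O(1) insertion)."""

    def __init__(self):
        self.head = None

    def add(self, item, rating):
        self.head = Node(item, rating, self.head)

    def ratings(self):
        out, n = [], self.head
        while n:
            out.append((n.item, n.rating))
            n = n.next
        return out
```

The appeal measured in the paper is the simplicity of the model: constant-time appends and compact per-record memory, at the cost of linear scans.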
31

Salah, Mohammed Saïd, Maizate Abderrahim, Ouzzif Mohamed, and Toumi Mohamed. "Mobile nodes for improving security and lifetime efficiency in wireless sensor networks." International Journal of Pervasive Computing and Communications 15, no. 1 (April 1, 2019): 2–15. http://dx.doi.org/10.1108/ijpcc-06-2019-057.

Full text available
Abstract:
Purpose This paper aims to provide an acceptable level of security while taking into account the limited capabilities of the sensors. It proposes a mobile approach to securing data exchanged by structured nodes in a cluster. Design/methodology/approach The approach is based on mobile nodes with significant computational and energy resources that allow cryptographic key management and periodic rekeying. Mobility in wireless sensor networks aims to increase the security and lifetime of the entire network. The technical methods used in this paper are based on elliptic-curve cryptography and key management through a balanced binary tree. Findings To maintain the effectiveness of critical applications based on wireless sensor networks, a good level of node security must be ensured, taking into account the nodes' limited energy and computing capacity. Collaboration between powerful mobile nodes provides better coverage and good key management. Owing to their significant capabilities, the mobile nodes can also be used to secure critical applications requiring difficult operations. Originality/value To compare the performance of the proposed approach with other mobile algorithms, the following metrics are considered: the energy consumed by normal sensors and cluster heads, the number of packets exchanged during key installation, the time to generate and distribute cryptographic keys, and the memory used by the different sensors to store keys.
32

Pan, Cheng, Xiaolin Wang, Yingwei Luo, and Zhenlin Wang. "Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET." ACM Transactions on Storage 17, no. 2 (May 28, 2021): 1–45. http://dx.doi.org/10.1145/3447573.

Full text available
Abstract:
Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
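The core intuition of the penalty- and locality-aware replacement described above, keep objects whose expected future miss cost is high, evict the rest, can be sketched in a few lines. The scoring function below (reuse probability times miss penalty) is a deliberate simplification, not pRedis's actual AET-based model.

```python
def choose_victim(keys, reuse_prob, miss_penalty):
    """Evict the key with the lowest expected cost of a future miss
    (reuse probability x miss penalty): a toy stand-in for a
    penalty-aware policy like the one pRedis quantifies via AET."""
    return min(keys, key=lambda k: reuse_prob[k] * miss_penalty[k])
```

Note how this departs from pure recency: a rarely reused key with a very expensive backing-store fetch can outrank a frequently reused but cheap one.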
33

Perez-Soltero, Alonso, Humberto Galvez-Leon, Mario Barcelo-Valenzuela, and Gerardo Sanchez-Schmitz. "A methodological proposal to benefit from team knowledge." VINE Journal of Information and Knowledge Management Systems 46, no. 3 (August 8, 2016): 298–318. http://dx.doi.org/10.1108/vjikms-08-2015-0043.

Full text available
Abstract:
Purpose This paper aims to propose a methodology for developing an organizational memory in order to benefit from team knowledge and to make the design processes of electromechanical devices more efficient. Design/methodology/approach Different frameworks and methods from the literature were analyzed, yielding key ideas to include in the developed methodology, and other approaches to applying team knowledge to design processes were considered. The research was conducted as a case study in a Mexican small and medium-sized enterprise dedicated to the manufacturing and installation of electromechanical devices, where the methodology was implemented. Findings A five-stage methodology was developed, consisting of preparation, identification, capture & storage, dissemination & application, and finally an evaluation & feedback stage. The described processes were implemented and materialized into a technological tool representing the organizational memory, in which knowledge was captured, organized and disseminated. Practical implications This study offers guidelines that can be applied in other organizations where team knowledge of design processes has not been adequately used for the company's improvement. Applying this methodology could be a strategy that enables work teams to store their experience. This knowledge could then be consulted and recovered by the workgroup in an effective manner to solve new problems. Originality/value A methodological proposal for developing an organizational memory of team knowledge was developed. To evaluate the impact of the methodology's implementation, a variety of indicators were proposed, classified as economic, organizational and performance indicators.
34

Cai, Tao, Fuli Chen, Qingjian He, Dejiao Niu, and Jie Wang. "The Matrix KV Storage System Based on NVM Devices." Micromachines 10, no. 5 (May 27, 2019): 346. http://dx.doi.org/10.3390/mi10050346.

Full text available
Abstract:
A storage device based on non-volatile memory (an NVM device) has high read/write speed and an embedded processor, which makes it a useful way to improve the efficiency of key-value (KV) applications. However, it still has limitations such as limited capacity, weaker computing power compared with the CPU, and complex I/O system software, so constructing a KV storage system directly on NVM devices is not effective. We analyze the characteristics of NVM devices and the demands of KV applications to design a matrix KV storage system based on NVM devices. Group collaboration management based on a Bloom filter, intra-group optimization based on competition, embedded KV management based on a B+-tree, and a new interface for the KV storage system are presented. The embedded processor in the NVM device and the CPU can then be comprehensively utilized to construct a matrix KV pair management system. It can improve the storage and management efficiency of massive numbers of KV pairs, and it can also support the efficient execution of KV applications. A prototype named MKVS (the matrix KV storage system based on NVM devices) was implemented, tested with YCSB (the Yahoo! Cloud Serving Benchmark), and compared with a current in-memory KV store. The results show that MKVS can improve throughput by 5.98 times, and reduce read latency by 99.7% and write latency by 77.2%.
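The group collaboration management above relies on a Bloom filter to decide cheaply whether a group may hold a given key. A generic Bloom filter sketch follows; the bit-array size and the hash construction are arbitrary choices for illustration, not MKVS's parameters.

```python
import hashlib

class BloomFilter:
    """Set-membership filter: no false negatives, tunable false positives."""

    def __init__(self, m_bits=256, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, key):
        # Derive k bit positions from salted SHA-256 digests of the key.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits >> p & 1 for p in self._positions(key))
```

In a grouped KV design, a per-group filter lets lookups skip groups that definitely do not hold the key, saving device-side work.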
35

Jin, Hai, Zhiwei Li, Haikun Liu, Xiaofei Liao, and Yu Zhang. "Hotspot-Aware Hybrid Memory Management for In-Memory Key-Value Stores." IEEE Transactions on Parallel and Distributed Systems 31, no. 4 (April 1, 2020): 779–92. http://dx.doi.org/10.1109/tpds.2019.2945315.

Full text available
36

Goncalves, Carlos, Luis Assuncao, and Jose C. Cunha. "Flexible MapReduce Workflows for Cloud Data Analytics." International Journal of Grid and High Performance Computing 5, no. 4 (October 2013): 48–64. http://dx.doi.org/10.4018/ijghpc.2013100104.

Full text available
Abstract:
Data analytics applications handle large data sets subject to multiple processing phases, some of which can execute in parallel on clusters, grids or clouds. Such applications can benefit from using MapReduce model, only requiring the end-user to define the application algorithms for input data processing and the map and reduce functions, but this poses a need to install/configure specific frameworks such as Apache Hadoop or Elastic MapReduce in Amazon Cloud. In order to provide more flexibility in defining and adjusting the application configurations, as well as in the specification of the composition of the application phases and their orchestration, the authors describe an approach for supporting MapReduce stages as sub-workflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). The authors discuss how a text mining application is represented as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. Access to intermediate data produced during the MapReduce computations is supported by a data sharing abstraction. The authors describe two implementations of this abstraction, one based on a shared tuple space and another based on an in-memory distributed key/value store. The authors describe the implementation of the framework, a set of developed tools, and our experimentation with the execution of the text mining algorithm over multiple Amazon EC2 (Elastic Compute Cloud) instances, and report on the speed-up and size-up results obtained up to 20 EC2 instances and for different corpus sizes, up to 97 million words.
37

Chen, Wei, Songping Yu, and Zhiying Wang. "Fast In-Memory Key–Value Cache System with RDMA." Journal of Circuits, Systems and Computers 28, no. 05 (May 2019): 1950074. http://dx.doi.org/10.1142/s0218126619500749.

Full text available
Abstract:
The quick advances of the Cloud and the advent of Fog computing impose increasingly critical demands for low-latency computing and data transfer on the underlying distributed computing infrastructure. Remote direct memory access (RDMA) technology has been widely applied for its low latency of remote data access. However, RDMA gives rise to a host of challenges in accelerating in-memory key-value stores, such as direct remote memory writes making the remote system more vulnerable. This study presents an in-memory key-value system based on RDMA, named Craftscached, which enables: (1) buffering remote memory writes into a communication cache memory to eliminate direct remote memory writes to the data memory area; (2) dividing the communication cache memory into RDMA-writable and RDMA-readable memory zones to reduce the possibility of data corruption due to stray memory writes, and caching data into an RDMA-readable memory zone to improve remote memory read performance; and (3) adopting remote out-of-place direct memory writes to achieve high performance for remote reads and writes. Experimental results in comparison with Memcached indicate that Craftscached provides far better performance: (1) in the case of read-intensive workloads, the data access of Craftscached is about 7–43× and 18–72.4% better than that of TCP/IP-based and RDMA-based Memcached, respectively; (2) memory utilization for small objects is more efficient, with only about 3.8% memory compaction overhead.
38

Shvahirev, P., O. Lopakov, V. Kosmachecskiy, and К. Migorenko. "TEMPERATURE CONTROLLER ON MODERN SEMICONDUCTOR SENSORS." Collection of scientific works of Odesa Military Academy 1, no. 13 (December 30, 2020): 322–32. http://dx.doi.org/10.37129/2313-7509.2020.13.1.322-332.

Full text available
Abstract:
The rapid development of electronics in the last decade, as well as the constant reduction in the price of electronic components, has led many manufacturers of household and industrial equipment to replace electromechanical components with microcontroller-controlled electronic circuits. A regulator based on a bimetallic plate and a mechanical timer is inferior in many respects to an electronic regulator and a digital timer. This work studies an intelligent temperature module, since in industrial automation there is a tendency toward the widespread use of such intelligent sensor devices and microcontrollers, which collect information, preprocess it, and then forward or store it. Such modules can work autonomously, collecting information about the measurement object and accumulating it until it is transferred to the operator (usually implemented on the basis of battery power), or they can be combined into a sensor network that interacts with the main measurement unit. The sensor is powered via an interface cable and, in some cases, via signal lines. The criteria for choosing a microcontroller for these purposes are: 1. Kernel performance, the required size of program and data memory, and a sufficient number of I/O port lines. 2. The cost of the microcontroller. 3. The technical parameters of the microcontroller (supply voltage range, operating temperature range, resistance to electromagnetic interference). 4. The life cycle of the selected family. 5. FLASH program memory with a sufficient reprogramming resource (a minimum of 1,000 cycles, with 100,000 desirable). 6. Reduction of the processor frequency to the value actually needed (using clock quartz in order to reduce current consumption). 7. Powering peripheral microcircuits (external ADCs, FLASH memory) directly from processor outputs, or through key elements only for the duration of their operation.
The article presents a circuit diagram based on the PIC16F84A microcontroller, in which a general circuit model based on a Brokaw cell and an operational amplifier-comparator is used as the heat-sensitive sensor. A feature of this microcontroller system is that it is equipped with an optocoupler and a triac assembly, which allows a powerful load of up to 20 A to be connected and controlled remotely. In addition, the volt-temperature characteristics were examined for non-linearity, and the choice of sensor was justified. Keywords: microcontroller, data acquisition system, temperature, measurement error.
39

Liu, Chengjian, Kai Ouyang, Xiaowen Chu, Hai Liu, and Yiu-Wing Leung. "R-memcached: A reliable in-memory cache for big key-value stores." Tsinghua Science and Technology 20, no. 6 (December 2015): 560–73. http://dx.doi.org/10.1109/tst.2015.7349928.

Full text available
40

Gao, Shen, Xiuying Chen, Li Liu, Dongyan Zhao, and Rui Yan. "Learning to Respond with Your Favorite Stickers." ACM Transactions on Information Systems 39, no. 2 (March 2021): 1–32. http://dx.doi.org/10.1145/3429980.

Full text available
Abstract:
Stickers with vivid and engaging expressions are becoming increasingly popular in online messaging apps, and some works are dedicated to automatically selecting a sticker response by matching the sticker image with previous utterances. However, existing methods usually focus on measuring the matching degree between the dialog context and the sticker image, ignoring the user's preference for particular stickers. Hence, in this article, we propose to recommend an appropriate sticker to the user based on the multi-turn dialog context and the user's sticker-usage history. Two main challenges are confronted in this task. One is to model the user's sticker preference based on their previous sticker selections. The other is to jointly fuse the user preference and the match between the dialog context and a candidate sticker into the final prediction. To tackle these challenges, we propose a Preference Enhanced Sticker Response Selector (PESRS) model. Specifically, PESRS first employs a convolutional sticker image encoder and a self-attention-based multi-turn dialog encoder to obtain representations of stickers and utterances. Next, a deep interaction network is proposed to conduct deep matching between the sticker and each utterance. Then, we model the user preference by using the recently selected stickers as input and use a key-value memory network to store the preference representation. PESRS then learns the short-term and long-term dependency between all interaction results with a fusion network and dynamically fuses the user preference representation into the final sticker selection prediction. Extensive experiments conducted on a large-scale real-world dialog dataset show that our model achieves state-of-the-art performance for all commonly used metrics. Experiments also verify the effectiveness of each component of PESRS.
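The key-value memory network used to store preference representations performs, at its core, a softmax-weighted read: query-key similarity scores weight a sum over the stored values. A dependency-free sketch of that addressing step is below; the vectors are toy examples, not learned embeddings, and PESRS's full architecture involves much more.

```python
import math

def kv_memory_read(query, keys, values):
    """One read from a key-value memory: softmax over query-key dot
    products, then a weighted sum of the values (the standard
    key-value memory network addressing step)."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                                # subtract max for stability
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    dim = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]
```

A query strongly aligned with one key returns (approximately) that key's value, which is how the memory retrieves the most relevant stored preference.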
41

Kassa, Hiwot Tadese, Jason Akers, Mrinmoy Ghosh, Zhichao Cao, Vaibhav Gogte, and Ronald Dreslinski. "Power-optimized Deployment of Key-value Stores Using Storage Class Memory." ACM Transactions on Storage 18, no. 2 (May 31, 2022): 1–26. http://dx.doi.org/10.1145/3511905.

Full text available
Abstract:
High-performance flash-based key-value stores in data-centers utilize large amounts of DRAM to cache hot data. However, motivated by the high cost and power consumption of DRAM, server designs with lower DRAM-per-compute ratio are becoming popular. These low-cost servers enable scale-out services by reducing server workload densities. This results in improvements to overall service reliability, leading to a decrease in the total cost of ownership (TCO) for scalable workloads. Nevertheless, for key-value stores with large memory footprints, these reduced DRAM servers degrade performance due to an increase in both IO utilization and data access latency. In this scenario, a standard practice to improve performance for sharded databases is to reduce the number of shards per machine, which degrades the TCO benefits of reduced DRAM low-cost servers. In this work, we explore a practical solution to improve performance and reduce the costs and power consumption of key-value stores running on DRAM-constrained servers by using Storage Class Memories (SCM). SCMs in a DIMM form factor, although slower than DRAM, are sufficiently faster than flash when serving as a large extension to DRAM. With new technologies like Compute Express Link, we can expand the memory capacity of servers with high bandwidth and low latency connectivity with SCM. In this article, we use Intel Optane PMem 100 Series SCMs (DCPMM) in AppDirect mode to extend the available memory of our existing single-socket platform deployment of RocksDB (one of the largest key-value stores at Meta). We first designed a hybrid cache in RocksDB to harness both DRAM and SCM hierarchically. We then characterized the performance of the hybrid cache for three of the largest RocksDB use cases at Meta (ChatApp, BLOB Metadata, and Hive Cache). 
Our results demonstrate that we can achieve up to 80% improvement in throughput and 20% improvement in P95 latency over the existing small DRAM single-socket platform, while maintaining a 43–48% cost improvement over our large DRAM dual-socket platform. To the best of our knowledge, this is the first study of the DCPMM platform in a commercial data center.
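The hybrid DRAM+SCM cache described above keeps hot entries in a small fast tier and demotes evictions to a larger, slower tier instead of dropping them. A toy two-tier LRU sketch follows; the tier sizes, LRU policy, and promote-on-hit behavior are illustrative assumptions, not the paper's RocksDB design.

```python
from collections import OrderedDict

class HybridCache:
    """Two-tier cache sketch: a small 'DRAM' tier backed by a larger
    'SCM' tier. DRAM evictions are demoted to SCM; SCM hits are
    promoted back to DRAM."""

    def __init__(self, dram_cap, scm_cap):
        self.dram = OrderedDict()
        self.scm = OrderedDict()
        self.dram_cap, self.scm_cap = dram_cap, scm_cap

    def put(self, key, val):
        self.dram[key] = val
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_cap:
            old_k, old_v = self.dram.popitem(last=False)  # LRU out of DRAM
            self.scm[old_k] = old_v                       # demote, don't drop
            if len(self.scm) > self.scm_cap:
                self.scm.popitem(last=False)              # finally evicted

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.scm:
            val = self.scm.pop(key)  # promote on SCM hit
            self.put(key, val)
            return val
        return None
```

The payoff mirrors the paper's result: objects that fall out of scarce DRAM are still served from memory-speed SCM rather than from flash.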
42

Shi, Yang, Jiawei Fei, Mei Wen, and Chunyuan Zhang. "Balancing Distributed Key-Value Stores with Efficient In-Network Redirecting." Electronics 8, no. 9 (September 9, 2019): 1008. http://dx.doi.org/10.3390/electronics8091008.

Full text available
Abstract:
Today’s cloud-based online services are underpinned by distributed key-value stores (KVSs). Keys and values are distributed across back-end servers in such scale-out systems. One primary real-life performance bottleneck occurs when storage servers suffer from load imbalance under skewed workloads. In this paper, we present KVSwitch, a centralized self-managing load balancer that leverages the power and flexibility of emerging programmable switches. The balance is achieved by dynamically predicting the hot items and by creating replication strategies according to KVS loading. To overcome the challenges in realizing KVSwitch given the limitations of the switch hardware, we decompose KVSwitch’s functions and carefully design them for the heterogeneous processors inside the switch. We prototype KVSwitch in a Tofino switch. Experimental results show that our solution can effectively keep the KVS servers balanced even under highly skewed workloads. Furthermore, KVSwitch only replicates 70% of hot items and consumes 9.88% of server memory rather than simply replicating all hot items to each server.
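Selective hot-item replication of the kind described above starts by identifying the smallest set of keys that accounts for most of the traffic. A hedged sketch of that selection step (the coverage threshold is an assumed knob, and KVSwitch's in-switch prediction is far more involved):

```python
from collections import Counter

def plan_replication(access_log, hot_fraction=0.7):
    """Pick the hottest keys covering `hot_fraction` of total accesses;
    only these are marked for replication across servers (a toy
    stand-in for KVSwitch-style selective replication)."""
    counts = Counter(access_log)
    total = sum(counts.values())
    hot, covered = [], 0
    for key, c in counts.most_common():
        if covered >= hot_fraction * total:
            break
        hot.append(key)
        covered += c
    return hot
```

Under a skewed (Zipf-like) workload, a small `hot` set covers most requests, which is why replicating only a fraction of hot items suffices for balance.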
43

An, Feng-Ping, Jun-e. Liu, and Lei Bai. "Pedestrian Reidentification Algorithm Based on Deconvolution Network Feature Extraction-Multilayer Attention Mechanism Convolutional Neural Network." Journal of Sensors 2021 (January 7, 2021): 1–12. http://dx.doi.org/10.1155/2021/9463092.

Full text of the source
Abstract:
Pedestrian reidentification is a key technology in large-scale distributed camera systems. It can quickly and efficiently detect and track target people in large-scale distributed surveillance networks. The existing traditional pedestrian reidentification methods have problems such as low recognition accuracy, low calculation efficiency, and weak adaptive ability. Pedestrian reidentification algorithms based on deep learning have been widely used in the field of pedestrian reidentification due to their strong adaptive ability and high recognition accuracy. However, the pedestrian recognition method based on deep learning has the following problems: first, during the learning process of the deep learning model, the initial value of the convolution kernel is usually randomly assigned, which makes the model learning process easily fall into a local optimum. The second is that the model parameter learning method based on the gradient descent method exhibits gradient dispersion. The third is that the information transfer of pedestrian reidentification sequence images is not considered. In view of these issues, this paper first examines the feature map matrix from the original image through a deconvolution neural network, uses it as a convolution kernel, and then performs layer-by-layer convolution and pooling operations. Then, the second derivative information of the error function is directly obtained without calculating the Hessian matrix, and the momentum coefficient is used to improve the convergence of the backpropagation, thereby suppressing the gradient dispersion phenomenon. At the same time, to solve the problem of information transfer of pedestrian reidentification sequence images, this paper proposes a memory network model based on a multilayer attention mechanism, which uses the network to effectively store image visual information and pedestrian behavior information, respectively. It can solve the problem of information transmission. 
Based on the above ideas, this paper proposes a pedestrian reidentification algorithm based on deconvolution network feature extraction-multilayer attention mechanism convolutional neural network. Experiments are performed on the related data sets using this algorithm and other major popular human reidentification algorithms. The results show that the pedestrian reidentification method proposed in this paper not only has strong adaptive ability but also has significantly improved average recognition accuracy and rank-1 matching rate compared with other mainstream methods.
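The memory-network read step underlying the multilayer attention mechanism described above can be sketched generically: score each memory slot against a query, normalize with softmax, and return the weighted sum. This is the standard attention read, a simplified stand-in for the paper's model, not its exact architecture.

```python
import numpy as np

def attention_read(memory, query):
    """Soft attention read over memory slots.

    memory: (slots, dim) array of stored feature vectors
    query:  (dim,) query vector
    Returns the attention-weighted readout and the slot weights."""
    scores = memory @ query                      # (slots,) similarity scores
    scores = scores - scores.max()               # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ memory, weights             # (dim,), (slots,)
```

Stacking several such reads, with separate memories for visual features and behavior features as the abstract describes, is what makes information from earlier frames of a pedestrian sequence available to later matching decisions.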
44

Trofimova, E. I. "Blockchain for social memory institutions: Functional value and possibilities." Scientific and Technical Libraries 1, no. 1 (March 18, 2021): 115–24. http://dx.doi.org/10.33186/1027-3689-2021-1-115-124.

Full text of the source
Abstract:
Decentralized storage and the closely coupled permanence of uploaded data, immunity against hacker attacks, transaction history recording and complete transparency make blockchain technology attractive beyond cryptocurrencies and economic transactions. The author reviews the world experience of applying the technology to various activities of social memory institutions and, in particular, individual blockchain-based programs. The technology makes it possible to provide control and insurance for pieces of art, to ensure copyright, to prevent illegal copying, to store digital copies and original works created in the digital environment, and to integrate resources using the key functionality of distributed databases. The possibilities and prospects for Russia are evaluated; the need for a regulatory foundation to define the core functionality and legal liability of blockchain processes is emphasized. The possibility of using the technology to build a single knowledge space as an integrative model of digital museum, archival and library resources is analyzed.
45

Deng, He Lian, and You Gang Xiao. "Development of General Embedded Intelligent Monitoring System for Tower Crane." Applied Mechanics and Materials 103 (September 2011): 394–98. http://dx.doi.org/10.4028/www.scientific.net/amm.103.394.

Full text of the source
Abstract:
For improving the generality, expandability and accuracy, a general embedded intelligent monitoring system for tower cranes is developed. The system can be applied to different kinds of tower cranes running at any lifting ratio, can be initialized using a U disk holding the tower crane's information, and fits the lifting torque curve automatically. In a dangerous state, the system sends out alarm signals with sounds and lights, and cuts off power by sending signals to a PLC through the RS485 communication interface. When electricity goes off suddenly, the system records the real-time operating information automatically and stores it in a black box, which can be taken as the basis for confirming accident responsibility.
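The black-box recording described above, keeping only the most recent operating data in memory and persisting it when power fails, can be sketched as a fixed-size ring buffer flushed on a power-fail interrupt. The window size and sampled fields here are illustrative assumptions.

```python
import json
from collections import deque

class BlackBoxRecorder:
    """Ring buffer of recent sensor samples; on a power-fail signal the
    window is flushed to durable storage. A 1200-sample window would
    cover, e.g., 120 s of operation sampled at 10 Hz."""

    def __init__(self, window=1200):
        self.window = deque(maxlen=window)   # oldest samples drop automatically

    def sample(self, timestamp, weight, torque, luffing, height, wind):
        self.window.append(
            [timestamp, weight, torque, luffing, height, wind])

    def on_power_fail(self, path):
        # Triggered by the power-off interrupt handler: persist the window.
        with open(path, "w") as f:
            json.dump(list(self.window), f)
```

Keeping the buffer in fast volatile memory and writing to the slow non-volatile device only on the power-fail signal both preserves responsiveness and limits wear on the persistent storage.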
46

Pang, Pu, Gang Deng, Kaihao Bai, Quan Chen, Shixuan Sun, Bo Liu, Yu Xu, et al. "Async-Fork: Mitigating Query Latency Spikes Incurred by the Fork-based Snapshot Mechanism from the OS Level." Proceedings of the VLDB Endowment 16, no. 5 (January 2023): 1033–45. http://dx.doi.org/10.14778/3579075.3579079.

Full text of the source
Abstract:
In-memory key-value stores (IMKVSes) serve many online applications. They generally adopt the fork-based snapshot mechanism to support data backup. However, this method can result in query latency spikes because the engine is out-of-service for queries during the snapshot. In contrast to existing research optimizing snapshot algorithms, we address the problem from the operating system (OS) level, while keeping the data persistent mechanism in IMKVSes unchanged. Specifically, we first study the impact of the fork operation on query latency. Based on findings in the study, we propose Async-fork, which performs the fork operation asynchronously to reduce the out-of-service time of the engine. Async-fork is implemented in the Linux kernel and deployed into the online Redis database in public clouds. Our experiment results show that Async-fork can significantly reduce the tail latency of queries during the snapshot.
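The fork-based snapshot mechanism the paper starts from can be sketched directly: the parent process forks, the child inherits a copy-on-write image of the store frozen at fork time and persists it, while the parent keeps serving (and mutating). This sketch uses `os.fork`, so it is POSIX-only; Redis's actual snapshotting (RDB) is more involved.

```python
import json
import os

def snapshot(store, path):
    """Fork-based snapshot: the child sees the store as of fork time, so
    writes the parent applies afterwards do not leak into the snapshot."""
    pid = os.fork()
    if pid == 0:                       # child: persist the frozen view, exit
        with open(path, "w") as f:
            json.dump(store, f)
        os._exit(0)
    return pid                         # parent: keep serving queries

# Usage sketch:
# store = {"k1": "v1"}
# pid = snapshot(store, "dump.json")
# store["k2"] = "v2"                  # parent mutates after the fork
# os.waitpid(pid, 0)                  # dump.json still holds only {"k1": "v1"}
```

The latency spikes the paper targets come from the fork call itself: while the kernel copies page tables, the parent cannot serve queries, which is exactly the out-of-service window Async-fork shrinks by doing that copying asynchronously.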
47

Skrzypczak, Jan, and Florian Schintke. "Towards Log-Less, Fine-Granular State Machine Replication." Datenbank-Spektrum 20, no. 3 (October 15, 2020): 231–41. http://dx.doi.org/10.1007/s13222-020-00358-4.

Full text of the source
Abstract:
State machine replication is used to increase the availability of a service such as a data management system while ensuring consistent access to it. State-of-the-art implementations are based on a command log to gain linear write access to storage and avoid repeated transmissions of large replicas. However, the command log requires non-trivial state management such as allocation and pruning to prevent unbounded growth. By introducing in-place replicated state machines that do not use command logs, the log overhead can be avoided. Instead, replicas agree on a sequence of states, and former states are directly overwritten. This method enables the consistent, fault-tolerant replication of basic data management primitives such as counters, sets, or individual locks with little to no overhead. It matches the properties of fast, byte-addressable, non-volatile memory particularly well, where it is no longer necessary to rely on sequential access for good performance. Our approach is especially well suited for small states and fine-granular distributed data management as it occurs in key-value stores, for example.
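The log-less idea, agreeing on numbered full states and overwriting the previous state in place, can be sketched as follows. Consensus on each version is assumed to happen elsewhere (e.g. one Paxos round per transition); the class names are ours, not the paper's.

```python
class InPlaceReplica:
    """Log-less replication sketch: each replica stores a single
    (version, state) cell and overwrites it when a newer agreed state
    arrives. There is no command log to allocate or prune."""

    def __init__(self, initial_state):
        self.version = 0
        self.state = initial_state     # the whole replicated state, in place

    def apply_agreed(self, version, state):
        # Idempotent: stale or duplicate deliveries are ignored.
        if version > self.version:
            self.version, self.state = version, state
            return True
        return False

def replicate(replicas, version, state):
    """Deliver an agreed (version, state) pair to all replicas;
    delivery order and duplicates do not affect the outcome."""
    for r in replicas:
        r.apply_agreed(version, state)
```

This fits small states (a counter, a lock, a small set) because transmitting the whole state is cheap, and it fits byte-addressable non-volatile memory because the in-place overwrite no longer needs to be sequential to be fast.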
48

Benson, Lawrence, Hendrik Makait, and Tilmann Rabl. "Viper." Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1544–56. http://dx.doi.org/10.14778/3461535.3461543.

Full text of the source
Abstract:
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply drop-in replacing existing storage with PMem does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4--18X for write workloads, while matching or surpassing their get performance.
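The split design the abstract describes, a volatile DRAM hash index pointing into sequentially written persistent storage, can be sketched with a plain file standing in for PMem. The record format, the class name, and the persist point are simplified assumptions, not Viper's actual layout.

```python
import os
import struct

class TinyViper:
    """Sketch of a hybrid KVS: a volatile (DRAM) hash index maps keys to
    offsets in a persistent, append-only value file (the PMem stand-in).
    Writes append sequentially; reads follow the in-memory offset."""

    def __init__(self, path):
        self.index = {}                          # DRAM: key -> file offset
        self.file = open(path, "ab+")            # PMem stand-in: append-only

    def put(self, key, value):
        self.file.seek(0, os.SEEK_END)
        offset = self.file.tell()
        data = key.encode() + b"\x00" + value.encode()   # keys without NUL
        self.file.write(struct.pack("<I", len(data)) + data)
        self.file.flush()                        # persist point (cf. clwb/fence)
        self.index[key] = offset                 # index update stays volatile

    def get(self, key):
        offset = self.index.get(key)
        if offset is None:
            return None
        self.file.seek(offset)
        (length,) = struct.unpack("<I", self.file.read(4))
        stored_key, value = self.file.read(length).split(b"\x00", 1)
        return value.decode()
```

Keeping the index in DRAM exploits DRAM's fast random writes, while the value file only ever sees sequential appends, which is the access pattern PMem handles most efficiently; on recovery, the index would be rebuilt by scanning the file.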
49

Sujith, A. V. L. N., Naila Iqbal Qureshi, Venkata Harshavardhan Reddy Dornadula, Abinash Rath, Kolla Bhanu Prakash, and Sitesh Kumar Singh. "A Comparative Analysis of Business Machine Learning in Making Effective Financial Decisions Using Structural Equation Model (SEM)." Journal of Food Quality 2022 (February 23, 2022): 1–7. http://dx.doi.org/10.1155/2022/6382839.

Full text of the source
Abstract:
Globally, organisations are focused on deriving more value from the data they have collected from various sources. The purpose of this research is to examine the key components of machine learning in making efficient financial decisions. Business leaders are now faced with huge volumes of data that need to be stored, analysed, and retrieved in order to make effective decisions and achieve competitive advantage. Machine learning is considered a subset of artificial intelligence focused mainly on optimizing business processes with little or no human intervention. ML techniques enable analysing and recognizing patterns in large data sets and provide the necessary information to management for effective decision making in areas covering finance, marketing, supply chain, human resources, etc. Machine learning enables extracting quality patterns and forecasts from the database and fosters growth; it supports the transition from physical data to electronically stored data, enhances organisational memory, and supports financial decision making among other aspects. This study addresses the application of machine learning in effective financial decision making among companies; ML has emerged as a critical technology in the current competitive market and offers business leaders more opportunities to leverage large volumes of data. The study collects data from employees, managers, and business leaders in various industries to understand the influence of machine learning on financial decision making.
50

Moine, Alexandre, Arthur Charguéraud, and François Pottier. "A High-Level Separation Logic for Heap Space under Garbage Collection." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 718–47. http://dx.doi.org/10.1145/3571218.

Full text of the source
Abstract:
We present a Separation Logic with space credits for reasoning about heap space in a sequential call-by-value lambda-calculus equipped with garbage collection and mutable state. A key challenge is to design sound, modular, lightweight mechanisms for establishing the unreachability of a block. Prior work in this area uses pointed-by assertions to keep track of the predecessors of every block, but is carried out in the setting of an assembly-like programming language. We take up the challenge in the setting of a high-level language, where a key problem is to identify and reason about the memory locations that the garbage collector considers as roots. For this purpose, we propose novel "stackable" assertions, which keep track of the existence of stack-to-heap pointers without explicitly recording their origin. Furthermore, we explain how to reason about closures -- concrete heap-allocated data structures that implement the abstract concept of a first-class function. We demonstrate the expressiveness and tractability of our program logic via a range of examples, including recursive functions on linked lists, objects implemented using closures and mutable internal state, recursive functions in continuation-passing style, and three stack implementations that exhibit different space bounds. These last three examples illustrate reasoning about the reachability of the items stored in a container as well as amortized reasoning about space. All of our results are proved in Coq on top of Iris.
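As a rough illustration of how space credits work (notation simplified and partly ours, not the paper's exact rules): allocation consumes credits proportional to the block's size, and a block shown to be unreachable can be logically freed, recovering its credits through a ghost update.

```latex
% $\diamondsuit n$: $n$ space credits, an affine resource bounding live heap size.

% Allocation pays for the block it creates:
\{\, \diamondsuit\, \mathit{size}(v) \,\}\;\; \mathtt{ref}\ v \;\;
  \{\, \lambda \ell.\ \ell \mapsto v \,\}

% Logical free: once $\ell$ has no heap predecessors and no stack pointers
% (so the GC may collect it), its points-to assertion trades back for credits:
\ell \mapsto v \;\ast\; \text{``$\ell$ unreachable''}
  \;\Rrightarrow\; \diamondsuit\, \mathit{size}(v)
```

The pointed-by and "stackable" assertions the abstract mentions are precisely what lets a proof establish the "unreachable" side condition modularly, without inspecting the whole heap.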