Academic literature on the topic 'In-memory key-value store'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'In-memory key-value store.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "In-memory key-value store"

1

Cai, Tao, Qingjian He, Dejiao Niu, Fuli Chen, Jie Wang, and Lei Li. "A New Embedded Key–Value Store for NVM Device Simulator." Micromachines 11, no. 12 (December 2, 2020): 1075. http://dx.doi.org/10.3390/mi11121075.

Abstract:
Non-volatile memory (NVM) devices are a promising way to overcome the memory wall in computers. However, the current I/O software stack in operating systems becomes a performance bottleneck for applications based on NVM devices, especially for key–value stores. We analyzed the characteristics of key–value stores and NVM devices and designed a new embedded key–value store for an NVM device simulator, named PMEKV. The embedded processor in NVM devices was used to manage key–value pairs, reducing data transfer between NVM devices and key–value applications. It also cut down data copying between user space and kernel space in the operating system, alleviating the impact of the I/O software stack on the efficiency of key–value stores. The architecture, data layout, management strategy, new interface, and log strategy of PMEKV are described. Finally, a prototype of PMEKV was implemented based on PMEM. We used YCSB to test it against Redis, MongoDB, and Memcached, as well as against the PMEM-based Redis variant PMEM-Redis and PMEM-KV. The results show that PMEKV has an advantage in throughput and adaptability over current key–value stores.
2

Kusuma, Mandahadi. "Metode Optimasi Memcached sebagai NoSQL Key-value Memory Cache." JISKA (Jurnal Informatika Sunan Kalijaga) 3, no. 3 (August 30, 2019): 14. http://dx.doi.org/10.14421/jiska.2019.33-02.

Abstract:
Memcached is an application used to store the results of client queries on the web in server memory as temporary storage (cache). The goal is for the web application to remain responsive even under heavy access. Memcached uses key-value pairs and the LRU (Least Recently Used) algorithm to store data. In its default configuration, Memcached can handle web-based applications properly, but when the volume of transferred data and cached objects swells to thousands or millions of items, optimization steps are needed so that the Memcached service stays optimal, avoids Input/Output (I/O) overhead, and keeps latency low. This paper reviews some of the latest research on Memcached optimization. Methods that can be used include clustering, memory partitioning, Graphics Processing Unit (GPU) hashing, User Datagram Protocol (UDP) transmission, Solid State Drive hybrid memory, and Memcached over the Hadoop Distributed File System (HDFS).
Keywords: memcached, optimization, web-app, overhead, latency
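Since several of the optimizations surveyed here build on Memcached's LRU eviction, a minimal sketch of an LRU key-value cache may help readers unfamiliar with the mechanism. The class and its entry-count bound are illustrative only, not Memcached's actual slab-based implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU key-value cache illustrating the eviction policy
    Memcached uses (slab allocation and expiry omitted for brevity)."""

    def __init__(self, capacity: int):
        self.capacity = capacity      # maximum number of entries
        self.entries = OrderedDict()  # iteration order doubles as recency order

    def get(self, key):
        if key not in self.entries:
            return None                    # cache miss
        self.entries.move_to_end(key)      # mark as most recently used
        return self.entries[key]

    def set(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)      # new/updated entry is most recent
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used
```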
3

Zhang, Yiming, Dongsheng Li, Chuanxiong Guo, Haitao Wu, Yongqiang Xiong, and Xicheng Lu. "CubicRing: Exploiting Network Proximity for Distributed In-Memory Key-Value Store." IEEE/ACM Transactions on Networking 25, no. 4 (August 2017): 2040–53. http://dx.doi.org/10.1109/tnet.2017.2669215.

4

Wei, Xingda, Rong Chen, Haibo Chen, and Binyu Zang. "XStore: Fast RDMA-Based Ordered Key-Value Store Using Remote Learned Cache." ACM Transactions on Storage 17, no. 3 (August 31, 2021): 1–32. http://dx.doi.org/10.1145/3468520.

Abstract:
RDMA (Remote Direct Memory Access) has gained considerable interest in network-attached in-memory key-value stores. However, traversing the remote tree-based index in ordered key-value stores with RDMA becomes a critical obstacle, causing an order-of-magnitude slowdown and limited scalability due to multiple round trips. Using an index cache with conventional wisdom (caching partial data and traversing it locally) usually has limited effect because of unavoidable capacity misses, massive random accesses, and costly cache invalidations. We argue that a machine learning (ML) model is a perfect cache structure for the tree-based index, termed a learned cache. Based on it, we design and implement XStore, an RDMA-based ordered key-value store with a new hybrid architecture that retains a tree-based index at the server to handle dynamic workloads (e.g., inserts) and leverages a learned cache at the client to handle static workloads (e.g., gets and scans). The key idea is to decouple ML model retraining from index updating by maintaining a layer of indirection from logical to actual positions of key-value pairs. This allows a stale learned cache to continue predicting a correct position for a lookup key. XStore ensures correctness using a validation mechanism with a fallback path and further uses speculative execution to minimize the cost of cache misses. Evaluations with YCSB benchmarks and production workloads show that a single XStore server can achieve over 80 million read-only requests per second. This outperforms state-of-the-art RDMA-based ordered key-value stores (namely DrTM-Tree, Cell, and eRPC+Masstree) by up to 5.9× (from 3.7×). For workloads with inserts, XStore still provides up to 3.5× (from 2.7×) throughput speedup, achieving 53M reqs/s. The learned cache also reduces client-side memory usage and provides an efficient memory-performance tradeoff, e.g., saving 99% of memory at the cost of 20% of peak throughput.
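The lookup path this abstract describes can be illustrated with a small sketch: a learned model predicts a position range for a key, a bounded local search replaces repeated remote tree traversals, and an indirection table tolerates stale models. This sketch assumes integer keys and a plain linear model with a known error bound; XStore's actual models, RDMA reads, and validation/fallback machinery are far more involved:

```python
import bisect

class LearnedCacheSketch:
    """Client-side lookup with a learned position model plus a
    logical-to-actual indirection table, as the abstract describes."""

    def __init__(self, sorted_keys, actual_positions, slope, intercept, max_err):
        self.sorted_keys = sorted_keys       # keys in logical (sorted) order
        self.indirection = actual_positions  # logical position -> actual position
        self.slope, self.intercept = slope, intercept
        self.max_err = max_err               # model's worst-case position error

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.sorted_keys), guess + self.max_err + 1)
        # One bounded local search replaces repeated remote tree traversals.
        i = bisect.bisect_left(self.sorted_keys, key, lo, hi)
        if i < hi and self.sorted_keys[i] == key:
            return self.indirection[i]  # actual position (one RDMA read in XStore)
        return None  # miss: fall back to the server-side tree
```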
5

Ma, Wenlong, Yuqing Zhu, Cheng Li, Mengying Guo, and Yungang Bao. "BiloKey: A Scalable Bi-Index Locality-Aware In-Memory Key-Value Store." IEEE Transactions on Parallel and Distributed Systems 30, no. 7 (July 1, 2019): 1528–40. http://dx.doi.org/10.1109/tpds.2019.2891599.

6

Zhang, Baoquan, and David H. C. Du. "NVLSM: A Persistent Memory Key-Value Store Using Log-Structured Merge Tree with Accumulative Compaction." ACM Transactions on Storage 17, no. 3 (August 31, 2021): 1–26. http://dx.doi.org/10.1145/3453300.

Abstract:
Computer systems utilizing byte-addressable Non-Volatile Memory (NVM) as memory/storage can provide low-latency data persistence. The widely used key-value stores based on the Log-Structured Merge Tree (LSM-Tree) remain beneficial for NVM systems in terms of space and write efficiency. However, the significant write amplification introduced by the leveled compaction of the LSM-Tree degrades the write performance of the key-value store and shortens the lifetime of NVM devices. Existing studies propose new compaction methods to reduce write amplification; unfortunately, they result in relatively large read amplification. In this article, we propose NVLSM, a key-value store for NVM systems using an LSM-Tree with new accumulative compaction. By fully utilizing the byte-addressability of NVM, accumulative compaction uses pointers to accumulate data into multiple floors in a logically sorted run, reducing the number of compactions required. We also propose a cascading search scheme for reads among the multiple floors to reduce read amplification. NVLSM therefore reduces write amplification with only small increases in read amplification. We compare NVLSM with key-value stores using the LSM-Tree with two other compaction methods: leveled compaction and fragmented compaction. Our evaluations show that NVLSM reduces write amplification by up to 67% compared with an LSM-Tree using leveled compaction, without significantly increasing read amplification. In write-intensive workloads, NVLSM reduces average latency by 15.73%–41.2% compared to other key-value stores.
7

Ha, Minjong, and Sang-Hoon Kim. "InK: In-Kernel Key-Value Storage with Persistent Memory." Electronics 9, no. 11 (November 13, 2020): 1913. http://dx.doi.org/10.3390/electronics9111913.

Abstract:
Block-based storage devices exhibit different characteristics from main memory, and applications and systems have long been optimized with these characteristics in mind. However, emerging non-volatile memory technologies are about to change the situation. Persistent Memory (PM) provides a huge, persistent, and byte-addressable address space to the system, enabling new opportunities for systems software. However, existing applications usually utilize PM indirectly, as a storage device on top of file systems. This makes applications and file systems perform unnecessary operations and amplifies I/O traffic, under-utilizing the high performance of PM. In this paper, we make the case for an in-kernel key-value storage service optimized for PM, called InK. While providing data persistence at high performance, InK takes the characteristics of PM into account to guarantee crash consistency. To this end, InK indexes key-value pairs with a B+ tree, which is more efficient on PM. We implemented InK in the Linux kernel and evaluated its performance with the Yahoo Cloud Serving Benchmark (YCSB) and RocksDB. Evaluation results confirm that InK has advantages over LSM-tree-based key-value store systems in terms of throughput and tail latency.
8

Han, Youil, and Eunji Lee. "CRAST: Crash-resilient data management for a key-value store in persistent memory." IEICE Electronics Express 15, no. 23 (2018): 20180919. http://dx.doi.org/10.1587/elex.15.20180919.

9

Zhang, Kai, Kaibo Wang, Yuan Yuan, Lei Guo, Rubao Li, Xiaodong Zhang, Bingsheng He, Jiayu Hu, and Bei Hua. "A distributed in-memory key-value store system on heterogeneous CPU–GPU cluster." VLDB Journal 26, no. 5 (August 21, 2017): 729–50. http://dx.doi.org/10.1007/s00778-017-0479-0.

10

Kim, Jungwon, and Jeffrey S. Vetter. "Implementing efficient data compression and encryption in a persistent key-value store for HPC." International Journal of High Performance Computing Applications 33, no. 6 (May 23, 2019): 1098–112. http://dx.doi.org/10.1177/1094342019847264.

Abstract:
Persistent data structures like key-value stores (KVSs), stored in a high-performance computing (HPC) system's nonvolatile memory, have recently emerged as an attractive solution for a number of challenges such as limited I/O performance. Data compression and encryption are two well-known techniques for improving several properties of such data-oriented systems. This article investigates how to efficiently integrate data compression and encryption into persistent KVSs for HPC, with the ultimate goal of hiding their costs and complexity in terms of performance and ease of use. Our compression technique exploits the deep memory hierarchy in an HPC system to achieve both storage reduction and performance improvement. Our encryption technique provides a practical level of security and enables the secure sharing of sensitive data in complex scientific workflows at nearly imperceptible cost. We implement the proposed techniques on top of a distributed embedded KVS to evaluate the benefits and costs of incorporating these capabilities along different points in the dataflow path, illustrating differences in effective bandwidth, latency, and additional computational expense on Swiss National Supercomputing Centre's Grand Tavé and National Energy Research Scientific Computing Center's Cori.
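To illustrate the kind of transparent value-path integration the article investigates, here is a minimal sketch of a key-value wrapper that compresses values on put and decompresses on get. This covers compression only (the article's encryption support would wrap the same path), and the dict backend, one-byte tags, and size threshold are illustrative assumptions, not the article's implementation:

```python
import zlib

class CompressingKVStore:
    """Transparently compress values on put and decompress on get, one of
    the two value-path techniques the article integrates. The dict backend,
    one-byte tags, and size threshold are illustrative assumptions."""

    def __init__(self, backend=None, min_size=128):
        self.backend = {} if backend is None else backend
        self.min_size = min_size  # skip tiny values where header overhead dominates

    def put(self, key, value: bytes):
        if len(value) >= self.min_size:
            self.backend[key] = b"Z" + zlib.compress(value)  # tagged compressed
        else:
            self.backend[key] = b"P" + value                 # tagged plain

    def get(self, key):
        stored = self.backend.get(key)
        if stored is None:
            return None
        body = stored[1:]
        return zlib.decompress(body) if stored[:1] == b"Z" else body
```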

Dissertations / Theses on the topic "In-memory key-value store"

1

Giordano, Omar. "Design and Implementation of an Architecture-aware In-memory Key-Value Store." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291213.

Abstract:
Key-Value Stores (KVSs) are a type of non-relational database whose data is represented as key-value pairs; they are often used for cache and session data storage. Among them, Memcached is one of the most popular, as it is widely used in various Internet services such as social networks and streaming platforms. Given the continuous and increasingly rapid growth of networked devices that use these services, the commodity hardware on which the databases run must process packets faster to meet the needs of the market. In recent years, however, the performance improvements of new hardware have become smaller and smaller. Since purchasing new products is no longer synonymous with significant performance gains, companies need to exploit the full potential of the hardware already in their possession, postponing the purchase of newer hardware. One recent idea for increasing the performance of commodity hardware is slice-aware memory management. This technique exploits the Last-Level Cache (LLC) by making sure that individual cores take data from memory locations that are mapped to their respective cache portions (i.e., LLC slices). This thesis focuses on the realisation of a KVS prototype, based on the Intel Haswell micro-architecture and built on top of the Data Plane Development Kit (DPDK), to which the principles of slice-aware memory management are applied. To test its performance, given the non-existence of a DPDK-based traffic generator that supports the Memcached protocol, an additional prototype of such a traffic generator was also developed. Performance was measured using two distinct machines: one for the traffic generator and one for the KVS. First the "regular" KVS prototype was tested, then the slice-aware one, to see the actual benefits. Both KVS prototypes were subjected to two types of traffic: (i) uniform traffic, where the keys are always different from each other, and (ii) skewed traffic, where keys are repeated and some keys are more likely to be repeated than others. The experiments show that, in a real-world scenario (i.e., one characterised by skewed key distributions), the employment of a slice-aware memory management technique in a KVS can slightly improve end-to-end latency (by ~2%). Additionally, the technique strongly affects the look-up time required by the CPU to find the key and the corresponding value in the database, decreasing the mean time by ~22.5% and improving the 99th percentile by ~62.7%.
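The slice-aware placement idea the abstract describes can be sketched roughly as follows. Note that slice_of(), alloc(), and free() are hypothetical stand-ins: the mapping from physical address to LLC slice is an undocumented hash on Intel CPUs, and a real implementation (as in this thesis) works at the DPDK/driver level:

```python
# slice_of() stands in for the CPU-specific hash from address to LLC slice
# (undocumented on Intel CPUs); alloc()/free() are a hypothetical buffer pool.
def allocate_in_slice(alloc, free, slice_of, want_slice, size, attempts=4096):
    """Draw buffers from the pool until one maps to the desired LLC slice,
    so the core owning that slice serves keys placed in the buffer."""
    rejected = []
    try:
        for _ in range(attempts):
            buf = alloc(size)
            if slice_of(buf) == want_slice:
                return buf
            rejected.append(buf)
        raise RuntimeError("no buffer mapped to slice %d" % want_slice)
    finally:
        for r in rejected:
            free(r)  # return unusable buffers to the pool
```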
2

Stenberg, Johan. "Snapple: A distributed, fault-tolerant, in-memory key-value store using Conflict-Free Replicated Data Types." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188691.

Abstract:
As services grow and receive more traffic, data resilience through replication becomes increasingly important. Modern large-scale Internet services such as Facebook, Google, and Twitter serve millions of users concurrently, and replication is a vital component of their distributed systems. Eventual consistency and Conflict-Free Replicated Data Types (CRDTs) have been suggested as an alternative to strongly consistent systems. This thesis implements and evaluates Snapple, a distributed, fault-tolerant, in-memory key-value database based on CRDTs running on the Java Virtual Machine. Snapple supports two kinds of CRDTs: an optimized implementation of the OR-Set, and version vectors. Performance measurements show that Snapple is significantly faster than Riak, a persistent database based on CRDTs, but has 2.5x-5x lower throughput than Redis, a popular in-memory key-value database written in C. Snapple is a prototype implementation, but it might be a viable alternative to Redis if the user wants the consistency guarantees CRDTs provide.
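For readers unfamiliar with the OR-Set the abstract mentions, a basic tombstone-variant sketch follows. Snapple implements an optimized OR-Set, so this illustrates only the semantics, not the thesis's implementation:

```python
import uuid

class ORSet:
    """Observed-Remove Set CRDT (basic tombstone variant): an element is
    present if it has an add-tag not covered by a remove. A concurrent
    add wins over a remove, because a remove only covers observed tags."""

    def __init__(self):
        self.added = {}    # element -> set of unique add-tags
        self.removed = {}  # element -> set of add-tags seen when removed

    def add(self, element):
        self.added.setdefault(element, set()).add(uuid.uuid4())

    def remove(self, element):
        observed = self.added.get(element, set())
        self.removed.setdefault(element, set()).update(observed)

    def contains(self, element):
        live = self.added.get(element, set()) - self.removed.get(element, set())
        return bool(live)

    def merge(self, other):
        # Merging is a pairwise union, so replicas converge regardless of
        # the order in which they exchange state.
        for elem, tags in other.added.items():
            self.added.setdefault(elem, set()).update(tags)
        for elem, tags in other.removed.items():
            self.removed.setdefault(elem, set()).update(tags)
```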
3

Hemmatpour, Masoud. "High Performance Computing using Infiniband-based clusters." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2750549.

4

Camera, Giancarlo. "A decentralized framework for cross administrative domain data sharing." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/996881.

Abstract:
Federation of messaging and storage platforms located in remote datacenters is essential functionality for sharing data among geographically distributed platforms. When systems are administered by the same owner, data replication reduces data access latency by bringing data closer to applications and enables fault tolerance for disaster recovery of an entire location. When storage platforms are administered by different owners, data replication across administrative domains is essential for enterprise application data integration. Contents and services managed by different software platforms need to be integrated to provide richer contents and services, and clients may need to share subsets of data to enable collaborative analysis and service integration. Platforms usually include proprietary federation functionalities and specific APIs to let external software and platforms access their internal data; these techniques may not be applicable to all environments and networks due to security and technological restrictions. Moreover, the federation of dispersed nodes under a decentralized administration scheme is still a research issue. This thesis is a contribution along this research direction: it introduces and describes a framework, called "WideGroups", directed towards the creation and management of an automatic federation and integration of widely dispersed platform nodes. It is based on groups that exchange messages among distributed applications located in different remote datacenters. Groups are created and managed using client-side programmatic configuration, without touching servers. WideGroups enables the extension of software platform services to nodes belonging to different administrative domains in a wide-area network environment. It lets nodes form ad-hoc overlay networks on the fly, depending on message destinations located in distinct administrative domains, and it supports multiple dynamic overlay networks based on message groups, dynamic discovery of nodes, and automatic setup of overlay networks among nodes with no server-side configuration. I designed and implemented platform connectors to integrate the framework as the federation module of Message Oriented Middleware and Key-Value Store platforms, which are among the most widespread paradigms supporting data sharing in distributed systems.
5

Coelho, Vasco Samuel Rodrigues. "Study and optimization of the memory management in Memcached." Master's thesis, 2019. http://hdl.handle.net/10362/92295.

Abstract:
Over the years, the Internet has become more popular than ever, and web applications like Facebook and Twitter keep gaining users. This results in users generating more and more data that has to be managed efficiently, because access speed matters: a user will not wait more than three seconds for a web page to load before abandoning the site. In-memory key-value stores like Memcached and Redis are used to speed up web applications by speeding up access to data and decreasing the number of accesses to slower data storage. The first deployment of Memcached, on LiveJournal's website, showed that 28 Memcached instances on ten unique hosts, caching the most popular 30 GB of data, can achieve a hit rate of around 92%, considerably reducing the number of database accesses and the response time. Not all cached objects take the same time to recompute, so this research studies and presents a new cost-aware memory management scheme that is easy to integrate into a key-value store; the approach is implemented in Memcached. The new memory management and cache give priority to key-value pairs that take longer to recompute. Instead of replacing Memcached's replacement structure and its policy, we simply add a new segment in each structure that is capable of storing the more costly key-value pairs. Apart from this new segment in each replacement structure, we created a new dynamic cost-aware rebalancing policy in Memcached, giving more memory to store more costly key-value pairs. With the implementation of our approaches, we offer a prototype that can be used to study the effect of recomputation cost on caching-system performance. In certain scenarios, we were also able to improve user access latency and the total recomputation cost of the key-values stored in the system.
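The segment-based idea described here can be sketched briefly. The two-segment split, thresholds, and capacities below are illustrative only and do not reflect Memcached's actual slab and rebalancing machinery:

```python
from collections import OrderedDict

class CostAwareCache:
    """Two-segment cache sketch of the thesis idea: expensive-to-recompute
    pairs live in their own segment, so cheap pairs are evicted first.
    Thresholds and capacities are illustrative, not Memcached's slabs."""

    def __init__(self, cheap_capacity, costly_capacity, cost_threshold):
        self.cheap = OrderedDict()   # regular LRU segment
        self.costly = OrderedDict()  # segment reserved for costly pairs
        self.cheap_capacity = cheap_capacity
        self.costly_capacity = costly_capacity
        self.cost_threshold = cost_threshold

    def set(self, key, value, recompute_cost):
        self.cheap.pop(key, None)    # a key lives in exactly one segment
        self.costly.pop(key, None)
        if recompute_cost >= self.cost_threshold:
            seg, cap = self.costly, self.costly_capacity
        else:
            seg, cap = self.cheap, self.cheap_capacity
        seg[key] = value
        if len(seg) > cap:
            seg.popitem(last=False)  # evict the segment's LRU entry

    def get(self, key):
        for seg in (self.costly, self.cheap):
            if key in seg:
                seg.move_to_end(key)  # refresh recency within its segment
                return seg[key]
        return None
```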

Book chapters on the topic "In-memory key-value store"

1

Scargall, Steve. "pmemkv: A Persistent In-Memory Key-Value Store." In Programming Persistent Memory, 141–53. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-4932-1_9.

2

Zhu, PengFei, GuangYu Sun, Peng Wang, and MingYu Chen. "Improving Memory Access Performance of In-Memory Key-Value Store Using Data Prefetching Techniques." In Lecture Notes in Computer Science, 1–17. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23216-4_1.

3

Kohler, Jens, and Thomas Specht. "Performance Analysis of Vertically Partitioned Data in Clouds Through a Client-Based In-Memory Key-Value Store Cache." In Advances in Intelligent Systems and Computing, 3–13. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19713-5_1.

4

Makris, Antonios, Konstantinos Tserpes, and Dimosthenis Anagnostopoulos. "Load Balancing in In-Memory Key-Value Stores for Response Time Minimization." In Economics of Grids, Clouds, Systems, and Services, 62–73. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61920-0_5.

5

Liu, Chengjian, Kai Ouyang, Xiaowen Chu, Hai Liu, and Yiu-Wing Leung. "R-Memcached: A Reliable In-Memory Cache System for Big Key-Value Stores." In Big Data Computing and Communications, 243–56. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22047-5_20.


Conference papers on the topic "In-memory key-value store"

1

Zheng, Ran, Wenjin Wang, Hai Jin, and Qin Zhang. "SKVM: Scaling in-memory Key-Value store on multicore." In 2015 20th IEEE Symposium on Computers and Communication (ISCC). IEEE, 2015. http://dx.doi.org/10.1109/iscc.2015.7405580.

2

Li, Yin, Hao Wang, Xiaoqing Zhao, Hongbin Sun, and Tong Zhang. "Applying Software-based Memory Error Correction for In-Memory Key-Value Store." In MEMSYS '16: The Second International Symposium on Memory Systems. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2989081.2989091.

3

Tinnefeld, Christian, Alexander Zeier, and Hasso Plattner. "Cache-conscious data placement in an in-memory key-value store." In the 15th Symposium. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2076623.2076640.

4

Meng, Fandong, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, and Di Wang. "Neural Machine Translation with Key-Value Memory-Augmented Attention." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/357.

Abstract:
Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from issues of repeating and dropping translations. To alleviate these issues, we propose a novel key-value memory-augmented attention model for NMT, called KVMEMATT. Specifically, we maintain a continuously updated key-memory to keep track of attention history and a fixed value-memory to store the representation of the source sentence throughout the whole translation process. Via nontrivial transformations and iterative interactions between the two memories, the decoder focuses on more appropriate source word(s) for predicting the next target word at each decoding step, and can therefore improve the adequacy of translations. Experimental results on Chinese-to-English and WMT17 German-English translation tasks demonstrate the superiority of the proposed model.
5

Oh, Hyunyoung, Dongil Hwang, Maja Malenko, Myunghyun Cho, Hyungon Moon, Marcel Baunach, and Yunheung Paek. "XTENSTORE: Fast Shielded In-memory Key-Value Store on a Hybrid x86-FPGA System." In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2022. http://dx.doi.org/10.23919/date54114.2022.9774583.

6

Qiu, Yunhui, Hankun Lv, Jinyu Xie, Wenbo Yin, and Lingli Wang. "Ultra-Low-Latency and Flexible In-memory Key-Value Store System Design on CPU-FPGA." In 2018 International Conference on Field-Programmable Technology (FPT). IEEE, 2018. http://dx.doi.org/10.1109/fpt.2018.00030.

7

Waddington, Daniel, Clem Dickey, Luna Xu, Travis Janssen, Jantz Tran, and Kshitij Doshi. "Evaluating Intel 3D-Xpoint NVDIMM Persistent Memory in the Context of a Key-Value Store." In 2020 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE, 2020. http://dx.doi.org/10.1109/ispass48437.2020.00035.

8

Iwazume, Michiaki, Takahiro Iwase, Kouji Tanaka, Hideaki Fujii, Makoto Hijiya, and Hiroshi Haraguchi. "Big data in memory: Benchmarking in memory database using the distributed key-value store for machine to machine communication." In 2014 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD). IEEE, 2014. http://dx.doi.org/10.1109/snpd.2014.6888748.

9

Iwazume, Michiaki, Takahiro Iwase, Kouji Tanaka, and Hideaki Fujii. "Big Data in Memory: Benchmarking in Memory Database Using the Distributed Key-Value Store for Constructing a Large Scale Information Infrastructure." In 2014 IEEE 38th International Computer Software and Applications Conference Workshops (COMPSACW). IEEE, 2014. http://dx.doi.org/10.1109/compsacw.2014.37.

10

Tinnefeld, Christian, and Hasso Plattner. "Exploiting memory locality in distributed key-value stores." In 2011 IEEE International Conference on Data Engineering Workshops (ICDEW). IEEE, 2011. http://dx.doi.org/10.1109/icdew.2011.5767668.
