Journal articles on the topic 'In-memory-computing (IMC)'

Consult the top 50 journal articles for your research on the topic 'In-memory-computing (IMC).'


1

Song, Soonbum, and Youngmin Kim. "Novel In-Memory Computing Adder Using 8+T SRAM." Electronics 11, no. 6 (2022): 929. http://dx.doi.org/10.3390/electronics11060929.

Abstract:
Von Neumann architecture-based computing systems are facing a von Neumann bottleneck owing to data transfer between separated memory and processor units. In-memory computing (IMC), on the other hand, reduces energy consumption and improves computing performance. This study explains an 8+T SRAM IMC circuit based on 8+T differential SRAM (8+T SRAM) and proposes 8+T SRAM-based IMC full adder (FA) and 8+T SRAM-based IMC approximate adder, which are based on the 8+T SRAM IMC circuit. The 8+T SRAM IMC circuit performs SRAM read and bitwise operations simultaneously and performs each logic operation
2

Mannocci, P., M. Farronato, N. Lepri, et al. "In-memory computing with emerging memory devices: Status and outlook." APL Machine Learning 1, no. 1 (2023): 010902. http://dx.doi.org/10.1063/5.0136403.

Abstract:
In-memory computing (IMC) has emerged as a new computing paradigm able to alleviate or suppress the memory bottleneck, which is the major concern for energy efficiency and latency in modern digital computing. While the IMC concept is simple and promising, the details of its implementation cover a broad range of problems and solutions, including various memory technologies, circuit topologies, and programming/processing algorithms. This Perspective aims at providing an orientation map across the wide topic of IMC. First, the memory technologies will be presented, including both conventional com
3

Sun, Zhaohui, Yang Feng, Peng Guo, et al. "Flash-based in-memory computing for stochastic computing in image edge detection." Journal of Semiconductors 44, no. 5 (2023): 054101. http://dx.doi.org/10.1088/1674-4926/44/5/054101.

Abstract:
The “memory wall” of traditional von Neumann computing systems severely restricts the efficiency of data-intensive task execution, while in-memory computing (IMC) architecture is a promising approach to breaking the bottleneck. Although variations and instability in ultra-scaled memory cells seriously degrade the calculation accuracy in IMC architectures, stochastic computing (SC) can compensate for these shortcomings due to its low sensitivity to cell disturbances. Furthermore, massive parallel computing can be processed to improve the speed and efficiency of the system. In this pape
4

Pedretti, Giacomo, and Daniele Ielmini. "In-Memory Computing with Resistive Memory Circuits: Status and Outlook." Electronics 10, no. 9 (2021): 1063. http://dx.doi.org/10.3390/electronics10091063.

Abstract:
In-memory computing (IMC) refers to non-von Neumann architectures where data are processed in situ within the memory by taking advantage of physical laws. Among the memory devices that have been considered for IMC, the resistive switching memory (RRAM), also known as memristor, is one of the most promising technologies due to its relatively easy integration and scaling. RRAM devices have been explored for both memory and IMC applications, such as neural network accelerators and neuromorphic processors. This work presents the status and outlook on the RRAM for analog computing, where the precis
5

Ayyappa, B. Kanth Naga. "Compute SNDR-Boosted 22-nm MRAM-Based In-Memory Computing Macro Using Statistical Error Compensation." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–7. https://doi.org/10.55041/ijsrem49276.

Abstract:
The rapid growth of AI and data-intensive applications necessitates energy-efficient and high-performance memory solutions. In-memory computing (IMC) offers a paradigm shift by reducing data movement and enabling computation directly within memory arrays. This work presents a Compute SNDR-Boosted (Statistical Noise and Defect Resilience) 22-nm MRAM-based IMC macro that leverages statistical error compensation to mitigate device-level variability and noise. Our method integrates a statistical correction engine, enhancing the Signal-to-Noise and Distortion Ratio (SNDR), thereby achiev
6

Kim, Manho, Sung-Ho Kim, Hyuk-Jae Lee, and Chae-Eun Rhee. "Case Study on Integrated Architecture for In-Memory and In-Storage Computing." Electronics 10, no. 15 (2021): 1750. http://dx.doi.org/10.3390/electronics10151750.

Abstract:
Since the advent of computers, computing performance has been steadily increasing. Moreover, recent technologies are mostly based on massive data, and the development of artificial intelligence is accelerating it. Accordingly, various studies are being conducted to increase the performance of computing and data access while reducing energy consumption. In-memory computing (IMC) and in-storage computing (ISC) are currently the most actively studied architectures to deal with the challenges of recent technologies. Since IMC performs operations in memory, there is a chance to overcome the me
7

Bansla, Neetu, and Rajneesh. "Future ERP: In-Memory Computing (IMC) Technology Infusion." Journal of Information Technology and Sciences 6, no. 3 (2020): 17–21. http://dx.doi.org/10.46610/joits.2020.v06i03.003.

8

Akgül, Berke, and Tufan Coşkun Karalar. "An Energy Consumption Model for SRAM-Based In-Memory-Computing Architectures." Electronics 13, no. 6 (2024): 1121. http://dx.doi.org/10.3390/electronics13061121.

Abstract:
In this paper, a mathematical model for obtaining energy consumption of IMC architectures is constructed. This model provides energy estimation based on the distribution of a specific dataset. In addition, the estimation reduces the required simulation time to create an energy consumption model of SRAM-based IMC architectures. To validate our model with realistic data, the energy consumption of IMC is compared by using NeuroSim V3.0 for the CIFAR-10 and MNIST-like datasets. Furthermore, an application is created with our model to select the highest-performing quantization mapping based upon the pa
9

Ponzina, Flavio, Marco Rios, Alexandre Levisse, Giovanni Ansaloni, and David Atienza. "Overflow-free Compute Memories for Edge AI Acceleration." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–23. http://dx.doi.org/10.1145/3609387.

Abstract:
Compute memories are memory arrays augmented with dedicated logic to support arithmetic. They support the efficient execution of data-centric computing patterns, such as those characterizing Artificial Intelligence (AI) algorithms. These architectures can provide computing capabilities as part of the memory array structures (In-Memory Computing, IMC) or at their immediate periphery (Near-Memory Computing, NMC). By bringing the processing elements inside (or very close to) storage, compute memories minimize the cost of data access. Moreover, highly parallel (and, hence, high-performance) comput
10

Xu, Peilong, Dan Lan, Fengyun Wang, and Incheol Shin. "In-Memory Computing Integrated Structure Circuit Based on Nonvolatile Flash Memory Unit." Electronics 12, no. 14 (2023): 3155. http://dx.doi.org/10.3390/electronics12143155.

Abstract:
Artificial intelligence has made people’s demands for computer computing efficiency increasingly high. The traditional hardware circuit simulation method for neural morphology computation has problems of unstable performance and excessive power consumption. This research will use non-volatile flash memory cells that are easy to read and write to build a convolutional neural network structure to improve the performance of neural morphological computing. In the experiment, floating-gate transistors were used to simulate neural network synapses to design core cross-array circuits. A voltage subtr
11

Zhang, Jin, Zhiting Lin, Xiulong Wu, et al. "An 8T SRAM Array with Configurable Word Lines for In-Memory Computing Operation." Electronics 10, no. 3 (2021): 300. http://dx.doi.org/10.3390/electronics10030300.

Abstract:
In-memory computing (IMC) has been widely accepted to be an effective method to improve energy efficiency. To realize IMC, operands in static random-access memory (SRAM) are stored in columns, which contradicts SRAM write patterns and requires additional data movement. In this paper, an 8T SRAM array with configurable word lines is proposed, in which the operands are arranged in rows, following the traditional SRAM storage pattern, and therefore additional data movement is not required. The proposed structure supports three different computing modes. In the ternary multiplication mode, the ref
12

Xue, Wang, Liu, Lv, Wang, and Zeng. "An RISC-V Processor with Area-Efficient Memristor-Based In-Memory Computing for Hash Algorithm in Blockchain Applications." Micromachines 10, no. 8 (2019): 541. http://dx.doi.org/10.3390/mi10080541.

Abstract:
Blockchain technology is increasingly being used in Internet of things (IoT) devices for information security and data integrity. However, it is challenging to implement complex hash algorithms with limited resources in IoT devices owing to large energy consumption and a long processing time. This paper proposes an RISC-V processor with memristor-based in-memory computing (IMC) for blockchain technology in IoT applications. The IMC-adapted instructions were designed for the Keccak hash algorithm by virtue of the extendibility of the RISC-V instruction set architecture (ISA). Then, an RISC-V pr
13

Nguyen, Thanh-Dat, Minh-Son Le, Thi-Nhan Pham, and Ik-Joon Chang. "TA-Quatro: Soft Error-Resilient and Power-Efficient SRAM Cell for ADC-Less Binary Weight and Ternary Activation In-Memory Computing." Electronics 13, no. 15 (2024): 2904. http://dx.doi.org/10.3390/electronics13152904.

Abstract:
Some applications, such as satellites, require ultralow power and high-radiation resilience. We developed a 12T soft error-resilient SRAM cell, TA-Quatro, to deliver in-memory computing (IMC) for those applications. Based on our TA-Quatro cell, we implemented an IMC circuit to support binary weights and ternary activations in a single SRAM cell. Our simulation under 28 nm FD-SOI technology demonstrates that the TA-Quatro IMC circuit maintains good IMC stability at a scaled supply of 0.7 V and achieves ternary activation without needing analog-to-digital converters. These advancements significantly
14

Krishnan, Gokul, Sumit K. Mandal, Manvitha Pannala, et al. "SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks." ACM Transactions on Embedded Computing Systems 20, no. 5s (2021): 1–24. http://dx.doi.org/10.1145/3476999.

Abstract:
In-memory computing (IMC) on a monolithic chip for deep learning faces dramatic challenges on area, yield, and on-chip interconnection cost due to the ever-increasing model sizes. 2.5D integration or chiplet-based architectures interconnect multiple small chips (i.e., chiplets) to form a large computing system, presenting a feasible solution beyond a monolithic IMC architecture to accelerate large deep learning models. This paper presents a new benchmarking simulator, SIAM, to evaluate the performance of chiplet-based IMC architectures and explore the potential of such a paradigm shift in IMC
15

Levin, Semen M. "Unleashing real-time analytics: A comparative study of in-memory computing vs. traditional disk-based systems." Brazilian Journal of Science 3, no. 5 (2024): 30–39. http://dx.doi.org/10.14295/bjs.v3i5.553.

Abstract:
The article presents a comprehensive study evaluating the performance differences between in-memory computing (IMC) and traditional disk-based database systems, specifically focusing on Redis and PostgreSQL. Given the escalating demands for real-time data analytics across various sectors, the research delves into the comparative efficiency of these two data management paradigms in processing large datasets. Utilizing a synthetic dataset of 23.6 million records, we orchestrated a series of data manipulation tasks, including aggregation, table joins, and filtering operations, to simulate real-wo
16

Bärenfänger, Rieke, Boris Otto, and Hubert Österle. "Business value of in-memory technology – multiple-case study insights." Industrial Management & Data Systems 114, no. 9 (2014): 1396–414. http://dx.doi.org/10.1108/imds-07-2014-0212.

Abstract:
Purpose – The purpose of this paper is to assess the business value of in-memory computing (IMC) technology by analyzing its organizational impact in different application scenarios.
Design/methodology/approach – This research applies a multiple-case study methodology analyzing five cases of IMC application scenarios in five large European industrial and service-sector companies.
Findings – Results show that IMC can deliver business value in various applications ranging from advanced analytic insights to support of real-time processes. This enables higher-level organizational advantages like d
17

Kiran Cherupally, Sai, Jian Meng, Adnan Siraj Rakin, et al. "Improving the accuracy and robustness of RRAM-based in-memory computing against RRAM hardware noise and adversarial attacks." Semiconductor Science and Technology 37, no. 3 (2022): 034001. http://dx.doi.org/10.1088/1361-6641/ac461f.

Abstract:
We present a novel deep neural network (DNN) training scheme and resistive RAM (RRAM) in-memory computing (IMC) hardware evaluation towards achieving high accuracy against RRAM device/array variations and enhanced robustness against adversarial input attacks. We present improved IMC inference accuracy results evaluated on state-of-the-art DNNs including ResNet-18, AlexNet, and VGG with binary, 2-bit, and 4-bit activation/weight precision for the CIFAR-10 dataset. These DNNs are evaluated with measured noise data obtained from three different RRAM-based IMC prototype chips. Across thes
18

Lee, Sanghyun, and Youngmin Kim. "Charge-Domain Static Random Access Memory-Based In-Memory Computing with Low-Cost Multiply-and-Accumulate Operation and Energy-Efficient 7-Bit Hybrid Analog-to-Digital Converter." Electronics 13, no. 3 (2024): 666. http://dx.doi.org/10.3390/electronics13030666.

Abstract:
This study presents a charge-domain SRAM-based in-memory computing (IMC) architecture. The multiply-and-accumulate (MAC) operation in the IMC structure is divided into current- and charge-domain methods. Current-domain IMC has high-power consumption and poor linearity. Charge-domain IMC has reduced variability compared with current-domain IMCs, achieving higher linearity and enabling energy-efficient operation with fewer dynamic current paths. The proposed IMC structure uses a 9T1C bitcell considering the trade-off between the bitcell area and the threshold voltage drop by an NMOS access trans
19

Kim, Jiyoung, Minjong Lee, Geon Park, and Dushyant M. Narayan. "Ferroelectric Hf0.5Zr0.5O2 Memcapacitor Array for in-Memory Computing." ECS Meeting Abstracts MA2025-01, no. 31 (2025): 1592. https://doi.org/10.1149/ma2025-01311592mtgabs.

Abstract:
The rapid progress in artificial intelligence (AI) is driving the need for more efficient computing architectures capable of performing the demanding, repetitive computations required for processing large datasets. Unlike traditional von Neumann architectures, which separate processing and memory units, new approaches aim to integrate computational capabilities directly into memory arrays, facilitating in-memory computing (IMC). This integration significantly improves both energy efficiency and processing speed, particularly for AI and similar applications. A crucial operation within IMC is ve
20

Mambu, Kévin, Henri-Pierre Charles, Maha Kooli, and Julie Dumas. "Towards Integration of a Dedicated Memory Controller and Its Instruction Set to Improve Performance of Systems Containing Computational SRAM." Journal of Low Power Electronics and Applications 12, no. 1 (2022): 18. http://dx.doi.org/10.3390/jlpea12010018.

Abstract:
In-memory computing (IMC) aims to solve the performance gap between CPU and memories introduced by the memory wall. However, it does not address the energy wall problem caused by data transfer over memory hierarchies. This paper proposes the data-locality management unit (DMU) to efficiently transfer data from a DRAM memory to a computational SRAM (C-SRAM) memory allowing IMC operations. The DMU is tightly coupled within the C-SRAM and allows one to align the data structure in order to perform effective in-memory computation. We propose a dedicated instruction set within the DMU to issue data
21

Kumar, Sreeja S., and Jagadish Nayak. "Effective 8T Reconfigurable SRAM for Data Integrity and Versatile In-Memory Computing-Based AI Acceleration." Electronics 14, no. 13 (2025): 2719. https://doi.org/10.3390/electronics14132719.

Abstract:
For data-intensive applications like edge AI and image processing, we present a new reconfigurable 8T SRAM-based in-memory computing (IMC) macro designed for high-performance and energy-efficient operation. This architecture mitigates von Neumann limitations through numerous major breakthroughs. We built a new architecture with an adjustable capacitance array to substantially increase the multiply-and-accumulate (MAC) engine’s accuracy. It achieves 10–20 TOPS/W and >95% accuracy for 4–10-bit operations and is robust across PVT changes. By supporting binary and ternary neural networks (BNN/T
22

Rajput, Anil Kumar, and Manisha Pattanaik. "A Nonvolatile 7T2M SRAM Cell with Improved Noise Margin for Energy Efficient In Memory Boolean Computations." International Journal of Engineering Research in Electronics and Communication Engineering 9, no. 1 (2022): 1–8. http://dx.doi.org/10.36647/ijerece/09.01.a001.

Abstract:
The current computing systems are facing von Neumann bottleneck (VNB) in modern times due to the high prominence on big-data applications such as artificial intelligence and neuromorphic computing. In-memory computation is one of the emerging computing paradigms to mitigate this VNB. In this paper, a memristor-based robust 7T2M Nonvolatile-SRAM (NvSRAM) is proposed for energy-efficient In-memory computation. The 7T2M NvSRAM is designed using CMOS and memristor with a higher resistance ratio, which improved the write margin by 74.44% and the energy consumption for read and write operation by 5.
23

Lin, Huai, Xi Luo, Long Liu, et al. "All-Electrical Control of Compact SOT-MRAM: Toward Highly Efficient and Reliable Non-Volatile In-Memory Computing." Micromachines 13, no. 2 (2022): 319. http://dx.doi.org/10.3390/mi13020319.

Abstract:
Two-dimensional van der Waals (2D vdW) ferromagnets possess outstanding scalability, controllable ferromagnetism, and out-of-plane anisotropy, enabling the compact spintronics-based non-volatile in-memory computing (nv-IMC) that promises to tackle the memory wall bottleneck issue. Here, by employing the intriguing room-temperature ferromagnetic characteristics of emerging 2D Fe3GeTe2 with the dissimilar electronic structure of the two spin-conducting channels, we report on a new type of non-volatile spin-orbit torque (SOT) magnetic tunnel junction (MTJ) device based on Fe3GeTe2/MgO/Fe3GeTe2 he
24

Krishnan, Gokul, Sumit K. Mandal, Chaitali Chakrabarti, Jae-Sun Seo, Umit Y. Ogras, and Yu Cao. "Impact of On-chip Interconnect on In-memory Acceleration of Deep Neural Networks." ACM Journal on Emerging Technologies in Computing Systems 18, no. 2 (2022): 1–22. http://dx.doi.org/10.1145/3460233.

Abstract:
With the widespread use of Deep Neural Networks (DNNs), machine learning algorithms have evolved in two diverse directions—one with ever-increasing connection density for better accuracy and the other with more compact sizing for energy efficiency. The increase in connection density increases on-chip data movement, which makes efficient on-chip communication a critical function of the DNN accelerator. The contribution of this work is threefold. First, we illustrate that the point-to-point (P2P)-based interconnect is incapable of handling a high volume of on-chip data movement for DNNs. Second,
25

Sun, Chao, Junhao Zhou, Kyong-Seok Jang, and Youngok Kim. "Intelligent Mesh Cluster Algorithm for Device-Free Localization in Wireless Sensor Networks." Electronics 12, no. 16 (2023): 3426. http://dx.doi.org/10.3390/electronics12163426.

Abstract:
Device-free localization (DFL) is a technology designed to determine the positions of targets without the need for them to carry electronic devices. It achieves this by analyzing the shadowing effects of radio links within wireless sensor networks (WSNs). However, obtaining high precision in DFL often results in increased energy consumption, severe electromagnetic interference, and other challenges that impact positioning accuracy. Most DFL schemes for accurate tracking require substantial memory and computing resources, which make them unsuitable for resource-constrained applications. To addr
26

Mucherla Usha Rani, et al. "In-Memory Computing Based Reliable and High Speed Schmitt trigger 10T SRAM cell design." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (2023): 1389–97. http://dx.doi.org/10.17762/ijritcc.v11i10.8681.

Abstract:
Static random access memories (SRAM) are useful building blocks in various applications, including cache memories, integrated data storage systems, and microprocessors. In-memory computing addresses the von Neumann bottleneck by eliminating unnecessary, frequent data transfers between memory and processing units. In this research, a replica-based 10T SRAM design for in-memory computing (IMC) is implemented by adapting a word-line control scheme in 14 nm CMOS technology. In order to achieve high reading and writing capability, the Schmitt trigger inverter was used
27

Lim, Kaeun, Jinhyun Kim, Eunsu Kim, and Youngmin Kim. "Enhanced Dual Carry Approximate Adder with Error Reduction Unit for High-Performance Multiplier and In-Memory Computing." Electronics 14, no. 9 (2025): 1702. https://doi.org/10.3390/electronics14091702.

Abstract:
The Dual Carry Approximate Adder (DCAA) is proposed as an advanced 8-bit approximate adder featuring dual carry-out and carry-in full adders (FAs) along with an Error Reduction Unit (ERU) to enhance accuracy. The 8-bit adder is partitioned into upper and lower 4-bit blocks, connected via a dual carry-out full adder and a dual carry-in full adder. To minimize impact on the critical path, an ERU is designed for efficient error correction. Four variants of the DCAA are provided, allowing users to select the most suitable design based on their specific power, area, and accuracy requirements. The D
28

Gul, Waqas, Maitham Shams, and Dhamin Al-Khalili. "FinFET 6T-SRAM All-Digital Compute-in-Memory for Artificial Intelligence Applications: An Overview and Analysis." Micromachines 14, no. 8 (2023): 1535. http://dx.doi.org/10.3390/mi14081535.

Abstract:
Artificial intelligence (AI) has revolutionized present-day life through automation and independent decision-making capabilities. For AI hardware implementations, the 6T-SRAM cell is a suitable candidate due to its performance edge over its counterparts. However, modern AI hardware such as neural networks (NNs) access off-chip data quite often, degrading the overall system performance. Compute-in-memory (CIM) reduces off-chip data access transactions. One CIM approach is based on the mixed-signal domain, but it suffers from limited bit precision and signal margin issues. An alternate emerging
29

Nanda, Ipseeta, and Rajesh De. "REVIEW OF CLOUD COMPUTING CRYPTOGRAPHY." Information Management And Computer Science 5, no. 2 (2022): 31–33. http://dx.doi.org/10.26480/imcs.02.2022.31.33.

Abstract:
The delivery of computing services over the internet, as opposed to storing data on a local memory device or a proprietary disc drive, is known as cloud computing. Servers, storage, databases, networking, and software are some examples of computing services. The primary justification and major benefit of using the cloud are the user’s ability to store data there and access it from any location at any time, as well as the low cost of all its services. Despite this, because the data stored in the cloud is not directly maintained by the customer, security has always been a major concern with clou
30

Verma, Anil, Divya Anand, Aman Singh, et al. "IoT-Inspired Reliable Irregularity-Detection Framework for Education 4.0 and Industry 4.0." Electronics 11, no. 9 (2022): 1436. http://dx.doi.org/10.3390/electronics11091436.

Abstract:
Education 4.0 imitates Industry 4.0 in many aspects such as technology, customs, challenges, and benefits. The remarkable advancement in embryonic technologies, including IoT (Internet of Things), Fog Computing, Cloud Computing, and Augmented and Virtual Reality (AR/VR), polishes every dimension of Industry 4.0. The constructive impacts of Industry 4.0 are also replicated in Education 4.0. Real-time assessment, irregularity detection, and alert generation are some of the leading necessities of Education 4.0. Conspicuously, this study proposes a reliable assessment, irregularity detection, and
31

Krasnov, Mikhail Mikhailovich, and Olga Borisovna Feodoritova. "Using the functional programming library for solving numerical problems on graphics accelerators with CUDA technology." Proceedings of the Institute for System Programming of the RAS 33, no. 5 (2021): 167–80. http://dx.doi.org/10.15514/ispras-2021-33(5)-10.

Abstract:
Modern graphics accelerators (GPUs) can significantly speed up the execution of numerical tasks. However, porting programs to graphics accelerators is not an easy task. Sometimes the transfer of programs to such accelerators is carried out by almost completely rewriting them (for example, when using the OpenCL technology). This raises the daunting task of maintaining two independent source codes. However, CUDA graphics accelerators, thanks to technology developed by NVIDIA, allow you to have a single source code for both conventional processors (CPUs) and CUDA. The machine code generated when
32

Backx, Rosa, Caroline Skirrow, Pasquale Dente, Jennifer H. Barnett, and Francesca K. Cormack. "Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study." Journal of Medical Internet Research 22, no. 8 (2020): e16792. http://dx.doi.org/10.2196/16792.

Abstract:
Background: Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance.
Objective: This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based asse
33

An, SangWoo, and Seog Chung Seo. "Highly Efficient Implementation of Block Ciphers on Graphic Processing Units for Massively Large Data." Applied Sciences 10, no. 11 (2020): 3711. http://dx.doi.org/10.3390/app10113711.

Abstract:
With the advent of IoT and Cloud computing service technology, the size of user data to be managed and file data to be transmitted has been significantly increased. To protect users’ personal information, it is necessary to encrypt it in secure and efficient way. Since servers handling a number of clients or IoT devices have to encrypt a large amount of data without compromising service capabilities in real-time, Graphic Processing Units (GPUs) have been considered as a proper candidate for a crypto accelerator for processing a huge amount of data in this situation. In this paper, we present h
34

Zeinati, Aseel, Durga Misra, Dina H. Triyoso, et al. "Impact of Bottom Electrode in HfO2-Based Rram Devices on Switching Characteristics." ECS Meeting Abstracts MA2023-01, no. 29 (2023): 1783. http://dx.doi.org/10.1149/ma2023-01291783mtgabs.

Abstract:
Resistive random-access memory (RRAM) devices with hydrogen plasma treated HfO2 have shown low power switching (1) and good conductance quantization with programming pulsed operation (2) that qualify them to be used for in-memory computing. Engineering the distribution of defects or oxygen vacancies near the top and bottom electrodes has a significant impact on reducing the switching power and improving the multi-level cell (MLC) characteristics of the device. A graded distribution with higher concentration of oxygen vacancies closer to the top electrode (TE) due to hydrogen plasma treatment an
35

Alotaibi, Nouf Saeed, Hassan Ibrahim Ahmed, and Samah Osama M. Kamel. "Dynamic Adaptation Attack Detection Model for a Distributed Multi-Access Edge Computing Smart City." Sensors 23, no. 16 (2023): 7135. http://dx.doi.org/10.3390/s23167135.

Abstract:
The internet of things (IoT) technology presents an intelligent way to improve our lives and contributes to many fields such as industry, communications, agriculture, etc. Unfortunately, IoT networks are exposed to many attacks that may destroy the entire network and consume network resources. This paper aims to propose intelligent process automation and an auto-configured intelligent automation detection model (IADM) to detect and prevent malicious network traffic and behaviors/events at distributed multi-access edge computing in an IoT-based smart city. The proposed model consists of two pha
APA, Harvard, Vancouver, ISO, and other styles
36

Kazemi, Arman, Franz Müller, Mohammad Mehdi Sharifi, et al. "Achieving software-equivalent accuracy for hyperdimensional computing with ferroelectric-based in-memory computing." Scientific Reports 12, no. 1 (2022). http://dx.doi.org/10.1038/s41598-022-23116-w.

Full text
Abstract:
Hyperdimensional computing (HDC) is a brain-inspired computational framework that relies on long hypervectors (HVs) for learning. In HDC, computational operations consist of simple manipulations of hypervectors and can be incredibly memory-intensive. In-memory computing (IMC) can greatly improve the efficiency of HDC by reducing data movement in the system. Most existing IMC implementations of HDC are limited to binary precision which inhibits the ability to match software-equivalent accuracies. Moreover, memory arrays used in IMC are restricted in size and cannot immediately support t
APA, Harvard, Vancouver, ISO, and other styles
37

Dazzi, Martino, Abu Sebastian, Luca Benini, and Evangelos Eleftheriou. "Accelerating Inference of Convolutional Neural Networks Using In-memory Computing." Frontiers in Computational Neuroscience 15 (August 3, 2021). http://dx.doi.org/10.3389/fncom.2021.674154.

Full text
Abstract:
In-memory computing (IMC) is a non-von Neumann paradigm that has recently established itself as a promising approach for energy-efficient, high throughput hardware for deep learning applications. One prominent application of IMC is that of performing matrix-vector multiplication in O(1) time complexity by mapping the synaptic weights of a neural-network layer to the devices of an IMC core. However, because of the significantly different pattern of execution compared to previous computational paradigms, IMC requires a rethinking of the architectural design choices made when designing deep-learn
APA, Harvard, Vancouver, ISO, and other styles
38

Ren, Sheng‐Guang, A‐Wei Dong, Ling Yang, et al. "Self‐Rectifying Memristors for Three‐Dimensional in‐Memory Computing." Advanced Materials, November 16, 2023. http://dx.doi.org/10.1002/adma.202307218.

Full text
Abstract:
Costly data movement in terms of time and energy in traditional von Neumann systems is exacerbated by emerging information technologies related to artificial intelligence (AI). In‐memory computing (IMC) architecture aims to address this problem. Although the IMC hardware prototype represented by a memristor has developed rapidly and performs well, the sneak path issue is a critical and unavoidable challenge prevalent in large‐scale and high‐density crossbar arrays, particularly in three‐dimensional (3D) integration. As a perfect solution to the sneak‐path issue, self‐rectifying memrist
APA, Harvard, Vancouver, ISO, and other styles
39

Zhou, Houji, Jia Chen, Jiancong Li, Ling Yang, Yi Li, and Xiangshui Miao. "Bring memristive in-memory computing into general-purpose machine learning: A perspective." APL Machine Learning 1, no. 4 (2023). http://dx.doi.org/10.1063/5.0167743.

Full text
Abstract:
In-memory computing (IMC) using emerging nonvolatile devices has received considerable attention due to its great potential for accelerating artificial neural networks and machine learning tasks. As the basic concept and operation modes of IMC are now well established, there is growing interest in employing its wide and general application. In this perspective, the path that leads memristive IMC to general-purpose machine learning is discussed in detail. First, we reviewed the development timeline of machine learning algorithms that employ memristive devices, such as resistive random-access me
APA, Harvard, Vancouver, ISO, and other styles
40

Fan, Anjunyi, Yihan Fu, Yaoyu Tao, et al. "Hadamard product-based in-memory computing design for floating point neural network training." Neuromorphic Computing and Engineering, February 9, 2023. http://dx.doi.org/10.1088/2634-4386/acbab9.

Full text
Abstract:
Deep neural networks (DNNs) are a key field of machine learning and require considerable computational resources for cognitive tasks. As a novel technology to perform computing inside/near memory units, in-memory computing (IMC) significantly improves computing efficiency by reducing the need for repetitive data transfer between the processing and memory units. However, prior IMC designs mainly focus on the acceleration of DNN inference. DNN training with IMC hardware has rarely been proposed. The challenges lie in the requirement of DNN training for high precision (e.
APA, Harvard, Vancouver, ISO, and other styles
41

Chang, Chih-Cheng, Shao-Tzu Li, Tong-Lin Pan, et al. "Device quantization policy in variation-aware in-memory computing design." Scientific Reports 12, no. 1 (2022). http://dx.doi.org/10.1038/s41598-021-04159-x.

Full text
Abstract:
Device quantization of in-memory computing (IMC) that considers the non-negligible variation and finite dynamic range of practical memory technology is investigated, aiming for quantitatively co-optimizing system performance on accuracy, power, and area. Architecture- and algorithm-level solutions are taken into consideration. Weight-separate mapping, VGG-like algorithm, multiple cells per weight, and fine-tuning of the classifier layer are effective for suppressing inference accuracy loss due to variation and allow for the lowest possible weight precision to improve area and energy ef
APA, Harvard, Vancouver, ISO, and other styles
42

Snasel, Vaclav, Tran Khanh Dang, Josef Kueng, and Lingping Kong. "A review of in-memory computing for machine learning: architectures, options." International Journal of Web Information Systems, December 22, 2023. http://dx.doi.org/10.1108/ijwis-08-2023-0131.

Full text
Abstract:
Purpose: This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the history, architectures and options aspects. In this review, the authors investigate different architectural aspects and collect and provide comparative evaluations. Design/methodology/approach: Collecting over 40 IMC papers related to hardware design and optimization techniques of recent years, the authors classify them into three optimization option categories: optimization through graphics processing unit (GPU), optimization through reduced precision and optimization through hardware accelerato
APA, Harvard, Vancouver, ISO, and other styles
43

He, Song, Wenkang Zhong, Mufan Zhu, et al. "In‐Memory Computing with Self‐Rectification and Dynamic Logical Reconfiguration of 12 Algorithms in a Single Halide Perovskites." Advanced Functional Materials, May 5, 2025. https://doi.org/10.1002/adfm.202424114.

Full text
Abstract:
Although memristor‐based in‐memory computing (IMC) prototypes demonstrate great progress and performance, integrating high flexibility and programmability into large‐scale, high‐density crossbar arrays remains a major hurdle for advanced computing systems. Herein, the execution of 12 distinct algorithms is successfully implemented in a single halide‐perovskite‐based IMC device, leading to the construction of a halide perovskite memory with reconfigurable logic operation capabilities. Moreover, the device exhibits robust anti‐crosstalk performance, paving the way for its potential application i
APA, Harvard, Vancouver, ISO, and other styles
44

Boniardi, Mattia, Matteo Baldo, Mario Allegra, and Andrea Redaelli. "Phase Change Memory: A Review on Electrical Behavior and Use in Analog In‐Memory‐Computing (A‐IMC) Applications." Advanced Electronic Materials, November 19, 2024. http://dx.doi.org/10.1002/aelm.202400599.

Full text
Abstract:
Recent development and progress of artificial intelligence (AI) algorithms have made clear that this topic is a paradigm shift with respect to the past. High throughput and the ability to perform complex tasks make AI a great field of opportunity. This advancement is, however, limited by the physical implementation of chips that are still bound to the historical von Neumann architecture, with processing units and memory hardware spatially separated. The way data are bussed and processed also needs disruptive, rather than evolutionary, innovation. In Analog In‐Memory Computing (A‐IMC) the
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Woyu, Zhi Li, Xinyuan Zhang, et al. "Fully Binarized Graph Convolutional Network Accelerator Based on In‐Memory Computing with Resistive Random‐Access Memory." Advanced Intelligent Systems, March 25, 2024. http://dx.doi.org/10.1002/aisy.202300784.

Full text
Abstract:
Artificial intelligence for graph‐structured data has achieved remarkable success in applications such as recommendation systems, social networks, drug discovery, and circuit annotation. Graph convolutional networks (GCNs) are an effective way to learn representations of various graphs. The increasing size and complexity of graphs call for in‐memory computing (IMC) accelerators for GCN to alleviate massive data transmission between off‐chip memory and processing units. However, GCN implementation with IMC is challenging because of the large memory consumption, irregular memory access, and devi
APA, Harvard, Vancouver, ISO, and other styles
46

Lalchhandama, F., Mukesh Sahani, Vompolu Mohan Srinivas, Indranil Sengupta, and Kamalika Datta. "In-Memory Computing on Resistive RAM Systems Using Majority Operation." Journal of Circuits, Systems and Computers 31, no. 04 (2021). http://dx.doi.org/10.1142/s0218126622500712.

Full text
Abstract:
Memristors can be used to build nonvolatile memory systems with in-memory computing (IMC) capabilities. A number of prior works demonstrate the design of an IMC-capable memory macro using a memristor crossbar. However, read disturbance limits the use of such memory systems built using a 0-transistor, 1-RRAM (0T1R) structure that suffers from the sneak path problem. In this paper, we introduce a scheme for both memory and logic operations using the 1-transistor, 1-RRAM (1T1R) memristor crossbar, which effectively mitigates the read disturbance problem. The memory array is designed using nMOS tr
APA, Harvard, Vancouver, ISO, and other styles
47

Cui, Xiaole, Mingqi Yin, Hanqing Liu, and Xiaoxin Cui. "The Resistance Analysis Attack and Security Enhancement of the IMC LUT based on the Complementary Resistive Switch Cells." ACM Transactions on Design Automation of Electronic Systems, August 24, 2023. http://dx.doi.org/10.1145/3616870.

Full text
Abstract:
Resistive random-access memory (RRAM)-based in-memory computing (IMC) is an emerging architecture that addresses the challenge of the “memory wall” problem. The complementary resistive switch (CRS) cell connects two bipolar RRAM elements anti-serially to reduce the sneak current in the crossbar array. The CRS array is a generic computing platform, as arbitrary logic functions can be implemented in it. The IMC CRS LUT consumes fewer CRS cells than the static CRS LUT. The CRS array has built-in polymorphic characteristics because the correct logic function cannot be distinguished based on
APA, Harvard, Vancouver, ISO, and other styles
48

Vardar, Alptekin, Marcel Günther, Franz Müller, Nellie Laleni, Konrad Seidel, and Thomas Kämpfe. "Reliable multi-level cell programming in FeFET arrays for in-memory computing." Japanese Journal of Applied Physics, March 31, 2025. https://doi.org/10.35848/1347-4065/adc745.

Full text
Abstract:
With the increasing demands of AI and edge computing, traditional computing architectures face challenges in efficiency and scalability. This paper explores the use of multi-level cell (MLC) FeFET arrays for in-memory computing (IMC) systems, highlighting a program/verify scheme to precisely control threshold voltages (VT). Our study demonstrates the ability of a 1k FeFET crossbar to achieve up to seven distinct VT states, allowing nearly 3-bit storage per cell. This approach maintains computational accuracy and enhances storage density, offering a viable solution for energy-efficient
APA, Harvard, Vancouver, ISO, and other styles
49

Dahiya, Ayush, Poornima Mittal, and Rajesh Rohilla. "Realizing In-Memory Computing using Reliable Differential 8T SRAM for Improved Latency." ACM Transactions on Design Automation of Electronic Systems, September 23, 2024. http://dx.doi.org/10.1145/3696666.

Full text
Abstract:
Traditional von Neumann computing architectures suffer from higher energy consumption and lower speed than modern applications, such as neural network accelerators, require. A modified differential eight-transistor (8 + T) static random access memory (SRAM)-based in-memory computing (IMC) structure is presented for realizing bit-wise Boolean logic operations. The 8 + T SRAM-IMC is designed at the 32 nm technology node with throughputs of 2.1849, 2.4815, 2.5795, 2.6240, 2.6495, 2.6619, 2.6690, 2.6732 and 2.6749 giga outputs per second (GOPS) for the 0.5-1.3 V supply voltage range r
APA, Harvard, Vancouver, ISO, and other styles
50

Kalaichelvi, K., M. Sundaram, and P. Sanmugavalli. "Spin orbit magnetic random access memory based binary CNN in-memory accelerator (BIMA) with sense amplifier." Journal of Intelligent & Fuzzy Systems, November 10, 2023, 1–12. http://dx.doi.org/10.3233/jifs-223898.

Full text
Abstract:
This research proposes a spin-orbit torque magnetic random access memory (SOT-MRAM)-based Binary CNN In-Memory Accelerator (BIMA) to minimize power utilization, and suggests In-Memory Computing (IMC) for an AdderNet-based BIMA to further enhance performance by fully exploiting the benefits of IMC, together with a low-current-consumption configuration employing SOT-MRAM. An IMC-friendly computation pipeline for AdderNet convolution is recommended at the algorithm level. Additionally, the suggested sense amplifier is capable not only of the addition operation but also of typical Boolean operation
APA, Harvard, Vancouver, ISO, and other styles