Academic literature on the topic 'In-memory-computing (IMC)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'In-memory-computing (IMC).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "In-memory-computing (IMC)"

1. Song, Soonbum, and Youngmin Kim. "Novel In-Memory Computing Adder Using 8+T SRAM." Electronics 11, no. 6 (2022): 929. http://dx.doi.org/10.3390/electronics11060929.

Abstract:
Von Neumann architecture-based computing systems are facing a von Neumann bottleneck owing to data transfer between separated memory and processor units. In-memory computing (IMC), on the other hand, reduces energy consumption and improves computing performance. This study explains an 8+T SRAM IMC circuit based on 8+T differential SRAM (8+T SRAM) and proposes 8+T SRAM-based IMC full adder (FA) and 8+T SRAM-based IMC approximate adder, which are based on the 8+T SRAM IMC circuit. The 8+T SRAM IMC circuit performs SRAM read and bitwise operations simultaneously and performs each logic operation…

2. Mannocci, P., M. Farronato, N. Lepri, et al. "In-memory computing with emerging memory devices: Status and outlook." APL Machine Learning 1, no. 1 (2023): 010902. http://dx.doi.org/10.1063/5.0136403.

Abstract:
In-memory computing (IMC) has emerged as a new computing paradigm able to alleviate or suppress the memory bottleneck, which is the major concern for energy efficiency and latency in modern digital computing. While the IMC concept is simple and promising, the details of its implementation cover a broad range of problems and solutions, including various memory technologies, circuit topologies, and programming/processing algorithms. This Perspective aims at providing an orientation map across the wide topic of IMC. First, the memory technologies will be presented, including both conventional com…

3. Sun, Zhaohui, Yang Feng, Peng Guo, et al. "Flash-based in-memory computing for stochastic computing in image edge detection." Journal of Semiconductors 44, no. 5 (2023): 054101. http://dx.doi.org/10.1088/1674-4926/44/5/054101.

Abstract:
The "memory wall" of traditional von Neumann computing systems severely restricts the efficiency of data-intensive task execution, while in-memory computing (IMC) architecture is a promising approach to breaking the bottleneck. Although variations and instability in ultra-scaled memory cells seriously degrade the calculation accuracy in IMC architectures, stochastic computing (SC) can compensate for these shortcomings due to its low sensitivity to cell disturbances. Furthermore, massive parallel computing can be processed to improve the speed and efficiency of the system. In this pape…

4. Pedretti, Giacomo, and Daniele Ielmini. "In-Memory Computing with Resistive Memory Circuits: Status and Outlook." Electronics 10, no. 9 (2021): 1063. http://dx.doi.org/10.3390/electronics10091063.

Abstract:
In-memory computing (IMC) refers to non-von Neumann architectures where data are processed in situ within the memory by taking advantage of physical laws. Among the memory devices that have been considered for IMC, the resistive switching memory (RRAM), also known as memristor, is one of the most promising technologies due to its relatively easy integration and scaling. RRAM devices have been explored for both memory and IMC applications, such as neural network accelerators and neuromorphic processors. This work presents the status and outlook on the RRAM for analog computing, where the precis…

5. Ayyappa, B. Kanth Naga. "Compute SNDR-Boosted 22-nm MRAM-Based In-Memory Computing Macro Using Statistical Error Compensation." International Journal of Scientific Research in Engineering and Management 9, no. 5 (2025): 1–7. https://doi.org/10.55041/ijsrem49276.

Abstract:
The rapid growth of AI and data-intensive applications necessitates energy-efficient and high-performance memory solutions. In-memory computing (IMC) offers a paradigm shift by reducing data movement and enabling computation directly within memory arrays. This work presents a Compute SNDR-Boosted (Statistical Noise and Defect Resilience) 22-nm MRAM-based IMC macro that leverages statistical error compensation to mitigate device-level variability and noise. Our method integrates a statistical correction engine, enhancing the Signal-to-Noise and Distortion Ratio (SNDR), thereby achiev…

6. Kim, Manho, Sung-Ho Kim, Hyuk-Jae Lee, and Chae-Eun Rhee. "Case Study on Integrated Architecture for In-Memory and In-Storage Computing." Electronics 10, no. 15 (2021): 1750. http://dx.doi.org/10.3390/electronics10151750.

Abstract:
Since the advent of computers, computing performance has been steadily increasing. Moreover, recent technologies are mostly based on massive data, and the development of artificial intelligence is accelerating this trend. Accordingly, various studies are being conducted to increase the performance of computing and data access while reducing energy consumption. In-memory computing (IMC) and in-storage computing (ISC) are currently the most actively studied architectures to deal with the challenges of recent technologies. Since IMC performs operations in memory, there is a chance to overcome the me…

7. Bansla, Neetu, and Rajneesh. "Future ERP: In-Memory Computing (IMC) Technology Infusion." Journal of Information Technology and Sciences 6, no. 3 (2020): 17–21. http://dx.doi.org/10.46610/joits.2020.v06i03.003.

8. Akgül, Berke, and Tufan Coşkun Karalar. "An Energy Consumption Model for SRAM-Based In-Memory-Computing Architectures." Electronics 13, no. 6 (2024): 1121. http://dx.doi.org/10.3390/electronics13061121.

Abstract:
In this paper, a mathematical model for obtaining energy consumption of IMC architectures is constructed. This model provides energy estimation based on the distribution of a specific dataset. In addition, the estimation reduces the required simulation time to create an energy consumption model of SRAM-based IMC architectures. To validate our model with realistic data, the energy consumption of IMC is compared by using NeuroSim V3.0 for the CIFAR-10 and MNIST-like datasets. Furthermore, an application is created with our model to select the highest-performing quantization mapping based upon the pa…

9. Ponzina, Flavio, Marco Rios, Alexandre Levisse, Giovanni Ansaloni, and David Atienza. "Overflow-free Compute Memories for Edge AI Acceleration." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–23. http://dx.doi.org/10.1145/3609387.

Abstract:
Compute memories are memory arrays augmented with dedicated logic to support arithmetic. They support the efficient execution of data-centric computing patterns, such as those characterizing Artificial Intelligence (AI) algorithms. These architectures can provide computing capabilities as part of the memory array structures (In-Memory Computing, IMC) or at their immediate periphery (Near-Memory Computing, NMC). By bringing the processing elements inside (or very close to) storage, compute memories minimize the cost of data access. Moreover, highly parallel (and, hence, high-performance) comput…

10. Xu, Peilong, Dan Lan, Fengyun Wang, and Incheol Shin. "In-Memory Computing Integrated Structure Circuit Based on Nonvolatile Flash Memory Unit." Electronics 12, no. 14 (2023): 3155. http://dx.doi.org/10.3390/electronics12143155.

Abstract:
Artificial intelligence has made people's demands for computer computing efficiency increasingly high. The traditional hardware circuit simulation method for neural morphology computation has problems of unstable performance and excessive power consumption. This research will use non-volatile flash memory cells that are easy to read and write to build a convolutional neural network structure to improve the performance of neural morphological computing. In the experiment, floating-gate transistors were used to simulate neural network synapses to design core cross-array circuits. A voltage subtr…


Dissertations / Theses on the topic "In-memory-computing (IMC)"

1. Ezzadeen, Mona. "Conception d'un circuit dédié au calcul dans la mémoire à base de technologie 3D innovante." Electronic Thesis or Diss., Aix-Marseille, 2022. http://theses.univ-amu.fr.lama.univ-amu.fr/221212_EZZADEEN_955e754k888gvxorp699jljcho_TH.pdf.

Abstract:
With the development of the Internet of Things and artificial intelligence, the "data deluge" has become a reality, pushing the development of energy-efficient computing systems. In this context, by performing computation directly inside or near the memory, the in/near-memory-computing (I/NMC) paradigm appears to be a promising avenue, since data transfers between memories and computing units are highly energy-intensive. However, conventional Flash memories suffer from scaling issues and do not seem easily suited…


Book chapters on the topic "In-memory-computing (IMC)"

1. Mannocci, Piergiulio, and Daniele Ielmini. "Analog Circuit Design for In-Memory Linear Algebra Accelerators." In SpringerBriefs in Applied Sciences and Technology. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-80268-3_5.

Abstract:
Since its introduction in 1945, computing systems have been built around von Neumann's architecture, predicating the physical separation of memory and computing units on grounds of flexibility and generality. However, the increasingly data-driven workloads of modern-day applications exacerbate the energy and latency overheads associated with data shuttling. In-memory computing (IMC) radically subverts the classical paradigm by performing computation in situ within the memory elements, unlocking theoretically unrivaled throughput and energy efficiency. Among the wide spectrum of IMC ar…

2. Krishnan, Gokul, Sumit K. Mandal, Alper A. Goksoy, et al. "End-to-End Benchmarking of Chiplet-Based In-Memory Computing." In Neuromorphic Computing. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.111926.

Abstract:
In-memory computing (IMC)-based hardware reduces latency and energy consumption for compute-intensive machine learning (ML) applications. Several SRAM/RRAM-based IMC hardware architectures to accelerate ML applications have been proposed in the literature. However, crossbar-based IMC hardware poses several design challenges. We first discuss the different ML algorithms recently adopted in the literature. We then discuss the hardware implications of ML algorithms. Next, we elucidate the need for IMC architecture and the different components within a conventional IMC architecture. After that, we…

3. Mielke, Uwe, Martin Zabel, and Michael Bruestle. "T42 – Transputer in FPGA." In Concurrent Systems Engineering Series. IOS Press, 2019. https://doi.org/10.3233/978-1-61499-949-2-525.

Abstract:
The T42 Transputer in FPGA is a fully binary-code-compatible open-source VHDL implementation of the Inmos IMS-T425 32-bit microprocessor. The T42 is mainly targeted at education and exploration purposes, and many interlinked cores can be loaded onto any suitably sized FPGA board. The construction of larger parallel computing systems based on the Transputer's distributed-memory architecture is possible (e.g., within student semester projects). The programming model of occam can be evaluated and compared to other languages.


Conference papers on the topic "In-memory-computing (IMC)"

1. Vungarala, Deepak, Md Hasibul Amin, Pietro Mercati, Arman Roohi, Ramtin Zand, and Shaahin Angizi. "LLM-IMC: Automating Analog In-Memory Computing Architecture Generation with Large Language Models." In 2025 IEEE 33rd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE, 2025. https://doi.org/10.1109/fccm62733.2025.00071.

2. Xu, Pufan, Peng Yao, Bin Gao, Jianshi Tang, He Qian, and Huaqiang Wu. "A Precision-Adaptive ECC Strategy with Computing Fusion Decoding for Near/In-Memory Computing." In 2025 IEEE International Memory Workshop (IMW). IEEE, 2025. https://doi.org/10.1109/imw61990.2025.11026950.

3. Shao, Hanyong, Boyi Fu, Jinghao Yang, et al. "IMCE: An In-Memory Computing and Encrypting Hardware Architecture for Robust Edge Security." In 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2024. http://dx.doi.org/10.23919/date58400.2024.10546703.

4. Zhang, Guobin, Zijian Wang, Xuemeng Fan, et al. "Ultrafast Self-Rectifying Memristors for Advanced In-Memory Computing." In 2024 IEEE International Conference on IC Design and Technology (ICICDT). IEEE, 2024. http://dx.doi.org/10.1109/icicdt63592.2024.10717728.

5. Kudo, Kana, Yuta Aiba, Kazuma Hasegawa, Xu Li, Yuichi Sano, and Tomoya Sanuki. "Energy-Efficient In-Memory Computing using 3D Flash Memory with Sequential Multi-Block Activation and Current Control Cell (CC cell)." In 2025 IEEE International Memory Workshop (IMW). IEEE, 2025. https://doi.org/10.1109/imw61990.2025.11026979.

6. Sahithi, Maddi, J. Ajayan, Podishetty Srikavya, Sada Hari, Thota Nithin, and B. Mounika. "Analysis of Energy Efficient Reverse Carry Select Inexact Full Adders for Future In-Memory Computing and Internet of Things Applications." In 2024 5th International Conference on Circuits, Control, Communication and Computing (I4C). IEEE, 2024. http://dx.doi.org/10.1109/i4c62240.2024.10748519.

7. Saikia, Jyotishman, Amitesh Sridharan, Injune Yeo, Shreyas Venkataramanaiah, Deliang Fan, and Jae-Sun Seo. "FP-IMC: A 28nm All-Digital Configurable Floating-Point In-Memory Computing Macro." In ESSCIRC 2023 – IEEE 49th European Solid State Circuits Conference (ESSCIRC). IEEE, 2023. http://dx.doi.org/10.1109/esscirc59616.2023.10268770.

8. Joshi, Vijay, and Sheeba Rani J. "In-Memory-Computing (IMC) Technique in Local Difference Decision Block of an On-Board Satellite Hyperspectral Data Compression Algorithm." In 2023 IEEE 66th International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2023. http://dx.doi.org/10.1109/mwscas57524.2023.10406095.

9. Sridharan, Amitesh, Fan Zhang, Jae-Sun Seo, and Deliang Fan. "SP-IMC: A Sparsity Aware In-Memory-Computing Macro in 28nm CMOS with Configurable Sparse Representation for Highly Sparse DNN Workloads." In 2024 IEEE Custom Integrated Circuits Conference (CICC). IEEE, 2024. http://dx.doi.org/10.1109/cicc60959.2024.10529009.

10. Li, Can, Daniel Belkin, Yunning Li, et al. "In-Memory Computing with Memristor Arrays." In 2018 IEEE International Memory Workshop (IMW). IEEE, 2018. http://dx.doi.org/10.1109/imw.2018.8388838.
