Academic literature on the topic 'Computation-in-memory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computation-in-memory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computation-in-memory"

1

Stern, Peter. "Parallel computation in memory-making." Science 355, no. 6321 (January 12, 2017): 143.17–145. http://dx.doi.org/10.1126/science.355.6321.143-q.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Zehui, Clayton Schoeny, and Lara Dolecek. "Hamming Distance Computation in Unreliable Resistive Memory." IEEE Transactions on Communications 66, no. 11 (November 2018): 5013–27. http://dx.doi.org/10.1109/tcomm.2018.2840717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yehl, Kevin, and Timothy Lu. "Scaling computation and memory in living cells." Current Opinion in Biomedical Engineering 4 (December 2017): 143–51. http://dx.doi.org/10.1016/j.cobme.2017.10.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Zhong, Giacomo Pedretti, Elia Ambrosi, Alessandro Bricalli, and Daniele Ielmini. "In‐Memory Eigenvector Computation in Time O(1)." Advanced Intelligent Systems 2, no. 8 (May 20, 2020): 2000042. http://dx.doi.org/10.1002/aisy.202000042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hwang, Myeong-Eun, and Sungoh Kwon. "A 0.94 μW 611 KHz In-Situ Logic Operation in Embedded DRAM Memory Arrays in 90 nm CMOS." Electronics 8, no. 8 (August 5, 2019): 865. http://dx.doi.org/10.3390/electronics8080865.

Full text
Abstract:
Conventional computers based on the von Neumann architecture compute through repeated data movements between separate processing and memory units, where each movement costs time and energy. Departing from this approach, we experimentally study memory that can perform computation as well as store data within a generic memory array, in a non-von Neumann fashion. A memory array can innately perform the NOR operation, which is functionally complete and can thus realize any Boolean function, including inversion (NOT), disjunction (OR), and conjunction (AND). Building on a theoretical exploration of a memory array that performs Boolean computation alongside data storage, we demonstrate this additional potential with a test chip fabricated in a 90 nm logic process. Measurement results confirm valid in-situ memory logic operations in a 32-kbit memory system that operates down to 135 mV, consuming 130 nW at 750 Hz, reducing power and data traffic between the units by five orders of magnitude at the sacrifice of performance.
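The functional completeness of NOR that this abstract relies on is easy to verify in software. Below is a minimal Python sketch, illustrative only: in the paper, these operations are executed electrically inside the DRAM array, not as function calls.

    # NOR is functionally complete: NOT, OR, and AND can all be
    # composed from it, which is why a single in-array NOR primitive
    # suffices to realize arbitrary Boolean logic.
    def NOR(a: int, b: int) -> int:
        return 1 - (a | b)

    def NOT(a: int) -> int:
        return NOR(a, a)            # NOT(a) = NOR(a, a)

    def OR(a: int, b: int) -> int:
        return NOT(NOR(a, b))       # OR = NOT of NOR

    def AND(a: int, b: int) -> int:
        return NOR(NOT(a), NOT(b))  # De Morgan: AND = NOR of the NOTs

    # Exhaustive check over all input combinations.
    for a in (0, 1):
        for b in (0, 1):
            assert NOT(a) == 1 - a
            assert OR(a, b) == (a | b)
            assert AND(a, b) == (a & b)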
APA, Harvard, Vancouver, ISO, and other styles
6

Andrade, Marcus V. A., Salles V. G. Magalhães, Mirella A. Magalhães, W. Randolph Franklin, and Barbara M. Cutler. "Efficient viewshed computation on terrain in external memory." GeoInformatica 15, no. 2 (November 26, 2009): 381–97. http://dx.doi.org/10.1007/s10707-009-0100-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Goswami, Mrinal, Jayanta Pal, Mayukh Roy Choudhury, Pritam P. Chougule, and Bibhash Sen. "In memory computation using quantum-dot cellular automata." IET Computers & Digital Techniques 14, no. 6 (November 1, 2020): 336–43. http://dx.doi.org/10.1049/iet-cdt.2020.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jafari, Atousa, Christopher Münch, and Mehdi Tahoori. "A Spintronic 2M/7T Computation-in-Memory Cell." Journal of Low Power Electronics and Applications 12, no. 4 (December 6, 2022): 63. http://dx.doi.org/10.3390/jlpea12040063.

Full text
Abstract:
Computing data-intensive applications on the von Neumann architecture leads to significant performance and energy overheads. The concept of computation in memory (CiM) addresses the bottleneck of von Neumann machines by reducing the data movement in the computing system. Emerging resistive non-volatile memory technologies, as well as volatile memories (SRAM and DRAM), can be used to realize architectures based on the CiM paradigm. In this paper, we propose a hybrid cell design to provide the opportunity for CiM by combining the magnetic tunnel junction (MTJ) and the conventional 6T-SRAM cell. The cell performs CiM operations based on stateful in-array computation, which scales better for multiple operands than stateless computation in the periphery. Various logic operations such as XOR, OR, and IMP can be performed with the proposed design. In addition, the proposed cell can also operate as a conventional memory cell to read and write volatile as well as non-volatile data. Simulation results show that the proposed CiM-A design can increase the performance of regular memory architectures, reducing delay by 8 times and energy by 13 times for database query applications consisting of consecutive bitwise operations, with minimal overhead.
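The "stateful" style the abstract contrasts with peripheral (stateless) computation can be modeled in a few lines: the result of an operation overwrites the state of a target device rather than appearing on a separate output. A toy Python model follows; the Cell class and the IMP sequencing are assumptions for illustration, not the paper's circuit.

    # Toy model of stateful in-array logic: each cell stores one bit,
    # and an operation overwrites the target cell in place.
    class Cell:
        def __init__(self, state: int):
            self.state = state

    def IMP(p: Cell, q: Cell) -> None:
        # Material implication, computed in place: q <- (NOT p) OR q.
        q.state = (1 - p.state) | q.state

    # Verify the IMP truth table.
    for pv in (0, 1):
        for qv in (0, 1):
            p, q = Cell(pv), Cell(qv)
            IMP(p, q)
            assert q.state == ((1 - pv) | qv)

    # NOT falls out for free: imply into a cell preset to 0.
    p, q = Cell(1), Cell(0)
    IMP(p, q)            # q.state is now NOT p == 0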
APA, Harvard, Vancouver, ISO, and other styles
9

Khan, Kamil, Sudeep Pasricha, and Ryan Gary Kim. "A Survey of Resource Management for Processing-In-Memory and Near-Memory Processing Architectures." Journal of Low Power Electronics and Applications 10, no. 4 (September 24, 2020): 30. http://dx.doi.org/10.3390/jlpea10040030.

Full text
Abstract:
Due to the amount of data involved in emerging deep learning and big data applications, operations related to data movement have quickly become a bottleneck. Data-centric computing (DCC), as enabled by processing-in-memory (PIM) and near-memory processing (NMP) paradigms, aims to accelerate these types of applications by moving the computation closer to the data. Over the past few years, researchers have proposed various memory architectures that enable DCC systems, such as logic layers in 3D-stacked memories or charge-sharing-based bitwise operations in dynamic random-access memory (DRAM). However, application-specific memory access patterns, power and thermal concerns, memory technology limitations, and inconsistent performance gains complicate the offloading of computation in DCC systems. Therefore, designing intelligent resource management techniques for computation offloading is vital for leveraging the potential offered by this new paradigm. In this article, we survey the major trends in managing PIM and NMP-based DCC systems and provide a review of the landscape of resource management techniques employed by system designers for such systems. Additionally, we discuss the future challenges and opportunities in DCC management.
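The offloading decision at the heart of such resource management can be caricatured as a cost comparison between moving the data to the processor and moving the computation to the memory. The sketch below is deliberately simplified; the cost model and all constants are invented for illustration and do not come from the survey.

    # Toy cost model for deciding whether to offload a kernel to a
    # PIM/NMP unit. All throughput numbers are illustrative defaults.
    def should_offload(bytes_moved: int, flops: int,
                       host_gflops: float = 100.0,
                       pim_gflops: float = 10.0,
                       bandwidth_gbs: float = 20.0) -> bool:
        # The host pays to pull data across the memory bus; PIM does
        # not, but its compute throughput is typically much lower.
        host_time = (bytes_moved / (bandwidth_gbs * 1e9)
                     + flops / (host_gflops * 1e9))
        pim_time = flops / (pim_gflops * 1e9)
        return pim_time < host_time

    # A bandwidth-bound bitwise scan over 1 GiB favors PIM...
    print(should_offload(bytes_moved=2**30, flops=2**28))   # True
    # ...while a compute-heavy kernel over little data stays on the host.
    print(should_offload(bytes_moved=2**20, flops=2**36))   # False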
APA, Harvard, Vancouver, ISO, and other styles
10

Ou, Qiao-Feng, Bang-Shu Xiong, Lei Yu, Jing Wen, Lei Wang, and Yi Tong. "In-Memory Logic Operations and Neuromorphic Computing in Non-Volatile Random Access Memory." Materials 13, no. 16 (August 10, 2020): 3532. http://dx.doi.org/10.3390/ma13163532.

Full text
Abstract:
Recent progress in the development of artificial intelligence technologies, aided by deep learning algorithms, has led to an unprecedented revolution in neuromorphic circuits, bringing us ever closer to brain-like computers. However, the vast majority of advanced algorithms still have to run on conventional computers. Thus, their capacities are limited by what is known as the von Neumann bottleneck, where the central processing unit for data computation and the main memory for data storage are separated. Emerging forms of non-volatile random access memory, such as ferroelectric random access memory, phase-change random access memory, magnetic random access memory, and resistive random access memory, are widely considered to offer the best prospect of circumventing the von Neumann bottleneck. This is due to their ability to merge storage and computational operations, such as Boolean logic. This paper reviews the most common kinds of non-volatile random access memory and their physical principles, together with their relative pros and cons when compared with conventional CMOS-based (complementary metal-oxide-semiconductor) circuits. Their potential application to Boolean logic computation is then considered in terms of their working mechanism, circuit design and performance metrics. The paper concludes by envisaging the prospects offered by non-volatile devices for future brain-inspired and neuromorphic computation.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Computation-in-memory"

1

Rehn, Martin. "Aspects of memory and representation in cortical computation." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4161.

Full text
Abstract:
In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model well reproduces the simple cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised on the microcircuit level -- and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
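For readers unfamiliar with the attractor-memory framework the thesis builds on, the following is a toy Hopfield-style autoassociative memory in Python, a generic textbook model rather than the thesis's cortical implementation: stored patterns become fixed points, and a corrupted probe relaxes back to the nearest stored pattern.

    import numpy as np

    def store(patterns: np.ndarray) -> np.ndarray:
        # Hebbian outer-product rule over bipolar (+1/-1) patterns.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
        s = probe.copy()
        for _ in range(steps):          # synchronous sign updates
            s = np.sign(W @ s)
            s[s == 0] = 1.0
        return s

    rng = np.random.default_rng(0)
    pats = rng.choice([-1.0, 1.0], size=(3, 64))  # three random patterns
    W = store(pats)
    noisy = pats[0].copy()
    noisy[:8] *= -1                      # corrupt 8 of 64 units
    print(np.array_equal(recall(W, noisy), pats[0]))  # usually True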
APA, Harvard, Vancouver, ISO, and other styles
2

Vasilev, Vasil P. "Exploiting the memory-communication duality in parallel computation." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hsieh, Wilson Cheng-Yi. "Dynamic computation migration in distributed shared memory systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36635.

Full text
Abstract:
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 123–131).
APA, Harvard, Vancouver, ISO, and other styles
4

Farzadfard, Fahim. "Scalable platforms for computation and memory in living cells." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115599.

Full text
Abstract:
Thesis (Ph.D.), Massachusetts Institute of Technology, Department of Biology, 2018. Includes bibliographical references (pages 245–265).
Living cells are biological computers - constantly sensing, processing and responding to biological cues they receive over time and space. Devised by evolution, these biological machines are capable of performing many computing and memory operations, some of which are analogous to and some are distinct from man-made computers. The ability to rationally design and dynamically control genetic programs in living cells in a robust and scalable fashion offers unprecedented capacities to investigate and engineer biological systems and holds a great promise for many biotechnological and biomedical applications. In this thesis, I describe foundational platforms for computation and memory in living cells and demonstrate strategies for investigating biology and engineering robust, scalable, and sophisticated cellular programs. These include platforms for genomically-encoded analog memory (SCRIBE - Chapter 2), efficient and generalizable DNA writers for spatiotemporal recording and genome engineering (HiSCRIBE - Chapter 3), single-nucleotide resolution digital and analog computing and memory (DOMINO - Chapter 4), concurrent, autonomous and high-capacity recording of signaling dynamics and events histories for cell lineage mapping with tunable resolution (ENGRAM - Chapter 5), continuous in vivo evolution and synthetic Lamarckian evolution (DRIVE - Chapter 6), tunable and multifunctional transcriptional factors for gene regulation in eukaryotes (crisprTF - Chapter 7), and an unbiased, high-throughput and combinatorial strategy for perturbing transcriptional networks for genetic screening (PRISM - Chapter 8). I envision the platforms and approaches described herein will enable broad applications for investigating basic biology and engineering cellular programs.
APA, Harvard, Vancouver, ISO, and other styles
5

Beattie, Bridget Joan Healy. "The use of libraries for numerical computation in distributed memory MIMD systems." Thesis, University of Liverpool, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hong, Chao, and 洪潮. "Parallel processing in power systems computation on a distributed memory message passing multicomputer." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3124032X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hong, Chao. "Parallel processing in power systems computation on a distributed memory message passing multicomputer /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22050383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Miryala, Goutham. "In Memory Computation of Glowworm Swarm Optimization Applied to Multimodal Functions Using Apache Spark." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28755.

Full text
Abstract:
Glowworm Swarm Optimization (GSO) is an optimization technique that needs to be parallelized in order to evaluate large problems with high-dimensional function spaces. There are various issues involved in the parallelization of any algorithm, such as efficient communication among nodes in a cluster, load balancing, automatic node failure recovery, and scalability of nodes at runtime. In this paper, we have implemented the GSO algorithm with the Apache Spark framework. The Spark framework is designed in such a way that one does not need to deal with any parallelization details except the logic of the algorithm itself. For the experimentation, two multimodal benchmark functions were used to evaluate the Spark-GSO algorithm at various dimensionalities. We evaluate the optimization results of the two functions and compare the Spark results with those obtained using a previously implemented MapReduce-based GSO algorithm.
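The core of the parallelization the abstract describes is that each GSO iteration must evaluate the objective function for every glowworm, and that embarrassingly parallel map is what Spark distributes. A minimal PySpark sketch of that step follows, assuming a local Spark installation; the benchmark function and swarm layout are illustrative, not the thesis's exact setup.

    import math
    import random
    from pyspark import SparkContext

    def peaks(x: float, y: float) -> float:
        # A common multimodal test function (illustrative choice).
        return (3 * (1 - x)**2 * math.exp(-x**2 - (y + 1)**2)
                - 10 * (x / 5 - x**3 - y**5) * math.exp(-x**2 - y**2)
                - math.exp(-(x + 1)**2 - y**2) / 3)

    sc = SparkContext("local[*]", "spark-gso-sketch")
    glowworms = [(random.uniform(-3, 3), random.uniform(-3, 3))
                 for _ in range(10_000)]
    # One GSO iteration needs every glowworm's fitness for its
    # luciferin update; Spark distributes exactly this map.
    fitness = sc.parallelize(glowworms).map(lambda p: peaks(*p)).collect()
    sc.stop()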
APA, Harvard, Vancouver, ISO, and other styles
9

Breuer, Thomas [Verfasser], Regina [Akademischer Betreuer] Dittmann, and Tobias G. [Akademischer Betreuer] Noll. "Development of ReRAM-based devices for logic- and computation-in-memory applications / Thomas Breuer ; Regina Dittmann, Tobias G. Noll." Aachen : Universitätsbibliothek der RWTH Aachen, 2017. http://d-nb.info/1162499680/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alhaj, Ali Khaled. "New design approaches for flexible architectures and in-memory computing based on memristor technologies." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0197.

Full text
Abstract:
The recent development of new non-volatile memory technologies based on the memristor concept has triggered many research efforts to explore their potential usage in different application domains. The distinctive features of memristive devices and their suitability for CMOS integration are expected to lead to novel architecture design paradigms enabling unprecedented levels of energy efficiency, density, and reconfigurability. In this context, the goal of this thesis work was to explore and introduce new memristor-based designs that combine flexibility and efficiency through original architectures that break the limits of existing ones. This exploration and study were conducted at three levels: interconnect, processing, and memory. At the interconnect level, we explored the use of memristive devices to allow a high degree of flexibility based on programmable interconnects. This led to the first memristor-based reconfigurable fast Fourier transform architecture, namely mrFFT. Memristors are inserted as reconfigurable switches at the level of interconnects in order to establish flexible on-chip routing. At the processing level, we explored the use of memristive devices and their integration with CMOS technologies for combinational logic design. Such hybrid memristor-CMOS designs exploit the high integration density of memristors to improve the performance of digital designs, particularly arithmetic logic units. At the memory level, we explored new in-memory computing approaches and proposed a novel logic design style, namely Memristor Overwrite Logic (MOL), associated with an original MOL-based computational memory. The proposed approach efficiently combines storage and processing in order to bypass the memory-wall problem and thus improve computational efficiency. It has been applied in three real application case studies for validation and performance evaluation.
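The primitive underlying crossbar-based designs like these is that a vector of row voltages applied to a memristive array produces, by Ohm's and Kirchhoff's laws, column currents equal to a conductance-matrix-vector product. A minimal numerical model of that primitive is sketched below as a generic illustration; it is not the thesis's MOL design, and the conductance values are arbitrary.

    import numpy as np

    # Conductances programmed into a 3x3 memristive crossbar (siemens).
    G = np.array([[1.0, 2.0, 0.5],
                  [2.0, 1.0, 1.0],
                  [0.5, 1.0, 2.0]]) * 1e-6
    v = np.array([0.2, 0.1, 0.3])         # row voltages (volts)

    # Each column wire sums I_j = sum_i G[i, j] * v[i], so the array
    # computes the matrix-vector product in a single analog step.
    i_out = G.T @ v
    print(i_out)                          # column currents (amperes)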
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Computation-in-memory"

1

Nishimura, Naomi. Asynchrony in shared memory parallel computation. Toronto: University of Toronto, Dept. of Computer Science, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Anderson, C. Anthony, and Michael Zelëny, eds. Logic, Meaning, and Computation: Essays in Memory of Alonzo Church. Dordrecht: Kluwer Academic Publishers, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

O'Donnell, Timothy J. Productivity and reuse in language: A theory of linguistic computation and storage. Cambridge, Massachusetts: The MIT Press, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Oltean, Mihai, ed. Optical Supercomputing: 4th International Workshop, OSC 2012, in Memory of H. John Caulfield, Bertinoro, Italy, July 19-21, 2012. Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nishimura, Naomi. Asynchrony in shared memory parallel computation. 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zelëny, Michael, and C. Anthony Anderson. Logic, Meaning and Computation: Essays in Memory of Alonzo Church. Springer London, Limited, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Anderson, C. Anthony, and Michael Zelëny, eds. Logic, Meaning, and Computation: Essays in Memory of Alonzo Church. Dordrecht: Kluwer Academic Publishers, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Anderson, C. Anthony. Logic, Meaning and Computation: Essays in Memory of Alonzo Church. Ingramcontent, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Logic, Meaning and Computation: Essays in Memory of Alonzo Church (Synthese Library, 305). Springer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Launay, Jean-Pierre, and Michel Verdaguer. Electrons in Molecules. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198814597.001.0001.

Full text
Abstract:
The book treats in a unified way electronic properties of molecules (magnetic, electrical, photophysical), culminating with the mastering of electrons, i.e. molecular electronics and spintronics and molecular machines. Chapter 1 recalls basic concepts. Chapter 2 describes the magnetic properties due to localized electrons. This includes phenomena such as spin cross-over, exchange interaction from dihydrogen to extended molecular magnetic systems, and magnetic anisotropy with single-molecule magnets. Chapter 3 is devoted to the electrical properties due to moving electrons. One considers first electron transfer in discrete molecular systems, in particular in mixed valence compounds. Then, extended molecular solids, in particular molecular conductors, are described by band theory. Special attention is paid to structural distortions (Peierls instability) and interelectronic repulsions in narrow-band systems. Chapter 4 treats photophysical properties, mainly electron transfer in the excited state and its applications to photodiodes, organic light emitting diodes, photovoltaic cells and water photolysis. Energy transfer is also treated. Photomagnetism (how a photonic excitation modifies magnetic properties) is introduced. Finally, Chapter 5 combines the previous knowledge for three advanced subjects: first molecular electronics in its hybrid form (molecules connected to electrodes acting as wires, diodes, memory elements, field-effect transistors) or in the quantum computation approach. Then, molecular spintronics, using, besides the charge, the spin of the electron. Finally the theme of molecular machines is presented, with the problem of the directionality control of their motion.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computation-in-memory"

1

Schüle, Maximilian E., Alex Kulikov, Alfons Kemper, and Thomas Neumann. "ARTful Skyline Computation for In-Memory Database Systems." In Communications in Computer and Information Science, 3–12. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54623-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gupta, Nikhil Kumar, and Girijesh Singh. "In-Memory Computation for Real-Time Face Recognition." In Intelligent Computing and Applications, 531–39. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5566-4_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

BanaGozar, Ali, Kanishkan Vadivel, Joonas Multanen, Pekka Jääskeläinen, Sander Stuijk, and Henk Corporaal. "System Simulation of Memristor Based Computation in Memory Platforms." In Lecture Notes in Computer Science, 152–68. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60939-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jacob, Riko, Tobias Lieber, and Matthias Mnich. "Treewidth Computation and Kernelization in the Parallel External Memory Model." In Advanced Information Systems Engineering, 78–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44602-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fu, Yu, Wei Wang, Lingjia Meng, Qiongxiao Wang, Yuan Zhao, and Jingqiang Lin. "VIRSA: Vectorized In-Register RSA Computation with Memory Disclosure Resistance." In Information and Communications Security, 293–309. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86890-1_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Yang, Xiaolin Chang, Ziye Jia, Zhu Han, and Zhen Han. "Processing in Memory Assisted MEC 3C Resource Allocation for Computation Offloading." In Algorithms and Architectures for Parallel Processing, 695–709. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60245-1_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Baldoni, Roberto, Silvia Bonomi, and Michel Raynal. "Joining a Distributed Shared Memory Computation in a Dynamic Distributed System." In Software Technologies for Embedded and Ubiquitous Systems, 91–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10265-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pedretti, Giacomo. "One Step in-Memory Solution of Inverse Algebraic Problems." In Special Topics in Information Technology, 63–76. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-62476-7_6.

Full text
Abstract:
Machine learning requires processing large amounts of irregular data and extracting meaningful information. The von Neumann architecture is challenged by such computation: the physical separation between memory and processing unit limits the speed at which large volumes of data can be analyzed, and the majority of time and energy is spent moving information from memory to the processor and back. In-memory computing executes operations directly within the memory, without any information travelling. In particular, thanks to emerging memory technologies such as memristors, it is possible to program arbitrary real numbers directly into a single memory device in an analog fashion and, at the array level, execute algebraic operations in memory in one step. This chapter presents the latest results in accelerating inverse operations, such as the solution of linear systems, in memory and in a single computational cycle.
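What "one step" means can be modeled numerically: with the crossbar placed in the feedback path of operational amplifiers, the circuit physically settles to the vector x satisfying Gx = b, which in software corresponds to a linear solve. The toy model below is illustrative only; the chapter's contribution is the analog circuit that reaches this equilibrium, not the code.

    import numpy as np

    # Conductance matrix programmed into the crossbar, and input currents.
    G = np.array([[3.0, 1.0, 0.0],
                  [1.0, 4.0, 2.0],
                  [0.0, 2.0, 5.0]]) * 1e-6   # siemens
    b = np.array([1.0, 2.0, 3.0]) * 1e-6     # amperes

    # The feedback circuit settles to node voltages x with G @ x = b;
    # that equilibrium is exactly the solution of the linear system.
    x = np.linalg.solve(G, b)
    print(x)                       # volts
    print(np.allclose(G @ x, b))   # True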
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Chua-Chin, Nanang Sulistiyanto, Tsung-Yi Tsai, and Yu-Hsuan Chen. "Multifunctional In-Memory Computation Architecture Using Single-Ended Disturb-Free 6T SRAM." In Lecture Notes in Electrical Engineering, 49–57. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1289-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mandal, Swagata, Yaswanth Tavva, Debjyoti Bhattacharjee, and Anupam Chattopadhyay. "ReRAM Based In-Memory Computation of Single Bit Error Correcting BCH Code." In VLSI-SoC: Design and Engineering of Electronics Systems Based on New Computing Paradigms, 128–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23425-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computation-in-memory"

1

Lin, Zhiting, Jian Zhang, Xiulong Wu, and Chunyu Peng. "Memory Compiler for RRAM In-Memory Computation." In 2022 7th International Conference on Integrated Circuits and Microsystems (ICICM). IEEE, 2022. http://dx.doi.org/10.1109/icicm56102.2022.10011325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Du Nguyen, Hoang Anh, Lei Xie, Mottaqiallah Taouil, Razvan Nane, Said Hamdioui, and Koen Bertels. "Computation-in-memory based parallel adder." In 2015 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH'15). IEEE, 2015. http://dx.doi.org/10.1109/nanoarch.2015.7180587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chang, Meng-Fan, Ru Huang, and Seung-Jun Bae. "Session 16 Overview: Computation in Memory." In 2021 IEEE International Solid- State Circuits Conference (ISSCC). IEEE, 2021. http://dx.doi.org/10.1109/isscc42613.2021.9365967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yu, Jintao, Hoang Anh Du Nguyen, Lei Xie, Mottaqiallah Taouil, and Said Hamdioui. "Memristive devices for computation-in-memory." In 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2018. http://dx.doi.org/10.23919/date.2018.8342278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hamdioui, Said. "Computation in Memory for Data-Intensive Applications." In SCOPES '15: 18th International Workshop on Software and Compilers for Embedded Systems. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2764967.2771820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rai, Shubham, Mengyun Liu, Anteneh Gebregiorgis, Debjyoti Bhattacharjee, Krishnendu Chakrabarty, Said Hamdioui, Anupam Chattopadhyay, Jens Trommer, and Akash Kumar. "Perspectives on Emerging Computation-in-Memory Paradigms." In 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2021. http://dx.doi.org/10.23919/date51398.2021.9473976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cassuto, Yuval, and Koby Crammer. "In-memory hamming similarity computation in resistive arrays." In 2015 IEEE International Symposium on Information Theory (ISIT). IEEE, 2015. http://dx.doi.org/10.1109/isit.2015.7282569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Srinivasa, Srivatsa, Akshay Krishna Ramanathan, Jainaveen Sundaram, Dileep Kurian, Srinivasan Gopal, Nilesh Jain, Anuradha Srinivasan, Ravi Iyer, Vijaykrishnan Narayanan, and Tanay Karnik. "Trends and Opportunities for SRAM Based In-Memory and Near-Memory Computation." In 2021 22nd International Symposium on Quality Electronic Design (ISQED). IEEE, 2021. http://dx.doi.org/10.1109/isqed51717.2021.9424263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gao, Bin. "Emerging Non-Volatile Memories for Computation-in-Memory." In 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2020. http://dx.doi.org/10.1109/asp-dac47756.2020.9045394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yu, Yongyang, Mingjie Tang, Walid G. Aref, Qutaibah M. Malluhi, Mostafa M. Abbas, and Mourad Ouzzani. "In-Memory Distributed Matrix Computation Processing and Optimization." In 2017 IEEE 33rd International Conference on Data Engineering (ICDE). IEEE, 2017. http://dx.doi.org/10.1109/icde.2017.150.

Full text
APA, Harvard, Vancouver, ISO, and other styles