Dissertations on the topic "Computation-in-memory"
Below are the top 20 dissertations on the topic "Computation-in-memory", drawn from a range of disciplines. Each entry is given as a bibliographic citation; where the record's metadata provide them, a full-text PDF and an abstract are included.
Rehn, Martin. "Aspects of memory and representation in cortical computation." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4161.
In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model well reproduces the simple cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised on the microcircuit level -- and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
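The attractor-memory notion running through this abstract can be illustrated with a minimal Hopfield-style autoassociative network. This is an illustrative sketch only, not code from the thesis; the pattern count, network size, and recall routine are arbitrary choices:

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product learning of +/-1 patterns; no self-connections."""
    n = patterns.shape[1]
    weights = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(weights, 0)
    return weights

def recall(weights, probe, steps=20):
    """Iterate deterministic sign updates until the state settles in an attractor."""
    state = probe.copy()
    for _ in range(steps):
        nxt = np.where(weights @ state >= 0, 1, -1)
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three random 64-unit patterns
weights = store(patterns)

noisy = patterns[0].copy()
noisy[:8] *= -1                                # corrupt 8 of the 64 units
restored = recall(weights, noisy)
print(np.mean(restored == patterns[0]))        # fraction of units recovered
```

Pattern completion from a corrupted cue, as in this run, is the basic "attractor memory" behaviour the thesis builds on; the sparse-coding and dynamical-synapse extensions it studies go well beyond this toy model.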
Vasilev, Vasil P. "Exploiting the memory-communication duality in parallel computation." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419529.
Hsieh, Wilson Cheng-Yi. "Dynamic computation migration in distributed shared memory systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36635.
Повний текст джерелаVita.
Includes bibliographical references (p. 123-131).
by Wilson Cheng-Yi Hsieh.
Ph.D.
Farzadfard, Fahim. "Scalable platforms for computation and memory in living cells." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115599.
Living cells are biological computers - constantly sensing, processing and responding to biological cues they receive over time and space. Devised by evolution, these biological machines are capable of performing many computing and memory operations, some analogous to, and some distinct from, those of man-made computers. The ability to rationally design and dynamically control genetic programs in living cells in a robust and scalable fashion offers unprecedented capacities to investigate and engineer biological systems and holds great promise for many biotechnological and biomedical applications. In this thesis, I describe foundational platforms for computation and memory in living cells and demonstrate strategies for investigating biology and engineering robust, scalable, and sophisticated cellular programs. These include platforms for genomically-encoded analog memory (SCRIBE - Chapter 2), efficient and generalizable DNA writers for spatiotemporal recording and genome engineering (HiSCRIBE - Chapter 3), single-nucleotide resolution digital and analog computing and memory (DOMINO - Chapter 4), concurrent, autonomous and high-capacity recording of signaling dynamics and event histories for cell lineage mapping with tunable resolution (ENGRAM - Chapter 5), continuous in vivo evolution and synthetic Lamarckian evolution (DRIVE - Chapter 6), tunable and multifunctional transcription factors for gene regulation in eukaryotes (crisprTF - Chapter 7), and an unbiased, high-throughput and combinatorial strategy for perturbing transcriptional networks for genetic screening (PRISM - Chapter 8). I envision the platforms and approaches described herein will enable broad applications for investigating basic biology and engineering cellular programs.
Beattie, Bridget Joan Healy. "The use of libraries for numerical computation in distributed memory MIMD systems." Thesis, University of Liverpool, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266172.
Hong, Chao, and 洪潮. "Parallel processing in power systems computation on a distributed memory message passing multicomputer." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3124032X.
Hong, Chao. "Parallel processing in power systems computation on a distributed memory message passing multicomputer." Hong Kong: University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22050383.
Miryala, Goutham. "In Memory Computation of Glowworm Swarm Optimization Applied to Multimodal Functions Using Apache Spark." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28755.
Breuer, Thomas. "Development of ReRAM-based devices for logic- and computation-in-memory applications." Doctoral thesis (supervisors: Regina Dittmann, Tobias G. Noll), Aachen: Universitätsbibliothek der RWTH Aachen, 2017. http://d-nb.info/1162499680/34.
Alhaj Ali, Khaled. "New design approaches for flexible architectures and in-memory computing based on memristor technologies." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0197.
The recent development of new non-volatile memory technologies based on the memristor concept has triggered many research efforts to explore their potential usage in different application domains. The distinctive features of memristive devices and their suitability for CMOS integration are expected to lead to novel architecture design paradigms enabling unprecedented levels of energy efficiency, density, and reconfigurability. In this context, the goal of this thesis work was to explore and introduce new memristor-based designs that combine flexibility and efficiency through the proposal of original architectures that break the limits of the existing ones. This exploration and study have been conducted at three levels: interconnect, processing, and memory. At the interconnect level, we have explored the use of memristive devices to allow a high degree of flexibility based on programmable interconnects. This led to the first memristor-based reconfigurable fast Fourier transform architecture, namely mrFFT. Memristors are inserted as reconfigurable switches at the level of interconnects in order to establish flexible on-chip routing. At the processing level, we have explored the use of memristive devices and their integration with CMOS technologies for combinational logic design. Such hybrid memristor-CMOS designs exploit the high integration density of memristors in order to improve the performance of digital designs, and particularly arithmetic logic units. At the memory level, we have explored new in-memory computing approaches and proposed a novel logic design style, namely Memristor Overwrite Logic (MOL), associated with an original MOL-based computational memory. The proposed approach allows an efficient combination of storage and processing in order to bypass the memory-wall problem and thus improve computational efficiency. The proposed approach has been applied in three real application case studies for validation and performance evaluation.
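The in-memory computing style described in the abstract above computes by overwriting memory states rather than shuttling data to a processor. As a rough illustration of the general stateful-logic idea, here is a sketch of the well-known material-implication (IMPLY) basis for memristive logic; this is a generic textbook construction, not the thesis's MOL design:

```python
# Bits model memristor resistance states: True = low resistance (logic 1),
# False = high resistance (logic 0). Each operation overwrites its target bit,
# so the memory cell holding q is also where the result ends up.

def imply(p, q):
    """IMPLY gate: the target memristor q is overwritten with (NOT p) OR q."""
    return (not p) or q

def reset():
    """FALSE operation: unconditionally write a memristor to logic 0."""
    return False

def nand(p, q):
    """NAND from the universal {IMPLY, FALSE} basis via one work memristor s:
    s = 0; s = p IMPLY s (= NOT p); s = q IMPLY s (= NAND(p, q))."""
    s = reset()
    s = imply(p, s)
    s = imply(q, s)
    return s

# Truth table of the in-memory NAND:
for p in (False, True):
    for q in (False, True):
        print(int(p), int(q), "->", int(nand(p, q)))
```

Since NAND is functionally complete, any Boolean function can in principle be evaluated by such sequences of state overwrites inside the memory array, which is the sense in which a "computational memory" bypasses the memory wall.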
Shaikh, Sajid S. "COMPUTATION IN SOCIAL NETWORKS." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1185560088.
Gingell, Sarah M. "On the role of the hippocampus in the acquisition, long-term retention and semanticisation of memory." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/1748.
Huang, S. C., and 黃士權. "FE Parallel Computation with Memory Allocation in Shared-memory Environment." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/52645422516898944195.
Lai, Dong-Juh, and 賴棟助. "FE Parallel Computation with Domain Decomposition and Coloring in Shared Memory Environment." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/29895715573310972578.
Shen, Bi Hui, and 沈碧輝. "Parallel Computation of the Two-Way Wave Equation in Shared Memory Multiprocessor Architecture." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/66627629009787277253.
Schittler Neves, Fabio. "Universal Computation and Memory by Neural Switching." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B5D1-6.
Dimcovic, Zlatko. "Discrete-time quantum walks via interchange framework and memory in quantum evolution." Thesis, 2012. http://hdl.handle.net/1957/30037.
Повний текст джерелаGraduation date: 2012
Hedrick, Nathan Gray. "A Three-Molecule Model of Structural Plasticity: the Role of the Rho family GTPases in Local Biochemical Computation in Dendrites." Diss., 2015. http://hdl.handle.net/10161/11347.
It has long been appreciated that the process of learning might invoke a physical change in the brain, establishing a lasting trace of experience. Recent evidence has revealed that this change manifests, at least in part, by the formation of new connections between neurons, as well as the modification of preexisting ones. This so-called structural plasticity of neural circuits – their ability to physically change in response to experience – has remained fixed as a primary point of focus in the field of neuroscience.
A large portion of this effort has been directed towards the study of dendritic spines, small protrusions emanating from neuronal dendrites that constitute the majority of recipient sites of excitatory neuronal connections. The unique, mushroom-like morphology of these tiny structures has earned them considerable attention, with even the earliest observers suggesting that their unique shape affords important functional advantages that would not be possible if synapses were to directly contact dendrites. Importantly, dendritic spines can be formed, eliminated, or structurally modified in response to both neural activity as well as learning, suggesting that their organization reflects the experience of the neural network. As such, elucidating how these structures undergo such rearrangements is of critical importance to understanding both learning and memory.
As dendritic spines are principally composed of the cytoskeletal protein actin, their formation, elimination, and modification requires biochemical signaling networks that can remodel the actin cytoskeleton. As a result, significant effort has been placed into identifying and characterizing such signaling networks and how they are controlled during synaptic activity and learning. Such efforts have highlighted Rho family GTPases – binary signaling proteins central in controlling the dynamics of the actin cytoskeleton – as attractive targets for understanding how the structural modification of spines might be controlled by synaptic activity. While much has been revealed regarding the importance of the Rho GTPases for these processes, the specific spatial and temporal features of their signals that impart such structural changes remains unclear.
The central hypotheses of the following research dissertation are as follows: first, that synaptic activity rapidly initiates Rho GTPase signaling within single dendritic spines, serving as the core mechanism of dendritic spine structural plasticity. Next, that each of the Rho GTPases subsequently expresses a spatially distinct pattern of activation, with some signals remaining highly localized, and some becoming diffuse across a region of the nearby dendrite. The diffusive signals modify the plasticity induction threshold of nearby dendritic spines, and the spatially restricted signals serve to keep the expression of plasticity specific to those spines that receive synaptic input. This combination of differentially spatially regulated signals thus equips the neuronal dendrite with the ability to perform local biochemical computations, potentially establishing an organizational preference for the arrangement of dendritic spines along a dendrite. Finally, the consequences of the differential signal patterns also help to explain several seemingly disparate properties of one of the primary upstream activators of these proteins: brain-derived neurotrophic factor (BDNF).
The first section of this dissertation describes the characterization of the activity patterns of one of the Rho family GTPases, Rac1. Using a novel Förster Resonance Energy Transfer (FRET)-based biosensor in combination with two-photon fluorescence lifetime imaging (2pFLIM) and single-spine stimulation by two-photon glutamate uncaging, the activation profile and kinetics of Rac1 during synaptic stimulation were characterized. These experiments revealed that Rac1 conveys signals both to activated spines and to nearby, unstimulated spines in close proximity to the target spine. Despite the diffusion of this structural signal, however, the structural modification associated with synaptic stimulation remained restricted to the stimulated spine. Thus, Rac1 activation is not sufficient to enlarge spines, but nonetheless likely confers some heretofore-unknown function to nearby synapses.
The next set of experiments set out to detail the upstream molecular mechanisms controlling Rac1 activation. First, it was found that Rac1 activation during sLTP depends on calcium through NMDA receptors and subsequent activation of CaMKII, suggesting that Rac1 activation in this context agrees with substantial evidence linking NMDAR-CaMKII signaling to LTP in the hippocampus. Next, in light of recent evidence linking structural plasticity to another potential upstream signaling complex, BDNF-TrkB, we explored the possibility that BDNF-TrkB signaling functioned in structural plasticity via Rac1 activation. To this end, we first explored the release kinetics of BDNF and the activation kinetics of TrkB using novel biosensors in conjunction with 2p glutamate uncaging. It was found that release of BDNF from single dendritic spines during sLTP induction activates TrkB on that same spine in an autocrine manner, and that this autocrine system was necessary for both sLTP and Rac1 activation. It was also found that BDNF-TrkB signaling controls the activity of another Rho GTPase, Cdc42, suggesting that this autocrine loop conveys both synapse-specific signals (through Cdc42) and heterosynaptic signals (through Rac1).
The next set of experiments details one of the potential consequences of heterosynaptic Rac1 signaling. The spread of Rac1 activity out of the stimulated spine was found to be necessary for lowering the plasticity threshold at nearby spines, a process known as synaptic crosstalk. This was also true for the Rho family GTPase RhoA, which shows a similar diffusive activity pattern. Conversely, the activity of Cdc42, a Rho GTPase protein whose activity is highly restricted to stimulated spines, was required only for input-specific plasticity induction. Thus, the spreading of a subset of Rho GTPase signaling into nearby spines modifies the plasticity induction threshold of these spines, increasing the likelihood that synaptic activity at these sites will induce structural plasticity. Importantly, these data suggest that the autocrine BDNF-TrkB loop described above simultaneously exerts control over both homo- and heterosynaptic structural plasticity.
The final set of experiments reveals that the spreading of GTPase activity from stimulated spines helps to overcome the high activation thresholds of these proteins to facilitate nearby plasticity. Both Rac1 and RhoA, whose activity spreads into nearby spines, showed high activation thresholds, making weak stimuli incapable of activating them. Thus, signal spreading from a strongly stimulated spine can lower the plasticity threshold at nearby spines in part by supplementing the activation of high-threshold Rho GTPases at these sites. In contrast, the highly compartmentalized Rho GTPase Cdc42 showed a very low activation threshold, and thus did not require signal spreading to achieve high levels of activity in response to even a weak stimulus. As a result, synaptic crosstalk elicits cooperativity of nearby synaptic events by first priming a local region of the dendrite with several (but not all) of the factors required for structural plasticity, which then allows even weak inputs to achieve plasticity by means of localized Cdc42 activation.
Taken together, these data reveal a molecular pattern whereby BDNF-dependent structural plasticity can simultaneously maintain input-specificity while also relaying heterosynaptic signals along a local stretch of dendrite via coordination of differential spatial signaling profiles of the Rho GTPase proteins. The combination of this division of spatial signaling patterns and different activation thresholds reveals a unique heterosynaptic coincidence detection mechanism that allows for cooperative expression of structural plasticity when spines are close together, which in turn provides a putative mechanism for how neurons arrange structural modifications during learning.
Fu, Yuankun. "Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis." Thesis, 2021.
Janeke, Hendrik Christiaan. "Connectionist modelling in cognitive science: an exposition and appraisal." Thesis, 2003. http://hdl.handle.net/10500/635.