Dissertations / Theses on the topic 'Computation-in-memory'

Consult the top 20 dissertations / theses for your research on the topic 'Computation-in-memory.'


1. Rehn, Martin. "Aspects of memory and representation in cortical computation." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4161.

Abstract:
In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model well reproduces the simple cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised on the microcircuit level -- and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
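
The sparse coding component lends itself to a compact illustration. Below is a minimal sketch of classic Olshausen-Field-style sparse coding with an L1 penalty, not the thesis's sparse binary coding network; the ISTA solver, learning rates, and the synthetic random "patches" are illustrative assumptions. Trained on real natural-image patches, the columns of D develop localized, oriented receptive fields of the kind the abstract describes.

```python
# Minimal sparse-coding sketch (Olshausen-Field style with an L1 penalty).
# Illustrative stand-in for the thesis's sparse binary coding network.
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 100))            # 64-pixel patches, 100 basis functions
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary columns

def sparse_code(x, D, lam=0.1, lr=0.1, iters=100):
    """ISTA iterations for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a + lr * D.T @ (x - D @ a)                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0)  # soft threshold
    return a

for _ in range(500):                          # alternate coding / dictionary update
    x = rng.standard_normal(64)               # stand-in for a natural-image patch
    a = sparse_code(x, D)
    D += 0.01 * np.outer(x - D @ a, a)        # Hebbian-like update on the residual
    D /= np.linalg.norm(D, axis=0)            # renormalize the basis functions
```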

2. Vasilev, Vasil P. "Exploiting the memory-communication duality in parallel computation." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419529.

3. Hsieh, Wilson Cheng-Yi. "Dynamic computation migration in distributed shared memory systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36635.

Thesis (Ph. D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 123-131).

4. Farzadfard, Fahim. "Scalable platforms for computation and memory in living cells." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115599.

Abstract:
Living cells are biological computers, constantly sensing, processing and responding to biological cues they receive over time and space. Devised by evolution, these biological machines are capable of performing many computing and memory operations, some analogous to and some distinct from those of man-made computers. The ability to rationally design and dynamically control genetic programs in living cells in a robust and scalable fashion offers unprecedented capacities to investigate and engineer biological systems and holds great promise for many biotechnological and biomedical applications. In this thesis, I describe foundational platforms for computation and memory in living cells and demonstrate strategies for investigating biology and engineering robust, scalable, and sophisticated cellular programs. These include platforms for genomically encoded analog memory (SCRIBE - Chapter 2), efficient and generalizable DNA writers for spatiotemporal recording and genome engineering (HiSCRIBE - Chapter 3), single-nucleotide resolution digital and analog computing and memory (DOMINO - Chapter 4), concurrent, autonomous and high-capacity recording of signaling dynamics and event histories for cell lineage mapping with tunable resolution (ENGRAM - Chapter 5), continuous in vivo evolution and synthetic Lamarckian evolution (DRIVE - Chapter 6), tunable and multifunctional transcription factors for gene regulation in eukaryotes (crisprTF - Chapter 7), and an unbiased, high-throughput and combinatorial strategy for perturbing transcriptional networks for genetic screening (PRISM - Chapter 8). I envision that the platforms and approaches described herein will enable broad applications for investigating basic biology and engineering cellular programs.

5. Beattie, Bridget Joan Healy. "The use of libraries for numerical computation in distributed memory MIMD systems." Thesis, University of Liverpool, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266172.

6. Hong, Chao (洪潮). "Parallel processing in power systems computation on a distributed memory message passing multicomputer." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3124032X.

7. Hong, Chao. "Parallel processing in power systems computation on a distributed memory message passing multicomputer." Hong Kong: University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22050383.

8. Miryala, Goutham. "In Memory Computation of Glowworm Swarm Optimization Applied to Multimodal Functions Using Apache Spark." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28755.

Abstract:
Glowworm Swarm Optimization (GSO) is an optimization technique that needs to be parallelized in order to evaluate large problems with high-dimensional function spaces. There are various issues involved in the parallelization of any algorithm, such as efficient communication among nodes in a cluster, load balancing, automatic node failure recovery, and scalability of nodes at runtime. In this work, we have implemented the GSO algorithm with the Apache Spark framework. The Spark framework is designed in such a way that one does not need to deal with any parallelization details except the logic of the algorithm itself. For the experimentation, two multimodal benchmark functions were used to evaluate the Spark-GSO algorithm with various sizes of dimensionality. We evaluate the optimization results of the two benchmark functions and compare the Spark results with those obtained using a previously implemented MapReduce-based GSO algorithm.
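
To make the division of labour concrete, here is a minimal PySpark sketch of one GSO iteration in which only the expensive fitness evaluations are distributed. The test function, constants, and fixed neighbourhood radius are illustrative assumptions; the thesis's benchmark functions and exact GSO variant are not reproduced here.

```python
# One Glowworm Swarm Optimization iteration with Spark-parallelized fitness
# evaluation. All parameters are illustrative, not taken from the thesis.
import numpy as np
from pyspark import SparkContext

def fitness(x):                                   # assumed multimodal test function
    return float(np.sum(np.sin(x) ** 2 / (np.abs(x) + 1.0)))

RHO, GAMMA, STEP, RADIUS = 0.4, 0.6, 0.03, 1.5    # standard GSO constants

def gso_step(sc, positions, luciferin):
    # Distribute the fitness evaluations, the costly part in high dimensions.
    J = np.array(sc.parallelize(positions.tolist())
                   .map(lambda x: fitness(np.array(x)))
                   .collect())
    luciferin = (1 - RHO) * luciferin + GAMMA * J          # luciferin update
    new_pos = positions.copy()
    for i, x in enumerate(positions):                      # movement phase (local)
        d = np.linalg.norm(positions - x, axis=1)
        brighter = np.where((d < RADIUS) & (luciferin > luciferin[i]))[0]
        if len(brighter):
            j = np.random.choice(brighter)                 # pick a brighter neighbour
            new_pos[i] = x + STEP * (positions[j] - x) / (d[j] + 1e-12)
    return new_pos, luciferin

if __name__ == "__main__":
    sc = SparkContext("local[*]", "gso-sketch")
    pos = np.random.uniform(-3.0, 3.0, size=(200, 2))      # 200 glowworms in 2-D
    luc = np.full(len(pos), 5.0)
    for _ in range(50):
        pos, luc = gso_step(sc, pos, luc)
    sc.stop()
```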

9. Breuer, Thomas. "Development of ReRAM-based devices for logic- and computation-in-memory applications." Doctoral thesis, supervised by Regina Dittmann and Tobias G. Noll. Aachen: Universitätsbibliothek der RWTH Aachen, 2017. http://d-nb.info/1162499680/34.

10. Alhaj Ali, Khaled. "New design approaches for flexible architectures and in-memory computing based on memristor technologies." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0197.

Abstract:
The recent development of new non-volatile memory technologies based on the memristor concept has triggered many research efforts to explore their potential usage in different application domains. The distinctive features of memristive devices and their suitability for CMOS integration are expected to lead to novel architecture design paradigms enabling unprecedented levels of energy efficiency, density, and reconfigurability. In this context, the goal of this thesis work was to explore and introduce new memristor-based designs that combine flexibility and efficiency through the proposal of original architectures that break the limits of the existing ones. This exploration and study have been conducted at three levels: interconnect, processing, and memory. At the interconnect level, we have explored the use of memristive devices to allow a high degree of flexibility based on programmable interconnects. This allowed us to propose the first memristor-based reconfigurable fast Fourier transform architecture, namely mrFFT. Memristors are inserted as reconfigurable switches at the level of interconnects in order to establish flexible on-chip routing. At the processing level, we have explored the use of memristive devices and their integration with CMOS technologies for combinational logic design. Such hybrid memristor-CMOS designs exploit the high integration density of memristors in order to improve the performance of digital designs, and particularly arithmetic logic units. At the memory level, we have explored new in-memory computing approaches and proposed a novel logic design style, namely Memristor Overwrite Logic (MOL), associated with an original MOL-based computational memory. The proposed approach allows an efficient combination of storage and processing in order to bypass the memory wall problem and thus improve computational efficiency. The approach has been applied in three real application case studies for validation and performance evaluation.
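
The abstract does not spell out MOL's semantics, but the general idea of computing inside a memristive memory can be illustrated with the well-documented IMPLY stateful-logic family (a related design style, not MOL itself): one memristor's state conditionally overwrites another's, and IMPLY plus a reset (FALSE) operation is functionally complete.

```python
# Stateful memristive logic illustrated with the IMPLY family (not MOL itself).
# A memristor's state encodes the bit: True = low resistance, False = high.

def imply(p, q):
    # The IMPLY voltage pattern overwrites q in place: q <- (NOT p) OR q
    return (not p) or q

def nand(p, q):
    work = False               # FALSE operation: reset a work cell
    work = imply(p, work)      # work <- NOT p
    return imply(q, work)      # (NOT q) OR (NOT p) = NAND(p, q)

for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))   # verify the in-memory NAND

```

Since NAND is universal, chains of such in-array operations can compute arbitrary logic without moving operands to a CPU, which is the memory-wall bypass the abstract refers to.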

11. Shaikh, Sajid S. "Computation in Social Networks." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1185560088.

12. Gingell, Sarah M. "On the role of the hippocampus in the acquisition, long-term retention and semanticisation of memory." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/1748.

Abstract:
A consensus on how to characterise the anterograde and retrograde memory processes that are lost or spared after hippocampal damage has not been reached. In this thesis, I critically re-examine the empirical literature and the assumptions behind current theories. I formulate a coherent view of what makes a task hippocampally dependent at acquisition and how this relates to its long-term fate. Findings from a neural net simulation indicate the plausibility of my proposals. My proposals both extend and constrain current views on the role of the hippocampus in the rapid acquisition of information and in learning complex associations. In general, tasks are most likely to require the hippocampus for acquisition if they involve rapid, associative learning about unfamiliar, complex, low-salience stimuli. However, none of these factors alone is sufficient to obligatorily implicate the hippocampus in acquisition. With the exception of associations with supra-modal information, which are always dependent on the hippocampus, it is the combination of factors that is important. Detailed, complex information that is obligatorily hippocampally dependent at acquisition remains so for its lifetime. However, all memories are semanticised as they age, through the loss of detailed context-specific information and because generic cortically-represented information starts to dominate recall. Initially hippocampally dependent memories may appear to become independent of the hippocampus over time, but recall changes qualitatively. Multi-stage, lifelong post-acquisition memory processes produce semanticised re-representations of memories of differing specificity and complexity that can serve different purposes. The model simulates hippocampal and cortical interactions in the acquisition and maintenance of episodic and semantic events, and behaves in accordance with my proposals. In particular, conceptualising episodic and semantic memory as representing points on a continuum of memory types appears viable. Support is also found for proposals on the relative importance of the hippocampus and cortex in the rapid acquisition of information and the acquisition of complex multi-modal information, and for the effect of existing knowledge on new learning. Furthermore, episodic and semantic events become differentially dependent on cortical and hippocampal components. Finally, as a memory ages, it is automatically semanticised and becomes cortically dependent.

13. Huang, S. C. (黃士權). "FE Parallel Computation with Memory Allocation in Shared-memory Environment." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/52645422516898944195.

14. Lai, Dong-Juh (賴棟助). "FE Parallel Computation with Domain Decomposition and Coloring in Shared Memory Environment." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/29895715573310972578.

15. Shen, Bi Hui (沈碧輝). "Parallel Computation of the Two-Way Wave Equation in Shared Memory Multiprocessor Architecture." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/66627629009787277253.

16. Schittler Neves, Fabio. "Universal Computation and Memory by Neural Switching." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B5D1-6.

17. Dimcovic, Zlatko. "Discrete-time quantum walks via interchange framework and memory in quantum evolution." Thesis, 2012. http://hdl.handle.net/1957/30037.

Abstract:
One of the newer and rapidly developing approaches in quantum computing is based on "quantum walks," which are quantum processes on discrete space that evolve in either discrete or continuous time and are characterized by mixing of components at each step. The idea emerged in analogy with the classical random walks and stochastic techniques, but these unitary processes are very different even as they have intriguing similarities. This thesis is concerned with the study of discrete-time quantum walks. The original motivation from classical Markov chains required that one add to a discrete-time quantum walk an auxiliary Hilbert space, unrelated to the one in which the system evolves, in order to be able to mix components in that space and then take the evolution steps accordingly (based on the state in that space). This additional "coin" space is very often an internal degree of freedom like spin. We have introduced a general framework for the construction of discrete-time quantum walks, in close analogy with classical random walks with memory, that is rather different from the standard "coin" approach. In this method there is no need to bring in a different degree of freedom, while the full state of the system is still described in a direct product of spaces (of states). The state can be thought of as an arrow pointing from the previous to the current site in the evolution, representing the one-step memory. The next step is then controlled by a single local operator assigned to each site in the space, acting quite like a scattering operator. This allows us to probe and solve some problems of interest that have not had successful approaches with "coined" walks. We construct and solve a walk on the binary tree, a structure of great interest that, until our result, lacked an explicit discrete-time quantum walk, owing to the difficulty of managing the coin spaces necessary in the standard approach. Beyond algorithmic interests, the memory-based model allows one to explore the effects of history on quantum evolution and the subtle emergence of classical features as "memory" is explicitly kept for additional steps. We construct and solve a walk with an additional correlation step, finding interesting new features. On the other hand, the fact that the evolution is driven entirely by a local operator, not involving additional spaces, enables us to choose the Fourier transform as an operator completely controlling the evolution. This in turn allows us to combine the quantum walk approach with Fourier transform based techniques, something decidedly not possible in classical computational physics. We are developing a formalism for building networks manageable by walks constructed within this framework, based on the surprising efficiency of our framework in discovering the internals of a simple network that we have solved so far. Finally, in line with our expectation that the field of quantum walks can take cues from the rich history of development of classical stochastic techniques, we establish starting points for work on non-Abelian quantum walks, with a particular quantum walk analog of classical "card shuffling," the walk on the permutation group. In summary, this thesis presents a new framework for the construction of discrete-time quantum walks, employing and exploring the memoried nature of unitary evolution. It is applied to fully solving two problems: a walk on the binary tree, and exploration of the quantum-to-classical transition with increased correlation length (history).
It is then used for simple network discovery, and to lay the groundwork for the analysis of complex networks, based on the combined power of efficient exploration of the Hilbert space (as a walk mixing components) and Fourier transformation (since we can choose this for the evolution operator). We hope to establish this as a general technique, as its power would be unmatched by any approach available in classical computing. We also looked at the promising and challenging prospect of walks on non-Abelian structures by setting up the problem of "quantum card shuffling," a quantum walk on the permutation group. Relation to other work is discussed thoroughly throughout, along with examination of the context of our work and overviews of our current and future work.
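
For contrast with the memory-based framework described above, the following is a minimal simulation of the standard coined (Hadamard) discrete-time walk on a line, i.e., the auxiliary-coin construction the thesis avoids; lattice size, step count, and the initial coin state are arbitrary choices.

```python
# Standard coined discrete-time quantum walk on a line: the auxiliary "coin"
# space construction that the thesis's memory-based framework replaces.
import numpy as np

N, STEPS = 201, 60                               # lattice sites, walk length
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin operator

# state[x, c] = amplitude at site x with coin state c (0 = move left, 1 = move right)
state = np.zeros((N, 2), dtype=complex)
state[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin state

for _ in range(STEPS):
    state = state @ H.T                          # coin step: mix components at each site
    shifted = np.zeros_like(state)
    shifted[:-1, 0] = state[1:, 0]               # conditional shift: coin 0 moves left
    shifted[1:, 1] = state[:-1, 1]               # conditional shift: coin 1 moves right
    state = shifted

prob = (np.abs(state) ** 2).sum(axis=1)          # position distribution
spread = np.sqrt((((np.arange(N) - N // 2) ** 2) * prob).sum())
print(f"total probability {prob.sum():.6f}, standard deviation {spread:.2f}")
```

The ballistic spread (standard deviation growing linearly with the number of steps) is the signature quantum departure from the diffusive classical walk.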

18. Hedrick, Nathan Gray. "A Three-Molecule Model of Structural Plasticity: the Role of the Rho family GTPases in Local Biochemical Computation in Dendrites." Diss., 2015. http://hdl.handle.net/10161/11347.

Abstract:

It has long been appreciated that the process of learning might invoke a physical change in the brain, establishing a lasting trace of experience. Recent evidence has revealed that this change manifests, at least in part, by the formation of new connections between neurons, as well as the modification of preexisting ones. This so-called structural plasticity of neural circuits – their ability to physically change in response to experience – has remained fixed as a primary point of focus in the field of neuroscience.

A large portion of this effort has been directed towards the study of dendritic spines, small protrusions emanating from neuronal dendrites that constitute the majority of recipient sites of excitatory neuronal connections. The unique, mushroom-like morphology of these tiny structures has earned them considerable attention, with even the earliest observers suggesting that their unique shape affords important functional advantages that would not be possible if synapses were to directly contact dendrites. Importantly, dendritic spines can be formed, eliminated, or structurally modified in response to both neural activity as well as learning, suggesting that their organization reflects the experience of the neural network. As such, elucidating how these structures undergo such rearrangements is of critical importance to understanding both learning and memory.

As dendritic spines are principally composed of the cytoskeletal protein actin, their formation, elimination, and modification require biochemical signaling networks that can remodel the actin cytoskeleton. As a result, significant effort has been placed into identifying and characterizing such signaling networks and how they are controlled during synaptic activity and learning. Such efforts have highlighted Rho family GTPases – binary signaling proteins central in controlling the dynamics of the actin cytoskeleton – as attractive targets for understanding how the structural modification of spines might be controlled by synaptic activity. While much has been revealed regarding the importance of the Rho GTPases for these processes, the specific spatial and temporal features of their signals that impart such structural changes remain unclear.

The central hypotheses of the following research dissertation are as follows: first, that synaptic activity rapidly initiates Rho GTPase signaling within single dendritic spines, serving as the core mechanism of dendritic spine structural plasticity. Next, that each of the Rho GTPases subsequently expresses a spatially distinct pattern of activation, with some signals remaining highly localized, and some becoming diffuse across a region of the nearby dendrite. The diffusive signals modify the plasticity induction threshold of nearby dendritic spines, and the spatially restricted signals serve to keep the expression of plasticity specific to those spines that receive synaptic input. This combination of differentially spatially regulated signals thus equips the neuronal dendrite with the ability to perform local biochemical computations, potentially establishing an organizational preference for the arrangement of dendritic spines along a dendrite. Finally, the consequences of the differential signal patterns also help to explain several seemingly disparate properties of one of the primary upstream activators of these proteins: brain-derived neurotrophic factor (BDNF).

The first section of this dissertation describes the characterization of the activity patterns of one of the Rho family GTPases, Rac1. Using a novel Förster Resonance Energy Transfer (FRET)-based biosensor in combination with two-photon fluorescence lifetime imaging (2pFLIM) and single-spine stimulation by two-photon glutamate uncaging, the activation profile and kinetics of Rac1 during synaptic stimulation were characterized. These experiments revealed that Rac1 conveys signals both to activated spines and to nearby, unstimulated spines in close proximity to the target spine. Despite the diffusion of this structural signal, however, the structural modification associated with synaptic stimulation remained restricted to the stimulated spine. Thus, Rac1 activation is not sufficient to enlarge spines, but nonetheless likely confers some heretofore-unknown function to nearby synapses.

The next set of experiments set out to detail the upstream molecular mechanisms controlling Rac1 activation. First, it was found that Rac1 activation during sLTP depends on calcium through NMDA receptors and subsequent activation of CaMKII, suggesting that Rac1 activation in this context agrees with substantial evidence linking NMDAR-CaMKII signaling to LTP in the hippocampus. Next, in light of recent evidence linking structural plasticity to another potential upstream signaling complex, BDNF-TrkB, we explored the possibility that BDNF-TrkB signaling functioned in structural plasticity via Rac1 activation. To this end, we first explored the release kinetics of BDNF and the activation kinetics of TrkB using novel biosensors in conjunction with 2p glutamate uncaging. It was found that release of BDNF from single dendritic spines during sLTP induction activates TrkB on that same spine in an autocrine manner, and that this autocrine system was necessary for both sLTP and Rac1 activation. It was also found that BDNF-TrkB signaling controls the activity of another Rho GTPase, Cdc42, suggesting that this autocrine loop conveys both synapse-specific signals (through Cdc42) and heterosynaptic signals (through Rac1).

The next set of experiments details one of the potential consequences of heterosynaptic Rac1 signaling. The spread of Rac1 activity out of the stimulated spine was found to be necessary for lowering the plasticity threshold at nearby spines, a process known as synaptic crosstalk. This was also true for the Rho family GTPase RhoA, which shows a similar diffusive activity pattern. Conversely, the activity of Cdc42, a Rho GTPase whose activity is highly restricted to stimulated spines, was required only for input-specific plasticity induction. Thus, the spreading of a subset of Rho GTPase signaling into nearby spines modifies the plasticity induction threshold of these spines, increasing the likelihood that synaptic activity at these sites will induce structural plasticity. Importantly, these data suggest that the autocrine BDNF-TrkB loop described above simultaneously exerts control over both homo- and heterosynaptic structural plasticity.

The final set of experiments reveals that the spreading of GTPase activity from stimulated spines helps to overcome the high activation thresholds of these proteins to facilitate nearby plasticity. Both Rac1 and RhoA, the activity of which spread into nearby spines, showed high activation thresholds, making weak stimuli incapable of activating them. Thus, signal spreading from a strongly stimulated spine can lower the plasticity threshold at nearby spines in part by supplementing the activation of high-threshold Rho GTPases at these sites. In contrast, the highly compartmentalized Rho GTPase Cdc42 showed a very low activation threshold, and thus did not require signal spreading to achieve high levels of activity to even a weak stimulus. As a result, synaptic crosstalk elicits cooperativity of nearby synaptic events by first priming a local region of the dendrite with several (but not all) of the factors required for structural plasticity, which then allows even weak inputs to achieve plasticity by means of localized Cdc42 activation.

Taken together, these data reveal a molecular pattern whereby BDNF-dependent structural plasticity can simultaneously maintain input-specificity while also relaying heterosynaptic signals along a local stretch of dendrite via coordination of differential spatial signaling profiles of the Rho GTPase proteins. The combination of this division of spatial signaling patterns and different activation thresholds reveals a unique heterosynaptic coincidence detection mechanism that allows for cooperative expression of structural plasticity when spines are close together, which in turn provides a putative mechanism for how neurons arrange structural modifications during learning.


19. Fu, Yuankun (10223831). "Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis." Thesis, 2021.

Abstract:
As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis are a key factor in scientific campaigns. In one such class of campaigns, the study of fluid behavior, computational fluid dynamics (CFD) simulations have progressed rapidly in the past several decades and revolutionized many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at cheaper computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

My research starts with how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and corresponding data analysis. Detailed performance analysis using visualized tracing shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grain task parallelism of full asynchrony with pipelining. In addition, I design a novel concurrent data transfer optimization method that employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system. It significantly reduces data transfer time by up to 32%, especially when the simulation application is stalled. Investigation of the speedup using OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of the Zipper system has been verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores. Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turn to accelerating the memory-bound LBM algorithms themselves. We first design novel parallel 2D memory-aware LBM algorithms, then extend the design to 3D memory-aware LBM algorithms that combine single-copy distribution, single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong-scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis. Experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms can improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
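
For reference, the arithmetic-intensity figures map onto the standard roofline bound (this is the textbook model, not a formula from the thesis):

$$P_{\text{attainable}} = \min\left(P_{\text{peak}},\ \mathrm{AI} \times B_{\text{mem}}\right), \qquad \mathrm{AI} = \frac{\text{floating-point operations}}{\text{bytes moved to and from memory}}$$

While a kernel remains memory-bound (AI × B_mem < P_peak), raising AI lifts the attainable-performance ceiling proportionally, which is why a 4.6-fold AI improvement matters for a bandwidth-limited method like LBM.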

20. Janeke, Hendrik Christiaan. "Connectionist modelling in cognitive science: an exposition and appraisal." Thesis, 2003. http://hdl.handle.net/10500/635.

Abstract:
This thesis explores the use of artificial neural networks for modelling cognitive processes. It presents an exposition of the neural network paradigm, and evaluates its viability in relation to the classical, symbolic approach in cognitive science. Classical researchers have approached the description of cognition by concentrating mainly on an abstract, algorithmic level of description in which the information processing properties of cognitive processes are emphasised. The approach is founded on seminal ideas about computation, and about algorithmic description emanating, amongst others, from the work of Alan Turing in mathematical logic. In contrast to the classical conception of cognition, neural network approaches are based on a form of neurocomputation in which the parallel distributed processing mechanisms of the brain are highlighted. Although neural networks are generally accepted to be more neurally plausible than their classical counterparts, some classical researchers have argued that these networks are best viewed as implementation models, and that they are therefore not of much relevance to cognitive researchers because information processing models of cognition can be developed independently of considerations about implementation in physical systems. In the thesis I argue that the descriptions of cognitive phenomena deriving from neural network modelling cannot simply be reduced to classical, symbolic theories. The distributed representational mechanisms underlying some neural network models have interesting properties such as similarity-based representation, content-based retrieval, and coarse coding which do not have straightforward equivalents in classical systems. Moreover, by placing emphasis on how cognitive processes are carried out by brain-like mechanisms, neural network research has not only yielded a new metaphor for conceptualising cognition, but also a new methodology for studying cognitive phenomena. Neural network simulations can be lesioned to study the effect of such damage on the behaviour of the system, and these systems can be used to study the adaptive mechanisms underlying learning processes. For these reasons, neural network modelling is best viewed as a significant theoretical orientation in the cognitive sciences, instead of just an implementational endeavour.