Dissertations / Theses on the topic 'Cognitive computation'

To see the other types of publications on this topic, follow the link: Cognitive computation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Cognitive computation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Mansinghka, Vikash Kumar. "Natively probabilistic computation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47892.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
Includes bibliographical references (leaves 129-135).
I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra.
(cont.) I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence.
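The conditioning-by-simulation idea summarized in this abstract can be sketched in a few lines; the model, names, and probabilities below are invented for illustration and are not taken from the thesis:

```python
import random

# A tiny generative model in the spirit of Church: stochastic procedures
# induce distributions, and conditioning is expressed by rejection
# sampling. Everything here is a hypothetical toy, not the thesis's code.

def flip(p=0.5):
    """Biased coin: the basic stochastic primitive."""
    return random.random() < p

def model():
    """Generative process: rain and a sprinkler can both wet the grass."""
    rain = flip(0.2)
    sprinkler = flip(0.4)
    grass_wet = flip(0.9) if (rain or sprinkler) else flip(0.05)
    return rain, grass_wet

def rejection_query(model, condition, n=100_000):
    """Run the model forward, keep samples satisfying the condition,
    and estimate P(rain | condition) from the accepted runs."""
    accepted = [s for s in (model() for _ in range(n)) if condition(s)]
    return sum(rain for rain, _ in accepted) / len(accepted)

# Invert the generative process: estimate P(rain | grass is wet).
print(rejection_query(model, condition=lambda s: s[1]))
```

The same pattern generalizes: any stochastic procedure defines a distribution, and conditioning turns forward simulation into inference.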
by Vikash Kumar Mansinghka.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
2

Sprevak, Mark Daniel. "Computation in mind and world : a realist account of computation in cognitive science." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jonas, Eric Michael. "Stochastic architectures for probabilistic computation." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87457.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-111).
The brain interprets ambiguous sensory information faster and more reliably than modern computers, using neurons that are slower and less reliable than logic gates. But Bayesian inference, which is at the heart of many models for sensory information processing and cognition, as well as many machine intelligence systems, appears computationally challenging, even given modern transistor speeds and energy budgets. The computational principles and structures needed to narrow this gap are unknown. Here I show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude. By connecting stochastic digital components according to simple mathematical rules, it is possible to rapidly, reliably and accurately solve many Bayesian inference problems using massively parallel, low precision circuits. I show that our circuits can solve problems of depth and motion perception, perceptual learning and causal reasoning via inference over 10,000+ latent variables in real time - a 1,000x speed advantage over commodity microprocessors - by exploiting stochasticity. I show how this natively stochastic approach follows naturally from the probability algebra, giving rise to easy-to-understand rules for abstraction and composition. I have developed a compiler that automatically generates circuits for a wide variety of fixed-structure problems. I then present stochastic computing architectures that remain viable even when constrained by silicon area, and that support dynamic creation and destruction of random variables. These results thus expose a new role for randomness and Bayesian inference in the engineering and reverse-engineering of computing machines.
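The compositional idea, stochastic digital parts wired together by simple probability rules, can be caricatured in software; the gate, the coupling value, and the numbers below are invented and stand in for circuits, not for the thesis's hardware:

```python
import math
import random

# Hypothetical software caricature of a stochastic digital part: a gate
# that emits 1 with a given probability, composed into a Gibbs sampler
# for two coupled binary variables. The massive parallelism and low
# precision of the real circuits are abstracted away.

def theta_gate(p):
    """Stochastic primitive: output 1 with probability p."""
    return 1 if random.random() < p else 0

def gibbs_pair(coupling=1.5, steps=20_000):
    """Sample two coupled bits; positive coupling favors agreement."""
    x, y, agree = 0, 0, 0
    for _ in range(steps):
        # Each update is a local stochastic gate conditioned on its neighbor.
        x = theta_gate(1 / (1 + math.exp(-coupling * (2 * y - 1))))
        y = theta_gate(1 / (1 + math.exp(-coupling * (2 * x - 1))))
        agree += (x == y)
    return agree / steps

print(gibbs_pair())  # agreement fraction well above 0.5
```

Because each gate only consults its neighbor, many such updates could in principle run in parallel, which is the intuition behind the thesis's speed claims.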
by Eric Jonas.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
4

Ullman, Michael Thomas. "The computation of inflectional morphology." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghahramani, Zoubin. "Computation and psychophysics of sensorimotor integration." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kell, Alexander James Eaton. "Hierarchy and invariance in auditory cortical computation." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/132746.

Full text
Abstract:
Thesis: Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, June, 2019
Cataloged from the PDF version of thesis. "June 2019"--Hand written on title page.
Includes bibliographical references.
With ease, we recognize a friend's voice in a crowd, or pick out the first violin in a concerto. But the effortlessness of everyday perception masks its computational challenge. Perception does not occur in the eyes and ears - indeed, nearly half of primate cortex is dedicated to it. While much is known about peripheral auditory processing, auditory cortex remains poorly understood. This thesis addresses basic questions about the functional and computational organization of human auditory cortex through three studies. In the first study we show that a hierarchical neural network model optimized to recognize speech and music does so at human levels, exhibits a similar pattern of behavioral errors, and predicts cortical responses, as measured with fMRI. The multi-task optimization procedure we introduce produces separate music and speech pathways after a shared front end, potentially recapitulating aspects of auditory cortical functional organization. Within the model, different layers best predict primary and non-primary voxels, revealing a hierarchical organization in human auditory cortex. We then seek to characterize the representational transformations that occur across stages of the putative cortical hierarchy, probing for one candidate: invariance to real-world background noise. To measure invariance, we correlate voxel responses to natural sounds with and without real-world background noise. Non-primary responses are substantially more noise-invariant than primary responses. These results illustrate a representational consequence of the potential hierarchical organization of the auditory system. Lastly, we explore the generality of deep neural networks as models of human hearing by simulating many psychophysical and fMRI experiments on the above-described neural network model. The results provide an extensive comparison of the performance characteristics and internal representations of a deep neural network with those of humans. We observe many similarities that suggest that the model replicates a broad variety of aspects of auditory perception. However, we also find discrepancies that suggest targets for future modeling efforts.
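The correlation-based invariance measure described above is simple to state; the sketch below uses synthetic responses (invented numbers, not the thesis's fMRI data) to show how a noise-sensitive and a noise-invariant unit would score:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation across sounds: 1.0 = fully noise-invariant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(0)
clean = [random.gauss(0, 1) for _ in range(200)]    # responses to clean sounds
# A "primary-like" unit is strongly perturbed by added background noise;
# a "non-primary-like" unit barely changes.
primary = [r + random.gauss(0, 1.0) for r in clean]
nonprimary = [r + random.gauss(0, 0.2) for r in clean]

print(pearson(clean, primary))      # lower: noise-sensitive
print(pearson(clean, nonprimary))   # near 1: noise-invariant
```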
by Alexander James Eaton Kell.
Ph. D. in Neuroscience
Ph. D. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
APA, Harvard, Vancouver, ISO, and other styles
7

Heirdsfield, Ann M. "Mental computation: The identification of associated cognitive, metacognitive and affective factors." Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/36637/1/36637_Digitised%20Thesis.pdf.

Full text
Abstract:
The purpose of the study was to develop an explanation of why some children are better at addition and subtraction mental computation than others. For the purposes of this thesis, mental computation was defined as "the process of carrying out arithmetic calculations without the aid of external devices" (Sowder, 1988, p.182). To reflect current views of mental computation as calculating with the head, rather than merely in the head, the definition was extended to calculating using strategies with understanding (Anghileri, 1999). Thus, proficiency was not confined to accuracy, but also included flexibility of strategy choice. The study investigated the part played by number sense knowledge (e.g., numeration, number facts, estimation and effects of operations on number), metacognition, affects (e.g., beliefs, attitudes), and memory. The study showed that students proficient in mental computation (accurate and flexible) possessed integrated understandings of number facts (speed, accuracy, and efficient number facts), numeration, and number and operation. These proficient students also exhibited some metacognitive strategies and possessed reasonable short-term memory and executive functioning. Where there was less knowledge and fewer connections between knowledge, students compensated in different ways, depending on their beliefs and what knowledge they possessed. Accurate and inflexible students used the teacher-taught strategy of a mental image of the pen and paper algorithm, in which they held strong beliefs. Combined with fast and accurate number facts and some numeration understanding, their familiarity with this strategy enabled the students to complete the mental computation tasks with accuracy. Working memory was sufficient to use an inefficient mental strategy accurately. The visuospatial scratchpad was used as a visual memory aid.
The inaccurate and flexible students compensated for their poor number facts and minimal and disconnected knowledge base by using a variety of mental strategies in an endeavour to find one that would enable them to complete the calculation. Although their limited numeration understanding and memory (including central executive) were sufficient to support the development of some alternative strategies, these were not high level strategies. Finally, the inaccurate and inflexible students who exhibited deficient and disconnected understanding tried to compensate by using teacher-taught procedures (similar to the strategy employed by accurate and inflexible students), but they were unsuccessful, as they possessed no procedural understanding and also had poor working memory. Detailed analysis of students' knowledge was used to develop frameworks, which explained children's proficiency in addition and subtraction mental computation. The theoretical frameworks explained the influence of contributing factors and the relationships (if any) between them. The frameworks formed the basis of flowcharts, which explained the process in mental computation for each group of students. The importance of connected knowledge for proficient mental computation demonstrates the need for teaching practices to focus on the development of an extensive and integrated knowledge base. Students can and do formulate their own strategies, but do not always use them accurately. Therefore, students should be encouraged to formulate their own strategies but in a supportive environment that assists them to use strategies appropriately. Because of memory load, students should be permitted to use external memory aids (e.g. pen and paper) to assist mental computation. This has a second payoff in that efficient mental strategies are, at times, also efficient written strategies. 
By having students formulate mental strategies, they have to call upon number sense knowledge, thus acquiring connected knowledge while they develop computational procedures. This is in contrast to students using teacher-taught procedures, which require little connected knowledge.
APA, Harvard, Vancouver, ISO, and other styles
8

Wells, Andrew J. "The External Tape Hypothesis : a Turing machine based approach to cognitive computation." Thesis, London School of Economics and Political Science (University of London), 1994. http://etheses.lse.ac.uk/118/.

Full text
Abstract:
The symbol processing or "classical cognitivist" approach to mental computation suggests that the cognitive architecture operates rather like a digital computer. The components of the architecture are input, output and central systems. The input and output systems communicate with both the internal and external environments of the cognizer and transmit codes to and from the rule governed, central processing system which operates on structured representational expressions in the internal environment. The connectionist approach, by contrast, suggests that the cognitive architecture should be thought of as a network of interconnected neuron-like processing elements (nodes) which operates rather like a brain. Connectionism distinguishes input, output and central or "hidden" layers of nodes. Connectionists claim that internal processing consists not of the rule governed manipulation of structured symbolic expressions, but of the excitation and inhibition of activity and the alteration of connection strengths via message passing within and between layers of nodes in the network. A central claim of the thesis is that neither symbol processing nor connectionism provides an adequate characterization of the role of the external environment in cognitive computation. An alternative approach, called the External Tape Hypothesis (ETH), is developed which claims, on the basis of Turing's analysis of routine computation, that the Turing machine model can be used as the basis for a theory which includes the environment as an essential part of the cognitive architecture. The environment is thought of as the tape, and the brain as the control of a Turing machine. Finite state automata, Turing machines, and universal Turing machines are described, including details of Turing's original universal machine construction. 
A short account of relevant aspects of the history of digital computation is followed by a critique of the symbol processing approach as it is construed by influential proponents such as Allen Newell and Zenon Pylyshyn among others. The External Tape Hypothesis is then developed as an alternative theoretical basis. In the final chapter, the ETH is combined with the notion of a self-describing Turing machine to provide the basis for an account of thinking and the development of internal representations.
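The control/tape separation at the heart of the ETH can be made concrete with a standard Turing machine simulator; the unary incrementer below is a textbook toy, with the ETH reading carried only by the comments:

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_state, write, move); '_' is blank."""
    cells = dict(enumerate(tape))      # the tape: the external environment
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        # The finite-state control (the "brain") reads and acts on the
        # environment directly rather than on an internal copy of it.
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1, "S": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Increment a unary number: scan right over the 1s, then append one more.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "S"),
}
print(run_turing_machine(rules, "111"))  # → 1111
```

On the ETH view, the interesting move is in the interpretation: the `cells` dictionary plays the role of the cognizer's environment, while `rules` plays the role of the control.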
APA, Harvard, Vancouver, ISO, and other styles
9

Aboalela, Rania Anwar. "An Assessment of Knowledge by Pedagogical Computation on Cognitive Level mapped Concept Graphs." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1496941747313396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fayez, Almohanad Samir. "Design Space Decomposition for Cognitive and Software Defined Radios." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23180.

Full text
Abstract:
Software Defined Radios (SDRs) lend themselves to flexibility and extensibility because they depend on software to implement radio functionality. Cognitive Engines (CEs) introduce intelligence to radio by monitoring radio performance through a set of meters and configuring the underlying radio design by modifying its knobs. In Cognitive Radio (CR) applications, CEs intelligently monitor radio performance and reconfigure the radio to meet its application and RF channel needs. While the issue of introducing computational knobs and meters is mentioned in the literature, there has been little work on the practical issues involved in introducing such computational radio controls.

This dissertation decomposes the radio definition into reactive models for the CE domain and real-time, or dataflow, models for the SDR domain. By allowing such design space decomposition, CEs are able to define implementation-independent radio graphs and rely on a model transformation layer to transform reactive radio models into real-time radio models for implementation. The definition of knobs and meters in the CE domain is based on properties of the dataflow models used in implementing SDRs. A framework for developing this work is presented, and proof-of-concept radio applications are discussed to demonstrate how CEs can gain insight into computational aspects of their radio implementation during their reconfiguration decision process.
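The knobs-and-meters loop described above can be sketched as a toy control loop; the radio model, meter formula, and threshold are all invented for illustration and do not come from the dissertation:

```python
import random

class ToyRadio:
    """SDR stand-in with one knob (tx_power) and one meter (packet error rate)."""
    def __init__(self):
        self.tx_power = 1                  # knob, arbitrary units 1..10
    def meter_per(self):
        # Imaginary channel model: higher power lowers the packet error rate.
        return max(0.0, 0.5 - 0.05 * self.tx_power) + random.uniform(0, 0.05)

def cognitive_engine(radio, target_per=0.1, max_iters=20):
    """Reactive CE loop: read the meter, turn the knob until the goal is met."""
    for _ in range(max_iters):
        if radio.meter_per() <= target_per:
            break
        radio.tx_power = min(10, radio.tx_power + 1)
    return radio.tx_power

random.seed(0)
print(cognitive_engine(ToyRadio()))  # settles at a power level meeting the target
```

The CE here reasons only about the knob/meter interface; the point of the dissertation's decomposition is that the underlying dataflow implementation can change without changing this loop.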

Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
11

Rieuf, Vincent. "Impact de l’expérience immersive sur la prise en compte du kansei en design industriel amont." Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0027/document.

Full text
Abstract:
In an ever-changing industrial context, the industrial designer uses representation as a vector for inspiration and as a tool for making stylistic choices, which in turn shape the experience induced by the designed product. This doctoral research presents a comparative study of traditional early design activity and immersive early design activity, enabling the evaluation and modeling of Virtual Kansei Design. My work essentially addresses the application and experimentation of fundamental theories through the design of two successive tools composing an innovative early design process. • The Immersive Moodboards are spatial immersive inspirational environments dedicated to the understanding of a stylistic trend, designed to substitute for and enhance traditional moodboards. • The Immersive Sketching tool is a generational environment enabling the designer to position, erase, manipulate… a graphical mark in three-dimensional space, intended for the creation of the first ideation sketches. This research aims to develop tools and a digital immersive workflow that, first, enable the designer to anticipate Kansei (the holistic relationship between the designer/user and the product) in order to optimize strategic style-related choices and, second, enhance the fidelity between the inspiration space and the generation space while increasing the designer's ability to produce innovative and aesthetic concepts.
APA, Harvard, Vancouver, ISO, and other styles
12

Riera, Villanueva Marc. "Low-power accelerators for cognitive computing." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669828.

Full text
Abstract:
Deep Neural Networks (DNNs) have achieved tremendous success for cognitive applications, and are especially efficient in classification and decision making problems such as speech recognition or machine translation. Mobile and embedded devices increasingly rely on DNNs to understand the world. Smartphones, smartwatches and cars perform discriminative tasks, such as face or object recognition, on a daily basis. Despite the increasing popularity of DNNs, running them on mobile and embedded systems comes with several main challenges: delivering high accuracy and performance with a small memory and energy budget. Modern DNN models consist of billions of parameters requiring huge computational and memory resources and, hence, they cannot be directly deployed on low-power systems with limited resources. The objective of this thesis is to address these issues and propose novel solutions in order to design highly efficient custom accelerators for DNN-based cognitive computing systems. First, we focus on optimizing the inference of DNNs for sequence processing applications. We perform an analysis of the input similarity between consecutive DNN executions. Then, based on the high degree of input similarity, we propose DISC, a hardware accelerator implementing a Differential Input Similarity Computation technique to reuse the computations of the previous execution, instead of computing the entire DNN. We observe that, on average, more than 60% of the inputs of any neural network layer tested exhibit negligible changes with respect to the previous execution. Avoiding the memory accesses and computations for these inputs results in 63% energy savings on average. Second, we propose to further optimize the inference of FC-based DNNs. We first analyze the number of unique weights per input neuron of several DNNs.
Exploiting common optimizations, such as linear quantization, we observe a very small number of unique weights per input for several FC layers of modern DNNs. Then, to improve the energy-efficiency of FC computation, we present CREW, a hardware accelerator that implements a Computation Reuse and an Efficient Weight Storage mechanism to exploit the large number of repeated weights in FC layers. CREW greatly reduces the number of multiplications and provides significant savings in model memory footprint and memory bandwidth usage. We evaluate CREW on a diverse set of modern DNNs. On average, CREW provides 2.61x speedup and 2.42x energy savings over a TPU-like accelerator. Third, we propose a mechanism to optimize the inference of RNNs. RNN cells perform element-wise multiplications across the activations of different gates, sigmoid and tanh being the common activation functions. We perform an analysis of the activation function values, and show that a significant fraction are saturated towards zero or one in popular RNNs. Then, we propose CGPA to dynamically prune activations from RNNs at a coarse granularity. CGPA avoids the evaluation of entire neurons whenever the outputs of peer neurons are saturated. CGPA significantly reduces the amount of computations and memory accesses while avoiding sparsity to a large extent, and can be easily implemented on top of conventional accelerators such as TPU with negligible area overhead, resulting in 12% speedup and 12% energy savings on average for a set of widely used RNNs. Finally, in the last contribution of this thesis we focus on static DNN pruning methodologies. DNN pruning reduces memory footprint and computational work by removing connections and/or neurons that are ineffectual. However, we show that prior pruning schemes require an extremely time-consuming iterative process that requires retraining the DNN many times to tune the pruning parameters.
Then, we propose a DNN pruning scheme based on Principal Component Analysis and the relative importance of each neuron's connections that automatically finds the optimized DNN in one shot, without manually tuning multiple pruning parameters.
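The differential input-similarity idea behind DISC, reusing the previous execution's work for near-unchanged inputs, can be sketched for a single fully-connected layer; the caching scheme and numbers below are invented, and the real DISC is a hardware accelerator, not Python:

```python
def diff_layer(weights, inputs, prev=None, tol=1e-3):
    """One FC layer with differential computation reuse.

    prev is (prev_inputs, contrib) from the previous execution, where
    contrib[j][i] caches weights[j][i] * prev_inputs[i]. Returns
    (outputs, state for the next call, number of inputs reused).
    """
    n_out, n_in = len(weights), len(inputs)
    if prev is None:
        contrib = [[weights[j][i] * inputs[i] for i in range(n_in)]
                   for j in range(n_out)]
        reused = 0
    else:
        prev_inputs, contrib = prev
        contrib = [row[:] for row in contrib]
        reused = 0
        for i in range(n_in):
            if abs(inputs[i] - prev_inputs[i]) <= tol:
                reused += 1        # negligible change: keep cached products
            else:
                for j in range(n_out):
                    contrib[j][i] = weights[j][i] * inputs[i]
    outputs = [sum(row) for row in contrib]
    return outputs, (inputs, contrib), reused

w = [[1.0, 2.0], [3.0, 4.0]]
out1, state, _ = diff_layer(w, [0.5, 0.25])
out2, state, reused = diff_layer(w, [0.5, 0.30], prev=state)
print(out2, reused)  # only the second input was recomputed
```

In sequence-processing workloads, where consecutive inputs overlap heavily, most columns take the cheap branch, which is the source of the energy savings the abstract reports.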
APA, Harvard, Vancouver, ISO, and other styles
13

Buss, Aaron Thomas. "Closing the developmental loop on the behavioral and neural dynamics of flexible rule-use." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4949.

Full text
Abstract:
Executive function (EF) is a central aspect of cognition that undergoes significant changes in early childhood. Changes in EF in early childhood are robustly predictive of academic achievement and general quality of life measures later in adulthood. I develop a dynamic neural field (DNF) model which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function--the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 5-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In Study 1, I use the DNF model to integrate behavioral and neural processes by simulating hemodynamics associated with the early emergence of flexible rule-use. I then test predictions of the model using near-infrared spectroscopy. In Study 2, I develop a version of the DCCS task for use with adults that sheds light on key aspects of the task as they have been revealed with children. Using fMRI, a pattern of behavioral and neural effects sheds light on the central processes involved in flexible rule-use. These two studies demonstrate that performance emerges as a property of system-wide interactions and that common neurocognitive effects can be found between childhood and adulthood.
APA, Harvard, Vancouver, ISO, and other styles
14

Lundqvist, Tomas. "Creating Resilience – A Matter of Control or Computation? : Resilience Engineering explored through the lenses of Cognitive Systems Engineering and Distributed Cognition in a patient safety case study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102366.

Full text
Abstract:
In recent years, the research approach known as Resilience Engineering (RE) has offered a promising new way of understanding safety-critical organizations, but less in the way of empirical methods for analysis. In this master’s thesis, an extensive comparison was made between RE and two different research approaches on cognitive systems: Distributed Cognition (DC) and Cognitive Systems Engineering (CSE) with the aim of exploring whether these approaches can contribute to the analysis and understanding of resilience. In addition to a theoretical comparison, an ethnographic healthcare case study was conducted, analyzing the patient safety at a pediatric emergency department using the Three-Level Analytical Framework from DC and the Extended Control Model from CSE, then conducting an RE analysis based on the former two analyses. It was found that while the DC and CSE approaches can explain how an organization adapts to current demands, neither approach fully addresses the issue of future demands anticipation, central to the RE perspective. However, the CSE framework lends itself well as an empirical ground providing the entry points for a more thoroughgoing RE analysis, while the inclusion of physical context in a DC analysis offers valuable insights to safety-related issues that would otherwise be left out in the study of resilience.
APA, Harvard, Vancouver, ISO, and other styles
15

Burke, Lauren. "Computer Science Education at The Claremont Colleges: The Building of an Intuition." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/scripps_theses/875.

Full text
Abstract:
In this thesis, I discuss how the undergraduate computer scientist is trained, and how they learn what I am calling computational intuition. Computational intuition describes the methodology by which computer scientists approach problems and solve them through the use of computers. It is a series of skills and a way of thinking about, or approaching, problems that students learn throughout their education. The main way that computational intuition is taught is through the experience students gain as they work on homework and classwork problems. To develop computational intuition, students learn explicit knowledge and techniques as well as knowledge that is tacit and harder to teach within the lectures of a classroom environment. Computational intuition encompasses concepts that professors and students discuss, including “computer science intuition,” “computational thinking,” general problem solving skills or heuristics, and trained judgement. This way of learning is often social, and I draw on the pedagogy of cognitive apprenticeship to understand how interactions with professors, tutors, and other students help learners gain an understanding of the “computer science intuition.” Computer scientists at the Claremont Colleges have identified this method of thinking as one of the most essential things to be taught and gained throughout their education, and it signals a wider understanding of computer science as a field.
APA, Harvard, Vancouver, ISO, and other styles
16

Schultheis, Holger. "Computational cognitive modeling of control in spatial cognition." Lengerich Berlin Bremen Miami, Fla. Riga Viernheim Wien Zagreb Pabst Science Publ, 2009. http://d-nb.info/998029661/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Powell, Nathaniel V. "The role of Uncertainty in Categorical Perception Utilizing Statistical Learning in Robots." ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/581.

Full text
Abstract:
At the heart of statistical learning lies the concept of uncertainty. Similarly, embodied agents such as robots and animals must likewise address uncertainty, as sensation is always only a partial reflection of reality. This thesis addresses the role that uncertainty can play in a central building block of intelligence: categorization. Cognitive agents are able to perform tasks like categorical perception through physical interaction (active categorical perception; ACP), or passively at a distance (distal categorical perception; DCP). It is possible that the former scaffolds the learning of the latter. However, it is unclear whether ACP indeed scaffolds DCP in humans and animals, or how a robot could be trained to likewise learn DCP from ACP. Here we demonstrate a method for doing so which involves uncertainty: robots perform ACP when uncertain and DCP when certain. Furthermore, we demonstrate that robots trained in such a manner are more competent at categorizing novel objects than robots trained to categorize in other ways. This suggests that such a mechanism would also be useful for humans and animals, and that they may be employing some version of it.
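The gating rule the abstract describes (perform ACP when uncertain, DCP when certain) can be sketched in a few lines. This is an illustrative toy only: the thesis's robots use evolved controllers, and the classifier, confidence measure, and threshold below are invented for the example.

```python
# A minimal sketch of uncertainty-gated categorization. The classifier,
# the "interaction" model, and the threshold are all hypothetical.

def toy_classifier(feature):
    # Pretend confidence is just the distance of a scalar feature from 0.5.
    label = "round" if feature > 0.5 else "flat"
    confidence = abs(feature - 0.5) * 2.0
    return label, confidence

def interact(feature):
    # Physical interaction (ACP) is modeled as sharpening the percept.
    return 1.0 if feature > 0.5 else 0.0

def categorize(feature, threshold=0.6):
    label, confidence = toy_classifier(feature)
    if confidence >= threshold:
        return label, "DCP"   # certain enough: categorize at a distance
    label, _ = toy_classifier(interact(feature))
    return label, "ACP"       # uncertain: categorize via interaction

print(categorize(0.95))  # high confidence, so the distal route is used
print(categorize(0.55))  # low confidence, so the robot interacts first
```

The design point is simply that uncertainty selects the categorization route; any confidence estimate (e.g. a softmax margin) could play the gating role.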
APA, Harvard, Vancouver, ISO, and other styles
18

Tyska, Carvalho Jônata. "Adaptive behaviour in evolving robots." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/10547.

Full text
Abstract:
In this thesis, the evolution of adaptive behaviour in artificial agents is studied. More specifically, two types of adaptive behaviours are studied: articulated and cognitive ones. Chapter 1 presents a general introduction together with a brief presentation of the research area of this thesis, its main goals, and a brief overview of the experimental studies done and the results and conclusions obtained. In chapter 2, I briefly present some promising methods that automatically generate robot controllers and/or body plans and could potentially help in the development of adaptive robots. Among these methods I present in detail evolutionary robotics, a method inspired by natural evolution, and the biological background regarding adaptive behaviours in biological organisms, which provided inspiration for the studies presented in this thesis. In chapter 3, I present a detailed study regarding the evolution of articulated behaviours, i.e., behaviours that are organized in functional sub-parts and that are combined and used in a sequential and context-dependent way, regardless of whether there is a structural division in the robot controller or not. The experiments, performed with a single-goal task (a cleaning task), showed that it is possible to evolve articulated behaviours even in this condition and without structural division of the robot controller. The analysis of the results also showed that this type of integrated modular behaviour brought performance advantages compared to structurally divided controllers. Analysis of the robots' behaviours helped to clarify that the evolution of this type of behaviour depended on the characteristics of the neural network controllers and the robots' sensorimotor capacities, which in turn defined the capacity of the robots to generate opportunities for action, often called affordances in the psychological literature. In chapter 4, a study is presented seeking to understand the role of reactive strategies in the evolution of cognitive solutions, i.e. those capable of integrating information over time and encoding it in internal states that regulate the robot's behaviour in the future. More specifically, I tried to understand whether the existence of sub-optimal reactive strategies prevents the development of cognitive solutions, or whether they can promote the evolution of solutions capable of combining reactive strategies and the use of internal information for solving a delayed response task, the double T-maze. The results obtained showed that reactive strategies capable of offloading cognitive work onto the agent/environment relation can promote, rather than prevent, the evolution of solutions relying on internal information. The analysis of these results clarified how these two mechanisms interact, producing a superior and robust hybrid solution for the delayed response task.
APA, Harvard, Vancouver, ISO, and other styles
19

Costa, César Rennó. "Controle de síntese sonora por analogia acústica e semântica aplicando computação bio-inspirada." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259090.

Full text
Abstract:
Advisors: Fernando José Von Zuben, Jônatas Manzolli
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-17T09:26:02Z (GMT). No. of bitstreams: 1 Costa_CesarRenno_M.pdf: 8817422 bytes, checksum: f05c86a8d8717568f1afd9da373b6a55 (MD5) Previous issue date: 2007
Resumo: Este trabalho sugere novos paradigmas de controle de mecanismos de síntese sonora. Utilizando conceitos das ciências cognitivas, o processo gerativo é modelado como um sistema de conversões entre representações, da atuação subjetiva do usuário, passando pela descritiva e culminando no material sonoro. A partir do estudo da analogia descritiva, engendra-se a analogia acústica, representação por amostras sonoras, e a analogia semântica, representação por linguagem. Aplicadas à arquitetura modelada, essas analogias permitem que o processo de síntese sonora tenha um caráter mais intuitivo. São apresentadas duas implementações práticas, sendo que técnicas de computação bio-inspirada fornecem o maquinário computacional para a realização do mapeamento entre representações e controle do processo de síntese
Abstract: This work suggests novel control paradigms of sound synthesis mechanisms. Applying cognitive science concepts, the generative process is modeled as a system of conversions throughout representations: from user's insight, through descriptive, to the sound material. From descriptive analogy studies, the acoustic analogy (representation through sound) and the semantic analogy (representation through language) are engendered. Applied to the modeled architecture, these analogies allow the synthesis process to have a more intuitive nature. Two practical implementations are presented. Bio-inspired computing provides the computational machinery used to map different representations and to control the synthesis process
Master's degree
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
20

Boruta, Luc. "Indicateurs d'allophonie et de phonémicité." Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00746163.

Full text
Abstract:
Although we distinguish only a finite and restricted number of sound categories (the phonemes of a given language), the sounds of the messages we receive are never identical. Given the ubiquity of allophonic processes across languages and the fact that each language has its own phonemic inventory, what kinds of cues could infants, for example English-learning infants, exploit to discover that [sıŋkıŋ] and [θıŋkıŋ] (sinking vs. thinking) cannot denote the same action? The work presented in this thesis extends the line of research initiated by Peperkamp et al. (2006) concerning the definition of phone-to-phone dissimilarity measures that indicate which phones are realizations of the same phoneme. We show that solving the task proposed by Peperkamp et al. does not fully answer the problem of phoneme acquisition, mainly because empirical and formal limitations follow from its phone-to-phone formulation. We reformulate the problem as an unsupervised machine learning clustering problem based on multidimensional scaling of the data. The results of various supervised and unsupervised learning experiments consistently indicate that good indicators of allophony are not necessarily good indicators of phonemicity. Overall, the computational results presented in this work suggest that allophony and phonemicity can only be discovered from acoustic, temporal, distributional, or lexical information if, on average, phonemes exhibit little variability.
APA, Harvard, Vancouver, ISO, and other styles
21

Lowry, Mark D. "Evaluating Theories of Bilingual Language Control Using Computational Models." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7852.

Full text
Abstract:
Bilingual language control refers to how bilinguals are able to speak exclusively in one language without the unintended language intruding. Two prominent verbal theories of bilingual language control have been proposed by researchers: the inhibitory control model (ICM) and the lexical selection mechanism model (LSM). The ICM posits that domain-general inhibition is employed in order to suppress the unintended language’s activation. The LSM posits that inhibition is not used; rather, a lexical selection mechanism targets only the intended language’s words. In order to better test the theories’ hypotheses, I developed computational models to estimate participants’ reaction times when naming in blocks of semantically related pictures and in blocks of semantically unrelated pictures. For these tasks, the ICM model predicts that semantic interference will be abolished when bilinguals switch languages, while the LSM model does not. In Experiment One, English-Spanish bilinguals named pictures that were either semantically related to the previous four trials, or semantically unrelated to the previous four trials. Results indicated that language switching did not abolish priming effects, supporting the ICM. These results contradict conclusions found in previous literature. To reconcile this, another experiment was conducted. It was similar to Experiment One, except that filler trials separated the semantically related trials. Results showed that each time a semantically related neighbor was presented, naming latency increased by ~10 ms, regardless of language switching or number of filler items. This suggests that the existing literature mistook incremental learning effects for priming effects, and it demonstrates a need to incorporate theories of incremental learning into theories of bilingual language control.
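The incremental-learning account in the final sentences can be illustrated with a toy latency model. Only the ~10 ms per-neighbor increment comes from the abstract; the baseline reaction time and the category labels below are invented for the example.

```python
# Toy illustration (not the thesis's actual model) of incremental learning:
# each previously named semantic neighbor adds ~10 ms to naming latency,
# regardless of intervening filler trials or language switches.

BASE_RT_MS = 700           # hypothetical baseline naming latency
COST_PER_NEIGHBOR_MS = 10  # per-neighbor increment reported in the abstract

def predicted_rt(trial_category, history):
    """Predict naming RT from how many same-category items were named before."""
    neighbors_named = sum(1 for cat in history if cat == trial_category)
    return BASE_RT_MS + COST_PER_NEIGHBOR_MS * neighbors_named

history = []
for cat in ["animal", "tool", "animal", "tool", "animal"]:
    print(cat, predicted_rt(cat, history))
    history.append(cat)
```

Because the cost depends only on the count of prior same-category namings, the model predicts the same build-up whether or not fillers intervene, which is the signature distinguishing incremental learning from short-lived priming.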
APA, Harvard, Vancouver, ISO, and other styles
22

Madl, Tamas. "Bayesian mechanisms in spatial cognition : towards real-world capable computational cognitive models of spatial memory." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/bayesian-mechanisms-in-spatial-cognition-towards-realworld-capable-computational-cognitive-models-of-spatial-memory(665d1016-b841-47de-9b2d-40ddd8a0ff0d).html.

Full text
Abstract:
Existing computational cognitive models of spatial memory often neglect difficulties posed by the real world, such as sensory noise, uncertainty, and high spatial complexity. On the other hand, robotics is unconcerned with understanding biological cognition. This thesis takes an interdisciplinary approach towards developing cognitively plausible spatial memory models able to function in realistic environments, despite sensory noise and spatial complexity. We hypothesized that Bayesian localization and error correction account for how brains might maintain accurate location estimates, despite sensory errors. We argued that these mechanisms are psychologically plausible (producing human-like behaviour) as well as neurally plausible (implementable in brains). To support our hypotheses, we reported modelling results of neural recordings from rats (acquired outside this PhD), constituting the first evidence for Bayesian inference in neurons representing spatial location, as well as modelling human behaviour data. In addition to dealing with uncertainty, spatial representations have to be stored and used efficiently in realistic environments, by using structured representations such as hierarchies (which facilitate efficient retrieval and route planning). Evidence suggests that human spatial memories are structured hierarchically, but the process responsible for these structures has not been known. We investigated the features influencing them using data from experiments in real-world and virtual reality environments, and proposed a computational model able to predict them in advance (based on clustering in psychological space). We have extended a general cognitive architecture, LIDA (Learning Intelligent Distribution Agent), with these probabilistic models of how brains might estimate, correct, and structure representations of spatial locations.
We demonstrated the ability of the resulting model to deal with the challenges of realistic environments by running it in high-fidelity robotic simulations, modelled after participants' actual cities. Our results show that the model can deal with noise, uncertainty and complexity, and that it can reproduce the spatial accuracies of human participants.
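The Bayesian error-correction idea at the core of the hypothesis can be illustrated with a standard one-dimensional Gaussian fusion (Kalman-style) update. This sketch is generic, not the thesis's model, and the numerical values are invented.

```python
# Fuse a Gaussian prior over position with a noisy observation.
# This is the textbook 1-D Bayesian update, shown only to illustrate
# how an error-correction mechanism can sharpen a location estimate.

def bayes_update(mu_prior, var_prior, observation, var_obs):
    """Return the posterior mean and variance after one noisy reading."""
    k = var_prior / (var_prior + var_obs)   # how much to trust the sensor
    mu_post = mu_prior + k * (observation - mu_prior)
    var_post = (1.0 - k) * var_prior        # posterior is always sharper
    return mu_post, var_post

# Dead reckoning says 10.0 m with accumulated drift (variance 4.0);
# a landmark observation says 12.0 m with variance 1.0.
mu, var = bayes_update(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # the estimate is pulled toward the more reliable observation
```

The same weighting logic generalizes to neural population codes: the less reliable source contributes less to the combined estimate.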
APA, Harvard, Vancouver, ISO, and other styles
23

Lie, Nga-sze, and 李雅詩. "Abduction and computation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B4819928X.

Full text
Abstract:
In the thesis, Fodor’s arguments against computationalism are defeated. His arguments appeal to syntactic constraints and intractability. We argue that the arguments based on syntactic constraints are not satisfactory, and then that the argument via intractability is not satisfactory either. We also discuss various approaches to the problem of abduction in a computationalist setting. First, we argue that the social solution, according to which human everyday cognitive activity is not isotropic and Quinean, is correct. Secondly, we argue that the local solution is too preliminary a proposal, and we raise objections concerning the calculation of the effect-to-effort ratio and the claim that memory organization leads one to relevant information. Thirdly, we argue that the natural language approach is circular. Fourthly, we argue that the web search approach provides a partial account of finding relevant information but leaves out the key problem of evaluating the search results. Fifthly, we argue that the global workspace approach relegates the most important part of the solution to consciousness. In the end, we give a framework sketching mechanisms that could solve the problem of abduction.
published_or_final_version
Philosophy
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
24

Carbonaro, Michael David. "Computational cognitive modeling of concept attainment." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22960.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Miri, Hossein. "CernoCAMAL : a probabilistic computational cognitive architecture." Thesis, University of Hull, 2012. http://hydra.hull.ac.uk/resources/hull:6887.

Full text
Abstract:
This thesis presents one possible way to develop a computational cognitive architecture, dubbed CernoCAMAL, that can be used to govern artificial minds probabilistically. The primary aim of the CernoCAMAL research project is to investigate how its predecessor architecture CAMAL can be extended to reason probabilistically about domain model objects through perception, and how the probability formalism can be integrated into its BDI (Belief-Desire-Intention) model to coalesce a number of mechanisms and processes. The motivation and impetus for extending CAMAL and developing CernoCAMAL is the considerable evidence that probabilistic thinking and reasoning is linked to cognitive development and plays a role in cognitive functions, such as decision making and learning. This leads us to believe that a probabilistic reasoning capability is an essential part of human intelligence. Thus, it should be a vital part of any system that attempts to emulate human intelligence computationally. The extensions and augmentations to CAMAL, which are the main contributions of the CernoCAMAL research project, are as follows:
- The integration of the EBS (Extended Belief Structure), which associates a probability value with every belief statement, in order to represent degrees of belief numerically.
- The inclusion of the CPR (CernoCAMAL Probabilistic Reasoner), which reasons probabilistically over the goal- and task-oriented perceptual feedback generated by reactive sub-systems.
- The compatibility of the probabilistic BDI model with the affect and motivational models and the affective and motivational valences used throughout CernoCAMAL.
A succession of experiments in simulation and robotic testbeds is carried out to demonstrate improvements and increased efficacy in CernoCAMAL’s overall cognitive performance.
A discussion and critical appraisal of the experimental results, together with a summary, a number of potential future research directions, and some closing remarks conclude the thesis.
APA, Harvard, Vancouver, ISO, and other styles
26

Seminck, Olga. "Cognitive Computational Models of Pronoun Resolution." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC184/document.

Full text
Abstract:
La résolution des pronoms est le processus par lequel un pronom anaphorique est mis en relation avec son antécédent. Les humains en sont capables sans efforts notables en situation normale. En revanche, les systèmes automatiques ont une performance qui reste loin derrière, malgré des algorithmes de plus en plus sophistiqués, développés par la communauté du Traitement Automatique des Langues. La recherche en psycholinguistique a montré à travers des expériences qu'au cours de la résolution de nombreux facteurs sont pris en compte par les locuteurs. Une question importante se pose : comment les facteurs interagissent et quel poids faut-il attribuer à chacun d'entre eux ? Une deuxième question qui se pose alors est comment les théories linguistiques de la résolution des pronoms incorporent tous les facteurs. Nous proposons une nouvelle approche à ces problématiques : la simulation computationnelle de la charge cognitive de la résolution des pronoms. La motivation pour notre approche est double : d'une part, l'implémentation d'hypothèses par un système computationnel permet de mieux spécifier les théories, d’autre part, les systèmes automatiques peuvent faire des prédictions sur des données naturelles comme les corpus de mouvement oculaires. De cette façon, les modèles computationnels représentent une alternative aux expériences classiques avec des items expérimentaux construits manuellement. Nous avons fait plusieurs expériences afin d'explorer les modèles cognitifs computationnels de la résolution des pronoms. D'abord, nous avons simulé la charge cognitive des pronoms en utilisant des poids de facteurs de résolution appris sur corpus. Ensuite, nous avons testé si les concepts de la Théorie de l’Information sont pertinents pour prédire la charge cognitive des pronoms. Finalement, nous avons procédé à l’évaluation d’un modèle psycholinguistique sur des données issues d’un corpus enrichi de mouvements oculaires. 
Les résultats de nos expériences montrent que la résolution des pronoms est en effet multi-factorielle et que l’influence des facteurs peut être estimée sur corpus. Nos résultats montrent aussi que des concepts de la Théorie de l’Information sont pertinents pour la modélisation des pronoms. Nous concluons que l’évaluation des théories sur des données de corpus peut jouer un rôle important dans le développement de ces théories et ainsi amener dans le futur à une meilleure prise en compte du contexte discursif
Pronoun resolution is the process in which an anaphoric pronoun is linked to its antecedent. In a normal situation, humans do not experience much cognitive effort due to this process. However, automatic systems perform far from human accuracy, despite the efforts made by the Natural Language Processing community. Experimental research in the field of psycholinguistics has shown that during pronoun resolution many linguistic factors are taken into account by speakers. An important question is thus how much influence each of these factors has and how the factors interact with each other. A second question is how linguistic theories about pronoun resolution can incorporate all relevant factors. In this thesis, we propose a new approach to answer these questions: computational simulation of the cognitive load of pronoun resolution. The motivation for this approach is two-fold. On the one hand, implementing hypotheses about pronoun resolution in a computational system leads to a more precise formulation of theories. On the other hand, robust computational systems can be run on uncontrolled data such as eye movement corpora and thus provide an alternative to hand-constructed experimental material. In this thesis, we conducted various experiments. First, we simulated the cognitive load of pronouns by learning the magnitude of impact of various factors on corpus data. Second, we tested whether concepts from Information Theory were relevant to predict the cognitive load of pronoun resolution. Finally, we evaluated a theoretical model of pronoun resolution on a corpus enriched with eye movement data. Our research shows that multiple factors play a role in pronoun resolution and that their influence can be estimated on corpus data. We also demonstrate that the concepts of Information Theory play a role in pronoun resolution.
We conclude that the evaluation of hypotheses on corpus data enriched with cognitive data, such as eye movement data, plays an important role in the development and evaluation of theories. We expect that corpus-based methods will lead to better modelling of the influence of discourse structure on pronoun resolution in future work.
APA, Harvard, Vancouver, ISO, and other styles
27

Gok, Selvi Elif. "Modeling Consciousness: A Comparison Of Computational Models." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611178/index.pdf.

Full text
Abstract:
There has been a recent flurry of activity in consciousness research. Although an operational definition of consciousness has not yet been developed, philosophy has come to identify a set of features and aspects that are thought to be associated with the various elements of consciousness. On the other hand, there have been several recent attempts to develop computational models of consciousness that are claimed to capture or illustrate one or more aspects of consciousness. As a plausible substitute for evaluating how well the current computational models model consciousness, this study examines how they fare in modeling those aspects and features of consciousness identified by philosophy. Following a detailed and critical review of the literature of the philosophy of consciousness, this study constructs a composite and eclectic list of features and aspects that would be expected in any successful model of consciousness. The study then evaluates, from the viewpoint of that list, some of the current self-claimed computational models of consciousness, specifically CLARION, IDA, ACT-R, and the model proposed in Cleeremans' review and study. The computational models studied are evaluated with respect to each identified aspect and feature of consciousness.
APA, Harvard, Vancouver, ISO, and other styles
28

Urgen, Burcu Aysen. "A Philosophical Analysis Of Computational Modeling In Cognitive Science." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608832/index.pdf.

Full text
Abstract:
This study analyses the methodology of computational cognitive modeling as one of the ways of conducting research in cognitive science. The aim of the study is to provide an understanding of the place of computational cognitive models in understanding human cognition. Considering the vast number of computational cognitive models that have been put forward to account for some cognitive phenomenon by solely simulating some experimental study and fitting empirical data, a practice-oriented approach is adopted in this study to understand the work of the modeler, and accordingly to discover the potential of computational cognitive models beyond their being simulation tools. In pursuit of this aim, a framework with a practice-oriented approach from the philosophy of science literature, namely Morgan & Morrison's (1999) account, is employed on a case study. The framework emphasizes four key elements to understand the place of models in science: the construction of models, the function of models, the representation they provide, and the ways we learn from models. The case study, Q-Soar (Simon, Newell & Klahr, 1991), is a model built with the Soar cognitive architecture (Laird, Newell & Rosenbloom, 1987) which is representative of a class of computational cognitive models. Discussions are included on how to make generalizations for computational cognitive models beyond this class, i.e. for models that are built with other modeling paradigms.
APA, Harvard, Vancouver, ISO, and other styles
29

Rendell, Nicholas. "Mechanisms of cognitive reserve : computational and experimental explorations." Thesis, Birkbeck (University of London), 2017. http://bbktheses.da.ulcc.ac.uk/256/.

Full text
Abstract:
Cognitive reserve is the name given to the latent variable that describes individual differences in the ability to offset cognitive decline in old age. This thesis attempts to provide mechanistic explanations for two major aspects of cognitive reserve: neural compensation and neural reserve. Furthermore, behavioural experiments carried out as part of this investigation have extended the knowledge of existing theories as to the age invariance of neural compensation and the relationship between language, other more traditional proxies of cognitive reserve, and executive control. The studies carried out in this thesis have demonstrated a biologically viable mechanism for the monitoring of task demand with resultant control of interhemispheric communication as a method of compensation. Further, this aspect of neural compensation was not found in younger participants. The neural network model in this thesis demonstrated differences over age in the spacing of representations for bilingual and monolingual networks, as well as demonstrating increased inhibition in the bilingual network as a result of a negative relationship between weights from the tags of each language to nodes in the hidden layer. Finally, regression analysis using data from two large-scale behavioural experiments demonstrated a minimal influence of bilingual language use on performance in executive control tasks. The models in this thesis provide an insight into the mechanisms behind cognitive reserve whilst supporting empirical results. Further, the results from the neural network model allowed predictions to be made with regard to the performance of bilinguals in dual category retrieval tasks. The lack of a relationship between bilingualism and cognitive control is supported by emerging research in the area and suggests that the functionality underlying cognitive reserve may be better described by biological rather than cognitive processes.
APA, Harvard, Vancouver, ISO, and other styles
30

Jin, Lifeng. "Computational Modeling of Syntax Acquisition with Cognitive Constraints." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1594934826359118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Alcock, Rupert. "Governing the new unconscious : cognition, computation and biopolitics." Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.723433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kleiman-Weiner, Max. "Computational foundations of human social intelligence." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120621.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 199-211).
This thesis develops formal computational cognitive models of the social intelligence underlying human cooperation and morality. Human social intelligence is uniquely powerful. We collaborate with others to accomplish together what none of us could do on our own; we share the benefits of collaboration fairly and trust others to do the same. Even young children work and play collaboratively, guided by normative principles, and with a sophistication unparalleled in other animal species. Here, I seek to understand these everyday feats of social intelligence in computational terms. What are the cognitive representations and processes that underlie these abilities and what are their origins? How can we apply these cognitive principles to build machines that have the capacity to understand, learn from, and cooperate with people? The overarching formal framework of this thesis is the integration of individually rational, hierarchical Bayesian models of learning, together with socially rational multi-agent and game-theoretic models of cooperation. I use this framework to probe cognitive questions across three time-scales: evolutionary, developmental, and in the moment. First, I investigate the evolutionary origins of the cognitive structures that enable cooperation and support social learning. I then describe how these structures are used to learn social and moral knowledge rapidly during development, leading to the accumulation of knowledge over generations. Finally I show how this knowledge is used and generalized in the moment, across an infinitude of possible situations. This framework is applied to a variety of cognitively challenging social inferences: determining the intentions of others, distinguishing who is friend or foe, and inferring the reputation of others all from just a single observation of behavior. It also answers how these inferences enable fair and reciprocal cooperation, the computation of moral permissibility, and moral learning. 
This framework predicts and explains human judgment and behavior measured in large-scale multi-person experiments. Together, these results shine light on how the scale and scope of human social behavior is ultimately grounded in the sophistication of our social intelligence.
by Max Kleiman-Weiner.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
33

Mukovskiy, Albert [Verfasser]. "Computational Methods for Cognitive and Cooperative Robotics / Albert Mukovskiy." Tübingen : Universitätsbibliothek Tübingen, 2019. http://d-nb.info/1227480946/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chada, Daniel de Magalhães. "From cognitive science to management science: two computational contributions." reponame:Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/17053.

Full text
Abstract:
Made available in DSpace on 2016-09-12T13:03:31Z (GMT). Previous issue date: 2011
This work comprises two contributions. The first borrows from Charles Kemp and Joshua Tenenbaum's work on the discovery of structural form: their model is used to study the Business Week rankings of U.S. business schools, and to investigate how other structural forms (structured visualizations) of the same information used to generate the rankings can yield insights into the space of U.S. business schools, and into rankings in general. The second essay is purely theoretical: it develops a model of human memory that does not exceed our psychological short-term memory limitations. It builds on Pentti Kanerva's Sparse Distributed Memory, in which human memories are registered in a vast (but virtual) memory space, with registration occurring in a massively parallel and distributed fashion across idealized neurons.
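The Kanerva-style sparse distributed memory summarized in this abstract can be illustrated with a minimal sketch. All sizes, the activation radius, and the helper names below are our own illustrative assumptions, not taken from the thesis: addresses are compared by Hamming distance, a write updates bit counters at every hard location within the activation radius, and a read takes a majority vote over the counters of the active locations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 256, 1000, 111   # address width, hard locations, activation radius

hard_addresses = rng.integers(0, 2, size=(M, N))   # fixed random locations
counters = np.zeros((M, N), dtype=int)             # distributed storage

def active(addr):
    # locations whose Hamming distance to addr is within the radius
    return np.count_nonzero(hard_addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    # increment counters for 1-bits, decrement for 0-bits, at all active locations
    counters[active(addr)] += 2 * data - 1

def read(addr):
    # majority vote over the counters of the active locations
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                     # autoassociative storage
noisy = pattern.copy()
flip = rng.choice(N, size=20, replace=False)
noisy[flip] ^= 1                            # corrupt 20 bits of the cue
# with enough active locations overlapping the write, this recovers `pattern`
recalled = read(noisy)
```

Because storage is distributed over many locations, a corrupted cue still activates locations that overlap those used at write time, which is what gives the memory its noise tolerance.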
APA, Harvard, Vancouver, ISO, and other styles
35

Johnson, Joseph G. "A computational modeling account of robust preference reversal phenomena." [Bloomington, Ind.] : Indiana University, 2004. http://wwwlib.umi.com/dissertations/fullcit/3162242.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychology, 2004.
Title from PDF t.p. (viewed Dec. 1, 2008). Source: Dissertation Abstracts International, Volume: 66-01, Section: B, page: 0586. Chair: Jerome R. Busemeyer.
APA, Harvard, Vancouver, ISO, and other styles
36

Passera, Anthony. "A computational model of visuo-motor development." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Pasquali, Antoine. "Learning with and without consciousness: empirical and computational explorations." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210269.

Full text
Abstract:
Is it possible to learn without awareness? If so, what can be learned without awareness, and what mechanisms differentiate learning with consciousness from learning without it? And how can we best measure awareness?

Here are a few of the many questions that I have attempted to investigate during the past few years. The main goal of this thesis was to explore the differences between conscious and unconscious learning. I will thus present the behavioral and computational explorations that we conducted. To frame them properly, I first review the main concepts that researchers in neuroscience have formulated, for almost a century now, in order to tackle the issues of both learning and consciousness. I then detail the different hypotheses that guided our empirical and computational explorations. Notably, a few series of experiments allowed us to identify several mechanisms that participate in either unconscious or conscious learning. In addition, we explored a computational framework for explaining how one could learn unconsciously and nonetheless gain subjective access to one's mental events. After reviewing the unfolding of our investigation, I detail the mechanisms that we identified as responsible for the differences between learning with and without consciousness, and propose new hypotheses to be evaluated in the future.


Doctorat en Sciences Psychologiques et de l'éducation
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
38

Wrigley, Stuart Nicholas. "A theory and computational model of auditory selective attention." Thesis, University of Sheffield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Painter, Joan. "Imaginal processing in the two hemispheres : a computational investigation." Thesis, Goldsmiths College (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Aparicio, Mera Juan José. "Representación computacional de las perífrasis de fase: de la cognición a la computación." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/392696.

Full text
Abstract:
This thesis, grounded in cognitive linguistics and an empirical perspective, deals with the phenomenon of Spanish phase periphrases. One of our goals is to understand, clarify and systematically characterize their semantic-aspectual status in order to represent them computationally. The semantics and aspect of phase periphrases are treated as compositional mechanisms, in which the meaning of a complex unit is built from the meaning of simple units. In this proposal of representation, the focus is on the semantic-conceptual dimension of the periphrastic construction. Consequently, the concept of "scheme" is a key point of the analysis: it explains why different combinations and restrictions arise in the formation of periphrases. A lexical verb can only participate in those periphrases that express a scenario scheme appropriate to the denoted situation. This aspectual characterization of phase periphrases captures both the restrictions resulting from the interplay between lexical aspect and periphrastic context, and the gradual nature of "Aktionsart", offering new ways to observe the relationships and changes between categories. The proposed system of representation is not only cognitively motivated but, above all, empirically verified using the methodologies of corpus linguistics and statistical techniques. The thesis thus combines several empirical methodologies in the study of phase periphrases and their representation. A broad-coverage corpus study allowed us, first, to confirm that phase periphrases are sensitive to "Aktionsart"; second, to identify and classify the different routes of aspectual coercion that arise in this kind of periphrasis; and last, to demonstrate that in these periphrases, the greater the expressivity, the lower the functional yield.
The system of event-structure analysis we implemented allowed us to develop an initial set of criteria for annotating phase periphrases in a Spanish corpus. Finally, the proposed model of representation brings cognition and computation closer together: the parameters of cognitive linguistics have been formalized and have proved suitable for computational representation.
APA, Harvard, Vancouver, ISO, and other styles
41

Lundh, Dan. "A computational neuroscientific model for short-term memory." Thesis, University of Exeter, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Gartenberg, Daniel. "A Comprehensive Computational Model of Sustained Attention." Thesis, George Mason University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10130797.

Full text
Abstract:

The vigilance decrement is the decline in performance over time that characterizes tasks requiring sustained attention. Resource Theory proposes that the vigilance decrement is due to information-processing assets that become depleted with use. Resource theorists must therefore identify these assets and explain how resources are depleted and replenished. The Microlapse Theory of Fatigue (MTF) identifies the resource depleted when performing a sustained attention task as the central executive attentional network. Depletion of this resource results in microlapses, brief gaps in attention that prevent the perception and processing of information. The MTF can explain various effects in the sustained attention literature regarding how resources are depleted. However, the MTF alone cannot explain the event rate effect or the motivation effect, because it does not include replenishment mechanisms that can occur during a sustained attention task. To better understand the process of replenishment, participants were assigned to varying event rate and external motivation conditions in a novel paradigm that could measure the perceptual processing of a trial over time. These stages of processing included when participants looked at the first stimulus, looked at the second stimulus, and responded. In Experiment 1, the vigilance decrement was more severe for faster event rates, consistent with Resource Theory and counter to the MTF. In Experiment 2, the event rate effect was replicated but, unexpectedly, external motivation did not affect the vigilance decrement. In both experiments, the stages of processing that involved looking at the stimuli showed more slowing as event rate increased, and more slowing was detected earlier in the processing of a trial than later.
These results support the MTF's account of microlapses inducing the vigilance decrement when there is not enough time to perceive, encode, and respond to stimuli. The interaction between time-on-task and event rate was interpreted as due to opportunistic breaks that occurred more frequently in slower event rate conditions, and the finding that more slowing occurred earlier in processing as evidence for internal rewards related to learning affecting the speed of processing a trial. To explain these findings, I propose the Microlapse Theory of Fatigue with Replenishment (MTFR), a process model similar to the MTF but with additional replenishment mechanisms related to opportunistic rest periods and internal rewards. The MTFR closely fits the empirical data and is an important step toward a comprehensive model of sustained attention.

APA, Harvard, Vancouver, ISO, and other styles
43

Dreany, Harry Hayes. "Safety Engineering of Computational Cognitive Architectures within Safety-Critical Systems." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10688677.

Full text
Abstract:

This paper presents the integration of an intelligent decision support model (IDSM) with a cognitive architecture that controls an autonomous non-deterministic safety-critical system. The IDSM will integrate multi-criteria, decision-making tools via intelligent technologies such as expert systems, fuzzy logic, machine learning, and genetic algorithms.

Cognitive technology is currently simulated within safety-critical systems to highlight variables of interest, interface with intelligent technologies, and provide an environment that improves the system's cognitive performance. In this study, the IDSM is applied to an actual safety-critical system: an unmanned surface vehicle (USV) with embedded artificial intelligence (AI) software. The USV's safety performance is studied in both a simulated and a real-world maritime environment. The objective is to build a dynamically changing model to evaluate a cognitive architecture's ability to ensure safe performance of an intelligent safety-critical system. The IDSM does this by finding a set of key safety parameters (KSPs) that can be critiqued via safety measurements, mechanisms, and methodologies. The uniqueness of this research lies in bounding the decision-making associated with the cognitive architecture's KSPs. Other real-time applications that would benefit from advancing the cognitive science associated with safety are unmanned platforms, transportation technologies, and service robotics. The results will provide cognitive science researchers with a reference for the safety engineering of artificially intelligent safety-critical systems.

APA, Harvard, Vancouver, ISO, and other styles
44

Chan, Tak-Shing Thomas. "A cognitive information theory of music : a computational memetics approach." Thesis, Goldsmiths College (University of London), 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.479386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Cronin, Beau D. "Quantifying uncertainty in computational neuroscience with Bayesian statistical inference." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45336.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008.
Includes bibliographical references (p. 101-106).
Two key fields of computational neuroscience involve, respectively, the analysis of experimental recordings to understand the functional properties of neurons, and modeling how neurons and networks process sensory information in order to represent the environment. In both of these endeavors, it is crucial to understand and quantify uncertainty - when describing how the brain itself draws conclusions about the physical world, and when the experimenter interprets neuronal data. Bayesian modeling and inference methods provide many advantages for doing so. Three projects are presented that illustrate the advantages of the Bayesian approach. In the first, Markov chain Monte Carlo (MCMC) sampling methods were used to answer a range of scientific questions that arise in the analysis of physiological data from tuning curve experiments; in addition, a software toolbox is described that makes these methods widely accessible. In the second project, the model developed in the first project was extended to describe the detailed dynamics of orientation tuning in neurons in cat primary visual cortex. Using more sophisticated sampling-based inference methods, this model was applied to answer specific scientific questions about the tuning properties of a recorded population. The final project uses a Bayesian model to provide a normative explanation of sensory adaptation phenomena. The model was able to explain a range of detailed physiological adaptation phenomena.
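The kind of sampling-based tuning-curve analysis described in this abstract can be sketched generically. Everything below is our own simplified assumption for illustration, not the model used in the thesis: a von Mises-shaped orientation tuning curve with Poisson spike counts, fit with a random-walk Metropolis sampler that yields posterior samples over the preferred orientation.

```python
import math, random

random.seed(1)

def tuning(theta, base, gain, pref, kappa):
    # circular (von Mises-shaped) firing-rate curve: baseline plus a tuned bump
    return base + gain * math.exp(kappa * (math.cos(theta - pref) - 1.0))

# synthetic "recorded" spike counts from a known ground-truth curve
true_params = (2.0, 20.0, 1.0, 2.0)          # base, gain, preferred angle, width
angles = [i * 2 * math.pi / 16 for i in range(16)]
counts = []
for th in angles:
    lam = tuning(th, *true_params)
    u, k, p = random.random(), 0, math.exp(-lam)   # Poisson sampling by inversion
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    counts.append(k)

def log_post(params):
    base, gain, pref, kappa = params
    if base <= 0 or gain <= 0 or kappa <= 0:
        return -math.inf                      # flat priors truncated to positives
    lp = 0.0
    for th, n in zip(angles, counts):
        lam = tuning(th, base, gain, pref, kappa)
        lp += n * math.log(lam) - lam         # Poisson log-likelihood (dropping n!)
    return lp

# random-walk Metropolis over all four parameters
current = [1.0, 10.0, 0.0, 1.0]
cur_lp = log_post(current)
samples = []
for step in range(5000):
    prop = [x + random.gauss(0.0, 0.1) for x in current]
    prop_lp = log_post(prop)
    if math.log(random.random()) < prop_lp - cur_lp:
        current, cur_lp = prop, prop_lp
    if step >= 2000:                          # discard burn-in
        samples.append(current[2])

post_mean_pref = sum(samples) / len(samples)  # posterior mean preferred angle
```

The posterior samples answer the kinds of scientific questions the abstract mentions directly, e.g. the spread of `samples` quantifies the uncertainty about the neuron's preferred orientation.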
by Beau D. Cronin.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
46

Smith, Elliot. "Incoherence and text comprehension : cognitive and computational models of inferential control." Thesis, University of Birmingham, 2000. http://etheses.bham.ac.uk//id/eprint/4653/.

Full text
Abstract:
This thesis describes work on defining and modelling text comprehension. The basis of my approach is a theory of comprehension as a form of abductive reasoning. The specific problem addressed is inferential control and the management of alternative, competing representations. This problem is related to issues of representation quality, with decisions between representations being made on the basis of quality comparisons. Simultaneously, monitoring of representation quality determines when comprehension halts; in other words, there is some kind of threshold against which quality is compared. In the first part of the thesis I analyse concepts of representation quality, describing the structure of episodic and semantic representations and processes. I then look at metrics for representation quality before developing my own metric. The metric is based on the concept of incoherence, derived from the structural potential of representations. The second part of the thesis describes a computational model of incoherence, the Incoherence-Driven Comprehender (IDC). IDC combines AI implementation technology with insights from cognitive psychological studies of text comprehension. I show how IDC can be applied to various comprehension tasks. Throughout the thesis I suggest how aspects of IDC's architecture and behaviour may offer a fresh perspective on human comprehension.
APA, Harvard, Vancouver, ISO, and other styles
47

Vellmer, Sebastian. "Applications of the Fokker-Planck Equation in Computational and Cognitive Neuroscience." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21597.

Full text
Abstract:
This thesis is concerned with the calculation of the statistics, in particular the power spectra, of point processes generated by stochastic multidimensional integrate-and-fire (IF) neurons, networks of IF neurons, and decision-making models, starting from the corresponding Fokker-Planck equations. In the brain, information is encoded by sequences of action potentials. In studies that focus on spike timing, IF neurons, which drastically simplify spike generation, have become the standard model. One-dimensional IF neurons do not suffice to model neural dynamics accurately; extending them to multiple dimensions, however, yields realistic behavior at the price of growing complexity. The first part of this work develops a theory of spike-train power spectra for stochastic, multidimensional IF neurons. From the Fokker-Planck equation, a set of partial differential equations is derived that describes the stationary probability density, the firing rate, and the spike-train power spectrum. The second part develops a mean-field theory of large, sparsely connected, homogeneous networks of spiking neurons that takes into account the self-consistent temporal correlations of spike trains. Neural input is approximated by colored Gaussian noise generated by a multidimensional Ornstein-Uhlenbeck process whose coefficients are initially unknown but determined by the self-consistency condition, thereby defining the solution of the theory. To explore heterogeneous networks, an iterative scheme is extended to determine the distribution of spectra. In the third part, the Fokker-Planck equation is applied to calculate the statistics of sequences of binary decisions from diffusion-decision models (DDMs). For the analytically tractable DDM, the statistics are calculated from the corresponding Fokker-Planck equation; to determine the statistics for nonlinear models, the threshold-integration method is generalized.
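The basic object studied here, a leaky integrate-and-fire neuron driven by colored Gaussian noise from an Ornstein-Uhlenbeck process, can also be simulated directly. The thesis derives such spike-train statistics analytically from the Fokker-Planck equation; the sketch below, with parameter values invented for illustration, just estimates the firing rate by Euler-Maruyama simulation.

```python
import math, random

random.seed(42)

# LIF neuron: dv/dt = -v + mu + s, spike when v >= 1, reset to 0
# OU input:   ds/dt = -s/tau_s + sqrt(2 sigma^2 / tau_s) * xi(t)
dt, T = 1e-3, 200.0
mu, tau_s, sigma = 0.8, 0.5, 0.6   # illustrative values, not from the thesis

v, s = 0.0, 0.0
spikes = 0
for _ in range(int(T / dt)):
    xi = random.gauss(0.0, 1.0)
    # Euler-Maruyama step for the colored (OU) noise, stationary std = sigma
    s += (-s / tau_s) * dt + math.sqrt(2 * sigma**2 * dt / tau_s) * xi
    v += (-v + mu + s) * dt            # leaky integration of the input
    if v >= 1.0:                       # threshold crossing = spike
        spikes += 1
        v = 0.0                        # reset

rate = spikes / T   # mean firing rate in spikes per membrane time constant
```

Averaging such simulations is the slow, noisy route to the firing rate and spike-train spectrum; the point of the Fokker-Planck approach is to obtain the same quantities exactly from partial differential equations.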
APA, Harvard, Vancouver, ISO, and other styles
48

Iyer, Laxmi R. "CANDID - A Neurodynamical Model of Idea Generation." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1326828617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Deline, Stéphane. "Différences individuelles dans les processus de contrôle attentionnel chez des personnes jeunes et âgées : approches expérimentale et computationnelle." Phd thesis, Université Rennes 2, 2011. http://tel.archives-ouvertes.fr/tel-00960549.

Full text
Abstract:
The effect of aging on high-level cognitive functions remains relatively poorly understood. This research aims to better understand inter-individual performance differences between young and older adults by studying the attentional-control processes engaged in task-switching tasks. First, two alternating-runs switching tasks were administered to young and older adults. The results show no effect of age on the measured switch costs, but they do show an asymmetric switch cost (Study 1) and local and global switch costs that differ across individuals (Studies 1 and 2). Second, cognitive functioning was modeled using the ACT-R cognitive architecture, making it possible to test whether the hypotheses of reduced processing speed and reduced working-memory capacity can reproduce the performance differences between young and older adults. For both studies, the hypothesis tests indicate that these hypotheses do not adequately reproduce the empirically observed effects, which suggests that reduced processing speed or reduced working-memory capacity alone is insufficient to explain the observed individual performance differences. This study highlights the value of computational cognitive modeling for understanding the processes underlying human cognitive functioning.
APA, Harvard, Vancouver, ISO, and other styles
50

Milne, Andrew J. "A computational model of the cognition of tonality." Thesis, Open University, 2013. http://oro.open.ac.uk/38787/.

Full text
Abstract:
Tonality is the organization of pitches, both simultaneously and across time, so that certain pitches and chords are heard as attracted, in varying degrees, to other pitches and chords. Most art music from the seventeenth to the nineteenth centuries, and popular music to the present day, is heavily steeped in a musical language that makes use of tonality to define a ‘central’ most attractive pitch or chord called the tonic. It is widely thought that the feelings of expectancy and resolution induced by movements towards and away from the tonic allow composers to imbue tonal music with meaning and emotion. In this dissertation, I identify and model some of the innate processes by which feelings of tension, resolution, stability, and so forth, are induced by successions of pitches and chords, irrespective of their harmonic consonance. By innate, I mean processes that do not require the learning of a musical corpus—such processes are important because they provide explanations for why tonal music, and our cognition of it, take the specific forms they do. To do this, I introduce a novel family of mathematical methods—metrics applied to expectation tensors—for calculating the similarity of pitch collections. Importantly, such tensors can represent not just the notated pitches of tones, but also their spectral pitches (their harmonics). I then demonstrate how these techniques can be used to model participants’ ratings of the fits of tones in microtonal melodies, and the fits of all twelve chromatic pitches to an established key centre (Krumhansl’s probe tone data). The techniques can also be generalized to predict the tonics of any arbitrarily chosen scale—even scales with unfamiliar tunings. In summary, I demonstrate that psychoacoustic processes, which are innate and universal, play an important role in our cognition of tonality.
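The spectral-pitch similarity idea in this abstract can be caricatured in a few lines. The number of harmonics, the roll-off, and the smoothing width below are our own illustrative choices, and the real model uses metrics over expectation tensors rather than this simple pitch-class vector: each tone contributes Gaussian-smeared harmonics to a vector over cents, and pitch collections are compared by cosine similarity.

```python
import math

def spectral_pc_vector(midi_notes, n_harmonics=8, roll_off=0.7, sigma=6.0, bins=1200):
    # pitch-class vector over `bins` cents; each tone adds Gaussian-smeared
    # partials weighted by 1/h**roll_off (all parameter choices illustrative)
    vec = [0.0] * bins
    for note in midi_notes:
        for h in range(1, n_harmonics + 1):
            cents = (note * 100 + 1200 * math.log2(h)) % bins
            weight = 1.0 / h ** roll_off
            centre = int(round(cents)) % bins
            for off in range(-30, 31):          # truncate the Gaussian at 5 sigma
                idx = (centre + off) % bins
                vec[idx] += weight * math.exp(-off * off / (2 * sigma * sigma))
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

c_major = spectral_pc_vector([60, 64, 67])   # C E G
g_major = spectral_pc_vector([67, 71, 74])   # G B D
f_sharp = spectral_pc_vector([66, 70, 73])   # F# A# C#

# closely related chords should share more spectral pitch content
sim_close = cosine(c_major, g_major)
sim_distant = cosine(c_major, f_sharp)
```

Measures of this kind, applied to spectral rather than only notated pitches, are what let the model predict fit and attraction without any learned corpus statistics.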
APA, Harvard, Vancouver, ISO, and other styles