Scientific literature on the topic "Neurocomputational models"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Neurocomputational models".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Neurocomputational models"

1

Hale, John T., Luca Campanelli, Jixing Li, Shohini Bhattasali, Christophe Pallier, and Jonathan R. Brennan. "Neurocomputational Models of Language Processing." Annual Review of Linguistics 8, no. 1 (January 14, 2022): 427–46. http://dx.doi.org/10.1146/annurev-linguistics-051421-020803.

Abstract:
Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplemental Appendix.
2

Durstewitz, Daniel, Jeremy K. Seamans, and Terrence J. Sejnowski. "Neurocomputational models of working memory." Nature Neuroscience 3, S11 (November 2000): 1184–91. http://dx.doi.org/10.1038/81460.

3

Cutsuridis, Vassilis, Tjitske Heida, Wlodek Duch, and Kenji Doya. "Neurocomputational models of brain disorders." Neural Networks 24, no. 6 (August 2011): 513–14. http://dx.doi.org/10.1016/j.neunet.2011.03.016.

4

Hardy, Nicholas F., and Dean V. Buonomano. "Neurocomputational models of interval and pattern timing." Current Opinion in Behavioral Sciences 8 (April 2016): 250–57. http://dx.doi.org/10.1016/j.cobeha.2016.01.012.

5

Bicer, Mustafa Berkan. "Radar-Based Microwave Breast Imaging Using Neurocomputational Models." Diagnostics 13, no. 5 (March 1, 2023): 930. http://dx.doi.org/10.3390/diagnostics13050930.

Abstract:
In this study, neurocomputational models are proposed for the acquisition of radar-based microwave images of breast tumors using deep neural networks (DNNs) and convolutional neural networks (CNNs). The circular synthetic aperture radar (CSAR) technique for radar-based microwave imaging (MWI) was utilized to generate 1000 numerical simulations for randomly generated scenarios. The scenarios contain information such as the number, size, and location of tumors for each simulation. Then, a dataset of 1000 distinct simulations with complex values based on the scenarios was built. Consequently, a real-valued DNN (RV-DNN) with five hidden layers, a real-valued CNN (RV-CNN) with seven convolutional layers, and a real-valued combined model (RV-MWINet) consisting of CNN and U-Net sub-models were built and trained to generate the radar-based microwave images. While the proposed RV-DNN, RV-CNN, and RV-MWINet models are real-valued, the MWINet model is restructured with complex-valued layers (CV-MWINet), resulting in a total of four models. For the RV-DNN model, the training and test errors in terms of mean squared error (MSE) are 103.400 and 96.395, respectively, whereas for the RV-CNN model, the training and test errors are 45.283 and 153.818. Because the RV-MWINet model is a combined U-Net model, the accuracy metric is analyzed instead. The proposed RV-MWINet model has training and testing accuracy of 0.9135 and 0.8635, whereas the CV-MWINet model has training and testing accuracy of 0.991 and 1.000, respectively. The peak signal-to-noise ratio (PSNR), universal quality index (UQI), and structural similarity index (SSIM) metrics were also evaluated for the images generated by the proposed neurocomputational models. The generated images demonstrate that the proposed neurocomputational models can be successfully utilized for radar-based microwave imaging, especially for breast imaging.
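As a side note, the error and image-quality measures reported in this abstract (MSE and PSNR) are standard metrics that can be sketched in a few lines; the following is a generic illustration, not code from the cited study:

```python
import numpy as np

def mse(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """Mean squared error between a reference image and a reconstruction."""
    return float(np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2))

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in decibels; `peak` is the maximum pixel value."""
    err = mse(reference, reconstruction)
    if err == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / err))

# Toy example: a perfect reconstruction has infinite PSNR;
# uniform noise of amplitude 0.1 gives MSE 0.01, i.e. 20 dB at peak 1.0.
img = np.zeros((8, 8))
noisy = img + 0.1
```

SSIM and UQI are more involved (windowed statistics over luminance, contrast, and structure) and are typically taken from a library such as scikit-image rather than reimplemented.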
6

Holker, Ruchi, and Seba Susan. "Neuroscience-Inspired Parameter Selection of Spiking Neuron Using Hodgkin Huxley Model." International Journal of Software Science and Computational Intelligence 13, no. 2 (April 2021): 89–106. http://dx.doi.org/10.4018/ijssci.2021040105.

Abstract:
Spiking neural networks (SNN) are currently being researched to design an artificial brain that can think, perform, and learn like a human brain. This paper focuses on exploring optimal parameter values of biological spiking neurons for the Hodgkin Huxley (HH) model. According to previous research, the HH model exhibits the maximum number of neurocomputational properties compared to other spiking models. This paper investigates the HH model parameters for the Class 1, Class 2, phasic spiking, and integrator neurocomputational properties. For the simulation of spiking neurons, the NEURON simulator is used since it is easy to understand and code.
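For readers unfamiliar with it, the Hodgkin Huxley model referenced in this abstract can be integrated in a few dozen lines. The following is a generic textbook sketch (classic squid-axon parameters, forward Euler integration), not the NEURON code used in the paper:

```python
import math

def hh_simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Integrate the Hodgkin-Huxley equations with forward Euler.

    i_ext: injected current (uA/cm^2); returns the membrane voltage trace (mV)."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3   # capacitance, max conductances
    e_na, e_k, e_l = 50.0, -77.0, -54.387         # reversal potentials (mV)

    # Voltage-dependent rate functions (1/ms), classic squid-axon fits.
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * math.exp(-(v + 65.0) / 80.0)

    v = -65.0
    # Start the gating variables at their resting steady-state values.
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    trace = [v]
    for _ in range(int(t_max / dt)):
        i_na = g_na * m**3 * h * (v - e_na)   # sodium current
        i_k = g_k * n**4 * (v - e_k)          # potassium current
        i_l = g_l * (v - e_l)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

def count_spikes(trace, threshold=0.0):
    """Count upward threshold crossings in a voltage trace."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a < threshold <= b)
```

With a sustained suprathreshold current (e.g. 10 uA/cm^2) the model fires repetitively; with no input it rests near -65 mV, which is the kind of input-output behavior whose parameter dependence the cited paper explores.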
7

Dezfouli, Amir, Payam Piray, Mohammad Mahdi Keramati, Hamed Ekhtiari, Caro Lucas, and Azarakhsh Mokri. "A Neurocomputational Model for Cocaine Addiction." Neural Computation 21, no. 10 (October 2009): 2869–93. http://dx.doi.org/10.1162/neco.2009.10-08-882.

Abstract:
Based on the dopamine hypotheses of cocaine addiction and the assumption of decrement of brain reward system sensitivity after long-term drug exposure, we propose a computational model for cocaine addiction. Utilizing average reward temporal difference reinforcement learning, we incorporate the elevation of basal reward threshold after long-term drug exposure into the model of drug addiction proposed by Redish. Our model is consistent with the animal models of drug seeking under punishment. In the case of nondrug reward, the model explains increased impulsivity after long-term drug exposure. Furthermore, the existence of a blocking effect for cocaine is predicted by our model.
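The core idea in this abstract, an average-reward temporal-difference learner whose effective reward is reduced by an elevated basal threshold after chronic drug exposure, can be sketched roughly as follows. This is a toy single-state illustration under assumptions of our own (constant learning rates, terminal transitions), not the authors' model:

```python
def train(rewards, basal_threshold=0.0, alpha=0.1, sigma=0.02):
    """Average-reward TD learning on a single drug-taking state.

    `basal_threshold` models the elevated reward threshold (reduced brain
    reward-system sensitivity after long-term drug exposure): it is
    subtracted from every received reward before learning.
    Returns the learned state value and the average-reward estimate."""
    value, avg_reward = 0.0, 0.0
    for r in rewards:
        effective = r - basal_threshold          # blunted subjective reward
        delta = effective - avg_reward - value   # TD error (terminal transition)
        value += alpha * delta                   # value update
        avg_reward += sigma * delta              # average-reward update
    return value, avg_reward

# Chronic exposure elevates the threshold, so the same drug reward
# supports a lower learned value of the drug-taking action.
v_naive, _ = train([1.0] * 500, basal_threshold=0.0)
v_chronic, _ = train([1.0] * 500, basal_threshold=0.5)
```

The comparison between `v_naive` and `v_chronic` illustrates the sensitivity-decrement assumption the paper builds on; the full model additionally handles the drug-specific distortion of the TD error described by Redish.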
8

Spitzer, M. "Neurocomputational models of cognitive dysfunctions in schizophrenia and therapeutic implications." European Neuropsychopharmacology 8 (November 1998): S63–S64. http://dx.doi.org/10.1016/s0924-977x(98)80018-7.

9

Sivia, Jagtar Singh, Amar Partap Singh Pharwaha, and Tara Singh Kamal. "Neurocomputational Models for Parameter Estimation of Circular Microstrip Patch Antennas." Procedia Computer Science 85 (2016): 393–400. http://dx.doi.org/10.1016/j.procs.2016.05.178.

10

Reggia, James A. "Neurocomputational models of the remote effects of focal brain damage." Medical Engineering & Physics 26, no. 9 (November 2004): 711–22. http://dx.doi.org/10.1016/j.medengphy.2004.06.010.


Theses on the topic "Neurocomputational models"

1

Ragonetti, Gianmarco. "A neurocomputational model of reward-based motor learning." Doctoral thesis, Universita degli studi di Salerno, 2017. http://hdl.handle.net/10556/3028.

Abstract:
2015 - 2016
This thesis deals with computational models of the nervous system employed in motor reinforcement learning. Its novel contribution is an experimental methodology for evaluating human learning rates, whose results we compared with those of a computational model derived from a deep analysis of the literature. Rewards and punishments are particular stimuli able to drive, for better or worse, the performance of the action to be learned. This happens because they can strengthen or weaken the connections between a combination of sensory input stimuli and a combination of motor activation outputs, attributing some kind of value to them. A reward or punisher can originate from innate needs (hunger, thirst, etc.) arising in hardwired structures of the brain (hypothalamus), yet it can also come from an initially neutral cue (from cortex or sensory inputs) that acquires the ability to produce value after learning (for example, the value of money or approval). We call the former primary values and the latter learned values. The efficacy of a stimulus as a reinforcer or punisher depends on the specific context in which the action takes place (motivating operation). It is claimed that values drive learning through dopamine firing, and that learned values acquire this ability after repeated pairings with innate primary values, in a Pavlovian classical conditioning paradigm. Under certain hypotheses we propose a computational model made of:
- A block located in the cortex, mapping sensory combinations (posterior cortex) onto possible actions (motor cortex). The weights of the net correspond to the probability of a movement given a sensory combination as input. Rewards and punishments alter these probabilities through a selection rule we implemented in the basal ganglia for action selection.
- A block for the production of values (critic), for which we evaluated two different scenarios. In the first, we considered only innate rewards, with a block made of the VTA (ventral tegmental area) and lateral hypothalamus (innate rewards) and the lateral habenula (innate punishments). In the second scenario we added the structures for the learning of rewards: the amygdala, which learns to produce a dopamine activation at the onset of an initially neutral stimulus, and the ventral striatum, which learns to predict the occurrence of the innate reward, cancelling its dopamine activation.
Innate reward is fundamental for the learned value system: even in a well-trained system, if the learned reward stimulus is no longer able to predict the innate reward stimulus (because the latter occurs late or not at all), and if this happens frequently, it can lose its reinforcing/weakening abilities. This phenomenon is called extinction of acquisition and is strictly dependent on the context (motivating operation). Validation of the model started from Emergent, which provides a biologically accurate model of neural networks and learning mechanisms, and the model was then ported to Matlab, which is more versatile, in order to prove the ability of the system to learn a specific task. In this simple task the system has to choose between two possible actions, given a group of stimuli of varying cardinality: 2, 4, and 8. We evaluated the task in the two scenarios described, one with innate rewards and one with learned rewards. Finally, several experiments were performed to evaluate the human learning rate: volunteers had to learn to press the right keyboard buttons when visual stimuli appeared on a monitor, in order to get an auditory and visual reward. The experiments were carefully designed so as to make the results of a simple artificial neural network comparable with those of human performers. The strategy was to select a reduced set of responses and a set of visual stimuli as simple as possible (edges), thus bypassing the problem of a hierarchical, complex information representation by collapsing it into one layer. The results were then fitted with an exponential and a hyperbolic function. Both fittings showed that the human learning rate is slow compared to the artificial network and decreases with the number of stimuli to be learned. [edited by author]
XV n.s.
2

Parziale, Antonio. "A neurocomputational model of reaching movements." Doctoral thesis, Universita degli studi di Salerno, 2016. http://hdl.handle.net/10556/2341.

Abstract:
2013 - 2014
How the brain controls movement is a question that has fascinated researchers from areas as diverse as neuroscience, robotics, and psychology. Understanding how we move is not only an intellectual challenge; it is important for finding new strategies for caring for people with movement disorders, for rehabilitation, and for developing new robotic technology. While there is agreement about the role of the primary motor cortex (M1) in the execution of voluntary movements, what (and how) is encoded by the neural activity of the motor cortex is still debated. To unveil the "code" used for executing voluntary movements, we investigated the interaction between the motor cortex and the spinal cord, the main recipient of the descending signals departing from M1 neurons. In particular, the research presented in this thesis aims at understanding how the primary motor cortex and spinal cord cooperate to execute a reaching movement, and whether a modular organization of the spinal cord can be exploited for controlling the movement. On the basis of physiological studies of primary motor cortex organization, we have hypothesized that this brain area encodes both movement parameters and patterns of muscle activation. We argue that the execution of voluntary movements results from the cooperation of different clusters of neurons distributed in the rostral and caudal regions of primary motor cortex, each of which represents different aspects of the ongoing movement. In particular, kinetic aspects of movement are directly represented by the caudal part of primary motor cortex as activations of alpha motoneurons, while kinematic aspects of the movement are encoded by the rostral region and are translated by spinal cord interneurons into alpha motoneuron activation.
The population of corticomotoneuronal (CM) cells in the caudal part of M1 creates muscle synergies for direct control of muscle activity, useful for executing highly novel skills that require direct control of multijoint and single-joint movements by the central nervous system (CNS). On the other hand, clusters of neurons in the rostral M1 are devoted to the activation of different subpopulations of interneurons in the spinal cord, organized in functional modules. Each spinal module implements hardwired muscle synergies regulating the activity of a subset of muscles working around one or more joints. The way a module regulates muscle activations is related to its structural properties. One area recruits the hardwired motor primitives hosted in the spinal cord as spatiotemporal synergies, while the other has direct access to the alpha motoneurons and may build new synergies for the execution of very demanding movements. The existence of these two areas regulating muscle activity directly and indirectly can explain the controversy about what kind of parameter is encoded by the brain. In order to validate our conjecture about the coexistence of an explicit representation of both kinetic and kinematic aspects of the movement, we developed and implemented a computational model of the spinal cord and its connections with supraspinal structures. The model incorporates the key anatomical and physiological features of the neurons in the spinal cord (Ia, Ib, and PN interneurons and Renshaw cells, and their interconnections). The model envisages descending inputs coming from both rostral and caudal M1 motor cortex and the cerebellum (through the rubro- and reticulo-spinal tracts), local inputs from both Golgi tendon organs and spindles, and its output is directed towards alpha motoneurons, which also receive descending inputs from the cortex and local inputs from spindles.
The musculoskeletal model used in this study is a one degree-of-freedom arm whose motion is restricted to the extension/flexion of the elbow. The musculoskeletal model includes three muscles: biceps short head, brachialis, and triceps lateral head. Our simulations show that the CNS may produce elbow flexion movements with different properties by adopting different strategies for the recruitment and modulation of interneurons and motoneurons. The results obtained using our computational model confirm what has been hypothesized in the literature: modularity may be the organizational principle that the central nervous system exploits in motor control. In humans, the central nervous system can execute motor tasks by recruiting the motor primitives in the spinal cord or by learning new collections of synergies essential for executing the novel skills typical of our society. To gain more insight into how the brain encodes movements and to unveil the role played by the different areas of the brain, we verified whether the movement generated by our model satisfied the trade-off between speed and accuracy predicted by Fitts' law. An interesting result is that the speed-accuracy trade-off does not follow from the structure of the system, which is capable of performing fast and precise movements, but arises from the strategy adopted to produce faster movements: starting from a prelearned set of motor commands useful for reaching the target position and modifying only the activations of alpha motoneurons. These results suggest that the brain may use the clusters of neurons in the rostral M1 for encoding the direction of the movement and the clusters of CM cells in the caudal M1 for regulating the trade-off between speed and accuracy. The simulations performed with our computational model have shown that the activation of one area cannot exclude the activation of the other; on the contrary, both activations are needed for the simulated behavior to fit the real behavior.
[edited by Author]
XIII n.s.
3

Marsh, Steven Joseph Thomas. "Efficient programming models for neurocomputation." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709268.

4

Dupuy, Nathalie. "Neurocomputational model for learning, memory consolidation and schemas." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33144.

Abstract:
This thesis investigates how through experience the brain acquires and stores memories, and uses these to extract and modify knowledge. This question is being studied by both computational and experimental neuroscientists as it is of relevance for neuroscience, but also for artificial systems that need to develop knowledge about the world from limited, sequential data. It is widely assumed that new memories are initially stored in the hippocampus, and later are slowly reorganised into distributed cortical networks that represent knowledge. This memory reorganisation is called systems consolidation. In recent years, experimental studies have revealed complex hippocampal-neocortical interactions that have blurred the lines between the two memory systems, challenging the traditional understanding of memory processes. In particular, the prior existence of cortical knowledge frameworks (also known as schemas) was found to speed up learning and consolidation, which seemingly is at odds with previous models of systems consolidation. However, the underlying mechanisms of this effect are not known. In this work, we present a computational framework to explore potential interactions between the hippocampus, the prefrontal cortex, and associative cortical areas during learning as well as during sleep. To model the associative cortical areas, where the memories are gradually consolidated, we have implemented an artificial neural network (Restricted Boltzmann Machine) so as to get insight into potential neural mechanisms of memory acquisition, recall, and consolidation. We analyse the network's properties using two tasks inspired by neuroscience experiments. The network gradually built a semantic schema in the associative cortical areas through the consolidation of multiple related memories, a process promoted by hippocampal-driven replay during sleep. 
To explain the experimental data we suggest that, as the neocortical schema develops, the prefrontal cortex extracts characteristics shared across multiple memories. We call this information a meta-schema. In our model, the semantic schema and meta-schema in the neocortex are used to compute consistency, conflict, and novelty signals. We propose that the prefrontal cortex uses these signals to modulate memory formation in the hippocampus during learning, which in turn influences consolidation during sleep replay. Together, these results provide a theoretical framework to explain experimental findings and produce predictions for hippocampal-neocortical interactions during learning and systems consolidation.
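The associative cortical module in this thesis is modeled as a Restricted Boltzmann Machine. A minimal mean-field contrastive-divergence sketch of such a network (a generic toy under our own assumptions, not the thesis implementation) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Tiny Restricted Boltzmann Machine trained with mean-field CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.w = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def train_step(self, v0):
        ph0 = sigmoid(v0 @ self.w + self.b_h)      # positive phase
        pv1 = sigmoid(ph0 @ self.w.T + self.b_v)   # one-step reconstruction
        ph1 = sigmoid(pv1 @ self.w + self.b_h)     # negative phase
        # CD-1 updates: data statistics minus reconstruction statistics.
        self.w += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.b_v += self.lr * (v0 - pv1)
        self.b_h += self.lr * (ph0 - ph1)

    def reconstruct(self, v):
        """One up-down pass: recall a stored pattern from a cue."""
        return sigmoid(sigmoid(v @ self.w + self.b_h) @ self.w.T + self.b_v)

# Store two "memories" through repeated interleaved presentation,
# loosely analogous to consolidation via replay.
p1 = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
p2 = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0])
rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(2000):
    rbm.train_step(p1)
    rbm.train_step(p2)
rec = rbm.reconstruct(p1)
```

After training, reconstructing from one pattern should reactivate that pattern's units more strongly than the other's, which is the recall property the thesis analyzes at a much larger scale.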
5

Chadderdon, George L. "A neurocomputational model of the functional role of dopamine in stimulus-response task learning and performance." [Bloomington, Ind.]: Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3355003.

Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychological and Brain Sciences and Cognitive Science, 2009.
Title from PDF t.p. (viewed on Feb. 5, 2010). Source: Dissertation Abstracts International, Volume: 70-04, Section: B, page: 2609. Adviser: Olaf Sporns.
6

Kolbeck, Carter. "A neurocomputational model of the mammalian fear conditioning circuit." Thesis, 2013. http://hdl.handle.net/10012/7897.

Abstract:
In this thesis, I present a computational neural model that reproduces the high-level behavioural results of well-known fear conditioning experiments: first-order conditioning, second-order conditioning, sensory preconditioning, context conditioning, blocking, first-order extinction and renewal (AAB, ABC, ABA), and extinction and renewal after second-order conditioning and sensory preconditioning. The simulated neural populations used to account for the behaviour observed in these experiments correspond to known anatomical regions of the mammalian brain. Parts of the amygdala, periaqueductal gray, cortex and thalamus, and hippocampus are included and are connected to each other in a biologically plausible manner. The model was built using the principles of the Neural Engineering Framework (NEF): a mathematical framework that allows information to be encoded and manipulated in populations of neurons. Each population represents information via the spiking activity of simulated neurons, and is connected to one or more other populations; these connections allow computations to be performed on the information being represented. By specifying which populations are connected to which, and what functions these connections perform, I developed an information processing system that behaves analogously to the fear conditioning circuit in the brain.
7

Barwiński, Marek [author]. "A neurocomputational model of memory acquisition for novel faces / by Marek Barwiński." 2008. http://d-nb.info/997248939/34.

8

Sadat Rezai, Seyed Omid. "A Neurocomputational Model of Smooth Pursuit Control to Interact with the Real World." Thesis, 2014. http://hdl.handle.net/10012/8224.

Abstract:
Whether we want to drive a car, play a ball game, or simply enjoy watching a flying bird, we need to track moving objects. This is possible via smooth pursuit eye movements (SPEMs), which maintain the image of the moving object on the fovea (i.e., a very small portion of the retina with high visual resolution). At first glance, performing an accurate SPEM may seem trivial for the brain. However, imperfect visual coding, processing and transmission delays, a wide variety of object sizes, and background textures make the task challenging. Furthermore, the existence of distractors in the environment makes it even more complicated, and it is no wonder that understanding SPEM has been a classic question of human motor control. To understand physiological systems, of which SPEM is an example, the creation of models has played an influential role. Models make quantitative predictions that can be tested in experiments. Therefore, modelling SPEM is not only valuable for learning the neurobiological mechanisms of smooth pursuit, or more generally gaze control, but also beneficial for gaining insight into other sensory-motor functions. In this thesis, I present a neurocomputational SPEM model based on the Neural Engineering Framework (NEF) to drive an eye-like robot. The model interacts with the real world in real time. It uses naturalistic images as input and controls the robot by the use of spiking model neurons. This work can be the first step towards more thorough validation of abstract SPEM control models. Besides, it is a small step toward neural models that drive robots to accomplish more intricate sensory-motor tasks such as reaching and grasping.
9

"A neurocomputational model of the functional role of dopamine in stimulus-response task learning and performance." Indiana University, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3355003.


Books on the topic "Neurocomputational models"

1

Cottrell, Garrison W., and Janet H. Hsiao. Neurocomputational Models of Face Processing. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199559053.013.0021.

2

Eliasmith, Chris. Neurocomputational Models: Theory, Application, Philosophical Consequences. Edited by John Bickle. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780195304787.003.0014.

Abstract:
This article describes the neural engineering framework (NEF), a systematic approach to studying neural systems that has collected and extended a set of consistent, highly general methods. The NEF draws heavily on past work in theoretical neuroscience, integrating work on neural coding, population representation, and neural dynamics to enable the construction of large-scale, biologically plausible neural simulations. It is based on the principles that neural representations are defined by a combination of nonlinear encoding and optimal linear decoding, and that neural dynamics are characterized by considering neural representations as control-theoretic state variables.
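The two NEF principles mentioned here, nonlinear encoding and optimal linear decoding, can be illustrated with a toy scalar population. This sketch assumes rectified-linear rate neurons and a least-squares decoder; it is an assumption-laden illustration, not Eliasmith's framework itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of rate neurons encoding a scalar x in [-1, 1].
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # preferred directions
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def encode(x):
    """Nonlinear encoding: rectified-linear response to the input current."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Optimal linear decoding: least-squares decoders fit over sample points.
xs = np.linspace(-1.0, 1.0, 200)
activities = np.array([encode(x) for x in xs])        # shape (200, 50)
decoders, *_ = np.linalg.lstsq(activities, xs, rcond=None)

def decode(x):
    """Estimate of x read out linearly from the population activity."""
    return float(encode(x) @ decoders)

# Reconstruction error of the represented value across the range.
rmse = float(np.sqrt(np.mean([(decode(x) - x) ** 2 for x in xs])))
```

Chaining such encode/decode steps between populations, with decoders solved for arbitrary functions of x rather than the identity, is what lets the NEF implement computations and dynamics in spiking networks.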

Book chapters on the topic "Neurocomputational models"

1

Moustafa, Ahmed A., Błażej Misiak, and Dorota Frydecka. "Neurocomputational Models of Schizophrenia." In Computational Models of Brain and Behavior, 73–84. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781119159193.ch6.

2

Knott, Alistair. "Neurocomputational Models of Natural Language." In Springer Handbook of Bio-/Neuroinformatics, 835–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-30574-0_48.

3

Hass, Joachim, and Daniel Durstewitz. "Neurocomputational Models of Time Perception." In Advances in Experimental Medicine and Biology, 49–71. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4939-1782-2_4.

4

Denham, Susan L., Salvador Dura-Bernal, Martin Coath, and Emili Balaguer-Ballester. "6. Neurocomputational models of perceptual organization." In Unconscious Memory Representations in Perception, 147–77. Amsterdam: John Benjamins Publishing Company, 2010. http://dx.doi.org/10.1075/aicr.78.08den.

5

Aleksander, Igor, Barry Dunmall, and Valentina Del Frate. "Neurocomputational models of visualisation: A preliminary report." In Lecture Notes in Computer Science, 798–805. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0098238.

6

Serrano, Miguel Ángel, Francisco Molins, and Adrián Alacreu-Crespo. "Human Decision-Making Evaluation: From Classical Methods to Neurocomputational Models." In Studies in Systems, Decision and Control, 163–81. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00856-6_9.

7

Liènard, Jean, Agnès Guillot, and Benoît Girard. "Multi-objective Evolutionary Algorithms to Investigate Neurocomputational Issues: The Case Study of Basal Ganglia Models." In From Animals to Animats 11, 597–606. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15193-4_56.

8

Chen, Eric Y. H. "A Neurocomputational Model of Early Psychosis." In Lecture Notes in Computer Science, 1149–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45226-3_156.

9

Vineyard, Craig M., Glory R. Emmanuel, Stephen J. Verzi, and Gregory L. Heileman. "A Game Theoretic Model of Neurocomputation." In Biologically Inspired Cognitive Architectures 2012, 373–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-34274-5_66.

10

Peters, James F., and Marcin S. Szczuka. "Rough Neurocomputing: A Survey of Basic Models of Neurocomputation." In Rough Sets and Current Trends in Computing, 308–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45813-1_40.


Conference proceedings on the topic "Neurocomputational models"

1

Torres-Molina, Richard, Andrés Riofrío-Valdivieso, Carlos Bustamante-Orellana, and Francisco Ortega-Zamorano. "Prediction of Learning Improvement in Mathematics through a Video Game using Neurocomputational Models." In 11th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007348605540559.

2

Carvalho, Luís Alfredo Vidal de, Nivea de Carvalho Ferreira, and Adriana Fiszman. "A Neurocomputational Model for Autism." In 4. Congresso Brasileiro de Redes Neurais. CNRN, 2016. http://dx.doi.org/10.21528/cbrn1999-082.

3

Davis, Gregory P., Garrett E. Katz, Daniel Soranzo, Nathaniel Allen, Matthew J. Reinhard, Rodolphe J. Gentili, Michelle E. Costanzo, and James A. Reggia. "A Neurocomputational Model of Posttraumatic Stress Disorder." In 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2021. http://dx.doi.org/10.1109/ner49283.2021.9441345.

4

Rodriguez-Alabarce, Jose, Francisco Ortega-Zamorano, Jose M. Jerez, Kusha Ghoreishi, and Leonardo Franco. "Thermal comfort estimation using a neurocomputational model." In 2016 IEEE Latin American Conference on Computational Intelligence (LA-CCI). IEEE, 2016. http://dx.doi.org/10.1109/la-cci.2016.7885703.

5

Sivian, Jagtar S., Amarpartap S. Pharwaha, and Tara S. Kamal. "Neurocomputational Model for Analysis Microstrip Antennas for Wireless Communication." In Visualization, Imaging and Image Processing / 783: Modelling and Simulation / 784: Wireless Communications. Calgary, AB, Canada: ACTAPRESS, 2012. http://dx.doi.org/10.2316/p.2012.784-009.

6

Yan, Han, Jianwu Dang, Mengxue Cao, and Bernd J. Kroger. "A new framework of neurocomputational model for speech production." In 2014 9th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2014. http://dx.doi.org/10.1109/iscslp.2014.6936623.

7

Baston, Chiara, and Mauro Ursino. "A neurocomputational model of dopamine dependent finger tapping task." In 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a better tomorrow (RTSI). IEEE, 2016. http://dx.doi.org/10.1109/rtsi.2016.7740581.

8

Socasi, Francisco, Ronny Velastegui, Luis Zhinin-Vera, Rafael Valencia-Ramos, Francisco Ortega-Zamorano, and Oscar Chang. "Digital Cryptography Implementation using Neurocomputational Model with Autoencoder Architecture." In 12th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2020. http://dx.doi.org/10.5220/0009154908650872.

9

Helie, Sebastien, and F. Gregory Ashby. "A neurocomputational model of automaticity and maintenance of abstract rules." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178593.

10

Ercelik, Emec, and Neslihan Serap Sengor. "A neurocomputational model implemented on humanoid robot for learning action selection." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280750.

