Scientific literature on the topic "NEURONS NEURAL NETWORK"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "NEURONS NEURAL NETWORK".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "NEURONS NEURAL NETWORK"

1

Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (February 6, 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Full text
Abstract:
Artificial neural networks (ANNs) are basically structures that perform input–output mapping. This mapping mimics the signal processing in biological neural networks. The basic element of a biological neural network is the neuron. Neurons receive input signals from other neurons or the environment, process them, and generate their output, which represents the input to another neuron of the network. Neurons can change their sensitivity to input signals. Each neuron has a simple rule for processing an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks is used for classifying human skin biophysical impedance data.
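The input–output mapping described in the abstract above can be sketched in a few lines. This is a generic illustration, not the networks used in the paper; the layer sizes, sigmoid activation, and random weights are arbitrary choices:

```python
import numpy as np

# Hypothetical two-layer feedforward network: a direct input -> output mapping,
# as in the abstract's description of ANNs mimicking biological signal processing.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights play the role of neuron "sensitivity"; changing them changes
# the behaviour of the whole network (adaptation / learning).
W1 = rng.normal(size=(4, 3))   # 3 inputs -> 4 hidden neurons
W2 = rng.normal(size=(2, 4))   # 4 hidden -> 2 output neurons

def forward(x):
    h = sigmoid(W1 @ x)        # each hidden neuron sums its inputs, applies its rule
    return sigmoid(W2 @ h)     # outputs of one layer are the inputs of the next

y = forward(np.array([0.5, -1.0, 2.0]))
print(y.shape)  # (2,)
```

Training such a network amounts to adjusting `W1` and `W2`, the artificial counterpart of the sensitivity changes the abstract calls adaptation.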
2

Dalhoum, Abdel Latif Abu, and Mohammed Al-Rawi. "High-Order Neural Networks are Equivalent to Ordinary Neural Networks." Modern Applied Science 13, no. 2 (January 27, 2019): 228. http://dx.doi.org/10.5539/mas.v13n2p228.

Full text
Abstract:
Equivalence of computational systems can assist in obtaining abstract systems, and thus enable better understanding of issues related to their design and performance. For more than four decades, artificial neural networks have been used in many scientific applications to solve classification problems as well as other problems. Since the time of their introduction, the multilayer feedforward neural network, referred to as the Ordinary Neural Network (ONN), which contains only summation activation (Sigma) neurons, and the multilayer feedforward High-order Neural Network (HONN), which contains Sigma neurons and product activation (Pi) neurons, have been treated in the literature as different entities. In this work, we studied whether HONNs are mathematically equivalent to ONNs. We have proved that every HONN can be converted to some equivalent ONN. In most cases, one just needs to modify the neuronal transfer function of the Pi neuron to convert it to a Sigma neuron. The theorems that we have derived clearly show that the original HONN and its corresponding equivalent ONN would give exactly the same output, which means they can both be used to perform exactly the same functionality. We also derived equivalence theorems for several other non-standard neural networks, for example, recurrent HONNs and HONNs with translated multiplicative neurons. This work rejects the hypothesis that HONNs and ONNs are different entities, a conclusion that might initiate a new research frontier in artificial neural network research.
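The flavour of the Pi-to-Sigma conversion can be shown with a well-known special case (strictly positive inputs, weights acting as exponents); the paper's theorems are more general than this sketch:

```python
import numpy as np

# Illustrative only: for strictly positive inputs, a product (Pi) unit can be
# rewritten as a summation (Sigma) unit by moving to log space and changing the
# neuronal transfer function to an exponential.
x = np.array([0.5, 2.0, 1.5])      # strictly positive inputs
w = np.array([1.0, 2.0, -1.0])     # weights act as exponents in the Pi neuron

pi_output = np.prod(x ** w)                    # Pi neuron: weighted product
sigma_output = np.exp(np.sum(w * np.log(x)))   # Sigma neuron on log inputs, exp transfer

assert np.isclose(pi_output, sigma_output)
print(pi_output)  # ≈ 1.3333
```

Both units compute the same function, which is the sense in which modifying the transfer function turns a Pi neuron into a Sigma neuron.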
3

Geva, Shlomo, and Joaquin Sitte. "An Exponential Response Neural Net." Neural Computation 3, no. 4 (December 1991): 623–32. http://dx.doi.org/10.1162/neco.1991.3.4.623.

Full text
Abstract:
By using artificial neurons with exponential transfer functions one can design perfect autoassociative and heteroassociative memory networks, with virtually unlimited storage capacity, for real- or binary-valued input and output. The autoassociative network has two layers, input and memory, with feedback between the two. The exponential response neurons are in the memory layer. By adding an encoding layer of conventional neurons the network becomes a heteroassociator and classifier. Because for real-valued input vectors the dot product with the weight vector is no longer a measure of similarity, we also consider a Euclidean-distance-based neuron excitation and present Lyapunov functions for both cases. The network has energy minima corresponding only to stored prototype vectors. The exponential neurons make it simpler to build fast adaptive learning directly into classification networks that map real-valued input to any class structure at their output.
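The recall mechanism described above, an exponential response to a distance-based excitation, can be sketched as follows; the parameter names and values are illustrative, not taken from the paper:

```python
import numpy as np

# Hedged sketch: memory-layer neurons respond exponentially to the (Euclidean)
# similarity between the input and their stored prototype, so the nearest
# prototype dominates recall.
prototypes = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
beta = 10.0  # steepness of the exponential response (illustrative)

def recall(x):
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # distance-based excitation
    a = np.exp(-beta * d2)                       # exponential transfer function
    a /= a.sum()
    return a @ prototypes                        # pulled toward stored patterns

noisy = np.array([0.9, 0.2, -0.1])               # corrupted version of prototype 0
restored = recall(noisy)
print(np.argmax(restored))  # 0 — recall settles on the nearest stored prototype
```

The exponential sharpens the competition between memory neurons, which is what gives the network energy minima only at the stored prototypes.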
4

Sharma, Subhash Kumar. "An Overview on Neural Network and Its Application." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1242–48. http://dx.doi.org/10.22214/ijraset.2021.37597.

Full text
Abstract:
Abstract: This paper presents an overview of neural networks and their applications. Real-world business applications for neural networks are booming. In some cases, NNs have already become the method of choice for businesses that use hedge fund analytics, marketing segmentation, and fraud detection, and some neural network innovators are changing the business landscape. The paper also shows how the biological model of a neural network functions: all mammalian brains consist of interconnected neurons that transmit electrochemical signals. Neurons have several components: the body, which includes a nucleus and dendrites; axons, which connect to other cells; and axon terminals or synapses, which transmit information or stimuli from one neuron to another. Combined, this unit carries out communication and integration functions in the nervous system. Keywords: Neurons, neural network, biological model of neural network functions
5

YAMAZAKI, TADASHI, and SHIGERU TANAKA. "A NEURAL NETWORK MODEL FOR TRACE CONDITIONING." International Journal of Neural Systems 15, no. 01n02 (February 2005): 23–30. http://dx.doi.org/10.1142/s0129065705000037.

Full text
Abstract:
We studied the dynamics of a neural network that has both recurrent excitatory and random inhibitory connections. Neurons started to become active when a relatively weak transient excitatory signal was presented and the activity was sustained due to the recurrent excitatory connections. The sustained activity stopped when a strong transient signal was presented or when neurons were disinhibited. The random inhibitory connections modulated the activity patterns of neurons so that the patterns evolved without recurrence with time. Hence, a time passage between the onsets of the two transient signals was represented by the sequence of activity patterns. We then applied this model to represent the trace eyeblink conditioning, which is mediated by the hippocampus. We assumed this model as CA3 of the hippocampus and considered an output neuron corresponding to a neuron in CA1. The activity pattern of the output neuron was similar to that of CA1 neurons during trace eyeblink conditioning, which was experimentally observed.
6

Kim, Choongmin, Jacob A. Abraham, Woochul Kang, and Jaeyong Chung. "A Neural Network Decomposition Algorithm for Mapping on Crossbar-Based Computing Systems." Electronics 9, no. 9 (September 18, 2020): 1526. http://dx.doi.org/10.3390/electronics9091526.

Full text
Abstract:
Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems. It is also referred to as processing-in-memory and in-situ analog computing. The crossbars have a fixed number of synapses per neuron, so it is necessary to decompose neurons to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons.
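Why decomposition is needed can be shown in a few lines: a crossbar column holds only a fixed number of synapses, so a neuron with a larger fan-in must be split and its partial sums recombined. This sketch shows plain decomposition only, not the paper's k-spare algorithm, which additionally reserves k neurons per crossbar to recover accuracy:

```python
import numpy as np

fanin = 4                                  # synapses available per crossbar neuron
w = np.arange(1.0, 7.0)                    # a neuron with 6 input weights
x = np.array([1.0, -1.0, 2.0, 0.5, 3.0, -2.0])

full = w @ x                               # the neuron we want to realise

# Split across ceil(6/4) = 2 crossbars and sum the partial dot products.
parts = [w[i:i + fanin] @ x[i:i + fanin] for i in range(0, len(w), fanin)]
assert np.isclose(sum(parts), full)
print(sum(parts))  # 10.0
```

The extra adder neurons this splitting consumes are exactly the neuron-usage cost that the paper trades against predictive performance.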
7

Atanassov, Krassimir, Sotir Sotirov, and Tania Pencheva. "Intuitionistic Fuzzy Deep Neural Network." Mathematics 11, no. 3 (January 31, 2023): 716. http://dx.doi.org/10.3390/math11030716.

Full text
Abstract:
The concept of an intuitionistic fuzzy deep neural network (IFDNN) is introduced here as a demonstration of the combined use of artificial neural networks and intuitionistic fuzzy sets, aiming to benefit from the advantages of both methods. The investigation presents in a methodological way the whole process of IFDNN development, starting with the simplest form, an intuitionistic fuzzy neural network (IFNN) with one layer with a single-input neuron; passing through an IFNN with one layer with one multi-input neuron; then a further complication, an IFNN with one layer with many multi-input neurons; and finally the true IFDNN with many layers with many multi-input neurons. Formulas for strongly optimistic, optimistic, average, pessimistic and strongly pessimistic estimation of the NN parameters, represented in the form of intuitionistic fuzzy pairs, are given here for the first time for each of the presented IFNNs. To demonstrate its workability, an example of an IFDNN application to biomedical data is presented.
8

Bensimon, Moshe, Shlomo Greenberg, and Moshe Haiut. "Using a Low-Power Spiking Continuous Time Neuron (SCTN) for Sound Signal Processing." Sensors 21, no. 4 (February 4, 2021): 1065. http://dx.doi.org/10.3390/s21041065.

Full text
Abstract:
This work presents a new approach based on a spiking neural network for sound preprocessing and classification. The proposed approach is biologically inspired by the characteristics of biological neurons, using spiking neurons and a Spike-Timing-Dependent Plasticity (STDP)-based learning rule. We propose a biologically plausible sound classification framework that uses a Spiking Neural Network (SNN) for detecting the embedded frequencies contained within an acoustic signal. This work also demonstrates an efficient hardware implementation of the SNN based on the low-power Spiking Continuous Time Neuron (SCTN). The proposed sound classification framework suggests direct Pulse Density Modulation (PDM) interfacing of the acoustic sensor with the SCTN-based network, avoiding the use of costly digital-to-analog conversions. This paper presents a new connectivity approach applied to Spiking Neuron (SN)-based neural networks. We suggest considering the SCTN neuron as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron is used as a repeated modular element in any neural network structure, and the connectivity between the neurons located at different layers is well defined, thus generating a modular neural network structure composed of several layers with full or partial connectivity. The proposed approach suggests controlling the behavior of the spiking neurons and applying smart connectivity to enable the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, in which the preprocessing phase is carried out using analog circuits and analog-to-digital conversion, we suggest integrating the preprocessing phase into the network. This approach allows treating the basic SCTN as an analog module, enabling the design of simple analog circuits based on SNNs with unique inter-connections between the neurons. The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based sound classification approach demonstrates a classification accuracy of 98.73% using the Real-World Computing Partnership (RWCP) database.
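The STDP learning rule named in this abstract has a standard pair-based textbook form; this sketch is generic, not the SCTN-specific variant, and the amplitudes and time constant are illustrative:

```python
import numpy as np

# Pair-based STDP: a synapse strengthens when the presynaptic spike precedes
# the postsynaptic one, and weakens otherwise.
A_plus, A_minus, tau = 0.05, 0.055, 20.0   # illustrative constants (tau in ms)

def stdp(delta_t):
    """Weight change for delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:                            # pre before post: potentiate
        return A_plus * np.exp(-delta_t / tau)
    return -A_minus * np.exp(delta_t / tau)    # post before pre: depress

print(stdp(10.0) > 0, stdp(-10.0) < 0)  # True True — causal order decides the sign
```

The exponential decay means only spike pairs within a few tens of milliseconds of each other change the synapse appreciably.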
9

Weaver, Adam L., and Scott L. Hooper. "Follower Neurons in Lobster (Panulirus interruptus) Pyloric Network Regulate Pacemaker Period in Complementary Ways." Journal of Neurophysiology 89, no. 3 (March 1, 2003): 1327–38. http://dx.doi.org/10.1152/jn.00704.2002.

Full text
Abstract:
Distributed neural networks (ones characterized by high levels of interconnectivity among network neurons) are not well understood. Increased insight into these systems can be obtained by perturbing network activity so as to study the functions of specific neurons not only in the network's “baseline” activity but across a range of network activities. We applied this technique to study cycle period control in the rhythmic pyloric network of the lobster, Panulirus interruptus. Pyloric rhythmicity is driven by an endogenous oscillator, the Anterior Burster (AB) neuron. Two network neurons feed back onto the pacemaker, the Lateral Pyloric (LP) neuron by inhibition and the Ventricular Dilator (VD) neuron by electrical coupling. LP and VD neuron effects on pyloric cycle period can be studied across a range of periods by altering period by injecting current into the AB neuron and functionally removing (by hyperpolarization) the LP and VD neurons from the network at each period. Within a range of pacemaker periods, the LP and VD neurons regulate period in complementary ways. LP neuron removal speeds the network and VD neuron removal slows it. Outside this range, network activity is disrupted because the LP neuron cannot follow slow periods, and the VD neuron cannot follow fast periods. These neurons thus also limit, in complementary ways, normal pyloric activity to a certain period range. These data show that follower neurons in pacemaker networks can play central roles in controlling pacemaker period and suggest that in some cases specific functions can be assigned to individual network neurons.
10

Sajedinia, Zahra, and Sébastien Hélie. "A New Computational Model for Astrocytes and Their Role in Biologically Realistic Neural Networks." Computational Intelligence and Neuroscience 2018 (July 5, 2018): 1–10. http://dx.doi.org/10.1155/2018/3689487.

Full text
Abstract:
Recent studies in neuroscience show that astrocytes, alongside neurons, participate in modulating synapses. This led to the new concept of the "tripartite synapse", which means that a synapse consists of three parts: the presynaptic neuron, the postsynaptic neuron, and neighboring astrocytes. However, it is still unclear what role is played by the astrocytes in the tripartite synapse. Detailed biocomputational modeling may help generate testable hypotheses. In this article, we aim to study the role of astrocytes in synaptic plasticity by exploring whether tripartite synapses are capable of improving the performance of a neural network. To achieve this goal, we developed a computational model of astrocytes based on the Izhikevich simple model of neurons. Next, two neural networks were implemented. The first network was composed only of neurons and had standard bipartite synapses. The second network included both neurons and astrocytes and had tripartite synapses. We used reinforcement learning and tested the networks on categorizing random stimuli. The results show that tripartite synapses are able to improve the performance of a neural network and lead to higher accuracy in a classification task. However, the bipartite network was more robust to noise. This research provides computational evidence to begin elucidating the possible beneficial role of astrocytes in synaptic plasticity and the performance of a neural network.
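The Izhikevich simple model that this astrocyte model builds on is compact enough to sketch: two coupled equations plus a reset rule. The regular-spiking parameter set and constant input below are standard textbook values, and the Euler step is a simplification:

```python
# Izhikevich simple model, regular-spiking parameters.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = c, b * c     # membrane potential (mV) and recovery variable
I = 10.0            # constant input current
dt = 0.5            # time step (ms)
spikes = 0

for _ in range(int(1000 / dt)):          # simulate 1 s
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike: reset membrane, bump recovery
        v, u = c, u + d
        spikes += 1

print(spikes > 0)  # True — the neuron fires repeatedly under constant drive
```

Astrocyte models derived from it, as in this paper, reuse the same two-variable form with different parameters and slower dynamics.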

Theses on the topic "NEURONS NEURAL NETWORK"

1

Voysey, Matthew David. "Inexact analogue CMOS neurons for VLSI neural network design." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264387.

Full text
2

Lukashev, A. "Basics of artificial neural networks (ANNs)." Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.

Full text
3

Schmidt, Peter H. (Peter Harrison). "The transfer characteristic of neurons in a pulse-code neural network." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14594.

Full text
4

Brady, Patrick. "Internal representation and biological plausibility in an artificial neural network." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311273.

Full text
5

Hunter, Russell I. "Improving associative memory in a network of spiking neurons." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6177.

Full text
Abstract:
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, which were first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited due to restrictions caused by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry which is formed by the interconnection of large numbers of neurons. 
A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally fewer in number than the pyramidal cells and which form connections with pyramidal cells and other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity and the effect of incorporating spatially dependent connectivity on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall using methods inspired by the work of Graham and Willshaw (1995), where they apply mathematical transforms to an artificial neural network to improve the recall quality within the network. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity applied and a partial cue instantiated, and with a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which will proportionalise the excitatory postsynaptic potentials and provide a fast-acting reversal potential, which should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, implementing a persistent sodium channel in the cell body, which will act to non-linearise the activation threshold so that, after a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted to push cells which receive slightly more excitation (most likely high units) over the firing threshold. 
Finally, implementing spatial characteristics of the dendritic tree will allow a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses, which simulates the loss in potential found in synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models with differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and finally 100% basket cells providing feedback inhibition. These networks are compared and contrasted for their effect on recall quality and on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests the roles of inhibition and cellular dynamics are pivotal in learning and memory.
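The storage-and-recall scheme that auto-associative CA3 models inherit from classical artificial neural networks can be shown with a toy binary network. The patterns, sizes, and thresholding rule below are made up for illustration and are far simpler than the spiking models the thesis builds:

```python
import numpy as np

# Clipped Hebbian (outer-product) storage between binary units, then recall
# from a partial cue by keeping the most strongly driven units plus the cue.
n = 12
P = np.zeros((3, n), dtype=int)
P[0, [0, 1, 2, 3]] = 1
P[1, [4, 5, 6, 7]] = 1
P[2, [2, 5, 8, 9]] = 1          # overlaps both other patterns

W = np.clip(sum(np.outer(p, p) for p in P), 0, 1)
np.fill_diagonal(W, 0)          # no self-connections

cue = np.zeros(n, dtype=int)
cue[[0, 1]] = 1                 # half of pattern 0, as a partial cue

drive = W @ cue
recalled = ((drive == drive.max()) | (cue == 1)).astype(int)
print(np.array_equal(recalled, P[0]))  # True — the full pattern is restored
```

The hard threshold on `drive` plays the role that inhibitory circuitry plays in the biologically detailed models: it selects the most excited cells and suppresses the rest.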
6

D'Alton, S. "A Constructive Neural Network Incorporating Competitive Learning of Locally Tuned Hidden Neurons." Honours thesis, University of Tasmania, 2005. https://eprints.utas.edu.au/243/1/D%27Alton05CompetitivelyTrainedRAN.pdf.

Full text
Abstract:
Performance metrics are a driving force in many fields of work today. The field of constructive neural networks is no different. In this field, the popular measurement metrics (resultant network size, test set accuracy) are difficult to maximise, given their dependence on several varied factors, of which the most important is the dataset to be applied. This project set out with the intention of minimising the number of hidden units installed into a resource allocating network (RAN) (Platt 1991), whilst increasing the accuracy by means of applying competitive learning techniques. Three datasets were used for evaluation of the hypothesis, one being a time-series set, and the other two being more general regression sets. Many trials were conducted during the period of this work, in order to be able to prove the discovered results conclusively. Each trial differed in only one respect from another, in an effort to maximise the comparability of the results found. Four metrics were recorded for each trial: network size (per training epoch, and final), test and training set accuracy (again, per training epoch and final), and overall trial runtime. The results indicate that the application of competitive learning algorithms to the RAN results in a considerable reduction in network size (and therefore the associated reduction in processing time) across the vast majority of the trials run. Inspection of the accuracy-related metrics indicated that using this method offered no real difference to the original implementation of the RAN. As such, the positive network-size results found are only half of the bigger picture, meaning there is scope for future work to increase the test set accuracy.
7

Grehl, Stephanie. "Stimulation-specific effects of low intensity repetitive magnetic stimulation on cortical neurons and neural circuit repair in vitro (studying the impact of pulsed magnetic fields on neural tissue)." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066706/document.

Full text
Abstract:
Electromagnetic fields are widely used to non-invasively stimulate the human brain in clinical treatment and research. The effects of magnetic stimulation vary with the frequency and intensity of the magnetic field, and the mechanisms involved remain unknown, all the more so for low-intensity stimulation. This thesis investigates the effects of different low intensity (10-13 mT) repetitive magnetic stimulation (LI-rMS) parameters, delivered at a range of frequencies, on single neurons and neural networks in vitro, using primary cortical cultures and models of neural circuit repair, and describes key aspects of custom-tailored LI-rMS delivery to cell cultures. The results show frequency-dependent, stimulation-specific effects of LI-rMS on calcium release from intracellular stores, cell survival, neurite outgrowth, neural circuit repair, neuronal activation, and the expression of the genes involved. We show novel mechanisms underlying cellular responses to stimulation below the neuronal firing threshold, that is, in the absence of induced action potentials. These results underline the biological relevance of LI-rMS both in its own right and in combination with the effects of high-intensity rTMS, extending our understanding of the fundamental effects of LI-rMS on biological tissue, which is essential to develop effective therapeutic applications for neurological conditions.
8

Gettner, Jonathan A. "Identifying and Predicting Rat Behavior Using Neural Networks." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1513.

Full text
Abstract:
The hippocampus is known to play a critical role in episodic memory function. Understanding the relation between electrophysiological activity in a rat hippocampus and rat behavior may be helpful in studying pathological diseases that corrupt electrical signaling in the hippocampus, such as Parkinson’s and Alzheimer’s. Additionally, having a method to interpret rat behaviors from neural activity may help in understanding the dynamics of rat neural activity that are associated with certain identified behaviors. In this thesis, neural networks are used as a black-box model to map electrophysiological data, representative of an ensemble of neurons in the hippocampus, to a T-maze, wheel running or open exploration behavior. The velocity and spatial coordinates of the identified behavior are then predicted using the same neurological input data that was used for behavior identification. Results show that a nonlinear autoregressive process with exogenous inputs (NARX) neural network can partially identify between different behaviors and can generally determine the velocity and spatial position attributes of the identified behavior inside and outside of the trained interval
9

Vissani, Matteo. "Multisensory features of peripersonal space representation: an analysis via neural network modelling." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The peripersonal space (PPS) is the space immediately surrounding the body. It is coded in the brain in a multisensory, body-part-centered (e.g. hand-centered, trunk-centered), modular fashion. This is supported by the existence of multisensory neurons (in fronto-parietal areas) with a tactile receptive field on a specific body part (hand, arm, trunk, etc.) and a visual/auditory receptive field surrounding the same body part. Recent behavioural results (Serino et al., Sci Rep 2015), obtained by using an audio-tactile paradigm, have further supported the existence of distinct PPS representations, each specific to a single body part (hand, trunk, face) and characterized by specific properties. That study also evidenced that the PPS representations, although distinct, are not independent. In particular, the hand-PPS loses its properties and assumes those of the trunk-PPS when the hand is close to the trunk, as if the hand-PPS were encapsulated within the trunk-PPS. Similarly, the face-PPS appears to be subsumed into the trunk-PPS. It remains unclear how this interaction, which manifests behaviourally, can be implemented at the neural level by the modular organization of PPS representations. The aim of this Thesis is to propose a neural network model to aid comprehension of the underlying neurocomputational mechanisms. The model includes three subnetworks devoted to the single PPS representations around the hand, the face and the trunk. Furthermore, interaction mechanisms, controlled by proprioceptive neurons, have been postulated among the subnetworks. The network is able to reproduce the behavioural data, explaining them in terms of neural properties and responses. Moreover, the network provides some novel predictions that can be tested in vivo. One of these predictions has been tested in this work, by performing an ad-hoc behavioural experiment at the Laboratory of Cognitive Neuroscience (Campus Biotech, Geneva) under the supervision of the neuropsychologist Dr Serino.
10

Yao, Yong. "A neural network in the pond snail, Planorbis corneus: electrophysiology and morphology of pleural ganglion neurons and their input neurons." [S.l.]: [s.n.], 1986. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

Full text

Books on the topic "NEURONS NEURAL NETWORK"

1

Gerstner, Wulfram. Spiking neuron models: Single neurons, populations, plasticity. Cambridge, U.K.: Cambridge University Press, 2002.

2

Eeckman, Frank H. Computation in neurons and neural systems. New York: Springer, 1994.

3

Eeckman, Frank H., and Conference on Computation and Neural Systems (1993: Washington, D.C.), eds. Computation in neurons and neural systems. Boston: Kluwer Academic Publishers, 1994.

4

Fellin, Tommaso, and Michael Halassa, eds. Neuronal Network Analysis. Totowa, NJ: Humana Press, 2012. http://dx.doi.org/10.1007/978-1-61779-633-3.

5

Taylor, John Gerald, and C. L. T. Mannion, eds. Coupled oscillating neurons. London: Springer-Verlag, 1992.

6

Lek, Sovan, and Jean-François Guégan, eds. Artificial Neuronal Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-57030-8.

7

Lek, Sovan. Artificial Neuronal Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000.

8

Aizenberg, Igor. Complex-Valued Neural Networks with Multi-Valued Neurons. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20353-4.

9

SpringerLink (Online service), ed. Complex-Valued Neural Networks with Multi-Valued Neurons. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

10

Stein, Paul S. G., ed. Neurons, networks, and motor behavior. Cambridge, Mass.: MIT Press, 1997.


Book chapters on the topic "NEURONS NEURAL NETWORK"

1

De Wilde, Philippe. "Neurons in the Brain." In Neural Network Models, 53–70. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-84628-614-8_3.

2

Marinaro, Maria. "Deterministic Networks with Ternary Neurons." In Neural Network Dynamics, 57–74. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-2001-8_5.

3

Taylor, J. G. "Temporal Patterns and Leaky Integrator Neurons." In International Neural Network Conference, 952–55. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_144.

4

Pawelzik, K., F. Wolf, J. Deppisch, and T. Geisel. "Network Dynamics and Correlated Spikes." In Computation in Neurons and Neural Systems, 103–7. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2714-5_17.

5

Bleich, M. E., and R. V. Jensen. "Fatigue in a Dynamic Neural Network." In Computation in Neurons and Neural Systems, 229–34. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2714-5_37.

6

Katayama, Katsuki, Masafumi Yano, and Tsuyoshi Horiguchi. "Synchronous Phenomena for Two-Layered Neural Network with Chaotic Neurons." In Neural Information Processing, 19–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30499-9_3.

7

Ritz, Raphael, Wulfram Gerstner, and J. Leo van Hemmen. "Associative Binding and Segregation in a Network of Spiking Neurons." In Models of Neural Networks, 175–219. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-4320-5_5.

8

Adachi, Masaharu. "Surrogating Neurons in an Associative Chaotic Neural Network." In Advances in Neural Networks – ISNN 2004, 217–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_37.

9

Romero, G., P. A. Castillo, J. J. Merelo, and A. Prieto. "Using SOM for Neural Network Visualization." In Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, 629–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8_75.

10

Payne, Jeremy R., Zhian Xu, and Mark E. Nelson. "A Network Model of Automatic Gain Control in the Electrosensory System." In Computation in Neurons and Neural Systems, 203–8. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2714-5_33.


Conference papers on the topic "NEURONS NEURAL NETWORK"

1

Bian, Shaoping, Kebin Xu, and Jing Hong. "Near neighbor neurons interconnected neural network." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.tht27.

Abstract:
When the Hopfield neural network is extended to deal with a 2-D image composed of N×N pixels, the weight interconnection is a fourth-rank tensor with N^4 elements. Each neuron is interconnected with all other neurons of the network. For an image, N will be large. So N^4, the number of elements of the interconnection tensor, will be so large as to make the neural network's learning time (which corresponds to the precalculation of the interconnection tensor elements) too long. It is also difficult to implement the 2-D Hopfield neural network optically.
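To make the scaling concrete, here is a small NumPy sketch (my own illustration, not code from the paper) of Hebbian outer-product learning for an N×N-pixel Hopfield network. The weight array is a fourth-rank tensor of shape (N, N, N, N), i.e. N^4 entries, which is exactly the cost the abstract points out.

```python
import numpy as np

def hopfield_weights_2d(patterns):
    """Hebbian outer-product learning for N x N bipolar (+/-1) images.

    The weights form a fourth-rank tensor of shape (N, N, N, N):
    one entry per ordered pair of pixels, N**4 elements in total.
    """
    N = patterns[0].shape[0]
    W = np.zeros((N, N, N, N))
    for p in patterns:
        W += np.einsum('ij,kl->ijkl', p, p)  # outer product of image with itself
    # zero self-connections: pixel (i, j) must not feed back onto itself
    idx = np.arange(N)
    W[idx[:, None], idx[None, :], idx[:, None], idx[None, :]] = 0.0
    return W

def recall(W, state, steps=5):
    """Synchronous update: each pixel takes the sign of its weighted input."""
    for _ in range(steps):
        state = np.sign(np.einsum('ijkl,kl->ij', W, state))
        state[state == 0] = 1
    return state
```

Even at N = 100 (a modest image) the tensor already holds 10^8 weights, which is why precomputing it, and routing that many optical interconnections, becomes prohibitive.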
2

Filo, G. "Analysis of Neural Network Structure for Implementation of the Prescriptive Maintenance Strategy." In Terotechnology XII. Materials Research Forum LLC, 2022. http://dx.doi.org/10.21741/9781644902059-40.

Abstract:
This paper provides an initial analysis of the possibilities for using neural networks in practical implementations of the prescriptive maintenance strategy. The main issues covered are the preparation and processing of input data, the choice of artificial neural network architecture, and the neuron models used in each layer. Methods of categorising the input data and normalising values within each category were proposed. Based on the normalisation results, specific neuron activation functions were suggested. As part of the network structure, the applied solutions were analysed, including the number of neuron layers and the number of neurons in each layer. In further work, the proposed network structures may undergo supervised or partially supervised training to verify the accuracy and confidence level of the results they generate.
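The data-preparation step described above can be sketched as follows (a generic illustration under assumed data shapes, not the paper's actual code): values are grouped by category and min-max normalised within each group, mapping them to [0, 1], a range that suits sigmoid-like activations in the first layer.

```python
def normalise_by_category(records):
    """Min-max normalise each value within its category.

    `records` is a list of (category, value) pairs -- a hypothetical
    stand-in for sensor readings grouped by signal type. Each category's
    values are mapped independently to the [0, 1] range.
    """
    by_cat = {}
    for cat, val in records:
        by_cat.setdefault(cat, []).append(val)
    bounds = {c: (min(vs), max(vs)) for c, vs in by_cat.items()}
    out = []
    for cat, val in records:
        lo, hi = bounds[cat]
        out.append((cat, 0.0 if hi == lo else (val - lo) / (hi - lo)))
    return out
```

Normalising per category keeps a wide-ranging signal (say, pressure) from swamping a narrow one (say, vibration) before either reaches the network.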
3

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin, and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence." In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Abstract:
Stemming from the rapid development of artificial intelligence, which has achieved broad success in pattern recognition, robotics, and bioinformatics, neuroscience is also making tremendous progress. Spiking neural networks, a kind of neural network with biological interpretability, are gradually receiving wide attention and are regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of spiking neurons, different neuronal models, and the connectivity of neural circuits. We also review the mainstream neural network learning mechanisms and network architectures. This review hopes to attract researchers from different fields and advance the development of brain intelligence and artificial intelligence.
4

Sharpe, J. P., K. M. Johnson, and M. G. Robinson. "Large scale simulations of an optoelectronic neural network." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1992. http://dx.doi.org/10.1364/oam.1992.mbb4.

Abstract:
With improving optical device technology it is now possible to consider constructing very large optoelectronic neural networks containing on the order of 1000 fully interconnected neurons. Whether such systems will work, though, depends on the quality of the optical devices and the network architecture. A recent study indicated that a 200-neuron single-layer polarization logic network can operate under the constraints of presently available spatial light modulators.1 In this paper we extend this study to examine the effect of device limitations on the performance of single- and multilayer networks containing up to 1000 fully interconnected neurons. We consider the effects of nonlinearities, contrast ratio, and quantization noise in the weight matrix, and the effect of noise at the inputs and outputs, on the convergence rate and the number of patterns that can be classified.
5

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical perceptron-based quadratic neural network." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mii8.

Abstract:
Optical quadratic neural networks are currently being investigated because of their advantages over linear neural networks.1 Based on a quadratic neuron already constructed,2 an optical quadratic neural network utilizing four-wave mixing in photorefractive barium titanate (BaTiO3) has been developed. This network implements a feedback loop using a charge-coupled device camera, two monochrome liquid crystal televisions, a computer, and various optical elements. For training, the network employs the supervised quadratic Perceptron algorithm to associate binary-valued input vectors with specified target vectors. The training session is composed of epochs, each of which comprises an entire set of iterations for all input vectors. The network converges when the interconnection matrix remains unchanged for every successive epoch. Using a spatial multiplexing scheme for two bipolar neurons, the network can classify up to eight different input patterns. To the best of our knowledge, this proof-of-principle experiment represents one of the first working trainable optical quadratic networks utilizing a photorefractive medium.
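The supervised quadratic Perceptron rule mentioned above can be sketched in software terms (a hypothetical illustration of the training rule only; the actual system implements the neuron optically via four-wave mixing in BaTiO3):

```python
import numpy as np

def quad_features(x):
    """Quadratic feature map: all pairwise products, linear terms, and a bias."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([np.outer(x, x).ravel(), x, [1.0]])

def train_quadratic_perceptron(samples, targets, epochs=20):
    """Classic Perceptron updates applied to quadratic features.

    Training runs in epochs (one pass over all input vectors each);
    it stops when a full epoch produces no weight change, the same
    convergence criterion the abstract describes.
    """
    w = np.zeros(len(quad_features(samples[0])))
    for _ in range(epochs):
        changed = False
        for x, t in zip(samples, targets):
            phi = quad_features(x)
            y = 1 if phi @ w >= 0 else -1
            if y != t:
                w += t * phi        # standard Perceptron correction
                changed = True
        if not changed:             # converged: epoch with no updates
            break
    return w
```

With bipolar inputs, the pairwise-product features let a single quadratic neuron separate patterns, such as XOR, that no linear perceptron can, which is one of the advantages over linear networks the abstract alludes to.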
6

Farhat, Nabil H., and Mostafa Eldefrawy. "The bifurcating neuron." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mk3.

Abstract:
Present neural network models ignore temporal considerations, and hence synchronicity in neural networks, by representing neuron response with a transfer function relating the frequency of action potentials (firing frequency) to the activation potential. Models of the living neuron based on the Hodgkin-Huxley model of the excitable membrane of the squid's axon, and its Fitzhugh-Nagumo approximation, exhibit much richer and more complex behavior than that described by firing frequency-activation potential models. We describe the theory, operation, and properties of an integrate-and-fire neuron which we call the bifurcating neuron, and show it has rich, complex behavior capable of exhibiting synchronous firing or phase-locked operation, periodic firing, chaotic firing, bursting, and bifurcation between these modes of operation. An optoelectronic realization of the bifurcating neuron in the form of a nonlinear relaxation oscillator is described, and experimental verification of its behavior is presented. The circuit shows that bifurcating neuron networks could be easier to construct than sigmoidal neuron networks. A population of bifurcating neurons, coupled through a connection matrix, represents a bifurcating neural network that is expected to exhibit properties not normally observed in networks of sigmoidal neurons. These include feature binding, cognition, quenching, and chaos, all of which play a role in higher-level brain function. We expect the bifurcating neuron will lead to a new generation of neural networks, more neuromorphic and hence more powerful than the networks being dealt with today, and that optics will play a role in the implementation of bifurcating neuron networks.
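A minimal software sketch of an integrate-and-fire neuron behaving as a relaxation oscillator, the simplest of the firing regimes described above (parameter values are my own illustrative choices, not the authors'):

```python
def integrate_and_fire(current, dt=0.1, steps=2000, tau=10.0,
                       threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron as a relaxation oscillator.

    The membrane potential charges toward the input current, fires when
    it crosses threshold, and resets. A constant supra-threshold input
    therefore yields strictly periodic firing; sub-threshold input
    yields no spikes at all. Returns the list of spike times.
    """
    v, spikes = v_reset, []
    for i in range(steps):
        v += dt * (-v + current) / tau   # leaky integration toward `current`
        if v >= threshold:               # fire and reset
            spikes.append(i * dt)
            v = v_reset
    return spikes
```

Time-varying or coupled inputs, rather than the constant drive used here, are what push such a unit from periodic firing into the phase-locked, bursting, and chaotic modes the abstract discusses.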
7

Zhan, Tiffany. "Hyper-Parameter Tuning in Deep Neural Network Learning." In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121809.

Abstract:
Deep learning has been increasingly used in applications such as image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series. In deep learning, a convolutional neural network (CNN) is a regularized version of the multilayer perceptron. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The full connectivity of these networks makes them prone to overfitting the data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training or trimming connectivity. CNNs use relatively little pre-processing compared to other image classification algorithms. Given the rise in popularity and use of deep neural network learning, tuning hyper-parameters is an increasingly prominent task in constructing efficient deep neural networks. In this paper, the tuning of deep neural network (DNN) hyper-parameters is explored using an evolutionary approach, popularized for estimating solutions to problems where the problem space is too large to get an exact solution.
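The evolutionary approach to hyper-parameter search can be sketched as follows (a generic genetic-algorithm skeleton of my own, not the paper's method; in real use the `fitness` callback would train and validate a network, which is exactly the expensive step that rules out exhaustive search):

```python
import random

def evolve_hyperparams(fitness, bounds, pop_size=12, generations=25,
                       mutation_scale=0.1, seed=0):
    """Minimal evolutionary search over a box of hyper-parameters.

    `fitness` maps a dict of hyper-parameters to a score (higher is
    better). `bounds` maps each parameter name to a (lo, hi) range.
    Truncation selection keeps the top half each generation (elitism),
    so the best individual can only improve.
    """
    rng = random.Random(seed)

    def sample():
        return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

    def mutate(ind):
        child = dict(ind)
        k = rng.choice(list(bounds))            # perturb one parameter
        lo, hi = bounds[k]
        child[k] = min(hi, max(lo, child[k] + rng.gauss(0, mutation_scale * (hi - lo))))
        return child

    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        pop = parents + [mutate(rng.choice(parents)) for _ in parents]
    return max(pop, key=fitness)
```

Real tuners would also handle categorical choices (optimizer, layer counts) and crossover between parents; this sketch keeps only the select-and-mutate core.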
8

Lin, Steven, Demetri Psaltis, and Jae Kim. "High-gain GaAs optoelectronic thresholding devices for neural network implementation." In Integrated Photonics Research. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/ipr.1991.tuc1.

Abstract:
The optical implementation of a neural network consists of two basic components: a 2-D array of neurons and interconnections. Each neuron is a nonlinear processing element that, in its simplest form, produces an output which is the thresholded version of the input. Monolithic optoelectronic integrated devices are candidates for these neurons. However, in order for these devices to be used as neurons in a practical experiment, they must be large in number (10^4/cm^2 to 10^6/cm^2) and exhibit high gain. This puts a stringent requirement on the electrical power dissipation. Thus, these devices have to be operated at current levels low enough that the power dissipation on the chip does not exceed the heat-sinking capability, and yet the current levels need to be large enough to produce high gain. This means sensitive input devices are a must. To achieve these goals, the speed requirement of the devices must be relaxed, as the operation of a neural network does not have to be very fast.
9

Wright, John A., Svetlana Tatic-Lucic, Yu-Chong Tai, Michael P. Maher, Hannah Dvorak, and Jerome Pine. "Towards a Functional MEMS Neurowell by Physiological Experimentation." In ASME 1996 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/imece1996-1370.

Abstract:
Specificity in neuron targeting during stimulation and recording is essential in executing complex neuronal network studies. Physiological experiments have shown that young (less than one week old) cultured neurons can escape through a 0.5 μm-square hole. Through iterative physiological experimentation, a functional MEMS structure we call a canopy neurowell has been developed. Essentially a micromechanical cage, the structure consists of a well, an electrode bottom, and a nitride canopy cover with integrated micro-tunnels. It is able to physically confine, and make electrical contact with, a neuron. Potentially, neurowells are capable of multi-site study of long-term, in vitro and in vivo neural systems. Experiments show that the canopy neurowell is mechanically functional and biologically compatible.
10

Petrisor, G. C., B. K. Jenkins, H. Chin, and A. R. Tanguay. "Dual-function adaptive neural networks for photonic implementation." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.mvv1.

Abstract:
We present an adaptive neural-network model in which each neuron unit has two inputs, a single potential, and two outputs. This network model can exhibit general functionality, although it uses only nonnegative values for all signals and weights in the interconnection. In this dual-function network (DFN), each output is a distinct nonlinear function of the potential of the neuron. Each neuron-unit input receives a linear combination of both outputs from any set of neurons in the preceding layer. A learning algorithm for multilayer feedforward DFNs, based on gradient descent of an error function and similar to backward-error-propagation learning, is presented.

Organization reports on the topic "NEURONS NEURAL NETWORK"

1

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course "Foundations of Mathematic Informatics". Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Abstract:
The role of neural network modeling in the learning content of the special course "Foundations of Mathematic Informatics" was discussed. The course was developed for students of technical universities, future IT specialists, and is aimed at bridging the gap between theoretical computer science and its applications in software, system and computing engineering. CoCalc was justified as a learning tool for mathematical informatics in general and neural network modeling in particular. Elements of a technique for using CoCalc when studying the topic "Neural network and pattern recognition" of the special course "Foundations of Mathematic Informatics" are shown. The program code was presented in CoffeeScript, implementing the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. The application of the Kolmogorov–Arnold representation theorem to determining the architecture of multilayer neural networks was discussed. The implementation of a disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. According to the simulation results, a conclusion was drawn about the limits within which the constructed networks retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
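The basic components the abstract lists (activation functions, their derivatives, and the weighted-sum neuron) can be sketched as follows, shown here in Python rather than the course's CoffeeScript; the weights for the disjunctive (OR) element are hand-picked for illustration:

```python
import math

# Activation functions and their derivatives -- the basic building blocks.
def sigmoid(x):  return 1.0 / (1.0 + math.exp(-x))
def dsigmoid(x): s = sigmoid(x); return s * (1.0 - s)
def tanh(x):     return math.tanh(x)
def dtanh(x):    return 1.0 - math.tanh(x) ** 2
def step(x):     return 1.0 if x >= 0 else 0.0

def neuron(weights, bias, inputs, act=sigmoid):
    """A single neuron: weighted sum of inputs plus bias, then activation."""
    return act(sum(w * x for w, x in zip(weights, inputs)) + bias)

# The disjunctive logical element (OR) as one stepped neuron:
# any input of 1 pushes the weighted sum past the -0.5 bias.
def logical_or(a, b):
    return neuron([1.0, 1.0], -0.5, [a, b], act=step)
```

The derivatives (`dsigmoid`, `dtanh`) are what gradient-based weight-update methods need; the stepped neuron suffices for fixed logic elements like OR, where no training occurs.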
2

Tarasenko, Andrii O., Yuriy V. Yakimov, and Vladimir N. Soloviev. Convolutional neural networks for image classification. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3682.

Abstract:
This paper presents the theoretical basis for the creation of convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks were considered, starting from the structure of a simple neuron up to the convolutional multilayer network required for the solution of this problem. The stages of structuring the training data and the training cycle of the network are shown, as well as calculations of recognition errors during training and verification. At the end of the work, the results of network training, the calculated recognition error and the training accuracy are presented.
3

Cárdenas-Cárdenas, Julián Alonso, Deicy J. Cristiano-Botia, and Nicolás Martínez-Cortés. Colombian inflation forecast using Long Short-Term Memory approach. Banco de la República, June 2023. http://dx.doi.org/10.32468/be.1241.

Abstract:
We use Long Short-Term Memory (LSTM) neural networks, a deep learning technique, to forecast Colombian headline inflation one year ahead through two approaches. The first uses only information from the target variable, while the second incorporates additional information from some relevant variables. We apply rolling samples in the traditional neural network construction process, selecting the hyperparameters with criteria for minimizing the forecast error. Our results show a better forecasting capacity for the network with information from additional variables, surpassing both the other LSTM application and ARIMA models optimized for forecasting (with and without explanatory variables). This improvement in forecasting accuracy is most pronounced over longer time horizons, specifically from the seventh month onwards.
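The rolling-sample construction mentioned above can be sketched as follows (my own generic illustration of windowed sample building for a one-year-ahead monthly forecast; the paper's exact window length and variables are not specified here, and the LSTM itself is omitted):

```python
def rolling_windows(series, window, horizon=12):
    """Build (input, target) pairs by rolling a fixed-length window.

    Each window of `window` consecutive past values is paired with the
    value `horizon` steps after the window's end (12 months ahead for a
    one-year forecast). These pairs are the training samples a
    sequence model such as an LSTM would be fit on.
    """
    pairs = []
    for start in range(len(series) - window - horizon + 1):
        x = series[start : start + window]          # model input
        y = series[start + window + horizon - 1]    # value to forecast
        pairs.append((x, y))
    return pairs
```

Rolling the window forward one step at a time extracts every available training sample from a single series, which matters when, as with monthly inflation, the history is short.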
4

Williams, J. G., and W. C. Jouse. Drive reinforcement neural networks for reactor control. Final report. Office of Scientific and Technical Information (OSTI), June 1995. http://dx.doi.org/10.2172/114638.

5

Tam, David C. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles. Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada257221.

6

Tam, David C. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada230477.

7

Kater, S. B., and Barbara C. Hayes. Circuit Behavior in the Development of Neuronal Networks. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada198040.

8

Brown, Thomas H. Long-Term Synaptic Plasticity and Learning in Neuronal Networks. Fort Belvoir, VA: Defense Technical Information Center, July 1986. http://dx.doi.org/10.21236/ada173170.

9

Berger, Theodore W. A Systems Theoretic Investigation of Neuronal Network Properties of the Hippocampal Formation. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada250246.

10

Rulkov, Nikolai. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics. Fort Belvoir, VA: Defense Technical Information Center, February 2016. http://dx.doi.org/10.21236/ad1004577.

