Academic literature on the topic 'Neural computer'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural computer.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neural computer"

1

Nakajima, K., Y. Mizugaki, T. Yamashita, and Y. Sawada. "Superconducting neural computer." Applied Superconductivity 1, no. 10-12 (October 1993): 1893–905. http://dx.doi.org/10.1016/0964-1807(93)90337-2.

2

Păun, Gheorghe, Mario J. Pérez-Jiménez, and Grzegorz Rozenberg. "Computing Morphisms by Spiking Neural P Systems." International Journal of Foundations of Computer Science 18, no. 6 (December 2007): 1371–82. http://dx.doi.org/10.1142/s0129054107005418.

Abstract:
We continue the study of the spiking neural P systems considered as transducers of binary strings or binary infinite sequences, and we investigate their ability to compute morphisms. The class of computed morphisms is rather restricted: length preserving or erasing, and the so-called 2-block morphisms can be computed; however, non-erasing non-length-preserving morphisms cannot be computed.
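As a string-level illustration of the notions in this abstract (a sketch of the mathematics only, not of the spiking neural P system construction): a morphism over binary strings is determined by the images of 0 and 1; it is length preserving when both images have length one, and erasing when an image may be empty. A minimal Python sketch with invented helper names:

    # A word morphism h: {0,1}* -> {0,1}* is fixed by h(0) and h(1)
    # and applied symbol by symbol.
    def make_morphism(image_of_0: str, image_of_1: str):
        images = {"0": image_of_0, "1": image_of_1}
        return lambda word: "".join(images[s] for s in word)

    length_preserving = make_morphism("1", "0")  # |h(0)| = |h(1)| = 1
    erasing = make_morphism("", "11")            # h(0) is the empty string

    print(length_preserving("0110"))  # -> 1001 (same length as the input)
    print(erasing("0110"))            # -> 1111 (0s erased, 1s doubled)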
3

Ziegel, Eric R. "Neural Networks in Computer Intelligence." Technometrics 37, no. 4 (November 1995): 470. http://dx.doi.org/10.1080/00401706.1995.10484401.

4

de Oliveira, P. M. C., Harvey Gould, and Jan Tobochnik. "Computer Simulations of Neural Networks." Computers in Physics 11, no. 5 (1997): 443. http://dx.doi.org/10.1063/1.4822587.

5

Lirov, Yuval. "Computer aided neural network engineering." Neural Networks 5, no. 4 (July 1992): 711–19. http://dx.doi.org/10.1016/s0893-6080(05)80047-4.

6

Akamatsu, Norio, Yoshihiro Nakamura, and Tohru Kawabe. "Neural computer using electromagnetic coupling." Systems and Computers in Japan 23, no. 8 (1992): 85–96. http://dx.doi.org/10.1002/scj.4690230809.

7

Selviah, David R. "Neural computer uses optical fingers." Physics World 7, no. 7 (July 1994): 29–30. http://dx.doi.org/10.1088/2058-7058/7/7/29.

8

Howlett, R. J., and S. D. Walters. "Multi-computer neural network architecture." Electronics Letters 35, no. 16 (1999): 1350. http://dx.doi.org/10.1049/el:19990962.

9

Schöneburg, E. "Neural networks hunt computer viruses." Neurocomputing 2, no. 5-6 (July 1991): 243–48. http://dx.doi.org/10.1016/0925-2312(91)90027-9.

10

Kulkarni, Arun D. "Computer Vision and Fuzzy-Neural Systems." Journal of Electronic Imaging 13, no. 1 (January 1, 2004): 251. http://dx.doi.org/10.1117/1.1640620.


Dissertations / Theses on the topic "Neural computer"

1

Somers, Harriet. "A neural computer." Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362021.

2

Churcher, Stephen. "VLSI neural networks for computer vision." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/13397.

Abstract:
Recent years have seen the rise to prominence of a powerful new computational paradigm - the so-called artificial neural network. Loosely based on the microstructure of the central nervous system, neural networks are massively parallel arrangements of simple processing elements (neurons) which communicate with each other through variable strength connections (synapses). The simplicity of such a description belies the complexity of calculations which neural networks are able to perform. Allied to this, the emergent properties of noise resistance, fault tolerance, and large data bandwidths (all arising from the parallel architecture) mean that neural networks, when appropriately implemented, represent a powerful tool for solving many problems which require the processing of real-world data. A computer vision task (viz. the classification of regions in images of segmented natural scenes) is presented, as a problem in which large numbers of data need to be processed quickly and accurately, whilst, in certain circumstances, being disambiguated. Of the classifiers tried, the neural network (a multi-layer perceptron) was found to provide the best overall solution, to the task of distinguishing between regions which were 'roads', and those which were 'not roads'. In order that best use might be made of the parallel processing abilities of neural networks, a variety of special purpose hardware implementations are discussed, before two different analogue VLSI designs are presented, complete with characterisation and test results. The latter of these chips (the EPSILON device) is used as the basis for a practical neuro-computing system. The results of experimentation with different applications are presented. Comparisons with computer simulations demonstrate the accuracy of the chips, and their ability to support learning algorithms, thereby proving the viability of the use of pulsed analogue VLSI techniques for the implementation of artificial neural networks.
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Abstract:
The conventional multilayer feedforward network with continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3-bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables. Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to that of their conventional counterparts.
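The multiplication scheme described in this abstract is concrete enough to sketch. Assuming integer (fixed-point) activations, a weight w in [-3, 3] can be applied with at most one shift, one add, and one sign change, and a ternary synapse in {-1, 0, 1} with a sign change alone; a minimal illustration of the arithmetic (a toy construction, not the thesis code):

    def integer_weight_multiply(x: int, w: int) -> int:
        # Apply a 3-bit weight w in [-3, 3] to an integer activation x.
        assert -3 <= w <= 3
        m = abs(w)
        if m == 0:
            result = 0
        elif m == 1:
            result = x
        elif m == 2:
            result = x << 1        # one shift
        else:                      # m == 3
            result = (x << 1) + x  # one shift and one add
        return -result if w < 0 else result  # at most one sign change

    def ternary_synapse(x: int, w: int) -> int:
        # Multiplier-free synapse: w in {-1, 0, 1} needs only a sign change.
        return -x if w == -1 else (x if w == 1 else 0)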
4

Kulakov, Anton. "Multiprocessing neural network simulator." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/348420/.

Abstract:
Over the last few years tremendous progress has been made in neuroscience by employing simulation tools to investigate neural network behaviour. Many simulators have been created during the last few decades, and their number and feature sets continually grow due to persistent interest from researchers and engineers. Simulation software able to simulate a large-scale neural network has been developed and is presented in this work. Based on a highly abstract integrate-and-fire neuron model, a clock-driven sequential simulator has been developed in C++. The program is able to associate input patterns with output patterns. A novel biologically plausible learning mechanism uses long-term potentiation and long-term depression to change the strength of the connections between neurons based on a global binary feedback signal. The sequentially executed model was later extended to a multi-processor system, which executes the described learning algorithm using an event-driven technique on a parallel distributed framework, simulating a neural network asynchronously. This allows the simulation to manage larger-scale neural networks while remaining immune to processor failure and communication problems. The main benefit of the resulting multi-processor simulator is the ability to simulate large-scale neural networks using highly parallel distributed computing; accordingly, its design incorporates an efficient weight-adjusting algorithm and an efficient scheme for asynchronous local communication between processors.
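The learning mechanism described above can be caricatured in a few lines. A minimal clock-driven sketch (a toy model with invented sizes and constants, not the simulator itself): leaky integrate-and-fire units are driven by random input spikes, and a global binary feedback signal decides whether co-active connections are potentiated (LTP) or depressed (LTD).

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 8, 2
    w = rng.uniform(0.0, 0.5, size=(n_out, n_in))  # synaptic strengths
    v = np.zeros(n_out)                            # membrane potentials
    leak, threshold, lr = 0.9, 1.0, 0.05

    for step in range(100):
        spikes_in = (rng.random(n_in) < 0.3).astype(float)  # random input spikes
        v = leak * v + w @ spikes_in                        # clock-driven update
        spikes_out = v >= threshold
        v[spikes_out] = 0.0                                 # reset after firing
        reward = bool(spikes_out[0] and not spikes_out[1])  # toy global binary feedback
        coactive = np.outer(spikes_out, spikes_in)          # co-active pre/post pairs
        w += lr * coactive if reward else -lr * coactive    # LTP on reward, LTD otherwise
        np.clip(w, 0.0, 1.0, out=w)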
5

Durrant, Simon. "Negative correlation in neural systems." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2387/.

Abstract:
In our attempt to understand neural systems, it is useful to identify statistical principles that may be beneficial in neural information processing, outline how these principles may work in theory, and demonstrate the benefits through computational modelling and simulation. Negative correlation is one such principle, and is the subject of this work. The main body of the work falls into three parts. The first part demonstrates the space filling and accelerated central limit convergence benefits of negative correlation, both generally and in the specific neural context of V1 receptive fields. I outline two new algorithms combining traditional ICA with a correlation objective function. Correlated component analysis seeks components with a given correlation matrix, while correlated basis analysis seeks basis functions with a given correlation matrix. The benefits of recovering components and basis functions with negative correlations are shown. The second part looks at the functional role of negative correlation for integrate-and-fire neurons in the context of suprathreshold stochastic resonance, for neurons receiving Poisson inputs modelled by a diffusion approximation. I show how the SSR effect can be seen in networks of spiking neurons, and further show how correlation can be used to control the noise level, and that optimal information transmission occurs for negatively correlated inputs when parameters take biophysically plausible values. The final part examines the question of how negative correlation may be implemented in the context of small networks of spiking neurons. Networks of integrate-and-fire neurons with and without lateral inhibitory connections are tested, and the networks with the inhibitory connections are found to perform better and show negatively correlated firing patterns. This result is extended to more biophysically detailed neuron and synapse models, highlighting the robust nature of the mechanism. Finally, the mechanism is explained as a threshold-unit approximation to non-threshold maximum likelihood signal/noise decomposition.
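A small numerical illustration of why negatively correlated inputs can help a threshold population (a toy construction, not the thesis model): each of n threshold units sees the same signal plus Gaussian noise, and the population output is the count of units above threshold. Negatively correlated noise spreads the units' errors apart, so the count tracks the signal with lower variance than independent noise does.

    import numpy as np

    rng = np.random.default_rng(1)
    n, sigma, trials = 8, 1.0, 20000
    rho = -0.9 / (n - 1)  # near the most negative equicorrelation a valid covariance allows
    cov = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

    signal = 0.3
    for label, noise in (
        ("independent", rng.normal(0.0, sigma, (trials, n))),
        ("negatively correlated", rng.multivariate_normal(np.zeros(n), cov, trials)),
    ):
        counts = np.sum(signal + noise > 0.0, axis=1)  # population spike count
        print(label, "count variance:", counts.var())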
6

Baker, Thomas Edward. "Implementation limits for artificial neural networks." Thesis, 1990. http://content.ohsu.edu/u?/etd,268.

7

Lam, Yiu Man. "Self-organized cortical map formation by guiding connections." Thesis, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20LAM.

8

Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Abstract:
Artificial neural networks (ANNs) are biologically inspired algorithms, and it is natural that biology continues to inspire research in them. From the recent breakthrough of deep learning to the wake-sleep training routine, all draw inspiration from a common source: biology. The transfer functions of artificial neural networks play the important role of forming the decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimization compared to other aspects of neural network optimization. In this work, neuronal diversity - a property found in biological neural networks - is explored as a potentially promising method of transfer function optimization. This work shows how neural diversity can improve generalization in the context of the literature on the bias-variance decomposition and meta-learning. It then demonstrates that neural diversity - represented in the form of transfer function diversity - can yield diverse and accurate computational strategies that can be used as ensembles with competitive results, without supplementing them with other diversity maintenance schemes, which tend to be computationally expensive. This work also presents neural network meta-features, described as problem signatures, sampled from models with diverse transfer functions for problem characterization. These were shown to meet the criteria desired of any meta-feature, i.e. consistency for a given problem and discriminative power across different problems. Furthermore, these meta-features were also used to study the underlying computational strategies adopted by the neural network models, which led to the discovery of the strong discriminative property of the evolved transfer functions. The culmination of this study is the co-evolution of neurally diverse neurons with their weights and topology for efficient learning. The resulting networks achieve significant generalization ability, as demonstrated by an average MSE of 0.30 on 22 different benchmarks with minimal resources (i.e. two hidden units). Interestingly, these are exactly the properties associated with neural diversity, showing that its efficiency and increased computational capacity can be replicated with transfer function diversity in artificial neural networks.
9

McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Abstract:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
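The compiler idea is easy to convey with a toy example (a hypothetical specification format, not Adnet's actual syntax): a high-level description of layers and their connectivity is expanded into an explicit list of every connection with its parameters.

    spec = {
        "layers": {"input": 4, "hidden": 3, "output": 2},
        "connect": [("input", "hidden"), ("hidden", "output")],  # fully connected
        "initial_weight": 0.1,
    }

    def compile_network(spec):
        # Expand the high-level spec into explicit (source, target, weight) triples.
        config = []
        for src, dst in spec["connect"]:
            for i in range(spec["layers"][src]):
                for j in range(spec["layers"][dst]):
                    config.append((f"{src}[{i}]", f"{dst}[{j}]", spec["initial_weight"]))
        return config

    print(len(compile_network(spec)))  # 4*3 + 3*2 = 18 explicit connections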
10

Yang, Horng-Chang. "Multiresolution neural networks for image edge detection and restoration." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66740/.

Abstract:
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine and they have been applied to image processing and computer vision. This work focused on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy minimization framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade off between position and class resolution and ensures both robustness in noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularize these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularization of these ill-posed image processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge detection results are presented for a number of synthetic and natural images which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network. The results of the edge detection are used for adaptive restoration. Also shown are the results of restoration using the proposed iterative network framework.
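The relaxation operation at the heart of such frameworks can be stated generically (a minimal sketch of the standard discrete Hopfield update, not the thesis network or its compatibility function): with symmetric weights W, zero diagonal, and bias b, asynchronous threshold updates never increase the quadratic energy E(x) = -(1/2) x^T W x - b^T x.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 16
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2.0            # symmetric weights
    np.fill_diagonal(W, 0.0)       # no self-connections
    b = rng.normal(size=n)
    x = rng.integers(0, 2, size=n).astype(float)  # binary unit states

    def energy(x):
        return -0.5 * x @ W @ x - b @ x

    for sweep in range(20):
        for i in rng.permutation(n):               # asynchronous update order
            x[i] = 1.0 if W[i] @ x + b[i] > 0 else 0.0
    print("relaxed energy:", energy(x))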

Books on the topic "Neural computer"

1

Eckmiller, Rolf, Christoph von der Malsburg, and North Atlantic Treaty Organization Scientific Affairs Division, eds. Neural computers. Berlin: Springer-Verlag, 1988.

2

Zhang, Yunong. Zhang neural networks and neural-dynamic method. Hauppauge, N.Y: Nova Science Publishers, 2009.

3

Neural networks in computer intelligence. New York: McGraw-Hill, 1994.

4

Caudill, Maureen. Understanding neural networks: Computer explorations. Cambridge, Mass: MIT Press, 1993.

5

Fu, LiMin. Neural networks in computer intelligence. New York: McGraw-Hill, 1994.

6

Valentin, Dominique, and Betty Edelman, eds. Neural networks. Thousand Oaks, Calif: Sage Publications, 1999.

7

Neural networks. New York: Palgrave, 2000.

8

Hoffmann, Norbert. Simulating neural networks. Wiesbaden: Vieweg, 1994.

9

Neural network parallel computing. Boston: Kluwer Academic Publishers, 1992.

10

Zhang, Yi, and Jiliu Zhou, eds. Subspace learning of neural networks. Boca Raton: CRC Press, 2011.


Book chapters on the topic "Neural computer"

1

He, Bin, Han Yuan, Jianjun Meng, and Shangkai Gao. "Brain–Computer Interfaces." In Neural Engineering, 131–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43395-6_4.

2

He, Bin, Shangkai Gao, Han Yuan, and Jonathan R. Wolpaw. "Brain–Computer Interfaces." In Neural Engineering, 87–151. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-5227-0_2.

3

Zuse, Konrad. "Faust, Mephistopheles and Computer." In Neural Computers, 9–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-83740-1_2.

4

Nakayama, Hideki. "Recurrent Neural Network." In Computer Vision, 1–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-03243-2_855-1.

5

Wen, Wei, Hanxiao Liu, Yiran Chen, Hai Li, Gabriel Bender, and Pieter-Jan Kindermans. "Neural Predictor for Neural Architecture Search." In Computer Vision – ECCV 2020, 660–76. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58526-6_39.

6

ElAarag, Hala. "Neural Networks." In SpringerBriefs in Computer Science, 11–16. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4893-7_3.

7

Ryou, Wonryong, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, and Martin Vechev. "Scalable Polyhedral Verification of Recurrent Neural Networks." In Computer Aided Verification, 225–48. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_10.

Abstract:
We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient descent based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron. Using Prover, we present the first study of certifying a non-trivial use case of recurrent neural networks, namely speech classification. To achieve this, we additionally develop custom abstractions for the non-linear speech preprocessing pipeline. Our evaluation shows that Prover successfully verifies several challenging recurrent models in computer vision, speech, and motion sensor data classification beyond the reach of prior work.
8

Chirimuuta, Mazviita. "Your Brain Is Like a Computer: Function, Analogy, Simplification." In Neural Mechanisms, 235–61. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54092-0_11.

9

Feldman, Jerome A. "Structured Neural Networks in Nature and in Computer Science." In Neural Computers, 17–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-83740-1_3.

10

Moore Jackson, Melody, and Rudolph Mappus. "Neural Control Interfaces." In Brain-Computer Interfaces, 21–33. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-272-8_2.


Conference papers on the topic "Neural computer"

1

Pintea, Florentina A., Tiberiu M. Karnyanszky, and Simona A. Apostol. "Improved computer aided TDT analysis." In 2016 13th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2016. http://dx.doi.org/10.1109/neurel.2016.7800103.

2

Lacrama, Dan L., Florentina A. Pintea, Tiberiu M. Karnyanszky, and Diana S. Codat. "Computer aided analysis of projective tests." In 2014 12th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2014). IEEE, 2014. http://dx.doi.org/10.1109/neurel.2014.7011481.

3

Gupta. "Fuzzy neural networks in computer vision." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118460.

4

Mueller, Van der Spiegel, Blackman, Chiu, Clare, Dao, Donham, Hsieh, and Loinaz. "A general purpose analog neural computer." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118696.

5

Gavrovska, Ana M., Milorad P. Paskas, Irini S. Reljin, and Branimir D. Reljin. "On variance based methods in computer-aided phonocardiography." In 2014 12th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2014). IEEE, 2014. http://dx.doi.org/10.1109/neurel.2014.7011445.

6

Cao, Wenming, Hao Feng, Lijun Ding, Jianhua Wang, and Shoujue Wang. "Neuron Model and its Realization for Semiconductor Neural Computer." In 2009 International Conference on Artificial Intelligence and Computational Intelligence. IEEE, 2009. http://dx.doi.org/10.1109/aici.2009.95.

7

Pham, Nam, Hao Yu, and Bogdan M. Wilamowski. "Neural Network Trainer through Computer Networks." In 2010 24th IEEE International Conference on Advanced Information Networking and Applications. IEEE, 2010. http://dx.doi.org/10.1109/aina.2010.169.

8

McVey, E. S., R. M. Inigo, J. Minnix, J. Sigda, and Z. Rahman. "Artificial Neural Computer For Image Tracking." In OE/LASE '89, edited by Keith Bromley. SPIE, 1989. http://dx.doi.org/10.1117/12.951678.

9

Saksena, Radhika S. "Neural Network Approach in Computer Simulations." In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632164.

10

Wang, D., and B. Schurmann. "Computer aided investigations of artificial neural systems." In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170735.


Reports on the topic "Neural computer"

1

Corticon Inc., Philadelphia, PA. Assembly of a Neural Analog Computer. Phase 2. Fort Belvoir, VA: Defense Technical Information Center, February 1994. http://dx.doi.org/10.21236/ada278192.

2

Corticon Inc., Philadelphia, PA. Assembly of a Prototype Neural Analog Computer. Phase 1. Fort Belvoir, VA: Defense Technical Information Center, October 1990. http://dx.doi.org/10.21236/ada278079.

3

Huang, Z., J. Shimeld, and M. Williamson. Application of computer neural network and fuzzy set logic to petroleum geology, offshore eastern Canada. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1994. http://dx.doi.org/10.4095/194121.

4

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter-dependent unitary transformations which act on an input quantum state. For binary classification, a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network's predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example, we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near-term gate model quantum computer, where its power can be explored beyond what can be explored with simulation.
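The pipeline in this abstract can be mimicked classically at tiny scale. A state-vector sketch under assumptions chosen here for illustration (one data qubit, one readout qubit, a single parameterized unitary U(theta) = exp(-i*theta*kron(Z, X)), and the Pauli-Y expectation on the readout qubit as the label predictor; not Farhi and Neven's circuits):

    import numpy as np

    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    ZX = np.kron(Z, X)  # generator acting on data (x) readout qubits

    def predictor(theta: float, data_bit: int) -> float:
        state = np.zeros(4, dtype=complex)
        state[2 * data_bit] = 1.0  # input state |data_bit> (x) |0>
        # ZX squares to the identity, so exp(-i*t*ZX) = cos(t)*I - i*sin(t)*ZX.
        evolved = np.cos(theta) * state - 1j * np.sin(theta) * (ZX @ state)
        readout = np.kron(np.eye(2), Y)  # measure Pauli Y on the readout qubit
        return float(np.real(evolved.conj() @ readout @ evolved))

    theta = np.pi / 4  # a parameter value that separates the two labels
    for bit in (0, 1):
        print(bit, "->", round(predictor(theta, bit), 3))  # 0 -> -1.0, 1 -> 1.0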
5

Middlebrooks, Sam E., John P. Jones, and Patrick H. Henry. The Compass Paradigm for the Systematic Evaluation of U.S. Army Command and Control Systems Using Neural Network and Discrete Event Computer Simulation. Fort Belvoir, VA: Defense Technical Information Center, November 2005. http://dx.doi.org/10.21236/ada450646.

6

Shea, Thomas B. Optimization of Neuronal-Computer Interface. Fort Belvoir, VA: Defense Technical Information Center, June 2009. http://dx.doi.org/10.21236/ada515409.

7

Raychev, Nikolay. Can human thoughts be encoded, decoded and manipulated to achieve symbiosis of the brain and the machine. Web of Open Science, October 2020. http://dx.doi.org/10.37686/nsrl.v1i2.76.

Abstract:
This article discusses the current state of neurointerface technologies, not limited to deep-electrode approaches. There are new heuristic ideas for creating a fast and broadband channel from the brain to artificial intelligence. One of the ideas is not to decipher the natural codes of nerve cells, but to create conditions for the development of a new language for communication between the human brain and artificial intelligence tools. Theoretically, this is possible if the brain "feels" that, by changing the activity of the nerve cells that communicate with the computer, it can "achieve" the actions the body needs in the external environment, for example, taking a cup of coffee or turning on your favorite music. At the same time, an artificial neural network that analyzes the flow of nerve impulses must also be directed at the brain, trying to guess the body's needs at the moment with a minimum number of movements. The most important obstacle to further progress is the problem of biocompatibility, which has not yet been resolved; this is even more important than the number of electrodes or the power of the processors on the chip. When a foreign object is inserted into the brain, the brain tries to isolate itself from it. This is a multidisciplinary topic not only for doctors and psychophysiologists, but also for engineers, programmers, and mathematicians. Of course, the problem is complex, and it will be possible to overcome it only with joint efforts.
8

Tam, David C. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles. Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada257221.

9

Tam, David C. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada230477.
