A selection of scholarly literature on the topic "Neural computer"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Neural computer".

Next to every entry in the bibliography you will find the option "Add to bibliography". Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are available in its metadata.

Journal articles on the topic "Neural computer"

1

Nakajima, K., Y. Mizugaki, T. Yamashita, and Y. Sawada. "Superconducting neural computer". Applied Superconductivity 1, no. 10-12 (October 1993): 1893–905. http://dx.doi.org/10.1016/0964-1807(93)90337-2.

2

Păun, Gheorghe, Mario J. Pérez-Jiménez, and Grzegorz Rozenberg. "Computing Morphisms by Spiking Neural P Systems". International Journal of Foundations of Computer Science 18, no. 06 (December 2007): 1371–82. http://dx.doi.org/10.1142/s0129054107005418.

Abstract:
We continue the study of the spiking neural P systems considered as transducers of binary strings or binary infinite sequences, and we investigate their ability to compute morphisms. The class of computed morphisms is rather restricted: length preserving or erasing, and the so-called 2-block morphisms can be computed; however, non-erasing non-length-preserving morphisms cannot be computed.
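To make the distinction concrete, here is a plain-Python sketch, an illustration only and not a spiking neural P system implementation, of the two computable cases named in the abstract: a length-preserving morphism and an erasing one. The function `apply_morphism` and the example mappings are assumptions made for illustration.

```python
# Hypothetical illustration of morphisms on binary strings. A morphism h is
# determined by the images of the symbols; it is length preserving when every
# image has length 1, and erasing when some image is the empty string.

def apply_morphism(word, h):
    """Apply a morphism h (dict from symbol to image string) to a word."""
    return "".join(h[c] for c in word)

length_preserving = {"0": "1", "1": "0"}   # |h(a)| = 1 for every symbol a
erasing = {"0": "", "1": "1"}              # h erases the symbol 0 entirely

print(apply_morphism("0110", length_preserving))  # -> 1001
print(apply_morphism("0110", erasing))            # -> 11
```

A non-erasing, non-length-preserving morphism would map some symbol to a string of length two or more; by the result above, such mappings lie outside what these transducers compute.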
3

Ziegel, Eric R. "Neural Networks in Computer Intelligence". Technometrics 37, no. 4 (November 1995): 470. http://dx.doi.org/10.1080/00401706.1995.10484401.

4

de Oliveira, P. M. C., Harvey Gould, and Jan Tobochnik. "Computer Simulations of Neural Networks". Computers in Physics 11, no. 5 (1997): 443. http://dx.doi.org/10.1063/1.4822587.

5

Lirov, Yuval. "Computer aided neural network engineering". Neural Networks 5, no. 4 (July 1992): 711–19. http://dx.doi.org/10.1016/s0893-6080(05)80047-4.

6

Akamatsu, Norio, Yoshihiro Nakamura, and Tohru Kawabe. "Neural computer using electromagnetic coupling". Systems and Computers in Japan 23, no. 8 (1992): 85–96. http://dx.doi.org/10.1002/scj.4690230809.

7

Selviah, David R. "Neural computer uses optical fingers". Physics World 7, no. 7 (July 1994): 29–30. http://dx.doi.org/10.1088/2058-7058/7/7/29.

8

Howlett, R. J., and S. D. Walters. "Multi-computer neural network architecture". Electronics Letters 35, no. 16 (1999): 1350. http://dx.doi.org/10.1049/el:19990962.

9

Schöneburg, E. "Neural networks hunt computer viruses". Neurocomputing 2, no. 5-6 (July 1991): 243–48. http://dx.doi.org/10.1016/0925-2312(91)90027-9.

10

Kulkarni, Arun D. "Computer Vision and Fuzzy-Neural Systems". Journal of Electronic Imaging 13, no. 1 (January 1, 2004): 251. http://dx.doi.org/10.1117/1.1640620.


Dissertations on the topic "Neural computer"

1

Somers, Harriet. "A neural computer". Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362021.

2

Churcher, Stephen. "VLSI neural networks for computer vision". Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/13397.

Abstract:
Recent years have seen the rise to prominence of a powerful new computational paradigm - the so-called artificial neural network. Loosely based on the microstructure of the central nervous system, neural networks are massively parallel arrangements of simple processing elements (neurons) which communicate with each other through variable strength connections (synapses). The simplicity of such a description belies the complexity of calculations which neural networks are able to perform. Allied to this, the emergent properties of noise resistance, fault tolerance, and large data bandwidths (all arising from the parallel architecture) mean that neural networks, when appropriately implemented, represent a powerful tool for solving many problems which require the processing of real-world data. A computer vision task (viz. the classification of regions in images of segmented natural scenes) is presented, as a problem in which large numbers of data need to be processed quickly and accurately, whilst, in certain circumstances, being disambiguated. Of the classifiers tried, the neural network (a multi-layer perceptron) was found to provide the best overall solution, to the task of distinguishing between regions which were 'roads', and those which were 'not roads'. In order that best use might be made of the parallel processing abilities of neural networks, a variety of special purpose hardware implementations are discussed, before two different analogue VLSI designs are presented, complete with characterisation and test results. The latter of these chips (the EPSILON device) is used as the basis for a practical neuro-computing system. The results of experimentation with different applications are presented. Comparisons with computer simulations demonstrate the accuracy of the chips, and their ability to support learning algorithms, thereby proving the viability of the use of pulsed analogue VLSI techniques for the implementation of artificial neural networks.
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights". Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Abstract:
The conventional multilayer feedforward network having continuous-weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3-bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables. 
Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight networks, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms, the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes, and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to that of their conventional counterparts.
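The multiplier-free configuration described in this abstract lends itself to a short sketch. The following is an assumed illustration, not the thesis code: a forward pass for a layer whose synapses are restricted to {-1, 0, 1} with unrestricted offsets, so that every "multiplication" collapses to an addition, a single sign change, or a skip.

```python
# Sketch (assumed): forward pass of a multiplier-free layer with ternary
# synapses in {-1, 0, 1} and unrestricted offsets (biases). No multiply
# instruction is ever needed for the synaptic weights.

def ternary_layer(x, weights, offsets):
    """weights: rows over {-1, 0, 1}; offsets: unrestricted real biases."""
    out = []
    for row, b in zip(weights, offsets):
        acc = b
        for xi, w in zip(x, row):
            if w == 1:
                acc += xi          # +x: a plain addition
            elif w == -1:
                acc -= xi          # -x: a single sign change
            # w == 0: synapse absent, nothing to accumulate
        out.append(acc)
    return out

print(ternary_layer([0.5, -2.0, 1.0],
                    [[1, -1, 0], [0, 1, 1]],
                    [0.1, -0.2]))   # approximately [2.6, -1.2]
```

The integer-weight variant in the abstract generalizes this idea: weights in [-3, 3] still need at most one shift, one add, and one sign change per synapse.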
4

Kulakov, Anton. "Multiprocessing neural network simulator". Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/348420/.

Abstract:
Over the last few years tremendous progress has been made in neuroscience by employing simulation tools for investigating neural network behaviour. Many simulators have been created during last few decades, and their number and set of features continually grows due to persistent interest from groups of researchers and engineers. A simulation software that is able to simulate a large-scale neural network has been developed and presented in this work. Based on a highly abstract integrate-and-fire neuron model a clock-driven sequential simulator has been developed in C++. The created program is able to associate the input patterns with the output patterns. The novel biologically plausible learning mechanism uses Long Term Potentiation and Long Term Depression to change the strength of the connections between the neurons based on a global binary feedback. Later, the sequentially executed model has been extended to a multi-processor system, which executes the described learning algorithm using the event-driven technique on a parallel distributed framework, simulating a neural network asynchronously. This allows the simulation to manage larger scale neural networks being immune to processor failure and communication problems. The multi-processor neural network simulator has been created, the main benefit of which is the possibility to simulate large scale neural networks using high-parallel distributed computing. For that reason the design of the simulator has been implemented considering an efficient weight-adjusting algorithm and an efficient way for asynchronous local communication between processors.
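The clock-driven scheme mentioned in this abstract can be sketched in a few lines. This is an assumed minimal illustration, not the thesis's C++ simulator: at every tick the membrane potential leaks, integrates its input, and emits a spike when it crosses threshold.

```python
# Minimal clock-driven leaky integrate-and-fire sketch (assumed illustration).
# Each time step: leak the membrane potential, add input, fire on threshold.

def simulate_lif(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times of one LIF neuron driven by per-step input values."""
    v, spikes = v_reset, []
    for step, i_in in enumerate(current):
        v += dt * (-v / tau + i_in)     # leaky integration over one tick
        if v >= v_thresh:               # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                 # reset after firing
    return spikes

print(simulate_lif([0.3] * 20))   # -> [3.0, 7.0, 11.0, 15.0, 19.0]
```

An event-driven simulator, as used in the parallel extension described above, instead advances each neuron only when a spike event reaches it, which is what makes asynchronous distributed execution practical.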
5

Durrant, Simon. "Negative correlation in neural systems". Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2387/.

Abstract:
In our attempt to understand neural systems, it is useful to identify statistical principles that may be beneficial in neural information processing, outline how these principles may work in theory, and demonstrate the benefits through computational modelling and simulation. Negative correlation is one such principle, and is the subject of this work. The main body of the work falls into three parts. The first part demonstrates the space filling and accelerated central limit convergence benefits of negative correlation, both generally and in the specific neural context of V1 receptive fields. I outline two new algorithms combining traditional ICA with a correlation objective function. Correlated component analysis seeks components with a given correlation matrix, while correlated basis analysis seeks basis functions with a given correlation matrix. The benefits of recovering components and basis functions with negative correlations are shown. The second part looks at the functional role of negative correlation for integrate-and-fire neurons in the context of suprathreshold stochastic resonance, for neurons receiving Poisson inputs modelled by a diffusion approximation. I show how the SSR effect can be seen in networks of spiking neurons, and further show how correlation can be used to control the noise level, and that optimal information transmission occurs for negatively correlated inputs when parameters take biophysically plausible values. The final part examines the question of how negative correlation may be implemented in the context of small networks of spiking neurons. Networks of integrate-and-fire neurons with and without lateral inhibitory connections are tested, and the networks with the inhibitory connections are found to perform better and show negatively correlated firing patterns. This result is extended to more biophysically detailed neuron and synapse models, highlighting the robust nature of the mechanism.
Finally, the mechanism is explained as a threshold-unit approximation to non-threshold maximum likelihood signal/noise decomposition.
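The basic statistical benefit invoked in this abstract can be illustrated numerically. The sketch below is an assumed demonstration, not from the thesis: it compares the variance of a sum of two independent unit-variance inputs with a sum of two inputs correlated at -0.8, showing how negative correlation cancels variability.

```python
# Assumed numerical illustration: negatively correlated inputs cancel noise,
# so their sum has lower variance than the sum of independent inputs with
# the same unit-variance marginals.
import random

random.seed(0)
n = 100000
indep = [random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]
anti = []
for _ in range(n):
    x = random.gauss(0, 1)
    # y = -0.8*x + noise has unit variance and corr(x, y) = -0.8
    anti.append(x + (-0.8 * x + random.gauss(0, 0.6)))

def var(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

print(var(indep))  # ~2.0: independent variances simply add
print(var(anti))   # ~0.4: Var(x+y) = 1 + 1 + 2*(-0.8)
```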
6

Baker, Thomas Edward. "Implementation limits for artificial neural networks". Full text open access at:, 1990. http://content.ohsu.edu/u?/etd,268.

7

Lam, Yiu Man. "Self-organized cortical map formation by guiding connections /". View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20LAM.

8

Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity". Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Abstract:
Artificial Neural Networks (ANN) are biologically inspired algorithms, and it is natural that it continues to inspire research in artificial neural networks. From the recent breakthrough of deep learning to the wake-sleep training routine, all have a common source of drawing inspiration: biology. The transfer functions of artificial neural networks play the important role of forming decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimization compared to other aspects of neural network optimization. In this work, neuronal diversity - a property found in biological neural networks- is explored as a potentially promising method of transfer function optimization. This work shows how neural diversity can improve generalization in the context of literature from the bias-variance decomposition and meta-learning. It then demonstrates that neural diversity - represented in the form of transfer function diversity- can exhibit diverse and accurate computational strategies that can be used as ensembles with competitive results without supplementing it with other diversity maintenance schemes that tend to be computationally expensive. This work also presents neural network meta-features described as problem signatures sampled from models with diverse transfer functions for problem characterization. This was shown to meet the criteria of basic properties desired for any meta-feature, i.e. consistency for a problem and discriminatory for different problems. Furthermore, these meta-features were also used to study the underlying computational strategies adopted by the neural network models, which lead to the discovery of the strong discriminatory property of the evolved transfer function. The culmination of this study is the co-evolution of neurally diverse neurons with their weights and topology for efficient learning. 
It is shown to achieve significant generalization ability as demonstrated by its average MSE of 0.30 on 22 different benchmarks with minimal resources (i.e. two hidden units). Interestingly, these are the properties associated with neural diversity. Thus, showing the properties of efficiency and increased computational capacity could be replicated with transfer function diversity in artificial neural networks.
9

McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model". Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Abstract:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
10

Yang, Horng-Chang. "Multiresolution neural networks for image edge detection and restoration". Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66740/.

Abstract:
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine and they have been applied to image processing and computer vision. This work focused on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy minimization framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade off between position and class resolution and ensures both robustness in noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularize these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularization of these ill-posed image processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge detection results are presented for a number of synthetic and natural images which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network. The results of the edge detection are used for adaptive restoration. Also shown are the results of restoration using the proposed iterative network framework.

Books on the topic "Neural computer"

1

Eckmiller, Rolf, Christoph von der Malsburg, and North Atlantic Treaty Organization Scientific Affairs Division, eds. Neural computers. Berlin: Springer-Verlag, 1988.

2

Zhang, Yunong. Zhang neural networks and neural-dynamic method. Hauppauge, N.Y.: Nova Science Publishers, 2009.

3

Neural networks in computer intelligence. New York: McGraw-Hill, 1994.

4

Caudill, Maureen. Understanding neural networks: Computer explorations. Cambridge, Mass.: MIT Press, 1993.

5

Fu, LiMin. Neural networks in computer intelligence. New York: McGraw-Hill, 1994.

6

Valentin, Dominique, and Betty Edelman, eds. Neural networks. Thousand Oaks, Calif.: Sage Publications, 1999.

7

Neural networks. New York: Palgrave, 2000.

8

Hoffmann, Norbert. Simulating neural networks. Wiesbaden: Vieweg, 1994.

9

Neural network parallel computing. Boston: Kluwer Academic Publishers, 1992.

10

Zhang, Yi, and Jiliu Zhou, eds. Subspace learning of neural networks. Boca Raton: CRC Press, 2011.


Book chapters on the topic "Neural computer"

1

He, Bin, Han Yuan, Jianjun Meng, and Shangkai Gao. "Brain–Computer Interfaces". In Neural Engineering, 131–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43395-6_4.

2

He, Bin, Shangkai Gao, Han Yuan, and Jonathan R. Wolpaw. "Brain–Computer Interfaces". In Neural Engineering, 87–151. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-5227-0_2.

3

Zuse, Konrad. "Faust, Mephistopheles and Computer". In Neural Computers, 9–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-83740-1_2.

4

Nakayama, Hideki. "Recurrent Neural Network". In Computer Vision, 1–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-03243-2_855-1.

5

Wen, Wei, Hanxiao Liu, Yiran Chen, Hai Li, Gabriel Bender, and Pieter-Jan Kindermans. "Neural Predictor for Neural Architecture Search". In Computer Vision – ECCV 2020, 660–76. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58526-6_39.

6

ElAarag, Hala. "Neural Networks". In SpringerBriefs in Computer Science, 11–16. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4893-7_3.

7

Ryou, Wonryong, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, and Martin Vechev. "Scalable Polyhedral Verification of Recurrent Neural Networks". In Computer Aided Verification, 225–48. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_10.

Abstract:
We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient descent based algorithm for abstraction refinement guided by the certification problem that combines multiple abstractions for each neuron. Using Prover, we present the first study of certifying a non-trivial use case of recurrent neural networks, namely speech classification. To achieve this, we additionally develop custom abstractions for the non-linear speech preprocessing pipeline. Our evaluation shows that Prover successfully verifies several challenging recurrent models in computer vision, speech, and motion sensor data classification beyond the reach of prior work.
8

Chirimuuta, Mazviita. "Your Brain Is Like a Computer: Function, Analogy, Simplification". In Neural Mechanisms, 235–61. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54092-0_11.

9

Feldman, Jerome A. "Structured Neural Networks in Nature and in Computer Science". In Neural Computers, 17–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-83740-1_3.

10

Moore Jackson, Melody, and Rudolph Mappus. "Neural Control Interfaces". In Brain-Computer Interfaces, 21–33. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-272-8_2.


Conference papers on the topic "Neural computer"

1

Pintea, Florentina A., Tiberiu M. Karnyanszky, and Simona A. Apostol. "Improved computer aided TDT analysis". In 2016 13th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2016. http://dx.doi.org/10.1109/neurel.2016.7800103.

2

Lacrama, Dan L., Florentina A. Pintea, Tiberiu M. Karnyanszky, and Diana S. Codat. "Computer aided analysis of projective tests". In 2014 12th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2014). IEEE, 2014. http://dx.doi.org/10.1109/neurel.2014.7011481.

3

Gupta. "Fuzzy neural networks in computer vision". In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118460.

4

Mueller, Van der Spiegel, Blackman, Chiu, Clare, Dao, Donham, Hsieh, and Loinaz. "A general purpose analog neural computer". In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118696.

5

Gavrovska, Ana M., Milorad P. Paskas, Irini S. Reljin, and Branimir D. Reljin. "On variance based methods in computer-aided phonocardiography". In 2014 12th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2014). IEEE, 2014. http://dx.doi.org/10.1109/neurel.2014.7011445.

6

Cao, Wenming, Hao Feng, Lijun Ding, Jianhua Wang, and Shoujue Wang. "Neuron Model and its Realization for Semiconductor Neural Computer". In 2009 International Conference on Artificial Intelligence and Computational Intelligence. IEEE, 2009. http://dx.doi.org/10.1109/aici.2009.95.

7

Pham, Nam, Hao Yu, and Bogdan M. Wilamowski. "Neural Network Trainer through Computer Networks". In 2010 24th IEEE International Conference on Advanced Information Networking and Applications. IEEE, 2010. http://dx.doi.org/10.1109/aina.2010.169.

8

McVey, E. S., R. M. Inigo, J. Minnix, J. Sigda, and Z. Rahman. "Artificial Neural Computer For Image Tracking". In OE/LASE '89, edited by Keith Bromley. SPIE, 1989. http://dx.doi.org/10.1117/12.951678.

9

Saksena, Radhika S. "Neural Network Approach in Computer Simulations". In THE MONTE CARLO METHOD IN THE PHYSICAL SCIENCES: Celebrating the 50th Anniversary of the Metropolis Algorithm. AIP, 2003. http://dx.doi.org/10.1063/1.1632164.

10

Wang, D., and B. Schurmann. "Computer aided investigations of artificial neural systems". In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170735.


Organizational reports on the topic "Neural computer"

1

CORTICON INC PHILADELPHIA PA. Assembly of a Neural Analog Computer. Phase 2. Fort Belvoir, VA: Defense Technical Information Center, February 1994. http://dx.doi.org/10.21236/ada278192.

2

CORTICON INC PHILADELPHIA PA. Assembly of a Prototype Neural Analog Computer. Phase 1. Fort Belvoir, VA: Defense Technical Information Center, October 1990. http://dx.doi.org/10.21236/ada278079.

3

Huang, Z., J. Shimeld, and M. Williamson. Application of computer neural network, and fuzzy set logic to petroleum geology, offshore eastern Canada. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1994. http://dx.doi.org/10.4095/194121.

4

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network’s predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near term gate model quantum computer where its power can be explored beyond what can be explored with simulation.
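The circuit structure described in this abstract can be mimicked classically at toy scale. Below is an assumed one-qubit illustration, not the authors' code: a single parameterized rotation acts on |0>, and the Pauli-Z expectation of the readout qubit serves as the binary-label predictor.

```python
# Assumed toy illustration of the QNN readout idea: apply Ry(theta) to |0>
# and use the Pauli-Z expectation of the resulting state as the predictor.
import math

def predict(theta):
    """<Z> after Ry(theta)|0>; the sign gives the predicted binary label."""
    # Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    a, b = math.cos(theta / 2), math.sin(theta / 2)
    return a * a - b * b        # <Z> = |amp0|^2 - |amp1|^2 = cos(theta)

print(predict(0.0))        # -> 1.0  (label +1)
print(predict(math.pi))    # close to -1.0  (label -1)
```

Training the real QNN amounts to adjusting the rotation parameters so that this expectation matches the labels of the training states; on hardware the expectation is estimated from repeated measurements rather than computed exactly.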
5

Middlebrooks, Sam E., John P. Jones, and Patrick H. Henry. The Compass Paradigm for the Systematic Evaluation of U.S. Army Command and Control Systems Using Neural Network and Discrete Event Computer Simulation. Fort Belvoir, VA: Defense Technical Information Center, November 2005. http://dx.doi.org/10.21236/ada450646.

6

Shea, Thomas B. Optimization of Neuronal-Computer Interface. Fort Belvoir, VA: Defense Technical Information Center, June 2009. http://dx.doi.org/10.21236/ada515409.

7

Raychev, Nikolay. Can human thoughts be encoded, decoded and manipulated to achieve symbiosis of the brain and the machine. Web of Open Science, October 2020. http://dx.doi.org/10.37686/nsrl.v1i2.76.

Abstract:
This article discusses the current state of neurointerface technologies, not limited to deep electrode approaches. There are new heuristic ideas for creating a fast and broadband channel from the brain to artificial intelligence. One of the ideas is not to decipher the natural codes of nerve cells, but to create conditions for the development of a new language for communication between the human brain and artificial intelligence tools. Theoretically, this is possible if the brain "feels" that by changing the activity of nerve cells that communicate with the computer, it is possible to "achieve" the necessary actions for the body in the external environment, for example, to take a cup of coffee or turn on your favorite music. At the same time, an artificial neural network that analyzes the flow of nerve impulses must also be directed at the brain, trying to guess the body's needs at the moment with a minimum number of movements. The most important obstacle to further progress is the problem of biocompatibility, which has not yet been resolved. This is even more important than the number of electrodes and the power of the processors on the chip. When you insert a foreign object into your brain, it tries to isolate itself from it. This is a multidisciplinary topic not only for doctors and psychophysiologists, but also for engineers, programmers, mathematicians. Of course, the problem is complex and it will be possible to overcome it only with joint efforts.
8

Tam, David C. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles. Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada257221.

9

Tam, David C. A Study of Neuronal Properties, Synaptic Plasticity and Network Interactions Using a Computer Reconstituted Neuronal Network Derived from Fundamental Biophysical Principles. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada230477.
