Books on the topic 'Recurrent Neural Network architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 28 books for your research on the topic 'Recurrent Neural Network architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse books on a wide variety of disciplines and organise your bibliography correctly.

1. Dayhoff, Judith E. Neural network architectures: An introduction. New York, N.Y.: Van Nostrand Reinhold, 1990.
2

T, Leondes Cornelius, ed. Neural network systems, techniques, and applications. San Diego: Academic Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

C, Jain L., and Johnson R. P, eds. Automatic generation of neural network architecture using evolutionary computation. Singapore: World Scientific, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4. Cios, Krzysztof J. Self-growing neural network architecture using crisp and fuzzy entropy. [Washington, DC]: National Aeronautics and Space Administration, 1992.
5. United States. National Aeronautics and Space Administration, ed. A neural network architecture for implementation of expert systems for real time monitoring. [Cincinnati, Ohio]: University of Cincinnati, College of Engineering, 1991.
6. Lim, Chee Peng. Probabilistic fuzzy ARTMAP: An autonomous neural network architecture for Bayesian probability estimation. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1995.
7. United States. National Aeronautics and Space Administration, ed. A novel approach to noise-filtering based on a gain-scheduling neural network architecture. [Washington, DC]: National Aeronautics and Space Administration, 1994.
8. Lim, Chee Peng. A multiple neural network architecture for sequential evidence aggregation and incomplete data classification. Sheffield: University of Sheffield, Dept. of Automatic Control and Systems Engineering, 1997.
9. Salem, Fathi M. Recurrent Neural Networks: From Simple to Gated Architectures. Springer International Publishing AG, 2021.
10. Mandic, Danilo P., and Jonathon A. Chambers. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley, 2001.
11. Mandic, Danilo P., and Jonathon A. Chambers. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. John Wiley & Sons, 2003.
12. Mandic, Danilo P., and Jonathon A. Chambers. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. John Wiley & Sons, 2002.
13. Magic, John, and Mark Magic. Action Recognition Using Python and Recurrent Neural Network. Independently Published, 2019.
14. Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks (Network Theory and Applications). Springer, 2003.
15. SpiNNaker: A Spiking Neural Network Architecture. Now Publishers, 2020. http://dx.doi.org/10.1561/9781680836523.
16. SpiNNaker - a Spiking Neural Network Architecture. Now Publishers, 2020.
17. Neural Network Architectures: An Introduction. Van Nostrand Reinhold, 1989.
18. Magic, John, and Mark Magic. Action Recognition: Step-By-step Recognizing Actions with Python and Recurrent Neural Network. Independently Published, 2019.
19. Shan, Yunting, John Magic, and Mark Magic. Action Recognition: Step-By-step Recognizing Actions with Python and Recurrent Neural Network. Independently Published, 2019.
20. Hinton, Geoffrey E. Neural network architectures for artificial intelligence (Tutorial). American Association for Artificial Intelligence, 1988.
21. Chiang, Chin. The architecture and design of a neural network classifier. 1990.
22. Ho, Ki-Cheong. Optimisation of neural network architecture for modelling and control. 1998.
23. Kane, Andrew J. An instruction systolic array architecture for multiple neural network types. 1998.
24. A novel approach to noise-filtering based on a gain-scheduling neural network architecture. [Washington, DC]: National Aeronautics and Space Administration, 1994.
25. Parallel Implementation of an Artificial Neural Network Integrated Feature and Architecture Selection Algorithm. Storming Media, 1998.
26. Mitchell, Laura, Vishnu Subramanian, and Sri Yogesh K. Deep Learning with PyTorch 1.x: Implement Deep Learning Techniques and Neural Network Architecture Variants Using Python, 2nd Edition. Packt Publishing Limited, 2019.
27. Fletcher, Justin Barrows Swore. A constructive approach to hybrid architectures for machine learning. 1994.
28. Thagard, Paul. Brain-Mind. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190678715.001.0001.

Abstract:
Minds enable people to perceive, imagine, solve problems, understand, learn, speak, reason, create, and be emotional and conscious. Competing explanations of how the mind works have identified it as soul, computer, brain, dynamical system, or social construction. This book explains minds in terms of interacting mechanisms operating at multiple levels, including the social, mental, neural, and molecular. Brain–Mind presents a unified, brain-based theory of cognition and emotion with applications to the most complex kinds of thinking, right up to consciousness and creativity. Unification comes from systematic application of Chris Eliasmith’s powerful new Semantic Pointer Architecture, a highly original synthesis of neural network and symbolic ideas about how the mind works. The book shows the relevance of semantic pointers to a full range of important kinds of mental representations, from sensations and imagery to concepts, rules, analogies, and emotions. Neural mechanisms are used to explain many phenomena concerning consciousness, action, intention, language, creativity, and the self. This book belongs to a trio that includes Mind–Society: From Brains to Social Sciences and Professions and Natural Philosophy: From Social Brains to Knowledge, Reality, Morality, and Beauty. They can be read independently, but together they make up a Treatise on Mind and Society that provides a unified and comprehensive treatment of the cognitive sciences, social sciences, professions, and humanities.