Books on the topic "Recurrent Neural Network architecture"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Check out the 31 best academic books on the topic "Recurrent Neural Network architecture".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, when these details are available in the metadata.

Browse books on a wide variety of disciplines and compile an accurate bibliography.

1

Dayhoff, Judith E. Neural network architectures: An introduction. New York, N.Y.: Van Nostrand Reinhold, 1990.

2

Leondes, Cornelius T., ed. Neural network systems, techniques, and applications. San Diego: Academic Press, 1998.

3

Jain, L. C., and R. P. Johnson, eds. Automatic generation of neural network architecture using evolutionary computation. Singapore: World Scientific, 1997.

4

Cios, Krzysztof J. Self-growing neural network architecture using crisp and fuzzy entropy. [Washington, DC]: National Aeronautics and Space Administration, 1992.

5

Cios, Krzysztof J. Self-growing neural network architecture using crisp and fuzzy entropy. [Washington, DC]: National Aeronautics and Space Administration, 1992.

6

Cios, Krzysztof J. Self-growing neural network architecture using crisp and fuzzy entropy. [Washington, DC]: National Aeronautics and Space Administration, 1992.

7

Cios, Krzysztof J. Self-growing neural network architecture using crisp and fuzzy entropy. [Washington, DC]: National Aeronautics and Space Administration, 1992.

8

United States. National Aeronautics and Space Administration, ed. A neural network architecture for implementation of expert systems for real time monitoring. [Cincinnati, Ohio]: University of Cincinnati, College of Engineering, 1991.

9

Lim, Chee Peng. Probabilistic fuzzy ARTMAP: An autonomous neural network architecture for Bayesian probability estimation. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1995.

10

United States. National Aeronautics and Space Administration, ed. A novel approach to noise-filtering based on a gain-scheduling neural network architecture. [Washington, DC]: National Aeronautics and Space Administration, 1994.

11

Lim, Chee Peng. A Multiple neural network architecture for sequential evidence aggregation and incomplete data classification. Sheffield: University of Sheffield, Dept. of Automatic Control and Systems Engineering, 1997.

12

Salem, Fathi M. Recurrent Neural Networks: From Simple to Gated Architectures. Springer International Publishing AG, 2021.

13

Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley, 2001.

14

Mandic, Danilo P., and Jonathon A. Chambers. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. John Wiley & Sons, Incorporated, 2003.

15

Mandic, Danilo P., and Jonathon A. Chambers. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. John Wiley & Sons, Incorporated, 2002.

16

Magic, John, and Mark Magic. Action Recognition Using Python and Recurrent Neural Network. Independently Published, 2019.

17

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks (Network Theory and Applications). Springer, 2003.

18

SpiNNaker: A Spiking Neural Network Architecture. now publishers, Inc., 2020. http://dx.doi.org/10.1561/9781680836523.

19

SpiNNaker - a Spiking Neural Network Architecture. Now Publishers, 2020.

20

Neural Network Architectures: An Introduction. Van Nostrand Reinhold, 1989.

21

Magic, John, and Mark Magic. Action Recognition: Step-By-step Recognizing Actions with Python and Recurrent Neural Network. Independently Published, 2019.

22

Shan, Yunting, John Magic, and Mark Magic. Action Recognition: Step-By-step Recognizing Actions with Python and Recurrent Neural Network. Independently Published, 2019.

23

Hinton, Geoffrey E. Neural network architectures for artificial intelligence (Tutorial). American Association for Artificial Intelligence, 1988.

24

Chiang, Chin. The architecture and design of a neural network classifier. 1990.

25

Ho, Ki-Cheong. Optimisation of neural network architecture for modelling and control. 1998.

26

Kane, Andrew J. An instruction systolic array architecture for multiple neural network types. 1998.

27

A novel approach to noise-filtering based on a gain-scheduling neural network architecture. [Washington, DC]: National Aeronautics and Space Administration, 1994.

28

Parallel Implementation of an Artificial Neural Network Integrated Feature and Architecture Selection Algorithm. Storming Media, 1998.

29

Mitchell, Laura, Vishnu Subramanian, and Sri Yogesh K. Deep Learning with PyTorch 1.x: Implement Deep Learning Techniques and Neural Network Architecture Variants Using Python, 2nd Edition. Packt Publishing, Limited, 2019.

30

Fletcher, Justin Barrows Swore. A constructive approach to hybrid architectures for machine learning. 1994.

31

Thagard, Paul. Brain-Mind. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190678715.001.0001.

Abstract:
Minds enable people to perceive, imagine, solve problems, understand, learn, speak, reason, create, and be emotional and conscious. Competing explanations of how the mind works have identified it as soul, computer, brain, dynamical system, or social construction. This book explains minds in terms of interacting mechanisms operating at multiple levels, including the social, mental, neural, and molecular. Brain–Mind presents a unified, brain-based theory of cognition and emotion with applications to the most complex kinds of thinking, right up to consciousness and creativity. Unification comes from systematic application of Chris Eliasmith’s powerful new Semantic Pointer Architecture, a highly original synthesis of neural network and symbolic ideas about how the mind works. The book shows the relevance of semantic pointers to a full range of important kinds of mental representations, from sensations and imagery to concepts, rules, analogies, and emotions. Neural mechanisms are used to explain many phenomena concerning consciousness, action, intention, language, creativity, and the self. This book belongs to a trio that includes Mind–Society: From Brains to Social Sciences and Professions and Natural Philosophy: From Social Brains to Knowledge, Reality, Morality, and Beauty. They can be read independently, but together they make up a Treatise on Mind and Society that provides a unified and comprehensive treatment of the cognitive sciences, social sciences, professions, and humanities.