
Journal articles on the topic "Neural networks"

See the top 50 journal articles for studies on the topic "Neural networks".

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach". Indian Journal Of Science And Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

2

N, Vikram. "Artificial Neural Networks". International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

3

Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks". Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

4

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks". 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures. Finally, we compile the applications of capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
5

Perfetti, R. "A neural network to design neural networks". IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

6

Veselý, A. "Neural networks in data mining". Agricultural Economics (Zemědělská ekonomika) 49, No. 9 (March 2, 2012): 427–31. http://dx.doi.org/10.17221/5427-agricecon.

Abstract:
To possess relevant information is an inevitable condition for successful enterprising in modern business. Information can be divided into data and knowledge. How to gather, store and retrieve data is studied in database theory. Knowledge engineering centres on knowledge and studies methods of its formalization and acquisition. Knowledge can be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Classical methods of gaining knowledge from data sets are statistical methods. In data mining, new methods besides the statistical ones are used. These new methods have their origin in artificial intelligence. They look for unknown and unexpected relations, which can be uncovered by exploring the data in a database. In the article, the utilization of modern methods of data mining is described, and especially the methods based on neural networks theory are pursued. The advantages and drawbacks of applications of multilayer feedforward neural networks and Kohonen's self-organizing maps are discussed. Kohonen's self-organizing map is the most promising neural data-mining algorithm regarding its capability to visualize high-dimensional data.
7

Joselin J, Dinesh T, and Ashiq M. "A Review on Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-6 (October 31, 2018): 565–69. http://dx.doi.org/10.31142/ijtsrd18461.

8

Ziroyan, M. A., E. A. Tusova, A. S. Hovakimian, and S. G. Sargsyan. "Neural networks apparatus in biometrics". Contemporary problems of social work 1, no. 2 (June 30, 2015): 129–37. http://dx.doi.org/10.17922/2412-5466-2015-1-2-129-137.

9

Alle, Kailash. "Sentiment Analysis Using Neural Networks". International Journal of Science and Research (IJSR) 7, no. 12 (December 5, 2018): 1604–8. http://dx.doi.org/10.21275/sr24716104045.

10

Rodriguez, Nathaniel, Eduardo Izquierdo, and Yong-Yeol Ahn. "Optimal modularity and memory capacity of neural reservoirs". Network Neuroscience 3, no. 2 (January 2019): 551–66. http://dx.doi.org/10.1162/netn_a_00082.

Abstract:
The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network’s architecture and function is still primitive. Here we reveal that a neural network’s modular architecture plays a vital role in determining the neural dynamics and memory performance of the network of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, where a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information-spreading processes can be leveraged to better design neural networks and may shed light on the brain’s modular organization.
11

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks". Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model's adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
12

Wang, Jun. "Artificial neural networks versus natural neural networks". Decision Support Systems 11, no. 5 (June 1994): 415–29. http://dx.doi.org/10.1016/0167-9236(94)90016-7.

13

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like growing networks (P. II)". Mathematical machines and systems 1 (2023): 3–29. http://dx.doi.org/10.34121/1028-9763-2023-1-3-29.

Abstract:
This article is devoted to the development of artificial general intelligence (AGI) based on a new type of neural networks – “neural-like growing networks”. It consists of two parts. The first one was published in N4, 2022, and describes an artificial neural-like element (artificial neuron) in terms of its functionality, which is as close as possible to a biological neuron. An artificial neural-like element is the main element in building neural-like growing networks. The second part deals with the structures and functions of artificial and natural neural networks. The paper proposes a new approach for creating neural-like growing networks as a means of developing AGI that is as close as possible to the natural intelligence of a person. The intelligence of man and living organisms is formed by their nervous system. According to I.P. Pavlov's definition, the main mechanism of higher nervous activity is the reflex activity of the nervous system. In the nerve cell, the main storage of unconditioned reflexes is the deoxyribonucleic acid (DNA) molecule. The article describes ribosomal protein synthesis that contributes to the implementation of unconditioned reflexes and the formation of conditioned reflexes as the basis for learning in biological objects. The first part of the work shows that the structure and functions of ribosomes almost completely coincide with the structure and functions of the Turing machine. Turing invented this machine to prove the fundamental (theoretical) possibility of constructing arbitrarily complex algorithms from extremely simple operations, with the operations themselves performed automatically. Here a stunning analogy arises: nature created DNA and the ribosome to build complex algorithms for creating biological objects and their communication with each other and with the external environment, and ribosomal protein synthesis is carried out by many ribosomes at the same time.
It was concluded that the nerve cells of the brain are analog multi-machine complexes – ultra-fast molecular supercomputers with an unusually simple analog programming device.
14

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering Research and Science 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
15

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering and Technology Research 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejeng.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
16

Li, Xiao Hu, Feng Xu, Jin Hua Zhang, and Su Nan Wang. "A New Small-World Neural Network with its Performance on Fault Tolerance". Advanced Materials Research 629 (December 2012): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amr.629.719.

Abstract:
Many artificial neural networks are simple simulations of the brain neural network's architecture and function. However, how to build new artificial neural networks whose architecture is similar to that of biological neural networks is worth studying. In this study, a new multilayer feedforward small-world neural network is presented using results from research on complex networks. Firstly, a new multilayer feedforward small-world neural network, which relies heavily on the rewiring probability, is built on the basis of the construction ideology of the Watts-Strogatz network model and community structure. Secondly, fault tolerance is employed in investigating the performance of the new small-world neural network. When the network with connection faults or neuron damage is used to test the fault tolerance performance under different rewiring probabilities, simulation results show that the fault tolerance capability of the small-world neural network outmatches that of a regular network of the same scale when the fault probability is more than 40%, while the random network has the best fault tolerance capability.
17

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES". International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
18

Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays". Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Abstract:
The issue of adaptive finite-time cluster synchronization corresponding to neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than that of a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
19

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students' Performance". Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students' academic performance is very essential to produce high-quality students. The main goal is to continuously help students to increase their ability in the learning process and to help educators as well in improving their teaching skills. Therefore, this study was conducted to predict mathematics students' performance using an Artificial Neural Network (ANN). The secondary data from 382 mathematics students from the UCI Machine Learning Repository Data Sets were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, which are the first and the second period grade, while one target output is used, which is the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks known as Network1, Network2 and Network3. Three types of training functions have been selected in this study, which are Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM) and Gradient descent with adaptive learning rate (TRAINGDA). Each training function will be compared based on Performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable function in predicting mathematics students' performance because it has a higher correlation coefficient and a lower Performance value.
20

Tetko, Igor V. "Neural Network Studies. 4. Introduction to Associative Neural Networks". Journal of Chemical Information and Computer Sciences 42, no. 3 (March 26, 2002): 717–28. http://dx.doi.org/10.1021/ci010379o.

21

Garcia, R. K., K. Moreira Gandra, J. M. Block, and D. Barrera-Arellano. "Neural networks to formulate special fats". Grasas y Aceites 63, no. 3 (July 5, 2012): 245–52. http://dx.doi.org/10.3989/gya.119011.

22

Veselý, A., and D. Brechlerová. "Neural networks in intrusion detection systems". Agricultural Economics (Zemědělská ekonomika) 50, No. 1 (February 24, 2012): 35–40. http://dx.doi.org/10.17221/5164-agricecon.

Abstract:
Security of an information system is its very important property, especially today, when computers are interconnected via internet. Because no system can be absolutely secure, the timely and accurate detection of intrusions is necessary. For this purpose, Intrusion Detection Systems (IDS) were designed. There are two basic models of IDS: misuse IDS and anomaly IDS. Misuse systems detect intrusions by looking for activity that corresponds to the known signatures of intrusions or vulnerabilities. Anomaly systems detect intrusions by searching for an abnormal system activity. Most IDS commercial tools are misuse systems with rule-based expert system structure. However, these techniques are less successful when attack characteristics vary from built-in signatures. Artificial neural networks offer the potential to resolve these problems. As far as anomaly systems are concerned, it is very difficult to build them, because it is difficult to define the normal and abnormal behaviour of a system. Also for building anomaly systems, neural networks can be used, because they can learn to discriminate the normal and abnormal behaviour of a system from examples. Therefore, they offer a promising technique for building anomaly systems. This paper presents an overview of the applicability of neural networks in building intrusion systems and discusses advantages and drawbacks of neural network technology.
23

Mu, Yangzi, Mengxing Huang, Chunyang Ye, and Qingzhou Wu. "Diagnosis Prediction via Recurrent Neural Networks". International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 117–20. http://dx.doi.org/10.18178/ijmlc.2018.8.2.673.

24

Sandhiya, K., M. Vidhya, M. Shivaranjani, and S. Saranya. "Smart Fruit Classification using Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (December 31, 2017): 1298–303. http://dx.doi.org/10.31142/ijtsrd6986.

25

Pothumani, S., and Priya N. "Analysis of RAID in Neural Networks". Journal of Advanced Research in Dynamical and Control Systems 11, no. 0009-SPECIAL ISSUE (September 25, 2019): 589–94. http://dx.doi.org/10.5373/jardcs/v11/20192609.

26

TYMOSHENKO, Pavlo, Yevgen ZASOBA, Olexander KOVALCHUK, and Olexander PSHENYCHNYY. "NEUROEVOLUTIONARY ALGORITHMS FOR NEURAL NETWORKS GENERATING". Herald of Khmelnytskyi National University. Technical sciences 315, no. 6(1) (December 29, 2022): 240–44. http://dx.doi.org/10.31891/2307-5732-2022-315-6-240-244.

Abstract:
Solving engineering problems using conventional neural networks requires long-term research on the choice of architecture and hyperparameters. A strong artificial intelligence would be devoid of such shortcomings. Such research is carried out using a very wide range of approaches: for example, biological (attempts to grow a brain in laboratory conditions), hardware (creating neural processors) or software (using the power of ordinary CPUs and GPUs). The goal of the work is to develop a system that would allow using evolutionary approaches to generate neural networks suitable for solving problems. This is called “neuroevolution”. The purpose of this work also includes the study of the features of possible applicable evolutionary strategies. The object of research in this work is the neuroevolutionary approach to solving machine learning problems. The subject of research is evolutionary strategies and methods of encoding neural networks in the organism's genome. The scientific novelty of the work lies in the testing of previously unused evolutionary strategies and the generalization of the obtained system to systems of “general artificial intelligence”. A system for simulating neuroevolution was created. The specifics of implementation were considered, the choice of algorithms was justified, and their work was explained. In order to perform experiments, datasets were created and methods of applying neuroevolutionary systems were developed. It was possible to choose optimal training parameters and to find out the relationships between them, as well as the accuracy and speed of training. It cannot be said that the models implemented within this work directly bring us closer to strong AI. They still lack their own memory as well as a certain level of complexity. For successful use, it is necessary to configure the form of the input data or perform some calculations outside the model. However, in the future, such a system can be developed further, for example, to work with SNNs or for use on special equipment.
27

Henzinger, Thomas A., Mathias Lechner, and Đorđe Žikelić. "Scalable Verification of Quantized Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 3787–95. http://dx.doi.org/10.1609/aaai.v35i5.16496.

Abstract:
Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. However, most methods are designed for verification of an idealized model of the actual network which works over real arithmetic and ignores rounding imprecisions. This idealization is in stark contrast to network quantization, which is a technique that trades numerical precision for computational efficiency and is, therefore, often applied in practice. Neglecting rounding errors of such low-bit quantized neural networks has been shown to lead to wrong conclusions about the network's correctness. Thus, the desired approach for verifying quantized neural networks would be one that takes these rounding errors into account. In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP. Furthermore, we explore several practical heuristics toward closing the complexity gap between idealized and bit-exact verification. In particular, we propose three techniques for making SMT-based verification of quantized neural networks more scalable. Our experiments demonstrate that our proposed methods allow a speedup of up to three orders of magnitude over existing approaches.
28

Zhang, Wei, Zhi Han, Xiai Chen, Baichen Liu, Huidi Jia, and Yandong Tang. "Fully Kernected Neural Networks". Journal of Mathematics 2023 (June 28, 2023): 1–9. http://dx.doi.org/10.1155/2023/1539436.

Abstract:
In this paper, we apply kernel methods to a deep convolutional neural network (DCNN) to improve its nonlinear ability. DCNNs have achieved significant improvement in many computer vision tasks. For an image classification task, the accuracy reaches saturation when the depth and width of the network are sufficient and appropriate. The saturation accuracy will not rise even by increasing the depth and width. We find that improving the nonlinear ability of DCNNs can break through the saturation accuracy. In a DCNN, the former layers are more inclined to extract features and the latter layers are more inclined to classify features. Therefore, we apply kernel methods at the last fully connected layer to implicitly map features to a higher-dimensional space to improve nonlinear ability so that the network achieves better linear separability. Also, we name the network fully kernected neural networks (fully connected neural networks with kernel methods). Our experiment results show that fully kernected neural networks achieve higher classification accuracy and a faster convergence rate than baseline networks.
29

Simons, Robert, and J. G. Taylor. "Neural Networks." Journal of the Operational Research Society 47, no. 4 (April 1996): 596. http://dx.doi.org/10.2307/3010740.

30

Schier, R. "Neural networks." Radiology 191, no. 1 (April 1994): 291. http://dx.doi.org/10.1148/radiology.191.1.8134593.

31

Tafti, Mohammed H. A. "Neural networks". ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 51–54. http://dx.doi.org/10.1145/134347.134361.

32

Turega, M. A. "Neural Networks". Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

33

Jordan, Michael I., and Christopher M. Bishop. "Neural networks". ACM Computing Surveys 28, no. 1 (March 1996): 73–75. http://dx.doi.org/10.1145/234313.234348.

34

Dory, Robert A. "Neural Networks". Computers in Physics 4, no. 3 (1990): 324. http://dx.doi.org/10.1063/1.4822918.

35

Ganssle, Graham. "Neural networks". Leading Edge 37, no. 8 (August 2018): 616–19. http://dx.doi.org/10.1190/tle37080616.1.

Abstract:
We've all heard a proselytizing hyperbolist make the artificial-intelligence-is-going-to-steal-my-job speech. If you subscribe, look at the code in the notebook accompanying this tutorial at https://github.com/seg/tutorials-2018. It demonstrates a small neural network. You'll find a simple system composed chiefly of multiply and add operations. That's really all that happens inside a neural network. Multiply and add. There's no magic here.
36

McGourty, Christine. "Neural networks". Nature 335, no. 6186 (September 1988): 103. http://dx.doi.org/10.1038/335103b0.

37

Simons, Robert. "Neural Networks". Journal of the Operational Research Society 47, no. 4 (April 1996): 596–97. http://dx.doi.org/10.1057/jors.1996.70.

38

Beatty, P. C. W. "Neural networks". Current Anaesthesia & Critical Care 9, no. 4 (August 1998): 168–73. http://dx.doi.org/10.1016/s0953-7112(98)80050-1.

39

Cutler, Adele. "Neural Networks". Technometrics 42, no. 4 (November 2000): 432. http://dx.doi.org/10.1080/00401706.2000.10485724.

40

Signorini, David F., Jim M. Slattery, S. R. Dodds, Victor Lane, and Peter Littlejohns. "Neural networks". Lancet 346, no. 8988 (December 1995): 1500–1501. http://dx.doi.org/10.1016/s0140-6736(95)92525-2.

41

Jefferson, Miles F., Neil Pendleton, Sam Lucas, Michael A. Horan, and Lionel Tarassenko. "Neural networks". Lancet 346, no. 8991-8992 (December 1995): 1712. http://dx.doi.org/10.1016/s0140-6736(95)92880-4.

42

Gutfreund, H. "NEURAL NETWORKS". International Journal of Modern Physics B 04, no. 06 (May 1990): 1223–39. http://dx.doi.org/10.1142/s0217979290000607.

43

Medoff, Deborah R., and M.-A. Tagamets. "Neural Networks". American Journal of Psychiatry 157, no. 10 (October 2000): 1571. http://dx.doi.org/10.1176/appi.ajp.157.10.1571.

44

Lewis, David A. "Neural Networks". American Journal of Psychiatry 157, no. 11 (November 2000): 1752. http://dx.doi.org/10.1176/appi.ajp.157.11.1752.

45

Graybiel, Ann M. "Neural Networks". American Journal of Psychiatry 158, no. 1 (January 2001): 21. http://dx.doi.org/10.1176/appi.ajp.158.1.21.

46

Tamminga, Carol A., and Henry H. Holcomb. "Neural Networks". American Journal of Psychiatry 158, no. 2 (February 2001): 185. http://dx.doi.org/10.1176/appi.ajp.158.2.185.

47

Schwindling, Jerome. "Neural Networks". EPJ Web of Conferences 4 (2010): 02002. http://dx.doi.org/10.1051/epjconf/20100402002.

48

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks". Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

49

Kock, Gerd. "Neural networks". Microprocessing and Microprogramming 38, no. 1-5 (September 1993): 679. http://dx.doi.org/10.1016/0165-6074(93)90210-c.

50

Titterington, Michael. "Neural networks". WIREs Computational Statistics 2, no. 1 (December 21, 2009): 1–8. http://dx.doi.org/10.1002/wics.50.
