To see the other types of publications on this topic, follow the link: Neural networks.

Journal articles on the topic "Neural networks"

Create a source citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Neural networks."

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in PDF format and read its online annotation whenever the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1. Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach." Indian Journal of Science and Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

2. N, Vikram. "Artificial Neural Networks." International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

3. Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks." Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

4. Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks." 網際網路技術學刊 23, no. 1 (January 2022): 33–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures. Finally, we compile the applications of capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
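The core idea the survey builds on, vector-valued capsule activations, is easy to illustrate. Below is a minimal sketch of the well-known "squash" non-linearity from the original 2017 capsule-network proposal; it is generic background, not code from the surveyed article, and the example vectors are arbitrary.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Capsule "squash" non-linearity: scales a vector's length into
    [0, 1) while preserving its direction, so the length can be read
    as the probability that the entity the capsule represents exists."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

# A long vector keeps its direction but its length approaches 1;
# a short vector is shrunk towards 0.
print(np.linalg.norm(squash(np.array([3.0, 4.0]))))  # ~0.96
print(np.linalg.norm(squash(np.array([0.1, 0.0]))))  # ~0.01
```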
5. Perfetti, R. "A neural network to design neural networks." IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

6. Veselý, A. "Neural networks in data mining." Agricultural Economics (Zemědělská ekonomika) 49, no. 9 (March 2, 2012): 427–31. http://dx.doi.org/10.17221/5427-agricecon.

Abstract:
To possess relevant information is an inevitable condition for successful enterprising in modern business. Information can be divided into data and knowledge. How to gather, store and retrieve data is studied in database theory. In knowledge engineering, knowledge stands in the centre of interest, and methods of its formalization and acquisition are studied. Knowledge can be gained from experts, specialists in the area of interest, or it can be induced from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Classical methods of gaining knowledge from data sets are statistical methods. In data mining, new methods besides the statistical ones are used. These new methods have their origin in artificial intelligence. They look for unknown and unexpected relations, which can be uncovered by exploring the data in a database. The article describes the utilization of modern methods of data mining and pursues especially the methods based on neural network theory. The advantages and drawbacks of applying multilayer feedforward neural networks and Kohonen's self-organizing maps are discussed. Kohonen's self-organizing map is the most promising neural data-mining algorithm regarding its capability to visualize high-dimensional data.
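Since the abstract singles out Kohonen's self-organizing map for its ability to visualize high-dimensional data, a minimal sketch of the classic SOM training loop may help. The grid size, learning-rate and neighbourhood schedules here are arbitrary choices, not taken from the article.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map: each grid node holds a weight
    vector; the best-matching unit (BMU) and its grid neighbours are pulled
    towards every sample, projecting high-dimensional data onto the grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit = node whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighbourhood around the BMU on the grid
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_d2 / (2 * sigma**2))[..., None]
            weights += lr * influence * (x - weights)
    return weights

# Example: project 200 random 3-D points onto a 10 x 10 map.
W = train_som(np.random.default_rng(1).random((200, 3)))
print(W.shape)  # (10, 10, 3)
```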
7. J, Joselin, Dinesh T, and Ashiq M. "A Review on Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 6 (October 31, 2018): 565–69. http://dx.doi.org/10.31142/ijtsrd18461.

8. Ziroyan, M. A., E. A. Tusova, A. S. Hovakimian, and S. G. Sargsyan. "Neural networks apparatus in biometrics." Contemporary Problems of Social Work 1, no. 2 (June 30, 2015): 129–37. http://dx.doi.org/10.17922/2412-5466-2015-1-2-129-137.

9. Alle, Kailash. "Sentiment Analysis Using Neural Networks." International Journal of Science and Research (IJSR) 7, no. 12 (December 5, 2018): 1604–8. http://dx.doi.org/10.21275/sr24716104045.

10. Rodriguez, Nathaniel, Eduardo Izquierdo, and Yong-Yeol Ahn. "Optimal modularity and memory capacity of neural reservoirs." Network Neuroscience 3, no. 2 (January 2019): 551–66. http://dx.doi.org/10.1162/netn_a_00082.

Abstract:
The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network’s architecture and function is still primitive. Here we reveal that a neural network’s modular architecture plays a vital role in determining the neural dynamics and memory performance of the network of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, where a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information-spreading processes can be leveraged to better design neural networks and may shed light on the brain’s modular organization.
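As a rough illustration of the kind of model studied here, a network of binary threshold neurons whose wiring density differs inside and between modules, the sketch below builds such a network and runs its dynamics. The connection probabilities, threshold, and sizes are invented for the example and do not reproduce the paper's setup.

```python
import numpy as np

def modular_reservoir(n=100, modules=4, p_in=0.2, p_out=0.02, seed=0):
    """Random directed network of threshold neurons with tunable modularity:
    dense connections inside each module, sparse connections between them."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(modules), n // modules)
    same = labels[:, None] == labels[None, :]
    p = np.where(same, p_in, p_out)      # intra- vs. inter-module density
    A = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(A, 0.0)             # no self-connections
    return A

def step(A, state, theta=1.0):
    """One update of binary threshold neurons: fire if weighted input > theta."""
    return (A @ state > theta).astype(float)

A = modular_reservoir()
state = (np.random.default_rng(1).random(100) < 0.1).astype(float)
for _ in range(5):
    state = step(A, state)
print(int(state.sum()), "neurons active after 5 steps")
```

Varying p_in versus p_out trades local cohesion against global connectivity, which is the knob the paper's notion of optimal modularity turns.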
11. Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model's adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
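The sample-based global-surrogate baseline that the paper contrasts its I-Net with can be sketched in a few lines: query a network on many inputs and fit a low-order polynomial with maximal fidelity to the samples. The stand-in network below is hypothetical, and the I-Net itself, which maps network parameters directly to symbolic coefficients, is not reproduced here.

```python
import numpy as np

# Hypothetical trained network: any callable from inputs to outputs.
def net(x):
    return np.tanh(1.5 * x) + 0.1 * x**2

# Sample-based global surrogate (the slow baseline the paper improves on):
# sample the network's input-output behaviour, then fit a low-order
# polynomial that maximizes fidelity to those samples.
xs = np.linspace(-2, 2, 200)
ys = net(xs)
coeffs = np.polyfit(xs, ys, deg=3)   # degree-3 symbolic surrogate
print(np.poly1d(coeffs))             # human-readable explanation
print("max error:", np.abs(np.polyval(coeffs, xs) - ys).max())
```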
12. Wang, Jun. "Artificial neural networks versus natural neural networks." Decision Support Systems 11, no. 5 (June 1994): 415–29. http://dx.doi.org/10.1016/0167-9236(94)90016-7.

13. Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like growing networks (P. II)." Mathematical Machines and Systems 1 (2023): 3–29. http://dx.doi.org/10.34121/1028-9763-2023-1-3-29.

Abstract:
This article is devoted to the development of artificial general intelligence (AGI) based on a new type of neural network – "neural-like growing networks". It consists of two parts. The first one was published in N4, 2022, and describes an artificial neural-like element (artificial neuron) in terms of its functionality, which is as close as possible to a biological neuron. An artificial neural-like element is the main element in building neural-like growing networks. The second part deals with the structures and functions of artificial and natural neural networks. The paper proposes a new approach for creating neural-like growing networks as a means of developing AGI that is as close as possible to the natural intelligence of a person. The intelligence of man and living organisms is formed by their nervous system. According to I. P. Pavlov's definition, the main mechanism of higher nervous activity is the reflex activity of the nervous system. In the nerve cell, the main storage of unconditioned reflexes is the deoxyribonucleic acid (DNA) molecule. The article describes ribosomal protein synthesis, which contributes to the implementation of unconditioned reflexes and the formation of conditioned reflexes as the basis for learning in biological objects. The first part of the work shows that the structure and functions of ribosomes almost completely coincide with the structure and functions of the Turing machine. Turing invented this machine to prove the fundamental (theoretical) possibility of constructing arbitrarily complex algorithms from extremely simple operations that are themselves performed automatically. Here arises a stunning analogy: nature created DNA and the ribosome to build complex algorithms for creating biological objects and their communication with each other and with the external environment, and ribosomal protein synthesis is carried out by many ribosomes at the same time. It was concluded that the nerve cells of the brain are analog multi-machine complexes – ultra-fast molecular supercomputers with an unusually simple analog programming device.
14. Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures." European Journal of Engineering Research and Science 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of the underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
15. Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures." European Journal of Engineering and Technology Research 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejeng.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of the underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
16. Li, Xiao Hu, Feng Xu, Jin Hua Zhang, and Su Nan Wang. "A New Small-World Neural Network with its Performance on Fault Tolerance." Advanced Materials Research 629 (December 2012): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amr.629.719.

Abstract:
Many artificial neural networks are simple simulations of the brain neural network's architecture and function. However, how to build new artificial neural networks whose architecture is similar to biological neural networks is worth studying. In this study, a new multilayer feedforward small-world neural network is presented using results from research on complex networks. Firstly, a new multilayer feedforward small-world neural network, which relies heavily on the rewiring probability, is built up on the basis of the construction ideology of the Watts-Strogatz network model and community structure. Secondly, fault tolerance is employed in investigating the performance of the new small-world neural network. When a network with connection faults or neuron damage is used to test the fault tolerance performance under different rewiring probabilities, simulation results show that the fault tolerance capability of the small-world neural network outmatches that of a regular network of the same scale when the fault probability is more than 40%, while the random network has the best fault tolerance capability.
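The Watts-Strogatz construction the authors start from is standard and worth seeing concretely; the sketch below builds a ring lattice and rewires each edge with probability p. It shows only the graph construction, not the authors' multilayer feedforward variant, and the values of n, k, and p are illustrative.

```python
import numpy as np

def watts_strogatz(n=20, k=4, p=0.3, seed=0):
    """Watts-Strogatz construction: start from a ring lattice where each
    node links to its k nearest neighbours, then rewire each edge to a
    random target with probability p. Small p keeps local clustering
    while adding the long-range shortcuts of a small-world network."""
    rng = np.random.default_rng(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))      # ring lattice
    rewired = set()
    for (u, v) in edges:
        if rng.random() < p:                 # rewire with probability p
            w = rng.integers(n)
            while w == u or (u, w) in rewired:
                w = rng.integers(n)          # avoid self-loops/duplicates
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired

print(len(watts_strogatz()), "edges")
```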
17. Jorgensen, Thomas D., Barry P. Haynes, and Charlotte C. F. Norlund. "Pruning Artificial Neural Networks Using Neural Complexity Measures." International Journal of Neural Systems 18, no. 5 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks that uses a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
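The paper's own complexity-based criterion is not specified in the abstract, but its benchmark, Magnitude Based Pruning, is easy to sketch: zero out the connections with the smallest absolute weights. The weight matrix and pruning fraction below are arbitrary.

```python
import numpy as np

def magnitude_prune(weights, fraction=0.5):
    """Magnitude Based Pruning, the benchmark the paper compares against:
    remove (zero out) the given fraction of connections with the smallest
    absolute weights, assuming they contribute least to the output."""
    threshold = np.quantile(np.abs(weights).ravel(), fraction)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_pruned, mask = magnitude_prune(W, fraction=0.5)
print(f"{(~mask).mean():.0%} of connections removed")
```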
18. Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays." Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Abstract:
The issue of adaptive finite-time cluster synchronization corresponding to neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than that of a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
19. Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students' Performance." Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students' academic performance is essential to producing high-quality students. The main goal is to continuously help students to increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students' performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students from the UCI Machine Learning Repository data sets were used to train the neural networks, and the neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while the target output is the final grade. This study also aims to identify which training function is the best among three feed-forward neural networks known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), gradient descent with momentum (TRAINGDM) and gradient descent with adaptive learning rate (TRAINGDA). Each training function was compared based on performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable function for predicting mathematics students' performance because it has a higher correlation coefficient and a lower performance value.
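As a rough Python analogue of the experiment (the authors used MATLAB's nntool), the sketch below trains a small feed-forward regressor from the two period grades to the final grade. scikit-learn offers no Levenberg-Marquardt trainer, so L-BFGS stands in, and synthetic grades replace the UCI data, so only the shape of the experiment is reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the UCI student data: two inputs (first- and
# second-period grades G1, G2, on a 0-20 scale) predicting the final grade G3.
rng = np.random.default_rng(0)
G1 = rng.integers(0, 21, size=382)
G2 = np.clip(G1 + rng.integers(-3, 4, size=382), 0, 20)
G3 = np.clip(0.3 * G1 + 0.6 * G2 + rng.normal(0, 1.5, size=382), 0, 20)

X = np.column_stack([G1, G2])
X_train, X_test, y_train, y_test = train_test_split(X, G3, random_state=0)

# Mirrors the setup (2 inputs -> hidden layer -> 1 output), not nntool itself.
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0).fit(X_train, y_train)
print("R^2 on held-out students:", round(model.score(X_test, y_test), 3))
```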
20. Tetko, Igor V. "Neural Network Studies. 4. Introduction to Associative Neural Networks." Journal of Chemical Information and Computer Sciences 42, no. 3 (March 26, 2002): 717–28. http://dx.doi.org/10.1021/ci010379o.

21. Garcia, R. K., K. Moreira Gandra, J. M. Block, and D. Barrera-Arellano. "Neural networks to formulate special fats." Grasas y Aceites 63, no. 3 (July 5, 2012): 245–52. http://dx.doi.org/10.3989/gya.119011.

22. Veselý, A., and D. Brechlerová. "Neural networks in intrusion detection systems." Agricultural Economics (Zemědělská ekonomika) 50, no. 1 (February 24, 2012): 35–40. http://dx.doi.org/10.17221/5164-agricecon.

Abstract:
Security is a very important property of an information system, especially today, when computers are interconnected via the internet. Because no system can be absolutely secure, the timely and accurate detection of intrusions is necessary. For this purpose, Intrusion Detection Systems (IDS) were designed. There are two basic models of IDS: misuse IDS and anomaly IDS. Misuse systems detect intrusions by looking for activity that corresponds to known signatures of intrusions or vulnerabilities. Anomaly systems detect intrusions by searching for abnormal system activity. Most commercial IDS tools are misuse systems with a rule-based expert system structure. However, these techniques are less successful when attack characteristics vary from the built-in signatures. Artificial neural networks offer the potential to resolve these problems. Anomaly systems, in turn, are very difficult to build, because it is difficult to define the normal and abnormal behaviour of a system. Neural networks can be used for building anomaly systems as well, because they can learn to discriminate the normal and abnormal behaviour of a system from examples. Therefore, they offer a promising technique for building anomaly systems. This paper presents an overview of the applicability of neural networks in building intrusion detection systems and discusses the advantages and drawbacks of neural network technology.
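The anomaly-detection idea the paper discusses, learning normal behaviour from examples and flagging deviations, can be caricatured with a toy autoencoder: reconstruct "normal" feature vectors and score inputs by reconstruction error. The feature dimensions, threshold, and synthetic data below are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy anomaly-based IDS: learn to reconstruct "normal" feature vectors;
# inputs the network cannot reconstruct well are flagged as intrusions.
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 8))   # normal activity features
attack = rng.normal(4, 1, size=(5, 8))     # abnormal activity features

# Bottleneck regressor trained to map normal inputs onto themselves.
autoencoder = MLPRegressor(hidden_layer_sizes=(3,), max_iter=3000,
                           random_state=0).fit(normal, normal)

def anomaly_score(x):
    return np.mean((autoencoder.predict(x) - x) ** 2, axis=1)

threshold = np.quantile(anomaly_score(normal), 0.99)  # tolerate 1% false alarms
print("attacks flagged:", int((anomaly_score(attack) > threshold).sum()), "of 5")
```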
23. Mu, Yangzi, Mengxing Huang, Chunyang Ye, and Qingzhou Wu. "Diagnosis Prediction via Recurrent Neural Networks." International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 117–20. http://dx.doi.org/10.18178/ijmlc.2018.8.2.673.

24. Sandhiya, K., M. Vidhya, M. Shivaranjani, and S. Saranya. "Smart Fruit Classification using Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 1 (December 31, 2017): 1298–303. http://dx.doi.org/10.31142/ijtsrd6986.

25. S, Pothumani, and Priya N. "Analysis of RAID in Neural Networks." Journal of Advanced Research in Dynamical and Control Systems 11, no. 09 Special Issue (September 25, 2019): 589–94. http://dx.doi.org/10.5373/jardcs/v11/20192609.

26. Tymoshenko, Pavlo, Yevgen Zasoba, Olexander Kovalchuk, and Olexander Pshenychnyy. "Neuroevolutionary Algorithms for Neural Networks Generating." Herald of Khmelnytskyi National University. Technical Sciences 315, no. 6(1) (December 29, 2022): 240–44. http://dx.doi.org/10.31891/2307-5732-2022-315-6-240-244.

Abstract:
Solving engineering problems using conventional neural networks requires long-term research on the choice of architecture and hyperparameters. A strong artificial intelligence would be devoid of such shortcomings. Such research is carried out using a very wide range of approaches: for example, biological (attempts to grow a brain in laboratory conditions), hardware (creating neural processors) or software (using the power of ordinary CPUs and GPUs). The goal of the work is to develop a system that allows using evolutionary approaches to generate neural networks suitable for solving problems; this is called "neuroevolution". The purpose of this work also includes the study of the features of possible applicable evolutionary strategies. The object of research in this work is the neuroevolutionary approach to solving machine learning problems. The subject of research is evolutionary strategies and methods of encoding neural networks in the organism's genome. The scientific novelty of the work lies in the testing of previously unused evolutionary strategies and the generalization of the obtained system to systems of "general artificial intelligence". A system for simulating neuroevolution was created. The specifics of the implementation were considered, the choice of algorithms was justified, and their operation was explained. In order to perform experiments, datasets were created and methods of applying neuroevolutionary systems were developed. It was possible to choose optimal training parameters and to find out the relationships between them, as well as the accuracy and speed of training. It cannot be said that the models implemented within this work directly bring us closer to strong AI: they still lack their own memory as well as a certain level of complexity. For successful use, it is necessary to configure the form of the input data or perform some calculations outside the model. However, in the future, such a system can be developed, for example, to work with SNNs, or for use on special hardware.
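A minimal flavour of neuroevolution, under the strong simplification that only the weights of a fixed tiny network evolve (the article also evolves architectures and encodings), might look like the following Gaussian-mutation evolution strategy on the XOR task. Population size, mutation scale, and the task itself are arbitrary choices.

```python
import numpy as np

def fitness(w, X, y):
    """Negative squared error of a tiny 2-2-1 network encoded by the
    flat weight vector w (its "genome"); higher is better."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    return -np.mean((pred - y) ** 2)

# Simple evolution strategy: mutate the genome, evaluate, keep the best.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)          # XOR as a toy task
best = rng.normal(size=9)
for gen in range(300):
    offspring = best + 0.3 * rng.normal(size=(20, 9))  # Gaussian mutation
    scores = np.array([fitness(w, X, y) for w in offspring])
    if scores.max() > fitness(best, X, y):
        best = offspring[scores.argmax()]
print("final error:", -fitness(best, X, y))
```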
27. Henzinger, Thomas A., Mathias Lechner, and Đorđe Žikelić. "Scalable Verification of Quantized Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 3787–95. http://dx.doi.org/10.1609/aaai.v35i5.16496.

Abstract:
Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. However, most methods are designed for verification of an idealized model of the actual network which works over real arithmetic and ignores rounding imprecisions. This idealization is in stark contrast to network quantization, which is a technique that trades numerical precision for computational efficiency and is, therefore, often applied in practice. Neglecting rounding errors of such low-bit quantized neural networks has been shown to lead to wrong conclusions about the network's correctness. Thus, the desired approach for verifying quantized neural networks would be one that takes these rounding errors into account. In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP. Furthermore, we explore several practical heuristics toward closing the complexity gap between idealized and bit-exact verification. In particular, we propose three techniques for making SMT-based verification of quantized neural networks more scalable. Our experiments demonstrate that our proposed methods allow a speedup of up to three orders of magnitude over existing approaches.
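The abstract's motivating point, that idealized real-valued reasoning and bit-exact quantized evaluation can disagree, is easy to demonstrate numerically. The toy layer below is contrived for illustration and has nothing to do with the paper's SMT machinery.

```python
import numpy as np

def quantize(x, scale=0.1):
    """Uniform quantization: round to the nearest multiple of `scale`,
    as a low-bit implementation effectively does."""
    return np.round(x / scale) * scale

# Idealized (real-valued) vs. bit-exact (quantized) evaluation of one
# ReLU unit can disagree near a decision boundary:
w = np.array([0.52, -0.49])
x = np.array([1.0, 1.04])
real_out = max(w @ x, 0.0)                        # idealized arithmetic
quant_out = max(quantize(w) @ quantize(x), 0.0)   # rounded weights/inputs
print(real_out > 0, quant_out > 0)                # True False -> verdicts differ
```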
28. Zhang, Wei, Zhi Han, Xiai Chen, Baichen Liu, Huidi Jia, and Yandong Tang. "Fully Kernected Neural Networks." Journal of Mathematics 2023 (June 28, 2023): 1–9. http://dx.doi.org/10.1155/2023/1539436.

Abstract:
In this paper, we apply kernel methods to deep convolutional neural networks (DCNNs) to improve their nonlinear ability. DCNNs have achieved significant improvements in many computer vision tasks. For an image classification task, the accuracy saturates once the depth and width of the network are sufficient, and it will not rise even if the depth and width are increased further. We find that improving the nonlinear ability of DCNNs can break through this saturation accuracy. In a DCNN, the earlier layers are more inclined to extract features and the later layers are more inclined to classify them. Therefore, we apply kernel methods at the last fully connected layer to implicitly map features to a higher-dimensional space, improving the nonlinear ability so that the network achieves better linear separability. We name the resulting networks fully kernected neural networks (fully connected neural networks with kernel methods). Our experimental results show that fully kernected neural networks achieve higher classification accuracy and a faster convergence rate than baseline networks.
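The general recipe the abstract describes, keeping the trained hidden layers as a feature extractor and making the last layer kernel-based, can be sketched with off-the-shelf parts. Here an RBF-kernel SVM stands in for the paper's kernelized fully connected layer; that substitution is an assumption, not the authors' construction.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Train a small network, then treat its hidden layer as a feature
# extractor and replace the final linear layer with an RBF-kernel
# classifier, implicitly mapping features to a higher-dimensional space.
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

# Hidden activations = learned features (ReLU(X @ W1 + b1)).
H = np.maximum(X @ net.coefs_[0] + net.intercepts_[0], 0.0)

kernel_head = SVC(kernel="rbf").fit(H, y)
print("linear head accuracy:", net.score(X, y))
print("kernel head accuracy:", kernel_head.score(H, y))
```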
29. Simons, Robert, and J. G. Taylor. "Neural Networks." Journal of the Operational Research Society 47, no. 4 (April 1996): 596. http://dx.doi.org/10.2307/3010740.

30. Schier, R. "Neural networks." Radiology 191, no. 1 (April 1994): 291. http://dx.doi.org/10.1148/radiology.191.1.8134593.

31. Tafti, Mohammed H. A. "Neural networks." ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 51–54. http://dx.doi.org/10.1145/134347.134361.

32. Turega, M. A. "Neural Networks." Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

33. Jordan, Michael I., and Christopher M. Bishop. "Neural networks." ACM Computing Surveys 28, no. 1 (March 1996): 73–75. http://dx.doi.org/10.1145/234313.234348.

34. Dory, Robert A. "Neural Networks." Computers in Physics 4, no. 3 (1990): 324. http://dx.doi.org/10.1063/1.4822918.

35. Ganssle, Graham. "Neural networks." Leading Edge 37, no. 8 (August 2018): 616–19. http://dx.doi.org/10.1190/tle37080616.1.

Abstract:
We've all heard a proselytizing hyperbolist make the artificial-intelligence-is-going-to-steal-my-job speech. If you subscribe, look at the code in the notebook accompanying this tutorial at https://github.com/seg/tutorials-2018. It demonstrates a small neural network. You'll find a simple system composed chiefly of multiply and add operations. That's really all that happens inside a neural network. Multiply and add. There's no magic here.
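Ganssle's point can be verified without the notebook: a one-hidden-layer network really is chiefly multiply and add. A self-contained sketch with arbitrary random weights:

```python
import numpy as np

# A one-hidden-layer network is just multiply-and-add plus a simple
# non-linearity, exactly as the tutorial says:
def forward(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)   # multiply, add, clip (ReLU)
    return W2 @ h + b2                 # multiply, add

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
print(forward(x, W1, b1, W2, b2))
```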
36. McGourty, Christine. "Neural networks." Nature 335, no. 6186 (September 1988): 103. http://dx.doi.org/10.1038/335103b0.

37. Simons, Robert. "Neural Networks." Journal of the Operational Research Society 47, no. 4 (April 1996): 596–97. http://dx.doi.org/10.1057/jors.1996.70.

38. Beatty, P. C. W. "Neural networks." Current Anaesthesia & Critical Care 9, no. 4 (August 1998): 168–73. http://dx.doi.org/10.1016/s0953-7112(98)80050-1.

39. Cutler, Adele. "Neural Networks." Technometrics 42, no. 4 (November 2000): 432. http://dx.doi.org/10.1080/00401706.2000.10485724.

40. Signorini, David F., Jim M. Slattery, S. R. Dodds, Victor Lane, and Peter Littlejohns. "Neural networks." Lancet 346, no. 8988 (December 1995): 1500–1501. http://dx.doi.org/10.1016/s0140-6736(95)92525-2.

41. Jefferson, Miles F., Neil Pendleton, Sam Lucas, Michael A. Horan, and Lionel Tarassenko. "Neural networks." Lancet 346, no. 8991-8992 (December 1995): 1712. http://dx.doi.org/10.1016/s0140-6736(95)92880-4.

42. Gutfreund, H. "Neural Networks." International Journal of Modern Physics B 4, no. 6 (May 1990): 1223–39. http://dx.doi.org/10.1142/s0217979290000607.

43. Medoff, Deborah R., and M.-A. Tagamets. "Neural Networks." American Journal of Psychiatry 157, no. 10 (October 2000): 1571. http://dx.doi.org/10.1176/appi.ajp.157.10.1571.

44. Lewis, David A. "Neural Networks." American Journal of Psychiatry 157, no. 11 (November 2000): 1752. http://dx.doi.org/10.1176/appi.ajp.157.11.1752.

45. Graybiel, Ann M. "Neural Networks." American Journal of Psychiatry 158, no. 1 (January 2001): 21. http://dx.doi.org/10.1176/appi.ajp.158.1.21.

46. Tamminga, Carol A., and Henry H. Holcomb. "Neural Networks." American Journal of Psychiatry 158, no. 2 (February 2001): 185. http://dx.doi.org/10.1176/appi.ajp.158.2.185.

47. Schwindling, Jerome. "Neural Networks." EPJ Web of Conferences 4 (2010): 02002. http://dx.doi.org/10.1051/epjconf/20100402002.

48. Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks." Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

49. Kock, Gerd. "Neural networks." Microprocessing and Microprogramming 38, no. 1-5 (September 1993): 679. http://dx.doi.org/10.1016/0165-6074(93)90210-c.

50. Titterington, Michael. "Neural networks." WIREs Computational Statistics 2, no. 1 (December 21, 2009): 1–8. http://dx.doi.org/10.1002/wics.50.

