Journal articles on the topic 'Neural network'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach." Indian Journal of Science and Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

2

Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks." Indian Journal of Applied Research 3, no. 6 (June 2013): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

3

Tran, Loc. "Directed Hypergraph Neural Network." Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (March 31, 2020): 1434–41. http://dx.doi.org/10.5373/jardcs/v12sp4/20201622.

4

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network." Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch–Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
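The recall-by-minimum-Hamming-distance principle behind the network is easy to make concrete. A minimal sketch follows, using toy bipolar patterns of our own choosing rather than the paper's McCulloch–Pitts circuit:

```python
import numpy as np

# Hedged sketch: recall a stored bipolar pattern by minimum Hamming
# distance, the operating principle the abstract describes. The patterns
# and the distorted probe are illustrative, not the paper's test set.
patterns = np.array([[ 1, -1,  1, -1,  1],
                     [ 1,  1, -1, -1,  1],
                     [-1, -1,  1,  1, -1]])

def recall(x):
    distances = np.sum(patterns != x, axis=1)   # disagreeing bits per pattern
    return patterns[np.argmin(distances)]       # nearest reference pattern

noisy = np.array([1, -1, 1, 1, 1])              # pattern 0 with one bit flipped
print(recall(noisy))                            # -> [ 1 -1  1 -1  1]
```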
5

Perfetti, R. "A neural network to design neural networks." IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

6

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks." Journal of Internet Technology 23, no. 1 (January 2022): 33–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress on capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of the capsule network. Secondly, we analyze and summarize recent improvements in the dynamic routing mechanism and network structure of the capsule network, as well as combinations of the capsule network with other network structures. Finally, we compile the applications of capsule networks in many fields, including computer vision, natural language processing, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
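The capsule architectures this survey covers build on the "squash" nonlinearity of the original 2017 capsule paper, which is compact enough to sketch (our illustration):

```python
import numpy as np

# Hedged sketch of the capsule "squash" nonlinearity: short vectors shrink
# toward zero and long vectors approach (but never reach) unit length, so a
# capsule's length can be read as the probability that its entity is present.
def squash(s, eps=1e-9):
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

v = squash(np.array([[3.0, 4.0]]))        # input vector of length 5
print(v, np.linalg.norm(v))               # squashed length ~0.96
```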
7

Sreekanth, D. "Metro Water Fraudulent Prediction in Houses Using Convolutional Neural Network and Recurrent Neural Network." Revista Gestão Inovação e Tecnologias 11, no. 4 (July 10, 2021): 1177–87. http://dx.doi.org/10.47059/revistageintec.v11i4.2177.

8

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance." Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is essential to producing high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students from the UCI Machine Learning Repository were used to train the neural networks, and the neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three feed-forward neural networks, known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), gradient descent with momentum (TRAINGDM) and gradient descent with adaptive learning rate (TRAINGDA). Each training function was compared based on performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower performance value.
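As a rough open-source analogue of this comparison (hedged: scikit-learn has no Levenberg-Marquardt trainer, and the data below are synthetic stand-ins, not the UCI student data), the sketch compares the available solvers on the same two-input, one-output task shape:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
g1 = rng.uniform(0, 20, 400)                        # stand-in period-1 grades
g2 = np.clip(g1 + rng.normal(0, 2, 400), 0, 20)     # stand-in period-2 grades
final = np.clip(0.3 * g1 + 0.6 * g2 + rng.normal(0, 1, 400), 0, 20)
X = np.column_stack([g1, g2])

X_tr, X_te, y_tr, y_te = train_test_split(X, final, random_state=0)
for solver in ("lbfgs", "adam", "sgd"):             # analogous comparison
    model = MLPRegressor(hidden_layer_sizes=(10,), solver=solver,
                         max_iter=5000, random_state=0).fit(X_tr, y_tr)
    print(solver, round(model.score(X_te, y_te), 3))  # held-out R^2
```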
9

FUKUSHIMA, Kunihiko. "Neocognitron: Deep Convolutional Neural Network." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 27, no. 4 (2015): 115–25. http://dx.doi.org/10.3156/jsoft.27.4_115.

10

Rajesh, C. V. S., and Nadikoppula Pardhasaradhi. "Analysis of Artificial Neural-Network." International Journal of Trend in Scientific Research and Development 2, no. 6 (October 31, 2018): 418–28. http://dx.doi.org/10.31142/ijtsrd18482.

11

Adarsh, R., and Suma. "Neural Network for Financial Forecasting." International Journal of Research Publication and Reviews 5, no. 5 (May 26, 2024): 13455–58. http://dx.doi.org/10.55248/gengpi.5.0524.1476.

12

Kim Soon, Gan, Chin Kim On, Nordaliela Mohd Rusli, Tan Soo Fun, Rayner Alfred, and Tan Tse Guan. "Comparison of simple feedforward neural network, recurrent neural network and ensemble neural networks in phishing detection." Journal of Physics: Conference Series 1502 (March 2020): 012033. http://dx.doi.org/10.1088/1742-6596/1502/1/012033.

13

Jorgensen, Thomas D., Barry P. Haynes, and Charlotte C. F. Norlund. "Pruning Artificial Neural Networks Using Neural Complexity Measures." International Journal of Neural Systems 18, no. 5 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, magnitude-based pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
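For contrast with the paper's complexity-based criterion, here is a minimal sketch of the magnitude-based pruning baseline it benchmarks against (our illustration; the paper ranks connections by an information-theoretic measure instead):

```python
import numpy as np

# Hedged sketch of magnitude-based pruning: zero out the fraction `rate`
# of weights with the smallest absolute values under a binary mask.
def magnitude_prune(weights, rate=0.5):
    threshold = np.quantile(np.abs(weights), rate)  # cutoff for smallest |w|
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W = np.random.default_rng(1).normal(size=(8, 8))    # toy weight matrix
W_pruned, mask = magnitude_prune(W, rate=0.75)
print(f"kept {mask.mean():.0%} of connections")
```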
14

Tetko, Igor V. "Neural Network Studies. 4. Introduction to Associative Neural Networks." Journal of Chemical Information and Computer Sciences 42, no. 3 (March 26, 2002): 717–28. http://dx.doi.org/10.1021/ci010379o.

15

Hylton, Todd. "Thermodynamic Neural Network." Entropy 22, no. 3 (February 25, 2020): 256. http://dx.doi.org/10.3390/e22030256.

Abstract:
A thermodynamically motivated neural network model is described that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal reservoir. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. Isolated networks show multiscale dynamics, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, equilibration and causation to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.
16

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application." Webology 19, no. 1 (January 20, 2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Abstract:
Neural networks, also known as artificial neural networks, are a computing technique designed to simulate the human brain and used as a problem-solving method. Artificial neural networks gain their abilities through training or learning; each training example has an input and an output (also called the result), and learning forms probability-weighted associations between input and result, which are stored across the net's data structure. Training proceeds by identifying the difference between the processed output (usually a prediction) and the real target output, which constitutes an error, and then making a series of adjustments to achieve a proper learning result; this process is called supervised learning. Artificial neural networks have proved themselves in applications in a variety of fields due to their capacity to recreate and simulate nonlinear phenomena: system identification and control (process control, vehicle control, quantum chemistry, trajectory prediction, natural resource management, etc.), as well as face recognition, where they have proved very effective. The neural network has proved to be a very promising technique in many fields due to its accuracy and problem-solving properties.
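The error-then-adjust loop described above is the heart of supervised learning; a minimal sketch with the delta rule on a toy linear task (our illustration, not from the article):

```python
import numpy as np

# Hedged sketch of supervised learning: compare the processed output with
# the target, treat the difference as the error, and adjust the weights a
# little in the direction that reduces it (the delta rule).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
targets = X @ true_w                    # toy supervised targets

w = np.zeros(3)
lr = 0.05
for _ in range(200):
    error = X @ w - targets             # prediction minus target
    w -= lr * X.T @ error / len(X)      # adjustment proportional to error

print(np.round(w, 2))                   # -> approximately [ 1.5 -2.   0.5]
```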
17

Deeba, Farah, She Kun, Fayaz Ali Dharejo, Hameer Langah, and Hira Memon. "Digital Watermarking Using Deep Neural Network." International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 277–82. http://dx.doi.org/10.18178/ijmlc.2020.10.2.932.

18

Sheeba, O., Jithin George, Rajin P. K., Nisha Thomas, and Thomas George. "Glaucoma Detection Using Artificial Neural Network." International Journal of Engineering and Technology 6, no. 2 (2014): 158–61. http://dx.doi.org/10.7763/ijet.2014.v6.687.

19

Mahmood, Suzan A., and Loay E. George. "Speaker Identification Using Backpropagation Neural Network." Journal of Zankoy Sulaimani - Part A 11, no. 1 (September 23, 2007): 61–66. http://dx.doi.org/10.17656/jzs.10181.

20

Chen, Chung-Hsing, and Ko-Wei Huang. "Document Classification Using Lightweight Neural Network." Journal of Internet Technology 24, no. 7 (December 2023): 1505–11. http://dx.doi.org/10.53106/160792642023122407012.

Abstract:
In recent years, OCR data has been used for learning and analyzing document classification. In addition, some neural networks have used image recognition for training on document images, such as the networks published through the ImageNet Large Scale Visual Recognition Challenge: AlexNet, GoogLeNet, and MobileNet. Document image classification is important in data extraction processes and often requires significant computing power, and it is difficult to implement image classification on general computers without a graphics processing unit (GPU). Therefore, this study proposes a lightweight neural network application that can perform document image classification on general computers or Internet of Things (IoT) devices without a GPU. Plustek Inc. provided 3065 receipts belonging to 58 categories. Three samples per category were used as test samples while the remaining were used as training samples to train the network and obtain a classifier. After the experiments, the classifier achieved 98.26% accuracy, and only 3 out of 174 test samples showed errors.
21

Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays." Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Abstract:
The issue of adaptive finite-time cluster synchronization corresponding to neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than that of a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
22

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Abstract:
Tensor networks, as an effective computing framework for efficient processing and analysis of high-dimensional data, have been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, we propose a quantized tensor neural network (QTNN) in this article, which integrates the advantages of neural networks and tensor networks, namely, the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can further be regarded as a generalized multilayer nonlinear tensor network, which can efficiently extract low-dimensional features of the data while maintaining the original structure information. In addition, to more effectively represent the local information of the data, we introduce multiple convolution layers in QTNN to extract local features. We also develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to further evaluate the performance of the proposed models, and the experimental results show that QTNN is simpler and more efficient when compared to classic deep learning models.
23

Herold, Christopher D., Robert L. Fitzgerald, David A. Herold, and Taiwei Lu. "Neural Network." Laboratory Automation News 1, no. 3 (July 1996): 16–17. http://dx.doi.org/10.1177/221106829600100304.

Abstract:
A hybrid neural network (HNN) developed by Physical Optics Corporation (Torrance, CA) is helping a team of scientists with the San Diego Veterans Administration Medical Center and University of California, San Diego Pathology Department automate the detection and identification of Tuberculosis and other mycobacterial infections.
24

Li, Wei, Shaogang Gong, and Xiatian Zhu. "Neural Graph Embedding for Neural Architecture Search." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4707–14. http://dx.doi.org/10.1609/aaai.v34i04.5903.

Abstract:
Existing neural architecture search (NAS) methods often operate in discrete or continuous spaces directly, which ignores the graphical topology knowledge of neural networks. This leads to suboptimal search performance and efficiency, given that neural networks are essentially directed acyclic graphs (DAGs). In this work, we address this limitation by introducing a novel idea of neural graph embedding (NGE). Specifically, we represent the building block (i.e., the cell) of neural networks with a neural DAG, and learn it by leveraging a Graph Convolutional Network to propagate and model the intrinsic topology information of network architectures. This results in a generic neural network representation integrable with different existing NAS frameworks. Extensive experiments show the superiority of NGE over the state-of-the-art methods on image classification and semantic segmentation.
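The graph-convolution step at the heart of the embedding is easy to sketch. Below, one propagation over a toy four-node cell DAG (sizes and graph are our assumptions, not the paper's search space):

```python
import numpy as np

# Hedged sketch of one GCN propagation step, H' = ReLU(A_hat @ H @ W),
# where A_hat is the adjacency with self-loops, row-normalized.
A = np.array([[0, 1, 1, 0],            # edges of a toy 4-node cell DAG
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
A_hat = A + np.eye(4)                  # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)         # row-normalize

H = np.random.default_rng(0).normal(size=(4, 8))  # node (operation) features
W = np.random.default_rng(1).normal(size=(8, 8))  # learnable projection
H_next = np.maximum(0, A_hat @ H @ W)             # propagate + ReLU
print(H_next.shape)                               # -> (4, 8)
```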
25

Shpinareva, Irina M., Anastasia A. Yakushina, Lyudmila A. Voloshchuk, and Nikolay D. Rudnichenko. "Detection and classification of network attacks using the deep neural network cascade." Herald of Advanced Information Technology 4, no. 3 (October 15, 2021): 244–54. http://dx.doi.org/10.15276/hait.03.2021.4.

Abstract:
This article shows the relevance of developing a cascade of deep neural networks for detecting and classifying network attacks, based on an analysis of the practical use of network intrusion detection systems to protect local computer networks. The cascade of deep neural networks consists of two elements. The first network is a hybrid deep neural network that contains convolutional neural network layers and long short-term memory layers to detect attacks. The second network is a CNN convolutional neural network for classifying the most popular classes of network attacks, such as Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. At the stage of tuning and training the cascade of deep neural networks, hyperparameters were selected, which made it possible to improve the quality of the model. Among the available public datasets, the current UNSW-NB15 dataset was selected, which takes modern traffic into account. A data preprocessing technology was developed for the dataset under consideration. The cascade of deep neural networks was trained, tested, and validated on the UNSW-NB15 dataset and then tested on real network traffic, which showed its ability to detect and classify attacks in a computer network. The use of a cascade of deep neural networks, consisting of a hybrid CNN + LSTM neural network and a CNN neural network, improved the accuracy of detecting and classifying attacks in computer networks and reduced the frequency of false alarms in detecting network attacks.
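A hedged sketch of the kind of hybrid CNN + LSTM detector the first stage describes, with Keras as a stand-in for the authors' tooling (layer sizes and the feature count are our assumptions; UNSW-NB15 records are assumed already preprocessed into fixed-length numeric vectors):

```python
from tensorflow.keras import layers, models

n_features = 42                        # assumed preprocessed feature count

detector = models.Sequential([
    layers.Input(shape=(n_features, 1)),
    layers.Conv1D(64, 3, activation="relu"),   # local patterns in the record
    layers.MaxPooling1D(2),
    layers.LSTM(64),                           # longer-range dependencies
    layers.Dense(1, activation="sigmoid"),     # attack vs. normal traffic
])
detector.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=["accuracy"])
detector.summary()
```

The second-stage classifier would be a similar CNN ending in a softmax layer over the nine attack classes listed above.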
26

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter." International Journal of Advanced Robotic Systems 13, no. 6 (November 28, 2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid without any concern about either initial conditions or the range of state variables. In addition, deterministic learning is applied to the neural network learning control, such that the adaptive neural networks are able to store the learned knowledge, which can be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.
27

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

28

Ouyang, Xuming, and Cunguang Feng. "Interpretable Neural Network Construction: From Neural Network to Interpretable Neural Tree." Journal of Physics: Conference Series 1550 (May 2020): 032154. http://dx.doi.org/10.1088/1742-6596/1550/3/032154.

29

Sineglazov, Victor, and Petro Chynnyk. "Quantum Convolution Neural Network." Electronics and Control Systems 2, no. 76 (June 23, 2023): 40–45. http://dx.doi.org/10.18372/1990-5548.76.17667.

Abstract:
In this work, quantum convolutional neural networks are considered for the task of recognizing handwritten digits. A proprietary quantum scheme for the convolutional layer of a quantum convolutional neural network is proposed, as is a proprietary quantum scheme for the pooling layer. The results of training quantum convolutional neural networks are analyzed. The built models were compared and the best one was selected based on the accuracy, recall, precision and F1-score metrics; a comparative analysis with a classic convolutional neural network was made on the same metrics. The object of the study is the task of digit recognition; the subject of the study is the convolutional neural network and the quantum convolutional neural network. The results of this work can be applied in further research on quantum computing for artificial intelligence tasks.
30

Abdi, H. "A Neural Network Primer." Journal of Biological Systems 2, no. 3 (September 1994): 247–81. http://dx.doi.org/10.1142/s0218339094000179.

Abstract:
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., like those used to model spin glasses). One of the main notions used with nonlinear unit networks is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network and the radial basis function network.
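Of the nonlinear models the primer surveys, the Hopfield network is the most compact to demonstrate. A hedged sketch of Hebbian storage and attractor recall, with toy patterns of our own choosing:

```python
import numpy as np

# Hedged sketch of a Hopfield network: store bipolar patterns as a Hebbian
# weight matrix, then let asynchronous updates pull a distorted state into
# the nearest attractor.
patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1,  1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                 # no self-connections

state = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # pattern 0, one bit flipped
for _ in range(5):                     # asynchronous update sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)                           # -> [ 1  1  1  1 -1 -1 -1 -1]
```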
31

Khan, Zakiya Manzoor, et al. "Network Intrusion Detection Using Autoencode Neural Network." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 1678–88. http://dx.doi.org/10.17762/ijritcc.v11i10.8739.

Abstract:
In today's interconnected digital landscape, safeguarding computer networks against unauthorized access and cyber threats is of paramount importance, and NIDS play a crucial role in identifying and mitigating potential security breaches. This research paper explores the application of autoencoder neural networks, a subset of deep learning techniques, in the realm of network intrusion detection. Autoencoder neural networks are known for their ability to learn and represent data in a compressed, low-dimensional form. This study investigates their potential in modeling network traffic patterns and identifying anomalous activities. By training autoencoder networks on both normal and malicious network traffic data, we aim to create effective intrusion detection models that can distinguish between benign and malicious network behavior. The paper provides an in-depth analysis of the architecture and training methodologies of autoencoder neural networks for intrusion detection. It also explores various data preprocessing techniques and feature engineering approaches to enhance the model's performance. Additionally, the research evaluates the robustness and scalability of autoencoder-based NIDS in real-world network environments. Furthermore, ethical considerations in network intrusion detection, including privacy concerns and false positive rates, are discussed, addressing the need for a balanced approach that ensures network security while respecting user privacy and minimizing disruptions. The approach also compresses the majority samples and increases the minority-sample count among difficult samples so that the IDS can achieve greater classification accuracy.
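A hedged sketch of the reconstruction-error idea behind autoencoder NIDS (feature count, threshold rule, and the synthetic "traffic" are our assumptions, not the paper's setup): train on normal records, then flag inputs the network reconstructs poorly.

```python
import numpy as np
from tensorflow.keras import layers, models

n_features = 42
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (1000, n_features))    # stand-in normal traffic

auto = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(16, activation="relu"),         # compressed representation
    layers.Dense(n_features),                    # reconstruction
])
auto.compile(optimizer="adam", loss="mse")
auto.fit(normal, normal, epochs=10, verbose=0)

errors = np.mean((auto.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = np.quantile(errors, 0.99)            # tolerate ~1% false alarms
probe = rng.normal(3, 1, (5, n_features))        # shifted, attack-like records
probe_err = np.mean((auto.predict(probe, verbose=0) - probe) ** 2, axis=1)
print(probe_err > threshold)                     # expected: flagged True
```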
32

Gong, Xiao Lu, Zhi Jian Hu, Meng Lin Zhang, and He Wang. "Wind Power Forecasting Using Wavelet Decomposition and Elman Neural Network." Advanced Materials Research 608-609 (December 2012): 628–32. http://dx.doi.org/10.4028/www.scientific.net/amr.608-609.628.

Abstract:
The relevant data sequences provided by numerical weather prediction are decomposed into different frequency bands using wavelet decomposition for wind power forecasting. Elman neural network models are established for the different frequency bands, and the outputs of the different networks are then combined to obtain the final prediction result. For comparison, an Elman neural network and a BP neural network are used to predict wind power directly. Several error indicators are given to evaluate the prediction results of the three methods. The simulation results show that the Elman neural network can achieve good results and that prediction accuracy can be further improved by using wavelet decomposition at the same time.
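A hedged sketch of the decompose-predict-recombine pipeline using PyWavelets, with a naive last-value forecast standing in for the per-band Elman networks (signal, wavelet, and level are our assumptions):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(256)
power = np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

coeffs = pywt.wavedec(power, "db4", level=3)     # [cA3, cD3, cD2, cD1]

bands = []
for i in range(len(coeffs)):
    # keep one band's coefficients, zero the rest, reconstruct that band
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(kept, "db4"))

# naive per-band "forecast": each band's model is replaced by its last value
forecast = sum(band[-1] for band in bands)
print(round(forecast, 3), round(power[-1], 3))   # bands sum back to the signal
```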
33

Setiono, Rudy. "Feedforward Neural Network Construction Using Cross Validation." Neural Computation 13, no. 12 (December 1, 2001): 2865–77. http://dx.doi.org/10.1162/089976601317098565.

Abstract:
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network's predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples. Extensive experimental results show that the algorithm is effective in obtaining networks with predictive accuracy rates that are better than those obtained by state-of-the-art decision tree methods.
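A hedged sketch of the constructive idea (not the paper's exact algorithm): grow the hidden layer one unit at a time and stop when cross-validated accuracy stops improving. Dataset and ranges are illustrative choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

best_score, best_h = 0.0, 0
for h in range(1, 16):                      # add hidden units one at a time
    clf = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000,
                        random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    if score > best_score:
        best_score, best_h = score, h
    else:
        break                               # a new unit no longer helps

print(f"{best_h} hidden units, CV accuracy {best_score:.3f}")
```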
34

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
35

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks." Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, neural network models have been broadly considered for medical diagnosis, since they are ideal for recognizing different kinds of diseases, including autism, cancer, tumors, lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, the Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN), have been explored. These networks are applied to several benchmark datasets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
36

Stephan, Jane Jaleel, Sahab Dheyaa Mohammed, and Mohammed Khudhair Abbas. "Neural Network Approach to Web Application Protection." International Journal of Information and Education Technology 5, no. 2 (2015): 150–55. http://dx.doi.org/10.7763/ijiet.2015.v5.493.

37

Al-Abaid, Shaimaa Abbas. "Artificial Neural Network Based Image Encryption Technique." Journal of Advanced Research in Dynamical and Control Systems 12, SP3 (February 28, 2020): 1184–89. http://dx.doi.org/10.5373/jardcs/v12sp3/20201365.

38

Dinesh, P. M. "Segmentation of Flowers Group Using Neural Network." Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (July 25, 2020): 601–4. http://dx.doi.org/10.5373/jardcs/v12sp7/20202147.

39

Paul, Eldho. "Plant Leaf Perception Using Convolutional Neural Network." International Journal of Psychosocial Rehabilitation 24, no. 5 (April 20, 2020): 5753–62. http://dx.doi.org/10.37200/ijpr/v24i5/pr2020283.

40

Panda, Subodh, Bikash Swain, and Sandeep Mishra. "Boiler Performance Optimization Using Process Neural Network." Indian Journal of Applied Research 3, no. 7 (July 2013): 298–300. http://dx.doi.org/10.15373/2249555x/july2013/93.

41

Yakovyna, V. S. "Software failures prediction using RBF neural network." Odes’kyi Politechnichnyi Universytet. Pratsi, no. 2 (June 15, 2015): 111–18. http://dx.doi.org/10.15276/opu.2.46.2015.20.

42

Gupta, Sakshi. "Concrete Mix Design Using Artificial Neural Network." Journal on Today's Ideas-Tomorrow's Technologies 1, no. 1 (June 3, 2013): 29–43. http://dx.doi.org/10.15415/jotitt.2013.11003.

43

Ragavi, S. Sree, and S. Ramadevi. "Application of Adaptive Filter in Neural Network." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 1991–94. http://dx.doi.org/10.31142/ijtsrd18176.

44

Nogra, James Arnold, Cherry Lyn Sta Romana, and Elmer Maravillas. "Baybáyin Character Recognition Using Convolutional Neural Network." International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 265–70. http://dx.doi.org/10.18178/ijmlc.2020.10.2.930.

45

Khanarsa, Paisit. "Automatic SARIMA Order Identification Convolutional Neural Network." International Journal of Machine Learning and Computing 10, no. 5 (October 5, 2020): 662–68. http://dx.doi.org/10.18178/ijmlc.2020.10.5.988.

46

Al-Rawi, Kamal R., and Consuelo Gonzalo. "Adaptive Pointing Theory (APT) Artificial Neural Network." International Journal of Computer and Communication Engineering 3, no. 3 (2014): 212–15. http://dx.doi.org/10.7763/ijcce.2014.v3.322.

47

Gao, Wei. "New Evolutionary Neural Network Based on Continuous Ant Colony Optimization." Applied Mechanics and Materials 58-60 (June 2011): 1773–78. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1773.

Abstract:
An evolutionary neural network can be generated by combining an evolutionary optimization algorithm with a neural network. Based on an analysis of the shortcomings of previously proposed evolutionary neural networks, and combining the continuous ant colony optimization proposed by the author with a BP neural network, a new evolutionary neural network whose architecture and connection weights evolve simultaneously is proposed. Finally, on the typical XOR problem, the new evolutionary neural network is compared with a BP neural network and with traditional evolutionary neural networks based on genetic algorithms and evolutionary programming. The computing results show that the precision and efficiency of the new neural network are both better.
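A hedged sketch of the general weight-evolution idea on XOR, using a simple (mu + lambda) evolution strategy rather than the paper's continuous ant colony optimization, and a fixed 2-2-1 architecture (all settings are ours):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])           # XOR targets

def forward(w, x):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]     # 2-2-1 network, 9 parameters
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

rng = np.random.default_rng(0)
pop = rng.normal(0, 1, (30, 9))              # 30 candidate weight vectors
for _ in range(300):
    pop = pop[np.argsort([loss(w) for w in pop])]          # rank by fitness
    children = pop[:10].repeat(3, axis=0) + rng.normal(0, 0.3, (30, 9))
    pop = np.vstack([pop[:5], children[:25]])              # elites + mutants

best = min(pop, key=loss)
print(np.round(forward(best, X)), "loss:", round(loss(best), 4))
```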
48

Kosterin, Maksim A., and Ilya V. Paramonov. "Neural Network-Based Sentiment Classification of Russian Sentences into Four Classes." Modeling and Analysis of Information Systems 29, no. 2 (June 17, 2022): 116–33. http://dx.doi.org/10.18255/1818-1015-2022-2-116-133.

Abstract:
The paper is devoted to the classification of Russian sentences into four classes: positive, negative, mixed, and neutral. Unlike the majority of modern studies in this area, a mixed sentiment class is introduced; mixed-sentiment sentences contain positive and negative sentiments simultaneously. To solve the problem, the following tools were applied: an attention-based LSTM neural network, a dual-attention-based GRU neural network, and the BERT neural network with several modifications of the output layer to provide classification into four classes. An experimental comparison of the efficiency of the various neural networks was performed on three corpora of Russian sentences. Two of them consist of users’ reviews, one with wear reviews and another with hotel reviews; the third corpus contains news from Russian media. The highest weighted F-measure in the experiments (0.90) was achieved when using BERT on the wear reviews corpus, as were the highest weighted F-measures for positive and negative sentences (0.92 and 0.93, respectively). The best classification results for neutral and mixed sentences were achieved on the news corpus, where the F-measures were 0.72 and 0.58, respectively. The experiments demonstrated the significant superiority of the BERT transformer network over the older LSTM and GRU neural networks, especially for the classification of sentences with weakly expressed sentiments. The error analysis showed that “adjacent” classes (positive/negative and mixed) are classified worse with BERT than “opposite” classes (positive and negative, neutral and mixed).
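A hedged sketch of the four-class BERT setup in Hugging Face Transformers (the checkpoint name and label order are our assumptions; the paper modifies the output layer of a BERT model to produce four classes):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "DeepPavlov/rubert-base-cased"        # an assumed Russian BERT
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=4)                      # positive/negative/mixed/neutral

# "The hotel is good, but the breakfast is awful." -- a mixed-sentiment probe
batch = tokenizer(["Отель хороший, но завтрак ужасный."],
                  return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**batch).logits           # shape (1, 4)
print(logits.softmax(-1))                    # untrained head: near-uniform
```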
49

Kalinin, Maxim, Vasiliy Krundyshev, and Evgeny Zubkov. "Estimation of applicability of modern neural network methods for preventing cyberthreats to self-organizing network infrastructures of digital economy platforms." SHS Web of Conferences 44 (2018): 00044. http://dx.doi.org/10.1051/shsconf/20184400044.

Abstract:
The problems of applying neural network methods to prevent cyberthreats to the flexible self-organizing network infrastructures of digital economy platforms (vehicular ad hoc networks, wireless sensor networks, industrial IoT, “smart buildings” and “smart cities”) are considered. The applicability of the classic perceptron neural network, recurrent, deep and LSTM neural networks, and neural network ensembles is estimated under the restricting conditions of fast training and big data processing. The use of neural networks with a complex architecture, namely recurrent and LSTM neural networks, is experimentally justified for building an intrusion detection system for self-organizing network infrastructures.
50

van Drongelen, Wim. "Modeling Neural Activity." ISRN Biomathematics 2013 (March 7, 2013): 1–37. http://dx.doi.org/10.1155/2013/871472.

Abstract:
This paper provides an overview of different types of models for studying activity of nerve cells and their networks with a special emphasis on neural oscillations. One part describes the neuronal models based on the Hodgkin and Huxley formalism first described in the 1950s. It is discussed how further simplifications of this formalism enable mathematical analysis of the process of neural excitability. The focus of the paper’s second component is on network activity. Understanding network function is one of the important frontiers remaining in neuroscience. At present, experimental techniques can only provide global recordings or samples of the activity of the huge networks that form the nervous system. Models in neuroscience can therefore play a critical role by providing a framework for integration of necessarily incomplete datasets, thereby providing insight into the mechanisms of neural function. Network models can either explicitly contain individual network nodes that model the neurons, or they can be based on representations of compound population activity. The latter approach was pioneered by Wilson and Cowan in the 1970s. Finally, I provide an overview and discuss how network models are employed in the study of neuronal network pathology, such as epilepsy.
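Of the population-level models the review covers, the Wilson-Cowan formulation is compact enough to sketch. A hedged example integrating the two coupled equations (parameter values are illustrative choices that produce oscillations, not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    return 1 / (1 + np.exp(-x))             # sigmoid population response

def wilson_cowan(t, state, wEE=16, wEI=12, wIE=15, wII=3, P=1.25, Q=0):
    E, I = state                             # excitatory / inhibitory activity
    dE = -E + f(wEE * E - wEI * I + P)
    dI = -I + f(wIE * E - wII * I + Q)
    return [dE, dI]

sol = solve_ivp(wilson_cowan, (0, 50), [0.1, 0.05], max_step=0.05)
E = sol.y[0]
print("E range:", round(E.min(), 3), "to", round(E.max(), 3))  # oscillates
```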