Follow this link to see other types of publications on the topic: Neural network.

Journal articles on the topic "Neural network"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 journal articles for your research on the topic "Neural network".

Next to every source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is available in the metadata.

Browse journal articles from many scientific fields and compile an accurate bibliography.

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach". Indian Journal Of Science And Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

Full text
2

Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks". Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

Full text
3

Tran, Loc. "Directed Hypergraph Neural Network". Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (March 31, 2020): 1434–41. http://dx.doi.org/10.5373/jardcs/v12sp4/20201622.

Full text
4

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network". Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Full text
Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch-Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
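The recall principle this abstract describes can be illustrated in a few lines. This is only a sketch of nearest-pattern recall by Hamming distance, with made-up ±1 patterns, not the authors' McCulloch-Pitts scheme:

```python
# Illustrative sketch: recall the stored pattern closest in Hamming
# distance to a (possibly distorted) probe pattern.

def hamming(a, b):
    """Number of positions where two +/-1 patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def recall(probe, patterns):
    """Return the stored pattern with minimal Hamming distance to the probe."""
    return min(patterns, key=lambda p: hamming(probe, p))

patterns = [
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
]
probe = [1, 1, 1, -1, -1, -1, -1, -1]   # first pattern with one bit flipped
print(recall(probe, patterns))           # recovers the first stored pattern
```

With distortion below the critical level, the minimal-distance pattern is the original; the paper's contribution is realizing this rule as a dynamical network with the stored patterns as attractors.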
5

Perfetti, R. "A neural network to design neural networks". IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

Full text
6

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks". 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Full text
Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of the capsule network. Secondly, we analyze and summarize recent improvements in the dynamic routing mechanism and network structure of the capsule network, as well as combinations of the capsule network with other network structures. Finally, we compile the applications of the capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
7

D, Sreekanth. "Metro Water Fraudulent Prediction in Houses Using Convolutional Neural Network and Recurrent Neural Network". Revista Gestão Inovação e Tecnologias 11, no. 4 (July 10, 2021): 1177–87. http://dx.doi.org/10.47059/revistageintec.v11i4.2177.

Full text
8

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance". Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Full text
Abstract:
Predicting students’ academic performance is essential to producing high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students from the UCI Machine Learning Repository were used to train the neural networks, and the neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three feed-forward neural networks, referred to as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), gradient descent with momentum (TRAINGDM) and gradient descent with adaptive learning rate (TRAINGDA). Each training function is compared based on performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower performance value.
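The prediction setup described here (two period grades in, final grade out) can be sketched without MATLAB. The data below is made up, and plain batch gradient descent stands in for the paper's TRAINLM/TRAINGDM/TRAINGDA training functions:

```python
# Toy sketch: fit final grade from two period grades by gradient descent
# on squared error. Data is invented (0-20 grade scale), not the UCI set.
data = [
    ((10.0, 12.0), 11.0),
    ((15.0, 14.0), 14.5),
    ((8.0, 9.0), 8.5),
    ((18.0, 17.0), 17.5),
]

w1, w2, b = 0.0, 0.0, 0.0
lr = 0.001
for _ in range(20000):                  # plain batch gradient descent
    g1 = g2 = gb = 0.0
    for (x1, x2), y in data:
        err = (w1 * x1 + w2 * x2 + b) - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    w1 -= lr * g1 / len(data)
    w2 -= lr * g2 / len(data)
    b -= lr * gb / len(data)

predict = lambda x1, x2: w1 * x1 + w2 * x2 + b
print(round(predict(10.0, 12.0), 2))    # close to the target grade of 11
```

TRAINGDM and TRAINGDA refine this same loop with momentum and an adaptive learning rate, while TRAINLM replaces it with Levenberg-Marquardt second-order updates.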
9

FUKUSHIMA, Kunihiko. "Neocognitron: Deep Convolutional Neural Network". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 27, no. 4 (2015): 115–25. http://dx.doi.org/10.3156/jsoft.27.4_115.

Full text
10

CVS, Rajesh, and Nadikoppula Pardhasaradhi. "Analysis of Artificial Neural-Network". International Journal of Trend in Scientific Research and Development 2, no. 6 (October 31, 2018): 418–28. http://dx.doi.org/10.31142/ijtsrd18482.

Full text
11

R, Adarsh, and Dr Suma. "Neural Network for Financial Forecasting". International Journal of Research Publication and Reviews 5, no. 5 (May 26, 2024): 13455–58. http://dx.doi.org/10.55248/gengpi.5.0524.1476.

Full text
12

Kim Soon, Gan, Chin Kim On, Nordaliela Mohd Rusli, Tan Soo Fun, Rayner Alfred, and Tan Tse Guan. "Comparison of simple feedforward neural network, recurrent neural network and ensemble neural networks in phishing detection". Journal of Physics: Conference Series 1502 (March 2020): 012033. http://dx.doi.org/10.1088/1742-6596/1502/1/012033.

Full text
13

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES". International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Full text
Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
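The baseline the paper compares against, Magnitude Based Pruning, is simple enough to sketch. This is only an illustration of that baseline on a flat weight list, not the authors' complexity-based method, which ranks connections by an information-theoretic measure instead of |w|:

```python
# Magnitude-based pruning sketch: zero out the given fraction of weights
# with the smallest absolute value, keeping the rest untouched.

def magnitude_prune(weights, fraction):
    """Return a copy of `weights` with the smallest-|w| fraction set to 0."""
    n_prune = int(len(weights) * fraction)
    # |w| value of the n_prune-th smallest weight (cutoff), if any
    threshold = sorted(abs(w) for w in weights)[n_prune - 1] if n_prune else None
    pruned, removed = [], 0
    for w in weights:
        if removed < n_prune and abs(w) <= threshold:
            pruned.append(0.0)          # connection removed
            removed += 1
        else:
            pruned.append(w)            # connection kept
    return pruned

print(magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7], 0.4))
# -> [0.9, 0.0, 0.4, 0.0, -0.7]
```

The `removed` counter guarantees exactly the requested number of weights is dropped even when several weights tie at the cutoff magnitude.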
14

Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays". Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Full text
Abstract:
The issue of adaptive finite-time cluster synchronization for neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
15

Tetko, Igor V. "Neural Network Studies. 4. Introduction to Associative Neural Networks". Journal of Chemical Information and Computer Sciences 42, no. 3 (March 26, 2002): 717–28. http://dx.doi.org/10.1021/ci010379o.

Full text
16

Hylton, Todd. "Thermodynamic Neural Network". Entropy 22, no. 3 (February 25, 2020): 256. http://dx.doi.org/10.3390/e22030256.

Full text
Abstract:
A thermodynamically motivated neural network model is described that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal reservoir. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. Isolated networks show multiscale dynamics, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, equilibration and causation to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.
17

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application". Webology 19, no. 1 (January 20, 2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Full text
Abstract:
Neural networks, also known as artificial neural networks, are a computing technique designed to simulate the human brain and serve as a problem-solving method. Artificial neural networks gain their abilities through training or learning; each training example has a certain input and output (also called a result). Learning works by forming probability-weighted associations between inputs and results, which are stored within the net's data structure. The training process depends on the difference between the processed output (usually a prediction) and the real target output, which constitutes an error; a series of adjustments is then made to obtain a proper learning result. This process is called supervised learning. Artificial neural networks have proved themselves in applications in a variety of fields due to their capacity to recreate and simulate nonlinear phenomena: system identification and control (process control, vehicle control, quantum chemistry, trajectory prediction, natural resource management, etc.), as well as face recognition, where they have proved very effective. The neural network has shown itself to be a very promising technique in many fields due to its accuracy and problem-solving properties.
18

Deeba, Farah, She Kun, Fayaz Ali Dharejo, Hameer Langah, and Hira Memon. "Digital Watermarking Using Deep Neural Network". International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 277–82. http://dx.doi.org/10.18178/ijmlc.2020.10.2.932.

Full text
19

O., Sheeba, Jithin George, Rajin P. K., Nisha Thomas, and Thomas George. "Glaucoma Detection Using Artificial Neural Network". International Journal of Engineering and Technology 6, no. 2 (2014): 158–61. http://dx.doi.org/10.7763/ijet.2014.v6.687.

Full text
20

Mahmood, Suzan A., and Loay E. George. "Speaker Identification Using Backpropagation Neural Network". Journal of Zankoy Sulaimani - Part A 11, no. 1 (September 23, 2007): 61–66. http://dx.doi.org/10.17656/jzs.10181.

Full text
21

Chen, Chung-Hsing, and Ko-Wei Huang. "Document Classification Using Lightweight Neural Network". 網際網路技術學刊 24, no. 7 (December 2023): 1505–11. http://dx.doi.org/10.53106/160792642023122407012.

Full text
Abstract:
In recent years, OCR data has been used for learning and analyzing document classification. In addition, some neural networks have used image recognition for training, such as the networks published by the ImageNet Large Scale Visual Recognition Challenge and used for document image training: AlexNet, GoogleNet, and MobileNet. Document image classification is important in data extraction processes and often requires significant computing power. Furthermore, it is difficult to implement image classification using general computers without a graphics processing unit (GPU). Therefore, this study proposes a lightweight neural network application that can perform document image classification on general computers or the Internet of Things (IoT) without a GPU. Plustek Inc. provided 3065 receipts belonging to 58 categories. Three datasets were considered as test samples while the remaining were considered as training samples to train the network to obtain a classifier. After the experiments, the classifier achieved 98.26% accuracy, and only 3 out of 174 samples showed errors.
22

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network". ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Full text
Abstract:
The tensor network, as an effective computing framework for efficient processing and analysis of high-dimensional data, has been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, we propose a quantized tensor neural network (QTNN) in this article, which integrates the advantages of neural networks and tensor networks, namely, the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can be further regarded as a generalized multilayer nonlinear tensor network, which can efficiently extract low-dimensional features of the data while maintaining the original structure information. In addition, to more effectively represent the local information of data, we introduce multiple convolution layers in QTNN to extract the local features. We also develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to further evaluate the performance of the proposed models, and the experimental results show that QTNN is simpler and more efficient when compared to the classic deep learning models.
23

Herold, Christopher D., Robert L. Fitzgerald, David A. Herold, and Taiwei Lu. "Neural Network". Laboratory Automation News 1, no. 3 (July 1996): 16–17. http://dx.doi.org/10.1177/221106829600100304.

Full text
Abstract:
A hybrid neural network (HNN) developed by Physical Optics Corporation (Torrance, CA) is helping a team of scientists with the San Diego Veterans Administration Medical Center and University of California, San Diego Pathology Department automate the detection and identification of Tuberculosis and other mycobacterial infections.
24

Li, Wei, Shaogang Gong, and Xiatian Zhu. "Neural Graph Embedding for Neural Architecture Search". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4707–14. http://dx.doi.org/10.1609/aaai.v34i04.5903.

Full text
Abstract:
Existing neural architecture search (NAS) methods often operate in discrete or continuous spaces directly, which ignores the graphical topology knowledge of neural networks. This leads to suboptimal search performance and efficiency, given that neural networks are essentially directed acyclic graphs (DAGs). In this work, we address this limitation by introducing a novel idea of neural graph embedding (NGE). Specifically, we represent the building block (i.e., the cell) of neural networks with a neural DAG, and learn it by leveraging a Graph Convolutional Network to propagate and model the intrinsic topology information of network architectures. This results in a generic neural network representation integrable with different existing NAS frameworks. Extensive experiments show the superiority of NGE over the state-of-the-art methods on image classification and semantic segmentation.
25

Shpinareva, Irina M., Anastasia A. Yakushina, Lyudmila A. Voloshchuk, and Nikolay D. Rudnichenko. "Detection and classification of network attacks using the deep neural network cascade". Herald of Advanced Information Technology 4, no. 3 (October 15, 2021): 244–54. http://dx.doi.org/10.15276/hait.03.2021.4.

Full text
Abstract:
This article shows the relevance of developing a cascade of deep neural networks for detecting and classifying network attacks, based on an analysis of the practical use of network intrusion detection systems to protect local computer networks. The cascade of deep neural networks consists of two elements. The first network is a hybrid deep neural network that contains convolutional neural network layers and long short-term memory layers to detect attacks. The second network is a CNN convolutional neural network for classifying the most popular classes of network attacks, such as Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. At the stage of tuning and training the cascade of deep neural networks, hyperparameters were selected, which made it possible to improve the quality of the model. Among the available public datasets, the current UNSW-NB15 dataset was selected, taking modern traffic into account. For the dataset under consideration, a data preprocessing technology was developed. The cascade of deep neural networks was trained, tested, and validated on the UNSW-NB15 dataset. The cascade was also tested on real network traffic, which demonstrated its ability to detect and classify attacks in a computer network. The use of a cascade of deep neural networks, consisting of a hybrid CNN + LSTM neural network and a CNN neural network, has improved the accuracy of detecting and classifying attacks in computer networks and reduced the frequency of false alarms in detecting network attacks.
26

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter". International Journal of Advanced Robotic Systems 13, no. 6 (November 28, 2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Full text
Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of the helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid without any concern about either initial conditions or the range of state variables. In addition, deterministic learning is applied to the neural network learning control, such that the adaptive neural networks are able to store the learned knowledge, which can be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.
27

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks". International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

Full text
28

Ouyang, Xuming, and Cunguang Feng. "Interpretable Neural Network Construction: From Neural Network to Interpretable Neural Tree". Journal of Physics: Conference Series 1550 (May 2020): 032154. http://dx.doi.org/10.1088/1742-6596/1550/3/032154.

Full text
29

Sineglazov, Victor, and Petro Chynnyk. "Quantum Convolution Neural Network". Electronics and Control Systems 2, no. 76 (June 23, 2023): 40–45. http://dx.doi.org/10.18372/1990-5548.76.17667.

Full text
Abstract:
In this work, quantum convolutional neural networks are considered for the task of recognizing handwritten digits. Proprietary quantum circuits are proposed for the convolutional layer and for the pooling layer of a quantum convolutional neural network. The results of training the quantum convolutional neural networks are analyzed. The built models were compared, and the best one was selected based on the accuracy, recall, precision and f1-score metrics. A comparative analysis was also made with a classic convolutional neural network based on the same metrics. The object of the study is the task of digit recognition; the subject of the research is the convolutional neural network and the quantum convolutional neural network. The results of this work can be applied in further research on quantum computing for artificial intelligence tasks.
30

ABDI, H. "A NEURAL NETWORK PRIMER". Journal of Biological Systems 02, no. 03 (September 1994): 247–81. http://dx.doi.org/10.1142/s0218339094000179.

Full text
Abstract:
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., like those used to model spin glasses). One of the main notions used with nonlinear unit networks is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network and the radial basis function network.
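The linear associator the primer presents is compact enough to sketch: Hebbian learning stores input-output pairs as a sum of outer products, and recall is a matrix-vector product. A toy illustration (not the primer's own code) with orthogonal unit-length inputs, for which recall is exact:

```python
# Linear associator sketch: Hebbian rule W += y x^T over all (x, y) pairs,
# recall by y_hat = W x. Orthogonal unit inputs give perfect recall.

def train(pairs, n_in, n_out):
    """Accumulate outer products of each output with its input."""
    W = [[0.0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            for j in range(n_in):
                W[i][j] += y[i] * x[j]
    return W

def associate(W, x):
    """Matrix-vector product: the network's response to input x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

pairs = [([1.0, 0.0], [2.0, 3.0]), ([0.0, 1.0], [5.0, 1.0])]
W = train(pairs, n_in=2, n_out=2)
print(associate(W, [1.0, 0.0]))   # recovers [2.0, 3.0]
```

With correlated inputs the stored associations interfere, which is where the eigen-analysis the primer describes becomes the natural tool.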
31

Khan, Zakiya Manzoor, et al. "Network Intrusion Detection Using Autoencode Neural Network". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 1678–88. http://dx.doi.org/10.17762/ijritcc.v11i10.8739.

Full text
Abstract:
In today's interconnected digital landscape, safeguarding computer networks against unauthorized access and cyber threats is of paramount importance, and NIDS play a crucial role in identifying and mitigating potential security breaches. This research paper explores the application of autoencoder neural networks, a subset of deep learning techniques, to network intrusion detection. Autoencoder neural networks are known for their ability to learn and represent data in a compressed, low-dimensional form. This study investigates their potential for modeling network traffic patterns and identifying anomalous activities. By training autoencoder networks on both normal and malicious network traffic data, we aim to create effective intrusion detection models that can distinguish between benign and malicious network behavior. The paper provides an in-depth analysis of the architecture and training methodologies of autoencoder neural networks for intrusion detection. It also explores various data preprocessing techniques and feature engineering approaches to enhance the model's performance. Additionally, the research evaluates the robustness and scalability of autoencoder-based NIDS in real-world network environments. Furthermore, ethical considerations in network intrusion detection, including privacy concerns and false positive rates, are discussed, addressing the need for a balanced approach that ensures network security while respecting user privacy and minimizing disruptions. The proposed approach compresses the majority samples and increases the minority sample count among tough samples so that the IDS can achieve greater classification accuracy.
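The detection rule behind an autoencoder-based NIDS can be sketched independently of any deep learning library: score each record by reconstruction error and flag it when the error exceeds a threshold calibrated on benign traffic. Here the "autoencoder" is a deliberately crude stand-in (reconstruct every record as the benign mean); a real system would use a trained encoder/decoder, and the feature vectors below are invented:

```python
# Anomaly rule sketch: flag records whose reconstruction error exceeds
# a threshold calibrated on benign traffic. Toy 2-feature vectors.
benign = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
mean = [sum(col) / len(benign) for col in zip(*benign)]

def reconstruction_error(x):
    """Squared error between a record and its (stand-in) reconstruction."""
    return sum((xi - mi) ** 2 for xi, mi in zip(x, mean))

# Threshold: worst benign error plus a safety margin.
threshold = max(reconstruction_error(x) for x in benign) * 1.5

def is_intrusion(x):
    return reconstruction_error(x) > threshold

print(is_intrusion([0.9, 0.8]))    # far from benign traffic -> True
print(is_intrusion([0.15, 0.15]))  # typical benign record -> False
```

The autoencoder's role in the paper is precisely to make `reconstruction_error` small for traffic resembling what it was trained on and large for everything else.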
32

Gong, Xiao Lu, Zhi Jian Hu, Meng Lin Zhang, and He Wang. "Wind Power Forecasting Using Wavelet Decomposition and Elman Neural Network". Advanced Materials Research 608-609 (December 2012): 628–32. http://dx.doi.org/10.4028/www.scientific.net/amr.608-609.628.

Full text
Abstract:
The relevant data sequences provided by numerical weather prediction are decomposed into different frequency bands using wavelet decomposition for wind power forecasting. Elman neural network models are established for each frequency band, and the outputs of the different networks are then combined to obtain the final prediction. For comparison, an Elman neural network and a BP neural network are used to predict wind power directly. Several error indicators are given to evaluate the prediction results of the three methods. The simulation results show that the Elman neural network achieves good results and that prediction accuracy can be further improved by simultaneously applying wavelet decomposition.
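The decomposition step can be sketched with a one-level Haar split: the series is separated into a low-frequency (approximation) band and a high-frequency (detail) band, each of which would then be forecast by its own Elman network. This is only an illustration with made-up values; the paper does not specify the wavelet basis used:

```python
# One-level Haar wavelet split and its inverse (perfect reconstruction).
import math

def haar_split(series):
    """Pairwise scaled sums (approximation) and differences (detail)."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(series[0::2], series[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(series[0::2], series[1::2])]
    return approx, detail

def haar_merge(approx, detail):
    """Inverse transform: rebuild the original series from the two bands."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

wind = [3.0, 4.0, 2.0, 6.0, 5.0, 5.0]   # toy wind-power series
a, d = haar_split(wind)
print(haar_merge(a, d))                  # recovers the original series
```

In the paper's scheme, each band is predicted separately and the per-band forecasts are recombined (as `haar_merge` does here) to form the final wind power forecast.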
33

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks". Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Full text
Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, neural network models have been broadly considered for medical diagnosis, since they are ideal for recognizing different kinds of diseases, including autism, cancer, tumors, lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, the Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN), have been explored. These networks are applied to several benchmark datasets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
34

Setiono, Rudy. "Feedforward Neural Network Construction Using Cross Validation". Neural Computation 13, no. 12 (December 1, 2001): 2865–77. http://dx.doi.org/10.1162/089976601317098565.

Full text
Abstract:
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network's predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples. Extensive experimental results show that the algorithm is effective in obtaining networks with predictive accuracy rates that are better than those obtained by state-of-the-art decision tree methods.
APA, Harvard, Vancouver, ISO and other styles
35

Marton, Sascha, Stefan Lüdtke and Christian Bartelt. "Explanations for Neural Networks by Neural Networks". Applied Sciences 12, no. 3 (18 January 2022): 980. http://dx.doi.org/10.3390/app12030980.

Full text
Abstract (summary):
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model's adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front; the subsequent generation of an explanation then only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
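For contrast, the sample-based global-surrogate baseline that the abstract criticizes can be sketched in a few lines: query the network at sample points and fit a low-order polynomial to its outputs. Here `network` is a hypothetical stand-in whose learned function happens to be exactly quadratic, so three samples determine the surrogate exactly (via Newton divided differences):

```python
def network(x):
    # Hypothetical stand-in for a trained network's scalar output.
    return 2 * x * x - 3 * x + 1

def fit_quadratic(f, xs):
    # Fit a*x^2 + b*x + c through three sampled points of f.
    x0, x1, x2 = xs
    y0, y1, y2 = f(x0), f(x1), f(x2)
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    # Expand the Newton form y0 + d1*(x-x0) + d2*(x-x0)*(x-x1).
    a = d2
    b = d1 - d2 * (x0 + x1)
    c = y0 - d1 * x0 + d2 * x0 * x1
    return a, b, c

print(fit_quadratic(network, (0.0, 1.0, 2.0)))  # -> (2.0, -3.0, 1.0)
```

A real surrogate would need many samples and a least-squares fit; the I-Net approach replaces this per-explanation sampling with a single learned mapping from weights to symbols.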
APA, Harvard, Vancouver, ISO and other styles
36

Gao, Wei. "New Evolutionary Neural Network Based on Continuous Ant Colony Optimization". Applied Mechanics and Materials 58-60 (June 2011): 1773–78. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1773.

Full text
Abstract (summary):
An evolutionary neural network can be generated by combining an evolutionary optimization algorithm with a neural network. Based on an analysis of the shortcomings of previously proposed evolutionary neural networks, and combining the continuous ant colony optimization proposed by the author with the BP neural network, a new evolutionary neural network whose architecture and connection weights evolve simultaneously is proposed. Finally, on the typical XOR problem, the new evolutionary neural network is compared with the BP neural network and with traditional evolutionary neural networks based on the genetic algorithm and evolutionary programming. The computational results show that the new network is better in both precision and efficiency.
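A toy sketch of the "evolve the connection weights" idea on the XOR problem mentioned above: a fixed 2-2-1 tanh network whose nine weights are tuned by a simple (1+1) mutation hill-climb. This is only an illustration; the paper's actual method uses continuous ant colony optimization and evolves the architecture as well.

```python
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Fixed 2-2-1 architecture: two tanh hidden units, one tanh output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(9)]
initial_error = mse(w)
for _ in range(3000):
    cand = [wi + random.gauss(0, 0.3) for wi in w]
    if mse(cand) < mse(w):  # keep a mutated weight vector only if it helps
        w = cand
final_error = mse(w)
```

Unlike gradient-based BP training, this search needs only function evaluations, which is what makes architecture and weights jointly evolvable.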
APA, Harvard, Vancouver, ISO and other styles
37

Miao, Lu, Wei Fan, Yu Liu, Yingjie Qin, Deyang Chen and Jiayan Cui. "Optimization of PSO-BP neural network for short-term wind power prediction". International Journal of Low-Carbon Technologies 19 (2024): 2687–92. http://dx.doi.org/10.1093/ijlct/ctae234.

Full text
Abstract (summary):
This paper uses a back propagation (BP) neural network to predict short-term wind power. Since the initial weights and thresholds of a BP neural network significantly impact its performance, we use an optimized particle swarm optimization (PSO) algorithm to obtain these critical parameters. Specifically, we optimize the PSO itself to make it easier to find good parameters. The experimental results show that the mean relative errors (MRE) of the plain BP neural network are 11.91%, 15.18%, and 8.56%, while those of the optimized BP neural network are 5.09%, 7.21%, and 4.44%.
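The role PSO plays here can be illustrated independently of the BP network. A hedged sketch in which a 2-D sphere function stands in for the real objective (the training error of a BP network initialized with a candidate weight/threshold vector); the inertia 0.7 and acceleration 1.5 are conventional PSO choices, not the paper's tuned values:

```python
import random

def objective(p):
    # Stand-in objective; the real one would train a BP network
    # initialized with p and return its training error.
    return p[0] ** 2 + p[1] ** 2

random.seed(1)
DIM, N, ITERS = 2, 10, 100
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # each particle's best position
gbest = min(pbest, key=objective)[:]        # swarm's best position
initial = objective(gbest)

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
```

The final `gbest` would then seed the BP network's initial weights before gradient training begins.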
APA, Harvard, Vancouver, ISO and other styles
38

Labinsky, Alexander. "NEURAL NETWORK APPROACH TO COGNITIVE MODELING". MONITORING AND EXPERTISE IN SAFETY SYSTEM 2024, no. 3 (22 October 2024): 38–44. http://dx.doi.org/10.61260/2304-0130-2024-3-38-44.

Full text
Abstract (summary):
Some features of cognitive modeling are presented, including the prerequisites for a cognitive approach to solving complex problems. Cognitive modeling involves the use of various artificial neural networks, including convolutional neural networks. A classification of artificial neural networks according to various characteristics is given. The features of self-organizing neural networks and of networks using deep learning methods are considered. The following are considered in detail: an artificial neural network that is a three-layer unidirectional feedforward network; the interface of a computer program that uses this network to approximate functions; and the solution of an image recognition problem using an artificial convolutional neural network, in which the network parameters are adjusted for each recognizable image fragment in order to filter the image adaptively. The analysis of images in video surveillance systems to detect fires allows them to be detected at an early stage and thus prevents fire propagation.
APA, Harvard, Vancouver, ISO and other styles
39

Stephan, Jane Jaleel, Sahab Dheyaa Mohammed and Mohammed Khudhair Abbas. "Neural Network Approach to Web Application Protection". International Journal of Information and Education Technology 5, no. 2 (2015): 150–55. http://dx.doi.org/10.7763/ijiet.2015.v5.493.

Full text
APA, Harvard, Vancouver, ISO and other styles
40

Al-Abaid, Shaimaa Abbas. "Artificial Neural Network Based Image Encryption Technique". Journal of Advanced Research in Dynamical and Control Systems 12, SP3 (28 February 2020): 1184–89. http://dx.doi.org/10.5373/jardcs/v12sp3/20201365.

Full text
APA, Harvard, Vancouver, ISO and other styles
41

P.M, Dr Dinesh. "Segmentation of Flowers Group Using Neural Network". Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (25 July 2020): 601–4. http://dx.doi.org/10.5373/jardcs/v12sp7/20202147.

Full text
APA, Harvard, Vancouver, ISO and other styles
42

Paul, Eldho. "Plant Leaf Perception Using Convolutional Neural Network". International Journal of Psychosocial Rehabilitation 24, no. 5 (20 April 2020): 5753–62. http://dx.doi.org/10.37200/ijpr/v24i5/pr2020283.

Full text
APA, Harvard, Vancouver, ISO and other styles
43

Panda, Subodh, Bikash Swain and Sandeep Mishra. "Boiler Performance Optimization Using Process Neural Network". Indian Journal of Applied Research 3, no. 7 (1 October 2011): 298–300. http://dx.doi.org/10.15373/2249555x/july2013/93.

Full text
APA, Harvard, Vancouver, ISO and other styles
44

Yakovyna, V. S. "Software failures prediction using RBF neural network". Odes’kyi Politechnichnyi Universytet. Pratsi, no. 2 (15 June 2015): 111–18. http://dx.doi.org/10.15276/opu.2.46.2015.20.

Full text
APA, Harvard, Vancouver, ISO and other styles
45

Gupta, Sakshi. "Concrete Mix Design Using Artificial Neural Network". Journal on Today's Ideas-Tomorrow's Technologies 1, no. 1 (3 June 2013): 29–43. http://dx.doi.org/10.15415/jotitt.2013.11003.

Full text
APA, Harvard, Vancouver, ISO and other styles
46

Ragavi, S. Sree, and S. Ramadevi. "Application of Adaptive Filter in Neural Network". International Journal of Trend in Scientific Research and Development Volume-2, Issue-5 (31 August 2018): 1991–94. http://dx.doi.org/10.31142/ijtsrd18176.

Full text
APA, Harvard, Vancouver, ISO and other styles
47

Nogra, James Arnold, Cherry Lyn Sta Romana and Elmer Maravillas. "Baybáyin Character Recognition Using Convolutional Neural Network". International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 265–70. http://dx.doi.org/10.18178/ijmlc.2020.10.2.930.

Full text
APA, Harvard, Vancouver, ISO and other styles
48

Khanarsa, Paisit. "Automatic SARIMA Order Identification Convolutional Neural Network". International Journal of Machine Learning and Computing 10, no. 5 (5 October 2020): 662–68. http://dx.doi.org/10.18178/ijmlc.2020.10.5.988.

Full text
APA, Harvard, Vancouver, ISO and other styles
49

Al-Rawi, Kamal R., and Consuelo Gonzalo. "Adaptive Pointing Theory (APT) Artificial Neural Network". International Journal of Computer and Communication Engineering 3, no. 3 (2014): 212–15. http://dx.doi.org/10.7763/ijcce.2014.v3.322.

Full text
APA, Harvard, Vancouver, ISO and other styles
50

Kosterin, Maksim A., and Ilya V. Paramonov. "Neural Network-Based Sentiment Classification of Russian Sentences into Four Classes". Modeling and Analysis of Information Systems 29, no. 2 (17 June 2022): 116–33. http://dx.doi.org/10.18255/1818-1015-2022-2-116-133.

Full text
Abstract (summary):
The paper is devoted to the classification of Russian sentences into four classes: positive, negative, mixed, and neutral. Unlike the majority of modern studies in this area, a mixed sentiment class is introduced. Mixed sentiment sentences contain positive and negative sentiments simultaneously. To solve the problem, the following tools were applied: the attention-based LSTM neural network, the dual attention-based GRU neural network, and the BERT neural network with several modifications of the output layer to provide classification into four classes. An experimental comparison of the efficiency of the various neural networks was performed on three corpora of Russian sentences. Two of them consist of users’ reviews: one with wear reviews and another with hotel reviews. The third corpus contains news from Russian media. The highest weighted F-measure in the experiments (0.90) was achieved when using BERT on the wear reviews corpus, as well as the highest weighted F-measures for positive and negative sentences (0.92 and 0.93, respectively). The best classification results for neutral and mixed sentences were achieved on the news corpus, with F-measures of 0.72 and 0.58, respectively. The experiments demonstrated the significant superiority of the BERT network over the older LSTM and GRU networks, especially for the classification of sentences with weakly expressed sentiments. The error analysis showed that “adjacent” classes (positive/negative and mixed) are classified worse with BERT than “opposite” classes (positive and negative, neutral and mixed).
APA, Harvard, Vancouver, ISO and other styles
