Journal articles on the topic 'RNN NETWORK'

To see the other types of publications on this topic, follow the link: RNN NETWORK.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'RNN NETWORK.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Yin, Qiwei, Ruixun Zhang, and XiuLi Shao. "CNN and RNN mixed model for image classification." MATEC Web of Conferences 277 (2019): 02001. http://dx.doi.org/10.1051/matecconf/201927702001.

Full text
Abstract:
In this paper, we propose a mixed CNN (convolutional neural network) and RNN (recurrent neural network) model for image classification, called the CNN-RNN model. Image data can be viewed as two-dimensional wave data, and convolution is a filtering process: it filters out non-critical band information in an image, leaving behind the important features of the image. The CNN-RNN model uses the RNN to compute dependency and continuity features of the intermediate-layer outputs of the CNN model, and connects the features of these middle tiers to the final fully-connected network for classification prediction, which results in better classification accuracy. At the same time, in order to satisfy the RNN model's restriction on the length of the input sequence and to prevent exploding or vanishing gradients in the network, this paper applies the wavelet transform (WT), a Fourier-related method, to filter the input data. We test the proposed CNN-RNN model on the widely used CIFAR-10 dataset. The results show that the proposed method has a better classification effect than the original CNN network, and that further investigation is needed.
APA, Harvard, Vancouver, ISO, and other styles
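As an illustration only (not the authors' architecture), the data flow this abstract describes — convolutional filtering of image rows followed by an RNN over the resulting feature sequence — can be sketched in pure Python with random, untrained weights:

```python
import math
import random

random.seed(0)

def conv1d(row, kernel):
    """Valid 1-D convolution of one image row with a small kernel."""
    k = len(kernel)
    return [sum(row[i + j] * kernel[j] for j in range(k))
            for i in range(len(row) - k + 1)]

def rnn_step(x, h, W_in, W_rec):
    """One Elman-style update: h' = tanh(W_in x + W_rec h)."""
    return [math.tanh(sum(W_in[r][c] * x[c] for c in range(len(x))) +
                      sum(W_rec[r][c] * h[c] for c in range(len(h))))
            for r in range(len(h))]

def cnn_rnn_features(image, kernel, W_in, W_rec, hidden):
    """Convolve each row, then feed the row features to the RNN in order."""
    h = [0.0] * hidden
    for row in image:
        x = conv1d(row, kernel)
        h = rnn_step(x, h, W_in, W_rec)
    return h  # final hidden state summarizes the whole image

# Toy 4x4 "image" and illustrative random weights: this shows only the data flow.
image = [[0.1, 0.5, 0.2, 0.9],
         [0.3, 0.8, 0.4, 0.1],
         [0.7, 0.2, 0.6, 0.5],
         [0.9, 0.1, 0.3, 0.4]]
kernel = [0.5, -0.5]          # edge-like 1-D filter
feat_len, hidden = 3, 2       # conv output length is 4 - 2 + 1 = 3
W_in = [[random.uniform(-1, 1) for _ in range(feat_len)] for _ in range(hidden)]
W_rec = [[random.uniform(-1, 1) for _ in range(hidden)] for _ in range(hidden)]

h = cnn_rnn_features(image, kernel, W_in, W_rec, hidden)
```

In practice each stage would be a trained layer (2-D convolutions, a gated RNN); the sketch only shows how intermediate CNN outputs become an RNN input sequence.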
2

Tridarma, Panggih, and Sukmawati Nur Endah. "Pengenalan Ucapan Bahasa Indonesia Menggunakan MFCC dan Recurrent Neural Network." JURNAL MASYARAKAT INFORMATIKA 11, no. 2 (November 17, 2020): 36–44. http://dx.doi.org/10.14710/jmasif.11.2.34874.

Full text
Abstract:
Speech recognition is a technological development in the field of audio. It enables software to recognize words spoken by humans and display them in written form. However, problems remain in recognizing spoken words, such as differing voice characteristics, age, health, and gender. This study addresses Indonesian speech recognition using the Mel-Frequency Cepstral Coefficient (MFCC) as the feature-extraction method and a Recurrent Neural Network (RNN) as the recognition method, comparing the Elman RNN and Jordan RNN architectures. The training and test data were split using k-fold cross validation with k=5. The results show that the Elman RNN architecture, with 900 hidden neurons, a target error of 0.0005, a learning rate of 0.01, a maximum of 10000 epochs, and 20 MFCC coefficients, achieved its best accuracy of 72.65%. The Jordan RNN architecture, with 500 hidden neurons, a target error of 0.0005, a learning rate of 0.01, a maximum of 10000 epochs, and 12 MFCC coefficients, achieved its best accuracy of 73.55%. Based on these results, the Jordan RNN architecture performs better than the Elman RNN architecture in recognizing Indonesian continuous speech.
APA, Harvard, Vancouver, ISO, and other styles
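The architectural difference the study compares can be reduced to a one-unit sketch: an Elman cell feeds the previous hidden state back, while a Jordan cell feeds the previous output back. All weights below are illustrative, not the paper's trained parameters:

```python
import math

def elman_step(x, h_prev, w_in, w_rec):
    """Elman RNN: the previous *hidden state* is fed back as context."""
    return math.tanh(w_in * x + w_rec * h_prev)

def jordan_step(x, y_prev, w_in, w_rec):
    """Jordan RNN: the previous *output* is fed back as context."""
    return math.tanh(w_in * x + w_rec * y_prev)

def run_elman(xs, w_in=0.8, w_rec=0.5, w_out=0.5):
    h, ys = 0.0, []
    for x in xs:
        h = elman_step(x, h, w_in, w_rec)            # feedback comes from h
        ys.append(w_out * h)
    return ys

def run_jordan(xs, w_in=0.8, w_rec=0.5, w_out=0.5):
    y, ys = 0.0, []
    for x in xs:
        y = w_out * jordan_step(x, y, w_in, w_rec)   # feedback comes from y
        ys.append(y)
    return ys

seq = [0.2, -0.4, 0.9, 0.1]
elman_out, jordan_out = run_elman(seq), run_jordan(seq)
```

With identical weights the two cells agree on the first step (no feedback yet) and then diverge, which is exactly the design difference the accuracy comparison probes.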
3

Ma, Qianli, Zhenxi Lin, Enhuan Chen, and Garrison Cottrell. "Temporal Pyramid Recurrent Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5061–68. http://dx.doi.org/10.1609/aaai.v34i04.5947.

Full text
Abstract:
Learning long-term and multi-scale dependencies in sequential data is a challenging task for recurrent neural networks (RNNs). In this paper, a novel RNN structure called temporal pyramid RNN (TP-RNN) is proposed to achieve these two goals. TP-RNN is a pyramid-like structure and generally has multiple layers. In each layer of the network, there are several sub-pyramids connected by a shortcut path to the output, which can efficiently aggregate historical information from hidden states and provide many gradient feedback short-paths. This avoids back-propagating through many hidden states as in usual RNNs. In particular, in the multi-layer structure of TP-RNN, the input sequence of the higher layer is a large-scale aggregated state sequence produced by the sub-pyramids in the previous layer, instead of the usual sequence of hidden states. In this way, TP-RNN can explicitly learn multi-scale dependencies with multi-scale input sequences of different layers, and shorten the input sequence and gradient feedback paths of each layer. This avoids the vanishing gradient problem in deep RNNs and allows the network to efficiently learn long-term dependencies. We evaluate TP-RNN on several sequence modeling tasks, including the masked addition problem, pixel-by-pixel image classification, signal recognition and speaker identification. Experimental results demonstrate that TP-RNN consistently outperforms existing RNNs for learning long-term and multi-scale dependencies in sequential data.
APA, Harvard, Vancouver, ISO, and other styles
4

Mosavat, Majid, and Guido Montorsi. "Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology Using Recurrent Neural Network." Electronics 11, no. 19 (September 29, 2022): 3130. http://dx.doi.org/10.3390/electronics11193130.

Full text
Abstract:
We explore the feasibility of Terrestrial Broadcasting in a Single-Frequency Network (SFN) with standard 5G New Radio (5GNR) numerology designed for unicast transmission. Instead of the classical OFDM symbol-by-symbol detector scheme or a more complex equalization technique, we designed a Recurrent-Neural-Network (RNN)-based detector that replaces the channel estimation and equalization blocks. The RNN is a bidirectional Long Short-Term Memory (bi-LSTM) that computes the log-likelihood ratios delivered to the LDPC decoder, starting from received symbols affected by strong intersymbol/intercarrier interference (ISI/ICI) on time-varying channels. To simplify the RNN receiver and reduce the system overhead, the pilot and data signals in our proposed scheme are superimposed instead of interspersed. We describe the parameter optimization of the RNN and provide end-to-end simulation results, comparing them with those of a classical system in which the OFDM waveform is specifically designed for Terrestrial Broadcasting. We show that the system outperforms classical receivers, especially in challenging scenarios with large intersite distances and high mobility. We also provide evidence of the robustness of the designed RNN receiver, showing that an RNN receiver trained on a single signal-to-noise ratio and user velocity also performs efficiently across a large range of scenarios with different signal-to-noise ratios and velocities.
APA, Harvard, Vancouver, ISO, and other styles
5

Du, Xiuli, Xiaohui Ding, and Fan Tao. "Network Security Situation Prediction Based on Optimized Clock-Cycle Recurrent Neural Network for Sensor-Enabled Networks." Sensors 23, no. 13 (July 1, 2023): 6087. http://dx.doi.org/10.3390/s23136087.

Full text
Abstract:
We propose an optimized Clockwork Recurrent Neural Network (CW-RNN) based approach to address the temporal dynamics and nonlinearity of network security situations, improving prediction accuracy and real-time performance. By leveraging the clock-cycle RNN, we enable the model to capture both short-term and long-term temporal features of network security situations. Additionally, we utilize the Grey Wolf Optimization (GWO) algorithm to optimize the hyperparameters of the network, thus constructing an enhanced network security situation prediction model. The introduction of a clock cycle for the hidden units allows the model to learn short-term information from high-frequency update modules while retaining long-term memory in low-frequency update modules, thereby enhancing the model's ability to capture data patterns. Experimental results demonstrate that the optimized clock-cycle RNN outperforms other network models in extracting the temporal and nonlinear features of network security situations, leading to improved prediction accuracy. Furthermore, our approach has low time complexity and excellent real-time performance, making it ideal for monitoring large-scale network traffic in sensor networks.
APA, Harvard, Vancouver, ISO, and other styles
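A minimal sketch of the clockwork idea (with assumed periods 1/2/4 and a toy update rule; the paper's GWO hyperparameter search is not shown): each module updates only when the step index is divisible by its clock period, so low-frequency modules hold their state between ticks and retain older information.

```python
import math

def active_modules(t, periods):
    """In a CW-RNN, at step t only modules whose clock period divides t update."""
    return [p for p in periods if t % p == 0]

def cw_rnn_run(xs, periods=(1, 2, 4)):
    """One scalar state per module: the fast module (period 1) tracks recent
    input, the slow module (period 4) updates rarely and changes slowly."""
    h = {p: 0.0 for p in periods}
    history = []
    for t, x in enumerate(xs):
        for p in active_modules(t, periods):
            h[p] = math.tanh(0.5 * h[p] + 0.5 * x)   # toy update rule
        history.append(dict(h))
    return history

hist = cw_rnn_run([1.0, -1.0, 1.0, -1.0, 1.0])
```

Between its ticks the period-4 module is frozen, which is the mechanism that preserves long-term memory alongside the fast modules.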
6

Choi, Seongjin, Hwasoo Yeo, and Jiwon Kim. "Network-Wide Vehicle Trajectory Prediction in Urban Traffic Networks using Deep Learning." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 45 (September 7, 2018): 173–84. http://dx.doi.org/10.1177/0361198118794735.

Full text
Abstract:
This paper proposes a deep learning approach to learning and predicting network-wide vehicle movement patterns in urban networks. Inspired by recent success in predicting sequence data using recurrent neural networks (RNN), specifically in language modeling that predicts the next words in a sentence given previous words, this research aims to apply RNN to predict the next locations in a vehicle’s trajectory, given previous locations, by viewing a vehicle trajectory as a sentence and a set of locations in a network as vocabulary in human language. To extract a finite set of “locations,” this study partitions the network into “cells,” which represent subregions, and expresses each vehicle trajectory as a sequence of cells. Using large amounts of Bluetooth vehicle trajectory data collected in Brisbane, Australia, this study trains an RNN model to predict cell sequences. It tests the model’s performance by computing the probability of correctly predicting the next [Formula: see text] consecutive cells. Compared with a base-case model that relies on a simple transition matrix, the proposed RNN model shows substantially better prediction results. Network-level aggregate measures such as total cell visit count and intercell flow are also tested, and the RNN model is observed to be capable of replicating real-world traffic patterns.
APA, Harvard, Vancouver, ISO, and other styles
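The base-case model this paper compares against — a simple transition matrix over cell sequences — is easy to sketch; the cell labels and trajectories below are hypothetical stand-ins for the partitioned subregions:

```python
from collections import Counter, defaultdict

def fit_transition_matrix(trajectories):
    """Count cell-to-cell transitions over all training trajectories."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, cell):
    """Predict the most frequent successor of `cell`; None if unseen."""
    if cell not in counts:
        return None
    return counts[cell].most_common(1)[0][0]

# Hypothetical trajectories over cells A-D standing in for network subregions.
trips = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"], ["C", "A", "B"]]
tm = fit_transition_matrix(trips)
```

The RNN model in the paper improves on this by conditioning on the whole preceding cell sequence rather than only the current cell, exactly as a language model conditions on previous words.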
7

Nowak, Mateusz P., and Piotr Pecka. "Routing Algorithms Simulation for Self-Aware SDN." Electronics 11, no. 1 (December 29, 2021): 104. http://dx.doi.org/10.3390/electronics11010104.

Full text
Abstract:
This paper presents a self-aware network approach with cognitive packets, with a routing engine based on random neural networks. The simulation study, performed using a custom simulator extension of OmNeT++, compares RNN routing with other routing methods. The performance results of RNN-based routing, combined with the distributed nature of its operation inaccessible to other presented methods, demonstrate the advantages of introducing neural networks as a decision-making mechanism in selecting network paths. This work also confirms the usefulness of the simulator for SDN networks with cognitive packets and various routing algorithms, including RNN-based routing engines.
APA, Harvard, Vancouver, ISO, and other styles
8

Muhuri, Pramita Sree, Prosenjit Chatterjee, Xiaohong Yuan, Kaushik Roy, and Albert Esterline. "Using a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) to Classify Network Attacks." Information 11, no. 5 (May 1, 2020): 243. http://dx.doi.org/10.3390/info11050243.

Full text
Abstract:
An intrusion detection system (IDS) identifies whether the network traffic behavior is normal or abnormal or identifies the attack types. Recently, deep learning has emerged as a successful approach in IDSs, having a high accuracy rate with its distinctive learning mechanism. In this research, we developed a new method for intrusion detection to classify the NSL-KDD dataset by combining a genetic algorithm (GA) for optimal feature selection and long short-term memory (LSTM) with a recurrent neural network (RNN). We found that using LSTM-RNN classifiers with the optimal feature set improves intrusion detection. The performance of the IDS was analyzed by calculating the accuracy, recall, precision, f-score, and confusion matrix. The NSL-KDD dataset was used to analyze the performances of the classifiers. An LSTM-RNN was used to classify the NSL-KDD datasets into binary (normal and abnormal) and multi-class (Normal, DoS, Probing, U2R, and R2L) sets. The results indicate that applying the GA increases the classification accuracy of LSTM-RNN in both binary and multi-class classification. The results of the LSTM-RNN classifier were also compared with the results using a support vector machine (SVM) and random forest (RF). For multi-class classification, the classification accuracy of LSTM-RNN with the GA model is much higher than SVM and RF. For binary classification, the classification accuracy of LSTM-RNN is similar to that of RF and higher than that of SVM.
APA, Harvard, Vancouver, ISO, and other styles
9

Paramasivan, Senthil Kumar. "Deep Learning Based Recurrent Neural Networks to Enhance the Performance of Wind Energy Forecasting: A Review." Revue d'Intelligence Artificielle 35, no. 1 (February 28, 2021): 1–10. http://dx.doi.org/10.18280/ria.350101.

Full text
Abstract:
In the modern era, deep learning is a powerful technique in the field of wind energy forecasting. The deep neural network effectively handles the seasonal variation and uncertainty characteristics of wind speed through proper structural design, objective function optimization, and feature learning. The present paper focuses on a critical analysis of wind energy forecasting using deep learning based recurrent neural network (RNN) models. It explores RNN and its variants, such as the simple RNN, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and bidirectional RNN models. The recurrent neural network processes the input time series data sequentially and captures well the temporal dependencies that exist in successive input data. This review investigates the RNN models used for wind energy forecasting, the data sources utilized, and the performance achieved in terms of error measures. The overall review shows that deep learning based RNNs improve the performance of wind energy forecasting compared to conventional techniques.
APA, Harvard, Vancouver, ISO, and other styles
10

Yan, Jiapeng, Huifang Kong, and Zhihong Man. "Recurrent Neural Network-Based Nonlinear Optimization for Braking Control of Electric Vehicles." Energies 15, no. 24 (December 14, 2022): 9486. http://dx.doi.org/10.3390/en15249486.

Full text
Abstract:
In this paper, electro-hydraulic braking (EHB) force allocation for electric vehicles (EVs) is modeled as a constrained nonlinear optimization problem (NOP). Recurrent neural networks (RNNs) are advantageous in many respects for solving NOPs, yet the convergence of existing RNNs usually requires convexity and the calculation of second-order partial derivatives. In this paper, a recurrent neural network-based NOP solver (RNN-NOPS) is developed. The RNN-NOPS is designed to drive all state variables to converge asymptotically to the feasible region, with only loose requirements on the NOP's first-order partial derivatives. In addition, the RNN-NOPS's equilibria are proved to meet the Karush-Kuhn-Tucker (KKT) conditions, and the RNN-NOPS exhibits strong robustness against violation of the constraints. Comparative studies are conducted to show the RNN-NOPS's advantages for solving the EHB force allocation problem; under the SC03 cycle, the overall regenerative energy of the RNN-NOPS is 15.39% more than that of the method used for comparison.
APA, Harvard, Vancouver, ISO, and other styles
11

Zafri Wan Yahaya, Wan Muhammad, Fadhlan Hafizhelmi Kamaru Zaman, and Mohd Fuad Abdul Latip. "Prediction of energy consumption using recurrent neural networks (RNN) and nonlinear autoregressive neural network with external input (NARX)." Indonesian Journal of Electrical Engineering and Computer Science 17, no. 3 (March 1, 2020): 1215. http://dx.doi.org/10.11591/ijeecs.v17.i3.pp1215-1223.

Full text
Abstract:
Recurrent Neural Networks (RNN) and the Nonlinear Autoregressive Neural Network with External Input (NARX) have recently been applied to predicting energy consumption. An in-depth analysis of how electrical energy is consumed in the Tower 2 Engineering Building is critical in order to reduce energy usage and operational cost, and prediction of energy consumption in this building will bring great benefits to the Faculty of Electrical Engineering, UiTM Shah Alam. In this work, we present a comparative study of the performance of energy consumption prediction in the Tower 2 Engineering Building using the RNN and NARX methods. The RNN and NARX models are trained using data collected by smart meters installed inside the building. The results after training and testing show that, using the recorded data, we can accurately predict the energy consumption in the building. We also show that the RNN model trained with normalized data performs better than the NARX model.
APA, Harvard, Vancouver, ISO, and other styles
12

Bansal, Bhavana, Aparajita Nanda, and Anita Sahoo. "Intelligent Framework With Controlled Behavior for Gene Regulatory Network Reconstruction." International Journal of Information Retrieval Research 12, no. 1 (January 2022): 1–17. http://dx.doi.org/10.4018/ijirr.2022010104.

Full text
Abstract:
Gene Regulatory Networks (GRNs) are the pioneering methodology for finding new gene interactions and gaining insight into biological processes using time-series gene expression data. It remains a challenge to study the temporal nature of gene expression data that mimic the complex non-linear dynamics of the network. In this paper, an intelligent framework of a recurrent neural network (RNN) and swarm intelligence (SI) based Particle Swarm Optimization (PSO) with controlled behaviour is proposed for the reconstruction of GRNs from time-series gene expression data. A novel PSO algorithm enhanced by human cognition, influenced by the ideology of the Bhagavad Gita, is employed for improved learning of the RNN. The RNN guided by the proposed algorithm simulates nonlinear and dynamic gene interactions to a greater extent. The proposed method shows superior performance over traditional SI algorithms in searching for biologically plausible candidate networks. The strength of the method is verified by analyzing a small artificial network and real data from Escherichia coli, with improved accuracy.
APA, Harvard, Vancouver, ISO, and other styles
13

Lyu, Shengfei, and Jiaqi Liu. "Convolutional Recurrent Neural Networks for Text Classification." Journal of Database Management 32, no. 4 (October 2021): 65–82. http://dx.doi.org/10.4018/jdm.2021100105.

Full text
Abstract:
Recurrent neural networks (RNN) and convolutional neural networks (CNN) are two prevailing architectures used in text classification. Traditional approaches combine the strengths of these two networks by directly chaining them or concatenating the features extracted from each. In this article, a novel approach is proposed to retain the strengths of both RNN and CNN to a great extent. In the proposed approach, a bi-directional RNN encodes each word into forward and backward hidden states. Then, a neural tensor layer is used to fuse the bi-directional hidden states into word representations. Meanwhile, a convolutional neural network is utilized to learn the importance of each word for text classification. Empirical experiments are conducted on several datasets for text classification. The superior performance of the proposed approach confirms its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
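The bi-directional encoding step can be sketched with a scalar Elman pass in each direction. The paper fuses the two states with a neural tensor layer; plain pairing stands in for that here, and all weights and "embeddings" are illustrative:

```python
import math

def rnn_pass(xs, w_in=0.7, w_rec=0.3):
    """Scalar Elman pass; returns the hidden state at every position."""
    h, hs = 0.0, []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
        hs.append(h)
    return hs

def bidirectional_states(xs):
    """Pair each position's forward state (left context) with its
    backward state (right context)."""
    fwd = rnn_pass(xs)
    bwd = list(reversed(rnn_pass(list(reversed(xs)))))
    return list(zip(fwd, bwd))

words = [0.1, 0.9, -0.5, 0.3]   # toy word embeddings, one scalar per word
states = bidirectional_states(words)
```

Each word thus gets one state summarizing everything to its left and one summarizing everything to its right, which is what makes the fused representation context-aware in both directions.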
14

Vinayakumar, R., K. P. Soman, and Prabaharan Poornachandran. "Evaluation of Recurrent Neural Network and its Variants for Intrusion Detection System (IDS)." International Journal of Information System Modeling and Design 8, no. 3 (July 2017): 43–63. http://dx.doi.org/10.4018/ijismd.2017070103.

Full text
Abstract:
This article describes how sequential data modeling is a relevant task in cybersecurity. Sequences have temporal characteristics attributed to them either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) which have emerged as a powerful, principled approach to learning dynamic temporal behaviors in arbitrary-length, large-scale sequence data. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range, with a supervised learning method, using millions of known good and bad network connections. To find the best architecture, the authors complete a comprehensive review of various RNN architectures along with their network parameters and structures. As a test bed, they use the existing benchmark DARPA / KDD Cup '99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All the deep learning experiments are run for up to 1000 epochs with a learning rate in the range [0.01-0.5] on GPU-enabled TensorFlow, and the experiments with traditional machine learning algorithms are done using Scikit-learn. The families of RNN architectures achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information across long time-lags and to adjust to successive connection sequence information. In addition, the effectiveness of RNN architectures is also shown on the UNSW-NB15 data set.
APA, Harvard, Vancouver, ISO, and other styles
15

Park, Jieun, Dokkyun Yi, and Sangmin Ji. "Analysis of Recurrent Neural Network and Predictions." Symmetry 12, no. 4 (April 13, 2020): 615. http://dx.doi.org/10.3390/sym12040615.

Full text
Abstract:
This paper analyzes the operating principle and predicted values of the recurrent neural network (RNN) structure, which is the most basic structure and the one best suited to change over time in a neural network for various types of artificial intelligence (AI). In particular, an RNN in which all connections are symmetric is guaranteed to converge. The operating principle of an RNN is based on linear combinations of data composed through the synthesis of nonlinear activation functions. Linearly combined data are similar to the autoregressive moving average (ARMA) method of statistical processing. However, distortion due to the nonlinear activation function in RNNs causes the predicted value to differ from the predicted ARMA value. Through this, we establish the limit of the predicted value of an RNN and the range of prediction that changes according to the learning data. In addition to mathematical proofs, numerical experiments confirm our claims.
APA, Harvard, Vancouver, ISO, and other styles
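The paper's central observation — that the nonlinear activation bounds an RNN's predictions, unlike an ARMA-style linear combination — can be demonstrated with two one-line recurrences (coefficients are illustrative):

```python
import math

def linear_ar_step(y_prev, w=1.2, b=0.5):
    """ARMA-style linear recurrence: diverges when |w| > 1."""
    return w * y_prev + b

def rnn_step(h_prev, w=1.2, b=0.5):
    """The same linear combination squashed by tanh: stays in (-1, 1)."""
    return math.tanh(w * h_prev + b)

y = h = 0.0
for _ in range(50):
    y = linear_ar_step(y)   # grows without bound
    h = rnn_step(h)         # saturates inside (-1, 1)
```

The same coefficients that make the linear recurrence explode leave the tanh recurrence pinned inside the activation's range, which is the "limit of the predicted value" the paper analyzes.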
16

Dakwale, Praveen, and Christof Monz. "Convolutional over Recurrent Encoder for Neural Machine Translation." Prague Bulletin of Mathematical Linguistics 108, no. 1 (June 1, 2017): 37–48. http://dx.doi.org/10.1515/pralin-2017-0007.

Full text
Abstract:
Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.
APA, Harvard, Vancouver, ISO, and other styles
17

Yu, Dian, and Shouqian Sun. "A Systematic Exploration of Deep Neural Networks for EDA-Based Emotion Recognition." Information 11, no. 4 (April 15, 2020): 212. http://dx.doi.org/10.3390/info11040212.

Full text
Abstract:
Subject-independent emotion recognition based on physiological signals has become a research hotspot. Previous research has proved that electrodermal activity (EDA) signals are an effective data resource for emotion recognition. Benefiting from their great representation ability, an increasing number of deep neural networks have been applied to emotion recognition, and they can be classified as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or a combination of these (CNN+RNN). However, there has been no systematic research on the predictive power and configurations of different deep neural networks for this task. In this work, we systematically explore the configurations and performances of three adapted deep neural networks: ResNet, LSTM, and a hybrid ResNet-LSTM. Our experiments use the subject-independent method to evaluate three-class classification on the MAHNOB dataset. The results show that the CNN model (ResNet) reaches a better accuracy and F1 score than the RNN model (LSTM) and the CNN+RNN model (hybrid ResNet-LSTM). Extensive comparisons also reveal that our three deep neural networks with EDA data outperform previous models with handcrafted features on emotion recognition, which proves the great potential of the end-to-end DNN method.
APA, Harvard, Vancouver, ISO, and other styles
18

T, Vijayakumar. "NEURAL NETWORK ANALYSIS FOR TUMOR INVESTIGATION AND CANCER PREDICTION." December 2019 2019, no. 02 (December 18, 2019): 89–98. http://dx.doi.org/10.36548/jei.2019.2.004.

Full text
Abstract:
Predicting the category of a tumor and the type of cancer at its early stage remains an essential process for identifying the severity of the disease and the treatment available for it. Neural networks, which function similarly to the human nervous system, are widely utilized in tumor investigation and cancer prediction. This paper presents an analysis of the performance of neural networks such as FNN (Feed-Forward Neural Networks), RNN (Recurrent Neural Networks), and CNN (Convolutional Neural Networks) in investigating tumors and predicting cancer. The results obtained by evaluating the neural networks on the breast cancer Wisconsin original dataset show that the CNN provides 43% better prediction than the FNN and 25% better prediction than the RNN.
APA, Harvard, Vancouver, ISO, and other styles
20

Aribowo, Widi. "ELMAN-RECURRENT NEURAL NETWORK FOR LOAD SHEDDING OPTIMIZATION." SINERGI 24, no. 1 (January 14, 2020): 29. http://dx.doi.org/10.22441/sinergi.2020.1.005.

Full text
Abstract:
Load shedding plays a key part in the avoidance of power system outages. Frequency and voltage instability can split a power system into sub-systems and lead to outages as well as severe breakdown of the system utility. In recent years, neural networks have been very successful in several signal processing and control applications, and recurrent neural networks are capable of handling complex and non-linear problems. This paper provides an algorithm for load shedding using Elman Recurrent Neural Networks (RNN). Elman proposed a partially recurrent network in which the feedforward connections are modifiable and the recurrent connections are fixed. The approach is implemented in MATLAB and its performance is tested on a 6-bus system. The results are compared with a Genetic Algorithm (GA), a hybrid combining a Genetic Algorithm with a Feed-Forward Neural Network, and an RNN. The proposed method is capable of determining the required load shedding and is more efficient than the other methods.
APA, Harvard, Vancouver, ISO, and other styles
21

Yoko, Kuncoro, Viny Christanti Mawardi, and Janson Hendryli. "SISTEM PERINGKAS OTOMATIS ABSTRAKTIF DENGAN MENGGUNAKAN RECURRENT NEURAL NETWORK." Computatio : Journal of Computer Science and Information Systems 2, no. 1 (May 22, 2018): 65. http://dx.doi.org/10.24912/computatio.v2i1.1481.

Full text
Abstract:
Abstractive text summarization tries to create a shorter version of a text while preserving its meaning. We try to use a Recurrent Neural Network (RNN) to create summaries of Indonesian-language (Bahasa Indonesia) text. We obtained our corpus from the Detik and Kompas news sites. We used word2vec to create word embeddings from our corpus, then trained an RNN on our dataset to create a model; this model is used to generate the summaries. We searched for the best model by varying the word2vec size and the number of RNN hidden states. We used a system evaluation and a Q&A evaluation to evaluate our model. The system evaluation showed that the model with a dataset of 6457 items, a word2vec size of 200, and 256 RNN hidden states gives the best accuracy of 99.8810%. This model was then assessed by the Q&A evaluation, which showed that it gives 46.65% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
22

Wu, Yijun, and Yonghong Qin. "Machine translation of English speech: Comparison of multiple algorithms." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 159–67. http://dx.doi.org/10.1515/jisys-2022-0005.

Full text
Abstract:
In order to improve the efficiency of English translation, machine translation is gradually and widely used. This study briefly introduces the neural network algorithms for speech recognition. Long short-term memory (LSTM), instead of a traditional recurrent neural network (RNN), was used as the encoding algorithm for the encoder, and an RNN as the decoding algorithm for the decoder. Then, simulation experiments were carried out on the machine translation algorithm, and it was compared with two other machine translation algorithms. The results showed that the back-propagation (BP) neural network had a lower word error rate and spent less recognition time than manual recognition in recognizing speech; the LSTM-RNN algorithm had a lower word error rate than the BP-RNN and RNN-RNN algorithms in recognizing the test samples. In the actual speech translation test, as the length of speech increased, the LSTM-RNN algorithm had the smallest changes in translation score and word error rate, and it had the highest translation score and the lowest word error rate for the same speech length.
APA, Harvard, Vancouver, ISO, and other styles
23

Hazazi, Muhammad Asaduddin, and Agus Sihabuddin. "Extended Kalman Filter In Recurrent Neural Network: USDIDR Forecasting Case Study." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 13, no. 3 (July 31, 2019): 293. http://dx.doi.org/10.22146/ijccs.47802.

Full text
Abstract:
Artificial Neural Networks (ANN), especially Recurrent Neural Networks (RNN), have been widely used to predict currency exchange rates. The learning algorithm commonly used in ANNs is Stochastic Gradient Descent (SGD). One advantage of SGD is that its computational time is relatively short. But SGD also has weaknesses: it requires several hyperparameters, such as the regularization parameter, and it needs relatively many epochs to reach convergence. The Extended Kalman Filter (EKF) is used as the learning algorithm on the RNN to replace SGD, with the hope of better accuracy and a faster convergence rate. This study uses IDR/USD exchange rate data from 31 August 2015 to 29 August 2018, with 70% of the data as training data and 30% as test data. The research shows that RNN-EKF produces faster convergence and better accuracy than RNN-SGD.
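The idea of EKF training is to treat the network weights as the state of a Kalman filter and update them from each prediction error. A scalar sketch under toy assumptions (a one-parameter linear model rather than a full RNN, illustrative noise covariances `R` and `Q`) looks like this:

```python
def ekf_step(w, P, x, y, R=0.01, Q=1e-5):
    """One EKF update treating the weight `w` as the state to estimate.

    For the scalar model y_hat = w * x, the Jacobian of the output
    with respect to the weight is simply H = x.
    """
    P = P + Q                        # predicted state covariance
    H = x                            # output Jacobian d(y_hat)/dw
    K = P * H / (H * P * H + R)      # Kalman gain
    w = w + K * (y - w * x)          # innovation-driven weight update
    P = (1 - K * H) * P              # covariance update
    return w, P

# Recover w_true = 2.0 from noiseless observations y = 2x in a few steps,
# illustrating the fast convergence the abstract attributes to EKF.
w, P = 0.0, 1.0
for x in [0.5, 1.0, 1.5, 2.0]:
    w, P = ekf_step(w, P, x, 2.0 * x)
```

In the paper's setting the state is the full RNN weight vector and `H` is the Jacobian of the network output, but the update has the same shape.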
APA, Harvard, Vancouver, ISO, and other styles
24

Kamyab, Marjan, Guohua Liu, Abdur Rasool, and Michael Adjeisah. "ACR-SA: attention-based deep model through two-channel CNN and Bi-RNN for sentiment analysis." PeerJ Computer Science 8 (March 17, 2022): e877. http://dx.doi.org/10.7717/peerj-cs.877.

Full text
Abstract:
Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have been successfully applied to Natural Language Processing (NLP), especially sentiment analysis. NLP can execute numerous functions to achieve significant results through RNN and CNN. Likewise, previous research shows that RNN achieved more meaningful results than CNN due to its extraction of long-term dependencies. Meanwhile, CNN has its own advantage: it can extract high-level features using its local fixed-size context at the input level. However, integrating these advantages into one network is challenging because of overfitting in training. Another problem with such models is that they consider all features equally. To this end, we propose attention-based sentiment analysis using a CNN and two independent bidirectional RNN networks to address the problems mentioned above and improve sentiment knowledge. Firstly, we apply a preprocessor to enhance data quality by correcting spelling mistakes and removing noisy content. Secondly, our model utilizes a CNN with max-pooling to extract contextual features and reduce feature dimensionality. Thirdly, two independent bidirectional RNNs, i.e., Long Short-Term Memory and the Gated Recurrent Unit, are used to capture long-term dependencies. We also apply the attention mechanism to the RNN layer output to emphasize each word's attention level. Furthermore, Gaussian noise and dropout are applied as regularization to avoid overfitting. Finally, we verify the model's robustness on four standard datasets. Compared with existing improvements on the most recent neural network models, the experimental results show that our model significantly outperformed the state-of-the-art models.
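The step of "applying attention to the RNN layer output to emphasize each word's attention level" amounts to scoring each hidden state, normalising the scores with a softmax, and taking a weighted sum. A minimal dot-product sketch (toy hidden states and query; the paper's exact scoring function may differ):

```python
import math

def attention(hidden_states, query):
    """Dot-product attention over a sequence of RNN hidden states.

    Returns the softmax weights (one per time step) and the
    weighted context vector that summarises the sequence."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    m = max(scores)                                # stabilise the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

states = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]      # toy per-word hidden states
weights, context = attention(states, query=[1.0, 0.0])
```

Words whose hidden states align with the query receive larger weights, which is how the model emphasises sentiment-bearing tokens.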
APA, Harvard, Vancouver, ISO, and other styles
25

Zhu, Zhenshu, Yuming Bo, and Changhui Jiang. "A MEMS Gyroscope Noise Suppressing Method Using Neural Architecture Search Neural Network." Mathematical Problems in Engineering 2019 (November 21, 2019): 1–9. http://dx.doi.org/10.1155/2019/5491243.

Full text
Abstract:
An inertial measurement unit (IMU, usually containing three gyroscopes and three accelerometers) is the key sensor for constructing a self-contained inertial navigation system (INS). IMUs manufactured with Micro-Electro-Mechanical Systems (MEMS) technology have become more popular due to their smaller size, lower cost, and gradually improved accuracy. However, limited by the manufacturing technology, the raw measurement signals of a MEMS IMU experience complicated noise, which causes the INS navigation solution errors to diverge dramatically over time. To address this problem, a Neural Architecture Search Recurrent Neural Network (NAS-RNN) was employed for MEMS gyroscope noise suppression. NAS-RNN is a recently developed artificial intelligence method for time series problems in the data science community. Different from conventional methods, NAS-RNN is able to search for a more feasible architecture for the selected application. In this paper, a popular MEMS IMU, the STIM300, was employed in the testing experiment, with a sampling frequency of 125 Hz. The experimental results showed that the NAS-RNN was effective for MEMS gyroscope denoising: the standard deviation (STD) values of the denoised three-axis gyroscope measurements decreased by 44.0%, 34.1%, and 39.3%, respectively. Compared with the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), the NAS-RNN obtained further decreases of 28.6%, 3.7%, and 8.8% in the STD values of the signals. In addition, the attitude errors decreased by 26.5%, 20.8%, and 16.4% when substituting the NAS-RNN for the LSTM-RNN.
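The headline metric above is the percentage decrease in standard deviation after denoising. The computation of that metric can be sketched as follows, with a centred moving average standing in for the NAS-RNN denoiser and a toy noise sequence standing in for real gyroscope data:

```python
def std(xs):
    """Population standard deviation."""
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

def moving_average(xs, k=3):
    """Simple stand-in denoiser: centred moving average (edges truncated)."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - k // 2), min(len(xs), i + k // 2 + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

raw = [0.2, -0.3, 0.25, -0.2, 0.3, -0.25, 0.2, -0.3]   # toy gyro noise
denoised = moving_average(raw)
reduction = 100.0 * (1 - std(denoised) / std(raw))     # % decrease in STD
```

The 44.0%/34.1%/39.3% figures in the abstract are this `reduction` computed per axis, with the NAS-RNN producing `denoised`.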
APA, Harvard, Vancouver, ISO, and other styles
26

Subba, Sanjeev, Nawaraj Paudel, and Tej Bahadur Shahi. "Nepali Text Document Classification Using Deep Neural Network." Tribhuvan University Journal 33, no. 1 (June 30, 2019): 11–22. http://dx.doi.org/10.3126/tuj.v33i1.28677.

Full text
Abstract:
Automated text classification is a well-studied problem in text mining which generally demands the automatic assignment of a label or class to a particular text document on the basis of its content. To design a computer program that learns a model from training data to assign the correct label to an unseen text document, many researchers have applied deep learning technologies. For the Nepali language, this is the first attempt to use deep learning, especially the Recurrent Neural Network (RNN), and compare its performance to a traditional Multilayer Neural Network (MNN). In this study, Nepali texts were collected from online news portals, then pre-processed and vectorized. Finally, a deep learning classification framework was designed and run over ten experiments: five for the Recurrent Neural Network and five for the Multilayer Neural Network. Comparing the results of the MNN and RNN, it can be concluded that the RNN outperformed the MNN: the highest accuracy achieved by the MNN is 48%, while the highest accuracy achieved by the RNN is 63%.
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Yung-Chung, Yi-Chun Houng, Han-Xuan Chen, and Shu-Ming Tseng. "Network Anomaly Intrusion Detection Based on Deep Learning Approach." Sensors 23, no. 4 (February 15, 2023): 2171. http://dx.doi.org/10.3390/s23042171.

Full text
Abstract:
The prevalence of internet usage leads to diverse internet traffic, which may contain information about various types of internet attacks. In recent years, many researchers have applied deep learning technology to intrusion detection systems and obtained fairly strong recognition results. However, most experiments have used old datasets, so they could not reflect the latest attack information. In this paper, the current CSE-CIC-IDS2018 dataset and standard evaluation metrics have been employed to evaluate the proposed mechanism. After preprocessing the dataset, six models—deep neural network (DNN), convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), CNN + RNN and CNN + LSTM—were constructed to judge whether network traffic comprised a malicious attack. In addition, multi-classification experiments were conducted to sort traffic into benign traffic and six categories of malicious attacks: BruteForce, Denial-of-Service (DoS), Web Attacks, Infiltration, Botnet, and Distributed Denial-of-Service (DDoS). Each model showed high accuracy in the various experiments, and their multi-class classification accuracies were above 98%. Compared with the intrusion detection systems (IDS) of other papers, the proposed model effectively improves detection performance. Moreover, the inference time for the combinations CNN + RNN and CNN + LSTM is longer than that of the individual DNN, RNN and CNN. Therefore, the DNN, RNN and CNN are better choices than CNN + RNN and CNN + LSTM when considering implementation of the algorithm in an IDS device.
APA, Harvard, Vancouver, ISO, and other styles
28

Winanto, Eko Arip, Kurniabudi Kurniabudi, Sharipuddin Sharipuddin, Ibnu Sani Wijaya, and Dodi Sandra. "Deteksi Serangan pada Jaringan Kompleks IoT menggunakan Recurrent Neural Network." JURIKOM (Jurnal Riset Komputer) 9, no. 6 (December 30, 2022): 1996. http://dx.doi.org/10.30865/jurikom.v9i6.5298.

Full text
Abstract:
The complexity of networks in the Internet of Things (IoT) makes network security challenging to maintain. With network complexity spanning data, protocols, sizes, communications, standards, and more, it becomes difficult to implement an intrusion detection system (IDS). One way to improve IDS on complex IoT networks is to use deep learning to detect the attacks that occur on them. The recurrent neural network (RNN) is a deep learning method that enhances detection on complex IoT networks because, when making decisions, it takes into account the current input as well as what has been learned from previously received inputs. Therefore, this study proposes the RNN method to improve the performance of attack detection systems on complex IoT networks. The experiment shows satisfactory results, increasing the detection accuracy on complex IoT networks to 87%.
APA, Harvard, Vancouver, ISO, and other styles
29

Ji, Junjie, Yongzhang Zhou, Qiuming Cheng, Shoujun Jiang, and Shiting Liu. "Landslide Susceptibility Mapping Based on Deep Learning Algorithms Using Information Value Analysis Optimization." Land 12, no. 6 (May 25, 2023): 1125. http://dx.doi.org/10.3390/land12061125.

Full text
Abstract:
Selecting samples with non-landslide attributes significantly impacts the deep-learning modeling of landslide susceptibility mapping. This study presents a method of information value analysis in order to optimize the selection of negative samples used for machine learning. Recurrent neural network (RNN) has a memory function, so when using an RNN for landslide susceptibility mapping purposes, the input order of the landslide-influencing factors affects the resulting quality of the model. The information value analysis calculates the landslide-influencing factors, determines the input order of data based on the importance of any specific factor in determining the landslide susceptibility, and improves the prediction potential of recurrent neural networks. The simple recurrent unit (SRU), a newly proposed variant of the recurrent neural network, is characterized by possessing a faster processing speed and currently has less application history in landslide susceptibility mapping. This study used recurrent neural networks optimized by information value analysis for landslide susceptibility mapping in Xinhui District, Jiangmen City, Guangdong Province, China. Four models were constructed: the RNN model with optimized negative sample selection, the SRU model with optimized negative sample selection, the RNN model, and the SRU model. The results show that the RNN model with optimized negative sample selection has the best performance in terms of AUC value (0.9280), followed by the SRU model with optimized negative sample selection (0.9057), the RNN model (0.7277), and the SRU model (0.6355). In addition, several objective measures of accuracy (0.8598), recall (0.8302), F1 score (0.8544), Matthews correlation coefficient (0.7206), and the receiver operating characteristic also show that the RNN model performs the best. 
Therefore, information value analysis can be used to optimize negative sample selection in landslide susceptibility mapping in order to improve model performance; however, the SRU proved weaker than the RNN in terms of model performance.
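The SRU variant mentioned above owes its speed to moving all input transformations out of the recurrence, leaving only an elementwise cell-state recursion sequential. A simplified single-unit sketch (scalar weights are toy assumptions; the published SRU uses full weight matrices and a slightly richer form):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sru_step(x, c_prev, w=0.8, w_f=0.5, b_f=0.1, w_r=0.5, b_r=0.1):
    """One step of a single-unit Simple Recurrent Unit (simplified form).

    All products involving the input x depend only on the current time
    step, so they can be precomputed in parallel across the sequence;
    only the elementwise cell-state recursion below stays sequential."""
    f = sigmoid(w_f * x + b_f)            # forget gate
    c = f * c_prev + (1.0 - f) * (w * x)  # cell state (elementwise recursion)
    r = sigmoid(w_r * x + b_r)            # reset gate
    h = r * math.tanh(c) + (1.0 - r) * x  # highway-style output
    return h, c

h, c = 0.0, 0.0
for x in [0.5, -0.2, 0.8]:
    h, c = sru_step(x, c)
```

In the landslide study, each `x` would be a vector of landslide-influencing factors, ordered by the information value analysis before being fed in.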
APA, Harvard, Vancouver, ISO, and other styles
30

Hardy, N. F., and Dean V. Buonomano. "Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model." Neural Computation 30, no. 2 (February 2018): 378–96. http://dx.doi.org/10.1162/neco_a_01041.

Full text
Abstract:
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency—a measure of network interconnectedness—decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
APA, Harvard, Vancouver, ISO, and other styles
31

Mohd Ruslan, Muhammad Faridzul Faizal, and Mohd Firdaus Hassan. "Unbalance Failure Recognition Using Recurrent Neural Network." International Journal of Automotive and Mechanical Engineering 19, no. 2 (June 28, 2022): 9668–80. http://dx.doi.org/10.15282/ijame.19.2.2022.04.0746.

Full text
Abstract:
Many machine learning models created in recent years focus on recognising bearing and gearbox faults, with less attention paid to detecting unbalance issues. Unbalance is a fundamental issue that frequently occurs in deteriorating machinery and requires checking prior to significant faults such as bearing and gearbox failures. Unbalance will propagate unless corrected, causing damage to neighbouring components such as bearings and mechanical seals. Because recurrent neural networks are well known for their performance on sequential data, in this study an RNN is developed using only two statistical features, the crest factor and kurtosis, with the goal of producing an RNN capable of better unbalance fault predictions than existing machine learning models. The results reveal that RNN prediction efficacy depends on how the input data are prepared, with separate datasets of unbalance data producing more accurate predictions than bulk or combined datasets. This study shows that, if the dataset is prepared in a specific way, the RNN has a stronger prediction capability; a future study will explore a new parameter to be fused with the present statistical features to further increase the RNN's prediction capability.
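The two input features named above have standard definitions: the crest factor is the peak amplitude over the RMS value, and kurtosis is the fourth standardised moment. A sketch with a toy signal (not the paper's data) shows why both rise when a fault injects impulsive peaks:

```python
def crest_factor(signal):
    """Peak amplitude over RMS; rises when impulsive peaks appear."""
    rms = (sum(x * x for x in signal) / len(signal)) ** 0.5
    return max(abs(x) for x in signal) / rms

def kurtosis(signal):
    """Fourth standardised moment m4 / m2^2; 3.0 for a Gaussian signal."""
    n = len(signal)
    mu = sum(signal) / n
    m2 = sum((x - mu) ** 2 for x in signal) / n
    m4 = sum((x - mu) ** 4 for x in signal) / n
    return m4 / (m2 * m2)

# A sine-like healthy vibration signal vs. the same signal with a spike.
healthy = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]
faulty = healthy[:-1] + [4.0]
```

Sequences of these two features over time are what the study feeds to the RNN.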
APA, Harvard, Vancouver, ISO, and other styles
32

Adeel, Ahsan, Hadi Larijani, Abbas Javed, and Ali Ahmadinia. "Impact of Learning Algorithms on Random Neural Network based Optimization for LTE-UL Systems." Network Protocols and Algorithms 7, no. 3 (November 30, 2015): 157. http://dx.doi.org/10.5296/npa.v7i3.8295.

Full text
Abstract:
This paper presents an application of context-aware decision making to the problem of radio resource management (RRM) and inter-cell interference coordination (ICIC) in the long-term evolution uplink (LTE-UL) system. The limitations of existing analytical, artificial intelligence (AI), and machine learning (ML) based approaches are highlighted, and a novel integration of random neural network (RNN) based learning with genetic algorithm (GA) based reasoning is presented. In the first part of the implementation, three learning algorithms (gradient descent (GD), adaptive inertia weight particle swarm optimization (AIWPSO), and differential evolution (DE)) are applied to the RNN, and two learning algorithms (GD and Levenberg-Marquardt (LM)) are applied to an artificial neural network (ANN). In the second part of the implementation, GA-based reasoning is applied to the trained ANN and RNN models for performance optimization. Finally, the ANN- and RNN-based optimization results are compared with state-of-the-art fractional power control (FPC) schemes in terms of user throughput and power consumption. The simulation results reveal that an RNN-DE (RNN trained with the DE algorithm) based cognitive engine (CE) can provide up to 14% more cell capacity along with 6 dBm and 9 dBm less user power consumption compared to the RNN-GD (RNN trained with the GD algorithm) and FPC methods, respectively.
APA, Harvard, Vancouver, ISO, and other styles
33

Cheng, Yepeng, Zuren Liu, and Yasuhiko Morimoto. "Attention-Based SeriesNet: An Attention-Based Hybrid Neural Network Model for Conditional Time Series Forecasting." Information 11, no. 6 (June 5, 2020): 305. http://dx.doi.org/10.3390/info11060305.

Full text
Abstract:
Traditional time series forecasting techniques cannot extract good enough sequence features, and their accuracy is limited. The deep learning structure SeriesNet is an advanced method which adopts hybrid neural networks, including a dilated causal convolutional neural network (DC-CNN) and a long short-term memory recurrent neural network (LSTM-RNN), to learn multi-range and multi-level features from multi-conditional time series with higher accuracy. However, it does not use attention mechanisms to learn temporal features. Besides, its conditioning method for the CNN and RNN is not specific, and the number of parameters in each layer is tremendous. This paper proposes a conditioning method for the two types of neural networks and respectively uses the gated recurrent unit (GRU) network and dilated depthwise separable temporal convolutional networks (DDSTCNs) instead of LSTM and DC-CNN to reduce the parameters. Furthermore, this paper presents a lightweight RNN-based hidden state attention module (HSAM) combined with the proposed CNN-based convolutional block attention module (CBAM) for time series forecasting. Experimental results show our model is superior to other models in terms of forecasting accuracy and computational efficiency.
APA, Harvard, Vancouver, ISO, and other styles
34

Reddy, Mr G. Sekhar, A. Sahithi, P. Harsha Vardhan, and P. Ushasri. "Conversion of Sign Language Video to Text and Speech." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 159–64. http://dx.doi.org/10.22214/ijraset.2022.42078.

Full text
Abstract:
Abstract: Sign Language Recognition (SLR) is a significant and promising technique to facilitate communication for hearing-impaired people. Here, we are dedicated to finding an efficient solution to the gesture recognition problem. This work develops a sign language (SL) recognition framework with deep neural networks which directly transcribes videos of SL signs to words. We propose a novel approach using video sequences that contain both temporal and spatial features, so we use two different models to learn the two kinds of features. To learn the spatial features of the video sequences, we use a Convolutional Neural Network (CNN) trained on the frames obtained from the video sequences of the training data. To learn the temporal features, we use a Recurrent Neural Network (RNN): the trained CNN model makes predictions for individual frames, and the resulting sequence of predictions (or pooling-layer outputs) for each video is given to the RNN. Thus we perform sign language translation: given an input video, the sign shown in the video is recognized using the CNN and RNN and converted to text and speech. Keywords: CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), SLR (Sign Language Recognition), SL (Sign Language).
APA, Harvard, Vancouver, ISO, and other styles
35

Zhu, Jian-Hua, Musharraf M. Zaman, and Scott A. Anderson. "Modeling of soil behavior with a recurrent neural network." Canadian Geotechnical Journal 35, no. 5 (October 1, 1998): 858–72. http://dx.doi.org/10.1139/t98-042.

Full text
Abstract:
A recurrent neural network (RNN) model is developed for simulating and predicting the shear behavior of both a fine-grained residual soil and a dune sand. The RNN model, with one hidden layer of 20 nodes, appears very effective in modeling complex soil behavior due to its feedback connections from the hidden layer to the input layer. A dynamic gradient descent learning algorithm is used to train the network. By training on part of the experimental data, which include strain-controlled undrained tests and stress-controlled drained tests performed on a residual Hawaiian volcanic soil, the network is able to capture the significant variability of shear behavior in the residual soil. The unusual characteristics that denser soil samples dilate under a higher stress level and looser soil samples contract under a lower stress level are well represented by the RNN model. The RNN model also shows encouraging results in simulating and predicting the behavior of a dune sand which experienced loading-unloading-reloading conditions. Excellent agreement between the measured data and the modeling results is observed in both stress-strain behavior and volumetric-change characteristics. Compared with a traditional model, the RNN model is more effective and requires less effort. Key words: neural network, modeling, soil behavior, shear tests, simulation, prediction.
APA, Harvard, Vancouver, ISO, and other styles
36

Rathika, M., P. Sivakumar, K. Ramash Kumar, and Ilhan Garip. "Cooperative Communications Based on Deep Learning Using a Recurrent Neural Network in Wireless Communication Networks." Mathematical Problems in Engineering 2022 (December 21, 2022): 1–12. http://dx.doi.org/10.1155/2022/1864290.

Full text
Abstract:
In recent years, cooperative communication (CC) technology has emerged as a hotspot for wireless communication networks (WCNs), and it will play an important role in the spectrum utilization of future wireless communication systems. Instead of running node transmissions at full capacity, this design distributes the available paths across multiple relay nodes to increase overall throughput. Modeling the WCN coordination process as a recurrent mechanism, this research article proposes a recurrent neural network (RNN) based relay selection. The network is trained on the joint receiver and transmitter outage likelihood and shared knowledge, and, without the use of a model or prior data, the best relay is picked from a set of relay nodes. In this study, we use the RNN for high-dimensional processing to increase the learning rate, and we run neural network (NN) selection tests to study the communication device, determine whether it can be used, assess the system's capacity, and examine how much energy the network needs. The simulations show that the RNN scheme is more effective on these targets and allows the design to remain converged over a longer period of time. We compare the accuracy and efficiency of our RNN-based relay selection method with the long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional long short-term memory (BLSTM) methods.
APA, Harvard, Vancouver, ISO, and other styles
37

Venturini, M. "Simulation of Compressor Transient Behavior Through Recurrent Neural Network Models." Journal of Turbomachinery 128, no. 3 (February 1, 2005): 444–54. http://dx.doi.org/10.1115/1.2183315.

Full text
Abstract:
In the paper, self-adapting models capable of reproducing time-dependent data with high computational speed are investigated. The considered models are recurrent feed-forward neural networks (RNNs) with one feedback loop in a recursive computational structure, trained by using a back-propagation learning algorithm. The data used for both training and testing the RNNs have been generated by means of a nonlinear physics-based model for compressor dynamic simulation, which was calibrated on a multistage axial-centrifugal small size compressor. The first step of the analysis is the selection of the compressor maneuver to be used for optimizing RNN training. The subsequent step consists in evaluating the most appropriate RNN structure (optimal number of neurons in the hidden layer and number of outputs) and RNN proper delay time. Then, the robustness of the model response towards measurement uncertainty is ascertained, by comparing the performance of RNNs trained on data uncorrupted or corrupted with measurement errors with respect to the simulation of data corrupted with measurement errors. Finally, the best RNN model is tested on field data taken on the axial-centrifugal compressor on which the physics-based model was calibrated, by comparing physics-based model and RNN predictions against measured data. The comparison between RNN predictions and measured data shows that the agreement can be considered acceptable for inlet pressure, outlet pressure and outlet temperature, while errors are significant for inlet mass flow rate.
APA, Harvard, Vancouver, ISO, and other styles
38

Liao, Zhehao. "Comparative analysis between application of transformer and recurrent neural network in speech recognition." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 629–34. http://dx.doi.org/10.54254/2755-2721/6/20230879.

Full text
Abstract:
Transformer is a deep learning model applying a self-attention mechanism, widely used for solving sequence-to-sequence problems, including speech recognition. Since Transformer was proposed, it has been greatly developed and has made great progress in the field of speech recognition. The Recurrent Neural Network (RNN) is also a model that can be used in speech recognition. Speech recognition is a sequence-to-sequence problem that transforms human speech into text form. Both RNN and Transformer use the encoder-decoder architecture to solve sequence-to-sequence problems. However, the RNN is a recurrent model, weak in parallel training, and it does not perform as well as the non-recurrent Transformer on sequence-to-sequence problems. This paper mainly analyzes the accuracy of Transformer and RNN in automatic speech recognition. It shows that Transformer performs better than RNN in the speech recognition area, having higher accuracy, and it therefore provides evidence that Transformer can be an efficacious approach to automatic speech recognition as well as a practical substitute for traditional methods like RNN.
APA, Harvard, Vancouver, ISO, and other styles
39

Alkahtani, Hasan, Theyazn H. H. Aldhyani, and Mohammed Al-Yaari. "Adaptive Anomaly Detection Framework Model Objects in Cyberspace." Applied Bionics and Biomechanics 2020 (December 9, 2020): 1–14. http://dx.doi.org/10.1155/2020/6660489.

Full text
Abstract:
Telecommunication has registered strong and rapid growth in the past decade. Accordingly, the monitoring of computers and networks has become too complicated for network administrators. Hence, network security represents one of the most serious challenges faced by network security communities. Taking into consideration the fact that e-banking, e-commerce, and business data are shared on computer networks, these data may face the threat of intrusion. The purpose of this research is to propose a methodology that leads to a high level of sustainable protection against cyberattacks. In particular, an adaptive anomaly detection framework model was developed using deep and machine learning algorithms to manage automatically-configured application-level firewalls. Standard network datasets were used to evaluate the proposed model, which is designed to improve the cybersecurity system. Deep learning based on the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) and machine learning algorithms, namely the Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN) algorithms, were implemented to classify Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks. The information gain method was applied to select the relevant features from the network dataset; these features were significant for improving the classification algorithms. The system was used to classify DoS and DDoS attacks in four standard datasets, namely KDD Cup 1999, NSL-KDD, ISCX, and ICI-ID2017. The empirical results indicate that deep learning based on the LSTM-RNN algorithm obtained the highest accuracy. The proposed system based on the LSTM-RNN algorithm produced the highest testing accuracy rates of 99.51% and 99.91% with respect to the KDD Cup 1999, NSL-KDD, ISCX, and ICI-ID2017 datasets, respectively.
A comparative result analysis between the machine learning algorithms, namely SVM and K-NN, and the deep learning algorithm based on the LSTM-RNN model is presented. Finally, it is concluded that the LSTM-RNN model is efficient and effective at improving the cybersecurity system for anomaly-based detection.
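The information gain feature selection used above is the standard entropy-based definition: the reduction in label entropy achieved by splitting on a feature. A self-contained sketch on toy traffic data (labels and feature values are illustrative, not from the datasets named in the abstract):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(labels) minus the expected entropy after splitting on the feature."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

labels = ["dos", "dos", "benign", "benign"]   # toy traffic labels
flag = ["syn", "syn", "ack", "ack"]           # perfectly predictive feature
size = ["big", "small", "big", "small"]       # uninformative feature
```

Features are ranked by this gain and the top-ranked ones are kept as classifier inputs, which is the sense in which the selected features "were significant for improving the classification algorithms."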
APA, Harvard, Vancouver, ISO, and other styles
40

Jiang, Tingting, and Xiang Gao. "Deep Learning of Subject Context in Ideological and Political Class Based on Recursive Neural Network." Computational Intelligence and Neuroscience 2022 (September 30, 2022): 1–8. http://dx.doi.org/10.1155/2022/8437548.

Full text
Abstract:
Ideological and political education is the most important way to cultivate students' humanistic qualities, which can directly determine the development of other qualities. However, at present, the direction of ideological and political innovation in higher vocational colleges is vague. In response to this problem, this study proposes a model based on HS-EEMD-RNN. First, the ensemble empirical mode decomposition (EEMD) method is used to decompose the measured values, and then a recurrent neural network (RNN) is used to train on each component and the residual terms. Finally, through the mapping relationship obtained by the model, the predicted response of each component and the residual terms can be obtained. In the RNN training process, the harmony search (HS) algorithm is introduced for optimization, with systematic denoising and perturbation used to obtain the optimal solution, thereby optimizing the weights and thresholds of the RNN and improving the robustness of the model. The study found that HS-EEMD-RNN performs better than EEMD-RNN, because HS can effectively improve the training and fitting accuracy: the fitting accuracy of the HS-optimized HS-EEMD-RNN model is 0.9918, significantly higher than that of the EEMD-RNN model. In addition, four factors (career development, curriculum construction, community activities, and government support) have an obvious influence on ideological and political classrooms in technical colleges. Using recurrent neural networks for deep, innovative research on the subject context of ideological and political classrooms can significantly improve the prediction accuracy of its development direction.
APA, Harvard, Vancouver, ISO, and other styles
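The harmony search step used above to tune the RNN can be sketched in miniature. Everything below (the toy quadratic objective standing in for the RNN training error, the memory size, the pitch-adjustment rate) is illustrative, not taken from the paper:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimise f over a box using a basic harmony search."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialise harmony memory with random vectors.
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                 # pick from memory...
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:              # ...and maybe pitch-adjust
                    x += rng.uniform(-bw, bw)
            else:                                   # or draw a fresh value
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                       # replace worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return hm[best], scores[best]

# Toy objective standing in for RNN training error: distance to a known optimum.
target = [0.5, -0.2, 0.1]
obj = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best_w, best_s = harmony_search(obj, dim=3, bounds=(-1.0, 1.0))
```

In the paper's setting the objective would be the RNN's training error as a function of its weights and thresholds, rather than this toy quadratic.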
41

Albahar, Marwan Ali. "Recurrent Neural Network Model Based on a New Regularization Technique for Real-Time Intrusion Detection in SDN Environments." Security and Communication Networks 2019 (November 18, 2019): 1–9. http://dx.doi.org/10.1155/2019/8939041.

Full text
Abstract:
Software-defined networking (SDN) is a promising approach to networking that provides an abstraction layer for the physical network. This technology has the potential to decrease the networking costs and complexity within huge data centers. Although SDN offers flexibility, it has design flaws with regard to network security. To support the ongoing use of SDN, these flaws must be fixed using an integrated approach to improve overall network security. Therefore, in this paper, we propose a recurrent neural network (RNN) model based on a new regularization technique (RNN-SDR). This technique supports intrusion detection within SDNs. The purpose of regularization is to help the machine learning model generalize well enough to perform optimally. Experiments on the KDD Cup 1999, NSL-KDD, and UNSW-NB15 datasets achieved accuracies of 99.5%, 97.39%, and 99.9%, respectively. The proposed RNN-SDR employs a minimum number of features when compared with other models. In addition, the experiments also validated that the RNN-SDR model does not significantly affect network performance in comparison with other options. Based on the analysis of the results of our experiments, we conclude that the RNN-SDR model is a promising approach for intrusion detection in SDN environments.
APA, Harvard, Vancouver, ISO, and other styles
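The paper's regularization technique is new and not detailed in the abstract; as a generic illustration of how any regularizer enters a training objective, here is a standard weight-decay style penalty (all values are made up):

```python
def l2_penalty(weights, lam):
    """Generic weight-decay style penalty: lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

def regularized_loss(data_loss, weights, lam=0.01):
    """Total training objective = data loss + regularization term."""
    return data_loss + l2_penalty(weights, lam)

# The penalty discourages large weights, nudging the model toward
# solutions that generalize rather than memorize.
loss = regularized_loss(data_loss=0.40, weights=[0.5, -1.0, 2.0], lam=0.01)
# loss = 0.40 + 0.01 * (0.25 + 1.0 + 4.0) = 0.4525
```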
42

Krishnan, Surenthiran, Pritheega Magalingam, and Roslina Ibrahim. "Hybrid deep learning model using recurrent neural network and gated recurrent unit for heart disease prediction." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5467. http://dx.doi.org/10.11591/ijece.v11i6.pp5467-5476.

Full text
Abstract:
This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) combined with multiple gated recurrent units (GRU), long short-term memory (LSTM) and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs running on Keras with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using RNN have reached an accuracy of 98.23%, and deep neural networks (DNN) have reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of redundant neurons in the model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models; the results showed that the proposed model using RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset, and is very promising for early heart disease prediction.
APA, Harvard, Vancouver, ISO, and other styles
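A GRU cell of the kind combined in this model computes, at each step, an update gate, a reset gate and a candidate state. A minimal scalar sketch (single input, single hidden unit; the weights are arbitrary placeholders, not the trained model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step for scalar input/hidden state; p holds the six weights."""
    z = sigmoid(p["wz"] * x + p["uz"] * h)                # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h)                # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde                    # blend old/new state

params = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wh": 1.0, "uh": 1.0}
h = 0.0
for x in [0.1, 0.9, 0.4, 0.7]:   # e.g. a normalized feature sequence
    h = gru_step(x, h, params)
```

Because the new state is a convex blend of the old state and a tanh-bounded candidate, the hidden state stays in (−1, 1) no matter how long the sequence is.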
43

Bukhsh, Madiha, Muhammad Saqib Ali, Abdullah Alourani, Khlood Shinan, Muhammad Usman Ashraf, Abdul Jabbar, and Weiqiu Chen. "Long Short-Term Memory Recurrent Neural Network Approach for Approximating Roots (Eigen Values) of Transcendental Equation of Cantilever Beam." Applied Sciences 13, no. 5 (February 23, 2023): 2887. http://dx.doi.org/10.3390/app13052887.

Full text
Abstract:
In this study, the natural frequencies and roots (Eigenvalues) of the transcendental equation in a cantilever steel beam for transverse vibration with clamped free (CF) boundary conditions are estimated using a long short-term memory-recurrent neural network (LSTM-RNN) approach. The finite element method (FEM) package ANSYS is used for dynamic analysis and, with the aid of simulated results, the Euler–Bernoulli beam theory is adopted for the generation of sample datasets. Then, a deep neural network (DNN)-based LSTM-RNN technique is implemented to approximate the roots of the transcendental equation. Datasets are mainly based on the cantilever beam geometry characteristics used for training and testing the proposed LSTM-RNN network. Furthermore, an algorithm using MATLAB platform for numerical solutions is used to cross-validate the dataset results. The network performance is evaluated using the mean square error (MSE) and mean absolute error (MAE). Finally, the numerical and simulated results are compared using the LSTM-RNN methodology to demonstrate the network validity.
APA, Harvard, Vancouver, ISO, and other styles
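The transcendental (characteristic) equation of a clamped-free beam is cos(βL)·cosh(βL) = −1, whose first root is the classic βL ≈ 1.8751. The numerical cross-validation step can be sketched with bisection (the LSTM-RNN itself is not reproduced here):

```python
import math

def cantilever_char(beta):
    """Characteristic equation of a clamped-free beam: 1 + cos(bL)cosh(bL) = 0."""
    return 1.0 + math.cos(beta) * math.cosh(beta)

def bisect(f, a, b, tol=1e-10):
    """Simple bisection root finder; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First non-trivial root: the well-known value beta*L ~= 1.8751,
# which sets the fundamental natural frequency of the cantilever.
root1 = bisect(cantilever_char, 1.0, 3.0)
```

Roots found this way give the reference eigenvalues against which the LSTM-RNN approximations are compared.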
44

Wang, Xintong, and Chuangang Zhao. "A 2D Convolutional Gating Mechanism for Mandarin Streaming Speech Recognition." Information 12, no. 4 (April 12, 2021): 165. http://dx.doi.org/10.3390/info12040165.

Full text
Abstract:
Recent research shows recurrent neural network-Transducer (RNN-T) architecture has become a mainstream approach for streaming speech recognition. In this work, we investigate the VGG2 network as the input layer to the RNN-T in streaming speech recognition. Specifically, before the input feature is passed to the RNN-T, we introduce a gated-VGG2 block, which uses the first two layers of the VGG16 to extract contextual information in the time domain, and then use a SEnet-style gating mechanism to control what information in the channel domain is to be propagated to RNN-T. The results show that the RNN-T model with the proposed gated-VGG2 block brings significant performance improvement when compared to the existing RNN-T model, and it has a lower latency and character error rate than the Transformer-based model.
APA, Harvard, Vancouver, ISO, and other styles
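The SEnet-style gating idea, squeeze each channel to a summary statistic and then re-scale the channel by a learned sigmoid weight, can be sketched as follows. The one-weight "excitation" layer and all values here are simplifications, not the paper's gated-VGG2 block:

```python
import math

def se_gate(channels, w, b):
    """SEnet-style gating: squeeze each channel to its mean, then scale the
    channel by a sigmoid 'excitation' weight (a single linear layer here)."""
    gated = []
    for feat, wi, bi in zip(channels, w, b):
        s = sum(feat) / len(feat)                      # squeeze: global average
        g = 1.0 / (1.0 + math.exp(-(wi * s + bi)))     # excitation in (0, 1)
        gated.append([v * g for v in feat])            # re-scale the channel
    return gated

# Two toy channels; the gate decides how much of each channel's information
# is propagated on to the RNN-T.
out = se_gate([[1.0, 2.0], [4.0, 4.0]], w=[1.0, -5.0], b=[0.0, 0.0])
```

With these weights the first channel passes through largely intact while the second is suppressed toward zero, which is exactly the "control what information is propagated" behaviour the abstract describes.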
45

Kim, Deageon. "Text Classification Based on Neural Network Fusion." Tehnički glasnik 17, no. 3 (July 19, 2023): 359–66. http://dx.doi.org/10.31803/tg-20221228154330.

Full text
Abstract:
The goal of text classification is to identify the category to which a text belongs. Text categorization is widely used in email detection, sentiment analysis, topic labeling and other fields. However, good text representation is key to improving the performance of NLP tasks. Traditional text representation adopts the bag-of-words or vector space model, which loses the context information of the text and suffers from high dimensionality and high sparsity. In recent years, with the growth of data and improvements in computing performance, the use of deep learning to represent and classify texts has attracted great attention. Convolutional neural networks (CNN), recurrent neural networks (RNN) and RNNs with an attention mechanism are used to represent text for classification and other NLP tasks, all with better performance than traditional methods. In this paper, we design two sentence-level models based on deep networks, as follows: (1) A text representation and classification model based on bidirectional RNN and CNN (BRCNN). BRCNN’s input is the word vector corresponding to each word in the sentence; after using the RNN to extract word-order information, a CNN extracts higher-level sentence features. After convolution, max-pooling is used to obtain sentence vectors, and a softmax classifier performs the final classification. The RNN captures word-order information, while the CNN extracts useful features. Experiments on eight text classification tasks show that the BRCNN model obtains better text feature representations, with classification accuracy equal to or higher than the prior art. (2) An attention mechanism and CNN (ACNN) model uses an RNN with attention to obtain the context vector, then a CNN to extract higher-level feature information. Max-pooling yields a sentence vector, and a softmax classifier classifies the text. Experiments on eight text classification benchmark datasets show that ACNN improves the stability of model convergence and converges to an optimal or locally optimal solution better than BRCNN.
APA, Harvard, Vancouver, ISO, and other styles
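The convolution-plus-max-over-time pooling step that BRCNN uses to turn per-position features into a fixed-size sentence vector can be sketched in one dimension (the kernel and inputs are illustrative, not from the paper):

```python
def conv1d_valid(seq, kernel):
    """Valid 1-D convolution (cross-correlation) over a feature sequence."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def max_over_time(feature_map):
    """Max-over-time pooling: keep the strongest activation of the filter."""
    return max(feature_map)

# Hypothetical per-position features of a sentence, one dimension only.
positions = [0.1, 0.9, 0.2, 0.8, 0.3]
fmap = conv1d_valid(positions, kernel=[0.5, 0.5])   # one width-2 filter
sentence_feature = max_over_time(fmap)
```

In the real model there are many filters, each contributing one pooled value, and the concatenation of those values is the sentence vector fed to softmax.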
46

Yu, Chih-Chang, and Yufeng (Leon) Wu. "Early Warning System for Online STEM Learning—A Slimmer Approach Using Recurrent Neural Networks." Sustainability 13, no. 22 (November 11, 2021): 12461. http://dx.doi.org/10.3390/su132212461.

Full text
Abstract:
While the use of deep neural networks is popular for predicting students’ learning outcomes, convolutional neural network (CNN)-based methods are used more often. Such methods require numerous features, training data, or multiple models to achieve week-by-week predictions. However, many current learning management systems (LMSs) operated by colleges cannot provide adequate information. To make the system more feasible, this article proposes a recurrent neural network (RNN)-based framework to identify at-risk students who might fail the course using only a few common learning features. RNN-based methods can be more effective than CNN-based methods in identifying at-risk students due to their ability to memorize time-series features. The data used in this study were collected from an online course that teaches artificial intelligence (AI) at a university in northern Taiwan. Common features, such as the number of logins, number of posts and number of homework assignments submitted, are considered to train the model. This study compares the prediction results of the RNN model with the following conventional machine learning models: logistic regression, support vector machines, decision trees and random forests. This work also compares the performance of the RNN model with two neural network-based models: the multi-layer perceptron (MLP) and a CNN-based model. The experimental results demonstrate that the RNN model used in this study is better than conventional machine learning models and the MLP in terms of F-score, while achieving similar performance to the CNN-based model with fewer parameters. Our study shows that the designed RNN model can identify at-risk students once one-third of the semester has passed. Some future directions are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
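The core idea, folding a time series of weekly activity features into a recurrent hidden state and reading out a risk probability, can be sketched with a tiny Elman-style RNN. The weights below are untrained placeholders, not the paper's model:

```python
import math

def elman_risk(weeks, w_in, w_rec, w_out):
    """Tiny Elman RNN: fold weekly activity counts into a hidden state,
    then squash the final state into an at-risk score in (0, 1)."""
    h = 0.0
    for logins, posts, submits in weeks:
        pre = (w_in[0] * logins + w_in[1] * posts +
               w_in[2] * submits + w_rec * h)
        h = math.tanh(pre)                          # recurrent update
    return 1.0 / (1.0 + math.exp(-w_out * h))       # sigmoid readout

# Three weeks of (logins, posts, homework submissions) for one student.
weeks = [(5, 2, 1), (3, 1, 1), (0, 0, 0)]
risk = elman_risk(weeks, w_in=(-0.1, -0.2, -0.5), w_rec=0.5, w_out=2.0)
```

Because the state is updated week by week, a prediction can be read out at any point in the semester, which is what makes the "one-third of the semester" early warning possible.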
47

Ho, Namgyu, and Yoon-Chul Kim. "Estimation of Cardiac Short Axis Slice Levels with a Cascaded Deep Convolutional and Recurrent Neural Network Model." Tomography 8, no. 6 (November 14, 2022): 2749–60. http://dx.doi.org/10.3390/tomography8060229.

Full text
Abstract:
Automatic identification of short axis slice levels in cardiac magnetic resonance imaging (MRI) is important in efficient and precise diagnosis of cardiac disease based on the geometry of the left ventricle. We developed a combined model of convolutional neural network (CNN) and recurrent neural network (RNN) that takes a series of short axis slices as input and predicts a series of slice levels as output. Each slice image was labeled as one of the following five classes: out-of-apical, apical, mid, basal, and out-of-basal levels. A variety of multi-class classification models were evaluated. When compared with the CNN-alone models, the cascaded CNN-RNN models resulted in higher mean F1-score and accuracy. In our implementation and testing of four different baseline networks with different combinations of RNN modules, MobileNet as the feature extractor cascaded with a two-layer long short-term memory (LSTM) network produced the highest scores in four of the seven evaluation metrics, i.e., five F1-scores, area under the curve (AUC), and accuracy. Our study indicates that the cascaded CNN-RNN models are superior to the CNN-alone models for the classification of short axis slice levels in cardiac cine MR images.
APA, Harvard, Vancouver, ISO, and other styles
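The recurrent half of the cascade consumes one CNN feature vector per slice, apex to base. A scalar LSTM step with placeholder weights gives the flavour (the paper uses MobileNet features and a two-layer LSTM, neither reproduced here):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, p):
    """One scalar LSTM step; p maps gate names to (input, recurrent) weights."""
    i = sigmoid(p["i"][0] * x + p["i"][1] * h)      # input gate
    f = sigmoid(p["f"][0] * x + p["f"][1] * h)      # forget gate
    o = sigmoid(p["o"][0] * x + p["o"][1] * h)      # output gate
    g = math.tanh(p["g"][0] * x + p["g"][1] * h)    # candidate cell value
    c = f * c + i * g                               # new cell state
    return o * math.tanh(c), c                      # new hidden, cell states

params = {"i": (0.5, 0.5), "f": (0.5, 0.5), "o": (0.5, 0.5), "g": (1.0, 1.0)}
h = c = 0.0
# Per-slice CNN features (scalars here) fed through the LSTM in order.
for feat in [0.2, 0.5, 0.9, 0.7]:
    h, c = lstm_step(feat, h, c, params)
```

The running cell state is what lets the model use neighbouring slices when labelling each one, which is why the cascade beats slice-by-slice CNN classification.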
48

Margasov, A. O. "Neural ordinary differential equations and their probabilistic extension." Proceedings of the Komi Science Centre of the Ural Division of the Russian Academy of Sciences 6 (2021): 14–19. http://dx.doi.org/10.19110/1994-5655-2021-6-14-19.

Full text
Abstract:
This paper describes the transition from a neural network architecture to ordinary differential equations and an initial value problem. Two neural network architectures are compared: the classical RNN and ODE-RNN, which uses neural ordinary differential equations. The paper proposes a new architecture, p-ODE-RNN, which achieves quality comparable to ODE-RNN but trains much faster. Furthermore, the derivation of the proposed architecture in terms of random process theory is discussed.
APA, Harvard, Vancouver, ISO, and other styles
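An ODE-RNN alternates continuous evolution of the hidden state between observation times with a discrete RNN-style update at each observation. A minimal sketch with an assumed decay dynamic and a simple jump rule (neither is from the paper):

```python
def euler_evolve(h, f, t0, t1, steps=10):
    """Evolve hidden state h from t0 to t1 with explicit Euler steps,
    the continuous-time piece of an ODE-RNN."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

def ode_rnn(observations, f, jump):
    """Between observations integrate h' = f(h); at each observation apply
    a discrete RNN-style jump (a simple weighted update here)."""
    h, t_prev = 0.0, 0.0
    for t, x in observations:
        h = euler_evolve(h, f, t_prev, t)   # continuous dynamics
        h = jump(h, x)                      # discrete update at the event
        t_prev = t
    return h

decay = lambda h: -0.5 * h                  # assumed dynamics: state decays
jump = lambda h, x: 0.8 * h + 0.2 * x       # assumed observation update
state = ode_rnn([(1.0, 1.0), (2.5, 0.4), (3.0, 0.9)], decay, jump)
```

A classical RNN applies only the jump at fixed intervals; the ODE solve between events is what lets ODE-RNN handle irregularly spaced observations.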
49

Yang, Fushen, Changshun Du, and Lei Huang. "Ensemble Sentiment Analysis Method based on R-CNN and C-RNN with Fusion Gate." International Journal of Computers Communications & Control 14, no. 2 (April 14, 2019): 272–85. http://dx.doi.org/10.15837/ijccc.2019.2.3375.

Full text
Abstract:
Text sentiment analysis is one of the most important tasks in public opinion monitoring, service evaluation and satisfaction analysis in the current network environment. At present, the best-performing sentiment analysis algorithms are based on statistical learning methods, whose performance depends on the quality of feature extraction; good feature engineering requires a high degree of expertise, is time-consuming and laborious, and transfers poorly across tasks. Neural networks can reduce the dependence on feature engineering. Recurrent neural networks can capture context information, but their sequential processing of words introduces bias; convolutional neural networks can extract important text features through pooling, but struggle to capture contextual information. To address these problems, this paper proposes a sentiment analysis method combining R-CNN and C-RNN through a fusion gate. First, RNN and CNN are combined in different ways to offset each other's shortcomings, yielding the sub-networks R-CNN and C-RNN, which are then merged through a gating unit to form the final analysis model. We performed experiments on different datasets to verify the effectiveness of the method.
APA, Harvard, Vancouver, ISO, and other styles
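A fusion gate blends the outputs of the two sub-networks with a learned sigmoid weight per element. A sketch with made-up weights and feature vectors (the paper's actual gating unit is not specified in the abstract):

```python
import math

def fusion_gate(a, b, w_a, w_b, bias):
    """Gated fusion of two sub-network outputs: g in (0, 1) decides how much
    of the R-CNN-style vector a vs. the C-RNN-style vector b to keep."""
    fused = []
    for ai, bi in zip(a, b):
        g = 1.0 / (1.0 + math.exp(-(w_a * ai + w_b * bi + bias)))
        fused.append(g * ai + (1.0 - g) * bi)   # convex blend per element
    return fused

# Hypothetical feature vectors from the two sub-networks.
out = fusion_gate([1.0, -0.5], [0.0, 0.8], w_a=1.0, w_b=1.0, bias=0.0)
```

Because each output is a convex combination, the fused value always lies between the two sub-networks' values, so neither branch can be drowned out entirely.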
50

Kustiyo, Aziz, Mukhlis Mukhlis, and Aries Suharso. "Model Recurent Neural Network untuk Peramalan Produksi Tebu Nasional." BINA INSANI ICT JOURNAL 9, no. 1 (June 28, 2022): 1. http://dx.doi.org/10.51211/biict.v9i1.1744.

Full text
Abstract:
Sugarcane production in Indonesia is spread over several regions, which results in high variability of the variables that affect national sugarcane production. In addition, it is not easy to obtain these data over a long period, so forecasting national sugarcane production from those influencing variables is very difficult. Therefore, the forecasting was based on historical data of national sugarcane production. This study aims to develop a recurrent neural network (RNN) model for forecasting national sugarcane production based on historical data. The data used are national sugarcane production figures from 1967 to 2019, in tons. Data from 1967 to 2006 were used for training and the rest as test data. An experiment was conducted to determine the effect of time series length and batch size on the performance of the RNN model, with three replications. The results showed that the RNN model with a time series length of 4 and a batch size of 16 produced a mean absolute percentage error (MAPE) of 9.0% with a correlation value of 0.77. In general, the RNN model is able to capture the national sugarcane production pattern with a tolerable error rate. Keywords: forecasting, recurrent neural networks, sugarcane production, time series
APA, Harvard, Vancouver, ISO, and other styles
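The experimental setup, sliding windows of length 4 over the yearly series plus MAPE scoring, can be sketched as follows (the production figures below are toy values, not the 1967-2019 data):

```python
def make_windows(series, length):
    """Turn a series into (window, next-value) pairs for RNN training."""
    return [(series[i:i + length], series[i + length])
            for i in range(len(series) - length)]

def mape(actual, predicted):
    """Mean absolute percentage error, as used to score the forecasts."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

production = [28.0, 30.0, 29.0, 33.0, 31.0, 34.0]   # toy yearly tonnage
pairs = make_windows(production, length=4)           # windows of length 4
err = mape([30.0, 29.0], [33.0, 29.58])              # toy forecast errors
```

Each window of four past years is one training input and the following year is its target, matching the time-series length the study found best.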