
Journal articles on the topic 'LSTM Neural networks'

Consult the top 50 journal articles for your research on the topic 'LSTM Neural networks.'


1

Bakir, Houda, Ghassen Chniti, and Hédi Zaher. "E-Commerce Price Forecasting Using LSTM Neural Networks." International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 169–74. http://dx.doi.org/10.18178/ijmlc.2018.8.2.682.

2

Yu, Yong, Xiaosheng Si, Changhua Hu, and Jianxun Zhang. "A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures." Neural Computation 31, no. 7 (July 2019): 1235–70. http://dx.doi.org/10.1162/neco_a_01199.

Abstract:
Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) could handle the problem of long-term dependencies well. Since its introduction, almost all the exciting results based on RNNs have been achieved by the LSTM. The LSTM has become the focus of deep learning. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, the LSTM networks are divided into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, their various applications are discussed. Finally, future research directions are presented for LSTM networks.
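The cell structure this review surveys can be condensed into a few lines. Below is a minimal NumPy sketch of a single LSTM step with input, forget, and output gates; the weights are random and untrained, and all dimensions and names are illustrative rather than taken from the paper:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; rows ordered [input, forget, output, candidate]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b                  # pre-activations for all gates
    i = 1.0 / (1.0 + np.exp(-z[0:H]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))       # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2 * H:3 * H]))   # output gate
    g = np.tanh(z[3 * H:])                      # candidate cell update
    c = f * c_prev + i * g                      # new cell state
    h = o * np.tanh(c)                          # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 5
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(4):                              # unroll over a short sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (5,)
```

The gated update `c = f * c_prev + i * g` is what lets information persist across long input gaps, the property the review attributes to the gate functions.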
3

Kalinin, Maxim, Vasiliy Krundyshev, and Evgeny Zubkov. "Estimation of applicability of modern neural network methods for preventing cyberthreats to self-organizing network infrastructures of digital economy platforms." SHS Web of Conferences 44 (2018): 00044. http://dx.doi.org/10.1051/shsconf/20184400044.

Abstract:
The paper considers the application of neural network methods to preventing cyberthreats in flexible self-organizing network infrastructures of digital economy platforms: vehicular ad hoc networks, wireless sensor networks, the industrial IoT, “smart buildings” and “smart cities”. The applicability of the classic perceptron neural network, of recurrent, deep and LSTM neural networks, and of neural network ensembles is estimated under the restricting conditions of fast training and big data processing. The use of neural networks with a complex architecture – recurrent and LSTM neural networks – is experimentally justified for building an intrusion detection system for self-organizing network infrastructures.
4

Zhang, Chuanwei, Xusheng Xu, Yikun Li, Jing Huang, Chenxi Li, and Weixin Sun. "Research on SOC Estimation Method for Lithium-Ion Batteries Based on Neural Network." World Electric Vehicle Journal 14, no. 10 (October 2, 2023): 275. http://dx.doi.org/10.3390/wevj14100275.

Abstract:
With the increasingly serious problem of environmental pollution, new energy vehicles have become a hot spot in today’s research. The lithium-ion battery has become the mainstream power battery of new energy vehicles, as it has the advantages of long service life, high rated voltage, low self-discharge rate, etc. The battery management system is the key part that ensures the efficient and safe operation of the vehicle as well as the long life of the power battery. The accurate estimation of the power battery state directly affects the whole vehicle’s performance. This paper therefore established lithium-ion battery state-of-charge estimation models based on BP, PSO-BP and LSTM neural networks, and further combined the PSO algorithm with the LSTM algorithm: the particle swarm algorithm was used to obtain the optimal parameters of the model through repeated iteration, so as to establish a PSO-LSTM prediction model. The superiority of the LSTM neural network model in SOC estimation was demonstrated by comparing the estimation accuracies of the BP, PSO-BP and LSTM neural networks. Comparative analysis under constant-current conditions in the laboratory showed that the PSO-LSTM neural network predicts SOC more accurately than the BP, PSO-BP and LSTM neural networks, and comparative analysis under DST and US06 operating conditions showed that the PSO-LSTM neural network has greater prediction accuracy for SOC than the LSTM neural network.
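The PSO coupling described above reduces to a particle-swarm search over model parameters. The sketch below runs a plain PSO loop on a stand-in quadratic objective; the objective function, swarm size, and coefficients are illustrative assumptions (in the paper's setting the objective would be the LSTM's SOC estimation error on validation data):

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(p):
    # stand-in for "validation error of a model with hyperparameters p"
    return np.sum((p - 3.0) ** 2)

n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5      # swarm size, inertia, pulls
pos = rng.uniform(-10, 10, size=(n, dim))      # particle positions
vel = np.zeros((n, dim))
pbest = pos.copy()                             # per-particle best positions
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()       # global best position

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 1))  # converges near the optimum at [3, 3]
```

In the PSO-LSTM pipeline, each particle would encode LSTM hyperparameters (e.g. hidden size, learning rate), and `objective` would train and score the network.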
5

Sridhar, C., and Aniruddha Kanhe. "Performance Comparison of Various Neural Networks for Speech Recognition." Journal of Physics: Conference Series 2466, no. 1 (March 1, 2023): 012008. http://dx.doi.org/10.1088/1742-6596/2466/1/012008.

Abstract:
Speech recognition is a method by which an audio signal is translated into text, words, or commands. Recently, many deep learning models have been adopted for automatic speech recognition and have proved more effective than traditional machine learning methods such as Artificial Neural Networks (ANN). This work examines how efficiently different deep neural networks learn feature representations. Five neural network models, namely CNN, LSTM, Bi-LSTM, GRU, and Conv-LSTM, are selected for the comparative study. We trained the networks on the Audio MNIST dataset for three different iterations and evaluated them based on performance metrics. Experimentally, the CNN and Conv-LSTM network models consistently offer the best performance based on MFCC features.
6

Wan, Yingliang, Hong Tao, and Li Ma. "Forecasting Zhejiang Province's GDP Using a CNN-LSTM Model." Frontiers in Business, Economics and Management 13, no. 3 (March 5, 2024): 233–35. http://dx.doi.org/10.54097/bmq2dy63.

Abstract:
Zhejiang province has experienced notable economic growth in recent years. Despite this, achieving sustainable high-quality economic development presents complex challenges and uncertainties. This study employs advanced neural network methodologies, including Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and an integrated CNN-LSTM model, to predict Zhejiang's economic trajectory. Our empirical analysis demonstrates the proficiency of neural networks in delivering reasonably precise economic forecasts, despite inherent prediction residuals. A comparative assessment indicates that the composite CNN-LSTM model surpasses the individual CNN and LSTM models in accuracy, providing a more reliable forecasting instrument for Zhejiang's high-quality economic progression.
7

Liu, David, and An Wei. "Regulated LSTM Artificial Neural Networks for Option Risks." FinTech 1, no. 2 (June 2, 2022): 180–90. http://dx.doi.org/10.3390/fintech1020014.

Abstract:
This research aims to study the pricing risks of options by using improved LSTM artificial neural network models and make direct comparisons with the Black–Scholes option pricing model based upon the option prices of 50 ETFs of the Shanghai Securities Exchange from 1 January 2018 to 31 December 2019. We study an LSTM model, a mathematical option pricing model (BS model), and an improved artificial neural network model—the regulated LSTM model. The method we adopted is first to price the options using the mathematical model—i.e., the BS model—and then to construct the LSTM neural network for training and predicting the option prices. We further form the regulated LSTM network with optimally selected key technical indicators using Python programming aiming at improving the network’s predicting ability. Risks of option pricing are measured by MSE, RMSE, MAE and MAPE, respectively, for all the models used. The results of this paper show that both the ordinary LSTM and the traditional BS option pricing model have lower predictive ability than the regulated LSTM model. The prediction ability of the regulated LSTM model with the optimal technical indicators is superior, and the approach adopted is effective.
8

Pal, Subarno, Soumadip Ghosh, and Amitava Nag. "Sentiment Analysis in the Light of LSTM Recurrent Neural Networks." International Journal of Synthetic Emotions 9, no. 1 (January 2018): 33–39. http://dx.doi.org/10.4018/ijse.2018010103.

Abstract:
Long short-term memory (LSTM) is a special type of recurrent neural network (RNN) architecture that was designed over simple RNNs for modeling temporal sequences and their long-range dependencies more accurately. In this article, the authors work with different types of LSTM architectures for sentiment analysis of movie reviews. It has been shown that LSTM RNNs are more effective than deep neural networks and conventional RNNs for sentiment analysis. Here, the authors explore different architectures associated with LSTM models to study their relative performance on sentiment analysis. A simple LSTM is first constructed and its performance is studied. In subsequent stages, LSTM layers are stacked one upon another, which yields an increase in accuracy. The LSTM layers are then made bidirectional to convey data both forward and backward in the network. The authors hereby show that a layered deep LSTM with bidirectional connections has better performance in terms of accuracy compared to the simpler versions of LSTM used here.
9

Kabildjanov, A. S., Ch Z. Okhunboboeva, and S. Yo Ismailov. "Intelligent forecasting of growth and development of fruit trees by deep learning recurrent neural networks." IOP Conference Series: Earth and Environmental Science 1206, no. 1 (June 1, 2023): 012015. http://dx.doi.org/10.1088/1755-1315/1206/1/012015.

Abstract:
The questions of intelligent forecasting of dynamic processes of growth and development of fruit trees are considered. The average growth rate of shoots of apple trees of the «Renet Simirenko» variety was predicted. Forecasting was carried out using a deep learning recurrent neural network (LSTM) applied to the one-dimensional time series describing this parameter. The LSTM recurrent neural network was implemented in the MATLAB 2021 environment. When defining the architecture and training the LSTM recurrent neural network, the Deep Network Designer application was used, which is included in the MATLAB 2021 extensions and allows deep learning networks to be created, visualized, edited and trained. The LSTM recurrent neural network was trained using the Adam method. The results obtained in predicting the average growth rate of apple shoots with the trained LSTM recurrent neural network were evaluated by the root-mean-square error (RMSE) and the loss function (LOSS).
10

Yu, Dian, and Shouqian Sun. "A Systematic Exploration of Deep Neural Networks for EDA-Based Emotion Recognition." Information 11, no. 4 (April 15, 2020): 212. http://dx.doi.org/10.3390/info11040212.

Abstract:
Subject-independent emotion recognition based on physiological signals has become a research hotspot. Previous research has proved that electrodermal activity (EDA) signals are an effective data resource for emotion recognition. Benefiting from their great representation ability, an increasing number of deep neural networks have been applied for emotion recognition, and they can be classified as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or a combination of these (CNN+RNN). However, there has been no systematic research on the predictive power and configurations of different deep neural networks in this task. In this work, we systematically explore the configurations and performances of three adapted deep neural networks: ResNet, LSTM, and hybrid ResNet-LSTM. Our experiments use the subject-independent method to evaluate the three-class classification on the MAHNOB dataset. The results prove that the CNN model (ResNet) reaches better accuracy and F1 score than the RNN model (LSTM) and the CNN+RNN model (hybrid ResNet-LSTM). Extensive comparisons also reveal that our three deep neural networks with EDA data outperform previous models with handcrafted features on emotion recognition, which proves the great potential of the end-to-end DNN method.
11

Zhang, Chun-Xiang, Shu-Yang Pang, Xue-Yao Gao, Jia-Qi Lu, and Bo Yu. "Attention Neural Network for Biomedical Word Sense Disambiguation." Discrete Dynamics in Nature and Society 2022 (January 10, 2022): 1–14. http://dx.doi.org/10.1155/2022/6182058.

Abstract:
In order to improve the disambiguation accuracy of biomedical words, this paper proposes a disambiguation method based on the attention neural network. The biomedical word is viewed as the center. Morphology, part of speech, and semantic information from 4 adjacent lexical units are extracted as disambiguation features. The attention layer is used to generate a feature matrix. Average asymmetric convolutional neural networks (Av-ACNN) and bidirectional long short-term memory (Bi-LSTM) networks are utilized to extract features. The softmax function is applied to determine the semantic category of the biomedical word. At the same time, CNN, LSTM, and Bi-LSTM are applied to biomedical WSD. The MSH corpus is adopted to optimize CNN, LSTM, Bi-LSTM, and the proposed method, and to verify their disambiguation performance. Experimental results show that the average disambiguation accuracy of the proposed method is improved compared with CNN, LSTM, and Bi-LSTM, reaching 91.38%.
12

Mao, Congmin, and Sujing Liu. "A Study on Speech Recognition by a Neural Network Based on English Speech Feature Parameters." Journal of Advanced Computational Intelligence and Intelligent Informatics 28, no. 3 (May 20, 2024): 679–84. http://dx.doi.org/10.20965/jaciii.2024.p0679.

Abstract:
In this study, from the perspective of English speech feature parameters, two feature parameters, the mel-frequency cepstral coefficient (MFCC) and filter bank (Fbank), were selected to identify English speech. The algorithms used for recognition employed the classical back-propagation neural network (BPNN), recurrent neural network (RNN), and long short-term memory (LSTM) that were obtained by improving RNN. The three recognition algorithms were compared in the experiments, and the effects of the two feature parameters on the performance of the recognition algorithms were also compared. The LSTM model had the best identification performance among the three neural networks under different experimental environments; the neural network model using the MFCC feature parameter outperformed the neural network using the Fbank feature parameter; the LSTM model had the highest correct rate and the highest speed, while the RNN model ranked second, and the BPNN model ranked worst. The results confirm that the application of the LSTM model in combination with MFCC feature parameter extraction to English speech recognition can achieve higher speech recognition accuracy compared to other neural networks.
13

Mountzouris, Konstantinos, Isidoros Perikos, and Ioannis Hatzilygeroudis. "Speech Emotion Recognition Using Convolutional Neural Networks with Attention Mechanism." Electronics 12, no. 20 (October 23, 2023): 4376. http://dx.doi.org/10.3390/electronics12204376.

Abstract:
Speech emotion recognition (SER) is an interesting and difficult problem to handle. In this paper, we deal with it through the implementation of deep learning networks. We have designed and implemented six different deep learning networks, a deep belief network (DBN), a simple deep neural network (SDNN), an LSTM network (LSTM), an LSTM network with the addition of an attention mechanism (LSTM-ATN), a convolutional neural network (CNN), and a convolutional neural network with the addition of an attention mechanism (CNN-ATN), having in mind, apart from solving the SER problem, to test the impact of the attention mechanism on the results. Dropout and batch normalization techniques are also used to improve the generalization ability (prevention of overfitting) of the models as well as to speed up the training process. The Surrey Audio–Visual Expressed Emotion (SAVEE) database and the Ryerson Audio–Visual Database (RAVDESS) were used for the training and evaluation of our models. The results showed that the networks with the addition of the attention mechanism did better than the others. Furthermore, they showed that the CNN-ATN was the best among the tested networks, achieving an accuracy of 74% for the SAVEE database and 77% for the RAVDESS, and exceeding existing state-of-the-art systems for the same datasets.
14

Wan, Huaiyu, Shengnan Guo, Kang Yin, Xiaohui Liang, and Youfang Lin. "CTS-LSTM: LSTM-based neural networks for correlated time series prediction." Knowledge-Based Systems 191 (March 2020): 105239. http://dx.doi.org/10.1016/j.knosys.2019.105239.

15

Xu, Lingfeng, Xiang Chen, Shuai Cao, Xu Zhang, and Xun Chen. "Feasibility Study of Advanced Neural Networks Applied to sEMG-Based Force Estimation." Sensors 18, no. 10 (September 25, 2018): 3226. http://dx.doi.org/10.3390/s18103226.

Abstract:
To find out the feasibility of different neural networks in sEMG-based force estimation, in this paper, three types of networks, namely the convolutional neural network (CNN), the long short-term memory (LSTM) network and their combination (C-LSTM), were applied to predict muscle force generated in static isometric elbow flexion across three different circumstances (multi-subject, subject-dependent and subject-independent). Eight healthy men were recruited for the experiments, and the results demonstrated that all three models were applicable for force estimation, with LSTM and C-LSTM achieving better performance. Even in the subject-independent situation, they maintained mean RMSE% values as low as 9.07 ± 1.29 and 8.67 ± 1.14. CNN turned out to be a worse choice, yielding a mean RMSE% of 12.13 ± 1.98. To our knowledge, this work was the first to employ CNN, LSTM and C-LSTM in sEMG-based force estimation, and the results not only prove the strength of the proposed networks but also point out a potential way of achieving high accuracy in real-time, subject-independent force estimation.
16

Blinov, I., V. Miroshnyk, and V. Sychova. "Short-term forecasting of electricity imbalances using artificial neural networks." IOP Conference Series: Earth and Environmental Science 1254, no. 1 (October 1, 2023): 012029. http://dx.doi.org/10.1088/1755-1315/1254/1/012029.

Abstract:
Improving the results of short-term forecasting of electricity imbalances in the modern electricity market of Ukraine is a pressing problem. In order to solve it, two types of neural networks with recurrent layers, LSTM and LSTNet, were analyzed in this work. The results of short-term forecasting of daily schedules of electricity imbalances using LSTM and LSTNet neural networks were compared with a vector autoregression model (VARMA). Actual data of the balancing market were used for the research. Analysis of the results shows that the smallest forecast error was achieved using the LSTM artificial neural network architecture.
17

Pavlatos, Christos, Evangelos Makris, Georgios Fotis, Vasiliki Vita, and Valeri Mladenov. "Enhancing Electrical Load Prediction Using a Bidirectional LSTM Neural Network." Electronics 12, no. 22 (November 15, 2023): 4652. http://dx.doi.org/10.3390/electronics12224652.

Abstract:
Precise anticipation of electrical demand holds crucial importance for the optimal operation of power systems and the effective management of energy markets within the domain of energy planning. This study builds on previous research focused on the application of artificial neural networks to achieve accurate electrical load forecasting. In this paper, an improved methodology is introduced, centering around bidirectional Long Short-Term Memory (LSTM) neural networks (NN). The primary aim of the proposed bidirectional LSTM network is to enhance predictive performance by capturing intricate temporal patterns and interdependencies within time series data. While conventional feed-forward neural networks are suitable for standalone data points, energy consumption data are characterized by sequential dependencies, necessitating the incorporation of memory-based concepts. The bidirectional LSTM model is designed to furnish the prediction framework with the capacity to assimilate and leverage information from both preceding and forthcoming time steps. This augmentation significantly bolsters predictive capabilities by encapsulating the contextual understanding of the data. Extensive testing of the bidirectional LSTM network is performed using multiple datasets, and the results demonstrate significant improvements in accuracy and predictive capabilities compared to the previous simpleRNN-based framework. The bidirectional LSTM successfully captures underlying patterns and dependencies in electrical load data, achieving superior performance as gauged by metrics such as root mean square error (RMSE) and mean absolute error (MAE). The proposed framework outperforms previous models, achieving a markedly lower RMSE that attests to its capacity to forecast impending load with precision. This extended study contributes to the field of electrical load prediction by leveraging bidirectional LSTM neural networks to enhance forecasting accuracy. Specifically, the BiLSTM’s MAE of 0.122 demonstrates remarkable accuracy, outperforming the RNN (0.163), the LSTM (0.228), and the GRU (0.165) by approximately 25%, 46%, and 26%, respectively, in the best variation of all networks at the 24-h time step, while the BiLSTM’s RMSE of 0.022 is notably lower than that of the RNN (0.033), the LSTM (0.055), and the GRU (0.033). The findings highlight the significance of incorporating bidirectional memory and advanced neural network architectures for precise energy consumption prediction. The proposed bidirectional LSTM framework has the potential to facilitate more efficient energy planning and market management, supporting decision-making processes in power systems.
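The bidirectional idea above is simply two recurrent passes over the same sequence, one left-to-right and one right-to-left, with the hidden states concatenated so each time step sees both past and future context. A minimal NumPy sketch (the compact LSTM cell uses random, untrained weights; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, T = 2, 4, 6                      # input size, hidden size, seq length
params = {d: (rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)),
              np.zeros(4 * H)) for d in ("fwd", "bwd")}

def lstm_pass(xs, W, U, b):
    """Run an LSTM over the whole sequence, returning all hidden states."""
    h, c, out = np.zeros(H), np.zeros(H), []
    for x in xs:
        z = W @ x + U @ h + b
        i, f, o = (1.0 / (1.0 + np.exp(-z[s * H:(s + 1) * H])) for s in range(3))
        g = np.tanh(z[3 * H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        out.append(h)
    return np.stack(out)

xs = rng.normal(size=(T, D))
fwd = lstm_pass(xs, *params["fwd"])                # left-to-right pass
bwd = lstm_pass(xs[::-1], *params["bwd"])[::-1]    # right-to-left, re-aligned
bi = np.concatenate([fwd, bwd], axis=1)            # (T, 2H) per-step features
print(bi.shape)  # (6, 8)
```

Reversing the backward pass's output before concatenation is the step that aligns "future context" with each original time index.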
18

Song, Dazhi. "Stock Price Prediction based on Time Series Model and Long Short-term Memory Method." Highlights in Business, Economics and Management 24 (January 22, 2024): 1203–10. http://dx.doi.org/10.54097/e75xgk49.

Abstract:
This study conducts a comparative analysis of two prominent methodologies, time series analysis and long short-term memory (LSTM) neural networks, for the prediction of stock prices, utilizing historical data from Netflix. The primary purpose of this research is to evaluate their efficacy in terms of predictive accuracy. The time series analysis encompasses stationarity tests, rolling statistics, and the application of the autoregressive integrated moving average model. In contrast, the LSTM approach involves data normalization, reshaping, and the development of LSTM-based models. Performance assessment metrics such as mean absolute error, mean squared error, root mean squared error, and visual comparisons are utilized. The results prominently favor the LSTM neural network, which consistently outperforms in predictive accuracy, yielding reduced forecasting errors. This study contributes significant insights into stock price prediction methodologies and offers implications for refining model parameters, bolstering adaptability to evolving market dynamics, and addressing computational efficiency concerns in both time series analysis and LSTM neural networks. In summary, the LSTM model emerges as the preferred approach, advancing understanding of effective strategies for stock price prediction in financial markets.
19

Gers, Felix A., Jürgen Schmidhuber, and Fred Cummins. "Learning to Forget: Continual Prediction with LSTM." Neural Computation 12, no. 10 (October 1, 2000): 2451–71. http://dx.doi.org/10.1162/089976600300015015.

Abstract:
Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive “forget gate” that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
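The failure mode and remedy this abstract describes can be reproduced with two lines of arithmetic on the cell-state recurrence: without a forget gate the state integrates a continual, unsegmented stream without bound, while a forget gate f < 1 keeps it at a fixed point. The constant activations below are illustrative stand-ins for learned gate values:

```python
# Cell-state recurrence over a continual input stream.
# Without a forget gate: c_t = c_{t-1} + i*g accumulates without bound.
# With a forget gate f < 1: c_t = f*c_{t-1} + i*g settles at i*g / (1 - f).
i, g, f = 0.5, 1.0, 0.9      # input-gate activation, candidate, forget gate
c_no_forget, c_forget = 0.0, 0.0
for _ in range(1000):
    c_no_forget = c_no_forget + i * g
    c_forget = f * c_forget + i * g
print(c_no_forget)           # 500.0 -- grows linearly with stream length
print(round(c_forget, 3))    # 5.0   -- bounded at i*g / (1 - f)
```

In the full network, f is itself a learned sigmoid function of the input and previous hidden state, so the cell can decide *when* to reset rather than decaying at a fixed rate.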
20

Wei, Jun, Fan Yang, Xiao-Chen Ren, and Silin Zou. "A Short-Term Prediction Model of PM2.5 Concentration Based on Deep Learning and Mode Decomposition Methods." Applied Sciences 11, no. 15 (July 27, 2021): 6915. http://dx.doi.org/10.3390/app11156915.

Abstract:
Based on a set of deep learning and mode decomposition methods, a short-term prediction model for PM2.5 concentration in Beijing is established in this paper. An ensemble empirical mode decomposition (EEMD) algorithm is first used to decompose the original PM2.5 time series into several high- to low-frequency intrinsic mode functions (IMFs). Each IMF component is then trained and predicted by a combination of three neural networks: a back-propagation network (BP), a long short-term memory network (LSTM), and a hybrid convolutional neural network (CNN) + LSTM. The results showed that both BP and LSTM are able to fit the low-frequency IMFs very well, and the total prediction errors of the summation of all IMFs are remarkably reduced from 21 μg/m³ in the single BP model to 4.8 μg/m³ in the EEMD + BP model. Spatial information from 143 stations surrounding Beijing is extracted by the CNN, which is then used to train the CNN + LSTM. It is found that, under extreme conditions of PM2.5 < 35 μg/m³ and PM2.5 > 150 μg/m³, the prediction errors of the CNN + LSTM model are improved by ~30% compared to the single LSTM model. However, the prediction of the very high-frequency IMF mode (IMF-1) remains a challenge for all neural networks, which might be due to microphysical turbulence and chaotic processes that cannot be resolved by the above-mentioned neural networks based on variable–variable relationships.
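The decompose-predict-recombine pattern used here can be sketched end to end. In this illustration a moving average stands in for EEMD's IMF extraction and a least-squares AR model stands in for the per-component neural networks; all names and constants are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(500)
series = 50 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 1, t.size)

# 1) Decompose: slow component via moving average (stand-in for EEMD IMFs),
#    fast component as the residual.
k = 12
slow = np.convolve(np.r_[np.full(k - 1, series[0]), series],
                   np.ones(k) / k, mode="valid")
fast = series - slow

# 2) Predict each component separately with a simple AR(p) model.
def ar_forecast(x, p=4):
    """Fit AR(p) by least squares and forecast one step ahead."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x[-p:] @ coef

# 3) Recombine: the forecast of the series is the sum of component forecasts.
forecast = ar_forecast(slow) + ar_forecast(fast)
print(round(forecast, 1))
```

In the paper's pipeline, step 1 is EEMD, step 2 is a BP, LSTM, or CNN + LSTM model per IMF, and step 3 is the same summation of component predictions.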
21

Bucci, Andrea. "Realized Volatility Forecasting with Neural Networks." Journal of Financial Econometrics 18, no. 3 (2020): 502–31. http://dx.doi.org/10.1093/jjfinec/nbaa008.

Abstract:
In the last few decades, a broad strand of literature in finance has implemented artificial neural networks as a forecasting method. The major advantage of this approach is the possibility to approximate any linear and nonlinear behaviors without knowing the structure of the data generating process. This makes it suitable for forecasting time series which exhibit long-memory and nonlinear dependencies, like conditional volatility. In this article, the predictive performance of feed-forward and recurrent neural networks (RNNs) was compared, particularly focusing on the recently developed long short-term memory (LSTM) network and nonlinear autoregressive model process with eXogenous input (NARX) network, with traditional econometric approaches. The results show that RNNs are able to outperform all the traditional econometric methods. Additionally, capturing long-range dependence through LSTM and NARX models seems to improve the forecasting accuracy also in a highly volatile period.
22

Du, Shaohui, Zhenghan Chen, Haoyan Wu, Yihong Tang, and YuanQing Li. "Image Recommendation Algorithm Combined with Deep Neural Network Designed for Social Networks." Complexity 2021 (July 2, 2021): 1–9. http://dx.doi.org/10.1155/2021/5196190.

Abstract:
In recent years, deep neural networks have achieved great success in many fields, such as computer vision and natural language processing. Traditional image recommendation algorithms use text-based recommendation methods, whose process of displaying images requires a great deal of time and labor and is inefficient. Therefore, this article mainly studies image recommendation algorithms based on deep neural networks in social networks. First, according to the timestamp information of the dataset, the interaction records of each user are sorted by the closest time. Then, feature vectors are created via traditional feature algorithms such as LBP, BGC3, and RTU, or via CNN extraction. For image recommendation, two LSTM neural networks are established, which accept these feature vectors as input, respectively. The compressed output of the two sub-LSTM neural networks is used as the input of another LSTM neural network. A multilayer regression algorithm is adopted to randomly sample some network nodes to obtain the cognitive information of the sampled nodes in the entire network, predict the relationship between all nodes in the network based on this cognitive information, and perform low sampling to achieve relationship prediction. The experiments show that the proposed LSTM model together with CNN feature vectors outperforms the other algorithms.
23

Singh, Arjun, Shashi Kant Dargar, Amit Gupta, Ashish Kumar, Atul Kumar Srivastava, Mitali Srivastava, Pradeep Kumar Tiwari, and Mohammad Aman Ullah. "Evolving Long Short-Term Memory Network-Based Text Classification." Computational Intelligence and Neuroscience 2022 (February 21, 2022): 1–11. http://dx.doi.org/10.1155/2022/4725639.

Abstract:
Recently, long short-term memory (LSTM) networks have been extensively utilized for text classification. Compared to feed-forward neural networks, an LSTM has feedback connections and thus the ability to learn long-term dependencies. However, LSTM networks suffer from a parameter tuning problem: the initial and control parameters of an LSTM are generally selected on a trial-and-error basis. Therefore, in this paper, an evolving LSTM (ELSTM) network is proposed. A multiobjective genetic algorithm (MOGA) is used to optimize the architecture and weights of the LSTM. The proposed model is tested on a well-known factory reports dataset. Extensive analyses are performed to evaluate the performance of the proposed ELSTM network. From the comparative analysis, it is found that the ELSTM network outperforms the competitive models.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Cheng, Luying Li, Yanmei Liu, Xuejiao Luo, Shangguan Song, and Dingchun Xia. "Research on recurrent neural network model based on weight activity evaluation." ITM Web of Conferences 47 (2022): 02046. http://dx.doi.org/10.1051/itmconf/20224702046.

Full text
Abstract:
Given the complex structure and parameter redundancy of recurrent neural networks such as the LSTM, research and analysis on the structure of recurrent neural networks has been carried out. To improve the structural rationality of the recurrent neural network and reduce the computational cost of the network parameters, a weight activity evaluation algorithm is proposed that evaluates the activity of the network's basic units. Through experiments and tests on arrhythmia data, the differences in weight activity of the LSTM network and the change characteristics of weights and gradients are analyzed. The experimental results show that this algorithm can better optimize the recurrent neural network structure and reduce the redundancy of network parameters.
APA, Harvard, Vancouver, ISO, and other styles
25

Mero, Kevin, Nelson Salgado, Jaime Meza, Janeth Pacheco-Delgado, and Sebastián Ventura. "Unemployment Rate Prediction Using a Hybrid Model of Recurrent Neural Networks and Genetic Algorithms." Applied Sciences 14, no. 8 (April 10, 2024): 3174. http://dx.doi.org/10.3390/app14083174.

Full text
Abstract:
Unemployment, a significant economic and social challenge, triggers repercussions that affect individual workers and companies, generating a national economic impact. Forecasting the unemployment rate becomes essential for policymakers, allowing them to make short-term estimates, assess economic health, and make informed monetary policy decisions. This paper proposes the innovative GA-LSTM method, which fuses an LSTM neural network with a genetic algorithm to address challenges in unemployment prediction. Effective parameter determination in recurrent neural networks is crucial and a well-known challenge. The research uses the LSTM neural network to overcome complexities and nonlinearities in unemployment predictions, complementing it with a genetic algorithm to optimize the parameters. The central objective is to evaluate recurrent neural network models by comparing them with GA-LSTM to identify the most appropriate model for predicting unemployment in Ecuador using monthly data collected by various organizations. The results demonstrate that the hybrid GA-LSTM model outperforms traditional approaches, such as BiLSTM and GRU, on various performance metrics. This finding suggests that the combination of the predictive power of LSTM with the optimization capacity of the genetic algorithm offers a robust and effective solution to address the complexity of predicting unemployment in Ecuador.
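The GA-LSTM idea, evolving a population of candidate hyperparameter settings rather than hand-tuning them, can be sketched in a few lines. In the sketch below the fitness function is a toy surrogate with a known optimum standing in for the real LSTM training-and-validation loop, and the search space and genetic operators are illustrative assumptions, not the paper's:

```python
import random

random.seed(0)

# Hypothetical search space: (hidden_units, log10 of learning rate).
def random_individual():
    return (random.randint(8, 256), random.uniform(-4, -1))

def fitness(ind):
    # Stand-in for the validation score of an LSTM trained with these
    # hyperparameters: a toy surrogate whose optimum sits at 64 units
    # and a learning rate of 1e-3 (log10 lr = -3).
    units, log_lr = ind
    return -((units - 64) ** 2 / 1000.0 + (log_lr + 3) ** 2)

def evolve(pop_size=20, generations=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                    # simple crossover
            if random.random() < 0.3:               # mutation
                child = (max(8, child[0] + random.randint(-16, 16)),
                         child[1] + random.gauss(0, 0.2))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # best-found (hidden_units, log_lr) pair
```

In a real GA-LSTM system, `fitness` would train an LSTM with the candidate settings and return its validation error, which is what makes the search expensive but parameter tuning automatic.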
APA, Harvard, Vancouver, ISO, and other styles
26

Chuang, Chia-Chun, Chien-Ching Lee, Chia-Hong Yeng, Edmund-Cheung So, and Yeou-Jiunn Chen. "Attention Mechanism-Based Convolutional Long Short-Term Memory Neural Networks to Electrocardiogram-Based Blood Pressure Estimation." Applied Sciences 11, no. 24 (December 17, 2021): 12019. http://dx.doi.org/10.3390/app112412019.

Full text
Abstract:
Monitoring people’s blood pressure can effectively prevent blood pressure-related diseases. Therefore, providing a convenient and comfortable approach can effectively help patients in monitoring blood pressure. In this study, an attention mechanism-based convolutional long short-term memory (LSTM) neural network is proposed to easily estimate blood pressure. To easily and comfortably estimate blood pressure, electrocardiogram (ECG) and photoplethysmography (PPG) signals are acquired. To precisely represent the characteristics of ECG and PPG signals, the signals in the time and frequency domain are selected as the inputs of the proposed NN structure. To automatically extract the features, the convolutional neural networks (CNNs) are adopted as the first part of neural networks. To identify the meaningful features, the attention mechanism is used in the second part of neural networks. To model the characteristic of time series, the long short-term memory (LSTM) is adopted in the third part of neural networks. To integrate the information of previous neural networks, the fully connected networks are used to estimate blood pressure. The experimental results show that the proposed approach outperforms CNN and CNN-LSTM and complies with the Association for the Advancement of Medical Instrumentation standard.
APA, Harvard, Vancouver, ISO, and other styles
27

Tra, Nguyen Ngoc, Ho Phuoc Tien, Nguyen Thanh Dat, and Nguyen Ngoc Vu. "VN-INDEX TREND PREDICTION USING LONG-SHORT TERM MEMORY NEURAL NETWORKS." Journal of Science and Technology: Issue on Information and Communications Technology 17, no. 12.2 (December 9, 2019): 61. http://dx.doi.org/10.31130/ict-ud.2019.94.

Full text
Abstract:
The paper attempts to forecast the future trend of the Vietnam index (VN-Index) by using long short-term memory (LSTM) networks. In particular, an LSTM-based neural network is employed to study the temporal dependence in time-series data of past and present VN-Index values. Empirical forecasting results show that LSTM-based stock trend prediction offers an accuracy of about 60%, which outperforms moving-average-based prediction.
APA, Harvard, Vancouver, ISO, and other styles
28

Nguyen, Viet-Hung, Minh-Tuan Nguyen, Jeongsik Choi, and Yong-Hwa Kim. "NLOS Identification in WLANs Using Deep LSTM with CNN Features." Sensors 18, no. 11 (November 20, 2018): 4057. http://dx.doi.org/10.3390/s18114057.

Full text
Abstract:
Identifying channel states as line-of-sight or non-line-of-sight helps to optimize location-based services in wireless communications. The received signal strength identification and channel state information are used to estimate channel conditions for orthogonal frequency division multiplexing systems in indoor wireless local area networks. This paper proposes a joint convolutional neural network and recurrent neural network architecture to classify channel conditions. Convolutional neural networks extract the feature from frequency-domain characteristics of channel state information data and recurrent neural networks extract the feature from time-varying characteristics of received signal strength identification and channel state information between packet transmissions. The performance of the proposed methods is verified under indoor propagation environments. Experimental results show that the proposed method has a 2% improvement in classification performance over the conventional recurrent neural network model.
APA, Harvard, Vancouver, ISO, and other styles
29

Nogueira Filho, Francisco José Matos, Francisco de Assis Souza Filho, Victor Costa Porto, Renan Vieira Rocha, Ályson Brayner Sousa Estácio, and Eduardo Sávio Passos Rodrigues Martins. "Deep Learning for Streamflow Regionalization for Ungauged Basins: Application of Long-Short-Term-Memory Cells in Semiarid Regions." Water 14, no. 9 (April 19, 2022): 1318. http://dx.doi.org/10.3390/w14091318.

Full text
Abstract:
Rainfall-runoff modeling in ungauged basins continues to be a great hydrological research challenge. A novel approach is the long short-term memory (LSTM) neural network from the deep learning toolbox, whose use for rainfall-runoff regionalization few works have addressed. This work aims to discuss the application of LSTM as a regional method against a traditional feed-forward neural network (FFNN) and conceptual models in a practical framework with adverse conditions: reduced data availability, shallow-soil catchments with a semiarid climate, and a monthly time step. The watersheds chosen were located in the State of Ceará, Northeast Brazil. For streamflow regionalization, both LSTM and FFNN performed better than the hydrological model used as a benchmark; however, the FFNN was clearly superior. The neural network methods also showed the ability to aggregate process understanding from different watersheds, as the networks trained with the regionalization data performed better than the networks trained on single catchments.
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Lunhaojie, Wen Fu, Xingao Bian, and Juntao Fei. "Adaptive Intelligent Sliding Mode Control of a Dynamic System with a Long Short-Term Memory Structure." Mathematics 10, no. 7 (April 6, 2022): 1197. http://dx.doi.org/10.3390/math10071197.

Full text
Abstract:
In this work, a novel fuzzy neural network (NFNN) with a long short-term memory (LSTM) structure was derived and an adaptive sliding mode controller, using NFNN (ASMC-NFNN), was developed for a class of nonlinear systems. Aimed at the unknown uncertainties in nonlinear systems, an NFNN was designed to estimate unknown uncertainties, which combined the advantages of fuzzy systems and neural networks, and also introduced a special LSTM recursive structure. The special three gating units in the LSTM structure enabled it to have selective forgetting and memory mechanisms, which could make full use of historical information, and have a stronger ability to learn and estimate unknown uncertainties than general recurrent neural networks. The Lyapunov stability rule guaranteed the parameter convergence of the neural network and system stability. Finally, research into a simulation of an active power filter system showed that the proposed new algorithm had better static and dynamic properties and robustness compared with a sliding controller that uses a recurrent fuzzy neural network (RFNN).
APA, Harvard, Vancouver, ISO, and other styles
31

Becerra Muriel, Cristian. "Forecasting the Future Value of a Colombian Investment Fund with LSTM Recurrent Neural Networks (LSTM)." System Analysis & Mathematical Modeling 6, no. 1 (March 30, 2024): 78–88. http://dx.doi.org/10.17150/2713-1734.2024.6(1).78-88.

Full text
Abstract:
Recurrent neural networks are a tool currently used for time series; a widespread application of these networks is forecasting future prices in financial time series. One widely used recurrent neural network model is the LSTM (Long Short-Term Memory) model, proposed by Sepp Hochreiter and Jürgen Schmidhuber in their 1997 paper "Long Short-Term Memory". This model solves the long-term memory problem of recurrent neural networks by adding a selective memory cell which acts as a "filter" to choose what kind of information is important to keep and what kind is irrelevant and can be discarded. This paper seeks to analyze whether the original LSTM model is suitable for forecasting the price of a Colombian mutual fund without taking into account exogenous variables (economic news, macroeconomic and microeconomic variables, underlying assets such as stocks, options, bonds, etc.) that affect its behavior. In this study, three funds composed mostly of shares in the Colombian market were taken, which have higher volatility levels than public or private bonds.
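The selective memory-cell "filter" described above is the standard LSTM gating mechanism. A minimal single-unit sketch in plain Python (the scalar weights are illustrative, not from the paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    # one step of a single-unit LSTM cell with scalar weights
    f = sigmoid(W["wf_x"] * x + W["wf_h"] * h_prev + W["bf"])    # forget gate
    i = sigmoid(W["wi_x"] * x + W["wi_h"] * h_prev + W["bi"])    # input gate
    g = math.tanh(W["wg_x"] * x + W["wg_h"] * h_prev + W["bg"])  # candidate
    o = sigmoid(W["wo_x"] * x + W["wo_h"] * h_prev + W["bo"])    # output gate
    c = f * c_prev + i * g   # the "filter": keep part of the old cell, admit new
    h = o * math.tanh(c)     # expose a gated view of the cell state
    return h, c

W = {k: 0.5 for k in ["wf_x", "wf_h", "bf", "wi_x", "wi_h", "bi",
                      "wg_x", "wg_h", "bg", "wo_x", "wo_h", "bo"]}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:   # a short toy input sequence
    h, c = lstm_step(x, h, c, W)
```

The forget gate `f` decides what to discard and the input gate `i` decides what to store, which is exactly the selective keep/discard behavior the abstract attributes to the memory cell.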
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Feizhou, Ke Shang, Lei Yan, Haijing Nan, and Zicong Miao. "Prediction of Parking Space Availability Using Improved MAT-LSTM Network." ISPRS International Journal of Geo-Information 13, no. 5 (May 1, 2024): 151. http://dx.doi.org/10.3390/ijgi13050151.

Full text
Abstract:
The prediction of parking space availability plays a crucial role in information systems providing parking guidance. However, controversy persists regarding the efficiency and accuracy of mainstream time series prediction methods, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In this study, a comparison was made between a temporal convolutional network (TCN) based on CNNs and a long short-term memory (LSTM) network based on RNNs to determine an appropriate baseline for predicting parking space availability. Subsequently, a multi-head attention (MAT) mechanism was incorporated into an LSTM network, attempting to improve its accuracy. Experiments were conducted on three real and two synthetic datasets. The results indicated that the TCN achieved the fastest convergence, whereas the MAT-LSTM method provided the highest average accuracy, namely 0.0330 and 1.102 × 10⁻⁶ on the real and synthetic datasets, respectively. Furthermore, the improved MAT-LSTM model accomplished an increase of up to 48% in accuracy compared with the classic LSTM model. Consequently, we concluded that RNN-based networks are better suited for predicting long time series. In particular, the MAT-LSTM method proposed in this study holds higher application value for predicting parking space availability with higher accuracy.
APA, Harvard, Vancouver, ISO, and other styles
33

Alaameri, Zahra Hasan Oleiwi, and Mustafa Abdulsahib Faihan. "Forecasting the Accounting Profits of the Banks Listed in Iraq Stock Exchange Using Artificial Neural Networks." Webology 19, no. 1 (January 20, 2022): 2669–82. http://dx.doi.org/10.14704/web/v19i1/web19177.

Full text
Abstract:
This paper demonstrates the feasibility of using deep learning approaches for time-series forecasting of bank profits. Two types of neural networks, LSTM (Long Short-Term Memory) and NAR (Nonlinear Autoregressive) networks, were used for comparison. Data from 12 Iraqi banks registered on the Iraq Stock Exchange over sixteen years (2004-2019) were involved in this study. RMSE and MAPE were used to compare the performance of the two models (LSTM and NAR). Our results showed that the NAR is more accurate than the LSTM for profit prediction, and that the use of the NAR network by Iraqi banks will help them predict future accounting profits.
APA, Harvard, Vancouver, ISO, and other styles
34

Moskalenko, Valentyna, Anastasija Santalova, and Nataliia Fonta. "STUDY OF NEURAL NETWORKS FOR FORECASTING THE VALUE OF COMPANY SHARES IN AN UNSTABLE ECONOMY." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (8) (December 23, 2022): 16–23. http://dx.doi.org/10.20998/2079-0023.2022.02.03.

Full text
Abstract:
These studies deal with the analysis and selection of neural networks with various architectures, and of hybrid models that include neural networks, to predict the market value of shares on the stock market of a country in a process of unstable development. Analysis and forecasting of such stock markets cannot be carried out using classical methods. The relevance of the research topic is due to the need to develop software systems that implement algorithmic support for predicting the market value of shares in Ukraine. The introduction of such software systems into the investment decision-making loop of companies interested in increasing the information transparency of the Ukrainian stock market will improve forecasts of the market value of shares. This, in turn, will help improve the investment climate and ensure the growth of investment in the Ukrainian economy. An analysis of the results of existing studies on the use of neural networks and other methods of computational intelligence for modeling the behavior of stock market participants and market forecasting has been carried out. The article presents the results of a study on the use of neural networks with various architectures for predicting the market value of shares on the stock markets of Ukraine. Four shares of the Ukrainian Stock Exchange were chosen for forecasting: Centrenergo (CEEN); Ukrtelecom (UTLM); Kriukivs’kyi Vahonobudivnyi Zavod PAT (KVBZ); Raiffeisen Bank Aval (BAVL). The following models were chosen for the experimental study: long short-term memory (LSTM); a convolutional neural network (CNN); a hybrid model combining the two neural networks CNN and LSTM; a hybrid model consisting of a variational mode decomposition algorithm and a long short-term memory neural network (VMD-LSTM); and a hybrid VMD-CNN-LSTM deep learning model based on variational mode decomposition (VMD) and the two neural networks. Estimates of forecast quality based on various metrics were calculated.
It is concluded that the use of the hybrid model VMD-CNN-LSTM gives the minimum error in predicting the market value of the shares of Ukrainian enterprises. It is also advisable to use the VMD-LSTM model to predict the stock exchanges of countries with an unstable economy.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Chen. "Prediction and Analysis of Artwork Price Based on Deep Neural Network." Scientific Programming 2022 (March 10, 2022): 1–10. http://dx.doi.org/10.1155/2022/7133910.

Full text
Abstract:
The use of deep learning methods to solve problems in the field of artwork prices has attracted widespread attention, especially given the superiority of the long short-term memory network (LSTM) in dealing with time-series problems. However, the potential of deep learning for artwork price prediction has not been fully explored. This paper proposes a deep prediction network structure that considers the correlation between time-series data and combines bidirectional and unidirectional LSTM networks to predict the price of artworks in the art market. Taking into account the potential reverse dependence of the time series, a bidirectional LSTM layer is used to obtain bidirectional temporal correlation from historical data. This research uses a matrix to represent the artwork price data and fully considers the spatial correlation characteristics of artwork prices. Simultaneously, the bidirectional LSTM network is used to correlate the potential contextual information in the historical price data and perform feature learning, and the bidirectional LSTM layer is applied in the building blocks of the deep architecture to measure the inverse dependence of the price fluctuation data. Comparison with other prediction models shows that the fused unidirectional and bidirectional LSTM neural network proposed in this paper is superior to other neural networks for predicting artwork prices in terms of prediction accuracy.
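The core of a bidirectional layer, running the sequence in both directions so each position carries past and future context, can be illustrated generically. The sketch below uses a toy tanh recurrence standing in for a full LSTM cell; it is an illustration of the mechanism, not the paper's model:

```python
import math

def run_rnn(seq, step, h0=0.0):
    # step(x, h) -> new hidden state; collect the state at every position
    hs, h = [], h0
    for x in seq:
        h = step(x, h)
        hs.append(h)
    return hs

def bidirectional(seq, step):
    # forward pass sees the past; backward pass (on the reversed
    # sequence) sees the future; pair the two states per position
    fwd = run_rnn(seq, step)
    bwd = list(reversed(run_rnn(list(reversed(seq)), step)))
    return list(zip(fwd, bwd))

# toy recurrence in place of an LSTM cell
step = lambda x, h: math.tanh(0.5 * x + 0.5 * h)
feats = bidirectional([1.0, 2.0, 3.0], step)
```

Concatenating `fwd` and `bwd` states per time step is what lets a bidirectional layer capture the "reverse dependence" the abstract refers to.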
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Jaekyung, Hyunwoo Kim, and Hyungkyoo Kim. "Commercial Vacancy Prediction Using LSTM Neural Networks." Sustainability 13, no. 10 (May 12, 2021): 5400. http://dx.doi.org/10.3390/su13105400.

Full text
Abstract:
Previous studies on commercial vacancy have mostly focused on the survival rate of commercial buildings over a certain time frame and the cause of their closure, due to a lack of appropriate data. Based on a time-series of 2,940,000 individual commercial facility data, the main purpose of this research is two-fold: (1) to examine long short-term memory (LSTM) as a feasible option for predicting trends in commercial districts and (2) to identify the influence of each variable on prediction results for establishing evidence-based decision-making on the primary influences of commercial vacancy. The results indicate that LSTM can be useful in simulating commercial vacancy dynamics. Furthermore, sales, floating population, and franchise rate were found to be the main determinants for commercial vacancy. The results suggest that it is imperative to control the cannibalization of commercial districts and develop their competitiveness to retain a consistent floating population.
APA, Harvard, Vancouver, ISO, and other styles
37

Khalil, Kasem, Omar Eldash, Ashok Kumar, and Magdy Bayoumi. "Economic LSTM Approach for Recurrent Neural Networks." IEEE Transactions on Circuits and Systems II: Express Briefs 66, no. 11 (November 2019): 1885–89. http://dx.doi.org/10.1109/tcsii.2019.2924663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ergen, Tolga, and Suleyman Serdar Kozat. "Unsupervised Anomaly Detection With LSTM Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 31, no. 8 (August 2020): 3127–41. http://dx.doi.org/10.1109/tnnls.2019.2935975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wei, Xiaolu, Binbin Lei, Hongbing Ouyang, and Qiufeng Wu. "Stock Index Prices Prediction via Temporal Pattern Attention and Long-Short-Term Memory." Advances in Multimedia 2020 (December 10, 2020): 1–7. http://dx.doi.org/10.1155/2020/8831893.

Full text
Abstract:
This study attempts to predict stock index prices using multivariate time series analysis. The study’s motivation is based on the notion that datasets of stock index prices involve weak periodic patterns, long-term and short-term information, for which traditional approaches and current neural networks such as Autoregressive models and Support Vector Machine (SVM) may fail. This study applied Temporal Pattern Attention and Long-Short-Term Memory (TPA-LSTM) for prediction to overcome the issue. The results show that stock index prices prediction through the TPA-LSTM algorithm could achieve better prediction performance over traditional deep neural networks, such as recurrent neural network (RNN), convolutional neural network (CNN), and long and short-term time series network (LSTNet).
APA, Harvard, Vancouver, ISO, and other styles
40

Wei, Chih-Chiang. "Comparison of River Basin Water Level Forecasting Methods: Sequential Neural Networks and Multiple-Input Functional Neural Networks." Remote Sensing 12, no. 24 (December 20, 2020): 4172. http://dx.doi.org/10.3390/rs12244172.

Full text
Abstract:
To precisely forecast downstream water levels in catchment areas during typhoons, deep learning artificial neural networks were employed to establish two water level forecasting models using sequential neural networks (SNNs) and multiple-input functional neural networks (MIFNNs). SNNs, which have a typical neural network structure, are network models constructed using sequential methods. To develop a network model capable of flexibly consolidating data, MIFNNs are employed for processing data from multiple sources or with multiple dimensions. Specifically, when images (e.g., radar reflectivity images) are used as input attributes, feature extraction is required to provide effective feature maps for model training. Therefore, convolutional layers and pooling layers were adopted to extract features. Long short-term memory (LSTM) layers adopted during model training enabled memory cell units to automatically determine the memory length, providing more useful information. The Hsintien River basin in northern Taiwan was selected as the research area, and relevant data from 2011 to 2019 were collected. The input attributes comprised one-dimensional data (e.g., water levels at river stations, rain rates at rain gauges, and reservoir release) and two-dimensional data (i.e., radar reflectivity mosaics). Typhoons Saola, Soudelor, Dujuan, and Megi were selected, and the water levels 1 to 6 h after the typhoons struck were forecasted. The results indicated that compared with linear regression (REG), SNN using dense layers (SNN-Dense), and SNN using LSTM layers (SNN-LSTM) models, superior forecasting results were achieved by the MIFNN model. Thus, the MIFNN model was identified as the optimal model for water level forecasting.
APA, Harvard, Vancouver, ISO, and other styles
41

Han, Shipeng, Zhen Meng, Xingcheng Zhang, and Yuepeng Yan. "Hybrid Deep Recurrent Neural Networks for Noise Reduction of MEMS-IMU with Static and Dynamic Conditions." Micromachines 12, no. 2 (February 20, 2021): 214. http://dx.doi.org/10.3390/mi12020214.

Full text
Abstract:
Micro-electro-mechanical system inertial measurement unit (MEMS-IMU), a core component in many navigation systems, directly determines the accuracy of an inertial navigation system; however, MEMS-IMU systems are often affected by factors such as environmental noise, electronic noise, mechanical noise, and manufacturing error. These can seriously affect the application of MEMS-IMUs in different fields. Focus has been on the MEMS gyro, since it is an essential and yet complex sensor in the MEMS-IMU which is very sensitive to noises and errors from random sources. In this study, recurrent neural networks are hybridized in four different ways for noise reduction and accuracy improvement in the MEMS gyro. These are two-layer homogeneous recurrent networks built on long short-term memory (LSTM-LSTM) and gated recurrent units (GRU-GRU), respectively, and two-layer heterogeneous deep networks built on long short-term memory followed by a gated recurrent unit (LSTM-GRU) and a gated recurrent unit followed by long short-term memory (GRU-LSTM). Practical implementation with static and dynamic experiments was carried out on a custom MEMS-IMU to validate the proposed networks. The results show that GRU-LSTM seems to overfit when tested on large amounts of three-axis gyro data in the static test; however, for the X-axis and Y-axis gyros, LSTM-GRU had the best noise reduction effect, with over 90% improvement. For the Z-axis gyroscope, LSTM-GRU performed better than LSTM-LSTM and GRU-GRU in quantization noise and angular random walk, while LSTM-LSTM shows better improvement than both GRU-GRU and LSTM-GRU in terms of zero-bias stability. In the dynamic experiments, the Hilbert spectrum analysis revealed that the time-frequency energy of the LSTM-LSTM, GRU-GRU, and GRU-LSTM denoising is higher than that of LSTM-GRU over the whole frequency domain.
Similarly, Allan variance analysis also shows that LSTM-GRU has a better denoising effect than the other networks in the dynamic experiments. Overall, the experimental results demonstrate the effectiveness of deep learning algorithms in MEMS gyro noise reduction, among which the LSTM-GRU network shows the best noise reduction effect and great potential for application in the MEMS gyroscope area.
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Qinghua, Yuexiao Yu, Hosameldin O. A. Ahmed, Mohamed Darwish, and Asoke K. Nandi. "Open-Circuit Fault Detection and Classification of Modular Multilevel Converters in High Voltage Direct Current Systems (MMC-HVDC) with Long Short-Term Memory (LSTM) Method." Sensors 21, no. 12 (June 17, 2021): 4159. http://dx.doi.org/10.3390/s21124159.

Full text
Abstract:
Fault detection and classification are two of the challenging tasks in Modular Multilevel Converters in High Voltage Direct Current (MMC-HVDC) systems. To directly classify the raw sensor data without certain feature extraction and classifier design, a long short-term memory (LSTM) neural network is proposed and used for seven states of the MMC-HVDC transmission power system simulated by Power Systems Computer Aided Design/Electromagnetic Transients including DC (PSCAD/EMTDC). It is observed that the LSTM method can detect faults with 100% accuracy and classify different faults as well as provide promising fault classification performance. Compared with a bidirectional LSTM (BiLSTM), the LSTM can get similar classification accuracy, requiring less training time and testing time. Compared with Convolutional Neural Networks (CNN) and AutoEncoder-based deep neural networks (AE-based DNN), the LSTM method can get better classification accuracy around the middle of the testing data proportion, but it needs more training time.
APA, Harvard, Vancouver, ISO, and other styles
43

Victor, Nancy, and Daphne Lopez. "sl-LSTM." International Journal of Grid and High Performance Computing 12, no. 3 (July 2020): 1–16. http://dx.doi.org/10.4018/ijghpc.2020070101.

Full text
Abstract:
The volume of data in diverse formats from various sources has led to a new trend in the digital world: Big Data. This article proposes sl-LSTM (sequence-labelling LSTM), a neural network architecture that builds on the effectiveness of typical LSTM models to perform sequence-labelling tasks. It is a bi-directional LSTM that uses stochastic gradient descent optimization and combines two features of existing LSTM variants: coupled input-forget gates, which reduce computational complexity, and peephole connections, which allow all gates to inspect the current cell state. The model is tested on different datasets, and the results show that the integration of various neural network models can further improve the efficiency of the approach for identifying sensitive information in Big Data.
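The two LSTM-variant features named here, coupled input-forget gates and peephole connections, change the cell update in a simple way. A toy single-unit sketch in plain Python (the function name and weights are illustrative; the paper's full sl-LSTM additionally trains with stochastic gradient descent and runs bidirectionally):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coupled_peephole_step(x, h_prev, c_prev, W):
    # peephole connection: the forget gate also inspects the cell state
    f = sigmoid(W["wf_x"] * x + W["wf_h"] * h_prev + W["wf_c"] * c_prev + W["bf"])
    i = 1.0 - f  # coupled input-forget gate: one gate does the work of two
    g = math.tanh(W["wg_x"] * x + W["wg_h"] * h_prev + W["bg"])
    c = f * c_prev + i * g
    # output-gate peephole looks at the freshly updated cell state
    o = sigmoid(W["wo_x"] * x + W["wo_h"] * h_prev + W["wo_c"] * c + W["bo"])
    h = o * math.tanh(c)
    return h, c

W = {k: 0.1 for k in ["wf_x", "wf_h", "wf_c", "bf", "wg_x", "wg_h", "bg",
                      "wo_x", "wo_h", "wo_c", "bo"]}
h, c = 0.0, 0.0
for x in [0.2, 0.8, -0.3]:
    h, c = coupled_peephole_step(x, h, c, W)
```

Tying `i = 1 - f` removes one gate's worth of weights, which is where the reduced computational complexity mentioned in the abstract comes from.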
APA, Harvard, Vancouver, ISO, and other styles
44

Kłosowski, Grzegorz, Anna Hoła, Tomasz Rymarczyk, Mariusz Mazurek, Konrad Niderla, and Magdalena Rzemieniak. "Using Machine Learning in Electrical Tomography for Building Energy Efficiency through Moisture Detection." Energies 16, no. 4 (February 11, 2023): 1818. http://dx.doi.org/10.3390/en16041818.

Full text
Abstract:
Wet foundations and walls of buildings significantly increase the energy consumption of buildings, and the drying of walls is one of the priority activities as part of thermal modernization, along with the insulation of the facades. This article discusses the research findings of detecting moisture decomposition within building walls utilizing electrical impedance tomography (EIT) and deep learning techniques. In particular, the focus was on algorithmic models whose task is transforming voltage measurements into spatial EIT images. Two homogeneous deep learning networks were used: CNN (Convolutional Neural Network) and LSTM (Long-Short Term Memory). In addition, a new heterogeneous (hybrid) network was built with LSTM and CNN layers. Based on the reference reconstructions’ simulation data, three separate neural network algorithmic models: CNN, LSTM, and the hybrid model (CNN+LSTM), were trained. Then, based on popular measures such as mean square error or correlation coefficient, the quality of the models was assessed with the reference images. The obtained research results showed that hybrid deep neural networks have great potential for solving the tomographic inverse problem. Furthermore, it has been proven that the proper joining of CNN and LSTM layers can improve the effect of EIT reconstructions.
APA, Harvard, Vancouver, ISO, and other styles
45

Ayyildiz, Ertugrul, and Melike Erdoğan. "Forecasting of daily dam occupancy rates using LSTM networks." World Journal of Environmental Research 12, no. 1 (May 31, 2022): 33–42. http://dx.doi.org/10.18844/wjer.v12i1.7732.

Full text
Abstract:
Due to the unconscious consumption of natural water resources and climate change, a water crisis is expected in the upcoming years. At this point, it is necessary to know the water levels in dams and to develop strategies for water-saving applications in the coming periods. This study proposes artificial neural network models for forecasting the usable water in dams. For this reason, long short-term memory (LSTM) networks, a type of recurrent neural network, are employed to make future forecasts. Daily dam occupancy rate data for İstanbul between 2005 and 2021 are used to train the LSTM network. The developed models are then used to forecast the next 30 days. For a fair comparison, the same data are used in an ARIMA model of the daily dam occupancy time series. The forecast values obtained by the proposed LSTM network are compared with the traditional method using RMSE and MAPE for all forecast horizons. The results show that the LSTM-based forecast model always achieves better accuracy than ARIMA. Keywords: ANNs, dam, forecasting, occupancy, LSTM, RNNs;
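The two comparison metrics used here, RMSE and MAPE, are straightforward to compute. A minimal sketch (the occupancy values are made up for illustration, not the study's data):

```python
import math

def rmse(actual, forecast):
    # root mean squared error
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    # mean absolute percentage error, in percent (actual values must be non-zero)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# hypothetical daily occupancy rates (%), not the study's data
actual = [52.0, 55.0, 53.0, 50.0]
forecast = [51.0, 54.0, 55.0, 51.0]
print(round(rmse(actual, forecast), 3))  # 1.323
print(round(mape(actual, forecast), 3))  # 2.379
```

RMSE is in the units of the series and penalizes large misses, while MAPE is scale-free, which is why forecasting comparisons like this one typically report both.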
APA, Harvard, Vancouver, ISO, and other styles
46

You, Yue, Woo-Hyoung Kim, and Yong-Seok Cho. "Stock Market Prediction Based on LSTM Neural Networks." Korea International Trade Research Institute 19, no. 2 (April 30, 2023): 391–407. http://dx.doi.org/10.16980/jitc.19.2.202304.391.

Full text
Abstract:
Purpose – This study aims to more accurately and effectively predict trends in portfolio prices by building a model using LSTM neural networks, and to investigate the risk and profit prediction of investment portfolios. Design/Methodology/Approach – To obtain a return on stocks, this study used 60 months of transaction data from major countries, including the United States and Korea, for five ETFs, BNDX, BND, VXUS, VTI, and 122630.KS, for five years from January 2016 to December 2021. In addition, a related portfolio was constructed using modern portfolio theory. Through Min-Max normalization, the five ETFs and closing data from April 20 to July 20, 2022 were normalized. The input data were classified into two characteristic dimensions, and an LSTM time-series model was constructed with six hidden layers. Findings – By establishing a portfolio and making regression predictions, it was possible to effectively reduce situations in which prediction accuracy was lowered due to large fluctuations in index-based stocks. Research Implications – The predicted results were tested using OLS regression analysis. The study examined the relationship between the risk of building a tangential portfolio of the same composition with different weights, the accuracy of stock price prediction achieved by effectively reducing the low prediction accuracy of highly volatile stocks in the portfolio, and changes in the assumed risk-free interest rate.
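Min-Max normalization, as used above, linearly rescales each series into a fixed range before it is fed to the LSTM. A minimal sketch (the closing prices are hypothetical, not the study's ETF data):

```python
def min_max_normalize(series, low=0.0, high=1.0):
    # linearly rescale a non-constant series into [low, high]
    lo, hi = min(series), max(series)
    return [low + (x - lo) / (hi - lo) * (high - low) for x in series]

closes = [101.2, 99.8, 103.5, 100.0, 102.1]  # hypothetical closing prices
scaled = min_max_normalize(closes)
```

In practice the `lo`/`hi` bounds are fitted on the training window only and reused for test data, so that future prices do not leak into the scaling.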
APA, Harvard, Vancouver, ISO, and other styles
47

Zhou, Lixia, Xia Chen, Runsha Dong, and Shan Yang. "Hotspots Prediction Based on LSTM Neural Network for Cellular Networks." Journal of Physics: Conference Series 1624 (October 2020): 052016. http://dx.doi.org/10.1088/1742-6596/1624/5/052016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Geng, Xuemin Yao, Jianjun Cui, Yonggang Yan, Jun Dai, and Wu Zhao. "A novel piezoelectric hysteresis modeling method combining LSTM and NARX neural networks." Modern Physics Letters B 34, no. 28 (June 16, 2020): 2050306. http://dx.doi.org/10.1142/s0217984920503066.

Full text
Abstract:
In order to study the hysteresis nonlinearity of piezoelectric actuators, a novel hybrid modeling method based on Long Short-Term Memory (LSTM) and Nonlinear Autoregressive with external input (NARX) neural networks is proposed. First, the input–output curve between the applied voltage and the produced angle of a piezoelectric tip/tilt mirror is measured. Second, two hysteresis models, an LSTM and a NARX neural network, are each established mathematically and then tested and verified experimentally. Third, a novel adaptive weighted hybrid hysteresis model combining the LSTM and NARX networks is proposed, based on an analysis and comparison of the distinct characteristics of the two models. The hybrid model combines the LSTM's ability to approximate nonlinear static hysteresis with the NARX network's strong dynamic-fitting ability. Experimental results show that the RMS errors of the hybrid model are smaller than those of the LSTM and NARX models taken individually; that is, the proposed hybrid model has relatively high accuracy. Compared with traditional differential-equation-based and operator-based hysteresis models, the presented hybrid neural network method offers higher flexibility and accuracy in modeling performance and is a more promising method for modeling piezoelectric hysteresis.
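One plausible reading of the "adaptive weighted hybrid" combination is a weighted average in which each sub-model's weight is inversely proportional to its recent RMS error; note that this particular weighting rule is an assumption for illustration, not necessarily the paper's exact scheme, and the prediction values are toy numbers:

```python
import math

def rms_error(actual, pred):
    """RMS error of a model over a recent window."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def hybrid_predict(lstm_pred, narx_pred, lstm_err, narx_err):
    """Weighted average; the smaller-error model gets the larger weight."""
    w_lstm = (1.0 / lstm_err) / (1.0 / lstm_err + 1.0 / narx_err)
    return [w_lstm * l + (1.0 - w_lstm) * n for l, n in zip(lstm_pred, narx_pred)]

# Toy tilt-angle predictions from the two sub-models, with mock recent errors.
lstm_pred, narx_pred = [10.2, 10.8, 11.1], [9.8, 10.4, 11.5]
combined = hybrid_predict(lstm_pred, narx_pred, lstm_err=0.05, narx_err=0.15)
print(combined)
```

With these errors the LSTM receives weight 0.75 and the NARX model 0.25, so the combined output stays closer to the more accurate sub-model.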
APA, Harvard, Vancouver, ISO, and other styles
49

Shewalkar, Apeksha, Deepika Nyavanandi, and Simone A. Ludwig. "Performance Evaluation of Deep Neural Networks Applied to Speech Recognition: RNN, LSTM and GRU." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (October 1, 2019): 235–45. http://dx.doi.org/10.2478/jaiscr-2019-0006.

Full text
Abstract:
Deep Neural Networks (DNNs) are neural networks with many hidden layers. DNNs are becoming popular in automatic speech recognition tasks, which combine a good acoustic model with a language model. Standard feedforward neural networks cannot handle speech data well because they have no way to feed information from a later layer back to an earlier one. Recurrent Neural Networks (RNNs) have therefore been introduced to take temporal dependencies into account. However, RNNs cannot handle long-term dependencies due to the vanishing/exploding gradient problem. Long Short-Term Memory (LSTM) networks, a special case of RNNs, were therefore introduced to capture long-term dependencies in speech in addition to short-term ones. Similarly, GRU (Gated Recurrent Unit) networks are an improvement on LSTM networks that also take long-term dependencies into consideration. In this paper, we evaluate RNN, LSTM, and GRU models to compare their performance on a reduced TED-LIUM speech data set. The results show that LSTM achieves the best word error rates; however, GRU optimization is faster while achieving word error rates close to those of LSTM.
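The speed difference between GRU and LSTM training noted in this abstract follows largely from parameter count: an LSTM cell has four gated blocks (input, forget, and output gates plus the candidate cell) while a GRU has three (update and reset gates plus the candidate state), so for the same layer sizes a GRU carries about 25% fewer recurrent parameters. A small sketch, with hypothetical layer sizes:

```python
def lstm_params(input_dim, hidden_dim):
    # 4 blocks, each with an input matrix W, recurrent matrix U, and bias b.
    return 4 * (hidden_dim * input_dim + hidden_dim * hidden_dim + hidden_dim)

def gru_params(input_dim, hidden_dim):
    # 3 blocks of the same (W, U, b) shape.
    return 3 * (hidden_dim * input_dim + hidden_dim * hidden_dim + hidden_dim)

d, h = 40, 128   # e.g. 40-dim acoustic features, 128 hidden units (assumed sizes)
print(lstm_params(d, h), gru_params(d, h))  # 86528 64896
```

Fewer parameters mean fewer gradients to compute per step, which is consistent with the faster GRU optimization reported in the paper.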
APA, Harvard, Vancouver, ISO, and other styles
50

Yadav, Omprakash, Rachael Dsouza, Rhea Dsouza, and Janice Jose. "Soccer Action video Classification using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 1060–63. http://dx.doi.org/10.22214/ijraset.2022.43929.

Full text
Abstract:
This paper proposes a deep learning approach for classifying different soccer actions, such as Goal, Yellow Card, and Soccer Juggling, from an input soccer video. The approach uses a hybrid model consisting of the VGG16 CNN and a Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network (RNN). Approximately 400 soccer clips from the 3 action classes were manually annotated for training. The VGG16 model extracts features from the frames of these clips, and the Bi-LSTM is then trained on the extracted features, Bi-LSTM being well suited to sequence-input problems such as videos. Keywords: Soccer Videos, Convolutional Neural Networks (CNNs), Recurrent Neural Network (RNN), Bidirectional Long Short-Term Memory (Bi-LSTM)
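The pipeline in this abstract can be sketched at a conceptual level: a CNN reduces each frame to a feature vector, then a bidirectional recurrence reads the frame sequence in both directions and concatenates the two final states before classification. The recurrent step below is a toy stand-in for an LSTM cell, and the per-frame features are mock values; real code would use a deep learning library's VGG16 and Bi-LSTM layers:

```python
def rnn_pass(features, step):
    """Run a recurrent step over a sequence of feature vectors."""
    state = [0.0] * len(features[0])
    for f in features:
        state = step(state, f)
    return state

def toy_step(state, frame):
    # Placeholder for an LSTM cell: blend the previous state with the frame.
    return [0.5 * s + 0.5 * x for s, x in zip(state, frame)]

frames = [[0.1, 0.9], [0.4, 0.6], [0.8, 0.2]]   # mock per-frame CNN features
forward  = rnn_pass(frames, toy_step)
backward = rnn_pass(frames[::-1], toy_step)
bi_state = forward + backward   # concatenated state fed to the classifier head
```

The concatenation is what makes the model "bidirectional": the classifier sees a summary of the clip read from both its start and its end.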
APA, Harvard, Vancouver, ISO, and other styles