Journal articles on the topic "Long Short-Term Memory network (LSTM)"

To see other types of publications on this topic, follow the link: Long Short-Term Memory network (LSTM).

Cite a source in APA, MLA, Chicago, Harvard, and other styles.

Explore the top 50 journal articles for research on the topic "Long Short-Term Memory network (LSTM)".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically create the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract online, if these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (November 1, 1997): 1735–80. http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Abstract:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
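To make the gating described in this abstract concrete, here is a minimal NumPy sketch of one step of a standard LSTM cell. It is an illustrative reconstruction, not the paper's exact formulation: the forget gate shown here was added to LSTM after 1997, and all names and shapes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x: input vector (n_in,); h_prev, c_prev: previous hidden and cell state (n_hid,).
    W: (4*n_hid, n_in), U: (4*n_hid, n_hid), b: (4*n_hid,) hold the stacked
    input-gate, forget-gate, output-gate and candidate parameters.
    """
    n_hid = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n_hid:1*n_hid])   # input gate: opens/closes write access to the cell
    f = sigmoid(z[1*n_hid:2*n_hid])   # forget gate (a later addition, not in the 1997 paper)
    o = sigmoid(z[2*n_hid:3*n_hid])   # output gate: opens/closes read access
    g = np.tanh(z[3*n_hid:4*n_hid])   # candidate cell input
    c = f * c_prev + i * g            # additive state update: the "constant error carousel"
    h = o * np.tanh(c)                # exposed hidden state
    return h, c

# Toy usage on a short random sequence.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(size=(4*n_hid, n_in))
U = rng.normal(size=(4*n_hid, n_hid))
b = np.zeros(4*n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.round(3))
```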
2

Singh, Arjun, Shashi Kant Dargar, Amit Gupta, Ashish Kumar, Atul Kumar Srivastava, Mitali Srivastava, Pradeep Kumar Tiwari, and Mohammad Aman Ullah. "Evolving Long Short-Term Memory Network-Based Text Classification." Computational Intelligence and Neuroscience 2022 (February 21, 2022): 1–11. http://dx.doi.org/10.1155/2022/4725639.

Abstract:
Recently, long short-term memory (LSTM) networks are extensively utilized for text classification. Compared to feed-forward neural networks, it has feedback connections, and thus, it has the ability to learn long-term dependencies. However, the LSTM networks suffer from the parameter tuning problem. Generally, initial and control parameters of LSTM are selected on a trial and error basis. Therefore, in this paper, an evolving LSTM (ELSTM) network is proposed. A multiobjective genetic algorithm (MOGA) is used to optimize the architecture and weights of LSTM. The proposed model is tested on a well-known factory reports dataset. Extensive analyses are performed to evaluate the performance of the proposed ELSTM network. From the comparative analysis, it is found that the LSTM network outperforms the competitive models.
3

Wang, Chen, Bingchun Liu, Jiali Chen, and Xiaogang Yu. "Air Quality Index Prediction Based on a Long Short-Term Memory Artificial Neural Network Model." 電腦學刊 34, no. 2 (April 2023): 069–79. http://dx.doi.org/10.53106/199115992023043402006.

Abstract:
Air pollution has become one of the important challenges restricting the sustainable development of cities. Therefore, it is of great significance to achieve accurate prediction of Air Quality Index (AQI). Long Short Term Memory (LSTM) is a deep learning method suitable for learning time series data. Considering its superiority in processing time series data, this study established an LSTM forecasting model suitable for air quality index forecasting. First, we focus on optimizing the feature metrics of the model input through Information Gain (IG). Second, the prediction results of the LSTM model are compared with other machine learning models. At the same time, the time step aspect of the LSTM model is examined with selective experiments to ensure that model validation works properly. The results show that compared with other machine learning models, the LSTM model constructed in this paper is more suitable for the prediction of air quality index.
4

Liu, Chen. "Long short-term memory (LSTM)-based news classification model." PLOS ONE 19, no. 5 (May 30, 2024): e0301835. http://dx.doi.org/10.1371/journal.pone.0301835.

Abstract:
In this study, we used unidirectional and bidirectional long short-term memory (LSTM) deep learning networks for Chinese news classification and characterized the effects of contextual information on text classification, achieving a high level of accuracy. A Chinese glossary was created using jieba—a word segmentation tool—stop-word removal, and word frequency analysis. Next, word2vec was used to map the processed words into word vectors, creating a convenient lookup table for word vectors that could be used as feature inputs for the LSTM model. A bidirectional LSTM (BiLSTM) network was used for feature extraction from word vectors to facilitate the transfer of information in both the backward and forward directions to the hidden layer. Subsequently, an LSTM network was used to perform feature integration on all the outputs of the BiLSTM network, with the output from the last layer of the LSTM being treated as the mapping of the text into a feature vector. The output feature vectors were then connected to a fully connected layer to construct a feature classifier using the integrated features, finally classifying the news articles. The hyperparameters of the model were optimized based on the loss between the true and predicted values using the adaptive moment estimation (Adam) optimizer. Additionally, multiple dropout layers were added to the model to reduce overfitting. As text classification models for Chinese news articles, the Bi-LSTM and unidirectional LSTM models obtained f1-scores of 94.15% and 93.16%, respectively, with the former outperforming the latter in terms of feature extraction.
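A rough Keras sketch of the pipeline this abstract describes: an embedding lookup followed by a bidirectional LSTM, a unidirectional LSTM, dropout and a fully connected softmax classifier. The word2vec lookup table is replaced here by a trainable embedding, and the vocabulary size, layer widths and class count are placeholder assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM, Dropout, Dense

VOCAB_SIZE, EMBED_DIM, SEQ_LEN, NUM_CLASSES = 50_000, 300, 200, 10  # placeholder assumptions

model = Sequential([
    Input(shape=(SEQ_LEN,), dtype="int32"),
    # In the paper the lookup table comes from word2vec; a trainable embedding stands in here.
    Embedding(VOCAB_SIZE, EMBED_DIM),
    Bidirectional(LSTM(128, return_sequences=True)),  # forward + backward context for every token
    LSTM(64),                                          # integrates the BiLSTM outputs into one vector
    Dropout(0.5),                                      # dropout layers reduce overfitting
    Dense(NUM_CLASSES, activation="softmax"),          # fully connected classifier over news classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```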
5

Zhou, Chenze. "Long Short-term Memory Applied on Amazon's Stock Prediction." Highlights in Science, Engineering and Technology 34 (February 28, 2023): 71–76. http://dx.doi.org/10.54097/hset.v34i.5380.

Abstract:
More and more investors are paying attention to how to bring data mining technology into stock investing decisions as a result of the introduction of big data and the quick expansion of financial markets. Machine learning can automatically apply complex mathematical calculations to big data repeatedly and faster. The machine model can analyze all the factors and indicators affecting stock price and achieve high efficiency. Based on the Amazon stock price published on Kaggle, this paper adopts the Long Short-term Memory (LSTM) method for model training. The Keras package in the Python program is used to normalize the data. The Sequence model in Keras establishes a two-layer LSTM network and a three-layer LSTM network to compare and analyze the fitting effect of the model on stock prices. By calculating RMSE and RMPE, the study found that the stock price prediction accuracy of the two-layer LSTM is similar to that of the three-layer LSTM. In terms of F-measure and Accuracy, the three-layer LSTM model is significantly better than the two-layer LSTM model. In general, the LSTM model can accurately predict stock price. Therefore, investors will know the upward or downward trend of stock prices in advance according to the prediction results of the model to make corresponding decisions.
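A condensed sketch of the kind of two-layer Keras LSTM the abstract describes, with min-max normalisation and a sliding window over closing prices. The data here is synthetic and every hyperparameter is an assumption, not the paper's setting; a third LSTM layer can be stacked the same way to obtain the three-layer variant.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

WINDOW = 60  # days of history used to predict the next closing price (assumed)

prices = np.cumsum(np.random.randn(1000)) + 100.0          # synthetic stand-in for closing prices
scaled = MinMaxScaler().fit_transform(prices.reshape(-1, 1))

X = np.array([scaled[i:i + WINDOW] for i in range(len(scaled) - WINDOW)])
y = scaled[WINDOW:]

model = Sequential([
    Input(shape=(WINDOW, 1)),
    LSTM(50, return_sequences=True),   # first LSTM layer passes the full sequence on
    LSTM(50),                          # second LSTM layer; a third could be stacked the same way
    Dense(1),                          # next-day scaled closing price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))
```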
6

Xu, Wei, Yanan Jiang, Xiaoli Zhang, Yi Li, Run Zhang, and Guangtao Fu. "Using long short-term memory networks for river flow prediction." Hydrology Research 51, no. 6 (October 5, 2020): 1358–76. http://dx.doi.org/10.2166/nh.2020.026.

Abstract:
Deep learning has made significant advances in methodologies and practical applications in recent years. However, there is a lack of understanding on how the long short-term memory (LSTM) networks perform in river flow prediction. This paper assesses the performance of LSTM networks to understand the impact of network structures and parameters on river flow predictions. Two river basins with different characteristics, i.e., Hun river and Upper Yangtze river basins, are used as case studies for the 10-day average flow predictions and the daily flow predictions, respectively. The use of the fully connected layer with the activation function before the LSTM cell layer can substantially reduce learning efficiency. On the contrary, non-linear transformation following the LSTM cells is required to improve learning efficiency due to the different magnitudes of precipitation and flow. The batch size and the number of LSTM cells are sensitive parameters and should be carefully tuned to achieve a balance between learning efficiency and stability. Compared with several hydrological models, the LSTM network achieves good performance in terms of three evaluation criteria, i.e., coefficient of determination, Nash–Sutcliffe Efficiency and relative error, which demonstrates its powerful capacity in learning non-linear and complex processes in hydrological modelling.
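The three evaluation criteria named at the end of this abstract can be computed directly; a small NumPy sketch follows (the relative-error definition used here is one common choice and is an assumption).

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe Efficiency: 1 is a perfect fit, 0 means no better than the mean of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination based on the linear correlation of observed and simulated flows."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

def relative_error(obs, sim):
    """Mean relative error in percent (one common definition; assumed here)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs(sim - obs) / obs)

obs = np.array([120.0, 150.0, 90.0, 200.0])
sim = np.array([115.0, 160.0, 95.0, 190.0])
print(nse(obs, sim), r_squared(obs, sim), relative_error(obs, sim))
```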
7

Kumar, Naresh, Jatin Bindra, Rajat Sharma, and Deepali Gupta. "Air Pollution Prediction Using Recurrent Neural Network, Long Short-Term Memory and Hybrid of Convolutional Neural Network and Long Short-Term Memory Models." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4580–84. http://dx.doi.org/10.1166/jctn.2020.9283.

Abstract:
Air pollution prediction was not an easy task a few years back. With the increasing computation power and wide availability of datasets, the air pollution prediction problem is solved to some extent. Inspired by deep learning models, in this paper three techniques for air pollution prediction have been proposed. The models used include recurrent neural network (RNN), long short-term memory (LSTM) and a hybrid combination of convolutional neural network (CNN) and LSTM models. These models are tested by comparing MSE loss on an air pollution test set from Belgium. The validation loss is 0.0045 for RNN, 0.00441 for LSTM and 0.0049 for CNN-LSTM. The losses on the testing dataset for these models are 0.00088, 0.00441 and 0.0049 respectively.
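A minimal Keras sketch of the CNN-LSTM hybrid idea described above, with a 1-D convolution feeding an LSTM and an MSE loss; the input shape and layer sizes are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, LSTM, Dense

TIMESTEPS, FEATURES = 24, 5   # e.g. 24 hourly readings of 5 pollutant/weather variables (assumed)

model = Sequential([
    Input(shape=(TIMESTEPS, FEATURES)),
    Conv1D(32, kernel_size=3, activation="relu"),  # convolution extracts local temporal patterns
    MaxPooling1D(pool_size=2),
    LSTM(64),                                      # LSTM models the longer-range dependencies
    Dense(1),                                      # predicted pollution level
])
model.compile(optimizer="adam", loss="mse")        # MSE loss, as compared in the abstract
model.summary()
```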
8

Song, Tianyu, Wei Ding, Jian Wu, Haixing Liu, Huicheng Zhou, and Jinggang Chu. "Flash Flood Forecasting Based on Long Short-Term Memory Networks." Water 12, no. 1 (December 29, 2019): 109. http://dx.doi.org/10.3390/w12010109.

Abstract:
Flash floods occur frequently and distribute widely in mountainous areas because of complex geographic and geomorphic conditions and various climate types. Effective flash flood forecasting with useful lead times remains a challenge due to its high burstiness and short response time. Recently, machine learning has led to substantial changes across many areas of study. In hydrology, the advent of novel machine learning methods has started to encourage novel applications or substantially improve old ones. This study aims to establish a discharge forecasting model based on Long Short-Term Memory (LSTM) networks for flash flood forecasting in mountainous catchments. The proposed LSTM flood forecasting (LSTM-FF) model is composed of T multivariate single-step LSTM networks and takes spatial and temporal dynamics information of observed and forecast rainfall and early discharge as inputs. The case study in Anhe revealed that the proposed models can effectively predict flash floods, especially the qualified rates (the ratio of the number of qualified events to the total number of flood events) of large flood events are above 94.7% at 1–5 h lead time and range from 84.2% to 89.5% at 6–10 h lead-time. For the large flood simulation, the small flood events can help the LSTM-FF model to explore a better rainfall-runoff relationship. The impact analysis of weights in the LSTM network structures shows that the discharge input plays a more obvious role in the 1-h LSTM network and the effect decreases with the lead-time. Meanwhile, in the adjacent lead-time, the LSTM networks explored a similar relationship between input and output. The study provides a new approach for flash flood forecasting and the highly accurate forecast contributes to prepare for and mitigate disasters.
9

Zoremsanga, Chawngthu, and Jamal Hussain. "An Evaluation of Bidirectional Long Short-Term Memory Model for Estimating Monthly Rainfall in India." Indian Journal Of Science And Technology 17, no. 18 (April 24, 2024): 1828–37. http://dx.doi.org/10.17485/ijst/v17i18.2505.

Abstract:
Objectives: Predicting the amount of rainfall is difficult due to its complexity and non-linearity. The objective of this study is to predict the average rainfall one month ahead using the all-India monthly average rainfall dataset from 1871 to 2016. Methods: This study proposed a Bidirectional Long Short-Term Memory (LSTM) model to predict the average monthly rainfall in India. The parameters of the models are determined using the grid search method. This study utilized the average monthly rainfall as an input, and the dataset consists of 1752 months of rainfall data prepared from thirty (30) meteorological sub-divisions in India. The model was compiled using the Mean Square Error (MSE) loss function and Adam optimizer. The models' performances were evaluated using statistical metrics such as Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). Findings: This study discovered that the proposed Bidirectional LSTM model achieved an RMSE of 240.79 and outperformed an existing Recurrent Neural Network (RNN), Vanilla LSTM and Stacked LSTM by 8%, 4% and 2% respectively. The study also finds that increasing the input time step and increasing the number of cells in the hidden layer enhanced the prediction performance of the proposed model, and the Bidirectional LSTM converges at a lower epoch compared to RNN and LSTM models. Novelty: This study applied the Bidirectional LSTM for the first time in predicting all-India monthly average rainfall and provides a new benchmark for this dataset. Keywords: Deep Learning, LSTM, Rainfall prediction, Stacked LSTM, Bidirectional LSTM
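A minimal sketch of how a grid search over the input time step and the number of hidden units might be organised around a Keras Bidirectional LSTM. The data is synthetic and the grid values, window construction and RMSE evaluation are assumptions rather than the paper's exact protocol.

```python
import numpy as np
from itertools import product
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Bidirectional, LSTM, Dense

rain = np.random.rand(1752).astype("float32")        # stand-in for 1752 months of rainfall

def make_windows(series, steps):
    X = np.array([series[i:i + steps] for i in range(len(series) - steps)])
    y = series[steps:]
    return X[..., None], y                            # shapes (samples, steps, 1) and (samples,)

best = None
for steps, units in product([6, 12], [32, 64]):       # small illustrative grid
    X, y = make_windows(rain, steps)
    model = Sequential([Input(shape=(steps, 1)),
                        Bidirectional(LSTM(units)),   # reads the window forwards and backwards
                        Dense(1)])
    model.compile(optimizer="adam", loss="mse")       # MSE loss and Adam, as in the abstract
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    rmse = float(np.sqrt(np.mean((model.predict(X, verbose=0).ravel() - y) ** 2)))
    if best is None or rmse < best[0]:
        best = (rmse, steps, units)
print("best (rmse, steps, units):", best)
```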
10

Muneer, Amgad, Rao Faizan Ali, Ahmed Almaghthawi, Shakirah Mohd Taib, Amal Alghamdi, and Ebrahim Abdulwasea Abdullah Ghaleb. "Short term residential load forecasting using long short-term memory recurrent neural network." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 5 (October 1, 2022): 5589. http://dx.doi.org/10.11591/ijece.v12i5.pp5589-5599.

Abstract:
Load forecasting plays an essential role in power system planning. The efficiency and reliability of the whole power system can be increased with proper planning and organization. Residential load forecasting is indispensable due to its increasing role in the smart grid environment. Nowadays, smart meters can be deployed at the residential level for collecting historical data consumption of residents. Although the employment of smart meters ensures large data availability, the inconsistency of load data makes it challenging and taxing to forecast accurately. Therefore, the traditional forecasting techniques may not suffice the purpose. However, a deep learning forecasting network-based long short-term memory (LSTM) is proposed in this paper. The powerful nonlinear mapping capabilities of RNN in time series make it effective along with the higher learning capabilities of long sequences of LSTM. The proposed method is tested and validated through available real-world data sets. A comparison of LSTM is then made with two traditionally available techniques, exponential smoothing and auto-regressive integrated moving average model (ARIMA). Real data from 12 houses over three months is used to evaluate and validate the performance of load forecasts performed using the three mentioned techniques. LSTM model has achieved the best results due to its higher capability of memorizing large data in time series-based predictions.
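For the classical baseline mentioned in this comparison, a short statsmodels sketch of fitting ARIMA to a load series is given below; the series is synthetic and the ARIMA order is an assumption.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hourly household load stand-in (synthetic); the paper used real smart-meter data.
load = 2.0 + np.sin(np.arange(500) * 2 * np.pi / 24) + 0.1 * np.random.randn(500)
train, test = load[:480], load[480:]

arima = ARIMA(train, order=(2, 0, 2)).fit()       # classical baseline, order assumed
arima_fc = arima.forecast(steps=len(test))
print("ARIMA RMSE:", np.sqrt(np.mean((arima_fc - test) ** 2)))
# The LSTM counterpart would be built on sliding windows of the same series,
# as in the stock-price sketch shown earlier in this list.
```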
11

Awad, Asmaa Ahmed, Ahmed Fouad Ali, and Tarek Gaber. "An improved long short term memory network for intrusion detection." PLOS ONE 18, no. 8 (August 1, 2023): e0284795. http://dx.doi.org/10.1371/journal.pone.0284795.

Abstract:
Over the years, intrusion detection system has played a crucial role in network security by discovering attacks from network traffics and generating an alarm signal to be sent to the security team. Machine learning methods, e.g., Support Vector Machine, K Nearest Neighbour, have been used in building intrusion detection systems but such systems still suffer from low accuracy and high false alarm rate. Deep learning models (e.g., Long Short-Term Memory, LSTM) have been employed in designing intrusion detection systems to address this issue. However, LSTM needs a high number of iterations to achieve high performance. In this paper, a novel, and improved version of the Long Short-Term Memory (ILSTM) algorithm was proposed. The ILSTM is based on the novel integration of the chaotic butterfly optimization algorithm (CBOA) and particle swarm optimization (PSO) to improve the accuracy of the LSTM algorithm. The ILSTM was then used to build an efficient intrusion detection system for binary and multi-class classification cases. The proposed algorithm has two phases: phase one involves training a conventional LSTM network to get initial weights, and phase two involves using the hybrid swarm algorithms, CBOA and PSO, to optimize the weights of LSTM to improve the accuracy. The performance of ILSTM and the intrusion detection system were evaluated using two public datasets (NSL-KDD dataset and LITNET-2020) under nine performance metrics. The results showed that the proposed ILSTM algorithm outperformed the original LSTM and other related deep-learning algorithms regarding accuracy and precision. The ILSTM achieved an accuracy of 93.09% and a precision of 96.86% while LSTM gave an accuracy of 82.74% and a precision of 76.49%. Also, the ILSTM performed better than LSTM in both datasets. In addition, the statistical analysis showed that ILSTM is more statistically significant than LSTM. Further, the proposed ILSTM gave better results of multiclassification of intrusion types such as DoS, Prob, and U2R attacks.
12

Tra, Nguyen Ngoc, Ho Phuoc Tien, Nguyen Thanh Dat, and Nguyen Ngoc Vu. "VN-INDEX TREND PREDICTION USING LONG-SHORT TERM MEMORY NEURAL NETWORKS." Journal of Science and Technology: Issue on Information and Communications Technology 17, no. 12.2 (December 9, 2019): 61. http://dx.doi.org/10.31130/ict-ud.2019.94.

Abstract:
The paper attempts to forecast the future trend of the Vietnam index (VN-index) by using long-short term memory (LSTM) networks. In particular, an LSTM-based neural network is employed to study the temporal dependence in time-series data of past and present VN-index values. Empirical forecasting results show that LSTM-based stock trend prediction offers an accuracy of about 60%, which outperforms moving-average-based prediction.
13

Bhandarkar, Tanvi, Vardaan K, Nikhil Satish, S. Sridhar, R. Sivakumar, and Snehasish Ghosh. "Earthquake trend prediction using long short-term memory RNN." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 2 (April 1, 2019): 1304. http://dx.doi.org/10.11591/ijece.v9i2.pp1304-1312.

Abstract:
The prediction of a natural calamity such as earthquakes has been an area of interest for a long time, but accurate results in earthquake forecasting have evaded scientists, even leading some to deem it intrinsically impossible to forecast them accurately. In this paper, an attempt is made to forecast earthquakes and trends using data from a series of past earthquakes. A type of recurrent neural network called Long Short-Term Memory (LSTM) is used to model the sequence of earthquakes. The trained model is then used to predict the future trend of earthquakes. An ordinary Feed Forward Neural Network (FFNN) solution for the same problem was done for comparison. The LSTM neural network was found to outperform the FFNN. The R^2 score of the LSTM is better than the FFNN's by 59%.
14

Hu, Sile, Wenbin Cai, Jun Liu, Hao Shi, and Jiawei Yu. "Refining Short-Term Power Load Forecasting: An Optimized Model with Long Short-Term Memory Network." Journal of Computing and Information Technology 31, no. 3 (April 4, 2024): 151–66. http://dx.doi.org/10.20532/cit.2023.1005730.

Abstract:
Short-term power load forecasting involves the stable operation and optimal scheduling of the power system. Accurate load forecasting can improve the safety and economy of the power grid. Therefore, how to predict power load quickly and accurately has become one of the urgent problems to be solved. Based on optimization parameter selection and data preprocessing of the improved Long Short-Term Memory Network, the study first integrated the particle swarm optimization algorithm to achieve parameter optimization. Then, combined with a convolutional neural network, the power load data were processed to optimize the data and reduce noise, thereby enhancing model performance. Finally, simulation experiments were conducted. The PSO-CNN-LSTM model was tested on the GEFC dataset and demonstrated stability of up to 90%. This was 22% higher than the competing CNN-LSTM model and at least 30% higher than the LSTM model. When the PSO-CNN-LSTM model was trained with a step size of 1.9×10^4, the relative mean square error was 0.2345×10^-4. However, when the CNN-LSTM and LSTM models were trained for more than 2.0×10^4 steps, they still did not achieve the target effect. In addition, the fitting error of the PSO-CNN-LSTM model on the GEFC dataset was less than 1.0×10^-7. In power load forecasting, the PSO-CNN-LSTM model's predicted results had an average absolute error of less than 1.0% when compared to actual data. This was an improvement of at least 0.8% compared to the average absolute error of the CNN-LSTM prediction model. These experiments confirmed that the prediction model combining the two methods further improved the speed and accuracy of power load prediction compared to traditional prediction models, providing more guarantees for safe and stable operation of the power system.
15

Lv, Liujia, Weijian Kong, Jie Qi, and Jue Zhang. "An improved long short-term memory neural network for stock forecast." MATEC Web of Conferences 232 (2018): 01024. http://dx.doi.org/10.1051/matecconf/201823201024.

Abstract:
This paper presents an improved long short-term memory (LSTM) neural network based on particle swarm optimization (PSO), which is applied to predict the closing price of a stock. PSO is introduced to optimize the weights of the LSTM neural network, which reduces the prediction error. After preprocessing the historical data of the stock, including five attributes (opening price, closing price, highest price, lowest price, and daily volume), we train the LSTM by employing time series of the historical data. Finally, we apply the proposed LSTM to predict the closing price of the stock in the last two years. Compared with typical algorithms by simulation, we find the LSTM has better performance in reliability and adaptability, and the improved PSO-LSTM algorithm has better accuracy.
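Below is a compact sketch of the plain PSO update loop. It minimises a stand-in objective; in the setting described above, the objective would instead be the LSTM's prediction error evaluated with the candidate weights, and the inertia and acceleration coefficients here are assumed textbook values.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_particles, dim))    # candidate parameter vectors
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([objective(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in objective; in the paper's setting this would be the LSTM prediction error
# computed with the candidate weights.
best, best_val = pso(lambda p: np.sum((p - 0.3) ** 2), dim=4)
print(best.round(3), best_val)
```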
16

Li, Siyao, Rui Qin, and Zijian Zhou. "Movie sentiment analysis based on Long Short-Term Memory Network." Applied and Computational Engineering 38, no. 1 (January 22, 2024): 16–25. http://dx.doi.org/10.54254/2755-2721/38/20230524.

Abstract:
An important task in the study of Natural Language Processing (NLP) is the analysis of movie reviews. It involves classifying movie review texts by sentiment, such as positive, negative or neutral. Previous works mainly follow the pipeline of LSTM (Long Short-Term Memory Network). This network model is a variant of the Recurrent Neural Network (RNN) and is particularly suitable for processing natural language texts. Though existing LSTM-based works have improved performance significantly, we argue that most of them deal with the problem of analyzing the sentiment of movie reviews while ignoring the model's performance in different application scenarios, such as different lengths of the reviews and the frequency of sentiment adverbs in the reviews. To alleviate the above issue, in this paper we constructed a simple LSTM model containing an embedding layer, a batch normalization layer, a dropout layer, a one-dimensional convolutional layer, a maximal pooling layer, a bi-directional LSTM layer and a fully connected layer. We used the existing IMDB movie review dataset to train the model, and selected two research scenarios, movie review length and frequency of occurrence of sentiment adverbs, to test the model. Based on the experimental results, we propose a model for the scenarios in which the LSTM model handles the problem of sentiment analysis, with respect to dataset construction, model stability and generalization ability, text fragment processing, data preprocessing and feature extraction, and model optimization and improvement.
17

Shankar, Sonali, P. Vigneswara Ilavarasan, Sushil Punia, and Surya Prakash Singh. "Forecasting container throughput with long short-term memory networks." Industrial Management & Data Systems 120, no. 3 (December 4, 2019): 425–41. http://dx.doi.org/10.1108/imds-07-2019-0370.

Abstract:
Purpose Better forecasting always leads to better management and planning of the operations. The container throughput data are complex and often have multiple seasonality. This makes it difficult to forecast accurately. The purpose of this paper is to forecast container throughput using deep learning methods and benchmark its performance over other traditional time-series methods. Design/methodology/approach In this study, long short-term memory (LSTM) networks are implemented to forecast container throughput. The container throughput data of the Port of Singapore are used for empirical analysis. The forecasting performance of the LSTM model is compared with seven different time-series forecasting methods, namely, autoregressive integrated moving average (ARIMA), simple exponential smoothing, Holt–Winter’s, error-trend-seasonality, trigonometric regressors (TBATS), neural network (NN) and ARIMA + NN. The relative error matrix is used to analyze the performance of the different models with respect to bias, accuracy and uncertainty. Findings The results showed that LSTM outperformed all other benchmark methods. From a statistical perspective, the Diebold–Mariano test is also conducted to further substantiate better forecasting performance of LSTM over other counterpart methods. Originality/value The proposed study is a contribution to the literature on the container throughput forecasting and adds value to the supply chain theory of forecasting. Second, this study explained the architecture of the deep-learning-based LSTM method and discussed in detail the steps to implement it.
18

Bi, Ruoxue. "Long-term and short-term memory network based movie comment sentiment analysis." Applied and Computational Engineering 36, no. 1 (January 22, 2024): 150–55. http://dx.doi.org/10.54254/2755-2721/36/20230437.

Abstract:
This paper proposes an emotional analysis method for movie reviews based on the Long-term and Short-term Memory (LSTM) network model. Emotional analysis is widely used in movie recommendation systems, which can recommend and judge movies by understanding the audience's emotional response to them. However, due to the characteristics of movie text and the complexity of emotional expression, traditional methods such as machine learning have limitations and shortcomings in emotional analysis. In contrast, the method proposed in this paper exploits the LSTM model's better memory and its ability to capture long-term correlations in movie texts, which clearly improves the accuracy and reliability of emotional analysis and demonstrates the advantages of the LSTM model over traditional models. Future research can further explore other deep learning models and algorithms, so as to make emotional analysis more accurate and provide users with reliable movie recommendation information.
19

Zhang, Suqin. "Stock price prediction based on the long short-term memory network." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 28–32. http://dx.doi.org/10.54254/2755-2721/18/20230958.

Abstract:
Stock analysis is a challenging task that involves modelling complex and nonlinear dynamics of stock prices and volumes. Long Short-Term Memory (LSTM) is a type of recurrent neural network that can capture long-term dependencies and temporal patterns in time series data. In this paper, a stock analysis method based on LSTM is proposed that can predict future stock prices and transactions using historical data. Yfinance is used to obtain stock data of four technology companies (i.e., Apple, Google, Microsoft, and Amazon), and LSTM is applied to extract features and forecast trends. Various techniques are also used, such as moving average, correlation analysis, and risk assessment, to evaluate the performance and risk of different stocks. When comparing the method in this paper with other neural network models such as RNN and GRU, the results show that LSTM achieves better accuracy and stability in stock prediction. This paper demonstrates the effectiveness and applicability of the LSTM method through experiments on real-world data sets.
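A short sketch of the data path this abstract describes: yfinance downloads the price history (network access and the ticker are assumptions) and a small Keras LSTM is fitted on sliding windows of the scaled closing prices.

```python
import numpy as np
import yfinance as yf
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

# Download two years of Apple prices (requires network access; ticker is illustrative).
close = yf.download("AAPL", period="2y")["Close"].dropna().to_numpy().reshape(-1, 1)
scaled = MinMaxScaler().fit_transform(close)

WINDOW = 60
X = np.array([scaled[i:i + WINDOW] for i in range(len(scaled) - WINDOW)])
y = scaled[WINDOW:]

model = Sequential([Input(shape=(WINDOW, 1)), LSTM(64), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-day (scaled) close:", model.predict(scaled[-WINDOW:][None, ...], verbose=0)[0, 0])
```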
20

Sun, Lichao, Hang Qin, Krzysztof Przystupa, Michal Majka, and Orest Kochan. "Individualized Short-Term Electric Load Forecasting Using Data-Driven Meta-Heuristic Method Based on LSTM Network." Sensors 22, no. 20 (October 17, 2022): 7900. http://dx.doi.org/10.3390/s22207900.

Abstract:
Short-term load forecasting is viewed as a promising technology for demand prediction under the most critical inputs for the promising arrangement of power plant units. Thus, it is imperative to present new incentive methods to motivate such power system operations for electricity management. This paper proposes an approach for short-term electric load forecasting using long short-term memory networks and an improved sine cosine algorithm called MetaREC. First, using long short-term memory networks as a special kind of recurrent neural network, the dispatching commands have the characteristics of storing and transmitting both long-term and short-term memories. Next, four important parameters are determined using the sine cosine algorithm based on a logistic chaos operator and multilevel modulation factor to overcome the inaccuracy of long short-term memory networks prediction, in terms of the manual selection of parameter values. Moreover, the performance of the MetaREC method outperforms others with regard to convergence accuracy and convergence speed on a variety of test functions. Finally, our analysis is extended to the scenario of the MetaREC long short-term memory with back propagation neural network, long short-term memory networks with default parameters, long short-term memory networks with the conventional sine-cosine algorithm, and long short-term memory networks with whale optimization for power load forecasting on a real electric load dataset. Simulation results demonstrate that the multiple forecasts with MetaREC long short-term memory can effectively incentivize high accuracy and stability for short-term power load forecasting.
21

Han, Mingchong, Aiguo Tan, and Jianwei Zhong. "Application of Particle Swarm Optimization Combined with Long and Short-term Memory Networks for Short-term Load Forecasting." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012047. http://dx.doi.org/10.1088/1742-6596/2203/1/012047.

Abstract:
In this paper, we apply the Long Short-Term Memory (LSTM) network to short-term load forecasting and use the TensorFlow deep learning framework to build a Particle Swarm Optimization (PSO) model to optimize the parameters of the LSTM. We use the meteorological data and historical load data of a certain place as the input of the LSTM before and after optimization, and compare the model with the BP Neural Network before and after optimization; the results show that the PSO-LSTM model has higher reliability and prediction accuracy.
22

Deshpande, Vivek. "Implementation of Long Short-Term Memory (LSTM) Networks for Stock Price Prediction." Research Journal of Computer Systems and Engineering 4, no. 2 (December 31, 2023): 60–72. http://dx.doi.org/10.52710/rjcse.74.

Abstract:
In this research, we explore the potential of Long Short-Term Memory (LSTM) networks for predicting stock prices. Due to the complexities of the financial markets and the inherent volatility of stock prices, accurate forecasting is now crucial for investors and financial specialists. It has been shown that LSTM, a type of recurrent neural network (RNN), can recognise temporal correlations and patterns in serial data. Training and assessing LSTM models in this work involves analysing stock price data, relevant financial measures, and market sentiment indicators. We looked into other ideas, hyper parameters, and preprocessing methods to see if we might boost the networks' performance. To further improve the model's generalizability, we utilise series normalisation and removal to reduce overfitting. The outcomes demonstrate that the LSTM network outperforms more standard series temporal prediction methods in capturing and anticipating shifts in action pricing. We also conduct extensive back testing and evaluation, using measures like mean squared error (MSE) and mean absolute error (MAE), to assess the model's accuracy and resilience. The results of this study shed light on how deep learning techniques, in particular LSTM networks, can be applied to the prediction of stock prices, potentially assisting traders, investors, and other financial decision-makers in navigating complex and volatile financial markets.
23

Wei, Xiaolu, Binbin Lei, Hongbing Ouyang, and Qiufeng Wu. "Stock Index Prices Prediction via Temporal Pattern Attention and Long-Short-Term Memory." Advances in Multimedia 2020 (December 10, 2020): 1–7. http://dx.doi.org/10.1155/2020/8831893.

Abstract:
This study attempts to predict stock index prices using multivariate time series analysis. The study’s motivation is based on the notion that datasets of stock index prices involve weak periodic patterns, long-term and short-term information, for which traditional approaches and current neural networks such as Autoregressive models and Support Vector Machine (SVM) may fail. This study applied Temporal Pattern Attention and Long-Short-Term Memory (TPA-LSTM) for prediction to overcome the issue. The results show that stock index prices prediction through the TPA-LSTM algorithm could achieve better prediction performance over traditional deep neural networks, such as recurrent neural network (RNN), convolutional neural network (CNN), and long and short-term time series network (LSTNet).
24

Kratzert, Frederik, Daniel Klotz, Claire Brenner, Karsten Schulz, and Mathew Herrnegger. "Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks." Hydrology and Earth System Sciences 22, no. 11 (November 22, 2018): 6005–22. http://dx.doi.org/10.5194/hess-22-6005-2018.

Abstract:
Rainfall–runoff modelling is one of the key challenges in the field of hydrology. Various approaches exist, ranging from physically based over conceptual to fully data-driven models. In this paper, we propose a novel data-driven approach, using the Long Short-Term Memory (LSTM) network, a special type of recurrent neural network. The advantage of the LSTM is its ability to learn long-term dependencies between the provided input and output of the network, which are essential for modelling storage effects in e.g. catchments with snow influence. We use 241 catchments of the freely available CAMELS data set to test our approach and also compare the results to the well-known Sacramento Soil Moisture Accounting Model (SAC-SMA) coupled with the Snow-17 snow routine. We also show the potential of the LSTM as a regional hydrological model in which one model predicts the discharge for a variety of catchments. In our last experiment, we show the possibility to transfer process understanding, learned at regional scale, to individual catchments and thereby increase model performance when compared to an LSTM trained only on the data of single catchments. Using this approach, we were able to achieve better model performance than the SAC-SMA + Snow-17, which underlines the potential of the LSTM for hydrological modelling applications.
25

Balmuri, Kavitha Rani, Srinivas Konda, Wen-Cheng Lai, Parameshachari Bidare Divakarachari, Kavitha Malali Vishveshwarappa Gowda, and Hemalatha Kivudujogappa Lingappa. "A Long Short-Term Memory Network-Based Radio Resource Management for 5G Network." Future Internet 14, no. 6 (June 14, 2022): 184. http://dx.doi.org/10.3390/fi14060184.

Abstract:
Nowadays, the Long-Term Evolution-Advanced system is widely used to provide 5G communication due to its improved network capacity and less delay during communication. The main issues in the 5G network are insufficient user resources and burst errors, because it creates losses in data transmission. In order to overcome this, an effective Radio Resource Management (RRM) is required to be developed in the 5G network. In this paper, the Long Short-Term Memory (LSTM) network is proposed to develop the radio resource management in the 5G network. The proposed LSTM-RRM is used for assigning an adequate power and bandwidth to the desired user equipment of the network. Moreover, the Grid Search Optimization (GSO) is used for identifying the optimal hyperparameter values for LSTM. In radio resource management, a request queue is used to avoid the unwanted resource allocation in the network. Moreover, the losses during transmission are minimized by using frequency interleaving and guard level insertion. The performance of the LSTM-RRM method has been analyzed in terms of throughput, outage percentage, dual connectivity, User Sum Rate (USR), Threshold Sum Rate (TSR), Outdoor Sum Rate (OSR), threshold guaranteed rate, indoor guaranteed rate, and outdoor guaranteed rate. The indoor guaranteed rate of LSTM-RRM for 1400 m of building distance improved up to 75.38% compared to the existing QOC-RRM.
26

Min, Huasong, Ziming Chen, Bin Fang, Ziwei Xia, Yixu Song, Zongtao Wang, Quan Zhou, Fuchun Sun, and Chunfang Liu. "Cross-Individual Gesture Recognition Based on Long Short-Term Memory Networks." Scientific Programming 2021 (July 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/6680417.

Abstract:
Gestures recognition based on surface electromyography (sEMG) has been widely used for human-computer interaction. However, there are few research studies on overcoming the influence of physiological factors among different individuals. In this paper, a cross-individual gesture recognition method based on long short-term memory (LSTM) networks is proposed, named cross-individual LSTM (CI-LSTM). CI-LSTM has a dual-network structure, including a gesture recognition module and an individual recognition module. By designing the loss function, the individual information recognition module assists the gesture recognition module to train, which tends to orthogonalize the gesture features and individual features to minimize the impact of individual information differences on gesture recognition. Through cross-individual gesture recognition experiments, it is verified that compared with other selected algorithm models, the recognition accuracy obtained by using the CI-LSTM model can be improved by an average of 9.15%. Compared with other models, CI-LSTM can overcome the influence of individual characteristics and complete the task of cross-individual hand gestures recognition. Based on the proposed model, online control of the prosthetic hand is realized.
27

Wang, Jianyong, Lei Zhang, Yuanyuan Chen, and Zhang Yi. "A New Delay Connection for Long Short-Term Memory Networks." International Journal of Neural Systems 28, no. 06 (June 24, 2018): 1750061. http://dx.doi.org/10.1142/s0129065717500617.

Abstract:
Connections play a crucial role in neural network (NN) learning because they determine how information flows in NNs. Suitable connection mechanisms may extensively enlarge the learning capability and reduce the negative effect of gradient problems. In this paper, a new delay connection is proposed for the Long Short-Term Memory (LSTM) unit to develop a more sophisticated recurrent unit, called Delay Connected LSTM (DCLSTM). The proposed delay connection brings two main merits to DCLSTM while introducing no extra parameters. First, it allows the output of the DCLSTM unit to maintain LSTM, which is absent in the LSTM unit. Second, the proposed delay connection helps to bridge the error signals to previous time steps and allows them to be back-propagated across several layers without vanishing too quickly. To evaluate the performance of the proposed delay connections, the DCLSTM model with and without peephole connections was compared with four state-of-the-art recurrent models on two sequence classification tasks. The DCLSTM model outperformed the other models with higher accuracy and F1-score. Furthermore, networks with multiple stacked DCLSTM layers and the standard LSTM layer were evaluated on Penn Treebank (PTB) language modeling. The DCLSTM model achieved lower perplexity (PPL)/bit-per-character (BPC) than the standard LSTM model. The experiments demonstrate that the learning of the DCLSTM models is more stable and efficient.
28

Alamri, Nawaf Mohammad H., Michael Packianather, and Samuel Bigot. "Optimizing the Parameters of Long Short-Term Memory Networks Using the Bees Algorithm." Applied Sciences 13, no. 4 (February 16, 2023): 2536. http://dx.doi.org/10.3390/app13042536.

Abstract:
Improving the performance of Deep Learning (DL) algorithms is a challenging problem. However, DL is applied to different types of Deep Neural Networks, and Long Short-Term Memory (LSTM) is one of them that deals with time series or sequential data. This paper attempts to overcome this problem by optimizing LSTM parameters using the Bees Algorithm (BA), which is a nature-inspired algorithm that mimics the foraging behavior of honey bees. In particular, it was used to optimize the adjustment factors of the learning rate in the forget, input, and output gates, in addition to cell candidate, in both forward and backward sides. Furthermore, the BA was used to optimize the learning rate factor in the fully connected layer. In this study, artificial porosity images were used for testing the algorithms; since the input data were images, a Convolutional Neural Network (CNN) was added in order to extract the features in the images to feed into the LSTM for predicting the percentage of porosity in the sequential layers of artificial porosity images that mimic real CT scan images of products manufactured by the Selective Laser Melting (SLM) process. Applying a Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) yielded a porosity prediction accuracy of 93.17%. Although using Bayesian Optimization (BO) to optimize the LSTM parameters mentioned previously did not improve the performance of the LSTM, as the prediction accuracy was 93%, adding the BA to optimize the same LSTM parameters did improve its performance in predicting the porosity, with an accuracy of 95.17% where a hybrid Bees Algorithm Convolutional Neural Network Long Short-Term Memory (BA-CNN-LSTM) was used. Furthermore, the hybrid BA-CNN-LSTM algorithm was capable of dealing with classification problems as well. This was shown by applying it to Electrocardiogram (ECG) benchmark images, which improved the test set classification accuracy, which was 92.50% for the CNN-LSTM algorithm and 95% for both the BO-CNN-LSTM and BA-CNN-LSTM algorithms. In addition, the turbofan engine degradation simulation numerical dataset was used to predict the Remaining Useful Life (RUL) of the engines using the LSTM network. A CNN was not needed in this case, as there was no feature extraction for the images. However, adding the BA to optimize the LSTM parameters improved the prediction accuracy in the testing set for the LSTM and BO-LSTM, which increased from 74% to 77% for the hybrid BA-LSTM algorithm.
29

BALOGLU, ULAS BARAN, and ÖZAL YILDIRIM. "CONVOLUTIONAL LONG-SHORT TERM MEMORY NETWORKS MODEL FOR LONG DURATION EEG SIGNAL CLASSIFICATION." Journal of Mechanics in Medicine and Biology 19, no. 01 (February 2019): 1940005. http://dx.doi.org/10.1142/s0219519419400050.

Abstract:
Background and objective: Deep learning structures have recently achieved remarkable success in the field of machine learning. Convolutional neural networks (CNN) in image processing and long-short term memory (LSTM) in the time-series analysis are commonly used deep learning algorithms. Healthcare applications of deep learning algorithms provide important contributions for computer-aided diagnosis research. In this study, a convolutional long-short term memory (CLSTM) network was used for automatic classification of EEG signals and automatic seizure detection. Methods: A new nine-layer deep network model consisting of convolutional and LSTM layers was designed. The signals processed in the convolutional layers were given as an input to the LSTM network whose outputs were processed in densely connected neural network layers. The EEG data is appropriate for a model having 1-D convolution layers. A bidirectional model was employed in the LSTM layer. Results: The Bonn University EEG database with five different datasets was used for experimental studies. In this database, each dataset contains 100 single-channel EEG segments of 23.6 s duration, each consisting of 4097 samples (173.61 Hz). Eight two-class and three three-class clinical scenarios were examined. When the experimental results were evaluated, it was seen that the proposed model had high accuracy on both binary and ternary classification tasks. Conclusions: The proposed end-to-end learning structure showed a good performance without using any hand-crafted feature extraction or shallow classifiers to detect the seizures. The model does not require filtering, and also automatically learns to filter the input as well. As a result, the proposed model can process long duration EEG signals without applying segmentation, and can detect epileptic seizures automatically by using the correlation of ictal and interictal signals of raw data.
30

Sugiartawan, Putu, Agus Aan Jiwa Permana, and Paholo Iman Prakoso. "Forecasting Kunjungan Wisatawan Dengan Long Short Term Memory (LSTM)." Jurnal Sistem Informasi dan Komputer Terapan Indonesia (JSIKTI) 1, no. 1 (September 30, 2018): 43–52. http://dx.doi.org/10.33173/jsikti.5.

Abstract:
Bali is one of the favorite tourist destinations in Indonesia, with around 4 million foreign tourists visiting Bali over 2015 (Dispar Bali). These visits are spread across the various regions and tourist attractions located in Bali. Although tourist visits to Bali can be said to be large, they are not evenly distributed, and there are significant fluctuations in tourist visits. Forecasting techniques can uncover the pattern of tourist visits: by learning the patterns in previous data, the next data pattern can be anticipated. This study uses a recurrent neural network technique to predict the level of tourist visits. One recurrent neural network (RNN) technique used in this study is Long Short-Term Memory (LSTM). This model is better than a simple RNN model. In this study, the level of tourist visits is predicted using the LSTM algorithm, and the data used are data on tourist visits to one of the attractions in Bali. The result obtained using the LSTM model is an error value of 15.962, measured with the MAPE technique. The LSTM architecture used consists of 16 neuron units in the hidden layer, a learning rate of 0.01, a window size of 3, and one hidden layer.
31

Noor, Fahima, Sanaulla Haq, Mohammed Rakib, Tarik Ahmed, Zeeshan Jamal, Zakaria Shams Siam, Rubyat Tasnuva Hasan, Mohammed Sarfaraz Gani Adnan, Ashraf Dewan, and Rashedur M. Rahman. "Water Level Forecasting Using Spatiotemporal Attention-Based Long Short-Term Memory Network." Water 14, no. 4 (February 17, 2022): 612. http://dx.doi.org/10.3390/w14040612.

Abstract:
Bangladesh is in the floodplains of the Ganges, Brahmaputra, and Meghna River delta, crisscrossed by an intricate web of rivers. Although the country is highly prone to flooding, the use of state-of-the-art deep learning models in predicting river water levels to aid flood forecasting is underexplored. Deep learning and attention-based models have shown high potential for accurately forecasting floods over space and time. The present study aims to develop a long short-term memory (LSTM) network and its attention-based architectures to predict flood water levels in the rivers of Bangladesh. The models developed in this study incorporated gauge-based water level data over 7 days for flood prediction at Dhaka and Sylhet stations. This study developed five models: artificial neural network (ANN), LSTM, spatial attention LSTM (SALSTM), temporal attention LSTM (TALSTM), and spatiotemporal attention LSTM (STALSTM). The multiple imputation by chained equations (MICE) method was applied to address missing data in the time series analysis. The results showed that the use of both spatial and temporal attention together increases the predictive performance of the LSTM model, which outperforms other attention-based LSTM models. The STALSTM-based flood forecasting system, developed in this study, could inform flood management plans to accurately predict floods in Bangladesh and elsewhere.
32

Izzadiana, Helma Syifa, Herlina Napitupulu, and Firdaniza Firdaniza. "Peramalan Data Univariat Menggunakan Metode Long Short Term Memory." SisInfo : Jurnal Sistem Informasi dan Informatika 5, no. 2 (August 18, 2023): 29–39. http://dx.doi.org/10.37278/sisinfo.v5i2.669.

Abstract:
Univariate data forecasting refers to predicting future values of data with a single independent variable based on values observed in the past. This study aims to obtain a model built using a supervised deep learning approach, namely the Long Short Term Memory (LSTM) method, applied to univariate data. The LSTM method is a development of the Recurrent Neural Network (RNN) method that adds three gates able to select the information needed to train the cell, thereby reducing the likelihood of exploding and vanishing gradients. The model is built with an LSTM input layer with cell units and a dense output layer, with additional hyperparameter tuning set using an optimizer, activation functions, and the number of epochs. The performance of the forecasting model is tested using the mean absolute percentage error (MAPE).
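The MAPE used to evaluate the model can be computed as in this small sketch (the example numbers are arbitrary):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(mape([100, 120, 80], [110, 115, 85]))  # -> about 6.8
```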
33

Zhang, Jun, Xiyao Cao, Jiemin Xie, and Pangao Kou. "An Improved Long Short-Term Memory Model for Dam Displacement Prediction." Mathematical Problems in Engineering 2019 (April 24, 2019): 1–14. http://dx.doi.org/10.1155/2019/6792189.

Abstract:
Displacement plays a vital role in dam safety monitoring data, which adequately responds to security risks such as the flood water pressure, extreme temperature, structure deterioration, and bottom bedrock damage. To make accurate predictions, former researchers established various models. However, these models’ input variables cannot efficiently reflect the delays between the external environment and displacement. Therefore, a long short-term memory (LSTM) model is proposed to make full use of the historical data to reflect the delays. Furthermore, the LSTM model is improved to optimize the performance by making variables more physically reasonable. Finally, a real-world radial displacement dataset is used to compare the performance of LSTM models, multiple linear regression (MLR), multilayer perceptron (MLP) neural networks, support vector machine (SVM), and boosted regression tree (BRT). The results indicate that (1) the LSTM models can efficiently reflect the delays and make the variables selection more convenient and (2) the improved LSTM model achieves the best performance by optimizing the input form and network structure based on a clearer physical meaning.
34

Lei, Tengfei, Rita Yi Man Li, Nuttapong Jotikastira, Haiyan Fu, and Cong Wang. "Prediction for the Inventory Management Chaotic Complexity System Based on the Deep Neural Network Algorithm." Complexity 2023 (May 12, 2023): 1–11. http://dx.doi.org/10.1155/2023/9369888.

Abstract:
Precise inventory prediction is the key to goods inventory and safety management. Accurate inventory prediction improves enterprises' production efficiency. It is also essential to control costs and optimize the supply chain's performance. Nevertheless, the complex inventory data are often chaotic and nonlinear; high data complexity raises the accuracy prediction difficulty. This study simulated inventory records by using the dynamics inventory management system. Four deep neural network models trained the data: the long short-term memory neural network (LSTM), convolutional neural network-long short-term memory (CNN-LSTM), bidirectional long short-term memory neural network (Bi-LSTM), and deep long short-term memory neural network (DLSTM). Evaluating the models' performance based on RMSE, MSE, and MAE, Bi-LSTM achieved the highest prediction accuracy with the least square error of 0.14%. The results concluded that the complexity of the model was not directly related to the prediction performance. By contrasting several methods of chaotic nonlinear inventory data and neural network dynamics prediction, this study contributed to the academia. The research results provided useful advice for companies' planned production and inventory officers when they plan for product inventory and minimize the risk of mishaps brought on by excess inventories in warehouses.
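The four architectures compared in this abstract can be sketched side by side in Keras; the layer widths and the input window of 30 records with one feature are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense, Conv1D, MaxPooling1D, Bidirectional

def lstm(shape):       # plain LSTM
    return Sequential([Input(shape=shape), LSTM(64), Dense(1)])

def cnn_lstm(shape):   # convolutional front-end feeding an LSTM
    return Sequential([Input(shape=shape), Conv1D(32, 3, activation="relu"),
                       MaxPooling1D(2), LSTM(64), Dense(1)])

def bi_lstm(shape):    # bidirectional LSTM
    return Sequential([Input(shape=shape), Bidirectional(LSTM(64)), Dense(1)])

def deep_lstm(shape):  # stacked ("deep") LSTM
    return Sequential([Input(shape=shape), LSTM(64, return_sequences=True),
                       LSTM(64), Dense(1)])

for name, build in [("LSTM", lstm), ("CNN-LSTM", cnn_lstm),
                    ("Bi-LSTM", bi_lstm), ("DLSTM", deep_lstm)]:
    model = build((30, 1))          # 30 past inventory records, one feature (assumed)
    model.compile(optimizer="adam", loss="mse")
    print(name, "params:", model.count_params())
```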
35

Putera Khano, Muhammad Nazhif Abda, Dewi Retno Sari Saputro, Sutanto Sutanto, and Antoni Wibowo. "SENTIMENT ANALYSIS WITH LONG-SHORT TERM MEMORY (LSTM) AND GATED RECURRENT UNIT (GRU) ALGORITHMS." BAREKENG: Jurnal Ilmu Matematika dan Terapan 17, no. 4 (December 19, 2023): 2235–42. http://dx.doi.org/10.30598/barekengvol17iss4pp2235-2242.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Sentiment analysis is a machine learning task that extracts the emotional polarity or tendency of text data. Sentiment analysis is needed to analyze opinions, sentiments, reviews, and criticism directed at a product, service, organization, topic, etc. The Recurrent Neural Network (RNN) is one of the Natural Language Processing (NLP) algorithms used in sentiment analysis. An RNN is a neural network that can use internal memory to process input, but it has a weakness in handling Long-Term Memory (LTM). Therefore, this article examines the combination of the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms. The GRU allows each recurrent unit to adaptively capture dependencies at different time scales. Meanwhile, LSTM is a network architecture with the advantage of learning long-term dependencies in data: it can retain long-term memory information, learn long sequential data, and form relational information in LTM. The combination of LSTM and GRU aims to overcome the RNN's weakness in LTM. The LSTM and GRU are combined by applying the GRU to the sequence generated by the LSTM. The combination of LSTM and GRU creates a better-performing algorithm for addressing the LTM problem.
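A minimal sketch of one way to stack the two units for sentiment classification, assuming Keras; the vocabulary size, sequence length, and unit counts are placeholders, and the exact combination scheme in the article may differ:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100,)),                         # padded sequences of token ids
    tf.keras.layers.Embedding(input_dim=20000, output_dim=128),
    tf.keras.layers.LSTM(64, return_sequences=True),             # LSTM produces a sequence ...
    tf.keras.layers.GRU(32),                                     # ... which the GRU consumes
    tf.keras.layers.Dense(1, activation="sigmoid"),              # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])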
36

Wang, Lipeng. "An Improved Long Short-Term Memory Neural Network for Macroeconomic Forecast." Revue d'Intelligence Artificielle 34, no. 5 (November 20, 2020): 577–84. http://dx.doi.org/10.18280/ria.340507.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The statistics and cyclical swings of macroeconomics are necessary for exploring the internal laws and features of the market economy. To realize intelligent and efficient macroeconomic forecasting, this paper puts forward a macroeconomic forecast model based on an improved long short-term memory (LSTM) neural network. Firstly, a scientific evaluation index system (EIS) was constructed for the macroeconomy. The correlation between indices was measured by the Spearman correlation coefficient, and the index data were preprocessed by interpolating the missing items and converting low-frequency series into high-frequency series. Next, the corresponding mixed-frequency dataset was constructed, followed by the derivation of the state space equation. Then, the LSTM neural network was optimized with the Kalman filter for macroeconomic forecasting. The effectiveness of the proposed forecast method was verified through experiments. The research results lay a theoretical basis for the application of LSTM in financial forecasts.
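The Spearman-based index screening can be illustrated with SciPy; the synthetic index matrix and the 0.3 threshold below are assumptions for the sake of the example:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
indicators = rng.normal(size=(120, 5))                   # placeholder monthly index matrix (T x n)
target = indicators[:, 0] + 0.5 * rng.normal(size=120)   # placeholder macroeconomic target series

selected = []
for j in range(indicators.shape[1]):
    rho, _ = spearmanr(indicators[:, j], target)          # rank correlation of index j with the target
    if abs(rho) > 0.3:                                    # keep only sufficiently correlated indices
        selected.append(j)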
37

Liu, Feiyang, Panke Qin, Junru You, and Yanyan Fu. "Sparrow Search Algorithm-Optimized Long Short-Term Memory Model for Stock Trend Prediction." Computational Intelligence and Neuroscience 2022 (August 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/3680419.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The long short-term memory (LSTM) network is especially suitable for dealing with time series-related problems, which has led to a wide range of applications in analyzing stock market quotations and predicting future price trends. However, the selection of hyperparameters in LSTM networks is often based on subjective experience and existing research, and the inability to determine the optimal parameter values reduces the generalization capability of the model. Therefore, we propose a sparrow search algorithm-optimized LSTM (SSA-LSTM) model for stock trend prediction. The SSA is used to find the optimal hyperparameters of the LSTM model and adapt the features of the data to the structure of the model, so as to construct a highly accurate stock trend prediction model. On the Shanghai Composite Index stock data of the last decade, the mean absolute percentage error, root mean square error, mean absolute error, and coefficient of determination between the stock prices predicted by the SSA-LSTM method and the actual prices are 0.0093, 41.9505, 30.5300, and 0.9754, respectively. The results indicate that the proposed model has higher forecasting precision than other traditional stock forecasting methods and enhances the interpretability of the network model structure and parameters.
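The four reported error measures can be computed with scikit-learn as below; the two short arrays are placeholder prices, not the Shanghai Composite data:

import numpy as np
from sklearn.metrics import (mean_absolute_percentage_error, mean_squared_error,
                             mean_absolute_error, r2_score)

y_true = np.array([3050.0, 3075.5, 3112.3, 3098.7])   # placeholder actual closing prices
y_pred = np.array([3042.1, 3080.0, 3105.7, 3101.2])   # placeholder model predictions

mape = mean_absolute_percentage_error(y_true, y_pred)
rmse = mean_squared_error(y_true, y_pred) ** 0.5       # square root of the mean squared error
mae  = mean_absolute_error(y_true, y_pred)
r2   = r2_score(y_true, y_pred)
print(mape, rmse, mae, r2)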
38

Lees, Thomas, Steven Reece, Frederik Kratzert, Daniel Klotz, Martin Gauch, Jens De Bruijn, Reetik Kumar Sahu, Peter Greve, Louise Slater, and Simon J. Dadson. "Hydrological concept formation inside long short-term memory (LSTM) networks." Hydrology and Earth System Sciences 26, no. 12 (June 20, 2022): 3079–101. http://dx.doi.org/10.5194/hess-26-3079-2022.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. Neural networks have been shown to be extremely effective rainfall-runoff models, where the river discharge is predicted from meteorological inputs. However, the question remains: what have these models learned? Is it possible to extract information about the learned relationships that map inputs to outputs, and do these mappings represent known hydrological concepts? Small-scale experiments have demonstrated that the internal states of long short-term memory networks (LSTMs), a particular neural network architecture predisposed to hydrological modelling, can be interpreted. By extracting the tensors which represent the learned translation from inputs (precipitation, temperature, and potential evapotranspiration) to outputs (discharge), this research seeks to understand what information the LSTM captures about the hydrological system. We assess the hypothesis that the LSTM replicates real-world processes and that we can extract information about these processes from the internal states of the LSTM. We examine the cell-state vector, which represents the memory of the LSTM, and explore the ways in which the LSTM learns to reproduce stores of water, such as soil moisture and snow cover. We use a simple regression approach to map the LSTM state vector to our target stores (soil moisture and snow). Good correlations (R2>0.8) between the probe outputs and the target variables of interest provide evidence that the LSTM contains information that reflects known hydrological processes comparable with the concept of variable-capacity soil moisture stores. The implications of this study are threefold: (1) LSTMs reproduce known hydrological processes. (2) While conceptual models have theoretical assumptions embedded in the model a priori, the LSTM derives these from the data. These learned representations are interpretable by scientists. (3) LSTMs can be used to gain an estimate of intermediate stores of water such as soil moisture. While machine learning interpretability is still a nascent field and our approach reflects a simple technique for exploring what the model has learned, the results are robust to different initial conditions and to a variety of benchmarking experiments. We therefore argue that deep learning approaches can be used to advance our scientific goals as well as our predictive goals.
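The probing idea (regress a target store on the LSTM cell-state vector and check the R² of the probe) can be sketched as follows; the cell states and soil-moisture series here are synthetic stand-ins for states extracted from a trained rainfall-runoff LSTM:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
c_states = rng.normal(size=(1000, 64))                                        # placeholder cell-state vectors (time x units)
soil_moisture = c_states[:, :5].sum(axis=1) + 0.1 * rng.normal(size=1000)     # placeholder target store

probe = LinearRegression().fit(c_states[:800], soil_moisture[:800])           # linear probe fitted on a training split
r2 = r2_score(soil_moisture[800:], probe.predict(c_states[800:]))             # analogous to the paper's R2 > 0.8 check
print(r2)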
39

Song, Fei, Yong Li, Wei Cheng, and Limeng Dong. "Learning to Track Multiple Radar Targets with Long Short-Term Memory Networks." Wireless Communications and Mobile Computing 2023 (February 15, 2023): 1–9. http://dx.doi.org/10.1155/2023/1033371.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radar multitarget tracking in a dense clutter environment remains a complex problem. Most existing solutions still rely on complex motion models and prior distribution knowledge. In this paper, a new online tracking method based on a long short-term memory (LSTM) network is proposed. It combines state prediction, measurement association, and trajectory management in an end-to-end manner. We employ LSTM networks to model target motion and trajectory associations, relying on their strong learning ability to learn target motion properties and the long-term dependence of trajectory associations from noisy data. Moreover, to address the problem of the missing appearance information of radar targets, we propose an LSTM-based architecture that computes a similarity function from long-term motion features; this similarity is applied to trajectory association to improve its robustness. The proposed method is validated in simulation scenarios and achieves good results.
40

Liao, Chin-Wen, I.-Chi Wang, Kuo-Ping Lin, and Yu-Ju Lin. "A Fuzzy Seasonal Long Short-Term Memory Network for Wind Power Forecasting." Mathematics 9, no. 11 (May 23, 2021): 1178. http://dx.doi.org/10.3390/math9111178.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To protect the environment and achieve the Sustainable Development Goals (SDGs), reducing greenhouse gas emissions has been actively promoted by governments around the world, and clean energy, such as wind power, has become a very important topic. However, accurately forecasting wind power output is not a straightforward task. The present study develops a fuzzy seasonal long short-term memory network (FSLSTM), which combines a fuzzy decomposition method and a long short-term memory network (LSTM), to forecast a monthly wind power output dataset. LSTM technology has been successfully applied to forecasting problems, especially time series problems. This study first introduces the fuzzy seasonal index into the fuzzy LSTM model, which effectively extends traditional LSTM technology. The FSLSTM, LSTM, autoregressive integrated moving average (ARIMA), generalized regression neural network (GRNN), back propagation neural network (BPNN), least squares support vector regression (LSSVR), and seasonal autoregressive integrated moving average (SARIMA) models are then used to forecast monthly wind power output datasets in Taiwan. The empirical results indicate that FSLSTM obtains better forecasting accuracy than the other methods. Therefore, FSLSTM can efficiently provide credible prediction values for Taiwan's wind power output datasets.
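As a rough illustration of the seasonal-index idea (the paper fuzzifies this step), a crisp monthly index can be computed and used to deseasonalize the series before the LSTM; the synthetic monthly series below is a placeholder:

import numpy as np

rng = np.random.default_rng(0)
months = np.arange(120)
wind = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(scale=5, size=120)   # placeholder monthly output

# Crisp seasonal index: mean of each calendar month relative to the overall mean.
season_idx = np.array([wind[m::12].mean() for m in range(12)]) / wind.mean()
deseasonalized = wind / season_idx[months % 12]
# An LSTM would be trained on `deseasonalized`; forecasts are multiplied back by the index.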
41

Yang, Tianyi, Quanming Zhao, and Yifan Meng. "Ultra-short-term Photovoltaic Power Prediction Based on Multi-head ProbSparse Self-attention and Long Short-term Memory." Journal of Physics: Conference Series 2558, no. 1 (August 1, 2023): 012007. http://dx.doi.org/10.1088/1742-6596/2558/1/012007.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. To provide accurate predictions of photovoltaic (PV) power generation, an MHPSA-LSTM ultra-short-term multipoint PV power prediction model combining multi-head ProbSparse self-attention (MHPSA) and a long short-term memory (LSTM) network is proposed. The MHPSA is first used to capture long-distance information dependencies. Secondly, the LSTM is used to enhance local correlation. Finally, a pooling layer is added after the LSTM to reduce the parameters of the fully connected layer and alleviate overfitting, thus improving prediction accuracy. The MHPSA-LSTM model is validated on a PV plant at the Desert Knowledge Australia Solar Centre as an example; the RMSE, MAE, and R2 of MHPSA-LSTM are 0.527, 0.264, and 0.917, respectively. MHPSA-LSTM has higher prediction accuracy than the BP, LSTM, GRU, and CNN-LSTM models.
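A rough Keras sketch of the attention-LSTM-pooling arrangement; standard multi-head self-attention stands in for the ProbSparse variant, and the input window, feature count, and output horizon are placeholders:

import tensorflow as tf

inputs = tf.keras.Input(shape=(96, 8))                                               # 96 past steps, 8 PV/weather features
attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(inputs, inputs)   # long-distance dependencies
x = tf.keras.layers.LSTM(64, return_sequences=True)(attn)                            # LSTM strengthens local correlation
x = tf.keras.layers.GlobalAveragePooling1D()(x)                                      # pooling shrinks the dense head and curbs overfitting
outputs = tf.keras.layers.Dense(16)(x)                                               # multipoint output: next 16 steps of PV power
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")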
42

Mukhlis, Mukhlis, Aziz Kustiyo, and Aries Suharso. "Peramalan Produksi Pertanian Menggunakan Model Long Short-Term Memory." BINA INSANI ICT JOURNAL 8, no. 1 (June 24, 2021): 22. http://dx.doi.org/10.51211/biict.v8i1.1492.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Problems in forecasting agricultural production include the difficulty of obtaining complete long-term data on the variables that affect yields. The situation becomes even harder when the forecast covers a large area: the variables must be interpolated, which biases the forecasting results. The aims of this review are (1) to provide a meta-analysis overview of research on forecasting agricultural products using Long Short-Term Memory (LSTM), (2) to identify the coverage areas, commodities, and data periods of these studies, especially for wheat, soybeans, corn, and bananas, and (3) to describe the data preprocessing used, including removing unsuitable data, handling missing data, and selecting particular variables. As a solution to the data problem, agricultural production is forecast from historical production data. One forecasting model that is currently widely developed is the LSTM neural network, an extension of the recurrent neural network (RNN). This paper presents a literature review of the development of LSTM models for forecasting the production of wheat, soybeans, corn, and bananas. Performance improvements of the LSTM models were achieved through data preprocessing, hyperparameter tuning, and combination with other methods. Based on this review, LSTM models perform better than the benchmark models. Keywords: neural network, LSTM, forecasting, crop yield, RNN.
43

Zhang, Tianren. "COVID-19 Epidemic Trend Prediction using Long Short-term Memory Network." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 258–65. http://dx.doi.org/10.54097/hset.v39i.6537.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The COVID-19 pandemic has continued to spread in various countries and regions. It has produced a serious economic shock worldwide and negatively impacted people's lives and work. Although many control measures have been implemented to contain its spread, it is still not known when the epidemic will end. Predicting the trend of COVID-19 accurately is therefore extremely important: it can improve resource allocation and support better preventive and control measures. In this paper, Long Short-Term Memory (LSTM) models are leveraged for predicting the epidemic in different countries, including Germany, Japan, Russia, and Italy. The LSTM is a type of recurrent neural network (RNN) that is effective for predicting sequential data such as time series. In this work, a visualization analysis is first conducted to demonstrate the trends of COVID-19 in various countries. Then the performance of the LSTM network is validated on the data of the four countries.
44

Nguyen, Sang Thi Thanh, and Bao Duy Tran. "Long Short-Term Memory Based Movie Recommendation." Science & Technology Development Journal - Engineering and Technology 3, SI1 (September 19, 2020): SI1—SI9. http://dx.doi.org/10.32508/stdjet.v3isi1.540.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recommender systems (RS) have become a fundamental tool for helping users make decisions among millions of different choices in the era of Big Data. They bring huge benefits to many business models around the world because of their effectiveness in reaching target customers. Many recommendation models and techniques have been proposed, and many have achieved impressive results. Collaborative filtering and content-based filtering methods are common, but both have disadvantages. A critical one is that they focus only on a user's long-term static preference while ignoring his or her short-term transactional patterns, and thus miss shifts in the user's preference over time. In this case, the user's intent at a certain time point may easily be submerged by his or her historical decision behaviors, which leads to unreliable recommendations. To deal with this issue, a session of user interactions with the items can be considered. In this study, Long Short-Term Memory (LSTM) networks are analyzed and applied to user sessions in a recommender system. The MovieLens dataset is considered as a case study of movie recommender systems. This dataset is preprocessed to extract user-movie sessions for user behavior discovery and for making movie recommendations to users. Several experiments have been carried out to evaluate the LSTM-based movie recommender system. In the experiments, the LSTM networks are compared with a similar deep learning method, Recurrent Neural Networks (RNN), and a baseline machine learning method, collaborative filtering using item-based nearest neighbors (item-KNN). It was found that the LSTM networks can be improved by optimizing their hyperparameters and that they outperform the other methods when predicting the next movies of interest to users.
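A minimal sketch of a session-based next-movie model of this kind, assuming Keras; the vocabulary size, session length, and metric choice are placeholders rather than the paper's exact setup:

import tensorflow as tf

n_movies = 5000                                                  # placeholder movie-id vocabulary
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),                          # last 20 movie ids in a session, zero-padded
    tf.keras.layers.Embedding(input_dim=n_movies, output_dim=64, mask_zero=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(n_movies, activation="softmax"),       # score every movie as the next interaction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy(k=20)])   # recall@20-style metric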
45

Liu, Jinyuan, Shouxi Wang, Nan Wei, Yi Yang, Yihao Lv, Xu Wang, and Fanhua Zeng. "An Enhancement Method Based on Long Short-Term Memory Neural Network for Short-Term Natural Gas Consumption Forecasting." Energies 16, no. 3 (January 26, 2023): 1295. http://dx.doi.org/10.3390/en16031295.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial intelligence models have been widely applied to natural gas consumption forecasting over the past decades, especially for short-term consumption forecasting. This paper proposes a three-layer neural network forecasting model that can extract key information from input factors and improve the weight optimization mechanism of the long short-term memory (LSTM) neural network to effectively forecast short-term consumption. In the proposed model, a convolutional neural network (CNN) layer is adopted to extract the features among the various factors affecting natural gas consumption and to improve computing efficiency. The LSTM layer is able to learn and store long-distance state through its gating mechanism, and it overcomes the defects of gradient vanishing and explosion in the recurrent neural network. To solve the problem of encoding input sequences as fixed-length vectors, an attention (ATT) layer is used to optimize the assignment of weights and highlight the key sequences. Apart from comparisons with other popular forecasting models, the performance and robustness of the proposed model are validated on datasets with different fluctuations and complexities. Compared with traditional two-layer models (CNN-LSTM and LSTM-ATT), the mean absolute range normalized errors (MARNE) of the proposed model in Athens and Spata are improved by more than 16% and 11%, respectively. In comparison with single LSTM, back propagation neural network, support vector regression, and multiple linear regression methods, the improvement in MARNE exceeds 42% in Athens. The coefficient of determination is improved by more than 25%, even on the high-complexity dataset, Spata.
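The three-layer arrangement can be sketched in Keras as below; the layer sizes, input window, and the use of Keras's built-in dot-product attention are assumptions, not the exact architecture of the paper:

import tensorflow as tf

inputs = tf.keras.Input(shape=(168, 6))                                                     # one week of hourly factors (placeholder)
x = tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)    # CNN extracts cross-factor features
x = tf.keras.layers.LSTM(64, return_sequences=True)(x)                                      # LSTM keeps long-distance state via its gates
attn = tf.keras.layers.Attention()([x, x])                                                  # attention re-weights the encoded sequence
x = tf.keras.layers.GlobalAveragePooling1D()(attn)
outputs = tf.keras.layers.Dense(1)(x)                                                       # next-step gas consumption
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")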
46

Zhang, Jiaan, Chenyu Liu, and Leijiao Ge. "Short-Term Load Forecasting Model of Electric Vehicle Charging Load Based on MCCNN-TCN." Energies 15, no. 7 (April 4, 2022): 2633. http://dx.doi.org/10.3390/en15072633.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The large fluctuations in the charging loads of electric vehicles (EVs) make short-term forecasting challenging. In order to improve short-term forecasting of the EV charging load, a model based on a multi-channel convolutional neural network and a temporal convolutional network (MCCNN-TCN) is proposed. The multi-channel convolutional neural network (MCCNN) extracts the fluctuation characteristics of the EV charging load at various time scales, while the temporal convolutional network (TCN) builds a time-series dependence between the fluctuation characteristics and the forecasted load. In addition, an additional BP network maps the selected meteorological and date features into a high-dimensional feature vector, which is spliced with the output of the TCN. According to experimental results employing urban charging station load data from a city in northern China, the proposed model is more accurate than the artificial neural network (ANN), long short-term memory (LSTM), convolutional neural network with long short-term memory (CNN-LSTM), and TCN models: the MCCNN-TCN model outperforms the ANN, LSTM, CNN-LSTM, and TCN by 14.09%, 25.13%, 27.32%, and 4.48%, respectively, in terms of the mean absolute percentage error.
47

Son, Hojae, Anand Paul, and Gwanggil Jeon. "Country Information Based on Long-Term Short-Term Memory (LSTM)." International Journal of Engineering & Technology 7, no. 4.44 (December 1, 2018): 47. http://dx.doi.org/10.14419/ijet.v7i4.44.26861.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Social platforms such as Facebook, Twitter, and Instagram generate tremendous amounts of data these days. Researchers make use of these data to extract meaningful information and predict the future. Twitter in particular is a platform where people can briefly share their thoughts on a certain topic, and it provides a real-time streaming data API (Application Programming Interface) for filtering data for a given purpose. Over time, a country changes its interest in other countries, and people can benefit from seeing this tendency of interest as well as prediction results derived from Twitter streaming data. Capturing the Twitter data flow reflects how people think about and take interest in a topic, and we believe real-time Twitter data reflect this change. Long Short-Term Memory (LSTM) is a widely used recurrent neural network unit for learning sequences in deep learning. The purpose of this work is to build a prediction model, “Country Interest Analysis based on LSTM (CIAL)”, to forecast the next interval of tweet counts when a country is referred to in tweet posts. Additionally, it is necessary to cluster the Twitter data of multiple countries over remote nodes for analysis. This paper presents how a country's attention tendency can be captured from Twitter streaming data with the LSTM algorithm.
48

Sher, Madiha, Nasru Minallah, Tufail Ahmad, and Waleed Khan. "Hyperparameters analysis of long short-term memory architecture for crop classification." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (August 1, 2023): 4661. http://dx.doi.org/10.11591/ijece.v13i4.pp4661-4670.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep learning (DL) has seen a massive rise in popularity for remote sensing (RS) based applications over the past few years. However, the performance of DL algorithms depends on the optimization of various hyperparameters, since the hyperparameters have a huge impact on the performance of deep neural networks. The impact of hyperparameters on the accuracy and reliability of DL models is therefore a significant area of investigation. In this study, the grid search algorithm is used for hyperparameter optimization of a long short-term memory (LSTM) network for RS-based classification. The hyperparameters considered in this study are the optimizer, the activation function, the batch size, and the number of LSTM layers. Over 1,000 hyperparameter sets are evaluated, and the results of all the sets are analyzed to see the effects of various combinations of hyperparameters as well as the effect of each individual parameter on the performance of the LSTM model. The performance of the LSTM model is evaluated using the metrics of minimum loss and average loss, and it was found that classification can be highly affected by the choice of optimizer, whereas other parameters, such as the number of LSTM layers, have less influence.
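A compact sketch of such a grid search over the four hyperparameters, assuming Keras and synthetic placeholder data; the grid values, layer width, and task shape are illustrative only:

import itertools
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 24, 10)).astype("float32"); y_train = rng.integers(0, 5, 200)
X_val   = rng.normal(size=(50, 24, 10)).astype("float32");  y_val   = rng.integers(0, 5, 50)

def build_model(n_layers, activation, optimizer):
    layers = [tf.keras.layers.Input(shape=(24, 10))]
    for i in range(n_layers):
        layers.append(tf.keras.layers.LSTM(32, activation=activation,
                                           return_sequences=(i < n_layers - 1)))   # all but the last layer return sequences
    layers.append(tf.keras.layers.Dense(5, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")
    return model

results = []
for n_layers, act, opt, batch in itertools.product([1, 2], ["tanh", "relu"], ["adam", "sgd"], [32, 64]):
    hist = build_model(n_layers, act, opt).fit(X_train, y_train, validation_data=(X_val, y_val),
                                               batch_size=batch, epochs=5, verbose=0)
    results.append((min(hist.history["val_loss"]), n_layers, act, opt, batch))
print(min(results))   # hyperparameter set with the lowest validation loss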
49

Muhuri, Pramita Sree, Prosenjit Chatterjee, Xiaohong Yuan, Kaushik Roy, and Albert Esterline. "Using a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) to Classify Network Attacks." Information 11, no. 5 (May 1, 2020): 243. http://dx.doi.org/10.3390/info11050243.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An intrusion detection system (IDS) identifies whether network traffic behavior is normal or abnormal, or identifies the attack types. Recently, deep learning has emerged as a successful approach for IDSs, achieving a high accuracy rate with its distinctive learning mechanism. In this research, we developed a new method for intrusion detection to classify the NSL-KDD dataset by combining a genetic algorithm (GA) for optimal feature selection with a long short-term memory (LSTM) recurrent neural network (RNN). We found that using LSTM-RNN classifiers with the optimal feature set improves intrusion detection. The performance of the IDS was analyzed by calculating the accuracy, recall, precision, F-score, and confusion matrix. The NSL-KDD dataset was used to analyze the performance of the classifiers. An LSTM-RNN was used to classify the NSL-KDD datasets into binary (normal and abnormal) and multi-class (Normal, DoS, Probing, U2R, and R2L) sets. The results indicate that applying the GA increases the classification accuracy of the LSTM-RNN in both binary and multi-class classification. The results of the LSTM-RNN classifier were also compared with results using a support vector machine (SVM) and random forest (RF). For multi-class classification, the classification accuracy of the LSTM-RNN with the GA model is much higher than that of SVM and RF; for binary classification, the classification accuracy of the LSTM-RNN is similar to that of RF and higher than that of SVM.
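A hedged sketch of the feature-selection idea: a GA chromosome is treated as a binary mask over the 41 NSL-KDD features, and its fitness is the validation accuracy of an LSTM trained on the selected columns. The data, layer sizes, and epoch count below are placeholders:

import numpy as np
import tensorflow as tf

n_features = 41                                              # NSL-KDD has 41 features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, n_features)).astype("float32")    # placeholder for the preprocessed dataset
y = rng.integers(0, 2, 1000)                                 # binary labels: normal vs. abnormal

def fitness(mask):
    # Train a small LSTM on the selected features; the GA would maximize this value.
    Xs = X[:, mask][:, np.newaxis, :]                        # (samples, 1 timestep, selected features)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(1, int(mask.sum()))),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(Xs, y, validation_split=0.2, epochs=3, verbose=0)
    return max(hist.history["val_accuracy"])

chromosome = rng.integers(0, 2, n_features).astype(bool)     # one candidate feature subset
print(fitness(chromosome))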
50

Xie, Anqi, Hao Yang, Jing Chen, Li Sheng, and Qian Zhang. "A Short-Term Wind Speed Forecasting Model Based on a Multi-Variable Long Short-Term Memory Network." Atmosphere 12, no. 5 (May 19, 2021): 651. http://dx.doi.org/10.3390/atmos12050651.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurately forecasting wind speed on a short-term scale has become essential in the field of wind power energy. In this paper, a multi-variable long short-term memory network model (MV-LSTM) based on Pearson correlation coefficient feature selection is proposed to predict the short-term wind speed. The proposed method utilizes multiple historical meteorological variables, such as wind speed, temperature, humidity, and air pressure, to predict the wind speed in the next hour. Hourly data collected from two ground observation stations in Yanqing and Zhaitang in Beijing were divided into training and test sets. The training sets were used to train the model, and the test sets were used to evaluate the model with the root-mean-square error (RMSE), mean absolute error (MAE), mean bias error (MBE), and mean absolute percentage error (MAPE) metrics. The proposed method is compared with two other forecasting methods (the autoregressive moving average model (ARMA) method and the single-variable long short-term memory network (LSTM) method, which inputs only historical wind speed data) based on the same dataset. The experimental results prove the feasibility of the MV-LSTM method for short-term wind speed forecasting and its superiority to the ARMA method and the single-variable LSTM method.
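A small sketch of Pearson-correlation-based variable screening with pandas; the synthetic hourly observations and the 0.2 threshold are assumptions for illustration:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({                                   # placeholder hourly ground observations
    "wind_speed": rng.normal(5, 2, 500),
    "temperature": rng.normal(15, 5, 500),
    "humidity": rng.uniform(20, 90, 500),
    "pressure": rng.normal(1013, 5, 500),
})

# Pearson correlation of each candidate variable with wind speed; variables whose absolute
# correlation clears the threshold become inputs of the multi-variable LSTM.
corr = df.corr(method="pearson")["wind_speed"].drop("wind_speed")
selected = corr[corr.abs() > 0.2].index.tolist()
print(selected)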
