To view the other types of publications on this topic, follow this link: LSTM unit.

Journal articles on the topic "LSTM unit"

Consult the top 50 journal articles for your research on the topic "LSTM unit".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Dangovski, Rumen, Li Jing, Preslav Nakov, Mićo Tatalović, and Marin Soljačić. "Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications". Transactions of the Association for Computational Linguistics 7 (November 2019): 121–38. http://dx.doi.org/10.1162/tacl_a_00258.

Annotation:
Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization. Although LSTMs and GRUs were designed to model long-range dependencies more accurately than conventional RNNs, they nevertheless have problems copying or recalling information from the long distant past. Here, we derive a phase-coded representation of the memory state, Rotational Unit of Memory (RUM), that unifies the concepts of unitary learning and associative memory. We show experimentally that RNNs based on RUMs can solve basic sequential tasks such as memory copying and memory recall much better than LSTMs/GRUs. We further demonstrate that by replacing LSTM/GRU with RUM units we can apply neural networks to real-world problems such as language modeling and text summarization, yielding results comparable to the state of the art.
2

Han, Shipeng, Zhen Meng, Xingcheng Zhang, and Yuepeng Yan. "Hybrid Deep Recurrent Neural Networks for Noise Reduction of MEMS-IMU with Static and Dynamic Conditions". Micromachines 12, No. 2 (20.02.2021): 214. http://dx.doi.org/10.3390/mi12020214.

Annotation:
Micro-electro-mechanical system inertial measurement unit (MEMS-IMU), a core component in many navigation systems, directly determines the accuracy of the inertial navigation system; however, the MEMS-IMU system is often affected by various factors such as environmental noise, electronic noise, mechanical noise and manufacturing error. These can seriously affect the application of MEMS-IMUs in different fields. Focus has been on the MEMS gyro since it is an essential, yet complex, sensor in the MEMS-IMU that is very sensitive to noise and errors from random sources. In this study, recurrent neural networks are hybridized in four different ways for noise reduction and accuracy improvement in the MEMS gyro. These are two-layer homogeneous recurrent networks built on long short-term memory (LSTM-LSTM) and gated recurrent units (GRU-GRU), respectively, and two-layer heterogeneous deep networks built on long short-term memory followed by a gated recurrent unit (LSTM-GRU) and a gated recurrent unit followed by long short-term memory (GRU-LSTM). Practical implementation with static and dynamic experiments was carried out for a custom MEMS-IMU to validate the proposed networks, and the results show that GRU-LSTM seems to overfit the large amount of test data for the three-axis gyro in the static test. However, for the X-axis and Y-axis gyros, LSTM-GRU had the best noise reduction effect, with over 90% improvement in the three axes. For the Z-axis gyroscope, LSTM-GRU performed better than LSTM-LSTM and GRU-GRU in quantization noise and angular random walk, while LSTM-LSTM shows better improvement than both the GRU-GRU and LSTM-GRU networks in terms of zero bias stability. In the dynamic experiments, the Hilbert spectrum revealed that the time-frequency energy of the LSTM-LSTM, GRU-GRU, and GRU-LSTM denoising is higher than that of LSTM-GRU over the whole frequency domain. Similarly, Allan variance analysis also shows that LSTM-GRU has a better denoising effect than the other networks in the dynamic experiments. Overall, the experimental results demonstrate the effectiveness of deep learning algorithms in MEMS gyro noise reduction, among which the LSTM-GRU network shows the best noise reduction effect and great potential for application in the MEMS gyroscope area.
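To make the compared architectures concrete, the sketch below builds one of the two-layer hybrids in the spirit of the study: an LSTM layer followed by a GRU layer that maps a noisy gyro window to a denoised window. This is an illustrative Keras sketch, not the authors' code; the window length, layer widths, optimizer and toy data are assumptions.

```python
# Hedged sketch of a two-layer heterogeneous LSTM-GRU sequence denoiser.
import numpy as np
import tensorflow as tf

window, n_channels = 200, 1  # e.g. 200 samples of a single gyro axis (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_channels)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first recurrent layer
    tf.keras.layers.GRU(64, return_sequences=True),    # second, heterogeneous layer
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_channels)),  # per-step denoised output
])
model.compile(optimizer="adam", loss="mse")

# Toy data: noisy sequences as input, clean sequences as targets.
x_noisy = np.random.randn(32, window, n_channels).astype("float32")
y_clean = np.random.randn(32, window, n_channels).astype("float32")
model.fit(x_noisy, y_clean, epochs=1, batch_size=8, verbose=0)
```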
3

Huang, Zhongzhan, Senwei Liang, Mingfu Liang, and Haizhao Yang. "DIANet: Dense-and-Implicit Attention Network". Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 04 (03.04.2020): 4206–14. http://dx.doi.org/10.1609/aaai.v34i04.5842.

Annotation:
Attention networks have successfully boosted the performance in various vision problems. Previous works lay emphasis on designing a new attention module and individually plug them into the networks. Our paper proposes a novel-and-simple framework that shares an attention module throughout different network layers to encourage the integration of layer-wise information and this parameter-sharing module is referred to as Dense-and-Implicit-Attention (DIA) unit. Many choices of modules can be used in the DIA unit. Since Long Short Term Memory (LSTM) has a capacity of capturing long-distance dependency, we focus on the case when the DIA unit is the modified LSTM (called DIA-LSTM). Experiments on benchmark datasets show that the DIA-LSTM unit is capable of emphasizing layer-wise feature interrelation and leads to significant improvement of image classification accuracy. We further empirically show that the DIA-LSTM has a strong regularization ability on stabilizing the training of deep networks by the experiments with the removal of skip connections (He et al. 2016a) or Batch Normalization (Ioffe and Szegedy 2015) in the whole residual network.
4

Wang, Jianyong, Lei Zhang, Yuanyuan Chen, and Zhang Yi. "A New Delay Connection for Long Short-Term Memory Networks". International Journal of Neural Systems 28, No. 06 (24.06.2018): 1750061. http://dx.doi.org/10.1142/s0129065717500617.

Annotation:
Connections play a crucial role in neural network (NN) learning because they determine how information flows in NNs. Suitable connection mechanisms may extensively enlarge the learning capability and reduce the negative effect of gradient problems. In this paper, a new delay connection is proposed for the Long Short-Term Memory (LSTM) unit to develop a more sophisticated recurrent unit, called Delay Connected LSTM (DCLSTM). The proposed delay connection brings two main merits to DCLSTM while introducing no extra parameters. First, it allows the output of the DCLSTM unit to maintain long-term memory, which is absent in the LSTM unit. Second, the proposed delay connection helps to bridge the error signals to previous time steps and allows them to be back-propagated across several layers without vanishing too quickly. To evaluate the performance of the proposed delay connections, the DCLSTM model with and without peephole connections was compared with four state-of-the-art recurrent models on two sequence classification tasks. The DCLSTM model outperformed the other models with higher accuracy and F1-score. Furthermore, networks with multiple stacked DCLSTM layers and the standard LSTM layer were evaluated on Penn Treebank (PTB) language modeling. The DCLSTM model achieved lower perplexity (PPL)/bit-per-character (BPC) than the standard LSTM model. The experiments demonstrate that the learning of the DCLSTM models is more stable and efficient.
5

He, Wei, Jufeng Li, Zhihe Tang, Beng Wu, Hui Luan, Chong Chen, and Huaqing Liang. "A Novel Hybrid CNN-LSTM Scheme for Nitrogen Oxide Emission Prediction in FCC Unit". Mathematical Problems in Engineering 2020 (17.08.2020): 1–12. http://dx.doi.org/10.1155/2020/8071810.

Annotation:
Fluid Catalytic Cracking (FCC), a key unit for the secondary processing of heavy oil, is one of the main sources of NOx emissions in refineries, which can be harmful to human health. Owing to its complex behaviour in reaction, product separation, and regeneration, it is difficult to accurately predict NOx emission during the FCC process. In this paper, a novel deep learning architecture formed by integrating a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) for nitrogen oxide emission prediction is proposed and validated. The CNN is used to extract features among multidimensional data. The LSTM is employed to identify the relationships between different time steps. Data from the Distributed Control System (DCS) in one refinery were used to evaluate the performance of the proposed architecture. The results indicate the effectiveness of CNN-LSTM in handling multidimensional time series datasets, with an RMSE of 23.7098 and an R2 of 0.8237. Compared with previous methods (CNN and LSTM), CNN-LSTM overcomes the limitation of high-quality feature dependence and handles large amounts of high-dimensional data with better efficiency and accuracy. The proposed CNN-LSTM scheme would be a beneficial contribution to the accurate and stable prediction of irregular trends for NOx emission from the refining industry, providing more reliable information for NOx risk assessment and management.
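The following is a minimal Keras sketch of the CNN-LSTM idea summarized above: Conv1D layers extract features across the multidimensional process variables and an LSTM models the temporal relationship before a regression head outputs the NOx value. The window length, number of process variables and layer sizes are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a Conv1D feature extractor feeding an LSTM regression head.
import tensorflow as tf

timesteps, n_features = 30, 16  # 30 past time steps, 16 process variables (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),   # summarize the extracted feature sequence
    tf.keras.layers.Dense(1),   # predicted NOx emission
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```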
6

Donoso-Oliva, C., G. Cabrera-Vives, P. Protopapas, R. Carrasco-Davis, and P. A. Estevez. "The effect of phased recurrent units in the classification of multiple catalogues of astronomical light curves". Monthly Notices of the Royal Astronomical Society 505, No. 4 (10.06.2021): 6069–84. http://dx.doi.org/10.1093/mnras/stab1598.

Annotation:
In the new era of very large telescopes, where data are crucial to expand scientific knowledge, we have witnessed many deep learning applications for the automatic classification of light curves. Recurrent neural networks (RNNs) are one of the models used for these applications, and the Long Short-Term Memory (LSTM) unit stands out for being an excellent choice for the representation of long time series. In general, RNNs assume observations at discrete times, which may not suit the irregular sampling of light curves. A traditional technique to address irregular sequences consists of adding the sampling time to the network’s input, but this is not guaranteed to capture sampling irregularities during training. Alternatively, the Phased LSTM (PLSTM) unit has been created to address this problem by updating its state using the sampling times explicitly. In this work, we study the effectiveness of the LSTM- and PLSTM-based architectures for the classification of astronomical light curves. We use seven catalogues containing periodic and non-periodic astronomical objects. Our findings show that LSTM outperformed PLSTM on six of seven data sets. However, the combination of both units enhances the results in all data sets.
7

Pan, Yu, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, and Zenglin Xu. "Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 4683–90. http://dx.doi.org/10.1609/aaai.v33i01.33014683.

Annotation:
Recurrent Neural Networks (RNNs) and their variants, such as Long-Short Term Memory (LSTM) networks, and Gated Recurrent Unit (GRU) networks, have achieved promising performance in sequential data modeling. The hidden layers in RNNs can be regarded as the memory units, which are helpful in storing information in sequential contexts. However, when dealing with high dimensional input data, such as video and text, the input-to-hidden linear transformation in RNNs brings high memory usage and huge computational cost. This makes the training of RNNs very difficult. To address this challenge, we propose a novel compact LSTM model, named as TR-LSTM, by utilizing the low-rank tensor ring decomposition (TRD) to reformulate the input-to-hidden transformation. Compared with other tensor decomposition methods, TR-LSTM is more stable. In addition, TR-LSTM can complete an end-to-end training and also provide a fundamental building block for RNNs in handling large input data. Experiments on real-world action recognition datasets have demonstrated the promising performance of the proposed TR-LSTM compared with the tensor-train LSTM and other state-of-the-art competitors.
8

Shafqat, Wafa, and Yung-Cheol Byun. "A Context-Aware Location Recommendation System for Tourists Using Hierarchical LSTM Model". Sustainability 12, No. 10 (18.05.2020): 4107. http://dx.doi.org/10.3390/su12104107.

Annotation:
The significance of contextual data has been recognized by analysts and specialists in numerous disciplines such as customization, data recovery, ubiquitous and versatile processing, information mining, and management. While a generous research has just been performed in the zone of recommender frameworks, by far most of the existing approaches center on prescribing the most relevant items to customers. It usually neglects extra-contextual information, for example time, area, climate or the popularity of different locations. Therefore, we proposed a deep long-short term memory (LSTM) based context-enriched hierarchical model. This proposed model had two levels of hierarchy and each level comprised of a deep LSTM network. In each level, the task of the LSTM was different. At the first level, LSTM learned from user travel history and predicted the next location probabilities. A contextual learning unit was active between these two levels. This unit extracted maximum possible contexts related to a location, the user and its environment such as weather, climate and risks. This unit also estimated other effective parameters such as the popularity of a location. To avoid feature congestion, XGBoost was used to rank feature importance. The features with no importance were discarded. At the second level, another LSTM framework was used to learn these contextual features embedded with location probabilities and resulted into top ranked places. The performance of the proposed approach was elevated with an accuracy of 97.2%, followed by gated recurrent unit (GRU) (96.4%) and then Bidirectional LSTM (94.2%). We also performed experiments to find the optimal size of travel history for effective recommendations.
9

Wu, Beng, Wei He, Jing Wang, Huaqing Liang, and Chong Chen. "A convolutional-LSTM model for nitrogen oxide emission forecasting in FCC unit". Journal of Intelligent & Fuzzy Systems 40, No. 1 (04.01.2021): 1537–45. http://dx.doi.org/10.3233/jifs-192086.

Annotation:
As environmental issues move up the agenda, air pollution is a growing concern. Nitrogen oxide (NOx) is an important factor affecting air pollution and is also the main gas emission in the smoke and waste gas of the FCC unit in the petrochemical industry. It is important to accurately predict the NOx emission in advance for the petrochemical industry to avoid air pollution incidents. In this paper, a convolutional neural network (CNN) and long short-term memory (LSTM) are combined to predict the NOx emission in the Fluid Catalytic Cracking unit (FCC unit). The Convolutional-LSTM (CLSTM) is able to extract the spatial and temporal features which are essential information in the prediction of the NOx emission. The features of the production factors that affect the NOx emission are extracted by the CNN, which prepares time series data for the LSTM. The LSTM layer is connected after the CNN to model the irregular trends in the time series. CNN, multi-layer perceptron (MLP), random forest (RF), support vector machine (SVM) and LSTM are implemented as baseline models. The results from the proposed CLSTM model showed better performance than all the baseline models. The mean absolute error and root mean square error for CLSTM were calculated with the values of 16.8267 and 23.7089, which are the lowest among all the models. The Pearson correlation coefficient and R2 for the proposed CLSTM model are calculated with the values of 0.9263 and 0.8237, which are the highest among all the models. Furthermore, the residual graphs indicate a well-matched performance between the observations and the predictions. The study provides a model reference for forecasting the NOx concentration emitted by the FCC unit in the petrochemical industry.
10

Appati, Justice Kwame, Ismail Wafaa Denwar, Ebenezer Owusu, and Michael Agbo Tettey Soli. "Construction of an Ensemble Scheme for Stock Price Prediction Using Deep Learning Techniques". International Journal of Intelligent Information Technologies 17, No. 2 (April 2021): 72–95. http://dx.doi.org/10.4018/ijiit.2021040104.

Annotation:
This study proposes a deep learning approach for stock price prediction by bridging the long short-term memory with gated recurrent unit. In its evaluation, the mean absolute error and mean square error were used. The model proposed is an extension of the study of Hossain et al. established in 2018 with an MSE of 0.00098 as its lowest error. The current proposed model is a mix of the bidirectional LSTM and bidirectional GRU resulting in 0.00000008 MSE as the lowest error recorded. The LSTM model recorded 0.00000025 MSE, the GRU model recorded 0.00000077 MSE, and the LSTM + GRU model recorded 0.00000023 MSE. Other combinations of the existing models such as the bi-directional LSTM model recorded 0.00000019 MSE, bi-directional GRU recorded 0.00000011 MSE, bidirectional LSTM + GRU recorded 0.00000027 MSE, LSTM and bi-directional GRU recorded 0.00000020 MSE.
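A hedged sketch of the kind of bidirectional LSTM plus bidirectional GRU stack compared in the study, applied to a univariate price window; the abstract does not specify how the two are mixed, so stacking them is one plausible arrangement, and the look-back length, layer widths and toy data are assumptions for illustration only.

```python
# Hedged sketch: bidirectional LSTM followed by bidirectional GRU for one-step-ahead prediction.
import numpy as np
import tensorflow as tf

lookback = 60  # use the previous 60 values to predict the next one (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(lookback, 1)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy series shaped into sliding windows.
series = np.sin(np.linspace(0, 50, 1000)).astype("float32")
x = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])[..., None]
y = series[lookback:]
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```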
11

Xia, Jing, Su Pan, Min Zhu, Guolong Cai, Molei Yan, Qun Su, Jing Yan, and Gangmin Ning. "A Long Short-Term Memory Ensemble Approach for Improving the Outcome Prediction in Intensive Care Unit". Computational and Mathematical Methods in Medicine 2019 (03.11.2019): 1–10. http://dx.doi.org/10.1155/2019/8152713.

Annotation:
In intensive care unit (ICU), it is essential to predict the mortality of patients and mathematical models aid in improving the prognosis accuracy. Recently, recurrent neural network (RNN), especially long short-term memory (LSTM) network, showed advantages in sequential modeling and was promising for clinical prediction. However, ICU data are highly complex due to the diverse patterns of diseases; therefore, instead of single LSTM model, an ensemble algorithm of LSTM (eLSTM) is proposed, utilizing the superiority of the ensemble framework to handle the diversity of clinical data. The eLSTM algorithm was evaluated by the acknowledged database of ICU admissions Medical Information Mart for Intensive Care III (MIMIC-III). The investigation in total of 18415 cases shows that compared with clinical scoring systems SAPS II, SOFA, and APACHE II, random forests classification algorithm, and the single LSTM classifier, the eLSTM model achieved the superior performance with the largest value of area under the receiver operating characteristic curve (AUROC) of 0.8451 and the largest area under the precision-recall curve (AUPRC) of 0.4862. Furthermore, it offered an early prognosis of ICU patients. The results demonstrate that the eLSTM is capable of dynamically predicting the mortality of patients in complex clinical situations.
12

Zarzycki, Krzysztof, and Maciej Ławryńczuk. "LSTM and GRU Neural Networks as Models of Dynamical Processes Used in Predictive Control: A Comparison of Models Developed for Two Chemical Reactors". Sensors 21, No. 16 (20.08.2021): 5625. http://dx.doi.org/10.3390/s21165625.

Annotation:
This work thoroughly compares the efficiency of Long Short-Term Memory Networks (LSTMs) and Gated Recurrent Unit (GRU) neural networks as models of the dynamical processes used in Model Predictive Control (MPC). Two simulated industrial processes were considered: a polymerisation reactor and a neutralisation (pH) process. First, MPC prediction equations for both types of models were derived. Next, the efficiency of the LSTM and GRU models was compared for a number of model configurations. The influence of the order of dynamics and the number of neurons on the model accuracy was analysed. Finally, the efficiency of the considered models when used in MPC was assessed. The influence of the model structure on different control quality indicators and the calculation time was discussed. It was found that the GRU network, although it had a lower number of parameters than the LSTM one, may be successfully used in MPC without any significant deterioration of control quality.
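The parameter gap mentioned above (a GRU layer has three gates where an LSTM layer has four) can be checked directly; the sketch below builds two otherwise identical single-layer models and prints their trainable parameter counts. The input dimensionality and layer width are illustrative, not the reactor models used in the paper.

```python
# Quick comparison of LSTM vs. GRU parameter counts for the same layer width.
import tensorflow as tf

n_inputs, n_units = 4, 32  # assumed process inputs and layer width

def recurrent_model(cell_cls):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, n_inputs)),
        cell_cls(n_units),
        tf.keras.layers.Dense(1),
    ])

lstm_model = recurrent_model(tf.keras.layers.LSTM)
gru_model = recurrent_model(tf.keras.layers.GRU)
print("LSTM parameters:", lstm_model.count_params())
print("GRU parameters: ", gru_model.count_params())
```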
13

Nie, Gan Wei, Nurul Fathiah Ghazali, Norazman Shahar, and Muhammad Amir As'ari. "Deep stair walking detection using wearable inertial sensor via long short-term memory network". Bulletin of Electrical Engineering and Informatics 9, No. 1 (01.02.2020): 238–46. http://dx.doi.org/10.11591/eei.v9i1.1685.

Annotation:
This paper proposes stair walking detection via a Long Short-Term Memory (LSTM) network to prevent stair fall events by alerting the caregiver for assistance as soon as possible. The tri-axial accelerometer and gyroscope data of five activities of daily living (ADLs), including stair walking, are collected from 20 subjects with wearable inertial sensors on the left heel, right heel, chest, left wrist and right wrist. Several parameters, namely window size, sensor deployment, number of hidden cell units and LSTM architecture, were varied to find an optimized LSTM model for stair walking detection. As a result, the best model for detecting stair walking events, achieving 95.6% testing accuracy, is a double-layered LSTM with 250 hidden cell units that is fed with data from all sensor locations with a window size of 2 seconds. The results also show that a similar detection model fed with single-sensor data can achieve very good performance, above 83.2%. It should be possible, therefore, to integrate the proposed detection model for fall prevention, especially among patients or the elderly, to help alert the caregiver when a stair walking event occurs.
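A minimal sketch of the best-performing configuration reported above: a two-layer LSTM classifier with 250 hidden units per layer, fed 2-second windows of multi-sensor inertial data and predicting one of five ADL classes. The sampling rate and channel count are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of a double-layer LSTM activity classifier.
import tensorflow as tf

sample_rate = 50                 # assumed Hz
window = 2 * sample_rate         # 2-second window
n_channels = 6 * 5               # 3-axis accel + 3-axis gyro at 5 sensor sites (assumed)
n_classes = 5                    # five activities of daily living

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_channels)),
    tf.keras.layers.LSTM(250, return_sequences=True),
    tf.keras.layers.LSTM(250),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```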
14

Jeong, Myeong-Hun, Tae-Young Lee, Seung-Bae Jeon, and Minkyo Youm. "Highway Speed Prediction Using Gated Recurrent Unit Neural Networks". Applied Sciences 11, No. 7 (29.03.2021): 3059. http://dx.doi.org/10.3390/app11073059.

Annotation:
Movement analytics and mobility insights play a crucial role in urban planning and transportation management. The plethora of mobility data sources, such as GPS trajectories, poses new challenges and opportunities for understanding and predicting movement patterns. In this study, we predict highway speed using a gated recurrent unit (GRU) neural network. Based on statistical models, previous approaches suffer from the inherited features of traffic data, such as nonlinear problems. The proposed method predicts highway speed based on the GRU method after training on digital tachograph data (DTG). The DTG data were recorded in one month, giving approximately 300 million records. These data included the velocity and locations of vehicles on the highway. Experimental results demonstrate that the GRU-based deep learning approach outperformed the state-of-the-art alternatives, the autoregressive integrated moving average model, and the long short-term neural network (LSTM) model, in terms of prediction accuracy. Further, the computational cost of the GRU model was lower than that of the LSTM. The proposed method can be applied to traffic prediction and intelligent transportation systems.
15

Guang, Xingxing, Yanbin Gao, Pan Liu, and Guangchun Li. "IMU Data and GPS Position Information Direct Fusion Based on LSTM". Sensors 21, No. 7 (03.04.2021): 2500. http://dx.doi.org/10.3390/s21072500.

Annotation:
In recent years, the application of deep learning to the inertial navigation field has brought new vitality to inertial navigation technology. In this study, we propose a method using long short-term memory (LSTM) to estimate position information based on inertial measurement unit (IMU) data and Global Positioning System (GPS) position information. Simulations and experiments show the practicability of the proposed method in both static and dynamic cases. In static cases, vehicle stop data are simulated or recorded. In dynamic cases, uniform rectilinear motion data are simulated or recorded. The value range of the LSTM hyperparameters is explored through both static and dynamic simulations. The simulation and experiment results are compared with a strapdown inertial navigation system (SINS)/GPS integrated navigation system based on a Kalman filter (KF). In a simulation, the LSTM method’s computed position error Standard Deviation (STD) was 52.38% of what the SINS computed. The biggest simulation radial error estimated by the LSTM method was 0.57 m. In experiments, the position error STD computed by the LSTM method was 23.08% of that obtained using only SINS. The biggest experimental radial error the LSTM method estimated was 1.31 m. The position estimated by the LSTM fusion method shows no cumulative divergence error, in contrast to the SINS-computed position. All in all, the trained LSTM is a dependable fusion method for combining IMU data and GPS position information to estimate position.
16

Shahi, Tej Bahadur, Ashish Shrestha, Arjun Neupane, and William Guo. "Stock Price Forecasting with Deep Learning: A Comparative Study". Mathematics 8, No. 9 (27.08.2020): 1441. http://dx.doi.org/10.3390/math8091441.

Annotation:
The long short-term memory (LSTM) and gated recurrent unit (GRU) models are popular deep-learning architectures for stock market forecasting. Various studies have speculated that incorporating financial news sentiment in forecasting could produce a better performance than using stock features alone. This study carried a normalized comparison on the performances of LSTM and GRU for stock market forecasting under the same conditions and objectively assessed the significance of incorporating the financial news sentiments in stock market forecasting. This comparative study is conducted on the cooperative deep-learning architecture proposed by us. Our experiments show that: (1) both LSTM and GRU are circumstantial in stock forecasting if only the stock market features are used; (2) the performance of LSTM and GRU for stock price forecasting can be significantly improved by incorporating the financial news sentiments with the stock features as the input; (3) both the LSTM-News and GRU-News models are able to produce better forecasting in stock price equally; (4) the cooperative deep-learning architecture proposed in this study could be modified as an expert system incorporating both the LSTM-News and GRU-News models to recommend the best possible forecasting whichever model can produce dynamically.
17

Hsieh, Shun-Chieh. "Tourism Demand Forecasting Based on an LSTM Network and Its Variants". Algorithms 14, No. 8 (18.08.2021): 243. http://dx.doi.org/10.3390/a14080243.

Annotation:
The need for accurate tourism demand forecasting is widely recognized. The unreliability of traditional methods makes tourism demand forecasting still challenging. Using deep learning approaches, this study aims to adapt Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit networks (GRU), which are straightforward and efficient, to improve Taiwan’s tourism demand forecasting. The networks are able to seize the dependence of visitor arrival time series data. The Adam optimization algorithm with adaptive learning rate is used to optimize the basic setup of the models. The results show that the proposed models outperform previous studies undertaken during the Severe Acute Respiratory Syndrome (SARS) events of 2002–2003. This article also examines the effects of the current COVID-19 outbreak to tourist arrivals to Taiwan. The results show that the use of the LSTM network and its variants can perform satisfactorily for tourism demand forecasting.
18

Bouktif, Salah, Ali Fiaz, Ali Ouni, and Mohamed Adel Serhani. "Single and Multi-Sequence Deep Learning Models for Short and Medium Term Electric Load Forecasting". Energies 12, No. 1 (02.01.2019): 149. http://dx.doi.org/10.3390/en12010149.

Annotation:
Time series analysis using long short term memory (LSTM) deep learning is a very attractive strategy to achieve accurate electric load forecasting. Although it outperforms most machine learning approaches, the LSTM forecasting model still reveals a lack of validity because it neglects several characteristics of the electric load exhibited by time series. In this work, we propose a load-forecasting model based on enhanced-LSTM that explicitly considers the periodicity characteristic of the electric load by using multiple sequences of inputs time lags. An autoregressive model is developed together with an autocorrelation function (ACF) to regress consumption and identify the most relevant time lags to feed the multi-sequence LSTM. Two variations of deep neural networks, LSTM and gated recurrent unit (GRU) are developed for both single and multi-sequence time-lagged features. These models are compared to each other and to a spectrum of data mining benchmark techniques including artificial neural networks (ANN), boosting, and bagging ensemble trees. France Metropolitan’s electricity consumption data is used to train and validate our models. The obtained results show that GRU- and LSTM-based deep learning model with multi-sequence time lags achieve higher performance than other alternatives including the single-sequence LSTM. It is demonstrated that the new models can capture critical characteristics of complex time series (i.e., periodicity) by encompassing past information from multiple timescale sequences. These models subsequently achieve predictions that are more accurate.
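The lag-selection step described above can be illustrated with a small sketch: compute the autocorrelation function of the load series, keep the most correlated lags, and assemble the lagged input matrix that would feed the multi-sequence model. This is only a loose illustration; the toy series, the number of lags kept, and the use of the plain ACF (without the accompanying autoregressive model) are assumptions, not the paper's procedure.

```python
# Hedged sketch: ACF-based selection of time lags for a lagged design matrix.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)
t = np.arange(24 * 365)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)  # toy hourly load

corr = acf(load, nlags=24 * 7)                         # ACF up to one week of hourly lags
candidate_lags = np.argsort(corr[1:])[::-1][:5] + 1    # five most correlated lags (1-based)
print("selected lags (hours):", sorted(int(lag) for lag in candidate_lags))

# Lagged design matrix: each row holds the load at the selected past lags.
max_lag = int(candidate_lags.max())
X = np.stack([load[max_lag - lag:-lag] for lag in candidate_lags], axis=1)
y = load[max_lag:]
print(X.shape, y.shape)
```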
19

Rajagukguk, Rial A., Raden A. A. Ramadhan, and Hyun-Jin Lee. "A Review on Deep Learning Models for Forecasting Time Series Data of Solar Irradiance and Photovoltaic Power". Energies 13, No. 24 (15.12.2020): 6623. http://dx.doi.org/10.3390/en13246623.

Annotation:
Presently, deep learning models are an alternative solution for predicting solar energy because of their accuracy. The present study reviews deep learning models for handling time-series data to predict solar irradiance and photovoltaic (PV) power. We selected three standalone models and one hybrid model for the discussion, namely, recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network-LSTM (CNN–LSTM). The selected models were compared based on the accuracy, input data, forecasting horizon, type of season and weather, and training time. The performance analysis shows that these models have their strengths and limitations in different conditions. Generally, for standalone models, LSTM shows the best performance regarding the root-mean-square error evaluation metric (RMSE). On the other hand, the hybrid model (CNN–LSTM) outperforms the three standalone models, although it requires longer training data time. The most significant finding is that the deep learning models of interest are more suitable for predicting solar irradiance and PV power than other conventional machine learning models. Additionally, we recommend using the relative RMSE as the representative evaluation metric to facilitate accuracy comparison between studies.
20

Shewalkar, Apeksha, Deepika Nyavanandi, and Simone A. Ludwig. "Performance Evaluation of Deep Neural Networks Applied to Speech Recognition: RNN, LSTM and GRU". Journal of Artificial Intelligence and Soft Computing Research 9, No. 4 (01.10.2019): 235–45. http://dx.doi.org/10.2478/jaiscr-2019-0006.

Annotation:
Deep Neural Networks (DNNs) are nothing but neural networks with many hidden layers. DNNs are becoming popular in automatic speech recognition tasks, which combine a good acoustic model with a language model. Standard feedforward neural networks cannot handle speech data well since they do not have a way to feed information from a later layer back to an earlier layer. Thus, Recurrent Neural Networks (RNNs) have been introduced to take temporal dependencies into account. However, the shortcoming of RNNs is that long-term dependencies cannot be handled due to the vanishing/exploding gradient problem. Therefore, Long Short-Term Memory (LSTM) networks were introduced, which are a special case of RNNs that takes long-term dependencies in speech, in addition to short-term dependencies, into account. Similarly, GRU (Gated Recurrent Unit) networks are an improvement of LSTM networks, also taking long-term dependencies into consideration. Thus, in this paper, we evaluate RNN, LSTM, and GRU to compare their performances on a reduced TED-LIUM speech data set. The results show that LSTM achieves the best word error rates; however, the GRU optimization is faster while achieving word error rates close to those of LSTM.
21

Du, Wenjun, Bo Sun, Jiating Kuai, Jiemin Xie, Jie Yu, and Tuo Sun. "Highway Travel Time Prediction of Segments Based on ANPR Data considering Traffic Diversion". Journal of Advanced Transportation 2021 (09.07.2021): 1–16. http://dx.doi.org/10.1155/2021/9512501.

Annotation:
Travel time is one of the most critical parameters in proactive traffic management and the deployment of advanced traveler information systems. This paper proposes a hybrid model named LSTM-CNN for predicting the travel time of highways by integrating the long short-term memory (LSTM) and the convolutional neural networks (CNNs) with the attention mechanism and the residual network. The highway is divided into multiple segments by considering the traffic diversion and the relative location of automatic number plate recognition (ANPR). There are four steps in this hybrid approach. First, the average travel time of each segment in each interval is calculated from ANPR and fed into LSTM in the form of a multidimensional array. Second, the attention mechanism is adopted to combine the hidden layer of LSTM with dynamic temporal weights. Third, the residual network is introduced to increase the network depth and overcome the vanishing gradient problem, which consists of three pairs of one-dimensional convolutional layers (Conv1D) and batch normalization (BatchNorm) with the rectified linear unit (ReLU) as the activation function. Finally, a series of Conv1D layers is connected to extract features further and reduce dimensionality. The proposed LSTM-CNN approach is tested on the three-month ANPR data of a real-world 39.25 km highway with four pairs of ANPR detectors of the uplink and downlink, Zhejiang, China. The experimental results indicate that LSTM-CNN learns spatial, temporal, and depth information better than the state-of-the-art traffic forecasting models, so LSTM-CNN can predict more accurate travel time. Moreover, LSTM-CNN outperforms the state-of-the-art methods in nonrecurrent prediction, multistep-ahead prediction, and long-term prediction. LSTM-CNN is a promising model with scalability and portability for highway traffic prediction and can be further extended to improve the performance of the advanced traffic management system (ATMS) and advanced traffic information system (ATIS).
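A hedged sketch of the residual convolutional block described above: Conv1D and batch normalization pairs with ReLU activations and an identity shortcut, applied here to a sequence of LSTM-derived features. The filter count, sequence length, number of Conv1D/BatchNorm pairs and the regression head are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a residual Conv1D + BatchNorm block over LSTM feature sequences.
import tensorflow as tf

def residual_conv1d_block(x, filters, kernel_size=3):
    shortcut = x
    y = tf.keras.layers.Conv1D(filters, kernel_size, padding="same")(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Activation("relu")(y)
    y = tf.keras.layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Add()([y, shortcut])        # identity shortcut
    return tf.keras.layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(12, 64))             # e.g. 12 intervals of 64 LSTM features (assumed)
x = residual_conv1d_block(inputs, 64)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(1)(x)               # predicted travel time
model = tf.keras.Model(inputs, outputs)
model.summary()
```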
22

Qiao, Mu, and Zixuan Cheng. "A Novel Long- and Short-Term Memory Network with Time Series Data Analysis Capabilities". Mathematical Problems in Engineering 2020 (13.10.2020): 1–9. http://dx.doi.org/10.1155/2020/8885625.

Annotation:
Time series data are an extremely important type of data in the real world. Time series data gradually accumulate over time. Due to the dynamic growth in time series data, they tend to have higher dimensions and large data scales. When performing cluster analysis on this type of data, there are shortcomings in using traditional feature extraction methods for processing. To improve the clustering performance on time series data, this study uses a recurrent neural network (RNN) to train the input data. First, an RNN called the long short-term memory (LSTM) network is used to extract the features of time series data. Second, pooling technology is used to reduce the dimensionality of the output features in the last layer of the LSTM network. Due to the long time series, the hidden layer in the LSTM network cannot remember the information at all times. As a result, it is difficult to obtain a compressed representation of the global information in the last layer. Therefore, it is necessary to combine the information from the previous hidden unit to supplement all of the data. By stacking all the hidden unit information and performing a pooling operation, a dimensionality reduction effect of the hidden unit information is achieved. In this way, the memory loss caused by an excessively long sequence is compensated. Finally, considering that many time series data are unbalanced data, the unbalanced K-means (UK-means) algorithm is used to cluster the features after dimensionality reduction. The experiments were conducted on multiple publicly available time series datasets. The experimental results show that LSTM-based feature extraction combined with the dimensionality reduction processing of the pooling technology and cluster processing for imbalanced data used in this study has a good effect on the processing of time series data.
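A minimal sketch of the feature-extraction step described above: an LSTM returns its hidden state at every time step, and pooling over the stacked hidden states yields one fixed-size vector per series that could then be clustered. All dimensions, and the choice of average pooling, are illustrative assumptions.

```python
# Hedged sketch: pooling over all LSTM hidden states to get one feature vector per series.
import numpy as np
import tensorflow as tf

seq_len, n_features, n_hidden = 140, 1, 32  # assumed series length and sizes

inputs = tf.keras.Input(shape=(seq_len, n_features))
hidden_states = tf.keras.layers.LSTM(n_hidden, return_sequences=True)(inputs)  # all hidden states
features = tf.keras.layers.GlobalAveragePooling1D()(hidden_states)             # pooled representation
encoder = tf.keras.Model(inputs, features)

series_batch = np.random.randn(16, seq_len, n_features).astype("float32")
print(encoder(series_batch).shape)   # (16, 32): one compressed vector per series, ready for clustering
```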
23

Gao, Miao, Guoyou Shi, and Shuang Li. "Online Prediction of Ship Behavior with Automatic Identification System Sensor Data Using Bidirectional Long Short-Term Memory Recurrent Neural Network". Sensors 18, No. 12 (30.11.2018): 4211. http://dx.doi.org/10.3390/s18124211.

Annotation:
The real-time prediction of ship behavior plays an important role in navigation and intelligent collision avoidance systems. This study developed an online real-time ship behavior prediction model by constructing a bidirectional long short-term memory recurrent neural network (BI-LSTM-RNN) that is suitable for automatic identification system (AIS) data and time sequential characteristics, and for online parameter adjustment. The bidirectional structure enhanced the relevance between historical and future data, thus improving the prediction accuracy. Through the “forget gate” of the long short-term memory (LSTM) unit, the common behavioral patterns were remembered and unique behaviors were forgotten, improving the universality of the model. The BI-LSTM-RNN was trained using 2015 AIS data from Tianjin Port waters. The results indicate that the BI-LSTM-RNN effectively predicted the navigational behaviors of ships. This study contributes significantly to the increased efficiency and safety of sea operations. The proposed method could potentially be applied as the predictive foundation for various intelligent systems, including intelligent collision avoidance, vessel route planning, operational efficiency estimation, and anomaly detection systems.
24

Zhang, Qitao, Chenji Wei, Yuhe Wang, Shuyi Du, Yuanchun Zhou, and Hongqing Song. "Potential for Prediction of Water Saturation Distribution in Reservoirs Utilizing Machine Learning Methods". Energies 12, No. 19 (20.09.2019): 3597. http://dx.doi.org/10.3390/en12193597.

Annotation:
Machine learning technology is becoming increasingly prevalent in the petroleum industry, especially for reservoir characterization and drilling problems. The aim of this study is to present an alternative way to predict water saturation distribution in reservoirs with a machine learning method. In this study, we utilized Long Short-Term Memory (LSTM) to build a prediction model for forecast of water saturation distribution. The dataset deriving from monitoring and simulating of an actual reservoir was utilized for model training and testing. The data model after training was validated and utilized to forecast water saturation distribution, pressure distribution and oil production. We also compared standard Recurrent Neural Network (RNN) and Gated Recurrent Unit (GRU) which are popular machine learning methods with LSTM for better water saturation prediction. The results show that the LSTM method has a good performance on the water saturation prediction with overall AARD below 14.82%. Compared with other machine learning methods such as GRU and standard RNN, LSTM has better performance in calculation accuracy. This study presented an alternative way for quick and robust prediction of water saturation distribution in reservoir.
25

Ningrum, Ayu Ahadi, Iwan Syarif, Agus Indra Gunawan, Edi Satriyanto, and Rosmaliati Muchtar. "Algoritma Deep Learning-LSTM untuk Memprediksi Umur Transformator". Jurnal Teknologi Informasi dan Ilmu Komputer 8, No. 3 (15.06.2021): 539. http://dx.doi.org/10.25126/jtiik.2021834587.

Annotation:
The quality and availability of the electricity supply are very important. Failures in the transformer cause power outages which can reduce the quality of service to customers. Therefore, knowledge of transformer life is very important to avoid sudden transformer damage which can reduce the quality of service to customers. This study aims to develop applications that can predict transformer life accurately using the Deep Learning-LSTM method. LSTM is a method that can be used to learn a pattern in time series data. The data used in this research come from 25 transformer units and include data from current, voltage, and temperature sensors. The performance measures used to evaluate the LSTM are the Root Mean Squared Error (RMSE) and Squared Correlation (SC). Apart from LSTM, this research also applies the Multilayer Perceptron, Linear Regression, and Gradient Boosting Regressor algorithms for comparison. The experimental results show that LSTM has a very good performance after searching for the composition of the data, selecting features using the KBest algorithm and experimenting with several parameter variations. The results show that the Deep Learning-LSTM method performs better than the other three algorithms, with RMSE = 0.0004 and Squared Correlation = 0.9690.
26

Zhou, Jianzhong, Yahui Shan, Jie Liu, Yanhe Xu, and Yang Zheng. "Degradation Tendency Prediction for Pumped Storage Unit Based on Integrated Degradation Index Construction and Hybrid CNN-LSTM Model". Sensors 20, No. 15 (31.07.2020): 4277. http://dx.doi.org/10.3390/s20154277.

Annotation:
Accurate degradation tendency prediction (DTP) is vital for the secure operation of a pumped storage unit (PSU). However, the existing techniques and methodologies for DTP still face challenges, such as a lack of appropriate degradation indicators, insufficient accuracy, and poor capability to track the data fluctuation. In this paper, a hybrid model is proposed for the degradation tendency prediction of a PSU, which combines the integrated degradation index (IDI) construction and convolutional neural network-long short-term memory (CNN-LSTM). Firstly, the health model of a PSU is constructed with Gaussian process regression (GPR) and the condition parameters of active power, working head, and guide vane opening. Subsequently, for comprehensively quantifying the degradation level of PSU, an IDI is developed using entropy weight (EW) theory. Finally, combining the local feature extraction of the CNN with the time series representation of LSTM, the CNN-LSTM model is constructed to realize DTP. To validate the effectiveness of the proposed model, the monitoring data collected from a PSU in China is taken as case studies. The root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) obtained by the proposed model are 1.1588, 0.8994, 0.0918, and 0.9713, which can meet the engineering application requirements. The experimental results show that the proposed model outperforms other comparison models.
27

Ishizuka, Kazumi, Nobuaki Kobayashi, and Ken Saito. "High Accuracy and Short Delay 1ch-SSVEP Quadcopter-BMI Using Deep Learning". Journal of Robotics and Mechatronics 32, No. 4 (20.08.2020): 738–44. http://dx.doi.org/10.20965/jrm.2020.p0738.

Annotation:
This study considers a brain-machine interface (BMI) system based on the steady state visually evoked potential (SSVEP) for controlling quadcopters using electroencephalography (EEG) signals. An EEG channel with a single dry electrode, i.e., without conductive gel or paste, was utilized to minimize the load on users. Convolutional neural network (CNN) and long short-term memory (LSTM) models, both of which have received significant research attention, were used to classify the EEG data obtained for flickers from multi-flicker screens at five different frequencies, with each flicker corresponding to a drone movement, viz., takeoff, forward and sideways movements, and landing. The subjects of the experiment were seven healthy men. Results indicate a high accuracy of 97% with the LSTM model for a 2 s segment used as the unit of processing. High accuracy of 93% for 0.5 s segment as a unit of processing can remain in the LSTM classification, consequently decreasing the delay of the system that may be required for safety reasons in real-time applications. A system demonstration was undertaken with 2 out of 7 subjects controlling the quadcopter and monitoring movements such as takeoff, forward motion, and landing, which showed a success rate of 90% on average.
28

Chae, Minsu, Sangwook Han, and HwaMin Lee. "Outdoor Particulate Matter Correlation Analysis and Prediction Based Deep Learning in the Korea". Electronics 9, No. 7 (15.07.2020): 1146. http://dx.doi.org/10.3390/electronics9071146.

Annotation:
Particulate matter (PM) has become a problem worldwide, with many deleterious health effects such as worsened asthma, affected lungs, and various toxin-induced cancers. The International Agency for Research on Cancer (IARC) under the World Health Organization (WHO) has designated PM as a group 1 carcinogen. Although the Korea Environment Corporation forecasts the status of outdoor PM four times a day, based on whichever of PM10 and PM2.5 is higher, it only forecasts the stages of PM, and it remains difficult to predict the value of PM when going out. We correlate air quality, solar terms, address format, and weather data with PM in Korea. We analyzed the correlation of address format, air quality data, and weather data with PM. We evaluated performance according to the sequence length and batch size and found the best outcome with a sequence length of 7 days and a batch size of 96. We performed PM prediction using the Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Gated Recurrent Unit (GRU) models. The CNN model suffered the limitation of only predicting from the training data, not from the test data. The LSTM and GRU models generated similar prediction results. We confirmed that the LSTM model has higher accuracy than the other two models.
29

Son, Hojae, Anand Paul, and Gwanggil Jeon. "Country Information Based on Long-Term Short-Term Memory (LSTM)". International Journal of Engineering & Technology 7, No. 4.44 (01.12.2018): 47. http://dx.doi.org/10.14419/ijet.v7i4.44.26861.

Annotation:
Social platforms such as Facebook, Twitter and Instagram generate tremendous amounts of data these days. Researchers make use of these data to extract meaningful information and predict the future. Twitter in particular is a platform where people can briefly share their thoughts on a certain topic, and it provides a real-time streaming data API (Application Programming Interface) for filtering data for a purpose. Over time, a country's interest in other countries changes. People can benefit from seeing the tendency of interest as well as prediction results derived from Twitter streaming data. Capturing the Twitter data flow reflects how people think about and take an interest in a topic, and we believe real-time Twitter data reflect this change. The Long Short-Term Memory (LSTM) unit is a widely used deep learning unit from recurrent neural networks for learning sequences. The purpose of this work is to build the prediction model “Country Interest Analysis based on LSTM (CIAL)” to forecast the next interval of tweet counts referring to a country in tweet posts. Additionally, clustering is necessary for analyzing the Twitter data of multiple countries over remote nodes. This paper presents how the country attention tendency can be captured from Twitter streaming data with the LSTM algorithm.
30

Wang, Chuang, Wenbo Du, Zhixiang Zhu, and Zhifeng Yue. "The real-time big data processing method based on LSTM or GRU for the smart job shop production process". Journal of Algorithms & Computational Technology 14 (January 2020): 174830262096239. http://dx.doi.org/10.1177/1748302620962390.

Annotation:
With the wide application of intelligent sensors and the internet of things (IoT) in the smart job shop, a large amount of real-time production data is collected. Accurate analysis of the collected data can help producers to make effective decisions. Compared with traditional data processing methods, artificial intelligence, as the main big data analysis method, is increasingly applied to the manufacturing industry. However, the ability of different AI models to process real-time data of smart job shop production also differs. Based on this, a real-time big data processing method for the job shop production process based on Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) is proposed. This method uses the historical production data extracted from the IoT job shop as the original data set and, after data preprocessing, uses the LSTM and GRU models to train on and predict the real-time data of the job shop. Through the description and implementation of the models, they are compared with KNN, DT and a traditional neural network model. The results show that in the real-time big data processing of the production process, the performance of the LSTM and GRU models is superior to that of the traditional neural network, K-nearest neighbor (KNN) and decision tree (DT). While its performance is similar to that of LSTM, the training time of the GRU model is much lower than that of the LSTM model.
31

Liu, Bingchun, Lei Zhang, Qingshan Wang, and Jiali Chen. "A Novel Method for Regional NO2 Concentration Prediction Using Discrete Wavelet Transform and an LSTM Network". Computational Intelligence and Neuroscience 2021 (07.04.2021): 1–14. http://dx.doi.org/10.1155/2021/6631614.

Annotation:
Achieving accurate predictions of urban NO2 concentration is essential for the effective control of air pollution. This paper selects the concentration of NO2 in Tianjin as the research object and constructs a prediction model based on the Discrete Wavelet Transform and a Long Short-Term Memory network (DWT-LSTM) for predicting the daily average NO2 concentration. Five major atmospheric pollutants, key meteorological data, and historical data were selected as the input indexes, realizing the effective prediction of the NO2 concentration for the next day. Firstly, the input data were decomposed by the Discrete Wavelet Transform to increase the data dimension. Furthermore, the LSTM network model was used to learn the features of the decomposed data. Ultimately, Support Vector Regression (SVR), the Gated Recurrent Unit (GRU), and a single LSTM model were selected as comparison models, and each performance was evaluated by the Mean Absolute Percentage Error (MAPE). The results show that the DWT-LSTM model constructed in this paper can improve the accuracy and generalization ability of data mining by decomposing the input data into multiple components. Compared with the other three methods, the model structure is more suitable for predicting the NO2 concentration in Tianjin.
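A hedged sketch of the decomposition step described above, using PyWavelets: each input window is expanded with a discrete wavelet transform into approximation and detail coefficients, giving the higher-dimensional features that would feed the LSTM. The wavelet, decomposition level, window size and toy series are assumptions, not the paper's choices.

```python
# Hedged sketch: DWT-expanded features for a downstream LSTM predictor.
import numpy as np
import pywt

window = 32
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)  # toy pollutant-like series

def dwt_features(segment, wavelet="db4", level=2):
    # Concatenate approximation and detail coefficients into one feature vector.
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.concatenate(coeffs)

X = np.stack([dwt_features(series[i:i + window]) for i in range(len(series) - window)])
y = series[window:]
print(X.shape, y.shape)  # each row is the wavelet-expanded representation of one window
```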
32

Adipradana, Ryan, Bagas Pradipabista Nayoga, Ryan Suryadi, and Derwin Suhartono. "Hoax analyzer for Indonesian news using RNNs with fasttext and glove embeddings". Bulletin of Electrical Engineering and Informatics 10, No. 4 (01.08.2021): 2130–36. http://dx.doi.org/10.11591/eei.v10i4.2956.

Annotation:
Misinformation has become an innocuous yet potentially harmful problem ever since the development of the internet. A number of efforts have been made to prevent the consumption of misinformation, including the use of artificial intelligence (AI), mainly natural language processing (NLP). Unfortunately, most natural language processing uses English as its linguistic approach, since English is a high-resource language. In contrast, the Indonesian language is considered a low-resource language, and thus the amount of effort to diminish the consumption of misinformation is low compared to English-based natural language processing. This experiment is intended to compare fastText and GloVe embeddings for four deep neural network (DNN) models: long short-term memory (LSTM), bidirectional long short-term memory (BI-LSTM), gated recurrent unit (GRU) and bidirectional gated recurrent unit (BI-GRU), in terms of metric scores when classifying news into three classes: fake, valid, and satire. The results show that fastText embeddings are better than GloVe embeddings in supervised text classification, with BI-GRU + fastText yielding the best result.
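A minimal sketch of the best configuration reported above: pre-trained word vectors (e.g. fastText) frozen in an embedding layer, followed by a bidirectional GRU and a three-way softmax over fake/valid/satire. The vocabulary size, vector dimension and sequence length are placeholders, and loading the actual fastText vectors is omitted; the random matrix merely stands in for them.

```python
# Hedged sketch: frozen pre-trained embeddings + bidirectional GRU text classifier.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, max_len, n_classes = 20000, 300, 100, 3
embedding_matrix = np.random.normal(size=(vocab_size, embed_dim)).astype("float32")  # stand-in for fastText vectors

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),                                   # keep the pre-trained vectors frozen
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dense(n_classes, activation="softmax"), # fake / valid / satire
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```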
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Kim, Jin-Gyeom, und Bowon Lee. „Appliance Classification by Power Signal Analysis Based on Multi-Feature Combination Multi-Layer LSTM“. Energies 12, Nr. 14 (21.07.2019): 2804. http://dx.doi.org/10.3390/en12142804.

Der volle Inhalt der Quelle
Annotation:
The imbalance of power supply and demand is an important problem to solve in the power industry, and Non-Intrusive Load Monitoring (NILM) is one of the representative technologies for power demand management. The most critical factor in NILM is the performance of the classifier, one of the last steps of the overall NILM operation, and therefore improving the performance of the NILM classifier is an important issue. This paper proposes a new architecture based on the RNN to overcome the limitations of existing classification algorithms and to improve the performance of the NILM classifier. The proposed model, called Multi-Feature Combination Multi-Layer Long Short-Term Memory (MFC-ML-LSTM), adapts various feature extraction techniques that are commonly used for audio signal processing to power signals. It uses Multi-Feature Combination (MFC) to generate modified input data for improving the classification performance and adopts a Multi-Layer LSTM (ML-LSTM) network as the classification model for further improvements. Experimental results show that the proposed method achieves an accuracy and an F1-score for appliance classification in the ranges of 95–100% and 84–100%, respectively, which are superior to the existing methods based on the Gated Recurrent Unit (GRU) or a single-layer LSTM.
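A hedged sketch of a multi-layer LSTM classifier of the kind described above (stacked LSTM layers over a sequence of extracted power-signal features); the feature dimension, number of appliance classes, and data are placeholders rather than the paper's MFC features.

# Sketch of a stacked (multi-layer) LSTM appliance classifier; inputs stand in
# for frames of extracted power-signal features, sizes are assumptions.
import numpy as np
import tensorflow as tf

timesteps, n_features, n_appliances = 100, 24, 10
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first LSTM layer
    tf.keras.layers.LSTM(64),                           # second LSTM layer
    tf.keras.layers.Dense(n_appliances, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(400, timesteps, n_features).astype("float32")
y = np.random.randint(0, n_appliances, size=(400,))
model.fit(X, y, epochs=2, verbose=0)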
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Fouladgar, Nazanin, und Kary Främling. „A Novel LSTM for Multivariate Time Series with Massive Missingness“. Sensors 20, Nr. 10 (16.05.2020): 2832. http://dx.doi.org/10.3390/s20102832.

Der volle Inhalt der Quelle
Annotation:
Multivariate time series with missing data are ubiquitous when streaming data are collected by sensors or other recording instruments. For instance, outdoor sensors gathering different meteorological variables may exhibit low sensitivity in specific situations, leading to incomplete information gathering. This is problematic for time series prediction with massive missingness and different missing rates across variables. A contribution addressing this problem on the regression task of meteorological datasets by employing Long Short-Term Memory (LSTM), which is capable of controlling the information flow with its memory unit, has been missing. In this paper, we propose a novel model called forward and backward variable-sensitive LSTM (FBVS-LSTM), consisting of two decay mechanisms and some informative data. The model inputs are mainly the missing indicator, the time intervals of missingness in both the forward and backward directions, and the missing rate of each variable. We employ this information to address the so-called missing not at random (MNAR) mechanism. By learning the features of each parameter separately, the model becomes adapted to deal with massive missingness. We conduct our experiment on three real-world datasets for air pollution forecasting. The results demonstrate that our model performed well alongside other LSTM-derived models in terms of prediction accuracy.
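The auxiliary inputs named above (missing indicator, forward and backward time intervals of missingness, and per-variable missing rate) can be constructed as in the following sketch; the toy array and unit time steps are assumptions, not the paper's data.

# Sketch of auxiliary inputs for a multivariate series with missing values:
# missing mask, forward/backward time gaps, and per-variable missing rate.
import numpy as np

X = np.array([[1.0, np.nan, 3.0],
              [np.nan, 2.0, np.nan],
              [4.0, np.nan, np.nan],
              [5.0, 6.0, 7.0]])          # (time, variables), NaN = missing

mask = (~np.isnan(X)).astype(float)      # 1 where observed, 0 where missing
missing_rate = 1.0 - mask.mean(axis=0)   # per-variable missing rate

def time_gaps(mask):
    """Steps since the last observation, scanned forward in time."""
    T, D = mask.shape
    delta = np.zeros((T, D))
    for t in range(1, T):
        delta[t] = np.where(mask[t - 1] == 1, 1, delta[t - 1] + 1)
    return delta

delta_fwd = time_gaps(mask)              # forward direction
delta_bwd = time_gaps(mask[::-1])[::-1]  # backward direction

print("missing rate per variable:", missing_rate)
print("forward gaps:\n", delta_fwd)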
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Bouaafia, Soulef, Randa Khemiri, Amna Maraoui und Fatma Elzahra Sayadi. „CNN-LSTM Learning Approach-Based Complexity Reduction for High-Efficiency Video Coding Standard“. Scientific Programming 2021 (25.03.2021): 1–10. http://dx.doi.org/10.1155/2021/6628041.

Der volle Inhalt der Quelle
Annotation:
High-Efficiency Video Coding provides a better compression ratio than the earlier standard, H.264/Advanced Video Coding. In fact, HEVC saves 50% bit rate compared to H.264/AVC for the same subjective quality. This improvement is notably obtained through the hierarchical quadtree-structured Coding Unit. However, the computational complexity increases significantly due to the full-search Rate-Distortion Optimization required to reach the optimal Coding Tree Unit partition. Despite the many speedup algorithms developed in the literature, HEVC encoding complexity still remains a crucial problem in the video coding field. Toward this goal, we propose in this paper a deep learning model-based fast mode decision algorithm for the HEVC inter mode. Firstly, we provide an in-depth overview of the proposed CNN-LSTM, which plays a central and pivotal role in this contribution, predicting the CU splitting and reducing the HEVC encoding complexity. Secondly, a large training and inference dataset for HEVC intercoding was investigated to train and test the proposed deep framework. Based on this framework, the temporal correlation of the CU partition for each video frame is handled by the LSTM network. Numerical results prove that the proposed CNN-LSTM scheme reduces the encoding complexity by 58.60% with an increase in BD rate of 1.78% and a decrease in BD-PSNR of 0.053 dB. Compared to related works, the proposed scheme achieves the best compromise between RD performance and complexity reduction, as proven by the experimental results.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Oh, Youngkyo, und Dongyoung Koo. „Evaluation of Korean Reviews Automatically Generated using Long Short-Term Memory Unit“. Journal of KIISE 46, Nr. 6 (30.06.2019): 515–25. http://dx.doi.org/10.5626/jok.2019.46.6.515.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Li, Teng, und Yepeng Guan. „Dual Memory LSTM with Dual Attention Neural Network for Spatiotemporal Prediction“. Sensors 21, Nr. 12 (21.06.2021): 4248. http://dx.doi.org/10.3390/s21124248.

Der volle Inhalt der Quelle
Annotation:
Spatiotemporal prediction is challenging due to inefficient representation extraction and the lack of rich contextual dependencies. A novel approach is proposed for spatiotemporal prediction using a dual memory LSTM with a dual attention neural network (DMANet). A new dual memory LSTM (DMLSTM) unit is proposed to extract representations by leveraging differencing operations between consecutive images and adopting a dual memory transition mechanism. To make full use of historical representations, a dual attention mechanism is designed to capture long-term spatiotemporal dependencies by computing the correlations between the current hidden representations and the historical hidden representations along the temporal and spatial dimensions, respectively. The dual attention is then embedded into the DMLSTM unit to construct a DMANet, which gives the model greater modeling power for short-term dynamics and long-term contextual representations. An apparent resistivity map (AR Map) dataset is proposed in this paper. The B-spline interpolation method is utilized to enhance the AR Map dataset and make the apparent resistivity trend curve continuously differentiable in the time dimension. The experimental results demonstrate that the developed method has excellent prediction performance in comparisons with several state-of-the-art methods.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Friedrich, Björn, Carolin Lübbe und Andreas Hein. „Analyzing the Importance of Sensors for Mode of Transportation Classification“. Sensors 21, Nr. 1 (29.12.2020): 176. http://dx.doi.org/10.3390/s21010176.

Der volle Inhalt der Quelle
Annotation:
The broad availability of smartphones, and of Inertial Measurement Units in particular, brings them into the focus of recent research. Inertial Measurement Unit data are used for a variety of tasks. One important task is the classification of the mode of transportation. In the first step, we present a deep-learning-based algorithm that combines a long short-term memory (LSTM) layer and a convolutional layer to classify eight different modes of transportation on the Sussex–Huawei Locomotion-Transportation (SHL) dataset. The inputs of our model are the accelerometer, gyroscope, linear acceleration, magnetometer, gravity and pressure values as well as the orientation information. In the second step, we analyze the contribution of each sensor modality to the classification score and to the different modes of transportation. For this analysis, we subtract the baseline confusion matrix from the confusion matrix of a network trained with a left-out sensor modality (difference confusion matrix) and visualize the low-level features from the LSTM layers. This approach provides useful insights into the properties of the deep-learning algorithm and indicates the presence of redundant sensor modalities.
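A minimal sketch of the difference confusion matrix analysis described above, assuming placeholder predictions and a hypothetical left-out gyroscope modality.

# Sketch of the "difference confusion matrix": confusion matrix of a model
# trained without one sensor modality minus the baseline confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = list(range(8))                               # 8 transportation modes

y_true = np.random.randint(0, 8, size=500)            # placeholders for real labels
y_pred_baseline = np.random.randint(0, 8, size=500)   # baseline model predictions
y_pred_no_gyro = np.random.randint(0, 8, size=500)    # model trained without gyroscope

cm_baseline = confusion_matrix(y_true, y_pred_baseline, labels=labels)
cm_no_gyro = confusion_matrix(y_true, y_pred_no_gyro, labels=labels)

# A negative entry on the diagonal means that class lost correct predictions
# when the sensor modality was removed.
difference = cm_no_gyro - cm_baseline
print(difference)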
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Hsu, Fu-Shun, Shang-Ran Huang, Chien-Wen Huang, Chao-Jung Huang, Yuan-Ren Cheng, Chun-Chieh Chen, Jack Hsiao et al. „Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database—HF_Lung_V1“. PLOS ONE 16, Nr. 7 (01.07.2021): e0254134. http://dx.doi.org/10.1371/journal.pone.0254134.

Der volle Inhalt der Quelle
Annotation:
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios—such as in monitoring disease progression of coronavirus disease 2019—to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also conducted a performance comparison between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed, in terms of F1 scores and areas under the receiver operating characteristic curves, the LSTM-based models in most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
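The eight benchmarked variants can be thought of as combinations of three choices, as in the following hedged sketch of a model factory; the input shape, layer sizes, and label count are assumptions rather than the study's exact configuration.

# Sketch of a factory for the eight benchmarked variants: {LSTM, GRU} x
# {unidirectional, bidirectional} x {with, without CNN front-end}.
import tensorflow as tf

def build_variant(cell="LSTM", bidirectional=False, with_cnn=False,
                  timesteps=300, features=40, num_labels=4):
    rnn_cls = tf.keras.layers.LSTM if cell == "LSTM" else tf.keras.layers.GRU
    layers = [tf.keras.layers.Input(shape=(timesteps, features))]
    if with_cnn:
        layers += [tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu"),
                   tf.keras.layers.MaxPooling1D(2)]
    rnn = rnn_cls(64, return_sequences=True)
    if bidirectional:
        rnn = tf.keras.layers.Bidirectional(rnn)
    layers += [rnn,
               tf.keras.layers.TimeDistributed(
                   tf.keras.layers.Dense(num_labels, activation="sigmoid"))]
    return tf.keras.Sequential(layers)

model = build_variant(cell="GRU", bidirectional=True, with_cnn=True)
model.summary()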
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Bi, Xiao-ying, Bo Li, Wen-long Lu und Xin-zhi Zhou. „Daily runoff forecasting based on data-augmented neural network model“. Journal of Hydroinformatics 22, Nr. 4 (16.05.2020): 900–915. http://dx.doi.org/10.2166/hydro.2020.017.

Der volle Inhalt der Quelle
Annotation:
Accurate daily runoff prediction plays an important role in the management and utilization of water resources. In order to improve the accuracy of prediction, this paper proposes a deep neural network (CAGANet) composed of a convolutional layer, an attention mechanism, a gated recurrent unit (GRU) neural network, and an autoregressive (AR) model. Given that the daily runoff sequence is abrupt and unstable, it is difficult for either a single model or a combined model to obtain high-precision daily runoff predictions directly. Therefore, this paper uses a linear interpolation method to enhance the stability of the hydrological data and applies the augmented data to the CAGANet model, the support vector machine (SVM) model, the long short-term memory (LSTM) neural network and the attention-mechanism-based LSTM model (AM-LSTM). The comparison results show that among the four models based on data augmentation, the CAGANet model proposed in this paper has the best prediction accuracy; its Nash–Sutcliffe efficiency can reach 0.993. Therefore, the CAGANet model based on data augmentation is a feasible daily runoff forecasting scheme.
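The linear interpolation augmentation mentioned above can be sketched as follows, assuming that interpolated points are simply inserted between consecutive daily observations (the paper's exact augmentation scheme may differ).

# Sketch of augmenting a daily runoff series by linear interpolation, i.e.
# inserting interpolated points between consecutive daily observations.
import numpy as np

runoff = np.array([12.0, 15.0, 40.0, 22.0, 18.0])   # placeholder daily values

def augment_by_interpolation(series, points_between=1):
    """Return a denser series with `points_between` interpolated samples
    inserted between every pair of consecutive observations."""
    n = len(series)
    old_x = np.arange(n)
    new_x = np.linspace(0, n - 1, (n - 1) * (points_between + 1) + 1)
    return np.interp(new_x, old_x, series)

print(augment_by_interpolation(runoff, points_between=1))
# [12.  13.5 15.  27.5 40.  31.  22.  20.  18. ]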
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Jiang, Ming-xin, Chao Deng, Zhi-geng Pan, Lan-fang Wang und Xing Sun. „Multiobject Tracking in Videos Based on LSTM and Deep Reinforcement Learning“. Complexity 2018 (19.11.2018): 1–12. http://dx.doi.org/10.1155/2018/4695890.

Der volle Inhalt der Quelle
Annotation:
Multiple-object tracking is a challenging issue in the computer vision community. In this paper, we propose a multiobject tracking algorithm for videos based on long short-term memory (LSTM) and deep reinforcement learning. Firstly, the multiple objects are detected by the object detector YOLO V2. Secondly, the problem of single-object tracking is considered as a Markov decision process (MDP), since this setting provides a formal strategy to model an agent that makes sequential decisions. The single-object tracker is composed of a network that includes a CNN followed by an LSTM unit. Each tracker, regarded as an agent, is trained by utilizing deep reinforcement learning. Finally, we conduct data association using an LSTM for each frame between the results of the object detector and the results of the single-object trackers. The experimental results show that our tracker achieves better performance than other state-of-the-art methods. Multiple targets can be tracked steadily even when frequent occlusions, similar appearances, and scale changes occur.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Narkhede, Parag, Rahee Walambe, Shashi Poddar und Ketan Kotecha. „Incremental learning of LSTM framework for sensor fusion in attitude estimation“. PeerJ Computer Science 7 (04.08.2021): e662. http://dx.doi.org/10.7717/peerj-cs.662.

Der volle Inhalt der Quelle
Annotation:
This paper presents a novel method for attitude estimation of an object in 3D space by incremental learning of a Long Short-Term Memory (LSTM) network. Gyroscopes, accelerometers, and magnetometers are a few of the widely used sensors in attitude estimation applications. Traditionally, multi-sensor fusion methods such as the Extended Kalman Filter and the Complementary Filter are employed to fuse the measurements from these sensors. However, these methods exhibit limitations in accounting for the uncertainty, unpredictability, and dynamic nature of motion in real-world situations. In this paper, the inertial sensor data are fed to an LSTM network, which is then updated incrementally to incorporate the dynamic changes in motion occurring at run time. The robustness and efficiency of the proposed framework are demonstrated on a dataset collected from a commercially available inertial measurement unit. The proposed framework offers a significant improvement in the results compared to the traditional method, even in the case of a highly dynamic environment. The LSTM framework-based attitude estimation approach can be deployed on a standard AI-supported processing module for real-time applications.
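A minimal sketch of such incremental updating, assuming a Keras LSTM that is first trained offline and then warm-started with short fit calls on each newly arriving batch of inertial windows; shapes, channel counts, and the update schedule are placeholders.

# Sketch of incrementally updating an LSTM attitude estimator as new inertial
# measurement windows arrive; shapes and the update schedule are assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 9)),    # 50-sample window, 9 inertial channels
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(3),                # e.g. roll, pitch, yaw
])
model.compile(optimizer="adam", loss="mse")

# Initial offline training on historical data (placeholder arrays).
X_hist = np.random.rand(2000, 50, 9).astype("float32")
y_hist = np.random.rand(2000, 3).astype("float32")
model.fit(X_hist, y_hist, epochs=5, verbose=0)

# Incremental updates: a short fit on each new batch keeps the weights current.
for _ in range(10):                          # stand-in for the run-time stream
    X_new = np.random.rand(64, 50, 9).astype("float32")
    y_new = np.random.rand(64, 3).astype("float32")
    model.fit(X_new, y_new, epochs=1, verbose=0)   # warm-start update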
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Munir, Hafiz Shahbaz, Shengbing Ren, Mubashar Mustafa, Chaudry Naeem Siddique und Shazib Qayyum. „Attention based GRU-LSTM for software defect prediction“. PLOS ONE 16, Nr. 3 (04.03.2021): e0247444. http://dx.doi.org/10.1371/journal.pone.0247444.

Der volle Inhalt der Quelle
Annotation:
Software defect prediction (SDP) can be used to produce reliable, high-quality software. Current SDP is practiced on program components at coarse granularity (such as the file level, class level, or function level), which cannot accurately predict failures. To solve this problem, we propose a new framework called DP-AGL, which uses attention-based GRU-LSTM for statement-level defect prediction. By using clang to build an abstract syntax tree (AST), we define a set of 32 statement-level metrics. We label each statement, then construct a three-dimensional vector and feed it into an automatic learning model that uses a gated recurrent unit (GRU) with a long short-term memory (LSTM). In addition, an attention mechanism is used to generate important features and improve accuracy. To verify our experiments, we selected 119,989 C/C++ programs in Code4Bench. The benchmark covers various programs and variant sets written by thousands of programmers. As an evaluation standard, compared with the state-of-the-art method, the recall, precision, accuracy, and F1 measure of our well-trained DP-AGL under normal conditions increased by 1%, 4%, 5%, and 2%, respectively.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Park, Jinwan, Jungsik Jeong und Youngsoo Park. „Ship Trajectory Prediction Based on Bi-LSTM Using Spectral-Clustered AIS Data“. Journal of Marine Science and Engineering 9, Nr. 9 (21.09.2021): 1037. http://dx.doi.org/10.3390/jmse9091037.

Der volle Inhalt der Quelle
Annotation:
According to the statistics of maritime accidents, most collision accidents have been caused by human factors. In an encounter situation, predicting the ship's trajectory is a good way to discern the intention of the other ship. This paper proposes a methodology for predicting the ship's trajectory that can be used in an intelligent collision avoidance algorithm at sea. To improve the prediction performance, density-based spatial clustering of applications with noise (DBSCAN) was used to recognize the pattern of the ship trajectory. Since DBSCAN is a clustering algorithm based on the density of data points, it has limitations in clustering trajectories with nonlinear curves. Thus, we applied the spectral clustering method, which can reflect the similarity between individual trajectories. The similarity was measured by the longest common subsequence (LCSS) distance. Based on the clustering results, the prediction model of the ship trajectory was developed using the bidirectional long short-term memory (Bi-LSTM). Moreover, the performance of the proposed model was compared with that of the long short-term memory (LSTM) model and the gated recurrent unit (GRU) model. The input data were obtained by preprocessing techniques such as filtering, grouping, and interpolation of the automatic identification system (AIS) data. As a result of the experiment, the prediction accuracy of the Bi-LSTM was found to be the highest compared to that of the LSTM and GRU.
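The LCSS distance used as the similarity measure can be sketched with a standard dynamic program, as below; the matching threshold eps and the toy trajectories are assumptions.

# Sketch of the longest common subsequence (LCSS) distance between two
# trajectories; the spatial matching threshold eps is an assumed parameter.
import numpy as np

def lcss_length(traj_a, traj_b, eps=0.1):
    """Dynamic-programming LCSS length; two points match if closer than eps."""
    n, m = len(traj_a), len(traj_b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(traj_a[i - 1] - traj_b[j - 1]) < eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m]

def lcss_distance(traj_a, traj_b, eps=0.1):
    """1 - normalized LCSS similarity, usable as input to spectral clustering."""
    return 1.0 - lcss_length(traj_a, traj_b, eps) / min(len(traj_a), len(traj_b))

a = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.2]])
b = np.array([[0.0, 0.05], [0.12, 0.1], [0.5, 0.5]])
print(lcss_distance(a, b))   # 0.333...: two of three points match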
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Lim, Se-Min, Hyeong-Cheol Oh, Jaein Kim, Juwon Lee und Jooyoung Park. „LSTM-Guided Coaching Assistant for Table Tennis Practice“. Sensors 18, Nr. 12 (23.11.2018): 4112. http://dx.doi.org/10.3390/s18124112.

Der volle Inhalt der Quelle
Annotation:
Recently, wearable devices have become a prominent health care application domain by incorporating a growing number of sensors and adopting smart machine learning technologies. One closely related topic is the strategy of combining wearable device technology with skill assessment, which can be used in wearable device apps for coaching and/or personal training. Particularly pertinent to skill assessment based on high-dimensional time series data from wearable sensors is classifying whether a player is an expert or a beginner, which skills the player is exercising, and extracting low-dimensional representations useful for coaching. In this paper, we present a deep learning-based coaching assistant method which can provide useful information to support table tennis practice. Our method uses a combination of LSTM (Long Short-Term Memory) with a deep state space model and probabilistic inference. More precisely, we use the expressive power of LSTM when handling high-dimensional time series data, and a state space model with probabilistic inference to extract low-dimensional latent representations useful for coaching. Experimental results show that our method can yield promising results for characterizing high-dimensional time series patterns and for providing useful information when working with wearable IMU (Inertial Measurement Unit) sensors for table tennis coaching.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Yang, Guang, HwaMin Lee und Giyeol Lee. „A Hybrid Deep Learning Model to Forecast Particulate Matter Concentration Levels in Seoul, South Korea“. Atmosphere 11, Nr. 4 (31.03.2020): 348. http://dx.doi.org/10.3390/atmos11040348.

Der volle Inhalt der Quelle
Annotation:
Both long- and short-term exposure to high concentrations of airborne particulate matter (PM) severely affect human health. Many countries now regulate PM concentrations. Early-warning systems based on PM concentration levels are urgently required to allow countermeasures that reduce harm and loss. Previous studies have sought to establish accurate, efficient predictive models. Many machine-learning methods are used for air pollution forecasting. The long short-term memory and gated recurrent unit methods, typical deep-learning methods, reliably predict PM levels, albeit with some limitations. In this paper, the authors propose novel hybrid models that combine the strengths of two types of deep learning methods. Moreover, the authors compare hybrid deep-learning methods (convolutional neural network (CNN)-long short-term memory (LSTM) and CNN-gated recurrent unit (GRU)) with several stand-alone methods (LSTM, GRU) in terms of predicting PM concentrations at 39 stations in Seoul. Hourly air pollution data and meteorological data from January 2015 to December 2018 were used to train these models. The results of the experiment confirmed that the proposed prediction model could predict the PM concentrations for the next 7 days. Hybrid models outperformed single models in five randomly selected areas, with the lowest root mean square error (RMSE) and mean absolute error (MAE) values for both PM10 and PM2.5. For PM10 prediction in Gangnam, the RMSE is 1.688 and the MAE is 1.161. Among the hybrid models, the CNN-GRU predicted PM10 better for all selected stations, while the CNN-LSTM model performed better on predicting PM2.5.
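A hedged sketch of a CNN-GRU style hybrid of the kind compared above (a Conv1D front end extracting local patterns, a GRU modeling the sequence), together with the RMSE/MAE evaluation used in the paper; shapes and the synthetic data are placeholders.

# Sketch of a CNN-GRU hybrid for PM concentration forecasting with RMSE/MAE
# evaluation; shapes, features, and data are illustrative assumptions.
import numpy as np
import tensorflow as tf

timesteps, features = 24, 10      # e.g. 24 hourly steps of pollution + weather data
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, features)),
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(1),     # next PM10 (or PM2.5) value
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(500, timesteps, features).astype("float32")
y = np.random.rand(500, 1).astype("float32")
model.fit(X, y, epochs=2, verbose=0)

pred = model.predict(X, verbose=0)
rmse = np.sqrt(np.mean((pred - y) ** 2))
mae = np.mean(np.abs(pred - y))
print(f"RMSE={rmse:.3f}, MAE={mae:.3f}")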
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Chhetri, Manoj, Sudhanshu Kumar, Partha Pratim Roy und Byung-Gyu Kim. „Deep BLSTM-GRU Model for Monthly Rainfall Prediction: A Case Study of Simtokha, Bhutan“. Remote Sensing 12, Nr. 19 (28.09.2020): 3174. http://dx.doi.org/10.3390/rs12193174.

Der volle Inhalt der Quelle
Annotation:
Rainfall prediction is an important task because many people depend on it, especially in the agriculture sector. Prediction is difficult and even more complex due to the dynamic nature of rainfall. In this study, we carry out monthly rainfall prediction over Simtokha, a region in Thimphu, the capital of Bhutan. The rainfall data were obtained from the National Center of Hydrology and Meteorology Department (NCHM) of Bhutan. We study the predictive capability of Linear Regression, Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bidirectional Long Short Term Memory (BLSTM) based on the parameters recorded by the automatic weather station in the region. Furthermore, this paper proposes a BLSTM-GRU based model which outperforms the existing machine and deep learning models. Of the six existing models under study, LSTM recorded the best Mean Square Error (MSE) score of 0.0128. The proposed BLSTM-GRU model outperformed LSTM by 41.1% with an MSE score of 0.0075. The experimental results are encouraging and suggest that the proposed model can achieve a lower MSE in rainfall prediction systems.
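A hedged sketch of a BLSTM-GRU style stack (a bidirectional LSTM layer feeding a GRU layer) for monthly rainfall regression, evaluated with MSE; layer sizes, input shape, and synthetic data are assumptions rather than the configuration reported by the authors.

# Sketch of a BLSTM-GRU stack for monthly rainfall prediction; shapes and
# synthetic data are placeholders.
import numpy as np
import tensorflow as tf

timesteps, features = 12, 6     # e.g. 12 past months of weather-station parameters
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, features)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),   # next month's rainfall
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(300, timesteps, features).astype("float32")
y = np.random.rand(300, 1).astype("float32")
model.fit(X, y, epochs=2, verbose=0)
print("MSE:", model.evaluate(X, y, verbose=0))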
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Zafar, Rehman, Ba Hau Vu, Munir Husein und Il-Yop Chung. „Day-Ahead Solar Irradiance Forecasting Using Hybrid Recurrent Neural Network with Weather Classification for Power System Scheduling“. Applied Sciences 11, Nr. 15 (22.07.2021): 6738. http://dx.doi.org/10.3390/app11156738.

Der volle Inhalt der Quelle
Annotation:
At present, power-system planning and management faces the major challenge of integrating renewable energy resources (RESs) due to their intermittent nature. To address this problem, a highly accurate renewable energy generation forecasting system is needed for day-ahead power generation scheduling. Day-ahead solar irradiance (SI) forecasting has various applications for system operators and market agents, such as unit commitment, reserve management, and bidding in the day-ahead market. To this end, a hybrid recurrent neural network is presented herein that uses the long short-term memory recurrent neural network (LSTM-RNN) approach to forecast day-ahead SI. In this approach, k-means clustering is first used to classify each day as either sunny or cloudy. Then, an LSTM-RNN is used to learn the uncertainty and variability for each type of cluster separately to predict the SI with better accuracy. Exogenous features such as the dry-bulb temperature, dew point temperature, and relative humidity are used to train the models. Results show that the proposed hybrid model performs better than a feed-forward neural network (FFNN), a support vector machine (SVM), a conventional LSTM-RNN, and a persistence model.
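The two-stage approach (k-means labels each day as sunny or cloudy, then a separate LSTM is trained per cluster) might look roughly like the following sketch; the clustering features, shapes, and synthetic data are assumptions, not the paper's setup.

# Sketch of k-means weather classification followed by one LSTM per cluster.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

n_days, hours, features = 365, 24, 4     # irradiance, temperature, dew point, humidity
days = np.random.rand(n_days, hours, features).astype("float32")
next_day_irradiance = np.random.rand(n_days, hours).astype("float32")

# Cluster on a simple per-day summary statistic (an assumed choice).
day_summary = days.reshape(n_days, -1).mean(axis=1, keepdims=True)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(day_summary)

def make_lstm():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(hours, features)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(hours),    # hourly irradiance for the next day
    ])

models = {}
for c in (0, 1):                         # one model per weather cluster
    idx = clusters == c
    model = make_lstm()
    model.compile(optimizer="adam", loss="mse")
    model.fit(days[idx], next_day_irradiance[idx], epochs=2, verbose=0)
    models[c] = model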
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Lee, Dongwon, Minji Choi und Joohyun Lee. „Prediction of Head Movement in 360-Degree Videos Using Attention Model“. Sensors 21, Nr. 11 (25.05.2021): 3678. http://dx.doi.org/10.3390/s21113678.

Der volle Inhalt der Quelle
Annotation:
In this paper, we propose a prediction algorithm, a combination of Long Short-Term Memory (LSTM) and an attention model, based on machine learning models to predict the vision coordinates when watching 360-degree videos in a Virtual Reality (VR) or Augmented Reality (AR) system. Predicting the vision coordinates during video streaming is important when the network condition is degraded. However, traditional prediction models such as the Moving Average (MA) and Autoregressive Moving Average (ARMA) are linear, so they cannot capture nonlinear relationships. Therefore, machine learning models based on deep learning have recently been used for nonlinear predictions. We use the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural network methods, which originate from Recurrent Neural Networks (RNNs), to predict the head position in 360-degree videos, and we adopt the attention model on top of the LSTM to produce more accurate results. We also compare the performance of the proposed model with other machine learning models such as the Multi-Layer Perceptron (MLP) and RNN, using the root mean squared error (RMSE) between predicted and real coordinates. We demonstrate that our model can predict the vision coordinates more accurately than the other models on various videos.
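A minimal sketch of an LSTM with attention over its hidden states for coordinate regression; this uses a generic dot-product self-attention and placeholder data, and is not necessarily the exact attention mechanism of the paper.

# Sketch of an LSTM followed by dot-product attention over its hidden states
# for predicting the next vision coordinates; sizes and data are assumptions.
import numpy as np
import tensorflow as tf

timesteps, features = 30, 2       # past (yaw, pitch) coordinates, assumed

inputs = tf.keras.Input(shape=(timesteps, features))
hidden = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)    # hidden state per step
attended = tf.keras.layers.Attention()([hidden, hidden])            # dot-product attention
context = tf.keras.layers.GlobalAveragePooling1D()(attended)        # pool attended states
outputs = tf.keras.layers.Dense(features)(context)                  # next coordinates
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, timesteps, features).astype("float32")
y = np.random.rand(256, features).astype("float32")
model.fit(X, y, epochs=2, verbose=0)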
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Mao, Ning, Jiangning Xu, Jingshu Li und Hongyang He. „A LSTM-RNN-Based Fiber Optic Gyroscope Drift Compensation“. Mathematical Problems in Engineering 2021 (15.07.2021): 1–10. http://dx.doi.org/10.1155/2021/1636001.

Der volle Inhalt der Quelle
Annotation:
A fiber optic gyroscope (FOG) inertial measurement unit (IMU), containing three orthogonal gyroscopes and three orthogonal accelerometers, has been widely utilized in positioning and navigation in the military and aerospace fields due to its simple structure, small size, and high accuracy. However, noise such as temperature drift reduces the accuracy of the FOG, which in turn affects the resolution accuracy of the IMU. In order to reduce the FOG drift and improve the navigation accuracy, a long short-term memory recurrent neural network (LSTM-RNN) model is established, and a real-time acquisition method for the temperature change rate based on a moving average is proposed. In addition, for comparative analysis, a backpropagation (BP) neural network model, CART-Bagging, a classification and regression tree (CART) model, and an online support vector machine regression (Online-SVR) model are established to filter the FOG outputs. Numerical simulation based on field test data in the range of -20°C to 50°C is employed to verify the effectiveness and superiority of the LSTM-RNN model. The results indicate that the LSTM-RNN model has better compensation accuracy and stability, making it suitable for online compensation. The proposed solution can be applied in the military and aerospace fields.
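The moving-average based acquisition of the temperature change rate can be sketched as follows; the window length and sampling period are assumptions.

# Sketch of a moving-average based temperature change rate: smooth the raw
# temperature with a sliding mean, then differentiate the smoothed signal.
import numpy as np

temperature = 25 + 0.01 * np.arange(1000) + 0.2 * np.random.randn(1000)  # degC, placeholder
dt = 0.01                      # sampling period in seconds (assumed)
window = 50                    # moving-average window length (assumed)

kernel = np.ones(window) / window
smoothed = np.convolve(temperature, kernel, mode="valid")   # moving average
change_rate = np.diff(smoothed) / dt                        # degC per second

print("latest temperature change rate:", change_rate[-1])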
APA, Harvard, Vancouver, ISO und andere Zitierweisen
