Academic literature on the topic 'Long Short-term Memory (LSTM) Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Long Short-term Memory (LSTM) Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Long Short-term Memory (LSTM) Networks"

1

Singh, Arjun, Shashi Kant Dargar, Amit Gupta, Ashish Kumar, Atul Kumar Srivastava, Mitali Srivastava, Pradeep Kumar Tiwari, and Mohammad Aman Ullah. "Evolving Long Short-Term Memory Network-Based Text Classification." Computational Intelligence and Neuroscience 2022 (February 21, 2022): 1–11. http://dx.doi.org/10.1155/2022/4725639.

Full text
Abstract:
Recently, long short-term memory (LSTM) networks have been extensively utilized for text classification. Unlike feed-forward neural networks, they have feedback connections and can therefore learn long-term dependencies. However, LSTM networks suffer from a parameter-tuning problem: the initial and control parameters of an LSTM are generally selected on a trial-and-error basis. Therefore, in this paper, an evolving LSTM (ELSTM) network is proposed. A multiobjective genetic algorithm (MOGA) is used to optimize the architecture and weights of the LSTM. The proposed model is tested on a well-known factory-reports dataset. Extensive analyses are performed to evaluate the performance of the proposed ELSTM network. From the comparative analysis, it is found that the proposed ELSTM network outperforms the competitive models.
APA, Harvard, Vancouver, ISO, and other styles
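The evolutionary tuning loop described in this abstract can be sketched in miniature. The genome fields (hidden units, dropout) and the toy fitness function below are invented for illustration; the paper optimizes real LSTM architectures and weights with a multiobjective GA, which this single-objective sketch does not attempt to reproduce:

```python
import random

random.seed(0)

# Toy stand-in for "train an LSTM and score it": rewards genomes near an
# arbitrary optimum (64 hidden units, 0.2 dropout). Purely illustrative.
def fitness(genome):
    hidden, dropout = genome
    return -abs(hidden - 64) - 100 * abs(dropout - 0.2)

def mutate(genome):
    hidden, dropout = genome
    return (max(8, hidden + random.randint(-16, 16)),
            min(0.9, max(0.0, dropout + random.uniform(-0.1, 0.1))))

def evolve(pop_size=20, generations=30):
    pop = [(random.randint(8, 255), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                  # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```

With the toy fitness above, `evolve()` drifts toward the (64, 0.2) optimum; a multiobjective version would instead maintain a Pareto front of non-dominated genomes.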
2

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (November 1, 1997): 1735–80. http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Full text
Abstract:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
APA, Harvard, Vancouver, ISO, and other styles
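The gating mechanism this abstract introduces (multiplicative gates regulating a constant-error cell) can be illustrated with a minimal single-unit forward step. Scalar weights are used for readability; real LSTMs use learned weight matrices over vectors:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a single LSTM unit with scalar weights."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate value
    c = f * c_prev + i * g   # additive cell update: the "error carousel"
    h = o * math.tanh(c)     # gated output exposes part of the cell state
    return h, c
```

With all parameters zero, every gate sits at 0.5 and the candidate is 0, so the cell state simply halves at each step: `lstm_step(1.0, 0.0, 2.0, zeros)` returns a cell state of 1.0.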
3

Xu, Wei, Yanan Jiang, Xiaoli Zhang, Yi Li, Run Zhang, and Guangtao Fu. "Using long short-term memory networks for river flow prediction." Hydrology Research 51, no. 6 (October 5, 2020): 1358–76. http://dx.doi.org/10.2166/nh.2020.026.

Full text
Abstract:
Abstract Deep learning has made significant advances in methodologies and practical applications in recent years. However, there is a lack of understanding on how the long short-term memory (LSTM) networks perform in river flow prediction. This paper assesses the performance of LSTM networks to understand the impact of network structures and parameters on river flow predictions. Two river basins with different characteristics, i.e., Hun river and Upper Yangtze river basins, are used as case studies for the 10-day average flow predictions and the daily flow predictions, respectively. The use of the fully connected layer with the activation function before the LSTM cell layer can substantially reduce learning efficiency. On the contrary, non-linear transformation following the LSTM cells is required to improve learning efficiency due to the different magnitudes of precipitation and flow. The batch size and the number of LSTM cells are sensitive parameters and should be carefully tuned to achieve a balance between learning efficiency and stability. Compared with several hydrological models, the LSTM network achieves good performance in terms of three evaluation criteria, i.e., coefficient of determination, Nash–Sutcliffe Efficiency and relative error, which demonstrates its powerful capacity in learning non-linear and complex processes in hydrological modelling.
APA, Harvard, Vancouver, ISO, and other styles
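One of the three evaluation criteria named above, Nash–Sutcliffe Efficiency, has a compact definition worth stating: one minus the ratio of squared prediction error to the variance of the observations, so a perfect simulation scores 1 and predicting the observed mean scores 0. A minimal sketch, not the authors' code:

```python
def nse(observed, simulated):
    """Nash–Sutcliffe Efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var
```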
4

Song, Tianyu, Wei Ding, Jian Wu, Haixing Liu, Huicheng Zhou, and Jinggang Chu. "Flash Flood Forecasting Based on Long Short-Term Memory Networks." Water 12, no. 1 (December 29, 2019): 109. http://dx.doi.org/10.3390/w12010109.

Full text
Abstract:
Flash floods occur frequently and are widely distributed in mountainous areas because of complex geographic and geomorphic conditions and various climate types. Effective flash flood forecasting with useful lead times remains a challenge due to its high burstiness and short response time. Recently, machine learning has led to substantial changes across many areas of study. In hydrology, the advent of novel machine learning methods has started to encourage novel applications or substantially improve old ones. This study aims to establish a discharge forecasting model based on Long Short-Term Memory (LSTM) networks for flash flood forecasting in mountainous catchments. The proposed LSTM flood forecasting (LSTM-FF) model is composed of T multivariate single-step LSTM networks and takes spatial and temporal dynamics information of observed and forecast rainfall and early discharge as inputs. The case study in Anhe revealed that the proposed models can effectively predict flash floods; in particular, the qualified rates (the ratio of the number of qualified events to the total number of flood events) of large flood events are above 94.7% at 1–5 h lead time and range from 84.2% to 89.5% at 6–10 h lead time. For large flood simulation, the small flood events can help the LSTM-FF model explore a better rainfall-runoff relationship. The impact analysis of weights in the LSTM network structures shows that the discharge input plays a more obvious role in the 1-h LSTM network and that the effect decreases with lead time. Meanwhile, at adjacent lead times, the LSTM networks explored a similar relationship between input and output. The study provides a new approach for flash flood forecasting, and the highly accurate forecasts help in preparing for and mitigating disasters.
APA, Harvard, Vancouver, ISO, and other styles
5

Shankar, Sonali, P. Vigneswara Ilavarasan, Sushil Punia, and Surya Prakash Singh. "Forecasting container throughput with long short-term memory networks." Industrial Management & Data Systems 120, no. 3 (December 4, 2019): 425–41. http://dx.doi.org/10.1108/imds-07-2019-0370.

Full text
Abstract:
Purpose: Better forecasting always leads to better management and planning of operations. Container throughput data are complex and often exhibit multiple seasonality, which makes them difficult to forecast accurately. The purpose of this paper is to forecast container throughput using deep learning methods and benchmark their performance against traditional time-series methods.
Design/methodology/approach: In this study, long short-term memory (LSTM) networks are implemented to forecast container throughput. The container throughput data of the Port of Singapore are used for empirical analysis. The forecasting performance of the LSTM model is compared with seven time-series forecasting methods, namely autoregressive integrated moving average (ARIMA), simple exponential smoothing, Holt–Winters, error-trend-seasonality, trigonometric regressors (TBATS), neural network (NN) and ARIMA + NN. The relative error matrix is used to analyze the performance of the different models with respect to bias, accuracy and uncertainty.
Findings: The results showed that LSTM outperformed all other benchmark methods. From a statistical perspective, the Diebold–Mariano test is also conducted to further substantiate the better forecasting performance of LSTM over the other methods.
Originality/value: The proposed study contributes to the literature on container throughput forecasting and adds value to the supply chain theory of forecasting. Second, this study explains the architecture of the deep-learning-based LSTM method and discusses in detail the steps to implement it.
APA, Harvard, Vancouver, ISO, and other styles
6

Nguyen, Sang Thi Thanh, and Bao Duy Tran. "Long Short-Term Memory Based Movie Recommendation." Science & Technology Development Journal - Engineering and Technology 3, SI1 (September 19, 2020): SI1–SI9. http://dx.doi.org/10.32508/stdjet.v3isi1.540.

Full text
Abstract:
Recommender systems (RS) have become a fundamental tool for helping users make decisions among millions of choices in the era of Big Data. They bring huge benefits to many business models around the world due to their effectiveness in reaching target customers. Many recommendation models and techniques have been proposed, and many have achieved impressive results. Collaborative filtering and content-based filtering methods are common, but both have disadvantages. A critical one is that they focus only on a user's long-term static preference while ignoring his or her short-term transactional patterns, which results in missing the user's preference shifts over time. In this case, the user's intent at a certain time point may easily be submerged by his or her historical decision behaviors, leading to unreliable recommendations. To deal with this issue, a session of user interactions with the items can be considered as a solution. In this study, Long Short-Term Memory (LSTM) networks are analyzed for application to user sessions in a recommender system. The MovieLens dataset is considered as a case study of movie recommender systems. This dataset is preprocessed to extract user-movie sessions for user behavior discovery and for making movie recommendations to users. Several experiments have been carried out to evaluate the LSTM-based movie recommender system. In the experiments, the LSTM networks are compared with a similar deep learning method, Recurrent Neural Networks (RNN), and a baseline machine learning method, collaborative filtering using item-based nearest neighbors (item-KNN). It has been found that the LSTM networks can be improved by optimizing their hyperparameters and outperform the other methods when predicting the next movies users are interested in.
APA, Harvard, Vancouver, ISO, and other styles
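The preprocessing step described above, extracting user-movie sessions from a viewing log, can be sketched as follows. The 30-minute inactivity gap and the (user, movie, timestamp) event format are assumptions for illustration; the thesis does not specify them here:

```python
def sessionize(events, gap=30 * 60):
    """Split per-user event streams into sessions on inactivity gaps.

    events: (user_id, item_id, unix_time) triples, sorted by time.
    Returns a list of sessions, each a list of item_ids.
    """
    last_seen = {}   # user_id -> (session index, time of last event)
    sessions = []
    for user, item, t in events:
        prev = last_seen.get(user)
        if prev is None or t - prev[1] > gap:
            sessions.append([item])              # start a new session
            last_seen[user] = (len(sessions) - 1, t)
        else:
            sessions[prev[0]].append(item)       # continue the open session
            last_seen[user] = (prev[0], t)
    return sessions
```

Each session then becomes one training sequence for the sequence model, with the next movie in the session as the prediction target.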
7

Tra, Nguyen Ngoc, Ho Phuoc Tien, Nguyen Thanh Dat, and Nguyen Ngoc Vu. "VN-INDEX TREND PREDICTION USING LONG-SHORT TERM MEMORY NEURAL NETWORKS." Journal of Science and Technology: Issue on Information and Communications Technology 17, no. 12.2 (December 9, 2019): 61. http://dx.doi.org/10.31130/ict-ud.2019.94.

Full text
Abstract:
The paper attempts to forecast the future trend of the Vietnam index (VN-Index) by using long short-term memory (LSTM) networks. In particular, an LSTM-based neural network is employed to study the temporal dependence in time-series data of past and present VN-Index values. Empirical forecasting results show that LSTM-based stock trend prediction offers an accuracy of about 60%, which outperforms moving-average-based prediction.
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Chen, Bingchun Liu, Jiali Chen, and Xiaogang Yu. "Air Quality Index Prediction Based on a Long Short-Term Memory Artificial Neural Network Model." Journal of Computers (電腦學刊) 34, no. 2 (April 2023): 69–79. http://dx.doi.org/10.53106/199115992023043402006.

Full text
Abstract:
Air pollution has become one of the important challenges restricting the sustainable development of cities. Therefore, it is of great significance to achieve accurate prediction of the Air Quality Index (AQI). Long Short-Term Memory (LSTM) is a deep learning method suitable for learning time-series data. Considering its superiority in processing time-series data, this study established an LSTM forecasting model suitable for air quality index forecasting. First, we focus on optimizing the feature metrics of the model input through Information Gain (IG). Second, the prediction results of the LSTM model are compared with other machine learning models. At the same time, selective experiments on the time step of the LSTM model ensure that model validation works properly. The results show that, compared with other machine learning models, the LSTM model constructed in this paper is more suitable for prediction of the air quality index.
APA, Harvard, Vancouver, ISO, and other styles
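The Information Gain screening step mentioned above has a standard definition for discrete features: the entropy of the labels minus the label entropy conditioned on the feature. A minimal sketch, assuming features have already been discretized (the paper's exact binning is not given here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """H(labels) - H(labels | feature) for a discrete feature."""
    n = len(labels)
    conditional = 0.0
    for value in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == value]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional
```

A feature that perfectly separates the labels scores the full label entropy; an uninformative one scores 0, which is the basis for ranking input features.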
9

Wang, Jianyong, Lei Zhang, Yuanyuan Chen, and Zhang Yi. "A New Delay Connection for Long Short-Term Memory Networks." International Journal of Neural Systems 28, no. 06 (June 24, 2018): 1750061. http://dx.doi.org/10.1142/s0129065717500617.

Full text
Abstract:
Connections play a crucial role in neural network (NN) learning because they determine how information flows in NNs. Suitable connection mechanisms may extensively enlarge the learning capability and reduce the negative effect of gradient problems. In this paper, a new delay connection is proposed for the Long Short-Term Memory (LSTM) unit to develop a more sophisticated recurrent unit, called Delay Connected LSTM (DCLSTM). The proposed delay connection brings two main merits to DCLSTM while introducing no extra parameters. First, it allows the output of the DCLSTM unit to maintain long-term memory, which is absent in the LSTM unit. Second, the proposed delay connection helps to bridge the error signals to previous time steps and allows them to be back-propagated across several layers without vanishing too quickly. To evaluate the performance of the proposed delay connections, the DCLSTM model, with and without peephole connections, was compared with four state-of-the-art recurrent models on two sequence classification tasks. The DCLSTM model outperformed the other models with higher accuracy and F1-score. Furthermore, networks with multiple stacked DCLSTM layers and the standard LSTM layer were evaluated on Penn Treebank (PTB) language modeling. The DCLSTM model achieved lower perplexity (PPL) and bits-per-character (BPC) than the standard LSTM model. The experiments demonstrate that the learning of the DCLSTM models is more stable and efficient.
APA, Harvard, Vancouver, ISO, and other styles
10

Lees, Thomas, Steven Reece, Frederik Kratzert, Daniel Klotz, Martin Gauch, Jens De Bruijn, Reetik Kumar Sahu, Peter Greve, Louise Slater, and Simon J. Dadson. "Hydrological concept formation inside long short-term memory (LSTM) networks." Hydrology and Earth System Sciences 26, no. 12 (June 20, 2022): 3079–101. http://dx.doi.org/10.5194/hess-26-3079-2022.

Full text
Abstract:
Abstract. Neural networks have been shown to be extremely effective rainfall-runoff models, where the river discharge is predicted from meteorological inputs. However, the question remains: what have these models learned? Is it possible to extract information about the learned relationships that map inputs to outputs, and do these mappings represent known hydrological concepts? Small-scale experiments have demonstrated that the internal states of long short-term memory networks (LSTMs), a particular neural network architecture predisposed to hydrological modelling, can be interpreted. By extracting the tensors which represent the learned translation from inputs (precipitation, temperature, and potential evapotranspiration) to outputs (discharge), this research seeks to understand what information the LSTM captures about the hydrological system. We assess the hypothesis that the LSTM replicates real-world processes and that we can extract information about these processes from the internal states of the LSTM. We examine the cell-state vector, which represents the memory of the LSTM, and explore the ways in which the LSTM learns to reproduce stores of water, such as soil moisture and snow cover. We use a simple regression approach to map the LSTM state vector to our target stores (soil moisture and snow). Good correlations (R² > 0.8) between the probe outputs and the target variables of interest provide evidence that the LSTM contains information that reflects known hydrological processes comparable with the concept of variable-capacity soil moisture stores. The implications of this study are threefold: (1) LSTMs reproduce known hydrological processes. (2) While conceptual models have theoretical assumptions embedded in the model a priori, the LSTM derives these from the data. These learned representations are interpretable by scientists. (3) LSTMs can be used to gain an estimate of intermediate stores of water such as soil moisture.
While machine learning interpretability is still a nascent field and our approach reflects a simple technique for exploring what the model has learned, the results are robust to different initial conditions and to a variety of benchmarking experiments. We therefore argue that deep learning approaches can be used to advance our scientific goals as well as our predictive goals.
APA, Harvard, Vancouver, ISO, and other styles
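The 'simple regression approach' the abstract describes, probing the cell state for hydrological stores, can be illustrated in one dimension: fit a least-squares line from a cell-state component to the target store and report R². The real study regresses the full state vector; this scalar version only shows the idea:

```python
def linear_probe(states, target):
    """Fit target ~ a*state + b by least squares; return (a, b, R^2)."""
    n = len(states)
    mean_x = sum(states) / n
    mean_y = sum(target) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(states, target))
    var = sum((x - mean_x) ** 2 for x in states)
    a = cov / var
    b = mean_y - a * mean_x
    residual = sum((y - (a * x + b)) ** 2 for x, y in zip(states, target))
    total = sum((y - mean_y) ** 2 for y in target)
    return a, b, 1.0 - residual / total
```

An R² above 0.8 between the probe output and, say, soil moisture is the kind of evidence the study uses to argue the cell state encodes that store.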

Dissertations / Theses on the topic "Long Short-term Memory (LSTM) Networks"

1

Shojaee, Ali B. S. "Bacteria Growth Modeling using Long-Short-Term-Memory Networks." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617105038908441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Paschou, Michail. "ASIC implementation of LSTM neural network algorithm." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254290.

Full text
Abstract:
LSTM neural networks have been used for speech recognition, image recognition and other artificial intelligence applications for many years. Most applications perform the LSTM algorithm and the required calculations on cloud computers. Off-line solutions include the use of FPGAs and GPUs, but the most promising solutions are ASIC accelerators designed for this purpose only. This report presents an ASIC design capable of performing the multiple iterations of the LSTM algorithm on a unidirectional neural network architecture without peepholes. The proposed design provides arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel or serial solutions depending on which is optimal in every case. The implications of these decisions are discussed in detail in the report. The design process is described in detail, and an evaluation of the design is presented to measure the accuracy and error of the design output. This thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a Fully Connected layer and a Softmax layer which can perform classification of data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment has been designed that is capable of performing simulations and evaluating the design output by comparing it with results produced from performing the same operations with 64-bit floating-point precision on a SystemVerilog testbench, measuring the error encountered. The results concerning the accuracy and the error margin of the design output are presented in this thesis report. The design went through Logic and Physical synthesis and successfully resulted in a functional netlist for every tested configuration.
Timing, area and power measurements on the generated netlists of various configurations of the design show consistency and are reported in this report.
APA, Harvard, Vancouver, ISO, and other styles
3

Corni, Gabriele. "A study on the applicability of Long Short-Term Memory networks to industrial OCR." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
This thesis summarises a research-oriented study of the applicability of Long Short-Term Memory recurrent neural networks (LSTMs) to industrial Optical Character Recognition (OCR) problems. Traditionally solved through Convolutional Neural Network-based approaches (CNNs), the reported work aims to detect the OCR aspects that could be improved by exploiting recurrent patterns among pixel intensities and to speed up the overall character detection process. Accuracy, speed and complexity act as the main key performance indicators. After studying the core deep learning foundations, the best training technique for this problem was selected first, and the best parametrisation next. A set of tests eventually validated the preciseness of this solution. The final results highlight how difficult it is to perform better than CNNs where OCR tasks are concerned. Nonetheless, with favourable background conditions, the proposed LSTM-based approach is capable of reaching a comparable accuracy rate in (potentially) less time.
APA, Harvard, Vancouver, ISO, and other styles
4

Nawaz, Sabeen. "Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281282.

Full text
Abstract:
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model which can be used in a fraud detection system with transactions and payments that are unlabeled. The proposed model is a Long Short-Term Memory in an auto-encoder decoder network (LSTM-AED), which is trained and tested on transformed data. The data is transformed by reducing it to principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a huge loss of information occurs in the pre-processing stages.
APA, Harvard, Vancouver, ISO, and other styles
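One common way to turn the reconstruction errors of an autoencoder like the LSTM-AED into anomaly flags is a mean-plus-k-standard-deviations cutoff. The thesis does not spell out its flagging rule in this abstract, so the sketch below is an illustrative convention, not the author's method:

```python
def flag_anomalies(errors, k=3.0):
    """Flag reconstruction errors more than k standard deviations above the mean."""
    n = len(errors)
    mean = sum(errors) / n
    std = (sum((e - mean) ** 2 for e in errors) / n) ** 0.5
    cutoff = mean + k * std
    return [e > cutoff for e in errors]
```

Sequences the autoencoder reconstructs poorly, relative to the bulk of the data, are the candidates for fraudulent activity.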
5

Valluru, Aravind-Deshikh. "Realization of LSTM Based Cognitive Radio Network." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538697/.

Full text
Abstract:
This thesis presents the realization of an intelligent cognitive radio network that uses long short term memory (LSTM) neural network for sensing and predicting the spectrum activity at each instant of time. The simulation is done using Python and GNU Radio. The implementation is done using GNU Radio and Universal Software Radio Peripherals (USRP). Simulation results show that the confidence factor of opportunistic users not causing interference to licensed users of the spectrum is 98.75%. The implementation results demonstrate high reliability of the LSTM based cognitive radio network.
APA, Harvard, Vancouver, ISO, and other styles
6

Verner, Alexander. "LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data." Diss., NSUWorks, 2019. https://nsuworks.nova.edu/gscis_etd/1074.

Full text
Abstract:
In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods of anomaly detection in sensor data are based on domain-specific feature engineering. A typical approach is to use domain knowledge to analyze sensor data and manually create statistics-based features, which are then used to train the machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback due to the fact that feature extraction is usually labor intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective in automated extraction of abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs for short, are a special kind of recurrent neural networks that are capable of learning long-term dependencies. These networks have proved to be especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data of sensors that measure blood glucose levels. A large number of time-series sequences was created based on a genuine medical diabetes dataset. Anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and classes corresponding to each of the six anomaly types.
As a control, the accuracy of detection and classification of the LSTM was compared to that of four traditional machine learning classifiers: support vector machines, Random Forests, naive Bayes, and shallow neural networks. The performance of all the classifiers was evaluated based on nine metrics: precision, recall, and the F1-score, each measured in micro, macro and weighted perspective. While the traditional models were trained on vectors of features, derived from the raw data, that were based on knowledge of common sources of anomaly, the LSTM was trained on raw time-series data. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers by achieving 99% accuracy in all 9 metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be made in a fully automated way. This study, therefore, finds LSTM networks an effective solution to anomaly detection and classification in sensor data.
APA, Harvard, Vancouver, ISO, and other styles
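The 'micro, macro and weighted' perspectives named above differ in how per-class scores are averaged: for single-label classification, micro-averaged precision equals plain accuracy, while macro averages per-class precision with equal weight. A minimal sketch (not the dissertation's evaluation code):

```python
from collections import Counter

def precision_scores(true, pred, classes):
    """Return (micro, macro) precision for single-label predictions."""
    tp, fp = Counter(), Counter()
    for t, p in zip(true, pred):
        (tp if t == p else fp)[p] += 1          # count hits/misses per predicted class
    per_class = [tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
                 for c in classes]
    micro = sum(tp.values()) / len(true)         # == accuracy for single-label data
    macro = sum(per_class) / len(classes)        # unweighted mean over classes
    return micro, macro
```

Macro averaging exposes weakness on rare classes (such as an infrequent anomaly type) that micro averaging can hide.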
7

Svanberg, John. "Anomaly detection for non-recurring traffic congestions using Long short-term memory networks (LSTMs)." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234465.

Full text
Abstract:
In this master thesis, we implement a two-step anomaly detection mechanism for non-recurrent traffic congestions with data collected from public transport buses in Stockholm. We investigate the use of machine learning to model time-series data with LSTMs and evaluate the results with a baseline prediction model. The anomaly detection algorithm embodies both collective and contextual expressivity, meaning it is capable of finding collections of delayed buses and also takes the temporality of the data into account. Results show that the anomaly detection performance benefits from the lower prediction errors produced by the LSTM network. The intersection rule significantly decreases the number of false positives while maintaining the true positive rate at a sufficient level. The performance of the anomaly detection algorithm has been found to depend on the road segment it is applied to; some segments have been identified as particularly hard, whereas others are easier. The best-performing setup of the anomaly detection mechanism had a true positive rate of 84.3% and a true negative rate of 96.0%.
APA, Harvard, Vancouver, ISO, and other styles
8

Hernandez, Villapol Jorge Luis. "Spectrum Analysis and Prediction Using Long Short Term Memory Neural Networks and Cognitive Radios." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062877/.

Full text
Abstract:
Wireless communication is now the standard and de facto mode of communication. Cognitive radios are able to interpret the frequency spectrum and adapt. The aim of this work is to predict whether a frequency channel is going to be busy or free at a specific time in the future. To do this, the problem is modeled as a time series problem where the usage of a channel is treated as a sequence of busy and free slots in a fixed time frame. For this time series problem, we implement a state-of-the-art machine learning technique for time series and sequence prediction: long short-term memory neural networks, or LSTMs.
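The framing in this abstract, channel usage as a sequence of busy/free slots, boils down to building (past window, next slot) pairs for a sequence model. A minimal sketch with a hypothetical `make_windows` helper:

```python
def make_windows(occupancy, length):
    # Slice a binary busy(1)/free(0) channel series into
    # (past-window, next-slot) training pairs for sequence prediction.
    return [(occupancy[i:i + length], occupancy[i + length])
            for i in range(len(occupancy) - length)]
```

Each pair would feed an LSTM as one input sequence and its prediction target.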
APA, Harvard, Vancouver, ISO, and other styles
9

van, der Westhuizen Jos. "Biological applications, visualizations, and extensions of the long short-term memory network." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287476.

Full text
Abstract:
Sequences are ubiquitous in the domain of biology. One of the current best machine learning techniques for analysing sequences is the long short-term memory (LSTM) network. Owing to significant barriers to adoption in biology, focussed efforts are required to realize the use of LSTMs in practice. Thus, the aim of this work is to improve the state of LSTMs for biology, and we focus on biological tasks pertaining to physiological signals, peripheral neural signals, and molecules. This goal drives the three subplots in this thesis: biological applications, visualizations, and extensions. We start by demonstrating the utility of LSTMs for biological applications. On two new physiological-signal datasets, LSTMs were found to outperform hidden Markov models. LSTM-based models, implemented by other researchers, also constituted the majority of the best performing approaches on publicly available medical datasets. However, even if these models achieve the best performance on such datasets, their adoption will be limited if they fail to indicate when they are likely mistaken. Thus, we demonstrate on medical data that it is straightforward to use LSTMs in a Bayesian framework via dropout, providing model predictions with corresponding uncertainty estimates. Another dataset used to show the utility of LSTMs is a novel collection of peripheral neural signals. Manual labelling of this dataset is prohibitively expensive, and as a remedy, we propose a sequence-to-sequence model regularized by Wasserstein adversarial networks. The results indicate that the proposed model is able to infer which actions a subject performed based on its peripheral neural signals with reasonable accuracy. As these LSTMs achieve state-of-the-art performance on many biological datasets, one of the main concerns for their practical adoption is their interpretability. 
We explore various visualization techniques for LSTMs applied to continuous-valued medical time series and find that learning a mask to optimally delete information in the input provides useful interpretations. Furthermore, we find that the input features looked for by the LSTM align well with medical theory. For many applications, extensions of the LSTM can provide enhanced suitability. One such application is drug discovery -- another important aspect of biology. Deep learning can aid drug discovery by means of generative models, but they often produce invalid molecules due to their complex discrete structures. As a solution, we propose a version of active learning that leverages the sequential nature of the LSTM along with its Bayesian capabilities. This approach enables efficient learning of the grammar that governs the generation of discrete-valued sequences such as molecules. Efficiency is achieved by reducing the search space from one over sequences to one over the set of possible elements at each time step -- a much smaller space. Having demonstrated the suitability of LSTMs for biological applications, we seek a hardware efficient implementation. Given the success of the gated recurrent unit (GRU), which has two gates, a natural question is whether any of the LSTM gates are redundant. Research has shown that the forget gate is one of the most important gates in the LSTM. Hence, we propose a forget-gate-only version of the LSTM -- the JANET -- which outperforms both the LSTM and some of the best contemporary models on benchmark datasets, while also reducing computational cost.
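The JANET variant mentioned at the end keeps only the forget gate: a single gate interpolates between the previous state and a tanh candidate update. A scalar, single-unit sketch follows; the weights and inputs are illustrative, not taken from the thesis.

```python
import math

def janet_step(h, x, wf, uf, bf, wg, ug, bg):
    # Forget-gate-only recurrent update (JANET-style, one scalar unit):
    # the single gate f interpolates between the old state h and a
    # tanh candidate g, replacing the LSTM's input and output gates.
    f = 1.0 / (1.0 + math.exp(-(wf * x + uf * h + bf)))
    g = math.tanh(wg * x + ug * h + bg)
    return f * h + (1.0 - f) * g
```

A large positive forget bias preserves the state almost exactly; a large negative one overwrites it with the candidate.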
APA, Harvard, Vancouver, ISO, and other styles
10

Racette, Olsén Michael. "Electrocardiographic deviation detection : Using long short-term memory recurrent neural networks to detect deviations within electrocardiographic records." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76411.

Full text
Abstract:
Artificial neural networks have been gaining attention in recent years due to their impressive ability to map out complex nonlinear relations within data. In this report, an attempt is made to use a long short-term memory neural network for detecting anomalies within electrocardiographic records. The hypothesis is that if a neural network is trained on records of normal ECGs to predict future ECG sequences, it is expected to have trouble predicting abnormalities not previously seen in the training data. Three different LSTM model configurations were trained using records from the MIT-BIH Arrhythmia database. Afterwards, the models were evaluated for their ability to predict previously unseen normal and anomalous sections. This was done by measuring the mean squared error of each prediction and the uncertainty of overlapping predictions. The preliminary results of this study demonstrate that recurrent neural networks with the use of LSTM units are capable of detecting anomalies.
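The two evaluation signals in this abstract, per-segment mean squared error and the spread of overlapping predictions for the same time step, are simple to state. A sketch with hypothetical helper names:

```python
def prediction_error(predicted, actual):
    # Mean squared error of a predicted ECG segment against the recording;
    # large error on an unseen shape suggests an anomaly.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

def overlap_uncertainty(overlapping):
    # Variance across several overlapping predictions of the same time
    # step: disagreement between them signals an unfamiliar waveform.
    mean = sum(overlapping) / len(overlapping)
    return sum((p - mean) ** 2 for p in overlapping) / len(overlapping)
```

Agreeing overlapping predictions give zero uncertainty; disagreeing ones do not.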
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Long Short-term Memory (LSTM) Networks"

1

Nobre, Anna C. (Kia), and M.-Marsel Mesulam. Large-scale Networks for Attentional Biases. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.035.

Full text
Abstract:
Selective attention is essential for all aspects of cognition. Using the paradigmatic case of visual spatial attention, we present a theoretical account proposing the flexible control of attention through coordinated activity across a large-scale network of brain areas. It reviews evidence supporting top-down control of visual spatial attention by a distributed network, and describes principles emerging from a network approach. Stepping beyond the paradigm of visual spatial attention, we consider attentional control mechanisms more broadly. The chapter suggests that top-down biasing mechanisms originate from multiple sources and can be of several types, carrying information about receptive-field properties such as spatial locations or features of items; but also carrying information about properties that are not easily mapped onto receptive fields, such as the meanings or timings of items. The chapter considers how selective biases can operate on multiple slates of information processing, not restricted to the immediate sensory-motor stream, but also operating within internalized, short-term and long-term memory representations. Selective attention appears to be a general property of information processing systems rather than an independent domain within our cognitive make-up.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Long Short-term Memory (LSTM) Networks"

1

Hvitfeldt, Emil, and Julia Silge. "Long short-term memory (LSTM) networks." In Supervised Machine Learning for Text Analysis in R, 273–302. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003093459-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salem, Fathi M. "Gated RNN: The Long Short-Term Memory (LSTM) RNN." In Recurrent Neural Networks, 71–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nandam, Srinivasa Rao, Adouthu Vamshi, and Inapanuri Sucharitha. "CAN Intrusion Detection Using Long Short-Term Memory (LSTM)." In Lecture Notes in Networks and Systems, 295–302. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1976-3_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Barone, Ben, David Coar, Ashley Shafer, Jinhong K. Guo, Brad Galego, and James Allen. "Interpreting Pilot Behavior Using Long Short-Term Memory (LSTM) Models." In Lecture Notes in Networks and Systems, 60–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80624-8_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wüthrich, Mario V., and Michael Merz. "Recurrent Neural Networks." In Springer Actuarial, 381–406. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_8.

Full text
Abstract:
This chapter considers recurrent neural (RN) networks. These are special network architectures that are useful for time-series modeling, e.g., applied to time-series forecasting. We study the most popular RN networks which are the long short-term memory (LSTM) networks and the gated recurrent unit (GRU) networks. We apply these networks to mortality forecasting.
APA, Harvard, Vancouver, ISO, and other styles
6

Anwarsha, A., and T. Narendiranath Babu. "Intelligent Fault Detection of Rotating Machinery Using Long-Short-Term Memory (LSTM) Network." In Lecture Notes in Networks and Systems, 76–83. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20429-6_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sai Charan, P. V., T. Gireesh Kumar, and P. Mohan Anand. "Advance Persistent Threat Detection Using Long Short Term Memory (LSTM) Neural Networks." In Emerging Technologies in Computer Engineering: Microservices in Big Data Analytics, 45–54. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8300-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Nian, Xiangguang Dai, M. A. Ehsan, and Tolessa Deksissa. "Development of a Drought Prediction System Based on Long Short-Term Memory Networks (LSTM)." In Advances in Neural Networks – ISNN 2020, 142–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64221-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Myakal, Sabhapathy, Rajarshi Pal, and Nekuri Naveen. "A Novel Pixel Value Predictor Using Long Short Term Memory (LSTM) Network." In Lecture Notes in Computer Science, 324–35. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36402-0_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jahidul Islam Razin, Md, Md Abdul Karim, M. F. Mridha, S. M. Rafiuddin Rifat, and Tahira Alam. "A Long Short-Term Memory (LSTM) Model for Business Sentiment Analysis Based on Recurrent Neural Network." In Sustainable Communication Networks and Application, 1–15. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8677-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Long Short-term Memory (LSTM) Networks"

1

Lin, Yanbin, Dongliang Duan, Xueming Hong, Xiang Cheng, Liuqing Yang, and Shuguang Cui. "Very-Short-Term Solar Forecasting with Long Short-Term Memory (LSTM) Network." In 2020 Asia Energy and Electrical Engineering Symposium (AEEES). IEEE, 2020. http://dx.doi.org/10.1109/aeees48850.2020.9121512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Ting, Gehui Shen, and Zhi-Hong Deng. "Leap-LSTM: Enhancing Long Short-Term Memory for Text Categorization." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/697.

Full text
Abstract:
Recurrent Neural Networks (RNNs) are widely used in the field of natural language processing (NLP), ranging from text categorization to question answering and machine translation. However, RNNs generally read the whole text from beginning to end or vice versa sometimes, which makes it inefficient to process long texts. When reading a long document for a categorization task, such as topic categorization, large quantities of words are irrelevant and can be skipped. To this end, we propose Leap-LSTM, an LSTM-enhanced model which dynamically leaps between words while reading texts. At each step, we utilize several feature encoders to extract messages from preceding texts, following texts and the current word, and then determine whether to skip the current word. We evaluate Leap-LSTM on several text categorization tasks: sentiment analysis, news categorization, ontology classification and topic classification, with five benchmark data sets. The experimental results show that our model reads faster and predicts better than standard LSTM. Compared to previous models which can also skip words, our model achieves better trade-offs between performance and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
3

Pérez, José, Rafael Baez, Jose Terrazas, Arturo Rodríguez, Daniel Villanueva, Olac Fuentes, Vinod Kumar, Brandon Paez, and Abdiel Cruz. "Physics-Informed Long-Short Term Memory Neural Network Performance on Holloman High-Speed Test Track Sled Study." In ASME 2022 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/fedsm2022-86953.

Full text
Abstract:
Physics Informed Neural Networks (PINNs) incorporate known physics equations into a network to reduce training time and increase accuracy. Traditional PINNs approaches are based on dense networks that do not consider the fact that simulations are a type of sequential data. Long-Short Term Memory (LSTM) networks are a modified version of Recurrent Neural Networks (RNNs) which are used to analyze sequential datasets. We propose a Physics Informed LSTM network that leverages the power of LSTMs for sequential datasets that also incorporates the governing physics equations of 2D incompressible Navier-Stokes fluid to analyze fluid flow around a stationary geometry resembling the water braking mechanism at the Holloman High-Speed Test Track. Currently, simulation data to analyze the fluid flow of the braking mechanism is generated through ANSYS and is costly, taking several days to generate a single simulation. By incorporating physics equations, our proposed Physics-Informed LSTM network was able to predict the last 20% of a simulation given the first 80% within a small margin of error in a shorter amount of time than a non-informed LSTM. This demonstrates the potential that physics-informed networks that leverage sequential information may have at speeding up computational fluid dynamics simulations and serves as a first step towards adapting PINNs for more advanced network architectures.
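The physics-informed training objective the authors describe, data misfit plus a penalty on violations of the governing equations, has the generic shape below. The Navier-Stokes residuals themselves are out of scope here; `residuals` stands in for whatever the physics terms evaluate to, and the function is an illustrative sketch, not the paper's code.

```python
def physics_informed_loss(pred, target, residuals, weight):
    # Composite PINN-style objective: mean squared data misfit plus a
    # weighted mean squared penalty on the governing-equation residuals.
    data = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    physics = sum(r ** 2 for r in residuals) / len(residuals)
    return data + weight * physics
```

When both the data fit and the physics residuals are exact, the loss is zero; either violation raises it.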
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Shubhendu Kumar, Ruoyu Yang, Amir Behjat, Rahul Rai, Souma Chowdhury, and Ion Matei. "PI-LSTM: Physics-Infused Long Short-Term Memory Network." In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE, 2019. http://dx.doi.org/10.1109/icmla.2019.00015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

C. Lemos Neto, Álvaro, Rodrigo A. Coelho, and Cristiano L. de Castro. "An Incremental Learning approach using Long Short-Term Memory Neural Networks." In Congresso Brasileiro de Automática - 2020. sbabra, 2020. http://dx.doi.org/10.48011/asba.v2i1.1491.

Full text
Abstract:
Due to Big Data and the Internet of Things, machine learning algorithms targeted specifically at modeling evolving data streams have gained attention from both academia and industry. Many incremental learning models have been successful in doing so, but most of them have one thing in common: they are complex variants of batch learning algorithms, which is a problem since, in a streaming setting, less complexity and more performance is desired. This paper proposes the Incremental LSTM model, a variant of the original LSTM with minor changes, that can tackle evolving data stream problems such as concept drift and the elasticity-plasticity dilemma without needing either a dedicated drift detector or a memory management system. It obtained strong results showing that it reacts quickly to concept drift and is also robust to noisy data.
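The core idea of learning one example at a time and adapting after a concept drift can be shown without an LSTM at all: even an exponentially forgetting running estimate, updated per example and then discarding it, recovers after the stream's distribution shifts. A toy sketch, not the Incremental LSTM itself:

```python
def run_stream(stream, rate):
    # One-pass, example-at-a-time learning: record the error before each
    # update, then nudge the estimate toward the new example and discard it.
    # The exponential forgetting lets the estimate track a concept drift.
    estimate, errors = 0.0, []
    for x in stream:
        errors.append(abs(x - estimate))
        estimate += rate * (x - estimate)
    return estimate, errors
```

On a stream that jumps from 0 to 10, the error spikes at the drift point and then decays as the estimate re-converges.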
APA, Harvard, Vancouver, ISO, and other styles
6

Tongta, Anawat, and Komkrit Chooruang. "Long Short-Term Memory (LSTM) Neural Networks Applied to Energy Disaggregation." In 2020 8th International Electrical Engineering Congress (iEECON). IEEE, 2020. http://dx.doi.org/10.1109/ieecon48109.2020.229559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Yuzhen, and Fathi M. Salem. "Simplified gating in long short-term memory (LSTM) recurrent neural networks." In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2017. http://dx.doi.org/10.1109/mwscas.2017.8053244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yu, Wennian, Chris K. Mechefske, and Il Yong Kim. "Cutting Tool Wear Estimation Using a Genetic Algorithm Based Long Short-Term Memory Neural Network." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85253.

Full text
Abstract:
On-line cutting tool wear monitoring plays a critical role in industry automation and has the potential to significantly increase productivity and improve product quality. In this study, we employed the long short-term memory neural network as the decision model of the tool condition monitoring system to predict the amount of cutting tool wear. Compared with the traditional recurrent neural networks, the long short-term memory (LSTM) network can capture the long-term dependencies within a time series. To further decrease the training error and enhance the prediction performance of the network, a genetic algorithm (GA) is applied to find the initial values of the networks that minimize the objective (training error). The proposed methodology is applied on a publicly available milling data set. Comparisons of the prediction performance between the Elman network and the LSTM with and without using GA optimization proves that the GA based LSTM shows an enhanced prediction performance on this data set.
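The GA-for-initial-values idea in this abstract reduces to a generic evolutionary loop: score candidates by the objective, keep the fitter half, refill with mutated copies. A toy version minimizing a stand-in fitness function; in the paper, the fitness would be the LSTM's training error and each candidate a vector of initial network values.

```python
import random

def evolve(fitness, population, generations, rate, rng):
    # Toy genetic-algorithm loop (selection + Gaussian mutation):
    # keep the fitter half each generation, refill with mutated copies,
    # and return the best candidate found. Lower fitness is better.
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[:len(population) // 2]
        children = [[g + rng.gauss(0.0, rate) for g in parent]
                    for parent in survivors]
        population = survivors + children
    return min(population, key=fitness)
```

Because the best survivor is always carried over, the returned candidate is never worse than the best initial one.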
APA, Harvard, Vancouver, ISO, and other styles
9

Gaurav, Akshat, Varsha Arya, Kwok Tai Chui, Brij B. Gupta, Chang Choi, and O.-Joun Lee. "Long Short-Term Memory Network (LSTM) based Stock Price Prediction." In RACS '23: International Conference on Research in Adaptive and Convergent Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3599957.3606240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Affonso, Felipe, and Thiago Magela Rodrigues Dias. "Applying Recurrent Neural Networks with Long Short- Term Memory in Clustered Stocks." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4421.

Full text
Abstract:
Predicting the stock market is a widely studied field, whether due to curiosity about the behavior of financial assets or for financial purposes. Among these studies, the best techniques use neural networks as a prediction tool. More specifically, the best networks for this purpose are recurrent neural networks (RNNs), which provide an extra option when dealing with a sequence of values. However, most studies aim to predict the behavior of only a few stocks; this work therefore aims to predict the behavior of a large number of stocks. For this, similar stocks were grouped based on their correlation, and the K-means algorithm was then applied so that similar stocks were clustered. After this process, the long short-term memory (LSTM) network, a type of RNN, was used to predict the price of a given group of assets. Results showed that clustering stocks did not hurt the effectiveness of the network and that investors and portfolio managers can use it to simplify their daily tasks.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Long Short-term Memory (LSTM) Networks"

1

Cárdenas-Cárdenas, Julián Alonso, Deicy J. Cristiano-Botia, and Nicolás Martínez-Cortés. Colombian inflation forecast using Long Short-Term Memory approach. Banco de la República, June 2023. http://dx.doi.org/10.32468/be.1241.

Full text
Abstract:
We use long short-term memory (LSTM) neural networks, a deep learning technique, to forecast Colombian headline inflation one year ahead through two approaches. The first uses only information from the target variable, while the second incorporates additional information from some relevant variables. We apply rolling samples in the traditional neural network construction process, selecting the hyperparameters with criteria that minimize the forecast error. Our results show a better forecasting capacity of the network with information from additional variables, surpassing both the other LSTM application and ARIMA models optimized for forecasting (with and without explanatory variables). This improvement in forecasting accuracy is most pronounced over longer time horizons, specifically from the seventh month onwards.
APA, Harvard, Vancouver, ISO, and other styles
2

Ankel, Victoria, Stella Pantopoulou, Matthew Weathered, Darius Lisowski, Anthonie Cilliers, and Alexander Heifetz. One-Step Ahead Prediction of Thermal Mixing Tee Sensors with Long Short Term Memory (LSTM) Neural Networks. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1760289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Kaushal, and Yupeng Wei. Attention-Based Data Analytic Models for Traffic Flow Predictions. Mineta Transportation Institute, March 2023. http://dx.doi.org/10.31979/mti.2023.2211.

Full text
Abstract:
Traffic congestion causes Americans to lose millions of hours and dollars each year. In fact, 1.9 billion gallons of fuel are wasted each year due to traffic congestion, and each hour stuck in traffic costs about $21 in wasted time and fuel. The traffic congestion can be caused by various factors, such as bottlenecks, traffic incidents, bad weather, work zones, poor traffic signal timing, and special events. One key step to addressing traffic congestion and identifying its root cause is an accurate prediction of traffic flow. Accurate traffic flow prediction is also important for the successful deployment of smart transportation systems. It can help road users make better travel decisions to avoid traffic congestion areas so that passenger and freight movements can be optimized to improve the mobility of people and goods. Moreover, it can also help reduce carbon emissions and the risks of traffic incidents. Although numerous methods have been developed for traffic flow predictions, current methods have limitations in utilizing the most relevant part of traffic flow data and considering the correlation among the collected high-dimensional features. To address this issue, this project developed attention-based methodologies for traffic flow predictions. We propose the use of an attention-based deep learning model that incorporates the attention mechanism with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. This attention mechanism can calculate the importance level of traffic flow data and enable the model to consider the most relevant part of the data while making predictions, thus improving accuracy and reducing prediction duration.
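The attention mechanism this report pairs with LSTM and GRU networks assigns each time step a softmax-normalized importance weight and combines the steps accordingly. A minimal dot-product attention sketch over scalar values, illustrative rather than the project's model:

```python
import math

def attention(query, keys, values):
    # Dot-product attention: score each time step against the query,
    # softmax the scores into importance weights (numerically stabilized
    # by subtracting the max), and return the weighted sum of values.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = sum(w * v for w, v in zip(weights, values))
    return context, weights
```

The weights sum to one, and the step most aligned with the query dominates the combined context.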
APA, Harvard, Vancouver, ISO, and other styles
4

Ly, Racine, Fousseini Traore, and Khadim Dia. Forecasting commodity prices using long-short-term memory neural networks. Washington, DC: International Food Policy Research Institute, 2021. http://dx.doi.org/10.2499/p15738coll2.134265.

Full text
APA, Harvard, Vancouver, ISO, and other styles