A selection of scholarly literature on the topic "Long Short-Term Memory network (LSTM)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Long Short-Term Memory network (LSTM)".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these details are available in the metadata.

Journal articles on the topic "Long Short-Term Memory network (LSTM)":

1

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (November 1, 1997): 1735–80. http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Abstract:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
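For orientation, the gating mechanism this abstract describes is usually written today in the following standard modern form; note that this is not the exact notation of the 1997 paper, and the forget gate was only added to the architecture in later work:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate, a later addition)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(constant error carousel)}\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

The additive update of the cell state $c_t$ is what enforces the constant error flow the abstract refers to.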
2

Singh, Arjun, Shashi Kant Dargar, Amit Gupta, Ashish Kumar, Atul Kumar Srivastava, Mitali Srivastava, Pradeep Kumar Tiwari, and Mohammad Aman Ullah. "Evolving Long Short-Term Memory Network-Based Text Classification." Computational Intelligence and Neuroscience 2022 (February 21, 2022): 1–11. http://dx.doi.org/10.1155/2022/4725639.

Abstract:
Recently, long short-term memory (LSTM) networks have been extensively utilized for text classification. Compared to feed-forward neural networks, an LSTM has feedback connections and can therefore learn long-term dependencies. However, LSTM networks suffer from the parameter-tuning problem: initial and control parameters of an LSTM are generally selected on a trial-and-error basis. Therefore, in this paper, an evolving LSTM (ELSTM) network is proposed. A multiobjective genetic algorithm (MOGA) is used to optimize the architecture and weights of the LSTM. The proposed model is tested on a well-known factory reports dataset. Extensive analyses are performed to evaluate the performance of the proposed ELSTM network. From the comparative analysis, it is found that the ELSTM network outperforms the competitive models.
3

Wang, Chen, Bingchun Liu, Jiali Chen, and Xiaogang Yu. "Air Quality Index Prediction Based on a Long Short-Term Memory Artificial Neural Network Model." 電腦學刊 34, no. 2 (April 2023): 69–79. http://dx.doi.org/10.53106/199115992023043402006.

Abstract:
Air pollution has become one of the important challenges restricting the sustainable development of cities. Therefore, it is of great significance to achieve accurate prediction of the Air Quality Index (AQI). Long Short-Term Memory (LSTM) is a deep learning method suitable for learning from time series data. Considering its superiority in processing time series data, this study established an LSTM forecasting model suitable for air quality index forecasting. First, we focus on optimizing the feature metrics of the model input through Information Gain (IG). Second, the prediction results of the LSTM model are compared with those of other machine learning models. At the same time, selective experiments on the time step of the LSTM model ensure that model validation works properly. The results show that, compared with other machine learning models, the LSTM model constructed in this paper is more suitable for the prediction of the air quality index.
4

Liu, Chen. "Long short-term memory (LSTM)-based news classification model." PLOS ONE 19, no. 5 (May 30, 2024): e0301835. http://dx.doi.org/10.1371/journal.pone.0301835.

Abstract:
In this study, we used unidirectional and bidirectional long short-term memory (LSTM) deep learning networks for Chinese news classification and characterized the effects of contextual information on text classification, achieving a high level of accuracy. A Chinese glossary was created using jieba—a word segmentation tool—stop-word removal, and word frequency analysis. Next, word2vec was used to map the processed words into word vectors, creating a convenient lookup table for word vectors that could be used as feature inputs for the LSTM model. A bidirectional LSTM (BiLSTM) network was used for feature extraction from word vectors to facilitate the transfer of information in both the backward and forward directions to the hidden layer. Subsequently, an LSTM network was used to perform feature integration on all the outputs of the BiLSTM network, with the output from the last layer of the LSTM being treated as the mapping of the text into a feature vector. The output feature vectors were then connected to a fully connected layer to construct a feature classifier using the integrated features, finally classifying the news articles. The hyperparameters of the model were optimized based on the loss between the true and predicted values using the adaptive moment estimation (Adam) optimizer. Additionally, multiple dropout layers were added to the model to reduce overfitting. As text classification models for Chinese news articles, the Bi-LSTM and unidirectional LSTM models obtained f1-scores of 94.15% and 93.16%, respectively, with the former outperforming the latter in terms of feature extraction.
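As a rough illustration of the architecture this abstract describes (an embedding lookup feeding a BiLSTM, whose outputs are integrated by a unidirectional LSTM and a fully connected softmax classifier), a minimal Keras sketch follows; the vocabulary size, layer widths, dropout rate, and class count are illustrative assumptions, not the paper's settings:

```python
# Minimal sketch of an embedding -> BiLSTM -> LSTM -> dense softmax classifier;
# all sizes below are assumed values, not the paper's configuration.
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 50_000, 300, 10  # assumed values

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),                        # word2vec-style lookup table
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),  # context in both directions
    layers.LSTM(64),                                                # integrates BiLSTM outputs; final state = text vector
    layers.Dropout(0.5),                                            # dropout against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),                # fully connected classifier
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```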
5

Zhou, Chenze. "Long Short-term Memory Applied on Amazon's Stock Prediction." Highlights in Science, Engineering and Technology 34 (February 28, 2023): 71–76. http://dx.doi.org/10.54097/hset.v34i.5380.

Abstract:
More and more investors are paying attention to how to incorporate data mining technology into stock investment decisions as a result of the emergence of big data and the rapid expansion of financial markets. Machine learning can automatically and repeatedly apply complex mathematical calculations to big data, and do so faster. A machine model can analyze all the factors and indicators affecting the stock price and achieve high efficiency. Based on the Amazon stock price published on Kaggle, this paper adopts the Long Short-term Memory (LSTM) method for model training. The Keras package in the Python program is used to normalize the data. The Sequential model in Keras establishes a two-layer LSTM network and a three-layer LSTM network to compare and analyze the fitting effect of the model on stock prices. By calculating RMSE and RMPE, the study found that the stock price prediction accuracy of the two-layer LSTM is similar to that of the three-layer LSTM. In terms of F-measure and Accuracy, the three-layer LSTM model is significantly better than the two-layer model. In general, the LSTM model can accurately predict stock prices; investors can therefore anticipate the upward or downward trend of stock prices from the model's predictions and make corresponding decisions.
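The normalize, window, and stack workflow mentioned here is conventional; a minimal sketch under assumed sizes follows (the look-back window, unit counts, and the placeholder series are illustrative, not the paper's values):

```python
# Sketch of the normalize -> window -> stacked-LSTM workflow described above;
# window length and layer sizes are assumptions.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import layers, models

WINDOW = 60  # assumed look-back length in trading days

prices = np.random.rand(500, 1)                  # placeholder for the Kaggle price series
scaled = MinMaxScaler().fit_transform(prices)    # normalize to [0, 1]
X = np.stack([scaled[i:i + WINDOW] for i in range(len(scaled) - WINDOW)])
y = scaled[WINDOW:]                              # next-day (scaled) price targets

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(50, return_sequences=True),      # first LSTM layer passes sequences onward
    layers.LSTM(50),                             # second layer returns only the final state
    layers.Dense(1),                             # predicted next value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```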
6

Xu, Wei, Yanan Jiang, Xiaoli Zhang, Yi Li, Run Zhang, and Guangtao Fu. "Using long short-term memory networks for river flow prediction." Hydrology Research 51, no. 6 (October 5, 2020): 1358–76. http://dx.doi.org/10.2166/nh.2020.026.

Abstract:
Deep learning has made significant advances in methodologies and practical applications in recent years. However, there is a lack of understanding of how long short-term memory (LSTM) networks perform in river flow prediction. This paper assesses the performance of LSTM networks to understand the impact of network structures and parameters on river flow predictions. Two river basins with different characteristics, i.e., the Hun river and Upper Yangtze river basins, are used as case studies for 10-day average flow predictions and daily flow predictions, respectively. The use of a fully connected layer with an activation function before the LSTM cell layer can substantially reduce learning efficiency. On the contrary, non-linear transformation following the LSTM cells is required to improve learning efficiency due to the different magnitudes of precipitation and flow. The batch size and the number of LSTM cells are sensitive parameters and should be carefully tuned to achieve a balance between learning efficiency and stability. Compared with several hydrological models, the LSTM network achieves good performance in terms of three evaluation criteria, i.e., the coefficient of determination, the Nash–Sutcliffe Efficiency and relative error, which demonstrates its powerful capacity in learning non-linear and complex processes in hydrological modelling.
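Of the three criteria, the Nash–Sutcliffe Efficiency is the most hydrology-specific; it has a compact definition, sketched here:

```python
import numpy as np

def nash_sutcliffe_efficiency(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    A value of 1 is a perfect fit; 0 means no better than predicting the mean flow."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
```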
7

Kumar, Naresh, Jatin Bindra, Rajat Sharma, and Deepali Gupta. "Air Pollution Prediction Using Recurrent Neural Network, Long Short-Term Memory and Hybrid of Convolutional Neural Network and Long Short-Term Memory Models." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4580–84. http://dx.doi.org/10.1166/jctn.2020.9283.

Abstract:
Air pollution prediction was not an easy task a few years back. With increasing computation power and the wide availability of datasets, the air pollution prediction problem has been solved to some extent. Inspired by deep learning models, in this paper three techniques for air pollution prediction are proposed: a recurrent neural network (RNN), long short-term memory (LSTM), and a hybrid combination of a convolutional neural network (CNN) and LSTM. These models are tested by comparing MSE loss on air pollution test data from Belgium. The validation loss is 0.0045 for the RNN, 0.00441 for the LSTM, and 0.0049 for the CNN-LSTM. The losses on the testing dataset for these models are 0.00088, 0.00441 and 0.0049, respectively.
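For readers unfamiliar with the hybrid variant, a minimal Keras sketch of a CNN feeding an LSTM follows; the input length, filter counts, and unit sizes are assumptions, not the paper's configuration:

```python
# Sketch of a hybrid CNN-LSTM: Conv1D extracts local patterns from the pollutant
# series before the LSTM models longer-range dependencies; sizes are assumed.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 1)),                        # 48 hourly readings (assumed)
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(32),
    layers.Dense(1),                                    # next pollutant reading
])
model.compile(optimizer="adam", loss="mse")             # MSE loss, as compared in the abstract
```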
8

Song, Tianyu, Wei Ding, Jian Wu, Haixing Liu, Huicheng Zhou, and Jinggang Chu. "Flash Flood Forecasting Based on Long Short-Term Memory Networks." Water 12, no. 1 (December 29, 2019): 109. http://dx.doi.org/10.3390/w12010109.

Abstract:
Flash floods occur frequently and are widely distributed in mountainous areas because of complex geographic and geomorphic conditions and various climate types. Effective flash flood forecasting with useful lead times remains a challenge due to its high burstiness and short response time. Recently, machine learning has led to substantial changes across many areas of study. In hydrology, the advent of novel machine learning methods has started to encourage novel applications or substantially improve old ones. This study aims to establish a discharge forecasting model based on Long Short-Term Memory (LSTM) networks for flash flood forecasting in mountainous catchments. The proposed LSTM flood forecasting (LSTM-FF) model is composed of T multivariate single-step LSTM networks and takes spatial and temporal dynamics information of observed and forecast rainfall and early discharge as inputs. The case study in Anhe revealed that the proposed models can effectively predict flash floods: the qualified rates (the ratio of the number of qualified events to the total number of flood events) of large flood events are above 94.7% at 1–5 h lead time and range from 84.2% to 89.5% at 6–10 h lead time. For large flood simulation, the small flood events can help the LSTM-FF model explore a better rainfall-runoff relationship. The impact analysis of weights in the LSTM network structures shows that the discharge input plays a more obvious role in the 1-h LSTM network and that this effect decreases with lead time. Meanwhile, at adjacent lead times, the LSTM networks explored a similar relationship between input and output. The study provides a new approach for flash flood forecasting, and the highly accurate forecasts help prepare for and mitigate disasters.
9

Zoremsanga, Chawngthu, and Jamal Hussain. "An Evaluation of Bidirectional Long Short-Term Memory Model for Estimating Monthly Rainfall in India." Indian Journal Of Science And Technology 17, no. 18 (April 24, 2024): 1828–37. http://dx.doi.org/10.17485/ijst/v17i18.2505.

Abstract:
Objectives: Predicting the amount of rainfall is difficult due to its complexity and non-linearity. The objective of this study is to predict the average rainfall one month ahead using the all-India monthly average rainfall dataset from 1871 to 2016. Methods: This study proposed a Bidirectional Long Short-Term Memory (LSTM) model to predict the average monthly rainfall in India. The parameters of the models are determined using the grid search method. This study utilized the average monthly rainfall as an input, and the dataset consists of 1752 months of rainfall data prepared from thirty (30) meteorological sub-divisions in India. The model was compiled using the Mean Square Error (MSE) loss function and Adam optimizer. The models' performances were evaluated using statistical metrics such as Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). Findings: This study discovered that the proposed Bidirectional LSTM model achieved an RMSE of 240.79 and outperformed an existing Recurrent Neural Network (RNN), Vanilla LSTM and Stacked LSTM by 8%, 4% and 2% respectively. The study also finds that increasing the input time step and increasing the number of cells in the hidden layer enhanced the prediction performance of the proposed model, and the Bidirectional LSTM converges at a lower epoch compared to RNN and LSTM models. Novelty: This study applied the Bidirectional LSTM for the first time in predicting all-India monthly average rainfall and provides a new benchmark for this dataset. Keywords: Deep Learning, LSTM, Rainfall prediction, Stacked LSTM, Bidirectional LSTM
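The input pipeline implied here (slice the monthly series into fixed-length windows whose target is the following month's rainfall) can be sketched as follows; the 12-month window is an assumed choice, since the paper tunes the time step by grid search:

```python
import numpy as np

def make_windows(series, window=12):
    """Slice a 1-D monthly rainfall series into (samples, window, 1) model inputs
    whose targets are the rainfall of the following month."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

# e.g., 1752 months of all-India average rainfall -> 1740 training windows
X, y = make_windows(np.random.rand(1752), window=12)
```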
10

Muneer, Amgad, Rao Faizan Ali, Ahmed Almaghthawi, Shakirah Mohd Taib, Amal Alghamdi, and Ebrahim Abdulwasea Abdullah Ghaleb. "Short term residential load forecasting using long short-term memory recurrent neural network." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 5 (October 1, 2022): 5589. http://dx.doi.org/10.11591/ijece.v12i5.pp5589-5599.

Abstract:
Load forecasting plays an essential role in power system planning. The efficiency and reliability of the whole power system can be increased with proper planning and organization. Residential load forecasting is indispensable due to its increasing role in the smart grid environment. Nowadays, smart meters can be deployed at the residential level for collecting historical consumption data of residents. Although the employment of smart meters ensures large data availability, the inconsistency of load data makes it challenging and taxing to forecast accurately. Therefore, traditional forecasting techniques may not suffice. Instead, a deep learning forecasting network based on long short-term memory (LSTM) is proposed in this paper. The powerful nonlinear mapping capabilities of RNNs for time series, together with the LSTM's capacity to learn long sequences, make the approach effective. The proposed method is tested and validated on available real-world data sets. A comparison of the LSTM is then made with two traditionally available techniques, exponential smoothing and the auto-regressive integrated moving average (ARIMA) model. Real data from 12 houses over three months is used to evaluate and validate the performance of the load forecasts produced by the three techniques. The LSTM model achieved the best results due to its higher capability of memorizing large data in time-series-based predictions.
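The ARIMA baseline used in such comparisons is available off the shelf; a sketch with statsmodels (the order is an assumed placeholder, not the paper's fitted model):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arima_baseline(series, order=(2, 1, 2), steps=24):
    """Fit a classical ARIMA model and forecast `steps` intervals ahead,
    the kind of baseline the LSTM is compared against above."""
    fitted = ARIMA(np.asarray(series, dtype=float), order=order).fit()
    return fitted.forecast(steps=steps)
```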

Dissertations on the topic "Long Short-Term Memory network (LSTM)":

1

Valluru, Aravind-Deshikh. "Realization of LSTM Based Cognitive Radio Network." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538697/.

Abstract:
This thesis presents the realization of an intelligent cognitive radio network that uses long short term memory (LSTM) neural network for sensing and predicting the spectrum activity at each instant of time. The simulation is done using Python and GNU Radio. The implementation is done using GNU Radio and Universal Software Radio Peripherals (USRP). Simulation results show that the confidence factor of opportunistic users not causing interference to licensed users of the spectrum is 98.75%. The implementation results demonstrate high reliability of the LSTM based cognitive radio network.
2

Paschou, Michail. "ASIC implementation of LSTM neural network algorithm." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254290.

Abstract:
LSTM neural networks have been used for speech recognition, image recognition and other artificial intelligence applications for many years. Most applications perform the LSTM algorithm and the required calculations on cloud computers. Off-line solutions include the use of FPGAs and GPUs, but the most promising solutions include ASIC accelerators designed for this purpose only. This report presents an ASIC design capable of performing the multiple iterations of the LSTM algorithm on a unidirectional neural network architecture without peepholes. The proposed design provides arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel or serial solutions depending on which is optimal in every case. The implications of these decisions are discussed in detail in the report. The design process is described in detail, and an evaluation of the design is presented to measure the accuracy and error of the design output. This thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a Fully Connected layer and a Softmax layer which can perform classification of data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment has been designed which is capable of performing simulations and evaluating the design output by comparing it with results produced from performing the same operations with 64-bit floating-point precision on a SystemVerilog testbench, measuring the error encountered. The results concerning the accuracy and the error margin of the design output are presented in this thesis report. The design went through logic and physical synthesis and successfully resulted in a functional netlist for every tested configuration. Timing, area and power measurements on the generated netlists of various configurations of the design show consistency and are reported here.
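The 16-bit fixed-point format described (5 integer and 11 fractional bits, often written Q5.11) amounts to scaling by 2^11 with saturation; a small sketch, assuming the sign is carried in the integer bits:

```python
def to_fixed_q5_11(x: float) -> int:
    """Quantize to a 16-bit fixed-point value with 11 fractional bits,
    saturating at the representable range (sign assumed to live in the
    integer bits, so the range is [-16.0, 16.0 - 2**-11])."""
    scaled = round(x * (1 << 11))
    return max(-(1 << 15), min((1 << 15) - 1, scaled))

def from_fixed_q5_11(q: int) -> float:
    """Recover the real value represented by a Q5.11 integer."""
    return q / (1 << 11)
```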
3

Shojaee, Ali B. S. "Bacteria Growth Modeling using Long-Short-Term-Memory Networks." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617105038908441.

4

van der Westhuizen, Jos. "Biological applications, visualizations, and extensions of the long short-term memory network." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287476.

Abstract:
Sequences are ubiquitous in the domain of biology. One of the current best machine learning techniques for analysing sequences is the long short-term memory (LSTM) network. Owing to significant barriers to adoption in biology, focussed efforts are required to realize the use of LSTMs in practice. Thus, the aim of this work is to improve the state of LSTMs for biology, and we focus on biological tasks pertaining to physiological signals, peripheral neural signals, and molecules. This goal drives the three subplots in this thesis: biological applications, visualizations, and extensions. We start by demonstrating the utility of LSTMs for biological applications. On two new physiological-signal datasets, LSTMs were found to outperform hidden Markov models. LSTM-based models, implemented by other researchers, also constituted the majority of the best performing approaches on publicly available medical datasets. However, even if these models achieve the best performance on such datasets, their adoption will be limited if they fail to indicate when they are likely mistaken. Thus, we demonstrate on medical data that it is straightforward to use LSTMs in a Bayesian framework via dropout, providing model predictions with corresponding uncertainty estimates. Another dataset used to show the utility of LSTMs is a novel collection of peripheral neural signals. Manual labelling of this dataset is prohibitively expensive, and as a remedy, we propose a sequence-to-sequence model regularized by Wasserstein adversarial networks. The results indicate that the proposed model is able to infer which actions a subject performed based on its peripheral neural signals with reasonable accuracy. As these LSTMs achieve state-of-the-art performance on many biological datasets, one of the main concerns for their practical adoption is their interpretability. We explore various visualization techniques for LSTMs applied to continuous-valued medical time series and find that learning a mask to optimally delete information in the input provides useful interpretations. Furthermore, we find that the input features looked for by the LSTM align well with medical theory. For many applications, extensions of the LSTM can provide enhanced suitability. One such application is drug discovery -- another important aspect of biology. Deep learning can aid drug discovery by means of generative models, but they often produce invalid molecules due to their complex discrete structures. As a solution, we propose a version of active learning that leverages the sequential nature of the LSTM along with its Bayesian capabilities. This approach enables efficient learning of the grammar that governs the generation of discrete-valued sequences such as molecules. Efficiency is achieved by reducing the search space from one over sequences to one over the set of possible elements at each time step -- a much smaller space. Having demonstrated the suitability of LSTMs for biological applications, we seek a hardware efficient implementation. Given the success of the gated recurrent unit (GRU), which has two gates, a natural question is whether any of the LSTM gates are redundant. Research has shown that the forget gate is one of the most important gates in the LSTM. Hence, we propose a forget-gate-only version of the LSTM -- the JANET -- which outperforms both the LSTM and some of the best contemporary models on benchmark datasets, while also reducing computational cost.
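The forget-gate-only cell proposed here (the JANET) can be sketched roughly as below; the sketch omits details such as the special bias initialization used in the published model, and the weight shapes are left to the caller:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def janet_step(x_t, h_prev, Wf, Uf, bf, Wc, Uc, bc):
    """One step of a forget-gate-only recurrent cell: a single gate f
    interpolates between the previous state and the new candidate."""
    f = sigmoid(Wf @ x_t + Uf @ h_prev + bf)          # the only gate
    candidate = np.tanh(Wc @ x_t + Uc @ h_prev + bc)
    return f * h_prev + (1.0 - f) * candidate         # h_t doubles as the cell state
```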
5

Gustafsson, Anton, and Julian Sjödal. "Energy Predictions of Multiple Buildings using Bi-directional Long short-term Memory." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43552.

Abstract:
The process of monitoring a building's energy consumption is time-consuming. Therefore, a feasible approach using transfer learning is presented to decrease the time necessary to collect the required large dataset. The technique applies a bidirectional long short-term memory recurrent neural network using sequence-to-sequence prediction. The idea involves a training phase that extracts information and patterns from a building for which a reasonably sized dataset is available. The validation phase uses a dataset that is not sufficient in size. This dataset was acquired through a related paper, so the results can be validated accordingly. The conducted experiments include four cases that involve different strategies in the training and validation phases and different percentages of fine-tuning. Our proposed model generated better scores in terms of prediction performance compared to the related paper.
6

Corni, Gabriele. "A study on the applicability of Long Short-Term Memory networks to industrial OCR." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
This thesis summarises a research-oriented study of the applicability of Long Short-Term Memory Recurrent Neural Networks (LSTMs) to industrial Optical Character Recognition (OCR) problems. Traditionally solved through Convolutional Neural Network-based approaches (CNNs), the reported work aims to detect the OCR aspects that could be improved by exploiting recurrent patterns among pixel intensities, and to speed up the overall character detection process. Accuracy, speed and complexity act as the main key performance indicators. After studying the core Deep Learning foundations, the best training technique to fit this problem was selected first, and the best parametrisation next. A set of tests eventually validated the preciseness of this solution. The final results highlight how difficult it is to perform better than CNNs where OCR tasks are concerned. Nonetheless, with favourable background conditions, the proposed LSTM-based approach is capable of reaching a comparable accuracy rate in (potentially) less time.
7

Nawaz, Sabeen. "Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281282.

Abstract:
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model which can be used in a fraud detection system with transactions and payments that are unlabeled. The proposed model is a Long Short-term Memory in an auto-encoder decoder network (LSTM-AED) which is trained and tested on transformed data. The data is transformed by reducing it to principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a huge loss of information occurs in the pre-processing stages.
8

Racette, Olsén Michael. "Electrocardiographic deviation detection : Using long short-term memory recurrent neural networks to detect deviations within electrocardiographic records." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76411.

Abstract:
Artificial neural networks have been gaining attention in recent years due to their impressive ability to map out complex nonlinear relations within data. In this report, an attempt is made to use a Long short-term memory neural network for detecting anomalies within electrocardiographic records. The hypothesis is that if a neural network is trained on records of normal ECGs to predict future ECG sequences, it is expected to have trouble predicting abnormalities not previously seen in the training data. Three different LSTM model configurations were trained using records from the MIT-BIH Arrhythmia database. Afterwards the models were evaluated for their ability to predict previously unseen normal and anomalous sections. This was done by measuring the mean squared error of each prediction and the uncertainty of overlapping predictions. The preliminary results of this study demonstrate that recurrent neural networks with the use of LSTM units are capable of detecting anomalies.
9

Verner, Alexander. "LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data." Diss., NSUWorks, 2019. https://nsuworks.nova.edu/gscis_etd/1074.

Abstract:
In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods of anomaly detections in sensor data are based on domain-specific feature engineering. A typical approach is to use domain knowledge to analyze sensor data and manually create statistics-based features, which are then used to train the machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback due to the fact that feature extraction is usually labor intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective in automated extraction of abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs in short, are a special kind of recurrent neural networks that are capable of learning long-term dependencies. These networks have proved to be especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data of sensors that measure blood glucose levels. A large number of time-series sequences was created based on a genuine medical diabetes dataset. Anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and classes corresponding to each of the six anomaly types. As a control, the accuracy of detection and classification of the LSTM was compared to that of four traditional machine learning classifiers: support vector machines, Random Forests, naive Bayes, and shallow neural networks. The performance of all the classifiers was evaluated based on nine metrics: precision, recall, and the F1-score, each measured in micro, macro and weighted perspective. While the traditional models were trained on vectors of features, derived from the raw data, that were based on knowledge of common sources of anomaly, the LSTM was trained on raw time-series data. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers by achieving 99% accuracy in all 9 metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be made in a fully automated way. This study, therefore, finds LSTM networks an effective solution to anomaly detection and classification in sensor data.
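The nine evaluation metrics described (precision, recall and F1, each micro-, macro- and weighted-averaged) are standard; a sketch with scikit-learn:

```python
from sklearn.metrics import precision_recall_fscore_support

def nine_metrics(y_true, y_pred):
    """Precision, recall and F1-score under micro, macro and weighted averaging,
    matching the evaluation scheme described above."""
    out = {}
    for avg in ("micro", "macro", "weighted"):
        p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg, zero_division=0)
        out[avg] = {"precision": p, "recall": r, "f1": f1}
    return out
```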
10

Svanberg, John. "Anomaly detection for non-recurring traffic congestions using Long short-term memory networks (LSTMs)." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234465.

Abstract:
In this master thesis, we implement a two-step anomaly detection mechanism for non-recurrent traffic congestions with data collected from public transport buses in Stockholm. We investigate the use of machine learning to model time series data with LSTMs and evaluate the results against a baseline prediction model. The anomaly detection algorithm embodies both collective and contextual expressivity, meaning it is capable of finding collections of delayed buses and also takes the temporality of the data into account. Results show that the anomaly detection performance benefits from the lower prediction errors produced by the LSTM network. The intersection rule significantly decreases the number of false positives while maintaining the true positive rate at a sufficient level. The performance of the anomaly detection algorithm has been found to depend on the road segment it is applied to: some segments have been identified as particularly hard, whereas others are easier. The best performing setup of the anomaly detection mechanism had a true positive rate of 84.3% and a true negative rate of 96.0%.

Books on the topic "Long Short-Term Memory network (LSTM)":

1

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates a nuanced understanding of AI, thereby providing the impetus for this book, "Introduction to Artificial Intelligence and Neural Networks." This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer.
2

Lampert, Jay. Philosophy of the Short Term. Bloomsbury Publishing Plc, 2023. http://dx.doi.org/10.5040/9781350347991.

Abstract:
The concept of the short term involves a complex network of quantitative, qualitative, and operational ideas. It is essential everywhere from the ontology of time, to the science of memory, to the preservation of art, to emotional life, to the practice of ethics. But what does the idea of the short term mean? What makes a temporal term short? What makes a time segment terminate? Is the short term a quantitative idea, or a qualitative or functional idea? When is it a good idea to understand events as short term events, and when is it a good idea to make decisions based on the short term? What does it mean for the nature of time if some of it can be short? Jay Lampert explores these questions in depth and makes use of the resources of short (as well as long) term processes in order to develop best temporal practices in ethical, aesthetic, epistemological, and metaphysical activities, both theoretical and practical. The methodology develops ideas based on the history of philosophy (from Plato to Hegel to Husserl to Deleuze), interdisciplinary studies (from cognitive science to poetics), and practical spheres where short term practices have been studied extensively (from short term psychotherapy to short term financial investments). Philosophy of the Short Term is the first book to deal systematically with the concept of the short term.
3

Nobre, Anna C. (Kia), and M.-Marsel Mesulam. Large-scale Networks for Attentional Biases. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.035.

Abstract:
Selective attention is essential for all aspects of cognition. Using the paradigmatic case of visual spatial attention, we present a theoretical account proposing the flexible control of attention through coordinated activity across a large-scale network of brain areas. It reviews evidence supporting top-down control of visual spatial attention by a distributed network, and describes principles emerging from a network approach. Stepping beyond the paradigm of visual spatial attention, we consider attentional control mechanisms more broadly. The chapter suggests that top-down biasing mechanisms originate from multiple sources and can be of several types, carrying information about receptive-field properties such as spatial locations or features of items; but also carrying information about properties that are not easily mapped onto receptive fields, such as the meanings or timings of items. The chapter considers how selective biases can operate on multiple slates of information processing, not restricted to the immediate sensory-motor stream, but also operating within internalized, short-term and long-term memory representations. Selective attention appears to be a general property of information processing systems rather than an independent domain within our cognitive make-up.

Book chapters on the topic "Long Short-Term Memory network (LSTM)":

1

Hvitfeldt, Emil, and Julia Silge. "Long short-term memory (LSTM) networks." In Supervised Machine Learning for Text Analysis in R, 273–302. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003093459-14.

2

Salem, Fathi M. "Gated RNN: The Long Short-Term Memory (LSTM) RNN." In Recurrent Neural Networks, 71–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_4.

3

Nandam, Srinivasa Rao, Adouthu Vamshi, and Inapanuri Sucharitha. "CAN Intrusion Detection Using Long Short-Term Memory (LSTM)." In Lecture Notes in Networks and Systems, 295–302. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1976-3_36.

4

Barone, Ben, David Coar, Ashley Shafer, Jinhong K. Guo, Brad Galego, and James Allen. "Interpreting Pilot Behavior Using Long Short-Term Memory (LSTM) Models." In Lecture Notes in Networks and Systems, 60–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80624-8_8.

5

Wüthrich, Mario V., and Michael Merz. "Recurrent Neural Networks." In Springer Actuarial, 381–406. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_8.

Abstract:
This chapter considers recurrent neural (RN) networks. These are special network architectures that are useful for time-series modeling, e.g., applied to time-series forecasting. We study the most popular RN networks, which are the long short-term memory (LSTM) networks and the gated recurrent unit (GRU) networks. We apply these networks to mortality forecasting.
6

Myakal, Sabhapathy, Rajarshi Pal, and Nekuri Naveen. "A Novel Pixel Value Predictor Using Long Short Term Memory (LSTM) Network." In Lecture Notes in Computer Science, 324–35. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36402-0_30.

7

Anwarsha, A., and T. Narendiranath Babu. "Intelligent Fault Detection of Rotating Machinery Using Long-Short-Term Memory (LSTM) Network." In Lecture Notes in Networks and Systems, 76–83. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20429-6_8.

8

Nikhil Chandran, A., Karthik Sreekumar, and D. P. Subha. "EEG-Based Automated Detection of Schizophrenia Using Long Short-Term Memory (LSTM) Network." In Algorithms for Intelligent Systems, 229–36. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5243-4_19.

9

Sai Charan, P. V., T. Gireesh Kumar, and P. Mohan Anand. "Advance Persistent Threat Detection Using Long Short Term Memory (LSTM) Neural Networks." In Emerging Technologies in Computer Engineering: Microservices in Big Data Analytics, 45–54. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8300-7_5.

10

Zainudin, Zanariah, Siti Mariyam Shamsuddin, and Shafaatunnur Hasan. "Convolutional Neural Network Long Short-Term Memory (CNN + LSTM) for Histopathology Cancer Image Classification." In Machine Intelligence and Signal Processing, 235–45. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1366-4_19.


Conference papers on the topic "Long Short-Term Memory network (LSTM)":

1

Lin, Yanbin, Dongliang Duan, Xueming Hong, Xiang Cheng, Liuqing Yang, and Shuguang Cui. "Very-Short-Term Solar Forecasting with Long Short-Term Memory (LSTM) Network." In 2020 Asia Energy and Electrical Engineering Symposium (AEEES). IEEE, 2020. http://dx.doi.org/10.1109/aeees48850.2020.9121512.

2

Singh, Shubhendu Kumar, Ruoyu Yang, Amir Behjat, Rahul Rai, Souma Chowdhury, and Ion Matei. "PI-LSTM: Physics-Infused Long Short-Term Memory Network." In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE, 2019. http://dx.doi.org/10.1109/icmla.2019.00015.

3

Pérez, José, Rafael Baez, Jose Terrazas, Arturo Rodríguez, Daniel Villanueva, Olac Fuentes, Vinod Kumar, Brandon Paez, and Abdiel Cruz. "Physics-Informed Long-Short Term Memory Neural Network Performance on Holloman High-Speed Test Track Sled Study." In ASME 2022 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/fedsm2022-86953.

Abstract:
Physics Informed Neural Networks (PINNs) incorporate known physics equations into a network to reduce training time and increase accuracy. Traditional PINN approaches are based on dense networks that do not consider the fact that simulations are a type of sequential data. Long-Short Term Memory (LSTM) networks are a modified version of Recurrent Neural Networks (RNNs) which are used to analyze sequential datasets. We propose a Physics-Informed LSTM network that leverages the power of LSTMs for sequential datasets and also incorporates the governing physics equations of 2D incompressible Navier-Stokes fluid flow to analyze the flow around a stationary geometry resembling the water braking mechanism at the Holloman High-Speed Test Track. Currently, the simulation data needed to analyze the fluid flow of the braking mechanism is generated through ANSYS and is costly, taking several days for a single simulation. By incorporating physics equations, our proposed Physics-Informed LSTM network was able to predict the last 20% of a simulation, given the first 80%, within a small margin of error and in a shorter amount of time than a non-informed LSTM. This demonstrates the potential of physics-informed networks that leverage sequential information to speed up computational fluid dynamics simulations, and serves as a first step towards adapting PINNs to more advanced network architectures.
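The general recipe behind such physics-informed training, adding the residual of the governing equations to the data-fit loss, can be sketched as follows; the residual computation itself (e.g. the 2D incompressible Navier-Stokes terms) is left abstract, and the weighting `lam` is an assumed hyperparameter:

```python
import tensorflow as tf

def physics_informed_loss(y_true, y_pred, pde_residual, lam=0.1):
    """Standard data-fit MSE plus a weighted penalty on the governing-equation
    residual evaluated at the network's predictions."""
    data_loss = tf.reduce_mean(tf.square(y_true - y_pred))
    physics_loss = tf.reduce_mean(tf.square(pde_residual))
    return data_loss + lam * physics_loss
```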
4

Gaurav, Akshat, Varsha Arya, Kwok Tai Chui, Brij B. Gupta, Chang Choi, and O.-Joun Lee. "Long Short-Term Memory Network (LSTM) based Stock Price Prediction." In RACS '23: International Conference on Research in Adaptive and Convergent Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3599957.3606240.

5

Yu, Wennian, Chris K. Mechefske, and Il Yong Kim. "Cutting Tool Wear Estimation Using a Genetic Algorithm Based Long Short-Term Memory Neural Network." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85253.

Abstract:
On-line cutting tool wear monitoring plays a critical role in industrial automation and has the potential to significantly increase productivity and improve product quality. In this study, we employed the long short-term memory neural network as the decision model of the tool condition monitoring system to predict the amount of cutting tool wear. Compared with traditional recurrent neural networks, the long short-term memory (LSTM) network can capture the long-term dependencies within a time series. To further decrease the training error and enhance the prediction performance of the network, a genetic algorithm (GA) is applied to find the initial values of the network that minimize the objective (training error). The proposed methodology is applied to a publicly available milling data set. Comparisons of the prediction performance between the Elman network and the LSTM, with and without GA optimization, prove that the GA-based LSTM shows enhanced prediction performance on this data set.
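One simple way to realize the idea of a GA choosing initial weights is an elitist, mutation-only search over flattened weight vectors; a toy sketch under those assumptions, not the paper's exact operators:

```python
import numpy as np

def ga_initial_weights(train_error, dim, pop_size=20, generations=30, sigma=0.1, seed=0):
    """Elitist genetic search: keep the best half of the population by training
    error and refill it with Gaussian-mutated copies. `train_error` maps a flat
    initial-weight vector to the network's training error."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        errors = np.array([train_error(w) for w in population])
        elite = population[np.argsort(errors)[: pop_size // 2]]
        mutants = elite + rng.normal(0.0, sigma, size=elite.shape)
        population = np.vstack([elite, mutants])
    errors = np.array([train_error(w) for w in population])
    return population[np.argmin(errors)]
```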
6

Huang, Ting, Gehui Shen, and Zhi-Hong Deng. "Leap-LSTM: Enhancing Long Short-Term Memory for Text Categorization." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/697.

Abstract:
Recurrent Neural Networks (RNNs) are widely used in the field of natural language processing (NLP), ranging from text categorization to question answering and machine translation. However, RNNs generally read the whole text from beginning to end (or sometimes in reverse), which makes it inefficient to process long texts. When reading a long document for a categorization task, such as topic categorization, large quantities of words are irrelevant and can be skipped. To this end, we propose Leap-LSTM, an LSTM-enhanced model which dynamically leaps between words while reading texts. At each step, we utilize several feature encoders to extract messages from preceding texts, following texts and the current word, and then determine whether to skip the current word. We evaluate Leap-LSTM on several text categorization tasks: sentiment analysis, news categorization, ontology classification and topic classification, with five benchmark data sets. The experimental results show that our model reads faster and predicts better than standard LSTM. Compared to previous models which can also skip words, our model achieves better trade-offs between performance and efficiency.
7

Wang, Junzhe, and Evren M. Ozbayoglu. "Application of Recurrent Neural Network Long Short-Term Memory Model on Early Kick Detection." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-78739.

Abstract:
Long-short term memory [1] (LSTM) is an artificial Recurrent Neural Network (RNN) architecture capable of performing deep learning tasks. With its special feedback feature, the LSTM network is suitable for processing a sequence of data and making a sequence of predictions. It has been successfully applied in many disciplines such as speech recognition, language translation, time series forecasting, and anomaly detection. In this paper, the RNN-LSTM network is applied to real-time drilling data to study the complex dependencies between multiple drilling parameters and common kick indicators. A well-trained model uses the concept of a sliding window to continuously predict the unforeseen values of sensitive kick indicators. With proper analysis, the predicted result is helpful for detecting kicks ahead of time. This paper also proposes a general workflow to easily visualize the prediction results. Compared with other time series prediction methods, the LSTM network has the advantages of more accurate multi-step prediction, a stronger physical basis, and more flexibility. The proposed LSTM network uses accelerated GPU computing; the fast computational speed makes both online and offline learning possible. It is concluded that this approach is capable of accurately predicting kick indicators under certain circumstances and may provide innovative guidance for the application of the LSTM network in early kick detection and future study.
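The sliding-window prediction described here is typically implemented by feeding each one-step-ahead prediction back into the window; a sketch assuming a trained Keras-style model on a univariate indicator:

```python
import numpy as np

def rolling_forecast(model, history, steps, window):
    """Slide a fixed-length window over the series, predicting one step at a
    time and appending each prediction so later steps build on earlier ones."""
    buffer = list(np.asarray(history, dtype=float)[-window:])
    predictions = []
    for _ in range(steps):
        x = np.asarray(buffer[-window:]).reshape(1, window, 1)
        y_hat = float(model.predict(x, verbose=0)[0, 0])
        predictions.append(y_hat)
        buffer.append(y_hat)
    return predictions
```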
8

Obiora, Chibuzor N., Ahmed Ali, and Ali N. Hasan. "Forecasting Hourly Solar Irradiance Using Long Short-Term Memory (LSTM) Network." In 2020 11th International Renewable Energy Congress (IREC). IEEE, 2020. http://dx.doi.org/10.1109/irec48820.2020.9310449.

9

Wang, Kaimao. "Long Short-Term Memory (LSTM) Network Applications in Stock Price Prediction." In 2023 International Conference on Ambient Intelligence, Knowledge Informatics and Industrial Electronics (AIKIIE). IEEE, 2023. http://dx.doi.org/10.1109/aikiie60097.2023.10390445.

10

Shabbir, Noman, Roya Ahmadiahangar, Argo Rosin, Oleksandr Husev, Tanel Jalakas, and Joao Martins. "Residential DC Load Forecasting Using Long Short-term Memory Network (LSTM)." In 2023 IEEE 11th International Conference on Smart Energy Grid Engineering (SEGE). IEEE, 2023. http://dx.doi.org/10.1109/sege59172.2023.10274596.


Reports of organizations on the topic "Long Short-Term Memory network (LSTM)":

1

Cárdenas-Cárdenas, Julián Alonso, Deicy J. Cristiano-Botia, and Nicolás Martínez-Cortés. Colombian inflation forecast using Long Short-Term Memory approach. Banco de la República, June 2023. http://dx.doi.org/10.32468/be.1241.

Abstract:
We use Long Short Term Memory (LSTM) neural networks, a deep learning technique, to forecast Colombian headline inflation one year ahead through two approaches. The first one uses only information from the target variable, while the second one incorporates additional information from some relevant variables. We apply rolling samples to the traditional neural network construction process, selecting the hyperparameters with criteria that minimize the forecast error. Our results show a better forecasting capacity for the network with information from additional variables, surpassing both the other LSTM application and ARIMA models optimized for forecasting (with and without explanatory variables). This improvement in forecasting accuracy is most pronounced over longer time horizons, specifically from the seventh month onwards.
2

Ankel, Victoria, Stella Pantopoulou, Matthew Weathered, Darius Lisowski, Anthonie Cilliers, and Alexander Heifetz. One-Step Ahead Prediction of Thermal Mixing Tee Sensors with Long Short Term Memory (LSTM) Neural Networks. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1760289.

3

Kumar, Kaushal, and Yupeng Wei. Attention-Based Data Analytic Models for Traffic Flow Predictions. Mineta Transportation Institute, March 2023. http://dx.doi.org/10.31979/mti.2023.2211.

Abstract:
Traffic congestion causes Americans to lose millions of hours and dollars each year. In fact, 1.9 billion gallons of fuel are wasted each year due to traffic congestion, and each hour stuck in traffic costs about $21 in wasted time and fuel. The traffic congestion can be caused by various factors, such as bottlenecks, traffic incidents, bad weather, work zones, poor traffic signal timing, and special events. One key step to addressing traffic congestion and identifying its root cause is an accurate prediction of traffic flow. Accurate traffic flow prediction is also important for the successful deployment of smart transportation systems. It can help road users make better travel decisions to avoid traffic congestion areas so that passenger and freight movements can be optimized to improve the mobility of people and goods. Moreover, it can also help reduce carbon emissions and the risks of traffic incidents. Although numerous methods have been developed for traffic flow predictions, current methods have limitations in utilizing the most relevant part of traffic flow data and considering the correlation among the collected high-dimensional features. To address this issue, this project developed attention-based methodologies for traffic flow predictions. We propose the use of an attention-based deep learning model that incorporates the attention mechanism with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. This attention mechanism can calculate the importance level of traffic flow data and enable the model to consider the most relevant part of the data while making predictions, thus improving accuracy and reducing prediction duration.
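The core of the described attention mechanism (score each LSTM time step, then pool the hidden states by the resulting weights) can be sketched as follows; the input dimensions and single-head design are assumptions, not the report's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(24, 8))                   # 24 past intervals, 8 traffic features (assumed)
states = layers.LSTM(64, return_sequences=True)(inputs)  # hidden state per time step
scores = layers.Dense(1)(states)                       # learned importance score per step
weights = layers.Softmax(axis=1)(scores)               # attention weights over the time axis
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([states, weights])
outputs = layers.Dense(1)(context)                     # predicted traffic flow
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```

This lets the model weight the most relevant part of the input window when forming its prediction, which is the mechanism the abstract credits for the accuracy gains.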
4

Vold, Andrew. Improving Physics Based Electron Neutrino Appearance Identification with a Long Short-Term Memory Network. Office of Scientific and Technical Information (OSTI), January 2018. http://dx.doi.org/10.2172/1529330.

