Selected scientific literature on the topic "LSTM Neural networks"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "LSTM Neural networks".

Next to every source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract online, when it is included in the metadata.

Journal articles on the topic "LSTM Neural networks":

1

Bakir, Houda, Ghassen Chniti, and Hédi Zaher. "E-Commerce Price Forecasting Using LSTM Neural Networks". International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 169–74. http://dx.doi.org/10.18178/ijmlc.2018.8.2.682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yu, Yong, Xiaosheng Si, Changhua Hu, and Jianxun Zhang. "A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures". Neural Computation 31, no. 7 (July 2019): 1235–70. http://dx.doi.org/10.1162/neco_a_01199.

Abstract:
Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) could handle the problem of long-term dependencies well. Since its introduction, almost all the exciting results based on RNNs have been achieved by the LSTM. The LSTM has become the focus of deep learning. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, the LSTM networks are divided into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, their various applications are discussed. Finally, future research directions are presented for LSTM networks.
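The gate mechanism this review surveys can be sketched in a few lines. Below is a minimal, illustrative single-unit LSTM step with scalar states and hand-picked weight tuples (all hypothetical), not the vectorized form used in practice:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step with scalar states; W maps gate name -> (w_x, w_h, b)."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate value
    c = f * c_prev + i * g    # cell state: gated mix of old memory and new input
    h = o * math.tanh(c)      # hidden state exposed to the next layer
    return h, c
```

Repeating this update over a sequence, with learned weight matrices in place of the scalar tuples, yields the LSTM layer; the gated cell-state path is what lets gradients survive across long input gaps.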
3

Kalinin, Maxim, Vasiliy Krundyshev, and Evgeny Zubkov. "Estimation of applicability of modern neural network methods for preventing cyberthreats to self-organizing network infrastructures of digital economy platforms". SHS Web of Conferences 44 (2018): 00044. http://dx.doi.org/10.1051/shsconf/20184400044.

Abstract:
The problems of applying neural network methods to the prevention of cyberthreats against flexible self-organizing network infrastructures of digital economy platforms (vehicle ad hoc networks, wireless sensor networks, industrial IoT, "smart buildings" and "smart cities") are considered. The applicability of the classic perceptron neural network, recurrent, deep and LSTM neural networks, and neural network ensembles is estimated under the restricting conditions of fast training and big data processing. The use of neural networks with a complex architecture, namely recurrent and LSTM neural networks, is experimentally justified for building an intrusion detection system for self-organizing network infrastructures.
4

Zhang, Chuanwei, Xusheng Xu, Yikun Li, Jing Huang, Chenxi Li, and Weixin Sun. "Research on SOC Estimation Method for Lithium-Ion Batteries Based on Neural Network". World Electric Vehicle Journal 14, no. 10 (October 2, 2023): 275. http://dx.doi.org/10.3390/wevj14100275.

Abstract:
With the increasingly serious problem of environmental pollution, new energy vehicles have become a hot spot in today’s research. The lithium-ion battery has become the mainstream power battery of new energy vehicles as it has the advantages of long service life, high-rated voltage, low self-discharge rate, etc. The battery management system is the key part that ensures the efficient and safe operation of the vehicle as well as the long life of the power battery. The accurate estimation of the power battery state directly affects the whole vehicle’s performance. As a result, this paper established a lithium-ion battery charge state estimation model based on BP, PSO-BP and LSTM neural networks, which tried to combine the PSO algorithm with the LSTM algorithm. The particle swarm algorithm was utilized to obtain the optimal parameters of the model in the process of repetitive iteration so as to establish the PSO-LSTM prediction model. The superiority of the LSTM neural network model in SOC estimation was demonstrated by comparing the estimation accuracies of BP, PSO-BP and LSTM neural networks. The comparative analysis under constant flow conditions in the laboratory showed that the PSO-LSTM neural network predicts SOC more accurately than BP, PSO-BP and LSTM neural networks. The comparative analysis under DST and US06 operating conditions showed that the PSO-LSTM neural network has a greater prediction accuracy for SOC than the LSTM neural network.
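The PSO-LSTM coupling described above uses the particle swarm to search for model parameters that minimize prediction error over repeated iterations. As a hedged illustration, here is a minimal one-dimensional PSO loop; the quadratic `loss` used in the test is a hypothetical stand-in for the validation error an LSTM would report for a candidate hyperparameter, not the paper's actual objective:

```python
import random

def pso_minimize(loss, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over a single dimension [lo, hi]."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                         # each particle's best-seen position
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for k in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity: inertia + pull toward personal best + pull toward global best
            vel[k] = (w * vel[k] + c1 * r1 * (pbest[k] - pos[k])
                      + c2 * r2 * (gbest - pos[k]))
            pos[k] = min(hi, max(lo, pos[k] + vel[k]))
            v = loss(pos[k])
            if v < pbest_val[k]:
                pbest[k], pbest_val[k] = pos[k], v
                if v < gbest_val:
                    gbest, gbest_val = pos[k], v
    return gbest, gbest_val
```

In the paper's setting, each `loss` evaluation would train or validate an LSTM with the candidate parameter, which is why PSO's derivative-free search is attractive there.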
5

Sridhar, C., and Aniruddha Kanhe. "Performance Comparison of Various Neural Networks for Speech Recognition". Journal of Physics: Conference Series 2466, no. 1 (March 1, 2023): 012008. http://dx.doi.org/10.1088/1742-6596/2466/1/012008.

Abstract:
Speech recognition is the task of translating an audio signal into text, words, or commands. Recently, many deep learning models have been adopted for automatic speech recognition and have proved more effective than traditional machine learning methods such as Artificial Neural Networks (ANN). This work examines how efficiently different deep neural networks learn feature representations. In this paper, five neural network models, namely CNN, LSTM, Bi-LSTM, GRU, and CONV-LSTM, are selected for the comparative study. We trained the networks on the Audio MNIST dataset for three different iterations and evaluated them on standard performance metrics. Experimentally, the CNN and Conv-LSTM models consistently offer the best performance based on MFCC features.
6

Wan, Yingliang, Hong Tao, and Li Ma. "Forecasting Zhejiang Province's GDP Using a CNN-LSTM Model". Frontiers in Business, Economics and Management 13, no. 3 (March 5, 2024): 233–35. http://dx.doi.org/10.54097/bmq2dy63.

Abstract:
Zhejiang province has experienced notable economic growth in recent years. Despite this, achieving sustainable high-quality economic development presents complex challenges and uncertainties. This study employs advanced neural network methodologies, including Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and an integrated CNN-LSTM model, to predict Zhejiang's economic trajectory. Our empirical analysis demonstrates the proficiency of neural networks in delivering reasonably precise economic forecasts, despite inherent prediction residuals. A comparative assessment indicates that the composite CNN-LSTM model surpasses the individual CNN and LSTM models in accuracy, providing a more reliable forecasting instrument for Zhejiang's high-quality economic progression.
7

Liu, David, and An Wei. "Regulated LSTM Artificial Neural Networks for Option Risks". FinTech 1, no. 2 (June 2, 2022): 180–90. http://dx.doi.org/10.3390/fintech1020014.

Abstract:
This research aims to study the pricing risks of options by using improved LSTM artificial neural network models and make direct comparisons with the Black–Scholes option pricing model based upon the option prices of 50 ETFs of the Shanghai Securities Exchange from 1 January 2018 to 31 December 2019. We study an LSTM model, a mathematical option pricing model (BS model), and an improved artificial neural network model—the regulated LSTM model. The method we adopted is first to price the options using the mathematical model—i.e., the BS model—and then to construct the LSTM neural network for training and predicting the option prices. We further form the regulated LSTM network with optimally selected key technical indicators using Python programming aiming at improving the network’s predicting ability. Risks of option pricing are measured by MSE, RMSE, MAE and MAPE, respectively, for all the models used. The results of this paper show that both the ordinary LSTM and the traditional BS option pricing model have lower predictive ability than the regulated LSTM model. The prediction ability of the regulated LSTM model with the optimal technical indicators is superior, and the approach adopted is effective.
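The four risk measures this paper reports (MSE, RMSE, MAE, MAPE) are standard; a minimal sketch of how they would be computed over predicted versus observed option prices:

```python
def mse(y, yhat):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error."""
    return mse(y, yhat) ** 0.5

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    """Mean absolute percentage error; assumes strictly positive true prices,
    since the measure is undefined when a true value is zero."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)
```

MAPE's division by the true price is why it suits price series (always positive) better than return series, which cross zero.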
8

Pal, Subarno, Soumadip Ghosh, and Amitava Nag. "Sentiment Analysis in the Light of LSTM Recurrent Neural Networks". International Journal of Synthetic Emotions 9, no. 1 (January 2018): 33–39. http://dx.doi.org/10.4018/ijse.2018010103.

Abstract:
Long short-term memory (LSTM) is a special type of recurrent neural network (RNN) architecture that was designed over simple RNNs for modeling temporal sequences and their long-range dependencies more accurately. In this article, the authors work with different types of LSTM architectures for sentiment analysis of movie reviews. It has been shown that LSTM RNNs are more effective than deep neural networks and conventional RNNs for sentiment analysis. Here, the authors explore different architectures associated with LSTM models to study their relative performance on sentiment analysis. A simple LSTM is first constructed and its performance is studied. In subsequent stages, LSTM layers are stacked one upon another, which yields an increase in accuracy. Later, the LSTM layers are made bidirectional to convey data both forward and backward in the network. The authors show that a layered deep LSTM with bidirectional connections achieves better accuracy than the simpler LSTM variants used here.
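The bidirectional wiring the abstract describes is simple to sketch: run one recurrence left-to-right, another right-to-left, and pair the states at each time step. The plain tanh recurrence below is a hypothetical stand-in for a full LSTM cell, used only to keep the sketch short:

```python
import math

def scan(xs, w_x=0.5, w_h=0.8):
    """Left-to-right tanh recurrence (stand-in for an LSTM layer's forward pass)."""
    h, hs = 0.0, []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h)
        hs.append(h)
    return hs

def bidirectional(xs):
    fwd = scan(xs)                 # forward pass over the sequence
    bwd = scan(xs[::-1])[::-1]     # backward pass, re-aligned to time order
    # each step now carries context from both directions, as in a Bi-LSTM layer
    return list(zip(fwd, bwd))
```

Stacking means feeding one layer's output sequence into the next layer's `scan`; making each layer bidirectional doubles the feature width per step.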
9

Kabildjanov, A. S., Ch Z. Okhunboboeva, and S. Yo Ismailov. "Intelligent forecasting of growth and development of fruit trees by deep learning recurrent neural networks". IOP Conference Series: Earth and Environmental Science 1206, no. 1 (June 1, 2023): 012015. http://dx.doi.org/10.1088/1755-1315/1206/1/012015.

Abstract:
The questions of intelligent forecasting of the dynamic processes of growth and development of fruit trees are considered. The average growth rate of shoots of apple trees of the «Renet Simirenko» variety was predicted. Forecasting was carried out using a deep learning recurrent LSTM neural network applied to a one-dimensional time series describing this parameter. The LSTM network was implemented in the MATLAB 2021 environment. Its architecture was defined and trained with the Deep Network Designer application, which is included in the MATLAB 2021 extensions and allows deep learning networks to be created, visualized, edited and trained. The network was trained using the Adam method. The predictions of the average growth rate of apple shoots produced by the trained LSTM network were evaluated with the root-mean-square error RMSE and the loss function LOSS.
10

Yu, Dian, and Shouqian Sun. "A Systematic Exploration of Deep Neural Networks for EDA-Based Emotion Recognition". Information 11, no. 4 (April 15, 2020): 212. http://dx.doi.org/10.3390/info11040212.

Abstract:
Subject-independent emotion recognition based on physiological signals has become a research hotspot. Previous research has proved that electrodermal activity (EDA) signals are an effective data resource for emotion recognition. Benefiting from their great representation ability, an increasing number of deep neural networks have been applied for emotion recognition, and they can be classified as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or a combination of these (CNN+RNN). However, there has been no systematic research on the predictive power and configurations of different deep neural networks in this task. In this work, we systematically explore the configurations and performances of three adapted deep neural networks: ResNet, LSTM, and hybrid ResNet-LSTM. Our experiments use the subject-independent method to evaluate the three-class classification on the MAHNOB dataset. The results prove that the CNN model (ResNet) reaches a better accuracy and F1 score than the RNN model (LSTM) and the CNN+RNN model (hybrid ResNet-LSTM). Extensive comparisons also reveal that our three deep neural networks with EDA data outperform previous models with handcrafted features on emotion recognition, which proves the great potential of the end-to-end DNN method.

Theses on the topic "LSTM Neural networks":

1

Paschou, Michail. "ASIC implementation of LSTM neural network algorithm". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254290.

Abstract:
LSTM neural networks have been used for speech recognition, image recognition and other artificial intelligence applications for many years. Most applications perform the LSTM algorithm and the required calculations on cloud computers. Off-line solutions include the use of FPGAs and GPUs, but the most promising solutions are ASIC accelerators designed for this purpose alone. This report presents an ASIC design capable of performing the multiple iterations of the LSTM algorithm on a unidirectional neural network architecture without peepholes. The proposed design provides arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel or serial solutions depending on which is optimal in each case. The implications of these decisions are discussed in detail in the report. The design process is described in detail, and an evaluation measuring the accuracy and error of the design output is also presented. This thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a Fully Connected layer and a Softmax layer, which can classify data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment has been designed that can run simulations, evaluate the design output by comparing it with results produced by performing the same operations with 64-bit floating-point precision on a SystemVerilog testbench, and measure the resulting error. The results concerning the accuracy and the error margin of the design output are presented in this report. The design went through logic and physical synthesis and successfully resulted in a functional netlist for every tested configuration. Timing, area and power measurements on the generated netlists of various configurations show consistency and are reported as well.
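The number format this thesis settles on (16-bit fixed point with 5 integer and 11 fractional bits) is easy to illustrate. A sketch of the quantization in Python, assuming two's-complement encoding with saturation on overflow (the saturation behaviour is an assumption of this sketch, not something the abstract specifies):

```python
FRAC_BITS = 11          # Q5.11: 5 integer bits, 11 fractional bits, 16 bits total
SCALE = 1 << FRAC_BITS  # 2048 quantization steps per unit

def to_fixed(x):
    """Quantize a float to a 16-bit two's-complement Q5.11 value, saturating."""
    raw = int(round(x * SCALE))
    return max(-(1 << 15), min((1 << 15) - 1, raw))  # clamp to 16-bit range

def to_float(raw):
    """Recover the real value represented by a Q5.11 integer."""
    return raw / SCALE
```

The worst-case rounding error is half a step (2^-12 ≈ 0.00024), which is why the thesis widens the representation inside some blocks to keep accumulated error down.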
2

Cavallie, Mester Jon William. "Using LSTM Neural Networks To Predict Daily Stock Returns". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-106124.

Abstract:
Long short-term memory (LSTM) neural networks have been proven to be effective for time series prediction, even in some instances where the data is non-stationary. This led us to examine their ability to predict stock market returns, as the development of stock prices and returns tends to be a non-stationary time series. We used daily stock trading data to let an LSTM train models at predicting daily returns for 60 stocks from the OMX30 and Nasdaq-100 indices. Subsequently, we measured their accuracy, precision, and recall. The mean accuracy was 49.75 percent, meaning that the observed accuracy was close to what one would observe by randomly selecting a prediction for each day, and lower than the accuracy achieved by blindly predicting all days to be positive. Finally, we concluded that further improvements need to be made for models trained by LSTMs to have any notable predictive ability in the area of stock returns.
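The three metrics this thesis reports are computed from the confusion counts of the up/down predictions. A minimal sketch, with hypothetical labels (1 = positive return day, 0 = negative):

```python
def direction_metrics(y_true, y_pred):
    """Accuracy, precision, recall for binary up(1)/down(0) daily-return labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    acc = correct / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0  # how often predicted "up" was right
    rec = tp / (tp + fn) if tp + fn else 0.0   # how many actual "up" days were caught
    return acc, prec, rec
```

The thesis' two baselines map directly onto this: random guessing gives ~50% accuracy, and predicting 1 every day gives recall of 1.0 with accuracy equal to the fraction of positive days.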
3

Pokhrel, Abhishek. "Stock Returns Prediction using Recurrent Neural Networks with LSTM". Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/22038.

Abstract:
Research in asset pricing has, until recently, side-stepped the high dimensionality problem by focusing on low-dimensional models. Work on cross-sectional stock return prediction, for example, has focused on regressions with a small number of characteristics. Given the background of an enormously large number of variables that could potentially be relevant for predicting returns, focusing on such a small number of factors effectively means that the researchers are imposing a very high degree of sparsity on these models. This research studies the use of the recurrent neural network (RNN) method to deal with the “curse of dimensionality” challenge in the cross-section of stock returns. The purpose is to predict the daily stock returns. Compared with the traditional method of returns, namely the CAPM model, the focus will be on using the LSTM model to do the prediction. LSTM is very powerful in sequence prediction problems because they’re able to store past information. Thus, we compare the forecast of returns from the LSTM model with the traditional CAPM model. The comparison will be made using the out-of-sample R2 along with the Sharpe Ratio and Sortino Ratio. Finally, we conclude with the further improvements that need to be made for models trained by LSTMs to have any notable predictive ability in the area of stock returns.
4

Ärlemalm, Filip. "Harbour Porpoise Click Train Classification with LSTM Recurrent Neural Networks". Thesis, KTH, Teknisk informationsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215088.

Abstract:
The harbour porpoise is a toothed whale whose presence is threatened in Scandinavia. One step towards preserving the species in critical areas is to study and observe the harbour porpoise population growth or decline in these areas. Today this is done by using underwater audio recorders, so called hydrophones, and manual analyzing tools. This report describes a method that modernizes the process of harbour porpoise detection with machine learning. The detection method is based on data collected by the hydrophone AQUAclick 100. The data is processed and classified automatically with a stacked long short-term memory recurrent neural network designed specifically for this purpose.
5

Li, Edwin. "LSTM Neural Network Models for Market Movement Prediction". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231627.

Abstract:
Interpreting time varying phenomena is a key challenge in the capital markets. Time series analysis using autoregressive methods has been carried out over the last couple of decades, often with reassuring results. However, such methods sometimes fail to explain trends and cyclical fluctuations, which may be characterized by long-range dependencies or even dependencies between the input features. The purpose of this thesis is to investigate whether recurrent neural networks with LSTM-cells can be used to capture these dependencies, and ultimately be used as a complement for index trading decisions. Experiments are made on different setups of the S&P-500 stock index, and two distinct models are built, each one being an improvement of the previous model. The first model is a multivariate regression model, and the second model is a multivariate binary classifier. The output of each model is used to reason about the future behavior of the index. The experiment shows for the configuration provided that LSTM RNNs are unsuitable for predicting exact values of daily returns, but gives satisfactory results when used to predict the direction of the movement.
6

Zambezi, Samantha. "Predicting social unrest events in South Africa using LSTM neural networks". Master's thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33986.

Abstract:
This thesis demonstrates an approach to predict the count of social unrest events in South Africa. A comparison is made between traditional forecasting approaches and neural networks; the traditional forecast method selected being the Autoregressive Integrated Moving Average (ARIMA model). The type of neural network implemented was the Long Short-Term Memory (LSTM) neural network. The basic theoretical concepts of ARIMA and LSTM neural networks are explained and subsequently, the patterns of the social unrest time series were analysed using time series exploratory techniques. The social unrest time series contained a significant number of irregular fluctuations with a non-linear trend. The structure of the social unrest time series suggested that traditional linear approaches would fail to model the non-linear behaviour of the time series. This thesis confirms this finding. Twelve experiments were conducted, and in these experiments, features, scaling procedures and model configurations are varied (i.e. univariate and multivariate models). Multivariate LSTM achieved the lowest forecast errors and performance improved as more explanatory features were introduced. The ARIMA model's performance deteriorated with added complexity and the univariate ARIMA produced lower forecast errors compared to the multivariate ARIMA. In conclusion, it can be claimed that multivariate LSTM neural networks are useful for predicting social unrest events.
7

Holm, Noah, and Emil Plynning. "Spatio-temporal prediction of residential burglaries using convolutional LSTM neural networks". Thesis, KTH, Geoinformatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229952.

Abstract:
The low number of solved residential burglary crimes calls for new and innovative methods in the prevention and investigation of these cases. There were 22 600 reported residential burglaries in Sweden in 2017, but only four to five percent of these will ever be solved. There are many initiatives, both in Sweden and abroad, for decreasing the number of residential burglaries, and one approach being tested is the use of prediction methods to enable more efficient preventive actions. This thesis investigates a potential prediction method that uses neural networks to identify, on a daily basis, areas with a higher risk of burglary. The model uses reported burglaries to learn patterns in both space and time. The rationale for the existence of such patterns is based on near-repeat theories in criminology, which state that after a burglary, both the burgled victim and the area around that victim have an increased risk of additional burglaries. The work has been conducted in cooperation with the Swedish Police Authority. The machine learning is implemented with convolutional long short-term memory (LSTM) neural networks with max pooling in three dimensions that learn from ten years of residential burglary data (2007-2016) in a study area in Stockholm, Sweden. The model's accuracy is measured by performing daily predictions of burglaries during 2017. It classifies cells in a 36x36 grid of 600-meter square grid cells as areas with elevated risk or not. By classifying 4% of all grid cells during the year as risk areas, 43% of all burglaries are correctly predicted. The performance of the model could potentially be improved by further tuning of the neural network's parameters, along with the use of more data on factors correlated with burglaries, for instance weather; further work in these areas could therefore increase the accuracy.
The conclusion is that neural networks, or machine learning in general, could be a powerful and innovative tool for the Swedish Police Authority to predict, and moreover prevent, certain crime. This thesis serves as a first prototype of how such a system could be implemented and used.
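The headline evaluation in this thesis (flagging 4% of grid cells and capturing 43% of burglaries) corresponds to a top-k hit-rate computation. A sketch with hypothetical risk scores and per-cell event counts:

```python
def hit_rate(risk_scores, event_counts, frac=0.04):
    """Share of events that fall in the top `frac` of cells ranked by predicted risk."""
    n_flag = max(1, int(len(risk_scores) * frac))
    ranked = sorted(range(len(risk_scores)),
                    key=lambda i: risk_scores[i], reverse=True)
    flagged = set(ranked[:n_flag])              # cells declared "elevated risk"
    total = sum(event_counts)
    caught = sum(event_counts[i] for i in flagged)
    return caught / total if total else 0.0
```

In the thesis' terms, `risk_scores` would be the model's daily per-cell outputs over the 36x36 grid and `event_counts` the burglaries observed in 2017; a hit rate of 0.43 at `frac=0.04` matches the reported result.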
8

Graffi, Giacomo. "A novel approach for Credit Scoring using Deep Neural Networks with bank transaction data". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
With the PSD2 open banking revolution FinTechs obtained a key role in the financial industry. This role implies the inquiry and development of new techniques, products and solutions to compete with other players in this area. The aim of this thesis is to investigate the applicability of the state-of-the-art Deep Learning techniques for Credit Risk Modeling. In order to accomplish it, a PSD2-related synthetic and anonymized dataset has been used to simulate an application process with only one account per user. Firstly, a machine-readable representation of the bank accounts has been created, starting from the raw transactions’ data and scaling the variables using the quantile function. Afterwards, a Deep Neural Network has been created in order to capture the complex relations between the input variables and to extract information from the accounts’ representations. The proposed architecture accomplished the assigned tasks with a Gini index of 0.55, exploiting a Convolutional encoder to extract features from the inputs and a Recurrent decoder to analyze them.
9

Xiang, Wenliang. "Anomaly detection by prediction for health monitoring of satellites using LSTM neural networks". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24695/.

Abstract:
Anomaly detection in satellites has not been well documented due to the unavailability of satellite data, while it is becoming more and more important with the increasing popularity of satellite applications. Our work focuses on anomaly detection by prediction on a satellite dataset, where we compare the performance of a recurrent neural network (RNN), Long Short-Term Memory (LSTM) and a conventional neural network (NN). We conclude that an LSTM with input length p=16, dimensionality n=32, output length q=2, 128 neurons and without maximum overlap is best in terms of balanced accuracy, while an LSTM with p=128, n=32, q=16, 128 neurons and without maximum overlap performs best with respect to the AUC metric. We also introduce an award function as a new performance metric that tries to capture not only the correctness of the decisions the network makes but also its confidence in making them, and we propose two candidate award functions. Regrettably, they only partially meet our expectations, as they possess a fatal defect that has been demonstrated from both practical and theoretical viewpoints.
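The predict-then-compare scheme this thesis uses can be sketched simply: forecast each point from a window of the p previous values, then flag large deviations between forecast and observation. The moving-average predictor below is a hypothetical stand-in for the trained LSTM, and the fixed threshold is an arbitrary choice for illustration:

```python
def detect_anomalies(series, p=4, threshold=1.0):
    """Flag points whose value deviates from a naive p-step forecast.

    Stand-in predictor: the thesis feeds p inputs to a trained LSTM; here the
    forecast is simply the mean of the previous p values (an assumption).
    """
    flags = []
    for t in range(p, len(series)):
        pred = sum(series[t - p:t]) / p          # forecast from the trailing window
        flags.append(abs(series[t] - pred) > threshold)  # large residual -> anomaly
    return flags
```

Swapping the mean for an LSTM forecast, and the fixed threshold for one calibrated on nominal telemetry, recovers the structure of the thesis' detector.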
10

Lin, Alvin. "Video Based Automatic Speech Recognition Using Neural Networks". DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2343.

Abstract:
Neural network approaches have become popular in the field of automatic speech recognition (ASR). Most ASR methods use audio data to classify words. Lip-reading ASR techniques utilize only video data, which compensates for noisy environments where audio may be compromised. A comprehensive approach to video-based ASR is developed, including the vetting of datasets and the development of a preprocessing chain. The approach is based on neural networks, namely 3D convolutional neural networks (3D-CNN) and Long Short-Term Memory (LSTM) networks, which are designed to take in temporal data such as videos. Various combinations of neural network architectures and preprocessing techniques are explored. The best-performing architecture, a CNN with a bidirectional LSTM, compares favorably against recent works on video-based ASR.

Books on the topic "LSTM Neural networks":

1

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. 
Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer. The journ

Book chapters on the topic "LSTM Neural networks":

1

Wüthrich, Mario V., and Michael Merz. "Recurrent Neural Networks". In Springer Actuarial, 381–406. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_8.

Abstract:
This chapter considers recurrent neural (RN) networks. These are special network architectures that are useful for time-series modeling, e.g., applied to time-series forecasting. We study the most popular RN networks, which are the long short-term memory (LSTM) networks and the gated recurrent unit (GRU) networks. We apply these networks to mortality forecasting.
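The LSTM gate structure such chapters review can be sketched as a single numpy forward step (a minimal illustration, not the book's code; the stacked weight layout and names are invented here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b stack the input, forget, output, and cell-candidate parameters."""
    z = W @ x + U @ h_prev + b           # pre-activations, shape (4*d,)
    d = h_prev.size
    i = sigmoid(z[0:d])                  # input gate
    f = sigmoid(z[d:2 * d])              # forget gate
    o = sigmoid(z[2 * d:3 * d])          # output gate
    g = np.tanh(z[3 * d:4 * d])          # cell candidate
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
d, m = 3, 2                              # hidden size, input size (placeholders)
W = rng.normal(size=(4 * d, m))
U = rng.normal(size=(4 * d, d))
b = np.zeros(4 * d)
h, c = lstm_step(rng.normal(size=m), np.zeros(d), np.zeros(d), W, U, b)
print(h.shape)  # (3,)
```

The forget gate `f` multiplying `c_prev` is what lets the cell state carry information across long lags, which is the property the long-term-dependency discussion above refers to.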
2

Salem, Fathi M. "Gated RNN: The Long Short-Term Memory (LSTM) RNN". In Recurrent Neural Networks, 71–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_4.

3

Zhang, Nan, Wei-Long Zheng, Wei Liu, and Bao-Liang Lu. "Continuous Vigilance Estimation Using LSTM Neural Networks". In Neural Information Processing, 530–37. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_59.

4

Alexandre, Luís A., and J. P. Marques de Sá. "Error Entropy Minimization for LSTM Training". In Artificial Neural Networks – ICANN 2006, 244–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_26.

5

Yu, Wen, Xiaoou Li, and Jesus Gonzalez. "Fast Training of Deep LSTM Networks". In Advances in Neural Networks – ISNN 2019, 3–10. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22796-8_1.

6

Klapper-Rybicka, Magdalena, Nicol N. Schraudolph, and Jürgen Schmidhuber. "Unsupervised Learning in LSTM Recurrent Neural Networks". In Artificial Neural Networks — ICANN 2001, 684–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_95.

7

Haralabopoulos, Giannis, and Ioannis Anagnostopoulos. "A Custom State LSTM Cell for Text Classification Tasks". In Engineering Applications of Neural Networks, 489–504. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08223-8_40.

8

Li, SiLiang, Bin Xu, and Tong Lee Chung. "Definition Extraction with LSTM Recurrent Neural Networks". In Lecture Notes in Computer Science, 177–89. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47674-2_16.

9

Gers, Felix A., Douglas Eck, and Jürgen Schmidhuber. "Applying LSTM to Time Series Predictable through Time-Window Approaches". In Artificial Neural Networks — ICANN 2001, 669–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_93.

10

Gers, Felix A., Juan Antonio Pérez-Ortiz, Douglas Eck, and Jürgen Schmidhuber. "Learning Context Sensitive Languages with LSTM Trained with Kalman Filters". In Artificial Neural Networks — ICANN 2002, 655–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_107.


Conference papers on the topic "LSTM Neural networks":

1

Sun, Qingnan, Marko V. Jankovic, Lia Bally, and Stavroula G. Mougiakakou. "Predicting Blood Glucose with an LSTM and Bi-LSTM Based Deep Neural Network". In 2018 14th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2018. http://dx.doi.org/10.1109/neurel.2018.8586990.

2

Arshi, Sahar, Li Zhang, and Rebecca Strachan. "Prediction Using LSTM Networks". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852206.

3

Pérez, José, Rafael Baez, Jose Terrazas, Arturo Rodríguez, Daniel Villanueva, Olac Fuentes, Vinod Kumar, Brandon Paez, and Abdiel Cruz. "Physics-Informed Long-Short Term Memory Neural Network Performance on Holloman High-Speed Test Track Sled Study". In ASME 2022 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/fedsm2022-86953.

Abstract:
Physics-Informed Neural Networks (PINNs) incorporate known physics equations into a network to reduce training time and increase accuracy. Traditional PINN approaches are based on dense networks that do not take into account that simulations are a type of sequential data. Long-Short Term Memory (LSTM) networks are a modified version of Recurrent Neural Networks (RNNs) used to analyze sequential datasets. We propose a Physics-Informed LSTM network that leverages the power of LSTMs for sequential datasets and also incorporates the governing physics equations of 2D incompressible Navier-Stokes flow to analyze the fluid flow around a stationary geometry resembling the water braking mechanism at the Holloman High-Speed Test Track. Currently, the simulation data used to analyze the fluid flow of the braking mechanism are generated through ANSYS, which is costly, taking several days to produce a single simulation. By incorporating physics equations, our proposed Physics-Informed LSTM network was able to predict the last 20% of a simulation given the first 80%, within a small margin of error and in a shorter amount of time than a non-informed LSTM. This demonstrates the potential of physics-informed networks that leverage sequential information for speeding up computational fluid dynamics simulations, and serves as a first step towards adapting PINNs to more advanced network architectures.
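The physics-informed loss idea described in the abstract, a data-fit term plus a penalty on the governing-equation residual, can be sketched with numpy finite differences (an illustration of the general PINN loss, not the authors' implementation; the 2-D incompressibility constraint stands in for the full Navier-Stokes system, and all names are invented):

```python
import numpy as np

def continuity_residual(u, v, dx, dy):
    """Finite-difference residual of the 2-D incompressibility constraint du/dx + dv/dy = 0."""
    du_dx = np.gradient(u, dx, axis=1)   # x varies along axis 1
    dv_dy = np.gradient(v, dy, axis=0)   # y varies along axis 0
    return du_dx + dv_dy

def physics_informed_loss(pred_u, pred_v, true_u, true_v, dx, dy, lam=1.0):
    """Data-fit MSE plus a weighted penalty on the physics residual."""
    data_loss = np.mean((pred_u - true_u) ** 2 + (pred_v - true_v) ** 2)
    phys_loss = np.mean(continuity_residual(pred_u, pred_v, dx, dy) ** 2)
    return data_loss + lam * phys_loss

# A divergence-free field (u = y, v = x) incurs zero physics penalty
y, x = np.mgrid[0:1:11j, 0:1:11j]
loss = physics_informed_loss(y, x, y, x, dx=0.1, dy=0.1)
print(loss)  # 0.0
```

During training, `lam` trades off fitting the simulation data against satisfying the governing equations; the penalty steers the network even where data are sparse.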
4

Lin, Tao, Tian Guo, and Karl Aberer. "Hybrid Neural Networks for Learning the Trend in Time Series". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/316.

Abstract:
The trend of a time series characterizes its intermediate upward and downward behaviour. Learning and forecasting the trend in time series data play an important role in many real applications, ranging from resource allocation in data centers to load scheduling in smart grids. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network to learn local and global contextual features for predicting the trend of a time series. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of the time series. Meanwhile, considering the long-range dependency existing in the sequence of historical trends, TreNet uses a long short-term memory recurrent neural network (LSTM) to capture such dependency. A feature fusion layer then learns a joint representation for predicting the trend. TreNet demonstrates its effectiveness by outperforming CNN, LSTM, the cascade of CNN and LSTM, a Hidden Markov Model based method and various kernel-based baselines on real datasets.
5

Pulver, Andrew, e Siwei Lyu. "LSTM with working memory". In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965940.

6

Martinez-Garcia, Fernando, and Douglas Down. "E-LSTM: An extension to the LSTM architecture for incorporating long lag dependencies". In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892810.

7

Sundermeyer, Martin, Ralf Schlüter, and Hermann Ney. "LSTM neural networks for language modeling". In Interspeech 2012. ISCA: ISCA, 2012. http://dx.doi.org/10.21437/interspeech.2012-65.

8

Srivastava, Anitej, and Anto S. "Weather Prediction Using LSTM Neural Networks". In 2022 IEEE 7th International Conference for Convergence in Technology (I2CT). IEEE, 2022. http://dx.doi.org/10.1109/i2ct54291.2022.9824268.

9

Yang, Dongdong, Senzhang Wang, and Zhoujun Li. "Ensemble Neural Relation Extraction with Adaptive Boosting". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/630.

Abstract:
Relation extraction has been widely studied to extract new relational facts from open corpora. Previous relation extraction methods face the problem of wrong labels and noisy data, which substantially decreases model performance. In this paper, we propose an ensemble neural network model, Adaptive Boosting LSTMs with Attention, to perform relation extraction more effectively. Specifically, our model first employs LSTM recurrent neural networks to embed each sentence. We then add attention to the LSTMs, considering that the words in a sentence do not contribute equally to its semantic meaning. Next, via adaptive boosting, we strategically build several such neural classifiers. By ensembling multiple LSTM classifiers with adaptive boosting, we build a more effective and robust joint ensemble neural-network-based relation extractor. Experimental results on a real dataset demonstrate the superior performance of the proposed model, improving F1-score by about 8% compared to the state-of-the-art models.
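The adaptive-boosting step the abstract describes, re-weighting training examples after each classifier, can be sketched in isolation (a generic binary AdaBoost round in numpy; the variable names and toy labels are illustrative, not from the paper):

```python
import numpy as np

def adaboost_round(sample_weights, y_true, y_pred):
    """One AdaBoost round: weight the classifier by its error, then re-weight the samples."""
    miss = (y_pred != y_true).astype(float)
    err = np.sum(sample_weights * miss) / np.sum(sample_weights)
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))   # classifier vote weight
    new_w = sample_weights * np.exp(alpha * (2 * miss - 1))  # boost misclassified samples
    return alpha, new_w / new_w.sum()

y = np.array([1, 1, -1, -1])
pred = np.array([1, -1, -1, -1])   # one mistake by this round's LSTM-like classifier
w = np.full(4, 0.25)
alpha, w = adaboost_round(w, y, pred)
print(round(alpha, 3))  # 0.549
```

After each round, the misclassified sentences gain weight, so the next classifier in the ensemble concentrates on them; the final extractor votes the classifiers by their `alpha` values.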
10

Qin, Yu, Jiajun Du, Xinyao Wang, and Hongtao Lu. "Recurrent Layer Aggregation using LSTM". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852077.


Organizational reports on the topic "LSTM Neural networks":

1

Cárdenas-Cárdenas, Julián Alonso, Deicy J. Cristiano-Botia, and Nicolás Martínez-Cortés. Colombian inflation forecast using Long Short-Term Memory approach. Banco de la República, June 2023. http://dx.doi.org/10.32468/be.1241.

Abstract:
We use Long Short-Term Memory (LSTM) neural networks, a deep learning technique, to forecast Colombian headline inflation one year ahead through two approaches. The first uses only information from the target variable, while the second incorporates additional information from some relevant variables. We apply rolling samples to the traditional neural network construction process, selecting the hyperparameters with criteria that minimize the forecast error. Our results show a better forecasting capacity for the network with information from additional variables, surpassing both the other LSTM application and ARIMA models optimized for forecasting (with and without explanatory variables). This improvement in forecasting accuracy is most pronounced over longer time horizons, specifically from the seventh month onwards.
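The rolling-sample scheme the abstract mentions can be sketched as plain index bookkeeping (an illustrative rolling-origin split; the 36-month window and 12-month horizon are assumptions, not the paper's exact setup):

```python
def rolling_splits(n_obs, train_size, horizon=12):
    """Rolling-origin splits: a fixed-length training window, forecasting `horizon` steps ahead."""
    splits = []
    start = 0
    while start + train_size + horizon <= n_obs:
        train_idx = list(range(start, start + train_size))
        test_idx = list(range(start + train_size, start + train_size + horizon))
        splits.append((train_idx, test_idx))
        start += 1  # slide the origin forward one month
    return splits

splits = rolling_splits(n_obs=60, train_size=36, horizon=12)
print(len(splits))  # 13
```

Each split retrains the network on the most recent window and evaluates the one-year-ahead forecast, so hyperparameters can be chosen by the error averaged across origins rather than on a single hold-out period.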
2

Ankel, Victoria, Stella Pantopoulou, Matthew Weathered, Darius Lisowski, Anthonie Cilliers, and Alexander Heifetz. One-Step Ahead Prediction of Thermal Mixing Tee Sensors with Long Short Term Memory (LSTM) Neural Networks. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1760289.

