Dissertations / Theses on the topic 'Réseau Long Short-Term Memory (LSTM)'

Consult the top 50 dissertations / theses for your research on the topic 'Réseau Long Short-Term Memory (LSTM).'

1

Javid, Gelareh. "Contribution à l’estimation de charge et à la gestion optimisée d’une batterie Lithium-ion : application au véhicule électrique." Thesis, Mulhouse, 2021. https://www.learning-center.uha.fr/.

Abstract:
The State of Charge (SOC) estimation is a significant issue for the safe performance and lifespan of the Lithium-ion (Li-ion) batteries used to power Electric Vehicles (EVs). In this thesis, the accuracy of SOC estimation is investigated using Deep Recurrent Neural Network (DRNN) algorithms. For a single Li-ion battery cell, three new SOC estimators based on different DRNN algorithms are proposed: a Bidirectional LSTM (BiLSTM) method, a Robust Long Short-Term Memory (RoLSTM) algorithm, and a Gated Recurrent Units (GRU) technique. Using these, one does not depend on precise battery models and can avoid complicated mathematical methods, especially in a battery pack. In addition, these models are able to estimate the SOC precisely at varying temperatures. Also, unlike the traditional recurrent neural network, whose content is rewritten at every time step, these networks can decide to preserve the current memory through the proposed gates. In that case, they can easily transfer information over long paths to receive and maintain long-term dependencies. A comparison of the results indicates that the BiLSTM network performs better than the other two methods. Moreover, the BiLSTM model can work with longer sequences from two directions, the past and the future, without the vanishing-gradient problem. This feature makes it possible to select a sequence length equal to a discharge period in one drive cycle and to obtain higher estimation accuracy. The model also behaved well when given an incorrect initial SOC value. Finally, a new BiLSTM method was introduced to estimate the SOC of a battery pack in an EV. The IPG CarMaker software was used to collect data and test the model in simulation. The results showed that the proposed algorithm can provide a good SOC estimation without using any filter in the Battery Management System (BMS).
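The gating behaviour this abstract contrasts with a plain RNN can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration of one LSTM cell step, not the thesis's BiLSTM estimator; a bidirectional variant would run such a cell over the sequence in both time directions and merge the hidden states.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell.

    The input, forget and output gates decide how much of the previous
    cell memory c_prev is preserved, overwritten, or exposed -- the
    gating that lets the network keep long-term dependencies.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # stacked pre-activations, shape (4n,)
    i = 1 / (1 + np.exp(-z[:n]))        # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))   # output gate
    g = np.tanh(z[3*n:])                # candidate memory
    c = f * c_prev + i * g              # gated memory update
    h = o * np.tanh(c)                  # hidden state
    return h, c

# Toy run with random weights: 2 inputs (e.g. voltage, current), 3 units.
rng = np.random.default_rng(0)
nx, nh = 2, 3
W = rng.normal(size=(4 * nh, nx))
U = rng.normal(size=(4 * nh, nh))
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
for x in rng.normal(size=(5, nx)):      # five illustrative samples
    h, c = lstm_step(x, h, c, W, U, b)
```

In an SOC estimator, the final hidden states would feed a small regression head producing the charge estimate; the dimensions and inputs above are placeholders.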
2

Cifonelli, Antonio. "Probabilistic exponential smoothing for explainable AI in the supply chain domain." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR41.

Abstract:
The key role that AI could play in improving business operations has long been recognised, but the adoption of this new technology has met certain obstacles within companies, in particular implementation costs. On average, 2.8 years elapse between supplier selection and full deployment of a new solution. Three fundamental points must be considered when developing a new model: misalignment of expectations, the need for understanding and explanation, and performance and reliability issues. For models dealing with supply-chain data, five additional domain-specific issues arise:
- Managing uncertainty. Precision is not everything. Decision-makers seek to minimise the risk associated with each decision they must take in the presence of uncertainty. Obtaining an exact forecast is a dream; obtaining a fairly accurate forecast and computing its limits is realistic and sensible.
- Handling positive integer data. Most retail items cannot be sold in sub-units. This simple aspect of selling results in a constraint that any method or model must satisfy: the result must be a positive integer.
- Observability. Customer demand cannot be measured directly; only sales can be recorded and used as a proxy.
- Scarcity and sparsity. Sales are a discontinuous quantity. Recording sales by day condenses an entire year into just 365 points, a large proportion of which will be zero.
- Just-in-time optimisation. Forecasting is a key function, but it is only one element in a chain of processes supporting decision-making. Time is a precious resource that cannot be devoted entirely to a single function.
The decision-making process and associated adaptations must therefore be carried out within a limited time frame, and in a sufficiently flexible manner that they can be interrupted and restarted when necessary to incorporate unexpected events or required adjustments. This thesis fits into this context and is the result of work carried out at the heart of Lokad, a Paris-based software company aiming to bridge the gap between technology and the supply chain. The doctoral research was funded by Lokad in collaboration with the ANRT under a CIFRE contract. The proposed work aims to strike a good compromise between new technologies and business expectations, addressing the various aspects presented above. We began by forecasting with the family of exponential smoothing models, which are easy to implement and extremely fast to run. As they are widely used in industry, they have already won the confidence of users, and they are easy to understand and explain to a non-expert audience. By exploiting more advanced AI techniques, some of the limitations of these models can be overcome. Cross-learning proved to be a relevant approach for extrapolating useful information when the number of available data points was very limited. Since the common Gaussian assumption is unsuitable for discrete sales data, we proposed a model associated with either a Poisson distribution or a Negative Binomial one, which better matches the nature of the phenomena we seek to model and predict. We also proposed Monte Carlo simulations to deal with uncertainty: a number of scenarios are generated, sampled and modelled with a distribution, from which confidence intervals of different, adapted sizes can be deduced. On real company data, we compared our approach with state-of-the-art methods such as DeepAR, DeepSSMs and N-Beats, and derived a new model based on the Holt-Winters method. These models were implemented in Lokad's workflow.
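The pipeline the abstract outlines (exponential smoothing for the point forecast, a negative binomial for integer-valued demand, Monte Carlo sampling for confidence bounds) can be sketched roughly as follows. All parameter values, names, and the dispersion choice are illustrative assumptions, not the thesis's actual models.

```python
import numpy as np

def ses_level(y, alpha=0.3):
    """Simple exponential smoothing: return the final smoothed level."""
    level = y[0]
    for v in y[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

def mc_forecast(y, alpha=0.3, dispersion=2.0, n_sims=10_000, seed=0):
    """Monte Carlo forecast: sample integer demand scenarios from a
    negative binomial whose mean is the smoothed level, then read
    confidence bounds off the empirical distribution."""
    rng = np.random.default_rng(seed)
    mu = max(ses_level(y, alpha), 1e-9)
    r = dispersion                      # NB parametrised by mean and dispersion
    p = r / (r + mu)
    sims = rng.negative_binomial(r, p, size=n_sims)  # nonnegative integers
    lo, hi = np.percentile(sims, [5, 95])
    return mu, int(lo), int(hi)

sales = [0, 2, 1, 0, 3, 2, 0, 1]        # sparse daily sales (a demand proxy)
mu, lo, hi = mc_forecast(sales)
```

The sampled scenarios are integers by construction, which respects the positive-integer constraint the abstract emphasises, and the percentile bounds give the "fairly accurate forecast with limits" rather than a single exact number.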
3

Singh, Akash. "Anomaly Detection for Temporal Data using Long Short-Term Memory (LSTM)." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215723.

Abstract:
We explore the use of Long short-term memory (LSTM) networks for anomaly detection in temporal data. Due to the challenges in obtaining labeled anomaly datasets, an unsupervised approach is employed. We train recurrent neural networks (RNNs) with LSTM units to learn the normal time series patterns and predict future values. The resulting prediction errors are modeled to give anomaly scores. We investigate different ways of maintaining LSTM state, and the effect of using a fixed number of time steps on LSTM prediction and detection performance. LSTMs are also compared to feed-forward neural networks with fixed size time windows over inputs. Our experiments, with three real-world datasets, show that while LSTM RNNs are suitable for general purpose time series modeling and anomaly detection, maintaining LSTM state is crucial for getting desired results. Moreover, LSTMs may not be required at all for simple time series.
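The error-modelling step described above (turning a predictor's errors into anomaly scores) is commonly implemented by fitting a distribution to residuals collected on normal data. A rough sketch under a simple Gaussian assumption, not necessarily the exact scheme of the thesis:

```python
import numpy as np

def fit_error_model(errors):
    """Fit a Gaussian to prediction errors observed on normal data."""
    return errors.mean(), errors.std() + 1e-9

def anomaly_scores(errors, mu, sigma):
    """Score each error by its negative Gaussian log-likelihood
    (up to a constant): larger score = less likely to be normal."""
    return 0.5 * ((errors - mu) / sigma) ** 2 + np.log(sigma)

rng = np.random.default_rng(1)
train_err = rng.normal(0.0, 0.1, size=500)   # residuals on normal data
mu, sigma = fit_error_model(train_err)

test_err = np.array([0.05, -0.02, 1.5])      # last value is anomalous
scores = anomaly_scores(test_err, mu, sigma)
# Flag anything scoring above the 99th percentile of training scores.
threshold = np.percentile(anomaly_scores(train_err, mu, sigma), 99)
flags = scores > threshold
```

In practice the residuals would come from the LSTM's one-step-ahead predictions; here they are synthetic so the sketch is self-contained.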
4

Shojaee, Ali B. S. "Bacteria Growth Modeling using Long-Short-Term-Memory Networks." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617105038908441.

5

Gustafsson, Anton, and Julian Sjödal. "Energy Predictions of Multiple Buildings using Bi-directional Long short-term Memory." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-43552.

Abstract:
Measuring and monitoring the energy consumption of a building is time-consuming. Therefore, a feasible transfer-learning approach is presented to reduce the time needed to collect the required large dataset. The technique applies a bidirectional long short-term memory recurrent neural network using sequence-to-sequence prediction. The idea involves a training phase that extracts information and patterns from a building for which a reasonably sized dataset is available. The validation phase uses a dataset that is not sufficient in size; this dataset was acquired through a related paper, so the results can be validated accordingly. The conducted experiments include four cases involving different strategies in the training and validation phases and different percentages of fine-tuning. Our proposed model generated better scores in terms of prediction performance compared to the related paper.
6

Valluru, Aravind-Deshikh. "Realization of LSTM Based Cognitive Radio Network." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538697/.

Abstract:
This thesis presents the realization of an intelligent cognitive radio network that uses a long short-term memory (LSTM) neural network for sensing and predicting the spectrum activity at each instant of time. The simulation is done using Python and GNU Radio. The implementation is done using GNU Radio and Universal Software Radio Peripherals (USRP). Simulation results show that the confidence factor of opportunistic users not causing interference to licensed users of the spectrum is 98.75%. The implementation results demonstrate the high reliability of the LSTM-based cognitive radio network.
7

Corni, Gabriele. "A study on the applicability of Long Short-Term Memory networks to industrial OCR." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
This thesis summarises a research-oriented study of the applicability of Long Short-Term Memory recurrent neural networks (LSTMs) to industrial Optical Character Recognition (OCR) problems. Traditionally solved through Convolutional Neural Network-based (CNN) approaches, the reported work aims to detect the OCR aspects that could be improved by exploiting recurrent patterns among pixel intensities, and to speed up the overall character-detection process. Accuracy, speed and complexity act as the main key performance indicators. After studying the core deep-learning foundations, first the best training technique for this problem and then the best parametrisation were selected. A set of tests eventually validated the precision of this solution. The final results highlight how difficult it is to outperform CNNs on OCR tasks. Nonetheless, under favourable background conditions, the proposed LSTM-based approach is capable of reaching a comparable accuracy rate in (potentially) less time.
8

van, der Westhuizen Jos. "Biological applications, visualizations, and extensions of the long short-term memory network." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287476.

Abstract:
Sequences are ubiquitous in the domain of biology. One of the current best machine learning techniques for analysing sequences is the long short-term memory (LSTM) network. Owing to significant barriers to adoption in biology, focussed efforts are required to realize the use of LSTMs in practice. Thus, the aim of this work is to improve the state of LSTMs for biology, and we focus on biological tasks pertaining to physiological signals, peripheral neural signals, and molecules. This goal drives the three subplots in this thesis: biological applications, visualizations, and extensions. We start by demonstrating the utility of LSTMs for biological applications. On two new physiological-signal datasets, LSTMs were found to outperform hidden Markov models. LSTM-based models, implemented by other researchers, also constituted the majority of the best performing approaches on publicly available medical datasets. However, even if these models achieve the best performance on such datasets, their adoption will be limited if they fail to indicate when they are likely mistaken. Thus, we demonstrate on medical data that it is straightforward to use LSTMs in a Bayesian framework via dropout, providing model predictions with corresponding uncertainty estimates. Another dataset used to show the utility of LSTMs is a novel collection of peripheral neural signals. Manual labelling of this dataset is prohibitively expensive, and as a remedy, we propose a sequence-to-sequence model regularized by Wasserstein adversarial networks. The results indicate that the proposed model is able to infer which actions a subject performed based on its peripheral neural signals with reasonable accuracy. As these LSTMs achieve state-of-the-art performance on many biological datasets, one of the main concerns for their practical adoption is their interpretability. 
We explore various visualization techniques for LSTMs applied to continuous-valued medical time series and find that learning a mask to optimally delete information in the input provides useful interpretations. Furthermore, we find that the input features looked for by the LSTM align well with medical theory. For many applications, extensions of the LSTM can provide enhanced suitability. One such application is drug discovery -- another important aspect of biology. Deep learning can aid drug discovery by means of generative models, but they often produce invalid molecules due to their complex discrete structures. As a solution, we propose a version of active learning that leverages the sequential nature of the LSTM along with its Bayesian capabilities. This approach enables efficient learning of the grammar that governs the generation of discrete-valued sequences such as molecules. Efficiency is achieved by reducing the search space from one over sequences to one over the set of possible elements at each time step -- a much smaller space. Having demonstrated the suitability of LSTMs for biological applications, we seek a hardware efficient implementation. Given the success of the gated recurrent unit (GRU), which has two gates, a natural question is whether any of the LSTM gates are redundant. Research has shown that the forget gate is one of the most important gates in the LSTM. Hence, we propose a forget-gate-only version of the LSTM -- the JANET -- which outperforms both the LSTM and some of the best contemporary models on benchmark datasets, while also reducing computational cost.
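The forget-gate-only cell proposed at the end of the abstract (the JANET) can be sketched as follows, based on the published description of the model: a single gate interpolates between the previous state and a candidate update. Weights, sizes, and the bias constant here are illustrative placeholders, not the thesis's trained network.

```python
import numpy as np

def janet_step(x, h_prev, W, U, b, beta=1.0):
    """One step of a forget-gate-only LSTM (JANET-style) cell.

    A single forget gate f blends the previous state with the
    candidate; beta is a positive bias encouraging long memory
    (an assumption standing in for the paper's chrono-style init).
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b              # shape (2n,): gate + candidate
    f = 1 / (1 + np.exp(-(z[:n] + beta)))   # forget gate
    g = np.tanh(z[n:])                      # candidate state
    return f * h_prev + (1 - f) * g         # gated interpolation

rng = np.random.default_rng(3)
nx, nh = 2, 4
W = rng.normal(size=(2 * nh, nx))
U = rng.normal(size=(2 * nh, nh))
b = np.zeros(2 * nh)
h = np.zeros(nh)
for x in rng.normal(size=(6, nx)):          # a short toy sequence
    h = janet_step(x, h, W, U, b)
```

Compared with the four-gate LSTM, this halves the recurrent weight matrices, which is the computational saving the abstract refers to.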
9

Nawaz, Sabeen. "Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281282.

Abstract:
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money-laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model that can be used in a fraud-detection system with unlabeled transactions and payments. The proposed model is a Long Short-Term Memory auto-encoder decoder network (LSTM-AED), which is trained and tested on transformed data. The data is transformed by reducing it to its principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a huge loss of information occurs in the pre-processing stages.
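The two preprocessing steps mentioned (principal-component reduction followed by K-means clustering) can be sketched in plain NumPy. The dimensions, component count and cluster count below are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its first k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: return a cluster label per row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))   # 200 synthetic transactions, 10 features
Z = pca_reduce(X, 3)             # compress to 3 principal components
labels = kmeans(Z, 4)            # discretise into 4 clusters
```

The cluster labels (or the reduced components) would then form the sequences fed to the LSTM auto-encoder; note that both steps discard information, which is consistent with the information loss the abstract reports.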
10

Paschou, Michail. "ASIC implementation of LSTM neural network algorithm." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254290.

Abstract:
LSTM neural networks have been used for speech recognition, image recognition and other artificial-intelligence applications for many years. Most applications perform the LSTM algorithm and the required calculations on cloud computers. Off-line solutions include the use of FPGAs and GPUs, but the most promising solutions are ASIC accelerators designed for this purpose alone. This report presents an ASIC design capable of performing multiple iterations of the LSTM algorithm on a unidirectional neural-network architecture without peepholes. The proposed design provides arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel or serial solutions depending on which is optimal in each case; the implications of these decisions are discussed in detail in the report. The design process is described in detail, and an evaluation of the design is presented to measure the accuracy and error of the design output. This thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a Fully Connected layer and a Softmax layer, which can perform classification of data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment has been designed that is capable of running simulations, evaluating the design output by comparing it with the results of performing the same operations with 64-bit floating-point precision in a SystemVerilog testbench, and measuring the encountered error. The results concerning the accuracy and the error margin of the design output are presented in this report. The design went through logic and physical synthesis and successfully resulted in a functional netlist for every tested configuration. Timing, area and power measurements on the generated netlists of various configurations show consistency and are reported.
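The 16-bit fixed-point format described (5 integer and 11 fractional bits) can be illustrated with a small quantisation sketch. This is a generic round-and-saturate scheme, not the thesis's RTL, and it assumes the sign bit is counted among the 5 integer bits, giving a representable range of roughly [-16, 16).

```python
import numpy as np

def to_q5_11(x):
    """Quantise to 16-bit fixed point with 11 fractional bits.

    Values are scaled by 2**11, rounded, and saturated to the
    signed 16-bit range, i.e. about [-16, 16 - 2**-11].
    """
    scaled = np.round(np.asarray(x, dtype=np.float64) * 2**11).astype(np.int64)
    scaled = np.clip(scaled, -2**15, 2**15 - 1)   # saturate to 16 bits
    return scaled.astype(np.int16)

def from_q5_11(q):
    """Recover the real value represented by a Q5.11 integer."""
    return q.astype(np.float64) / 2**11

x = np.array([0.5, -1.25, 3.14159, 15.9, -16.0])  # all within range
q = to_q5_11(x)
err = np.abs(from_q5_11(q) - x)                   # rounding error <= 2**-12
```

For in-range values the worst-case rounding error is half an LSB, i.e. 2**-12 ≈ 0.000244, which motivates the report's use of higher-precision representations in error-sensitive blocks.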
11

Hernandez, Villapol Jorge Luis. "Spectrum Analysis and Prediction Using Long Short Term Memory Neural Networks and Cognitive Radios." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062877/.

Abstract:
Wireless communication is now the standard and de facto type of communication. Cognitive radios are able to interpret the frequency spectrum and adapt. The aim of this work is to predict whether a frequency channel will be busy or free at a specific time in the future. To do this, the problem is modeled as a time-series problem, where the usage of a channel is treated as a sequence of busy and free slots in a fixed time frame. For this time-series problem, the method implemented is one of the latest state-of-the-art techniques in machine learning for time-series and sequence prediction: long short-term memory neural networks, or LSTMs.
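Framing channel occupancy as sequence prediction, as the abstract describes, amounts to slicing the busy/free history into fixed-width windows with a next-slot target. A minimal sketch (the window width and the toy occupancy trace are made up):

```python
def make_windows(slots, width):
    """Turn a busy(1)/free(0) slot sequence into (window, next-slot)
    training pairs for a sequence predictor such as an LSTM."""
    return [(slots[i:i + width], slots[i + width])
            for i in range(len(slots) - width)]

# Toy occupancy trace: the channel alternates busy/free every two slots.
trace = [1, 1, 0, 0] * 5
pairs = make_windows(trace, width=4)
X, y = zip(*pairs)   # inputs and next-slot targets
```

Each pair asks the model the question posed in the abstract: given the recent slot history, will the channel be busy or free next?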
12

Verner, Alexander. "LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data." Diss., NSUWorks, 2019. https://nsuworks.nova.edu/gscis_etd/1074.

Abstract:
In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods of anomaly detections in sensor data are based on domain-specific feature engineering. A typical approach is to use domain knowledge to analyze sensor data and manually create statistics-based features, which are then used to train the machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback due to the fact that feature extraction is usually labor intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective in automated extraction of abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs in short, are a special kind of recurrent neural networks that are capable of learning long-term dependencies. These networks have proved to be especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data of sensors that measure blood glucose levels. A large number of time-series sequences was created based on a genuine medical diabetes dataset. Anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and classes corresponding to each of the six anomaly types. 
As a control, the accuracy of detection and classification of the LSTM was compared to that of four traditional machine learning classifiers: support vector machines, Random Forests, naive Bayes, and shallow neural networks. The performance of all the classifiers was evaluated based on nine metrics: precision, recall, and the F1-score, each measured from the micro, macro and weighted perspective. While the traditional models were trained on vectors of features, derived from the raw data, that were based on knowledge of common sources of anomaly, the LSTM was trained on raw time-series data. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers, achieving 99% accuracy on all nine metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be done in a fully automated way. This study, therefore, finds LSTM networks an effective solution to anomaly detection and classification in sensor data.
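The micro- and macro-averaged metrics used in this evaluation can be illustrated with a short, self-contained sketch (hypothetical code, not from the dissertation): macro-averaging computes the F1-score per class and averages the class scores, while micro-averaging pools the counts across classes, which for single-label classification makes micro-F1 equal to accuracy.

```python
import numpy as np

def f1_scores(y_true, y_pred, n_classes):
    """Per-class, macro- and micro-averaged F1 from integer label sequences."""
    tp = np.zeros(n_classes)
    fp = np.zeros(n_classes)
    fn = np.zeros(n_classes)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1          # p was predicted but wrong
            fn[t] += 1          # t was missed
    prec = np.divide(tp, tp + fp, out=np.zeros(n_classes), where=(tp + fp) > 0)
    rec = np.divide(tp, tp + fn, out=np.zeros(n_classes), where=(tp + fn) > 0)
    f1 = np.divide(2 * prec * rec, prec + rec,
                   out=np.zeros(n_classes), where=(prec + rec) > 0)
    macro = f1.mean()                                 # unweighted class average
    micro_p = tp.sum() / (tp.sum() + fp.sum())        # pooled counts
    micro_r = tp.sum() / (tp.sum() + fn.sum())
    micro = 2 * micro_p * micro_r / (micro_p + micro_r)
    return f1, macro, micro
```

For single-label problems the pooled false positives and false negatives are the same set of misclassified samples, so micro-F1 coincides with plain accuracy.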
13

Racette, Olsén Michael. "Electrocardiographic deviation detection : Using long short-term memory recurrent neural networks to detect deviations within electrocardiographic records." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial neural networks have been gaining attention in recent years due to their impressive ability to map out complex nonlinear relations within data. In this report, an attempt is made to use a Long short-term memory neural network for detecting anomalies within electrocardiographic records. The hypothesis is that if a neural network is trained on records of normal ECGs to predict future ECG sequences, it is expected to have trouble predicting abnormalities not previously seen in the training data. Three different LSTM model configurations were trained using records from the MIT-BIH Arrhythmia database. Afterwards the models were evaluated for their ability to predict previously unseen normal and anomalous sections. This was done by measuring the mean squared error of each prediction and the uncertainty of overlapping predictions. The preliminary results of this study demonstrate that recurrent neural networks with the use of LSTM units are capable of detecting anomalies.
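The detection principle described in this abstract, flagging a section when the prediction error is far above what the model produces on normal ECG data, can be sketched as follows (an illustrative reconstruction with a hypothetical mean-plus-k-standard-deviations threshold, not the thesis code):

```python
import numpy as np

def anomaly_flags(y_true, y_pred, normal_errors, k=3.0):
    """Flag windows whose mean squared prediction error is far above
    what was observed on a held-out set of normal data.

    normal_errors: per-window MSE values measured on normal validation data.
    Threshold = mean + k * std of those errors (k is an assumed choice).
    """
    errors = np.mean((y_true - y_pred) ** 2, axis=-1)   # per-window MSE
    threshold = normal_errors.mean() + k * normal_errors.std()
    return errors > threshold, errors
```

A predictor trained only on normal rhythms should reconstruct normal windows with errors near the validation distribution, while abnormal windows fall above the threshold.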
14

Svanberg, John. "Anomaly detection for non-recurring traffic congestions using Long short-term memory networks (LSTMs)." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this master thesis, we implement a two-step anomaly detection mechanism for non-recurrent traffic congestions with data collected from public transport buses in Stockholm. We investigate the use of machine learning to model time series data with LSTMs and evaluate the results with a baseline prediction model. The anomaly detection algorithm embodies both collective and contextual expressivity, meaning it is capable of finding collections of delayed buses and also takes the temporality of the data into account. Results show that the anomaly detection performance benefits from the lower prediction errors produced by the LSTM network. The intersection rule significantly decreases the number of false positives while maintaining the true positive rate at a sufficient level. The performance of the anomaly detection algorithm has been found to depend on the road segment it is applied to; some segments have been identified as particularly hard, whereas others are easier. The best performing setup of the anomaly detection mechanism had a true positive rate of 84.3 % and a true negative rate of 96.0 %.
In this master's thesis, we implement a two-step anomaly detection algorithm for non-recurring traffic congestions. The data were collected from the public transport buses in Stockholm. We investigate the use of machine learning to model time series data with LSTM networks and then evaluate these results against a baseline model. The anomaly detection algorithm includes both collective and contextual expressivity, meaning that collections of delayed buses can be found and that the temporality of the data is also taken into account. The results show that the performance of the anomaly detection improves with the smaller prediction errors generated by the LSTM network compared to the baseline model. A rule for anomalies based on the intersection of two other rules noticeably reduces the number of false positives while keeping the number of true positives at a sufficiently high level. The performance of the anomaly detection algorithm has been found to depend on which road segment it is applied to, with some road segments being harder and others easier for the anomaly detection. The best variant of the algorithm found 84.3 % of all anomalies, and 96.0 % of all anomaly-free data was marked as normal data.
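The intersection rule described in this entry combines two detectors so that an anomaly is reported only when both agree; a minimal sketch of the idea (illustrative, not the thesis implementation):

```python
import numpy as np

def intersection_rule(flags_a, flags_b):
    """Report an anomaly only when both detectors agree (logical AND).

    A false positive must now be triggered by both rules at once, which
    typically lowers the false-positive rate, while a true anomaly that
    both rules can see is still reported.
    """
    return np.logical_and(flags_a, flags_b)
```

With independent detectors, the AND of the two flag streams keeps the joint true positives and suppresses alerts raised by only one rule.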
15

Кит, М. О. "Математичні методи прогнозування забруднення повітря на основі нейронних мереж." Thesis, ХНУРЕ, 2021. https://openarchive.nure.ua/handle/document/16434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work presents a solution to the problem of predicting air pollution with neural networks. In addition, a software product was created with the Python programming language and the Jupyter Notebook development environment, and a comparative analysis of the corresponding methods and the obtained test results was carried out.
16

Mealey, Thomas C. "Binary Recurrent Unit: Using FPGA Hardware to Accelerate Inference in Long Short-Term Memory Neural Networks." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1524402925375566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Ankaräng, Fredrik, and Fabian Waldner. "Evaluating Random Forest and a Long Short-Term Memory in Classifying a Given Sentence as a Question or Non-Question." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Natural language processing and text classification are topics of much discussion among researchers of machine learning. Contributions in the form of new methods and models are presented on a yearly basis. However, less focus is aimed at comparing models, especially comparing less complex models to state-of-the-art models. This paper compares a Random Forest with a Long Short-Term Memory neural network for the task of classifying sentences as questions or non-questions, without considering punctuation. The models were trained and optimized on chat data from a Swedish insurance company, as well as user comment data on articles from a newspaper. The results showed that the LSTM model performed better than the Random Forest. However, the difference was small, and therefore Random Forest could still be a preferable alternative in some use cases due to its simplicity and its ability to handle noisy data. The models' performances were not dramatically improved after hyperparameter optimization. A literature study was also conducted aimed at exploring how customer service can be automated using a chatbot and what features and functionality should be prioritized by management during such an implementation. The findings of the study showed that a data-driven design should be used, where features are derived based on the specific needs and customers of the organization. However, three features were general enough to be presented: the personality of the bot, its trustworthiness, and the stage of the value chain in which the chatbot is implemented.
Natural language processing and text classification are scientific areas that have received much attention from machine learning researchers. New methods and models are presented yearly, but less focus is placed on comparing models of different character. This thesis compares a Random Forest with a Long Short-Term Memory neural network by examining how well the models classify sentences as questions or non-questions, without considering punctuation. The models were trained and optimized on user data from a Swedish insurance company, as well as comments from news articles. The results showed that the LSTM model performed better than the Random Forest. The difference was small, however, which means that Random Forest may still be a better alternative in some situations thanks to its simplicity. The models' performance did not improve considerably after hyperparameter optimization. A literature study was also conducted with the goal of investigating how customer support tasks can be automated through the introduction of a chatbot, and which functions should be prioritized by management before such an implementation. The results of the study showed that a data-driven approach is preferable, where the functionality is determined by the specific needs of the users and the organization. Three functions were, however, general enough to be presented: the personality of the chatbot, its trustworthiness, and the stage of the value chain in which it is implemented.
18

Pavai, Arumugam Thendramil. "SENSOR-BASED HUMAN ACTIVITY RECOGNITION USING BIDIRECTIONAL LSTM FOR CLOSELY RELATED ACTIVITIES." CSUSB ScholarWorks, 2018. https://scholarworks.lib.csusb.edu/etd/776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recognizing human activities using deep learning methods has significance in many fields such as sports, motion tracking, surveillance, healthcare and robotics. Inertial sensors comprising accelerometers and gyroscopes are commonly used for sensor-based HAR. In this study, a Bidirectional Long Short-Term Memory (BLSTM) approach is explored for human activity recognition and classification of closely related activities on body-worn inertial sensor data provided by the UTD-MHAD dataset. The BLSTM model of this study achieved an overall accuracy of 98.05% for 15 different activities and 90.87% for 27 different activities performed by 8 persons with 4 trials per activity per person. A comparison of this BLSTM model is made with the unidirectional LSTM model. It is observed that there is a significant improvement in the accuracy of recognition of all 27 activities for the BLSTM compared to the LSTM.
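The bidirectional idea behind the BLSTM model, processing the sensor sequence both forwards and backwards and combining the two representations, can be sketched with a plain tanh RNN cell standing in for the LSTM cell (a simplified, hypothetical illustration, not the study's model):

```python
import numpy as np

def simple_rnn_last(x, W, U, b):
    """Final hidden state of a plain tanh RNN (stand-in for an LSTM cell)."""
    h = np.zeros(U.shape[0])
    for x_t in x:                       # x: (timesteps, features)
        h = np.tanh(W @ x_t + U @ h + b)
    return h

def bidirectional_features(x, params_fw, params_bw):
    """Concatenate a forward pass over x with a backward pass over the
    reversed sequence -- the core idea behind a bidirectional layer."""
    h_fw = simple_rnn_last(x, *params_fw)
    h_bw = simple_rnn_last(x[::-1], *params_bw)
    return np.concatenate([h_fw, h_bw])
```

The classifier then sees context from both directions of the motion sequence, which is what helps separate closely related activities.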
19

Cerna, Ñahuis Selene Leya. "Comparative analysis of XGBoost, MLP and LSTM techniques for the problem of predicting fire brigade interventions /." Ilha Solteira, 2019. http://hdl.handle.net/11449/190740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Orientador: Anna Diva Plasencia Lotufo
Abstract: Many environmental, economic and societal factors are leading fire brigades to be increasingly solicited and, as a result, they face an ever-increasing number of interventions, most of the time with constant resources. On the other hand, these interventions are directly related to human activity, which itself is predictable: swimming pool drownings occur in summer while road accidents due to ice storms occur in winter. One solution to improve the response of firefighters with constant resources is therefore to predict their workload, i.e., their number of interventions per hour, based on explanatory variables conditioning human activity. The present work aims to develop three models that are compared to determine if they can predict the firefighters' response load in a reasonable way. The tools chosen are the most representative from their respective categories in Machine Learning: XGBoost, having a decision tree at its core; a classic method such as the Multi-Layer Perceptron; and a more advanced algorithm like Long Short-Term Memory, the latter two with neurons as a base. The entire process is detailed, from data collection to obtaining the predictions. The results obtained prove a prediction of reasonable quality that can be improved by data science techniques such as feature selection and adjustment of hyperparameters.
Many environmental, economic and social factors are leading fire brigades to be increasingly solicited and, as a consequence, they face an ever-growing number of interventions, most of the time with constant resources. On the other hand, these interventions are directly related to human activity, which is predictable: swimming pool drownings occur in the summer, while traffic accidents due to ice storms occur in the winter. One solution to improve the firefighters' response with constant resources is to predict their workload, that is, their number of interventions per hour, based on explanatory variables that condition human activity. The present work aims to develop three models that are compared to determine whether they can predict the firefighters' response load in a reasonable way. The tools chosen are the most representative of their respective categories in Machine Learning: XGBoost, which has a decision tree at its core; a classic method such as the Multi-Layer Perceptron; and a more advanced algorithm such as Long Short-Term Memory, the latter two based on neurons. The entire process is detailed, from data collection to obtaining the predictions. The results obtained demonstrate a prediction of reasonable quality that can be improved by data science techniques such as feature selection and hyperparameter tuning.
Master's
20

Larsson, Joel. "Optimizing text-independent speaker recognition using an LSTM neural network." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper a novel speaker recognition system is introduced. With the advances in computer science, automated speaker recognition has become increasingly popular to aid in crime investigations and authorization processes. Here, a recurrent neural network approach is used to learn to identify ten speakers within a set of 21 audio books. Audio signals are processed via spectral analysis into Mel Frequency Cepstral Coefficients that serve as speaker-specific features, which are input to the neural network. The Long Short-Term Memory algorithm is examined for the first time within this area, with interesting results. Experiments were made to find the optimal network model for the problem. These show that the network learns to identify the speakers well, text-independently, when the recording situation is the same. However, the system has problems recognizing speakers from different recordings, which is probably due to noise sensitivity of the speech processing algorithm in use.
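The feature pipeline this abstract describes, spectral analysis of audio into Mel Frequency Cepstral Coefficients, can be sketched in outline (a simplified, hypothetical implementation; the frame length, hop size, FFT size and filter counts are illustrative defaults, not the thesis settings):

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * inv_mel(pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):              # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):             # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc_like(signal, sr=16000, frame_len=400, hop=160,
              n_fft=512, n_filters=26, n_ceps=13):
    """Frame the signal, take log mel-filterbank energies, then a DCT-II."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2 / n_fft
    logmel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_ceps)[:, None]
    k = np.arange(n_filters)[None, :]
    dct2 = np.cos(np.pi * n * (k + 0.5) / n_filters)   # DCT-II basis
    return logmel @ dct2.T                              # (frames, n_ceps)
```

Each row of the result is one frame's cepstral feature vector, the kind of per-frame input a speaker-recognition network consumes.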
21

Stark, Love. "Outlier detection with ensembled LSTM auto-encoders on PCA transformed financial data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Financial institutions today generate a large amount of data, data that can contain interesting information to investigate to further the economic growth of said institution. There exists an interest in analyzing these points of information, especially if they are anomalous from the normal day-to-day work. However, finding these outliers is not an easy task and is not possible to do manually due to the massive amounts of data being generated daily. Previous work to solve this has explored the usage of machine learning to find outliers in these financial datasets. Previous studies have shown that the pre-processing of data usually accounts for a large part of the information loss. This work aims to study whether there is a proper balance in how the pre-processing is carried out to retain the highest amount of information while simultaneously not letting the data remain too complex for the machine learning models. The dataset used consisted of foreign exchange transactions supplied by the host company and was pre-processed through the use of Principal Component Analysis (PCA). The main purpose of this work is to test whether an ensemble of Long Short-Term Memory Recurrent Neural Networks (LSTM), configured as autoencoders, can be used to detect outliers in the data and whether the ensemble is more accurate than a single LSTM autoencoder. Previous studies have shown that ensemble autoencoders can prove more accurate than a single autoencoder, especially when SkipCells have been implemented (a configuration that skips over LSTM cells to make the model perform with more variation). A datapoint is considered an outlier if the LSTM model has trouble properly recreating it, i.e. a pattern that is hard to classify, making it available for further investigations done manually. The results show that the ensembled LSTM model proved to be more accurate than a single LSTM model with regard to reconstructing the dataset and, by our definition of an outlier, more accurate in outlier detection.
The results from the pre-processing experiments reveal different methods of obtaining an optimal number of components for the data. One is to study the retained variance and accuracy of the PCA transformation against model performance for a given number of components. One of the conclusions from the work is that ensembled LSTM networks can prove very powerful, but that alternatives to this pre-processing, such as categorical embedding instead of PCA, should be explored.
Financial institutions today generate a large amount of data, data that may contain interesting information worth investigating to promote the economic growth of the institution in question. There is an interest in analysing these points of information, especially if they deviate from the normal day-to-day work. Detecting these anomalies is, however, not an easy task and not possible to do manually because of the large amounts of data generated daily. Previous work on this problem has investigated the use of machine learning to detect anomalies in financial data. Previous studies have shown that the pre-processing of the data usually accounts for a large part of the loss of information from the data. This work aims to study whether there is a proper balance in how the pre-processing is carried out to retain the greatest amount of information while the data does not remain too complex for the machine learning models. The dataset used consisted of foreign exchange transactions provided by the host company and was pre-processed using Principal Component Analysis (PCA). The main purpose of this work is to investigate whether an ensemble of Long Short-Term Memory Recurrent Neural Networks (LSTM), configured as autoencoders, can be used to detect anomalies in the data, and whether the ensemble is more precise in its predictions than a single LSTM autoencoder. Previous studies have shown that an ensemble of autoencoders can prove more precise than a single autoencoder, especially when SkipCells have been implemented (a configuration that skips over some of the LSTM cells to make the models more varied). A data point is considered an anomaly if the LSTM model has trouble recreating it well, i.e. a pattern that the network finds hard to reconstruct, which makes the data point available for further investigation.
The results show that an ensemble of LSTM models predicted more precisely than a single LSTM model when it comes to recreating the dataset and, by our definition of an anomaly, gave more precise anomaly detection. The results from the pre-processing show different methods of reaching an optimal number of components for the data by studying the retained variance and precision of the PCA transformation compared with model performance. One of the conclusions of the work is that an ensemble of LSTM networks can prove very powerful, but that alternatives to the pre-processing, such as categorical embedding instead of PCA, should be investigated.
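The component-selection idea discussed in this entry, studying the retained variance of the PCA transformation to pick the number of components, can be sketched as follows (illustrative code, not the thesis implementation; the 95% target is an assumed example value):

```python
import numpy as np

def n_components_for_variance(X, target=0.95):
    """Smallest number of principal components whose cumulative explained
    variance ratio reaches the target."""
    Xc = X - X.mean(axis=0)                        # center the data
    s = np.linalg.svd(Xc, compute_uv=False)        # singular values
    var = s ** 2                                   # component variances (scaled)
    cum = np.cumsum(var) / var.sum()               # cumulative variance ratio
    return int(np.searchsorted(cum, target) + 1)
```

Plotting model performance against this retained-variance curve is one way to balance information loss against input complexity, as the abstract suggests.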
22

Holm, Noah, and Emil Plynning. "Spatio-temporal prediction of residential burglaries using convolutional LSTM neural networks." Thesis, KTH, Geoinformatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The low number of solved residential burglary crimes calls for new and innovative methods in the prevention and investigation of the cases. There were 22 600 reported residential burglaries in Sweden in 2017, but only four to five percent of these will ever be solved. There are many initiatives in both Sweden and abroad for decreasing the number of residential burglaries, and one of the areas being tested is the use of prediction methods for more efficient preventive actions. This thesis is an investigation of a potential method of prediction that uses neural networks to identify areas with a higher risk of burglaries on a daily basis. The model uses reported burglaries to learn patterns in both space and time. The rationale for the existence of patterns is based on near-repeat theories in criminology, which state that after a burglary, both the burgled victim and an area around that victim have an increased risk of additional burglaries. The work has been conducted in cooperation with the Swedish Police authority. The machine learning is implemented with convolutional long short-term memory (LSTM) neural networks with max pooling in three dimensions that learn from ten years of residential burglary data (2007-2016) in a study area in Stockholm, Sweden. The model's accuracy is measured by performing predictions of burglaries during 2017 on a daily basis. It classifies cells in a 36x36 grid with 600-meter square grid cells as areas with elevated risk or not. By classifying 4% of all grid cells during the year as risk areas, 43% of all burglaries are correctly predicted. The performance of the model could potentially be improved by further configuration of the parameters of the neural network, along with the use of more data with factors that are correlated to burglaries, for instance weather. Consequently, further work in these areas could increase the accuracy.
The conclusion is that neural networks or machine learning in general could be a powerful and innovative tool for the Swedish Police authority to predict and moreover prevent certain crime. This thesis serves as a first prototype of how such a system could be implemented and used.
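The evaluation style reported in this entry, flagging the top 4% of grid cells by predicted risk and measuring the share of burglaries they capture, corresponds to a simple hit-rate metric (an illustrative sketch, not the thesis code):

```python
import numpy as np

def hit_rate(risk_scores, crime_counts, fraction=0.04):
    """Fraction of crimes falling inside the top `fraction` of grid cells
    ranked by predicted risk (the 4%-flagged / 43%-captured style metric)."""
    k = max(1, int(round(fraction * risk_scores.size)))
    top = np.argsort(risk_scores.ravel())[::-1][:k]    # highest-risk cells
    return crime_counts.ravel()[top].sum() / crime_counts.sum()
```

Sweeping `fraction` traces out how much patrol coverage is needed to capture a given share of incidents.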
23

Capshaw, Riley. "Relation Classification using Semantically-Enhanced Syntactic Dependency Paths : Combining Semantic and Syntactic Dependencies for Relation Classification using Long Short-Term Memory Networks." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Many approaches to solving tasks in the field of Natural Language Processing (NLP) use syntactic dependency trees (SDTs) as a feature to represent the latent nonlinear structure within sentences. Recently, work in parsing sentences to graph-based structures which encode semantic relationships between words—called semantic dependency graphs (SDGs)—has gained interest. This thesis seeks to explore the use of SDGs in place of and alongside SDTs within a relation classification system based on long short-term memory (LSTM) neural networks. Two methods for handling the information in these graphs are presented and compared between two SDG formalisms. Three new relation extraction system architectures have been created based on these methods and are compared to a recent state-of-the-art LSTM-based system, showing comparable results when semantic dependencies are used to enhance syntactic dependencies, but with significantly fewer training parameters.
24

Díaz, González Fernando. "Federated Learning for Time Series Forecasting Using LSTM Networks: Exploiting Similarities Through Clustering." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Federated learning poses a statistical challenge when training on highly heterogeneous sequence data. For example, time-series telecom data collected over long intervals regularly shows mixed fluctuations and patterns. These distinct distributions are an inconvenience when a node not only plans to contribute to the creation of the global model but also plans to apply it on its local dataset. In this scenario, adopting a one-fits-all approach might be inadequate, even when using state-of-the-art machine learning techniques for time series forecasting, such as Long Short-Term Memory (LSTM) networks, which have proven to be able to capture many idiosyncrasies and generalise to new patterns. In this work, we show that clustering the clients by these patterns and selectively aggregating their updates into different global models can improve local performance with minimal overhead, as we demonstrate through experiments using real-world time series datasets and a basic LSTM model.
Federated learning poses a statistical challenge when training on highly heterogeneous sequence data. For example, time-series data in the telecom domain exhibits mixed variations and patterns over longer time intervals. These distinct distributions pose a challenge when a node is not only to contribute to the creation of a global model but also intends to apply that model to its local dataset. Introducing a one-fits-all global model in this scenario may prove insufficient, even if we use the most successful machine learning models for time series forecasting, Long Short-Term Memory (LSTM) networks, which have been shown to capture complex patterns and generalise well to new ones. In this work, we show that by clustering the clients using these patterns and selectively aggregating their updates into different global models, we can achieve improvements in local performance with minimal overhead, which we demonstrate through experiments with real-world time series data and a basic LSTM model.
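The aggregation scheme this entry describes, clustering clients and averaging updates within each cluster rather than into one global model, can be sketched as follows (a minimal, hypothetical illustration of per-cluster federated averaging; equal client weighting is an assumed simplification):

```python
import numpy as np

def clustered_fedavg(client_weights, cluster_ids):
    """Average client weight vectors within each cluster, yielding one
    global model per cluster instead of a single one-fits-all model."""
    models = {}
    for cid in np.unique(cluster_ids):
        members = [w for w, c in zip(client_weights, cluster_ids) if c == cid]
        models[cid] = np.mean(members, axis=0)      # per-cluster average
    return models
```

Each client then applies the model of its own cluster locally, which is what limits the damage from heterogeneous data distributions.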
25

Fors, Johansson Christoffer. "Arrival Time Predictions for Buses using Recurrent Neural Networks." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, two different types of bus passengers are identified. These two types, namely current passengers and passengers-to-be, have different needs in terms of arrival time predictions. A set of machine learning models based on recurrent neural networks and long short-term memory units was developed to meet these needs. Furthermore, bus data from the public transport in Östergötland county, Sweden, were collected and used for training new machine learning models. These new models are compared with the current prediction system that is used today to provide passengers with arrival time information. The models proposed in this thesis use a sequence of time steps as input and the observed arrival time as output. Each input time step contains information about the current state, such as the time of arrival, the departure time from the very first stop, and the current position in Cartesian coordinates. The target value for each input is the arrival time at the next time step. To predict the rest of the trip, the prediction for the next step is simply used as input in the next time step. The result shows that the proposed models can improve the mean absolute error per stop by between 7.2% and 40.9% compared to the system used today on all eight routes tested. Furthermore, the choice of loss function introduces models that can meet the identified passenger needs by trading average prediction accuracy for a certainty that predictions do not overestimate or underestimate the target time in approximately 95% of the cases.
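The recursive prediction scheme in this entry, feeding each predicted time step back in as input to predict the rest of the trip, can be sketched with a placeholder one-step model (illustrative only; the real model is the thesis's LSTM):

```python
import numpy as np

def rollout(step_model, history, n_steps):
    """Predict n_steps ahead by feeding each prediction back in as the
    newest observation, sliding a fixed-length input window forward."""
    window = list(history)
    preds = []
    for _ in range(n_steps):
        nxt = step_model(np.array(window))   # one-step-ahead prediction
        preds.append(nxt)
        window = window[1:] + [nxt]          # slide the input window
    return preds
```

Any one-step predictor with the right input shape can be plugged in as `step_model`; prediction errors compound along the rollout, which is why the per-stop error metric matters.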
26

Forslund, John, and Jesper Fahlén. "Predicting customer purchase behavior within Telecom : How Artificial Intelligence can be collaborated into marketing efforts." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279575.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study aims to investigate the implementation of an AI model that predicts customer purchases in the telecom industry. The thesis also outlines how such an AI model can assist decision-making in marketing strategies. It is concluded that designing the AI model following a Recurrent Neural Network (RNN) architecture with a Long Short-Term Memory (LSTM) layer allows for a successful implementation with satisfactory model performance. Stepwise instructions to construct such a model are presented in the methodology section of the study. The RNN-LSTM model further serves as an assisting tool for marketers to assess, in a quantitative way, how a consumer's website behavior affects their purchase behavior over time, by observing what the authors refer to as the Customer Purchase Propensity Journey (CPPJ). The firm empirical basis of the CPPJ can help organizations improve their allocation of marketing resources, as well as benefit the organization's online presence by allowing for personalization of the customer experience.
This study investigates the implementation of an AI model that predicts customer purchases in the telecom industry. The study also aims to show how such an AI model can support decision-making in marketing strategies. By designing the AI model with a Recurrent Neural Network (RNN) architecture with a Long Short-Term Memory (LSTM) layer, the study concludes that such a design enables a successful implementation with satisfactory model performance. Step-by-step instructions for constructing the model are provided in the methodology section of the study. The RNN-LSTM model can advantageously be used as an assisting tool for marketers to assess, in a quantitative way, how a customer's behavioural patterns on a website affect their purchase behaviour over time, by observing the framework the authors call the Customer Purchase Propensity Journey (CPPJ). The empirical foundation of the CPPJ can help organizations improve the allocation of marketing resources, and benefit their digital presence by enabling more relevant personalization of the customer experience.
27

Andersson, Aron, and Shabnam Mirkhani. "Portfolio Performance Optimization Using Multivariate Time Series Volatilities Processed With Deep Layering LSTM Neurons and Markowitz." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The stock market is a non-linear field, but many of the best-known portfolio optimization algorithms are based on linear models. In recent years, the rapid development of machine learning has produced flexible models capable of complex pattern recognition. In this paper, we propose two different methods of portfolio optimization: one based on the development of a multivariate time-dependent neural network, the long short-term memory (LSTM), capable of finding long- and short-term price trends; the other is the linear Markowitz model, where we add an exponential moving average to the input price data to capture underlying trends. The input data to our neural network are daily prices, volumes and market indicators such as the volatility index (VIX). The output variables are the prices predicted for each asset the following day, which are then further processed to produce metrics such as expected returns, volatilities and prediction error to design a portfolio allocation that optimizes a custom utility function like the Sharpe ratio. The LSTM model produced a portfolio with a return and risk that was close to the actual market conditions for the date in question, but with a high error value, indicating that our LSTM model is insufficient as a sole forecasting tool. However, the ability to predict upward and downward trends was somewhat better than expected, and therefore we conclude that multiple neural networks can be used as indicators, each responsible for some specific aspect of what is to be analysed, to draw a conclusion from the result. The findings also suggest that the input data should be more thoroughly considered, as the prediction accuracy is enhanced by the choice of variables and the external information used for training.
The stock market is a non-linear market, but many of the best-known portfolio optimization algorithms are based on linear models. In recent years, the rapid development of machine learning has created flexible models that can extract information from complex patterns. In this thesis we propose two ways to optimize a portfolio: one in which a neural network is developed for multivariate time series, and another in which we use the linear Markowitz model, to which we also apply an exponential moving average on the price data. The input data to our neural network are the daily closing prices, volumes and market indicators such as the volatility index VIX. The output variables are the predicted prices for the next day, which are then further processed to produce metrics such as expected return, volatility and Sharpe ratio. The LSTM model produces a portfolio with a return and risk closer to the actual market conditions, but the result gave a high error value, which shows that our LSTM model is insufficient for use as a sole prediction tool. That said, it still predicted trends better than we assumed it would. Our conclusion is therefore that several neural networks should be used as indicators, each responsible for a specific aspect to be analysed, and a conclusion drawn based on these. Our results also suggest that the input data should be considered more carefully, since the prediction accuracy is enhanced by the choice of variables.
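The Markowitz variant above smooths the input price data with an exponential moving average before optimization. A minimal sketch of that preprocessing step (hypothetical parameter names, not the authors' code):

```python
def ema(prices, alpha=0.2):
    """Exponentially weighted moving average of a price series.

    alpha is the smoothing factor: higher alpha weights recent
    prices more heavily; alpha=1 reproduces the raw series.
    """
    smoothed = [prices[0]]  # seed with the first observation
    for p in prices[1:]:
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return smoothed

trend = ema([100.0, 102.0, 101.0, 105.0], alpha=0.5)
```

Feeding the smoothed series instead of raw prices into the linear model is one way to expose underlying trends to an otherwise trend-blind optimizer.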
28

Chowdhury, Muhammad Iqbal Hasan. "Question-answering on image/video content." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/205096/1/Muhammad%20Iqbal%20Hasan_Chowdhury_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis explores a computer's ability to understand multimodal data, where the correspondence between image/video content and natural language text is utilised to answer open-ended natural language questions through question-answering tasks. Static image data consisting of both indoor and outdoor scenes, where complex textual questions are arbitrarily posed to a machine to generate correct answers, was examined. Dynamic videos consisting of both single-camera and multi-camera settings were also considered for the exploration of more challenging and unconstrained question-answering tasks. In exploring these challenges, new deep learning processes were developed to improve a computer's ability to understand and consider multimodal data.
29

Talár, Ondřej. "Redukce šumu audionahrávek pomocí hlubokých neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-317118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The thesis focuses on the use of a deep recurrent neural network, the Long Short-Term Memory architecture, for robust denoising of audio signals. LSTM is currently very attractive due to its ability to remember previous weights, or to update them not only according to the algorithms used but also by examining changes in neighbouring cells. The work describes the selection of the initial dataset and the noise used, along with the creation of optimal test data. The Keras framework for Python is selected for building the training network, and possible candidates for a viable solution are explored and discussed.
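The gating behaviour the abstract alludes to, an LSTM deciding whether to keep its current memory or overwrite it, can be illustrated with a single scalar cell step in plain Python (a didactic sketch with made-up weights, unrelated to the Keras models used in the thesis):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One forward step of a scalar LSTM cell.

    W maps each gate to (input weight, recurrent weight, bias):
    f = forget gate, i = input gate, g = candidate, o = output gate.
    """
    f = sigmoid(W['f'][0] * x + W['f'][1] * h_prev + W['f'][2])
    i = sigmoid(W['i'][0] * x + W['i'][1] * h_prev + W['i'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h_prev + W['g'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h_prev + W['o'][2])
    c = f * c_prev + i * g      # memory kept vs. rewritten
    h = o * math.tanh(c)        # exposed hidden state
    return h, c

# With a saturated forget gate the old memory survives unchanged,
# regardless of the new input x:
W = {'f': (0.0, 0.0, 100.0),   # f ~ 1: keep everything
     'i': (0.0, 0.0, -100.0),  # i ~ 0: admit nothing new
     'g': (1.0, 0.0, 0.0),
     'o': (0.0, 0.0, 100.0)}   # o ~ 1: expose the state
h, c = lstm_step(x=5.0, h_prev=0.0, c_prev=0.3, W=W)
```

This is exactly what distinguishes LSTM from a plain recurrent unit, whose state is rewritten at every step.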
30

Jansson, Anton. "Predicting trajectories of golf balls using recurrent neural networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is concerned with the problem of predicting the remaining part of the trajectory of a golf ball as it travels through the air, where only the three-dimensional position of the ball is captured. The approach taken to solve this problem relied on recurrent neural networks in the form of long short-term memory (LSTM) networks. The motivation behind this choice was that this type of network has led to state-of-the-art performance on similar problems, such as predicting the trajectories of pedestrians. The results show that using LSTMs led to an average reduction of 36.6 % in the error of the predicted impact position of the ball, compared to previous methods based on numerical simulations of a physical model, when the model was evaluated on the same driving range it was trained on. Evaluating the model on a different driving range than it was trained on led to improvements in general, but not for all driving ranges, in particular when the ball was captured at a different frequency compared to the data the model was trained on. This problem was solved to some extent by retraining the model with small amounts of data from the new driving range.
This thesis studied the problem of predicting the complete trajectory of a golf ball in flight, where only the three-dimensional position of the ball was observed. The method used to solve the problem relied on recurrent neural networks, in the form of long short-term memory (LSTM) networks. The motivation behind this was that this type of network had produced good results for similar problems. The results show that using LSTM networks leads on average to a 36.6 % reduction of the error in the predicted impact position of the ball, compared to earlier methods based on numerical simulations of a physical model, when the model is used on the same driving range it was trained on. Using a model trained on a different driving range leads to improvements in general, but not when the model is used on a driving range where the ball was captured at a different frequency. This problem was solved to some extent by retraining the model with a small amount of data from the new driving range.
31

Sibelius, Parmbäck Sebastian. "HMMs and LSTMs for On-line Gesture Recognition on the Stylaero Board : Evaluating and Comparing Two Methods." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, methods of implementing an online gesture recognition system for the novel Stylaero Board device are investigated. Two methods are evaluated - one based on LSTMs and one based on HMMs - on three kinds of gestures: Tap, circle, and flick motions. A method’s performance was measured in its accuracy in determining both whether any of the above listed gestures were performed and, if so, which gesture, in an online single-pass scenario. Insight was acquired regarding the technical challenges and possible solutions to the online aspect of the problem. Poor performance was, however, observed in both methods, with a likely culprit identified as low quality of training data, due to an arduous and complex gesture performance capturing process. Further research improving on the process of gathering data is suggested.
32

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The main objective of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short-Term Memory) neural networks, can be used to predict the future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We also try to apply the same principles to predict the values of a time series produced from the sales data of a cosmetic product over a period of three years. Before reaching the practical part of this thesis, it is necessary to introduce some fundamental concepts needed to develop the architecture and the code of our model. Both in the theoretical introduction and in the practical part, the focus is on RNNs (Recurrent Neural Networks), since they are the neural networks best suited to this type of problem. A particular type of RNN, called Long Short-Term Memory (LSTM), is the main subject of this thesis, and one of its variants, called the Gated Recurrent Unit (GRU), is also presented and used. In conclusion, this thesis confirms that LSTM and GRU are the best types of neural network for time series forecasting. In the last part we analyse the differences between using a CPU and a GPU during the training phase of the neural network.
33

Norgren, Eric. "Pulse Repetition Interval Modulation Classification using Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radar signals are used for estimating location, speed and direction of an object. Some radars emit pulses, while others emit a continuous wave. Both types of radars emit signals according to some pattern; a pulse radar, for example, emits pulses with a specific time interval between pulses. This time interval may either be stable, change linearly, or follow some other pattern. The interval between two emitted pulses is often referred to as the pulse repetition interval (PRI), and the pattern that defines the PRI is often referred to as the modulation. Classifying which PRI modulation is used in a radar signal is a crucial component for the task of identifying who is emitting the signal. Incorrectly classifying the used modulation can lead to an incorrect guess of the identity of the agent emitting the signal, and can as a consequence be fatal. This work investigates how a long short-term memory (LSTM) neural network performs compared to a state of the art feature extraction neural network (FE-MLP) approach for the task of classifying PRI modulation. The results indicate that the proposed LSTM model performs consistently better than the FE-MLP approach across all tested noise levels. The downside of the proposed LSTM model is that it is significantly more complex than the FE-MLP approach. Future work could investigate if the LSTM model is too complex to use in a real world setting where computing power may be limited. Additionally, the LSTM model can, in a trivial manner, be modified to support more modulations than those tested in this work. Hence, future work could also evaluate how the proposed LSTM model performs when support for more modulations is added.
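The PRI patterns described above (a stable interval, a linearly changing interval, or some other pattern) can be simulated to build labelled training sequences for a classifier. A minimal sketch, with hypothetical modulation names and units, not the thesis's data pipeline:

```python
def make_pri_sequence(modulation, n_pulses=8, base=100.0, slope=2.0):
    """Return a list of inter-pulse intervals (arbitrary time units).

    'stable'  -> constant PRI between consecutive pulses
    'sliding' -> PRI changing linearly from pulse to pulse
    """
    if modulation == 'stable':
        return [base] * n_pulses
    if modulation == 'sliding':
        return [base + slope * k for k in range(n_pulses)]
    raise ValueError(f'unknown modulation: {modulation}')

stable = make_pri_sequence('stable', n_pulses=4)
sliding = make_pri_sequence('sliding', n_pulses=4)
```

Such synthetic sequences, with noise added at controlled levels, are a common way to evaluate classifiers across the noise conditions the abstract mentions.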
Radar signals are used to estimate the position, speed and direction of objects. Some radars emit signals in the form of pulses, while others emit a continuous wave. Both types of radar emit signals according to some pattern; for example, a pulse radar emits pulses with a specific time interval between them. This time interval can be constant, change linearly, or follow some other pattern. The interval between two pulses is often called the pulse repetition interval (PRI), and the pattern that defines the PRI is often called the modulation. Classifying which PRI modulation a radar signal uses is an important step in the process of identifying who emitted the signal. Incorrect classification of the modulation used can lead to an incorrect guess of the identity of the agent that emitted the signal, which can have fatal consequences. This work examines how well the proposed neural network consisting of a long short-term memory (LSTM) can classify PRI modulation compared with a state-of-the-art model that uses specially selected computed features from the data and classifies these features with a neural network. The results indicate that the LSTM model consistently classifies with higher accuracy than the feature-based model, across all tested noise levels. The downside of the LSTM model is that it is more complex than the feature-based model. Future work could investigate whether the LSTM model is too complex to use in a real-world scenario where computing power may be limited. Future work could also evaluate how well the LSTM model classifies PRI modulations when support is added for more modulations than those tested in this work, since support for additional PRI modulations can be added to the LSTM model in a trivial manner.
34

Keisala, Simon. "Using a Character-Based Language Model for Caption Generation." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Using AI to automatically describe images is a challenging task. The aim of this study has been to compare the use of character-based language models with one of the current state-of-the-art token-based language models, im2txt, to generate image captions, with focus on morphological correctness. Previous work has shown that character-based language models are able to outperform token-based language models in morphologically rich languages. Other studies show that simple multi-layered LSTM-blocks are able to learn to replicate the syntax of its training data. To study the usability of character-based language models an alternative model based on TensorFlow im2txt has been created. The model changes the token-generation architecture into handling character-sized tokens instead of word-sized tokens. The results suggest that a character-based language model could outperform the current token-based language models, although due to time and computing power constraints this study fails to draw a clear conclusion. A problem with one of the methods, subsampling, is discussed. When using the original method on character-sized tokens this method removes characters (including special characters) instead of full words. To solve this issue, a two-phase approach is suggested, where training data first is separated into word-sized tokens where subsampling is performed. The remaining tokens are then separated into character-sized tokens. Future work where the modified subsampling and fine-tuning of the hyperparameters are performed is suggested to gain a clearer conclusion of the performance of character-based language models.
35

Zhang, Jiahui. "Bi-Objective Dispatch of Multi-Energy Virtual Power Plant: Deep-Learning based Prediction and Particle Swarm Optimization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper addresses the coordinated operation problem of a multi-energy virtual power plant (ME-VPP) in the context of the energy internet. A bi-objective dispatch model is established to optimize the performance of the ME-VPP in terms of both economic cost (EC) and power quality (PQ). Various realistic factors are considered, including environmental governance, transmission ratings and output limits. Long short-term memory (LSTM), a deep learning method, is applied to improve the accuracy of wind prediction. An improved multi-objective particle swarm optimization (MOPSO) is utilized as the solving algorithm. A practical case study is performed on Hongfeng Eco-town in Southwestern China. Simulation results of three scenarios verify the advantages of bi-objective optimization over solely saving EC or enhancing PQ. The Pareto frontier also provides a visible and flexible way to support the decision-making of the ME-VPP operator. Two strategies, "improvisational" and "foresighted", are compared by testing on the IEEE 118-bus benchmark system. It is revealed that the "foresighted" strategy, which incorporates LSTM prediction and bi-objective optimization over a 5-hour receding horizon, obtains 10 Pareto dominances in 24 hours.
36

Ridhagen, Markus, and Petter Lind. "A comparative study of Neural Network Forecasting models on the M4 competition data." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The development of machine learning research has provided statistical innovations and further developments within the field of time series analysis. This study investigates two artificial neural network models based on different learning techniques, answering how well the neural network approach compares with a basic autoregressive approach, as well as how the artificial neural network models compare to each other. The models were compared and analyzed with regard to univariate forecast accuracy on 20 randomly drawn time series from two different time frequencies in the M4 competition dataset. Forecasting was made dependent on one time lag (t-1) and forecasted three and six steps ahead respectively. The artificial neural network models outperformed the baseline autoregressive model, showing notably lower mean absolute percentage error overall. The multilayer perceptron models performed better than the long short-term memory model overall, whereas the long short-term memory model showed improvement on longer prediction horizons. As the training was done univariately on a limited set of time steps, it is believed that the one-layered approach gave a good enough approximation of the data, whereas the added layer could not fully utilize its additional processing power. Likewise, the long short-term memory model could not fully demonstrate the advantages of recurrent learning. Using the same dataset, further studies could be made with another approach to data processing. By implementing an unsupervised approach of clustering the data before analysis, the same models could be tested with multivariate analysis on models trained on multiple time series simultaneously.
37

Mohammadisohrabi, Ali. "Design and implementation of a Recurrent Neural Network for Remaining Useful Life prediction." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A key idea underlying many predictive maintenance solutions is the Remaining Useful Life (RUL) of machine parts: a prediction of the time remaining before a machine part is likely to require repair or replacement. Nowadays, as systems grow more complex, innovative machine learning and deep learning algorithms can be deployed to study the more sophisticated correlations in complex systems. The exponential increase in both data accumulation and processing power makes deep learning algorithms more desirable than before. In this thesis, a Long Short-Term Memory (LSTM) network, a type of recurrent neural network, is designed to predict the Remaining Useful Life (RUL) of turbofan engines. The dataset is taken from the NASA data repository. Finally, the performance obtained by the RNN is compared to the best machine learning algorithm for the dataset.
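Training an LSTM on run-to-failure data of this kind typically means slicing each engine's sensor history into fixed-length windows, each labelled with the cycles remaining at the window's end. A sketch of that preprocessing step (an assumed layout for illustration, not the NASA dataset's actual fields):

```python
def make_rul_windows(sensor_series, window=3):
    """Slice one run-to-failure series into (window, RUL) pairs.

    sensor_series: sensor readings ordered in time, ending at the
    failure cycle. The RUL label is the number of cycles remaining
    after the last reading in the window (0 at failure).
    """
    n = len(sensor_series)
    samples = []
    for end in range(window, n + 1):
        x = sensor_series[end - window:end]  # the input window
        rul = n - end                        # cycles left after it
        samples.append((x, rul))
    return samples

pairs = make_rul_windows([0.1, 0.2, 0.4, 0.7, 1.0], window=3)
```

Each pair then becomes one training example: the window is the LSTM's input sequence and the RUL value its regression target.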
38

Sasse, Jonathan Patrick. "Distinguishing Behavior from Highly Variable Neural Recordings Using Machine Learning." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1522755406249275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dobiš, Lukáš. "Detekce osob a hodnocení jejich pohlaví a věku v obrazových datech." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This master's thesis deals with the automatic recognition of people in image data, using convolutional neural networks to locate faces and subsequently analyse the obtained data. The face analysis determines the person's gender, emotion and age. The thesis describes the convolutional network architectures used for each subtask. The age-estimation network is trained with new weights, which are then frozen, and LSTM layers are inserted into its architecture. These layers are trained separately and tested on a new dataset created for this purpose. The test results show an improvement in age prediction. A solution for fast, robust and modular detection of faces and other human features from a single image or video is presented as a combination of interconnected convolutional networks. These are implemented as a script and subsequently explained. Their speed is sufficient for further additional face analysis on live image data.
40

Dametto, Ronaldo César. "Estudo da aplicação de redes neurais artificiais para predição de séries temporais financeiras." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/157058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Machine learning has been used in different segments of the financial area, such as stock price forecasting, the foreign exchange market, market indices and investment portfolio composition. This work seeks to compare and combine three types of machine learning algorithms, more specifically, the Ensemble method of artificial neural networks with Multilayer Perceptron (MLP), auto-regressive with exogenous inputs (NARX) and Long Short-Term Memory (LSTM) networks, for prediction of the Bovespa Index. The Ibovespa series sample was obtained from Yahoo! Finance for the period from January 4, 2010 to December 28, 2017, at daily frequency. The time series of the Dollar quotation, as well as numerical indicators from Technical Analysis, were used as independent variables to compose the prediction. The algorithms were developed in Python using the Keras framework. The MSE, RMSE and MAPE performance metrics were used to evaluate the algorithms, in addition to a comparison between the obtained predictions and the actual values. The metric results indicate good prediction performance by the proposed Ensemble model, which achieved 70% accuracy on the index movement, but it did not achieve better results than the MLP and NARX networks, both with 80% accuracy.
Different segments of the financial area, such as stock price forecasting, the foreign exchange market, market indices and investment portfolio composition, use machine learning. This work aims to compare and combine three types of machine learning algorithms: the Artificial Neural Network Ensemble method with Multilayer Perceptron (MLP), auto-regressive with exogenous inputs (NARX) and Long Short-Term Memory (LSTM) networks, for prediction of the Bovespa Index. The Bovespa time series samples were obtained daily, using Yahoo! Finance, from January 4th, 2010 to December 28th, 2017. The Dollar quotation, Google Trends and numerical indicators from Technical Analysis were used as independent variables to compose the prediction. The algorithms were developed using Python and the Keras framework. Finally, in order to evaluate the algorithms, the MSE, RMSE and MAPE performance metrics were used, as well as a comparison between the obtained predictions and the actual values. The results of the metrics indicate good prediction performance by the proposed Ensemble model, which obtained 70% accuracy on the index movement but failed to achieve better results than the MLP and NARX networks, both with 80% accuracy.
41

Gattoni, Giacomo. "Improving the reliability of recurrent neural networks while dealing with bad data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In practical applications, machine learning and deep learning models can have difficulty achieving generalization, especially when dealing with training samples that are noisy or limited in quantity. Standard neural networks do not guarantee monotonicity of the input features with respect to the output, so they lack interpretability and predictability when it is known a priori that the input-output relationship should be monotonic. This problem can be encountered in the CPG industry, where it is not possible to ensure that a deep learning model will learn the increasing monotonic relationship between promotional mechanics and sales. To overcome this issue, we propose the combined usage of recurrent neural networks, a type of artificial neural network specifically designed to deal with data structured as sequences, with lattice networks, conceived to guarantee monotonicity of the desired input features with respect to the output. The proposed architecture has proven to be more reliable when new samples are fed to the neural network, demonstrating its ability to infer the evolution of sales depending on the promotions, even when it is trained on bad data.
42

Nilsson, Mathias, and Corswant Sophie von. "How Certain Are You of Getting a Parking Space? : A deep learning approach to parking availability prediction." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166989.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Traffic congestion is a severe problem in urban areas and it leads to the emission of greenhouse gases and air pollution. In general, drivers lack knowledge of the location and availability of free parking spaces in urban cities. This leads to people driving around searching for parking places, and about one-third of traffic congestion in cities is due to drivers searching for an available parking lot. In recent years, various solutions to provide parking information ahead have been proposed. The vast majority of these solutions have been applied in large cities, such as Beijing and San Francisco. This thesis has been conducted in collaboration with Knowit and Dukaten to predict parking occupancy in car parks one hour ahead in the relatively small city of Linköping. To make the predictions, this study has investigated the possibility to use long short-term memory and gradient boosting regression trees, trained on historical parking data. To enhance decision making, the predictive uncertainty was estimated using the novel approach Monte Carlo dropout for the former, and quantile regression for the latter. This study reveals that both of the models can predict parking occupancy ahead of time and they are found to excel in different contexts. The inclusion of exogenous features can improve prediction quality. More specifically, we found that incorporating hour of the day improved the models’ performances, while weather features did not contribute much. As for uncertainty, the employed method Monte Carlo dropout was shown to be sensitive to parameter tuning to obtain good uncertainty estimates.
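Monte Carlo dropout, the uncertainty method used above, keeps dropout active at prediction time and treats the spread of repeated stochastic forward passes as an uncertainty estimate. A toy sketch on a fixed linear model (hypothetical weights, not the thesis's parking model):

```python
import random
import statistics

def mc_dropout_predict(x, weights, p_drop=0.5, n_passes=200, seed=0):
    """Repeat stochastic forward passes with dropout left ON.

    Each pass randomly zeroes weights with probability p_drop
    (scaling the survivors by 1/(1 - p_drop)). The mean of the
    passes is the prediction; the standard deviation is taken
    as its uncertainty.
    """
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_passes):
        y = sum(
            w * xi / (1.0 - p_drop)
            for w, xi in zip(weights, x)
            if rng.random() >= p_drop  # weight survives this pass
        )
        outputs.append(y)
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, std = mc_dropout_predict([1.0, 2.0], weights=[0.5, 0.25])
```

In a real network the dropout masks are applied inside the layers rather than on a weight vector, but the principle, and the sensitivity to the dropout rate that the thesis reports, is the same.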
43

Max, Lindblad. "The impact of parsing methods on recurrent neural networks applied to event-based vehicular signal data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis examines two different approaches to parsing event-based vehicular signal data to produce input to a neural network prediction model: event parsing, where the data is kept unevenly spaced over the temporal domain, and slice parsing, where the data is made to be evenly spaced over the temporal domain instead. The dataset used as a basis for these experiments consists of a number of vehicular signal logs taken at Scania AB. Comparisons between the parsing methods have been made by first training long short-term memory (LSTM) recurrent neural networks (RNN) on each of the parsed datasets and then measuring the output error and resource costs of each such model after having validated them on a number of shared validation sets. The results from these tests clearly show that slice parsing compares favourably to event parsing.
This thesis compares two different approaches to parsing event-based vehicular signal data to produce input to a prediction model in the form of a neural network: event parsing, where the data remains unevenly spaced over the time domain, and slice parsing, where the data is instead made evenly spaced over the time domain. The dataset used for these experiments consists of a number of vehicular signal logs from Scania. Comparisons between the parsing methods were made by first training long short-term memory (LSTM) recurrent neural networks (RNN) on each of the created datasets, and then measuring the output error and resource costs of each model after validating them on a shared set of validation data. The results from these tests clearly show that slice parsing compares favourably to event parsing.
44

Dahmani, Sara. "Synthèse audiovisuelle de la parole expressive : modélisation des émotions par apprentissage profond." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis concerns the modeling of emotions for expressive audiovisual text-to-speech synthesis. Today, the results of text-to-speech synthesis systems are of good quality, but audiovisual synthesis remains an open problem, and expressive synthesis even more so. In this thesis we propose a malleable and flexible method for modeling emotions, allowing emotions to be mixed as one mixes shades on a palette of colors. In the first part, we present and study two expressive corpora that we built. The acquisition strategy and the expressive content of these corpora are analysed to validate their use for audiovisual speech synthesis. In the second part, we propose two neural architectures for speech synthesis. We used these two architectures to model three aspects of speech: 1) sound durations, 2) the acoustic modality and 3) the visual modality. First, we adopted a fully connected architecture, which allowed us to study the behaviour of neural networks when faced with different contextual and linguistic descriptors. We were also able to analyse, via objective measures, the network's ability to model emotions. The second proposed neural architecture is a variational auto-encoder. This architecture is able to learn a latent representation of emotions without using emotion labels. After analysing the latent emotion space, we proposed a procedure for structuring it in order to move from a categorical representation to a continuous representation of emotions.
We were able to validate, through perceptual experiments, the ability of our system to generate emotions, nuances of emotions and mixtures of emotions, for expressive audiovisual text-to-speech synthesis.
The work of this thesis concerns the modeling of emotions for expressive audiovisual text-to-speech synthesis. Today, the results of text-to-speech synthesis systems are of good quality; however, audiovisual synthesis remains an open issue and expressive synthesis is even less studied. As part of this thesis, we present an emotion modeling method which is malleable and flexible, and allows us to mix emotions as we mix shades on a palette of colors. In the first part, we present and study two expressive corpora that we built. The recording strategy and the expressive content of these corpora are analyzed to validate their use for audiovisual speech synthesis. In the second part, we present two neural architectures for speech synthesis. We used these two architectures to model three aspects of speech: 1) the duration of sounds, 2) the acoustic modality and 3) the visual modality. First, we use a fully connected architecture, which allowed us to study the behavior of neural networks when dealing with different contextual and linguistic descriptors. We were also able to analyze, with objective measures, the network's ability to model emotions. The second neural architecture proposed is a variational auto-encoder. This architecture is able to learn a latent representation of emotions without using emotion labels. After analyzing the latent space of emotions, we present a procedure for structuring it in order to move from a discrete representation of emotions to a continuous one. We were able to validate, through perceptual experiments, the ability of our system to generate emotions, nuances of emotions and mixtures of emotions, all for expressive audiovisual text-to-speech synthesis.
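The palette-style mixing of emotions described above can be sketched as a convex combination of latent emotion centroids. This is an illustrative sketch, not the thesis code: the 2-D centroids and the function name are assumptions, and the decoder that maps a latent vector back to speech parameters is omitted.

```python
import numpy as np

def mix_emotions(centroids, weights):
    # Convex blend of per-emotion latent centroids; a decoder (omitted)
    # would turn the blended latent vector into speech parameters.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize to a convex combination
    return w @ np.stack(list(centroids))

# Hypothetical 2-D latent centroids for two emotion categories
neutral = np.array([0.0, 0.0])
joy = np.array([2.0, 4.0])
half_joy = mix_emotions([neutral, joy], [1, 1])     # equal-parts blend
quarter_joy = mix_emotions([neutral, joy], [3, 1])  # mostly neutral
```

Because the weights are normalized, any non-negative mixture stays inside the convex hull of the centroids, which is what makes interpolating between emotion categories well-behaved.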
45

Sainath, Pravish. "Modeling functional brain activity of human working memory using deep recurrent neural networks." Thesis, 2020. http://hdl.handle.net/1866/25468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In cognitive systems, the role of working memory is crucial for visual reasoning and decision making. Tremendous progress has been made in understanding the mechanisms of human/animal working memory, as well as in formulating different frameworks of memory-augmented artificial neural networks. The overall objective of our project is to train artificial neural network models that are capable of consolidating memory over a short period of time to solve a memory task, and to relate them to the brain activity of humans who solved the same task. The project is of an interdisciplinary nature, trying to bridge aspects of Artificial Intelligence (deep learning) and Neuroscience. The cognitive task used is the N-back task, a very popular one in Cognitive Neuroscience in which subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. The functional imaging (fMRI) dataset used was collected as part of the Courtois NeuroMod Project. We study multiple variants of recurrent neural network models that learn to remember input images across timesteps. These trained neural networks, optimized for the memory task, are ultimately used to generate feature representations for the stimulus images seen by the human subjects during their recordings while solving the task. The representations derived from these neural networks are then used to create an encoding model to predict the fMRI BOLD activity of the subjects. We then investigate the relationship between the neural network model and brain activity by analyzing this predictive ability of the model in different areas of the brain that are involved in working memory.
This work presents a way of using artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the N-back task to potentially understand some memory mechanisms of the brain in relation to these artificial neural network models.
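The encoding-model step described above, mapping network-derived stimulus features to voxel-wise BOLD responses, is commonly implemented with regularized linear regression. A minimal sketch with simulated data follows; the dimensions, noise level, and closed-form ridge solver are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    # Closed-form ridge regression: W = (X^T X + alpha I)^-1 X^T Y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))       # RNN-derived features, 100 timepoints
W_true = rng.standard_normal((16, 3))    # hypothetical ground-truth weights
Y = X @ W_true + 0.01 * rng.standard_normal((100, 3))  # simulated BOLD, 3 voxels

W = fit_ridge(X, Y, alpha=0.1)
pred = X @ W
# Per-voxel prediction accuracy (Pearson r), as in encoding-model analyses
r = [np.corrcoef(Y[:, v], pred[:, v])[0, 1] for v in range(3)]
```

In practice the per-voxel correlation `r` is what gets mapped back onto the brain to see which regions the network's representations predict well.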
46

Pinto, Rodrigo Emanuel de Sousa. "Seizure prediction based on Long Short Term Memory Networks." Master's thesis, 2017. http://hdl.handle.net/10316/82981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's dissertation in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia
Epilepsy is a neurological disease affecting millions of people worldwide. About one third of them are pharmaco-resistant and cannot undergo surgery; their disease is called refractory epilepsy, and a seizure can happen any time, anywhere. These refractory patients would benefit from seizure prediction devices, but current methods for seizure prediction are not good enough for clinical applications. We present and evaluate the capacity of two types of deep artificial neural network architectures to learn to predict seizures from features extracted from the electroencephalogram (EEG): Long Short Term Memory (LSTM) and Convolutional Long Short Term Memory (C-LSTM). To demonstrate the clinical usefulness of our models, they are evaluated using long, continuous, out-of-sample records. The study considers 105 patients from the European Epilepsy Database, 87 with scalp recordings and 18 with invasive recordings. The data include 1087 seizures, of which 203 are used for out-of-sample evaluation, and a total recording duration of 19959 hours, of which 3991 hours are used for out-of-sample evaluation. We extracted 22 univariate features from the EEG signal based on 5-second windows. For all patients, scalp and invasive, our LSTM models achieved an average sensitivity of 29.28% and an average FPR of 0.58/h, correctly predicting 52 out of 203 (25.62%) seizures on the testing set. For 5 out of 105 (4.8%) patients, optimal test performance with sensitivity >= 50% and FPR <= 0.15/h was achieved. Perfect performance, with 100% sensitivity and 0/h FPR, was achieved for 2 (1.9%) patients. For all patients, scalp and invasive, our C-LSTM models achieved an average sensitivity of 28.17% and an average FPR of 0.64/h, correctly predicting 54 out of 203 (26.60%) seizures on the testing set. For 2 out of 105 (1.9%) patients, optimal test performance with sensitivity >= 50% and FPR <= 0.15/h was achieved.
Perfect performance with 100% sensitivity and 0/h FPR was achieved for 0 (0.0%) patients. The results were not satisfactory: we achieved worse results when compared with a study using the same database but based on support vector machines (SVMs). Given that, in theory, LSTMs are expected to perform better than SVMs when sequential data is involved, we were expecting the opposite. In the future we plan to experiment with the raw EEG signal. We expect that LSTMs will capture temporal dependencies better from the raw signal, since compacting 5 seconds of information into a single value leads to a great loss of information. A better approach for selecting training data, so that it covers a full day/night cycle and better captures intra-day variations, will also be considered.
Universidade de Coimbra - 3-month graduate scholarship.
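The sensitivity and false prediction rate (FPR/h) figures quoted in the abstract above are typically computed by matching alarms to a pre-seizure window. Here is a simplified sketch of that evaluation; the 30-minute occurrence period and the example times are illustrative assumptions, not the thesis settings.

```python
def evaluate_alarms(alarm_times, seizure_times, sop_minutes=30, total_hours=1.0):
    """Count a seizure as predicted if an alarm fired within the seizure
    occurrence period (SOP) before onset; any other alarm is false.
    Times are in seconds; FPR is false alarms per hour of recording."""
    sop = sop_minutes * 60
    predicted = set()
    false_alarms = 0
    for a in alarm_times:
        hit = False
        for i, s in enumerate(seizure_times):
            if 0 < s - a <= sop:        # alarm precedes onset within the SOP
                predicted.add(i)
                hit = True
        if not hit:
            false_alarms += 1
    sensitivity = len(predicted) / len(seizure_times) if seizure_times else 0.0
    return sensitivity, false_alarms / total_hours

# Hypothetical 2-hour record: seizures at 1 h and 2 h, three alarms
sens, fpr = evaluate_alarms([100, 2000, 6000], [3600, 7200], total_hours=2.0)
```

Real seizure-prediction evaluation also accounts for an intervention period and discards alarms raised during seizures, which this sketch omits.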
47

Zhou, Quan. "Bidirectional long short-term memory network for proto-object representation." Thesis, 2018. https://hdl.handle.net/2144/31682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Researchers have developed many visual saliency models in order to advance the technology in computer vision. Neural networks, Convolutional Neural Networks (CNNs) in particular, have successfully differentiated objects in images through feature extraction. Meanwhile, Cummings et al. proposed a proto-object image saliency (POIS) model showing that perceptual objects or shapes can be modelled through a bottom-up saliency algorithm. Inspired by their work, this research aims to explore the embedded features in proto-object representations and to utilize artificial neural networks (ANNs) to capture and predict the saliency output of POIS. A combination of a CNN and a bidirectional long short-term memory (BLSTM) neural network is proposed for this saliency model as a machine learning alternative to the border ownership and grouping mechanisms in POIS. As ANNs become more efficient at visual saliency tasks, this work would extend their application in computer vision through a successful implementation of proto-object based saliency.
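The bidirectional idea behind the BLSTM above, running a recurrence over the sequence in both directions and concatenating the two hidden states at each step, can be illustrated with a plain tanh RNN in NumPy. The simple cell and the random weights are stand-ins for a gated LSTM cell, used here only to show the wiring.

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, h0):
    # Plain tanh recurrence; an LSTM cell would replace this update rule.
    h, states = h0, []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return states

def bidirectional_pass(xs, Wx, Wh, h0):
    fwd = rnn_pass(xs, Wx, Wh, h0)            # left-to-right context
    bwd = rnn_pass(xs[::-1], Wx, Wh, h0)[::-1]  # right-to-left context
    # Each timestep now sees both past (fwd) and future (bwd) context
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(1)
xs = [rng.standard_normal(4) for _ in range(5)]   # 5 timesteps, 4 features
Wx = rng.standard_normal((3, 4))                  # input-to-hidden weights
Wh = rng.standard_normal((3, 3))                  # hidden-to-hidden weights
states = bidirectional_pass(xs, Wx, Wh, np.zeros(3))
```

The concatenated state (here 6-dimensional from a 3-unit hidden size) is what a downstream layer would consume, which is why bidirectional layers double the feature width.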
48

SHIH, YU-SEN, and 施宇森. "Commodity Sales Forecasting by combining Deep Learning Long Short-Term Memory Network (LSTM) with sentiment analysis." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/j22kt7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Taipei University of Technology
Department of Information and Finance Management
107
Sales forecasting is one of the critical management tools that companies use to make informed business decisions. Traditional sales forecasting methods fall into two categories: quantitative methods, which apply mathematical models to historical data, and qualitative methods, which rely on the judgment and opinions of experts and department heads. Neither approach has performed well on commodity sales forecasts. Many past studies have pointed out that consumer reviews affect consumer purchases, and many use consumers' digital reviews to predict product sales. This study therefore proposes a sales forecasting model that combines review sentiment analysis with a Long Short-Term Memory network (LSTM), in addition to exploring the impact of combining commodity reviews with historical sales data on e-commerce sales forecasts, and predicting the future sales of commodities with short-term demand characteristics. Comments and sales figures were collected from taobao.com; in the training stage, the comments were converted through sentiment analysis into "positive" and "negative" confidence ratings. The sales forecasting model was trained on historical time-series data to predict sales volume in the next period. The study designed multiple experimental conditions to validate the accuracy of the model and to compare the effects of different training time-series lengths and window sizes, with and without the sentiment analysis of comments. This study hypothesized that the LSTM model combined with sentiment analysis would perform better in sales forecasting, and the experimental results were consistent with this expectation.
In addition to providing decision support about the future for companies, this research also contributes by attempting to improve the accuracy of sales forecasts through combining qualitative and quantitative data, and by providing future research directions for relevant researchers.
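The combination of historical sales with sentiment-derived ratings in sliding windows, as described above, might be prepared along these lines. The feature layout (sales plus positive/negative sentiment ratios per period) and the window size are illustrative assumptions; the LSTM that would consume these windows is omitted.

```python
import numpy as np

def make_windows(sales, pos, neg, window=4):
    """Stack sales with positive/negative sentiment ratios per period,
    then slice overlapping windows; the target is next-period sales."""
    feats = np.stack([sales, pos, neg], axis=1)                  # (T, 3)
    X = np.stack([feats[t:t + window] for t in range(len(sales) - window)])
    y = sales[window:]                                           # next period
    return X, y

# Hypothetical 10 periods of data
sales = np.arange(10, dtype=float)
pos = np.linspace(0.5, 0.9, 10)   # share of positive comments per period
neg = 1.0 - pos
X, y = make_windows(sales, pos, neg, window=4)
```

Each sample in `X` has shape `(window, features)`, which is the `(timesteps, features)` layout recurrent layers expect; dropping the `pos`/`neg` columns gives the "without sentiment" baseline condition.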
49

Kumari, K., J. P. Singh, Y. K. Dwivedi, and Nripendra P. Rana. "Bilingual Cyber-aggression Detection on Social Media using LSTM Autoencoder." 2021. http://hdl.handle.net/10454/18439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cyber-aggression is offensive behaviour that attacks people based on race, ethnicity, religion, gender, sexual orientation, and other traits, and it has become a major issue plaguing online social media. In this research, we have developed a deep learning-based model to identify different levels of aggression (direct, indirect and no aggression) in a social media post in a bilingual scenario. The model is an autoencoder built with an LSTM network and trained on non-aggressive comments only. Any aggressive comment (direct or indirect) is regarded as an anomaly by the system and is marked as an overtly (direct) or covertly (indirect) aggressive comment depending on the reconstruction loss of the autoencoder. Validation on data from two popular social media sites, Facebook and Twitter, with bilingual (English and Hindi) content outperformed the current state-of-the-art models, with improvements of more than 11% on the test sets of the English dataset and more than 6% on the test sets of the Hindi dataset.
The full-text of this article will be released for public view at the end of the publisher embargo on 24 Apr 2022.
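The train-on-normal-only, flag-by-reconstruction-loss pattern used by the paper's LSTM autoencoder can be sketched with a linear (PCA) autoencoder as a stand-in: fit the bottleneck on "normal" data only, then score new points by how badly they reconstruct. The bottleneck size, the simulated data, and the threshold are all illustrative.

```python
import numpy as np

def fit_linear_autoencoder(X, k=2):
    # PCA bottleneck as a linear stand-in for the LSTM autoencoder
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, mu, comps):
    z = (x - mu) @ comps.T          # encode into the bottleneck
    x_hat = mu + z @ comps          # decode back to input space
    return float(np.sum((x - x_hat) ** 2))

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 6))                 # "normal" comments live in
X_normal = rng.standard_normal((200, 2)) @ B    # a 2-D subspace of R^6
mu, comps = fit_linear_autoencoder(X_normal, k=2)

err_normal = reconstruction_error(rng.standard_normal(2) @ B, mu, comps)
err_anomaly = reconstruction_error(rng.standard_normal(6) * 3.0, mu, comps)
flagged = err_anomaly > err_normal + 1.0        # threshold is illustrative
```

Because the model only ever learned to compress normal data, anomalous inputs fall outside its learned subspace and reconstruct poorly, which is the same signal the LSTM autoencoder uses on comment sequences.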
50

Zaroug, Abdelrahman. "Machine Learning Model for the Prediction of Human Movement Biomechanics." Thesis, 2021. https://vuir.vu.edu.au/42489/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An increasingly useful application of machine learning (ML) is in predicting features of human actions. If it can be shown that algorithm inputs related to actual movement mechanics can predict a limb or limb segment's future trajectory, a range of apparently intractable problems in movement science could be solved. The forecasting of lower limb trajectories can anticipate movement characteristics that may predict the risk of tripping, slipping or balance loss. Particularly in the design of human augmentation technology such as the exoskeleton, human movement prediction will improve the synchronisation between the user and the device, greatly enhancing its efficacy. Long Short Term Memory (LSTM) neural networks are a subset of ML algorithms that have proven widely successful in modelling human movement data. The aim of this thesis was to examine four LSTM neural network architectures (Vanilla, Stacked, Bidirectional and Autoencoder) in predicting the future trajectories of lower limb kinematics, i.e. Angular Velocity (AV) and Linear Acceleration (LA). This work also aims to investigate whether a linear statistical method such as Linear Regression (LR) is sufficient to predict the trajectories of lower limb kinematics. Kinematics data (LA and AV) of the foot, shank and thigh were collected from 13 male and 3 female participants (28 ± 4 years old, 1.72 ± 0.07 m in height, 66 ± 10 kg in mass) who walked for 10 minutes at 4 different walking speeds on a 0% gradient treadmill. Walking speeds included preferred walking speed (PWS, 4.34 ± 0.43 km.h-1), imposed speed (5 km.h-1, 15.4% ± 7.6% faster), slower speed (-20% PWS, 3.59 ± 0.47 km.h-1) and faster speed (+20% PWS, 5.26 ± 0.53 km.h-1). The sliding window technique was adopted for training and testing the LSTM models, with total kinematics time-series data of 17,638 strides across all trials. The aims and findings of this work are presented in 3 studies.
Study 1 confirmed the possibility of predicting the future trajectories of human lower limb kinematics using LSTM autoencoders (ED-LSTM) and LR during an imposed walking speed (5 km.h-1). Both models achieved satisfactory predicted trajectories up to 0.06s. A prediction horizon of 0.06s can be used to compensate for delays in an exoskeleton's feed-forward controller, to better estimate human motions and synchronise with intended movement trajectories. Study 2 (Chapter 4) indicated that the LR model is not suitable for the prediction of future lower limb kinematics at PWS. The LSTM performance results suggested that the ED-LSTM and the Stacked LSTM are more accurate for predicting future lower limb kinematics up to 0.1s at PWS and at the imposed walking speed (5 km.h-1). The average duration of a gait cycle ranges between 0.98-1.07s, so a prediction horizon of 0.1s accounts for about 10% of the gait cycle. Such a forecast may assist users in anticipating low foot clearance and developing early countermeasures such as slowing down or stopping. Study 3 (Chapter 5) showed that at +20% PWS the LSTM models obtained better predictions than in all other tested walking speed conditions (i.e. PWS, -20% PWS and 5 km.h-1), while at -20% PWS all of the LSTM architectures obtained weaker predictions than at the other tested walking speeds (i.e. PWS, +20% PWS and 5 km.h-1). In addition to the applications of known future trajectories at PWS mentioned in Studies 1 and 2, prediction at fast and slow walking speeds familiarises the developed ML models with changes in human walking speed, which are known to have large effects on lower limb kinematics.
When intelligent ML methods are familiarised with the degree of kinematic change due to speed variation, they could be used to improve the human-machine interface in bionics design for various walking speeds. The key finding of the three studies is that the ED-LSTM was the most accurate model for predicting and adapting to human motion kinematics at PWS, ±20% PWS and 5 km.h-1 up to 0.1s. The ability to predict future lower limb motions may have a wide range of applications, including the design and control of bionics, allowing a better human-machine interface and mitigating the risk of tripping and balance loss.
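The Linear Regression baseline of Study 1, mapping a short window of past samples to a sample a fixed horizon ahead, can be sketched as follows. The window length, horizon, sampling rate, and the sinusoidal stand-in for a kinematic signal are assumptions for illustration (at 100 Hz, a 6-sample horizon is roughly the 0.06s mentioned above).

```python
import numpy as np

def fit_lr_forecaster(signal, window=10, horizon=6):
    # Least-squares map from the last `window` samples to the sample
    # `horizon` steps past the end of the window.
    n = len(signal) - window - horizon + 1
    X = np.stack([signal[t:t + window] for t in range(n)])
    y = signal[window + horizon - 1:window + horizon - 1 + n]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

t = np.arange(600)
sig = np.sin(2 * np.pi * t / 100.0)   # stand-in for a periodic gait signal
w = fit_lr_forecaster(sig, window=10, horizon=6)
pred = sig[-16:-6] @ w                # forecast 6 steps past the last window
```

A pure sinusoid satisfies an exact linear recurrence, so the fitted weights forecast it almost perfectly; on real gait data, stride-to-stride variability is what separates the LR baseline from the LSTM models.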

To the bibliography