
Dissertations / Theses on the topic 'LSTM ALGORITHM'


Consult the top 25 dissertations / theses for your research on the topic 'LSTM ALGORITHM.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Paschou, Michail. "ASIC implementation of LSTM neural network algorithm." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254290.

Full text
Abstract:
LSTM neural networks have been used for speech recognition, image recognition and other artificial intelligence applications for many years. Most applications perform the LSTM algorithm and the required calculations on cloud computers. Off-line solutions include the use of FPGAs and GPUs, but the most promising solutions are ASIC accelerators designed for this purpose alone. This report presents an ASIC design capable of performing the multiple iterations of the LSTM algorithm on a unidirectional neural network architecture without peepholes. The proposed design provides arithmetic-level parallelism options, as blocks are instantiated based on parameters. The internal structure of the design implements pipelined, parallel or serial solutions depending on which is optimal in each case. The implications of these decisions are discussed in detail in the report, along with the design process and an evaluation measuring the accuracy and error of the design output. This thesis work resulted in a complete synthesizable ASIC design implementing an LSTM layer, a fully connected layer and a Softmax layer, which can perform classification of data based on trained weight matrices and bias vectors. The design primarily uses a 16-bit fixed-point format with 5 integer and 11 fractional bits, but increased-precision representations are used in some blocks to reduce output error. Additionally, a verification environment has been designed that can run simulations and evaluate the design output by comparing it with results produced by performing the same operations with 64-bit floating-point precision in a SystemVerilog testbench, measuring the encountered error. The results concerning the accuracy and the error margin of the design output are presented in this report. The design went through logic and physical synthesis and successfully resulted in a functional netlist for every tested configuration. Timing, area and power measurements on the generated netlists of various configurations show consistent behavior and are also reported.
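As a toy illustration of the Q5.11 fixed-point format the abstract describes (5 integer and 11 fractional bits in 16 bits), the following Python sketch quantizes a sigmoid gate output and measures the error against 64-bit floating point, loosely mirroring the thesis's verification idea. It is not the thesis's SystemVerilog implementation, and the sigmoid example is an assumption.

import numpy as np

FRAC_BITS = 11                               # fractional bits in Q5.11
SCALE = 1 << FRAC_BITS                       # 2**11 = 2048 steps per unit
INT_BITS = 5                                 # integer bits (incl. sign)
LIMIT = 1 << (INT_BITS + FRAC_BITS - 1)      # 16-bit two's-complement range

def to_q5_11(x):
    """Quantize float64 values to the Q5.11 fixed-point grid."""
    raw = np.round(x * SCALE).astype(np.int64)
    return np.clip(raw, -LIMIT, LIMIT - 1)   # saturate on overflow

def from_q5_11(raw):
    return raw.astype(np.float64) / SCALE

# Compare an LSTM-style gate computation in float64 vs Q5.11 storage.
rng = np.random.default_rng(0)
x = rng.uniform(-4, 4, size=1000)
ref = 1.0 / (1.0 + np.exp(-x))               # sigmoid in float64
approx = from_q5_11(to_q5_11(ref))           # after fixed-point round-trip
print("max abs error:", np.max(np.abs(ref - approx)))   # about 2**-12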
APA, Harvard, Vancouver, ISO, and other styles
2

Shaif, Ayad. "Predictive Maintenance in Smart Agriculture Using Machine Learning : A Novel Algorithm for Drift Fault Detection in Hydroponic Sensors." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42270.

Full text
Abstract:
The success of Internet of Things solutions has allowed the establishment of new applications such as smart hydroponic agriculture. One typical problem in such an application is the rapid degradation of the deployed sensors. Traditionally, this problem is resolved by frequent manual maintenance, which is considered ineffective and may harm the crops in the long run. The main purpose of this thesis was to propose a machine learning approach for automating the detection of sensor drift faults. In addition, the solution's operability was investigated in a cloud computing environment in terms of response time. The thesis proposes a detection algorithm, named Predictive Sliding Detection Window (PSDW), that uses RNNs to predict sensor drifts from time-series data streams and consists of both forecasting and classification models. Three different RNN algorithms, i.e., LSTM, CNN-LSTM, and GRU, were designed to predict sensor drifts using forecasting and classification techniques, and were compared against each other in terms of the relevant accuracy metrics for each task. The operability of the solution was investigated by developing a web server that hosted the PSDW algorithm on an AWS computing instance. The resulting forecasting and classification algorithms were able to make reasonably accurate predictions for this particular scenario. More specifically, the forecasting algorithms achieved relatively low RMSE values of roughly 0.6, while the classification algorithms obtained an average F1-score and accuracy of about 80%, albeit with a high standard deviation. However, the response time was roughly 5700% slower during the simulation of the HTTP requests. The obtained results suggest the need for future investigations to improve the accuracy of the models and to experiment with other computing paradigms for more reliable deployments.
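The abstract does not publish the PSDW internals, but its forecast-then-classify structure can be sketched. Below is a minimal Keras illustration: the window and horizon lengths, layer widths and synthetic drift labels are all placeholder assumptions, not the thesis's settings.

import numpy as np
from tensorflow import keras

WINDOW, HORIZON = 32, 8     # assumed sliding-window and forecast lengths

def sliding_windows(series, window, horizon):
    X, Y = [], []
    for i in range(len(series) - window - horizon):
        X.append(series[i:i + window])
        Y.append(series[i + window:i + window + horizon])
    return np.array(X)[..., None], np.array(Y)

# Forecasting model: predicts the next HORIZON sensor readings.
forecaster = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dense(HORIZON),
])
forecaster.compile(optimizer="adam", loss="mse")

# Classification model: labels a forecast window as drifting or healthy.
classifier = keras.Sequential([
    keras.Input(shape=(HORIZON, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])

series = np.sin(np.linspace(0, 60, 2000)) + np.random.randn(2000) * 0.05
X, Y = sliding_windows(series, WINDOW, HORIZON)
forecaster.fit(X, Y, epochs=2, batch_size=64, verbose=0)
drift = (np.abs(Y).mean(axis=1) > 0.5).astype("float32")   # toy labels
classifier.fit(Y[..., None], drift, epochs=2, batch_size=64, verbose=0)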
APA, Harvard, Vancouver, ISO, and other styles
3

Malina, Ondřej. "Detekce začátku a konce komplexu QRS s využitím hlubokého učení." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442595.

Full text
Abstract:
This thesis deals with the automatic measurement of the duration of QRS complexes in ECG signals, with special emphasis on detecting QRS complexes while the cardiac tissue is being excited by a pacemaker. The work is divided into four logical units. The first part deals with the heart as an organ: it describes the origin and spread of excitation in the heart, its possible pathologies and their manifestations in ECG recordings, and also covers pacing and ECG measurement during simultaneous pacing. The second part contains a brief introduction to machine and deep learning. The third part surveys current deep-learning approaches to the detection of QRS duration (QRSd). The fourth part deals with the design and implementation of a custom deep learning model able to detect the onsets and offsets of QRS complexes in ECG recordings. It describes the data preprocessing, implemented in the MATLAB programming environment; the model itself was implemented in Python using the PyTorch and NumPy modules.
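As a hedged illustration of per-sample QRS onset/offset tagging in PyTorch (the framework the thesis names), the sketch below uses a small bidirectional LSTM; the three-class labeling scheme (background/onset/offset) and the layer sizes are assumptions, not the thesis's exact model.

import torch
import torch.nn as nn

class QRSBoundaryTagger(nn.Module):
    def __init__(self, hidden=64, classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, classes)   # per-sample logits

    def forward(self, ecg):                  # ecg: (batch, samples, 1)
        feats, _ = self.lstm(ecg)
        return self.head(feats)              # (batch, samples, classes)

model = QRSBoundaryTagger()
ecg = torch.randn(4, 500, 1)                 # 4 ECG strips, 500 samples each
labels = torch.randint(0, 3, (4, 500))       # toy per-sample annotations
logits = model(ecg)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 3), labels.reshape(-1))
loss.backward()
print(float(loss))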
APA, Harvard, Vancouver, ISO, and other styles
4

Olsson, Charlie, and David Hurtig. "An approach to evaluate machine learning algorithms for appliance classification." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20217.

Full text
Abstract:
A cheap and powerful way to lower electricity usage and make residents more energy-aware is simply to make them aware of which appliances are consuming electricity, so they can decide to turn appliances off in order to save energy. Non-intrusive load monitoring (NILM) is a cost-effective solution that identifies different appliances based on their unique load signatures, measuring the energy consumption at a single sensing point. In this thesis, a low-cost hardware platform is developed with the help of an Arduino to collect consumption signatures in real time using a single CT sensor. Three different algorithms and one recurrent neural network are implemented in Python to find out which of them is best suited for this kind of work. The tested algorithms are k-Nearest Neighbors, Random Forest and Decision Tree Classifier, and the recurrent neural network is a Long short-term memory network.
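The three scikit-learn classifiers named above can be compared in a few lines of Python. In this sketch the synthetic features stand in for the real CT-sensor load signatures, which the abstract does not include.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-appliance load-signature features.
X, y = make_classification(n_samples=600, n_features=20, n_classes=4,
                           n_informative=8, random_state=0)

models = {
    "k-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")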
APA, Harvard, Vancouver, ISO, and other styles
5

Freberg, Daniel. "Evaluating Statistical Machine Learning and Deep Learning Algorithms for Anomaly Detection in Chat Messages." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235957.

Full text
Abstract:
Automatically detecting anomalies in text is of great interest to surveillance entities, as vast amounts of data can be analysed to find suspicious activity. In this thesis, three distinct machine learning algorithms are evaluated in implementing a chat message classifier for market surveillance. Naive Bayes and Support Vector Machine belong to the statistical class of machine learning algorithms evaluated here, and both require feature selection; a side objective of the thesis is thus to find a suitable feature selection technique that lets these algorithms achieve high performance. The Long Short-Term Memory network is the deep learning algorithm evaluated in the thesis; rather than depending on feature selection, the deep neural network is trained on word embeddings. Each of the algorithms achieved high performance, but the findings of the thesis suggest that the Naive Bayes algorithm, in conjunction with a term-frequency feature selection technique, is the most suitable choice for this particular learning problem.
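The winning combination reported above, Naive Bayes with term-frequency feature selection, can be sketched as a scikit-learn pipeline. The toy messages, labels and k value below are illustrative assumptions, not the thesis's corpus or settings.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

messages = ["buy now before the announcement", "lunch at noon?",
            "move the order before close", "see you tomorrow"]
labels = [1, 0, 1, 0]                  # 1 = flagged for review, 0 = normal

clf = Pipeline([
    ("counts", CountVectorizer()),         # term-frequency features
    ("select", SelectKBest(chi2, k=2)),    # keep only the k best features
    ("nb", MultinomialNB()),
])
clf.fit(messages, labels)
print(clf.predict(["sell everything before the announcement"]))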
APA, Harvard, Vancouver, ISO, and other styles
6

Almqvist, Olof. "A comparative study between algorithms for time series forecasting on customer prediction : An investigation into the performance of ARIMA, RNN, LSTM, TCN and HMM." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16974.

Full text
Abstract:
Time series prediction is one of the main areas of statistics and machine learning. In 2018, two new algorithms, the higher-order hidden Markov model and the temporal convolutional network, were proposed and emerged as challengers to the more traditional recurrent neural network and long short-term memory network, as well as to the autoregressive integrated moving average (ARIMA). In this study, most major algorithms, together with these recent innovations for time series forecasting, are trained and evaluated on two datasets from the theme park industry with the aim of predicting future numbers of visitors. The models were developed with the Python libraries Keras and Statsmodels. Results from this thesis show that the neural network models are slightly better than ARIMA and the hidden Markov model, and that the temporal convolutional network does not perform significantly better than the recurrent or long short-term memory networks, although it had the lowest prediction error on one of the datasets. Interestingly, the Markov model performed worse than all neural network models even when using no independent variables.
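A minimal version of the ARIMA-versus-LSTM comparison can be written with the two libraries the abstract names, Statsmodels and Keras. The synthetic visitor series, the ARIMA order and the network size below are assumptions.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras

rng = np.random.default_rng(1)
visitors = (1000 + 200 * np.sin(np.arange(300) * 2 * np.pi / 7)
            + rng.normal(0, 30, 300))        # weekly seasonality + noise
train, test = visitors[:280], visitors[280:]

# ARIMA baseline.
arima = ARIMA(train, order=(2, 1, 2)).fit()
arima_pred = arima.forecast(steps=len(test))

# LSTM on sliding windows of the previous 14 days.
W = 14
X = np.array([train[i:i + W] for i in range(len(train) - W)])[..., None]
y = train[W:]
lstm = keras.Sequential([keras.Input(shape=(W, 1)),
                         keras.layers.LSTM(32),
                         keras.layers.Dense(1)])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=5, verbose=0)

# One-step-ahead walk-forward over the test period using true history.
hist = np.concatenate([train[-W:], test])
Xt = np.array([hist[i:i + W] for i in range(len(test))])[..., None]
lstm_pred = lstm.predict(Xt, verbose=0).ravel()

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
print("ARIMA RMSE:", rmse(arima_pred, test))
print("LSTM  RMSE:", rmse(lstm_pred, test))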
APA, Harvard, Vancouver, ISO, and other styles
7

Blanco, Martínez Alejandro. "Study and design of classification algorithms for diagnosis and prognosis of failures in wind turbines from SCADA data." Doctoral thesis, Universitat de Vic - Universitat Central de Catalunya, 2018. http://hdl.handle.net/10803/586097.

Full text
Abstract:
Nowadays, the preventive maintenance operations of wind farms are supported by Machine Learning techniques to reduce the costs of unplanned downtime, which calls for early fault prediction that works on SCADA data. These data need to be processed in the several stages described in this thesis, with results published for each of them. In a first phase, the extreme values (outliers) are cleaned, indicating how they should be treated so as not to eliminate the information about the faults. In a second, the variables are selected by different Feature Selection methods; at the same stage, the use of variables transformed by Autoencoders is also compared. In a third, the model is constructed using supervised and unsupervised methods, obtaining outstanding results with Self Organizing Maps (SOM) and with Deep Learning techniques including multi-layer ANN and LSTM networks.
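The first two stages of the pipeline described above can be sketched in Python; the third stage (SOM/deep-learning modelling) is omitted for brevity. The clipping percentiles and the k value are illustrative assumptions, and the synthetic channels stand in for real SCADA data.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (500, 12))              # 12 SCADA channels
y = (X[:, 0] + X[:, 3] > 1).astype(int)      # toy fault label

# Stage 1: clip extreme values instead of deleting rows, so that
# fault-related extremes are kept (the thesis stresses this point).
lo, hi = np.percentile(X, [1, 99], axis=0)
X = np.clip(X, lo, hi)

# Stage 2: feature selection on standardized channels.
X = StandardScaler().fit_transform(X)
selector = SelectKBest(f_classif, k=4).fit(X, y)
X_sel = selector.transform(X)                # input for the stage-3 model
print("kept channels:", selector.get_support(indices=True))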
APA, Harvard, Vancouver, ISO, and other styles
8

Arvidsson, Philip, and Tobias Ånhed. "Sequence-to-sequence learning of financial time series in algorithmic trading." Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-12602.

Full text
Abstract:
Predicting the behavior of financial markets is largely an unsolved problem. The problem has been approached with many different methods ranging from binary logic and statistical calculations to genetic algorithms. In this thesis, the problem is approached with a machine learning method, namely the Long Short-Term Memory (LSTM) variant of Recurrent Neural Networks (RNNs). Recurrent neural networks are artificial neural networks (ANNs), a machine learning algorithm mimicking the neural processing of the mammalian nervous system, specifically designed for time series sequences. The thesis investigates the capability of the LSTM in modeling financial market behavior and compares it to the traditional RNN, evaluating their performance using various measures.
APA, Harvard, Vancouver, ISO, and other styles
9

Nitz, Pettersson Hannes, and Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.

Full text
Abstract:
The demand for robots that work in environments together with humans is growing. This places new requirements on robot systems, such as the need to be perceived as responsive and accurate in human interactions. This thesis explores the possibility of using AI methods to predict the movement of a human and evaluates whether that information can assist a robot in human interactions. The AI methods used are a Long Short-Term Memory (LSTM) network and an artificial neural network (ANN). Both networks were trained on data from a motion capture dataset and on four different prediction times: 1/2, 1/4, 1/8 and 1/16 of a second. The evaluation was performed directly on the dataset to determine the prediction error. The neural networks were also evaluated on a robotic arm in a simulated environment, to show whether the prediction methods would be suitable for a real-life system. Both methods show promising results when comparing the prediction error. From the simulated system, it could be concluded that with the LSTM prediction the robotic arm would generally precede the actual position. The results indicate that the methods described in this thesis could be used as a stepping stone towards a human-robot interactive system.
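The four look-ahead horizons above translate naturally into one predictor per horizon. The sketch below assumes a 120 Hz motion-capture rate and uses a synthetic hand trajectory; both are assumptions, as the dataset's actual frame rate is not given in the abstract.

import numpy as np
from tensorflow import keras

RATE = 120                                   # assumed mocap frames/second
horizons = {"1/2 s": RATE // 2, "1/4 s": RATE // 4,
            "1/8 s": RATE // 8, "1/16 s": RATE // 16}

rng = np.random.default_rng(4)
pos = np.cumsum(rng.normal(0, 0.01, (3000, 3)), axis=0)  # x,y,z hand path
W = 60                                       # half a second of history

for name, h in horizons.items():
    X = np.array([pos[i:i + W] for i in range(len(pos) - W - h)])
    y = pos[W + h:]                          # position h frames ahead
    model = keras.Sequential([keras.Input(shape=(W, 3)),
                              keras.layers.LSTM(32),
                              keras.layers.Dense(3)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, verbose=0)
    print(name, "-> trained on a horizon of", h, "frames")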
APA, Harvard, Vancouver, ISO, and other styles
10

Alsulami, Khalil Ibrahim D. "Application-Based Network Traffic Generator for Networking AI Model Development." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1619387614152354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mohammadisohrabi, Ali. "Design and implementation of a Recurrent Neural Network for Remaining Useful Life prediction." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
A key idea underlying many Predictive Maintenance solutions is the Remaining Useful Life (RUL) of machine parts: a prediction of the time remaining before a machine part is likely to require repair or replacement. Nowadays, given that systems are getting more complex, innovative Machine Learning and Deep Learning algorithms can be deployed to study the more sophisticated correlations in complex systems. The exponential increase in both data accumulation and processing power makes Deep Learning algorithms more desirable than before. In this work a Long Short-Term Memory (LSTM) network, a type of Recurrent Neural Network, is designed to predict the Remaining Useful Life (RUL) of turbofan engines, using a dataset taken from the NASA data repository. Finally, the performance obtained by the RNN is compared to the best Machine Learning algorithm for the dataset.
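A common way to pose RUL prediction, consistent with the abstract, is windowed regression against a decreasing cycle counter. In this hedged sketch the toy array stands in for the NASA turbofan sensor channels, and the window and layer sizes are assumptions.

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(5)
n_cycles, n_sensors = 200, 14
engine = rng.normal(0, 1, (n_cycles, n_sensors))        # one engine's history
rul = np.arange(n_cycles - 1, -1, -1, dtype="float32")  # cycles to failure

W = 30
X = np.array([engine[i:i + W] for i in range(n_cycles - W)])
y = rul[W:]                                  # RUL at the end of each window

model = keras.Sequential([
    keras.Input(shape=(W, n_sensors)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),                   # predicted remaining cycles
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, verbose=0)
print("predicted RUL at last window:",
      model.predict(X[-1:], verbose=0)[0, 0])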
APA, Harvard, Vancouver, ISO, and other styles
12

Lousseief, Elias. "MahlerNet : Unbounded Orchestral Music with Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264993.

Full text
Abstract:
Modelling music with mathematical and statistical methods in general, and with neural networks in particular, has a long history and has been well explored in the last decades. Exactly when the first attempt at strictly systematic music took place is hard to say; some would say in the days of Mozart, others would say even earlier, but it is safe to say that the field of algorithmic composition has a long history. Even though composers have always had structure and rules as part of the writing process, implicitly or explicitly, following rules at a stricter level was well investigated in the middle of the 20th century, at which point the first music-writing computer program based on mathematics was also implemented. This work in computer science focuses on the history of musical composition with computers, also known as algorithmic composition, using machine learning and neural networks. It consists of two parts: an in-depth literature survey covering the last decades in the field, from which inspiration and experience are drawn, and the construction of MahlerNet, a neural network based on the previous architectures MusicVAE, BALSTM, PerformanceRNN and BachProp, capable of modelling polyphonic symbolic music with up to 23 instruments. MahlerNet is a new architecture that uses a custom preprocessor with musical heuristics to normalize and filter the input and output files in MIDI format into the data representation it uses for processing. MahlerNet and its preprocessor were written entirely for this project, and the system produces music that clearly shows musical characteristics reminiscent of the data it was trained on, with some long-term structure, albeit not in the form of motives and themes.
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, Szu-Yu, and 劉思妤. "Using LSTM algorithm to improve network management in SDN." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/qqh5m3.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Information Management
107 (2018)
Many network monitoring technologies exist today, and network administrators must have accurate monitoring to operate efficiently. In this thesis, we propose a dynamic threshold-adjustment method based on a long short-term memory network. In a resource-constrained network, SDN traffic engineering (SDN TE) can improve network utilization and service quality. We use a minimum-bandwidth-utilization routing mechanism to avoid congestion: the controller periodically monitors the traffic utilization of each link in the network, and overused links are identified as bottleneck links. After removing the bottleneck links according to their utilization rate, the remaining bandwidth is recalculated and the routing algorithm selects an alternate path. When network traffic increases, the proposed dynamic adjustment method based on long short-term memory networks can effectively predict traffic and improve network efficiency and service quality.
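The dynamic-threshold idea above can be sketched as: forecast each link's utilization with an LSTM, then flag links whose predicted load crosses a cutoff as bottlenecks. All sizes and the 0.8 cutoff below are assumptions, not the thesis's values.

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(6)
util = np.clip(0.5 + 0.3 * np.sin(np.arange(600) / 20)
               + rng.normal(0, 0.05, 600), 0, 1)   # link utilization in [0,1]

W = 24
X = np.array([util[i:i + W] for i in range(len(util) - W)])[..., None]
y = util[W:]
model = keras.Sequential([keras.Input(shape=(W, 1)),
                          keras.layers.LSTM(16),
                          keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, verbose=0)

THRESHOLD = 0.8                              # assumed bottleneck cutoff
pred = float(model.predict(util[-W:][None, :, None], verbose=0)[0, 0])
print("bottleneck" if pred > THRESHOLD else "ok", f"(predicted {pred:.2f})")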
APA, Harvard, Vancouver, ISO, and other styles
14

AGGARWAL, TUSHAR. "IMAGE DESCRIPTIVE SUMMARIZATION BY DEEP LEARNING AND ADVANCED LSTM MODEL ARCHITECTURE." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/17084.

Full text
Abstract:
Automatic image description is becoming a trending point of interest in the current era of research. The research community keeps proposing enhanced, intuitive algorithms for the problem, yet there is still plenty of room for improvement, which makes the field attractive to researchers and industry alike. Of the various image descriptor algorithms, some outperform others on basic descriptor requirements such as robustness, invisibility, and processing cost. This thesis studies a new hybrid image descriptor scheme which, combined with the proposed model algorithm, provides efficient results. In outline:
- First, the host image is trained in 9x9 kernels using a CNN model. The idea behind these 1024 kernels of the host image is to divide each pixel of the host image according to the lowest human visual system characteristics, i.e., the lowest entropy and edge-entropy values.
- The host image is further divided into 8x8 pixel blocks, giving 64 rows and 64 columns of such blocks, i.e., 64x64x8 blocks in total for a host image of size 512x512.
- The pre-trained labels are then attached to the pixelated model obtained from the CNN model and embedded with the LSTM algorithm, which assigns a label to each pixelated kernel and performs the embedding into the host image.
- This embedding and extraction is done with the Long Short-Term Memory algorithm, which is explained in later chapters.
- Using the embedded image, the Q function of the host image is estimated with an embedded RNN network, which yields both the best value of the Q function and the best computed label for the host image.
In a nutshell, this project combines several major algorithms to generate the best possible results. The adopted criteria contributed significantly to establishing a scheme with high robustness against attacks without affecting the visual quality of the image. Briefly, the project consists of the following subsections: 1. Convolutional Neural Networks (CNN); 2. Long Short-Term Memory (LSTM); 3. Recurrent Neural Networks (RNN).
APA, Harvard, Vancouver, ISO, and other styles
15

Tsai, Jia-Ling, and 蔡佳陵. "A LSTM-Based Algorithm for the Estimation of Plantar Pressure Dynamics Using Inertial Sensors." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/f9dsxf.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
106 (2017)
Gait analysis has become prevalent in many fields such as sports biomechanics, medical diagnostics, and injury prevention. For plantar pressure dynamics estimation in gait analysis, the vertical component of the ground reaction force (vGRF) and the center of pressure (CoP) trajectory are vital parameters of human locomotion and balance. Three main approaches are used to measure gait parameters: computer vision, floor sensors, and wearables. Though the first two techniques are accurate, they are expensive and limited in sensing area. Wearable devices have the advantages of portability and low cost; however, contact sensing such as pressure detectors suffers from poor long-term reliability. In this research, we employ 6-DOF inertial measurement units (IMUs) (gyro system, Taiwan) to estimate the vGRF and CoP trajectories. Unlike conventional pressure sensing, IMUs are low-cost and durable. Four IMUs are attached to the heels of both feet, the left shank, and the waist, and an F-scan pressure sensing system serves as the ground truth. On top of this hardware setup, we propose an LSTM model to predict vGRF and CoP from the acceleration and angular velocity data of the IMUs; the data synchronization, data formatting, and LSTM structure are explained. Of 33,880 sample points of normal walking, the first 70% is used for training and the last 30% for testing. Experiments show that the root mean square error of the peak vGRF is 4.83 N (an error of 4.025%) and the root mean square error of the CoP excursion is 0.14 cm.
APA, Harvard, Vancouver, ISO, and other styles
16

Oguntala, George A., Yim Fun Hu, Ali A. S. Alabdullah, Raed A. Abd-Alhameed, Muhammad Ali, and D. K. Luong. "Passive RFID Module with LSTM Recurrent Neural Network Activity Classification Algorithm for Ambient Assisted Living." 2021. http://hdl.handle.net/10454/18418.

Full text
Abstract:
Human activity recognition from sensor data is a critical research topic for achieving remote health monitoring and ambient assisted living (AAL). In AAL, sensors are integrated into conventional objects with the aim of supporting the target's capabilities through digital environments that are sensitive, responsive and adaptive to human activities. Emerging technological paradigms to support AAL within the home or community setting offer people the prospect of more individually focused care and improved quality of living. In the present work, an ambient human activity classification framework that augments information from the received signal strength indicator (RSSI) of passive RFID tags is proposed to obtain detailed activity profiling. Key indices of position, orientation, mobility, and degree of activity, which are critical for guiding reliable clinical management decisions, are studied with 4 volunteers to simulate the research objective. A two-layer, fully connected sequence long short-term memory recurrent neural network model (LSTM RNN) is employed. The LSTM RNN model extracts RSS features from the sensor data and classifies the sampled activities using SoftMax. The performance of the LSTM model is evaluated for different data sizes, and the hyper-parameters of the RNN are adjusted to optimal states, resulting in an accuracy of 98.18%. The proposed framework suits smart health and smart homes well, offering a pervasive sensing environment for the elderly and for persons with disability or chronic illness.
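The two-layer LSTM with a SoftMax output described above can be sketched directly in Keras. The number of tags, window length, the four activity classes and the random RSSI values below are illustrative assumptions.

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(12)
N, W, TAGS, CLASSES = 800, 40, 6, 4          # windows, length, tags, classes
X = rng.normal(-60, 5, (N, W, TAGS))         # RSSI (dBm) from passive tags
y = rng.integers(0, CLASSES, N)              # toy activity labels

model = keras.Sequential([
    keras.Input(shape=(W, TAGS)),
    keras.layers.LSTM(64, return_sequences=True),      # first LSTM layer
    keras.layers.LSTM(64),                             # second LSTM layer
    keras.layers.Dense(CLASSES, activation="softmax"), # SoftMax classifier
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)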
APA, Harvard, Vancouver, ISO, and other styles
17

Tseng, Xian-Hong, and 曾憲泓. "Using LSTM algorithm to establish an Evaluation System for Child with Autistic Disorder during Autism Diagnostic Observation Schedule Interview." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/nr83hf.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
106 (2017)
Autism spectrum disorder (ASD) is a highly prevalent neurodevelopmental disorder, often characterized in medical research by social-communicative deficits and restricted, repetitive interests. The heterogeneous nature of ASD in its behavioral manifestations encompasses broad syndromes such as classical autism (AD), Asperger syndrome (AS), and high-functioning autism (HFA). To evaluate the degree of ASD and these syndromes, doctors diagnose through clinical observation and auxiliary diagnostic tools, one of which is the Autism Diagnostic Observation Schedule (ADOS), a gold-standard diagnostic tool. However, the diagnosis of autism suffers from problems such as subjective evaluation, lack of scalability, and time consumption. In this work, we design an automatic assessment system based on computing multimodal behavior features, including the acoustic characteristics and body movements of the participant, using the LSTM algorithm and machine learning techniques to build a model of the ADOS story-telling part, following the behavioral signal processing (BSP) concept. Our behavior-based measurement achieves competitive, sometimes better, recognition accuracy in discriminating between the three syndromes of ASD when compared to the investigator's clinical rating of the participant during ADOS. Keywords: autism spectrum disorder, autism diagnostic observation schedule, long short-term memory, behavioral signal processing (BSP), multimodal behaviors
APA, Harvard, Vancouver, ISO, and other styles
18

CHU, YI-JUI, and 朱奕叡. "A Study of PCA dimensionality reduction technique and news articles for the Prediction of Stock Price - Using LSTM algorithm as modeling technology." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/nz4at6.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Executive Master's Program in Information Management
107 (2018)
The main purpose of this study is to examine the influence of different data structure attributes, the number of feature values, and the length of the training period on the accuracy of stock price trend prediction. In this study, a deep learning technique, the LSTM algorithm, is used as the predictive model; the PCA algorithm is adopted to screen the technical indicators for dimensionality reduction; and Word2Vec text-mining technology is used to process the unstructured stock-news data, which is adopted as a predictive feature of the LSTM model. Situational experiments test the impact of these choices on stock price trend forecasts. The experimental results show the following. First, the performance obtained with a 50-day training period is better than that with a 60-day training period, so a training period of appropriate length matters. Second, using more feature values in the prediction model does not necessarily generate better prediction results, and using PCA to filter feature values can indeed improve prediction accuracy. Third, using a rising-words lexicon on the unstructured data to obtain news scores yields higher prediction accuracy than using news vectors as features; for TSMC (2330) and Foxconn (2317), the stock tests demonstrate good accuracy rates of 79.59% and 78.95%. Finally, unstructured news scores combined with structured, dimensionally reduced technical indicators best predict the rise and fall of individual stocks; the experimental targets reach an accuracy of nearly 90%, which meets the standard for actual trading practice in online trading applications.
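The PCA screening step described above can be sketched in a few lines: standardize the technical indicators, keep a handful of principal components, and feed those to the LSTM. The indicator matrix and component count below are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
indicators = rng.normal(0, 1, (250, 15))     # 250 days x 15 indicators

X = StandardScaler().fit_transform(indicators)
pca = PCA(n_components=5).fit(X)
X_reduced = pca.transform(X)                 # reduced features for the LSTM
print("explained variance:", pca.explained_variance_ratio_.round(3))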
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Brian, and 陳柏穎. "AUC oriented Bidirectional LSTM-CRF Models to Identify Algorithms Described in an Abstract." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/p3grat.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
105 (2016)
In this thesis, we attempt to identify the algorithms mentioned in a paper's abstract. We further want to discriminate the algorithm proposed in the paper from algorithms that are only mentioned or compared against, since we are more interested in the former. We model this task as a sequence labeling task and propose to use a state-of-the-art deep learning model, LSTM-CRF, as our solution. However, the data and labels are generally imbalanced, since not every sentence in an abstract describes the paper's algorithm; that is, the ratio between different labels is skewed. As a result, the traditional LSTM-CRF model is not suitable, since it only optimizes accuracy. Instead, it is more reasonable to optimize AUC on imbalanced data, because AUC can deal with skewed labels and performs better at predicting rare labels. Our experiments show that the proposed AUC-optimized LSTM-CRF outperforms the traditional LSTM-CRF. We also show the ranking of algorithms currently in use, and find the trend of different algorithms used in recent years. Moreover, we are able to discover new algorithms that do not exist in our training data.
APA, Harvard, Vancouver, ISO, and other styles
20

Kurach, Karol. "Deep Neural Architectures for Algorithms and Sequential Data." Doctoral thesis, 2016. https://depotuw.ceon.pl/handle/item/1860.

Full text
Abstract:
The first part of the dissertation describes two deep neural architectures with external memories: the Neural Random-Access Machine (NRAM) and Hierarchical Attentive Memory (HAM). The NRAM architecture is inspired by Neural Turing Machines, but the crucial difference is that it can manipulate and dereference pointers to its random-access memory. This allows it to learn concepts that require pointer chasing, such as "linked list" or "binary tree". The HAM architecture is based on a binary tree with leaves corresponding to memory cells. This enables memory access in Θ(log n), a significant improvement over the Θ(n) access used in the standard attention mechanism. We show that a Long Short-Term Memory (LSTM) network augmented with HAM can successfully learn to solve a number of challenging algorithmic problems. In particular, it is the first architecture that learns from pure input/output examples to sort n numbers in time Θ(n log n), and the solution generalizes well to longer sequences. We also show that HAM is very generic and can be trained to act like classic data structures: a stack, a FIFO queue and a priority queue. The second part of the dissertation describes three novel systems based on deep neural networks. The first is a framework for finding computationally efficient versions of symbolic math expressions. Using a recursive neural network, it can efficiently search the state space and quickly find identities with significantly better time complexity (e.g., Θ(n^2) instead of exponential time). Then, we present a system for predicting dangerous events from multivariate, non-stationary time series data based on recurrent neural networks. It requires almost no feature engineering and achieved very good results in two machine learning competitions. Finally, we describe Smart Reply, an end-to-end system for suggesting automatic responses to e-mails. The system is capable of handling hundreds of millions of messages daily. Smart Reply was successfully deployed in Google Inbox and currently generates 10% of responses on mobile devices.
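The Θ(log n) access idea behind HAM can be conveyed with a toy, non-learned sketch: attention descends a binary tree from the root, making one left/right choice per level, so one of n leaves is reached in log2(n) steps. The JOIN and scoring functions below are crude stand-ins for the learned networks in the dissertation.

import numpy as np

rng = np.random.default_rng(9)
n = 8                                        # number of memory cells
leaves = rng.normal(0, 1, (n, 4))            # leaf embeddings

def build_tree(cells):
    """Internal nodes summarize their subtrees (toy bottom-up JOIN)."""
    levels = [cells]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append((prev[0::2] + prev[1::2]) / 2)
    return levels[::-1]                      # root level first

def access(levels, query):
    """Descend from the root: log2(n) child choices reach one leaf."""
    idx = 0
    for depth in range(len(levels) - 1):
        left = levels[depth + 1][2 * idx]
        right = levels[depth + 1][2 * idx + 1]
        idx = 2 * idx + int(query @ right > query @ left)
    return idx

levels = build_tree(leaves)
query = rng.normal(0, 1, 4)
print("attended leaf:", access(levels, query))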
APA, Harvard, Vancouver, ISO, and other styles
21

JHA, ROMAN KUMAR. "FORECASTING OF SOLAR IRRADIATION USING DEEP LEARNING ALGORITHMS." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19274.

Full text
Abstract:
This dissertation presents one of the many applications of Machine Learning (ML) and Deep Learning in the field of forecasting. The ML algorithms used are Multivariate Linear Regression (MLR), Support Vector Regression (SVR), Feed-Forward Neural Network (FFNN) and Layered Recurrent Neural Network (RNN), applied to solar irradiation forecasting. The forecasting was done for a period of ten months in 2021 based on the historical data available for 2019 and 2020. MATLAB was used to develop the ML models. The models developed with the above-mentioned algorithms were compared on the basis of key performance indicators (KPIs): mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and R-squared value (coefficient of determination). The dissertation then proposes forecasting solar irradiation with a deep learning algorithm: a sequence-to-sequence (S2S) model that uses LSTM cells in its encoder and decoder sections. The forecasting was therefore initially done with a plain LSTM (Long Short-Term Memory) network in order to allow a comparative analysis. Python was used for developing the Deep Learning models; the algorithms were developed in a Jupyter notebook using the Keras and TensorFlow libraries. The loss function used for optimisation is the mean square error (MSE) and the key performance indicator is the root mean square error (RMSE). The results obtained are well within the desirable limits for both ML and Deep Learning.
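The sequence-to-sequence layout the dissertation describes, with LSTM cells in both the encoder and the decoder, can be sketched in Keras as below. The synthetic irradiation series and all window and layer sizes are assumptions; the MSE loss and RMSE reporting follow the abstract.

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(10)
t = np.arange(2000)
irr = np.clip(np.sin(t * 2 * np.pi / 24), 0, None) + rng.normal(0, 0.02, 2000)

IN, OUT = 48, 24                       # 48 h of history -> next 24 h
X = np.array([irr[i:i + IN] for i in range(len(irr) - IN - OUT)])[..., None]
y = np.array([irr[i + IN:i + IN + OUT]
              for i in range(len(irr) - IN - OUT)])[..., None]

model = keras.Sequential([
    keras.Input(shape=(IN, 1)),
    keras.layers.LSTM(64),                             # encoder
    keras.layers.RepeatVector(OUT),                    # bridge to decoder
    keras.layers.LSTM(64, return_sequences=True),      # decoder
    keras.layers.TimeDistributed(keras.layers.Dense(1)),
])
model.compile(optimizer="adam", loss="mse")            # MSE loss, as above
model.fit(X, y, epochs=2, verbose=0)
print("train RMSE:", float(np.sqrt(model.evaluate(X, y, verbose=0))))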
APA, Harvard, Vancouver, ISO, and other styles
22

Peterson, Cole. "Generating rhyming poetry using LSTM recurrent neural networks." Thesis, 2019. http://hdl.handle.net/1828/10801.

Full text
Abstract:
Current approaches to generating rhyming English poetry with a neural network involve constraining output to enforce the condition of rhyme. We investigate whether this approach is necessary, or if recurrent neural networks can learn rhyme patterns on their own. We compile a new dataset of amateur poetry which allows rhyme to be learned without external constraints because of the dataset’s size and high frequency of rhymes. We then evaluate models trained on the new dataset using a novel framework that automatically measures the system’s knowledge of poetic form and generalizability. We find that our trained model is able to generalize the pattern of rhyme, generate rhymes unseen in the training data, and also that the learned word embeddings for rhyming sets of words are linearly separable. Our model generates a couplet which rhymes 68.15% of the time; this is the first time that a recurrent neural network has been shown to generate rhyming poetry a high percentage of the time. Additionally, we show that crowd-source workers can only distinguish between our generated couplets and couplets from our dataset 63.3% of the time, indicating that our model generates poetry with coherency, semantic meaning, and fluency comparable to couplets written by humans.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
23

Lopes, Tiago Miguel Dias da Gama Lobo de Sousa. "Como construir um modelo híbrido de previsão para o S&P500 usando um modelo VECM com um algoritmo LSTM?" Master's thesis, 2021. http://hdl.handle.net/10071/23512.

Full text
Abstract:
The forecasting of financial series is part of central banks' monetary-policy decision-making process. Mendes, Ferreira and Mendes (2020) proposed a hybrid model that combines a VECM (Vector Error Correction Model) with the deep learning algorithm LSTM (Long Short-Term Memory) for a multivariate forecast of the U.S. stock index S&P500, using the Nasdaq and Dow Jones series and 3-month U.S. Treasury bill secondary-market yields, with weekly data between 19/04/2019 and 17/04/2020. In this dissertation, that article was replicated and a similar hybrid model was constructed for the same purpose, obtaining an 86% lower MAPE forecast error (4% versus 28%), even including the COVID-19 crisis. The period without the crisis was analyzed and a MAPE of 1.9% was obtained. It was found that data leakage between the test and training periods is a problem that impairs the results. Different ways of constructing the hybrid model were compared by varying the number of lags and training epochs in the LSTM, the impact of using the log-series was verified, and benchmarking against univariate and multivariate LSTMs was performed. In addition, Granger causality was tested in periods with strong intervention by the FED (the 1970s and 1980s, and the COVID-19 crisis in February 2020), concluding that changes in yields Granger-cause the returns of the analyzed stock indices, while outside these periods the causal relationship is reversed, with index returns causing the changes in yields.
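One plausible reading of the hybrid, sketched below under stated assumptions, is: fit a VECM on the level series, then train an LSTM on the VECM residuals and add the two forecasts. The synthetic cointegrated series, the lag order and the network size are illustrative, not the paper's specification.

import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM
from tensorflow import keras

rng = np.random.default_rng(11)
common = np.cumsum(rng.normal(0, 1, 400))            # shared stochastic trend
data = np.column_stack([common + rng.normal(0, 0.5, 400) for _ in range(3)])

# Linear part: VECM on the three cointegrated series.
res = VECM(data, k_ar_diff=2, coint_rank=1).fit()
resid = res.resid                                    # what the VECM missed

# Nonlinear part: LSTM on windows of VECM residuals.
W = 10
X = np.array([resid[i:i + W] for i in range(len(resid) - W)])
y = resid[W:]
lstm = keras.Sequential([keras.Input(shape=(W, 3)),
                         keras.layers.LSTM(32),
                         keras.layers.Dense(3)])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=3, verbose=0)

# Hybrid one-step forecast = VECM forecast + predicted residual correction.
hybrid = res.predict(steps=1) + lstm.predict(resid[-W:][None], verbose=0)
print("hybrid one-step forecast:", hybrid.round(3))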
APA, Harvard, Vancouver, ISO, and other styles
24

Rohovets, Taras. "Machine learning algorithms to predict stocks movements with Python language and dedicated libraries." Master's thesis, 2019. http://hdl.handle.net/10400.26/30163.

Full text
Abstract:
This research work focuses on machine learning algorithms for making predictions in financial markets. The foremost objective is to test whether two machine learning algorithms, SVM and LSTM, are capable of predicting price movements on different time-frames, and then to develop a comparative analysis. Supervised machine learning is applied with different input features. The practical software component of this thesis uses the Python programming language to test the hypothesis and act as a proof of concept. The financial quote data were obtained from online financial databases. The results demonstrate that the SVM is capable of predicting the direction of the price, while the LSTM did not present reliable results.
APA, Harvard, Vancouver, ISO, and other styles
25

Rita, Nicole Oliveira. "Machine learning techniques for predicting the stock market using daily market variables." Master's thesis, 2020. http://hdl.handle.net/10362/94992.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence
Predicting the stock market was never seen as an easy task. The complexity of financial systems makes it extremely difficult for anything or anyone to predict what the future of prices holds, be it a day, a week, a month or even a year. Many variables influence the market's volatility, and some of these may even be the gut feeling of an investor on a specific day. Several machine learning techniques have already been applied to forecast multiple stock market indexes, some presenting good accuracy when predicting whether prices will go up or down, and others low error values when dealing with regression data. This work aims to apply some state-of-the-art algorithms and compare their performance with Long Short-Term Memory (LSTM) as well as with each other. The variables used for this empirical work were the prices of the Dow Jones Industrial Average (DJIA) registered for every business day, from January 1st 2006 to January 1st 2018, for 29 companies. Some changes and adjustments were made to the original variables to present different data types to the algorithms. To ensure quality and certainty when evaluating the flexibility and stability of each model, the error measure used was the root mean squared error, and the Mann-Whitney U test was applied to assess the statistical significance of the results obtained.
APA, Harvard, Vancouver, ISO, and other styles
