Dissertations on the topic "Recurrent Neural Network architecture"

Consult the top 50 dissertations for your research on the topic "Recurrent Neural Network architecture".

1

Bopaiah, Jeevith. "A recurrent neural network architecture for biomedical event trigger classification." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/73.

Abstract:
A “biomedical event” is a broad term used to describe the roles and interactions between entities (such as proteins, genes, and cells) in a biological system. The task of biomedical event extraction aims at identifying and extracting these events from unstructured texts. An important component in the early stage of the task is biomedical trigger classification, which involves identifying and classifying the words or phrases that indicate an event. In this thesis, we present our work on biomedical trigger classification developed using the multi-level event extraction dataset. We restrict the scope of our classification to 19 biomedical event types grouped under four broad categories: Anatomical, Molecular, General, and Planned. While most existing approaches are based on traditional machine learning algorithms that require extensive feature engineering, our model relies on neural networks to implicitly learn important features directly from the text. We use natural language processing techniques to transform the text into vectorized inputs that can be used in a neural network architecture. To the best of our knowledge, this is the first time neural attention strategies have been explored in the area of biomedical trigger classification. Our best results were obtained from an ensemble of 50 models, which produced a micro F-score of 79.82%, an improvement of 1.3% over the previous best score.
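The attention-based classification the abstract alludes to can be illustrated with a minimal numpy sketch: attention weights pooled over vectorized context tokens, followed by a softmax over the 19 event types. All parameter names here (`w_att`, `W_out`, `b_out`) are hypothetical stand-ins for illustration, not the thesis's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_trigger_probs(token_vecs, w_att, W_out, b_out):
    """Score a candidate trigger's context with dot-product attention.

    token_vecs: (seq_len, d) vectorized context tokens
    w_att:      (d,) attention query vector (hypothetical parameter)
    W_out:      (d, n_classes), b_out: (n_classes,) output layer
    """
    scores = token_vecs @ w_att              # (seq_len,) relevance per token
    alpha = softmax(scores)                  # attention weights over tokens
    context = alpha @ token_vecs             # (d,) attention-weighted summary
    return softmax(context @ W_out + b_out)  # probabilities over event types

rng = np.random.default_rng(0)
seq_len, d, n_classes = 7, 16, 19            # 19 event types, as in the thesis
probs = attention_trigger_probs(rng.normal(size=(seq_len, d)),
                                rng.normal(size=d),
                                rng.normal(size=(d, n_classes)),
                                np.zeros(n_classes))
```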
2

Pan, YaDung. "Fuzzy adaptive recurrent counterpropagation neural networks: A neural network architecture for qualitative modeling and real-time simulation of dynamic processes." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187101.

Abstract:
In this dissertation, a new artificial neural network (ANN) architecture called the fuzzy adaptive recurrent counterpropagation neural network (FARCNN) is presented. FARCNNs can be synthesized directly from a set of training data, making system behavioral learning extremely fast. They can be applied directly and effectively to model both static and dynamic system behavior from observed input/output behavioral patterns alone, without knowing anything about the internal structure of the system under study. The FARCNN architecture is derived from the methodology of fuzzy inductive reasoning and a basic form of counterpropagation neural network (CNN) for efficient implementation of finite state machines. Analog signals are converted to fuzzy signals by a new type of fuzzy A/D converter, thereby keeping the size of the CNN's Kohonen layer manageably small. Fuzzy inferencing is accomplished by an application-independent feedforward network trained by means of backpropagation, and global feedback is used to represent full system dynamics. The FARCNN architecture combines the advantages of the quantitative approach (neural networks) with those of the qualitative approach (fuzzy logic) in an efficient autonomous system modeling methodology, and it makes the simulation of mixed quantitative and qualitative models more feasible. In simulation experiments, we show that FARCNNs can be applied directly and easily to different types of systems, including static continuous nonlinear systems and discrete sequential systems, as well as within large dynamic continuous nonlinear control systems, where the FARCNN is embedded into much larger industry-sized quantitative models, even permitting a feedback structure to be placed around it.
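As a rough illustration of the fuzzy A/D conversion step described above, the sketch below maps an analog value to normalised membership degrees over evenly spaced triangular fuzzy classes. The landmark placement and normalisation are assumptions for illustration, not the converter actually proposed in the dissertation.

```python
import numpy as np

def fuzzify(x, centers):
    """Map an analog value to normalised membership degrees over
    evenly spaced triangular fuzzy classes (illustrative only)."""
    centers = np.asarray(centers, dtype=float)
    width = np.diff(centers).mean()          # assumes evenly spaced landmarks
    mu = np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)
    return mu / mu.sum()                     # normalised membership vector

# A value halfway between two class centers belongs equally to both.
memberships = fuzzify(0.25, [0.0, 0.5, 1.0])
```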
3

Hanson, Jack. "Protein Structure Prediction by Recurrent and Convolutional Deep Neural Network Architectures." Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382722.

Abstract:
In this thesis, the application of convolutional and recurrent machine learning techniques to several key structural properties of proteins is explored. Chapter 2 presents the first application of an LSTM-BRNN in structural bioinformatics. The method, called SPOT-Disorder, predicts the per-residue probability of a protein being intrinsically disordered (i.e. unstructured, or flexible). Using this methodology, SPOT-Disorder achieved the highest accuracy in the literature without separating short and long disordered regions during training, as was required in previous models, and was additionally proven capable of indirectly discerning functional sites located in disordered regions. Chapter 3 extends the application of an LSTM-BRNN to a two-dimensional problem in the prediction of protein contact maps. Protein contact maps describe the intra-sequence distance between each residue pairing at a distance cutoff, providing key restraints on the possible conformations of a protein. This work, entitled SPOT-Contact, introduced the coupling of two-dimensional LSTM-BRNNs with ResNets to maximise dependency propagation, achieving the highest reported accuracies for contact map precision. Several models of varying architectures were trained and combined as an ensemble predictor in order to minimise incorrect generalisations. Chapter 4 discusses the utilisation of an ensemble of LSTM-BRNNs and ResNets to predict local protein one-dimensional structural properties. The method, called SPOT-1D, predicts a wide range of local structural descriptors, including several solvent exposure metrics, secondary structure, and real-valued backbone angles. SPOT-1D was significantly improved by including the outputs of SPOT-Contact in the input features. Using this topology led to the best reported accuracy metrics for all predicted properties. The protein structures constructed from the backbone angles predicted by SPOT-1D achieved the lowest average error from their native structures in the literature. Chapter 5 presents an update on SPOT-Disorder, which employs the inputs from SPOT-1D in conjunction with an ensemble of LSTM-BRNNs and Inception Residual Squeeze-and-Excitation networks to predict protein intrinsic disorder. This model confirmed the enhancement provided by the coupled architectures over the LSTM-BRNN alone, whilst also introducing a new convolutional format to the bioinformatics field. The work in Chapter 6 utilises the same topology as SPOT-1D for single-sequence prediction of protein intrinsic disorder in SPOT-Disorder-Single. Single-sequence prediction describes the prediction of a protein's properties without the use of evolutionary information. While evolutionary information generally improves the performance of a computational model, it comes at the expense of a greatly increased computational and time load. Removing it from the model allows genome-scale protein analysis at a minor drop in accuracy. Moreover, models trained without evolutionary profiles can be more accurate for proteins with limited, and therefore unreliable, evolutionary information.
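A contact map of the kind SPOT-Contact predicts can be defined directly from coordinates: residues are in contact when their pairwise distance falls below a cutoff. The sketch below uses an assumed 8 Å cutoff between residue positions; note the thesis predicts such maps from sequence rather than computing them from known structures.

```python
import numpy as np

def contact_map(coords, cutoff=8.0):
    """Binary residue-residue contact map: 1 where the pairwise
    distance between residue coordinates falls below the cutoff."""
    diffs = coords[:, None, :] - coords[None, :, :]   # (n, n, 3) pair offsets
    dist = np.linalg.norm(diffs, axis=-1)             # (n, n) distances
    return (dist < cutoff).astype(int)

# Three residues on a line: only the first pair is within 8 units.
cm = contact_map(np.array([[0.0, 0.0, 0.0],
                           [0.0, 0.0, 5.0],
                           [0.0, 0.0, 20.0]]))
```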
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
4

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation; such applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computation. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence-to-sequence problems such as Machine Translation. They contain data dependencies among the executions of time-steps, so the amount of parallelism is severely limited, and evaluating them in an energy-efficient manner is therefore more challenging than evaluating other DNN algorithms. This thesis studies applications using RNNs in order to improve their energy efficiency on specialized architectures. Specifically, we propose novel energy-saving techniques and highly efficient architectures tailored to the evaluation of RNNs, focusing on the most successful RNN topologies: the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). First, we characterize a set of RNNs running on a modern SoC and identify that accessing memory to fetch the model weights is the main source of energy consumption. We therefore propose E-PUR: an energy-efficient processing unit for RNN inference. E-PUR achieves 6.8x speedup and improves energy consumption by 88x compared to the SoC; these benefits are obtained by improving the temporal locality of the model weights. In E-PUR, fetching the parameters is the main source of energy consumption, so we strive to reduce memory accesses and propose a scheme to reuse previous computations. Our observation is that, when evaluating the input sequences of an RNN model, the output of a given neuron tends to change only slightly between consecutive evaluations. We therefore develop a scheme that caches the neurons' outputs and reuses them whenever it detects that the change between the current and the previously computed output value of a given neuron is small, avoiding fetching the weights. To decide when to reuse a previous value, we employ a Binary Neural Network (BNN) as a predictor of reusability; the low-cost BNN can be employed in this context since its output is highly correlated with the output of the RNN. We show that our proposal avoids more than 24.2% of computations; on average, energy consumption is reduced by 18.5% for a speedup of 1.35x. An RNN model's memory footprint is usually reduced by using low precision for evaluation and storage. In this case, the minimum precision is identified offline and set such that the model maintains its accuracy, and the same precision is used to compute all time-steps. Yet we observe that some time-steps can be evaluated with a lower precision while preserving the accuracy. We therefore propose a technique that dynamically selects the precision used to compute each time-step. A challenge of this proposal is choosing the lower bit-width; we address it by recognizing that information from a previous evaluation can be employed to determine the precision required in the current time-step. Our scheme evaluates 57% of the computations at a bit-width lower than the fixed precision employed by static methods. Implemented on E-PUR, it provides 1.46x speedup and 19.2% energy savings on average.
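The computation-reuse scheme can be sketched in a few lines: cache each neuron's last output and skip the update when the change would be small. In this sketch an oracle peeks at the true output to decide; the thesis instead uses a cheap binary network as the predictor so the expensive weight fetch can actually be skipped. The threshold value is illustrative.

```python
import math

def evaluate_with_reuse(neuron_fn, inputs, tau=0.05):
    """Cache a neuron's output and reuse it when the new value would
    differ by less than tau (oracle-style sketch of the reuse scheme)."""
    cached, outputs, skipped = None, [], 0
    for x in inputs:
        y = neuron_fn(x)                     # the BNN predictor would avoid this
        if cached is not None and abs(y - cached) < tau:
            outputs.append(cached)           # reuse: no weight fetch needed
            skipped += 1
        else:
            outputs.append(y)
            cached = y
    return outputs, skipped

# Two nearly identical inputs are served from the cache; the large jump is not.
outs, skipped = evaluate_with_reuse(math.tanh, [0.0, 0.001, 0.002, 1.0])
```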
5

Мельникова, І. К. "Інтелектуальна технологія прогнозу курсу криптовалют методом рекурентних нейромереж". Master's thesis, Сумський державний університет, 2019. http://essuir.sumdu.edu.ua/handle/123456789/76753.

Abstract:
An intelligent system for forecasting cryptocurrency exchange rates with recurrent neural networks was developed; software supporting operations on cryptocurrencies was designed and implemented as web services with a microservice architecture to improve their efficiency.
6

Tekin, Mim Kemal. "Vehicle Path Prediction Using Recurrent Neural Network." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166134.

Abstract:
Vehicle path prediction can be used to support Advanced Driver Assistance Systems (ADAS), which cover technologies such as autonomous braking and adaptive cruise control. In this thesis, the vehicle's future path, parameterized as 5 coordinates along the path, is predicted using only visual data collected by a front vision sensor. This approach offers cheaper deployment because no additional sensors are required. The predictions are made by deep convolutional neural networks (CNNs), and the goal of the project is to use recurrent neural networks (RNNs) and to investigate the benefits of adding recurrence to the task. Two approaches are used for the models. The first is a single-frame approach that predicts the future location points of the car from a single image frame; it serves as the baseline model. The second is a sequential approach that enables the network to use historical information from previous image frames to predict the vehicle's future path for the current frame; with this approach, the effect of recurrence is investigated. Moreover, uncertainty is important for model reliability: having small uncertainty on most predictions, and high uncertainty in situations unfamiliar to the model, increases the model's usefulness. In this project, the uncertainty estimation follows a method applicable to deep learning models and uses the same models defined by the first two approaches. Finally, the approaches are evaluated by the mean absolute error and by two reasonable tolerance levels on the distance between the predicted path and the ground-truth path, the first strict and the second more relaxed. Under the strict tolerance level on test data, 36% of the predictions are accepted for the single-frame model and 48% for the sequential model, while 27% and 13% are accepted for the single-frame and sequential uncertainty models, respectively. Under the relaxed tolerance level, 60% of predictions are accepted for the single-frame model, 67% for the sequential model, and 65% and 53% for the single-frame and sequential uncertainty models. Furthermore, using stored information for each sequence, the methods are evaluated under different conditions such as day/night, road type, and road cover. Overall, the sequential model outperforms the others in the majority of the evaluation results.
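The tolerance-based evaluation can be sketched as follows: a predicted path of 5 points is accepted when every point lies within the tolerance of the ground truth. The exact acceptance criterion used in the thesis is an assumption here; only the general shape of the metric is illustrated.

```python
import numpy as np

def acceptance_rate(pred_paths, true_paths, tol):
    """Fraction of predicted paths whose every one of the 5 points lies
    within tol of the corresponding ground-truth point.

    pred_paths, true_paths: (n, 5, 2) arrays of (x, y) path points.
    """
    dists = np.linalg.norm(pred_paths - true_paths, axis=-1)   # (n, 5)
    return float((dists.max(axis=1) <= tol).mean())

truth = np.zeros((2, 5, 2))
pred = truth.copy()
pred[1] += 10.0                       # second path is far off everywhere
rate = acceptance_rate(pred, truth, tol=1.0)
```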
7

Sivakumar, Shyamala C. "Architectures and algorithms for stable and constructive learning in discrete time recurrent neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ31533.pdf.

8

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. The frequent repetition of identical output forms can quickly make dialogue tedious for most real-world users, and such rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations. A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to the input semantics, and is motivated by the Long Short-Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs, integrating sentence planning and surface realisation into a single optimisation problem. This not only bypasses costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe.
Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results also suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better.
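The gating mechanism applied to the input semantics can be sketched as a reading gate that gradually consumes a dialogue-act vector as words are generated. The parameterisation below, a sigmoid gate computed from the current word and hidden state, is a hedged guess at the general shape of such a mechanism, not the exact formulation in the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def da_gate_step(d_prev, w_t, h_prev, W_r, W_h):
    """One step of a reading gate over a dialogue-act (DA) vector:
    r_t in (0, 1) decides how much of each DA slot remains unexpressed,
    so the DA features decay monotonically as generation proceeds."""
    r_t = sigmoid(W_r @ w_t + W_h @ h_prev)   # gate from word + hidden state
    return r_t * d_prev                       # consume part of the DA vector

# With zero weights the gate is exactly sigmoid(0) = 0.5 everywhere.
d_t = da_gate_step(np.ones(4), np.zeros(3), np.zeros(5),
                   np.zeros((4, 3)), np.zeros((4, 5)))
```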
9

Melidis, Christos. "Adaptive neural architectures for intuitive robot control." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9998.

Abstract:
This thesis puts forward a novel way of controlling robotic morphologies. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism capable of adapting both to the user and to the robot, while enabling a paradigm of intuitive control for the user. A transparent mechanism is presented, allowing a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their preferred way, moving away from cases where the user has to read operation manuals or learn to operate a specific device. The seminal idea behind this work is the coupling of intuitive human behaviours with the dynamics of a machine in order to control and direct those dynamics. Starting from a tabula rasa basis, the architectures presented are able to identify control patterns (behaviours) for any given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. We provide deep insight into the advantages of behaviour coupling, investigating the proposed system in detail and providing evidence for, and quantification of, the emergent properties of the proposed models. The structural components of the interface are presented and assessed both individually and as a whole, as are inherent properties of the architectures. The proposed system is examined and tested both in vitro and in vivo, and is shown to work even with complicated environments and complicated robotic morphologies. As a whole, this paradigm of control highlights the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.
10

He, Jian. "Adaptive power system stabilizer based on recurrent neural network." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0008/NQ38471.pdf.

11

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Abstract:
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state of the art and have been shown to consistently reduce the word error rates (WERs) of LVCSR tasks compared to other language models. In this thesis we propose several advances to RNNLMs: improved learning procedures, context enhancement, and adaptation. We learn better parameters through a novel pre-training approach and enhance the context using prosodic and syntactic features. We present a pre-training method for RNNLMs in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we carried out text-based experiments on the Penn Treebank Wall Street Journal data and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre, or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches: in the first, the forward-propagating hidden activations are scaled, known as learning hidden unit contributions (LHUC); in the second, all parameters of the RNNLM are adapted. We evaluated the adapted RNNLMs by reporting WERs on multi-genre broadcast speech data, observing small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM. Finally, we present the context enhancement of RNNLMs using prosodic and syntactic features. The prosodic features were computed from the acoustics of the context words, and the syntactic features from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag, and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
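The LHUC adaptation mentioned above rescales each hidden unit by a learned per-unit amplitude, conventionally 2·sigmoid(a), so that a zero parameter leaves the unit unchanged and the amplitude is bounded in (0, 2). A minimal sketch, with illustrative values:

```python
import numpy as np

def lhuc_scale(h, a):
    """Learning Hidden Unit Contributions: rescale hidden activations h
    by a per-unit amplitude 2*sigmoid(a) in (0, 2); a = 0 is the identity."""
    return (2.0 / (1.0 + np.exp(-a))) * h

h = np.array([1.0, -2.0, 3.0])
unchanged = lhuc_scale(h, np.zeros(3))          # identity at a = 0
saturated = lhuc_scale(h, 100.0 * np.ones(3))   # amplitude approaches 2
```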
12

Amartur, Sundar C. "Competitive recurrent neural network model for clustering of multispectral data." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1058445974.

13

Ljungehed, Jesper. "Predicting Customer Churn Using Recurrent Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210670.

Abstract:
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship; no change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction in which a recurrent neural network identifies churners based on Customer Lifetime Value time-series regression. The results show that the model performs better than random. The thesis also investigates the use of the k-means algorithm as a replacement for a rule-extraction algorithm; k-means contributed a more comprehensive analytical context for the churn predictions of the proposed model.
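The premise that "no change in CLV indicates a decrease in loyalty" can be sketched as a simple rule: flag a customer when the CLV series has stopped growing over a recent window. The window length and tolerance below are illustrative assumptions, not the thesis's labelling procedure, which instead feeds CLV time series to an RNN.

```python
def flag_churn(clv_series, window=3, eps=1e-9):
    """Flag a customer as a potential churner when CLV has not grown in
    any of the last `window` steps (illustrative labelling rule)."""
    if len(clv_series) < window + 1:
        return False                          # not enough history to judge
    recent = clv_series[-(window + 1):]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return all(d <= eps for d in deltas)      # no growth anywhere in window
```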
14

Zhao, Lichen. "Random pulse artificial neural network architecture." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0006/MQ36758.pdf.

15

Dimopoulos, Konstantinos Panagiotis. "Non-linear control strategies using input-state network models." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340027.

16

Poormehdi, Ghaemmaghami Masoumeh. "Tracking of Humans in Video Stream Using LSTM Recurrent Neural Network." Thesis, KTH, Teoretisk datalogi, TCS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217495.

Abstract:
In this master thesis, the problem of tracking humans in video streams using deep learning is examined. We use spatially supervised recurrent convolutional neural networks for visual human tracking. In this method, the recurrent convolutional network uses both the history of locations and the visual features from deep neural networks, and tracking is performed on top of detection results. We concatenate the locations of detected bounding boxes with high-level visual features produced by convolutional networks and then predict the tracking bounding box for subsequent frames. Because a video consists of continuous frames, we chose a method that uses information from the frame history to obtain robust tracking in visually challenging cases such as occlusion, motion blur, and fast movement. Long Short-Term Memory (LSTM) is a kind of recurrent neural network suited to this purpose. Instead of the binary classification commonly used in deep-learning-based tracking methods, we use regression for direct prediction of the tracking locations. Our aim is to test the method on real videos recorded by a head-mounted camera, so our test videos are very challenging and contain fast movements, motion blur, occlusions, and similar difficulties. Owing to the limitation of the training dataset, which is spatially imbalanced, tracking humans near the corners of the image is problematic, but in the other challenging cases the proposed tracking method worked well.
I detta examensarbete undersöks problemet att spåra människor i videoströmmar genom att använda deep learning. Spårningen utförs genom att använda ett recurrent convolutional neural network. Input till nätverket består av visuella features extraherade med hjälp av ett convolutional neural network, samt av detektionsresultat från tidigare frames. Vi väljer att använda oss av historiska detektioner för att skapa en metod som är robust mot olika utmanande situationer, som t.ex. snabba rörelser, rörelseoskärpa och ocklusion. Long Short- Term Memory (LSTM) är ett recurrent convolutional neural network som är användbart för detta ändamål. Istället för att använda binära klassificering, vilket är vanligt i många deep learning-baserade tracking-metoder, så använder vi oss av regression för att direkt förutse positionen av de spårade subjekten. Vårt syfte är att testa vår metod på videor som spelats in med hjälp av en huvudmonterad kamera. På grund av begränsningar i våra träningsdataset som är spatiellt oblanserade har vi problem att spåra människor som befinner sig i utkanten av bildområdet, men i andra utmanande fall lyckades spårningen bra.
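At a high level, the input construction the abstract describes, box coordinates concatenated with CNN features and fed through a recurrent state that regresses the next box, can be sketched as follows (a plain-tanh recurrence stands in for the LSTM; all dimensions and weights are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tracker_input(visual_feat, bbox):
    """Concatenate a CNN feature vector with an [x, y, w, h] detection box."""
    return np.concatenate([visual_feat, bbox])

# Illustrative sizes: 8-dim "CNN feature", 4-dim box, 6-dim hidden state.
F, B, H = 8, 4, 6
Wx = 0.1 * rng.normal(size=(H, F + B))  # input-to-hidden weights
Wh = 0.1 * rng.normal(size=(H, H))      # recurrent weights
Wo = 0.1 * rng.normal(size=(B, H))      # hidden-to-box regression head

def step(x, h):
    """One recurrent step: update the state, then regress the next box."""
    h = np.tanh(Wx @ x + Wh @ h)
    return h, Wo @ h                    # regression, not binary classification

h = np.zeros(H)
for _ in range(3):                      # three frames of a toy stream
    x = make_tracker_input(np.ones(F), np.array([10.0, 20.0, 5.0, 5.0]))
    h, pred_box = step(x, h)

print(pred_box.shape)                   # (4,): predicted box for the next frame
```

The recurrent state is what lets past detections influence the current prediction, which is the robustness argument the abstract makes for occlusion and motion blur.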
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Gonzalez, Juan. "Spacecraft Formation Control: Adaptive PID-Extended Memory Recurrent Neural Network Controller." Thesis, California State University, Long Beach, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10978237.

Full text of the source
Abstract:

In today’s space industry, satellite formation flying has become a cost-efficient alternative solution for science, on-orbit repair and military time-critical missions. While in orbit, the satellites are exposed to the space environment and to unpredictable spacecraft on-board disturbances that negatively affect the attitude control system’s ability to reduce relative position and velocity error. Satellites utilizing a PID or adaptive controller are typically tuned to reduce the error induced by space environment disturbances. However, in the case of an unforeseen spacecraft disturbance, such as a fault in an IMU, the effectiveness of the PID-based attitude control system will deteriorate, and it will not be able to reduce the error to an acceptable magnitude.

To address these shortcomings, a PID-Extended Memory RNN (EMRNN) adaptive controller is proposed. A PID-EMRNN with a short memory of multiple time steps is capable of producing a control input that improves the translational position and velocity error transient response compared to a PID. The results demonstrate the PID-EMRNN controller's ability to generate faster settling and rise times for the control signal curves. The PID-EMRNN produced similar results across an altitude range of 400 km to 1000 km and inclinations of 40 to 65 degrees. The proposed PID-EMRNN adaptive controller has demonstrated the capability of yielding a faster position error and control signal transient response in a satellite formation flying scenario.
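As a point of reference for the baseline the controller extends, a discrete PID update with a short buffer of past errors might look like this (gains, time step and memory depth are made-up values; the EMRNN itself is not reproduced here):

```python
class PIDWithMemory:
    """Discrete PID controller keeping a short history of past errors,
    a much-simplified stand-in for the 'extended memory' idea."""

    def __init__(self, kp, ki, kd, memory=3, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.history = [0.0] * memory   # last `memory` error samples
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.history[-1]) / self.dt
        self.history = self.history[1:] + [error]   # slide the memory window
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDWithMemory(kp=1.0, ki=0.1, kd=0.05)
controls = [pid.update(e) for e in (1.0, 0.5, 0.25)]   # decaying position error
```

The EMRNN replaces the fixed gain arithmetic with a learned mapping over that same error history, which is where the improved transient response would come from.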

Styles: APA, Harvard, Vancouver, ISO, etc.
18

Moradi, Mahdi. "TIME SERIES FORECASTING USING DUAL-STAGE ATTENTION-BASED RECURRENT NEURAL NETWORK." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2701.

Full text of the source
Abstract:
AN ABSTRACT OF THE RESEARCH PAPER OF Mahdi Moradi, for the Master of Science degree in Computer Science, presented on April 1, 2020, at Southern Illinois University Carbondale. TITLE: TIME SERIES FORECASTING USING DUAL-STAGE ATTENTION-BASED RECURRENT NEURAL NETWORK. MAJOR PROFESSOR: Dr. Banafsheh Rekabdar
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Wang, Yuchen. "Detection of Opioid Addicts via Attention-based bidirectional Recurrent Neural Network." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1592255095863388.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Corell, Simon. "A Recurrent Neural Network For Battery Capacity Estimations In Electrical Vehicles." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160536.

Full text of the source
Abstract:
This study investigates whether a recurrent long short-term memory (LSTM) based neural network can be used to estimate the battery capacity in electric cars. There is enormous interest in finding the underlying reasons why and how lithium-ion batteries age, and this study is part of that broader question. The research questions that have been answered are how well an LSTM model estimates the battery capacity, how the LSTM model performs compared to a linear model, and which parameters are important when estimating the capacity. Other studies have covered similar topics, but only a few have been performed on a real data set from real cars driving. With a data science approach, it was discovered that the LSTM model is indeed a powerful model for estimating the capacity. It had better accuracy than a linear regression model, although the linear regression model still gave good results. The parameters that proved important when estimating the capacity were logically related to the properties of a lithium-ion battery.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Cunanan, Kevin. "Developing a Recurrent Neural Network with High Accuracy for Binary Sentiment Analysis." Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/cmc_theses/1835.

Full text of the source
Abstract:
Sentiment analysis has taken on various machine learning approaches in order to optimize accuracy, precision, and recall. Among these, Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) account for the context of a sentence by using previous predictions as additional input for future sentence predictions. Our approach focused on developing an LSTM RNN that could perform binary sentiment analysis for positively and negatively labeled sentences. In collaboration with Mariam Salloum, I developed a collection of programs to classify individual sentences as either positive or negative. This paper additionally looks into machine learning, neural networks, data preprocessing, implementation, and the resulting comparisons.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Budik, Daniel Borisovich. "A resource-efficient localized recurrent neural network architecture and learning algorithm." 2006. http://etd.utk.edu/2006/BudikDaniel.pdf.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Oguntala, George A., Yim Fun Hu, Ali A. S. Alabdullah, Raed A. Abd-Alhameed, Muhammad Ali, and D. K. Luong. "Passive RFID Module with LSTM Recurrent Neural Network Activity Classification Algorithm for Ambient Assisted Living." 2021. http://hdl.handle.net/10454/18418.

Full text of the source
Abstract:
Human activity recognition from sensor data is a critical research topic for achieving remote health monitoring and ambient assisted living (AAL). In AAL, sensors are integrated into conventional objects, aiming to support the targets' capabilities through digital environments that are sensitive, responsive and adaptive to human activities. Emerging technological paradigms to support AAL within the home or community setting offer people the prospect of more individually focused care and an improved quality of living. In the present work, an ambient human activity classification framework that augments information from the received signal strength indicator (RSSI) of passive RFID tags to obtain detailed activity profiling is proposed. Key indices of position, orientation, mobility and degree of activity, which are critical to guide reliable clinical management decisions, are collected from 4 volunteers to simulate the research objective. A two-layer, fully connected sequence long short-term memory recurrent neural network (LSTM RNN) model is employed. The LSTM RNN model extracts the RSS features from the sensor data and classifies the sampled activities using softmax. The performance of the LSTM model is evaluated for different data sizes and the hyper-parameters of the RNN are adjusted to optimal states, which results in an accuracy of 98.18%. The proposed framework suits smart health and smart homes well, offering a pervasive sensing environment for the elderly and for persons with disability or chronic illness.
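The final classification step the abstract mentions, softmax over the recurrent model's outputs, reduces to the following (the four activity labels and the logit values are invented for illustration):

```python
import numpy as np

def softmax(z):
    """Turn a logit vector into a probability distribution."""
    z = z - z.max()                # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for four activity classes, e.g. produced by the last
# LSTM state for one window of RSSI samples (values are made up).
activities = ["lying", "sitting", "standing", "walking"]
logits = np.array([0.2, 2.1, 0.4, -1.0])
probs = softmax(logits)
predicted = activities[int(np.argmax(probs))]
```

The subtraction of the maximum logit changes nothing mathematically but avoids overflow in `exp` for large logits.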
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Lopes, Ana Patrícia Ribeiro. "Study of Deep Neural Network architectures for medical image segmentation." Master's thesis, 2020. http://hdl.handle.net/1822/69850.

Full text of the source
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics)
Medical image segmentation plays a crucial role in the medical field, since it allows performing quantitative analyses used for screening, monitoring and planning the treatment of numerous pathologies. Manual segmentation is time-consuming and prone to inter-rater variability. Thus, several automatic approaches have been proposed for medical image segmentation, most of them based on Deep Learning. These approaches became especially relevant after the development of the Fully Convolutional Network (FCN). In this method, the fully-connected layers were eliminated and upsampling layers were incorporated, allowing an image to be segmented in one pass. Nowadays, the developed architectures are based on the FCN, with U-Net one of the most popular. The aim of this dissertation is to study Deep Learning architectures for medical image segmentation. Two challenging and very distinct tasks were selected, namely retinal vessel segmentation from retinal fundus images and brain tumor segmentation from MRI images. The architectures studied in this work are based on the U-Net, due to the high performances it obtained in multiple medical segmentation tasks. The models developed for retinal vessel and brain tumor segmentation were tested on publicly available databases, DRIVE and BRATS 2017, respectively. Several studies were performed for the first segmentation task, namely comparison of downsampling operations, replacement of a downsampling step with dilated convolutions, incorporation of an RNN-based layer and application of test-time data augmentation techniques. In the second segmentation task, three modifications were evaluated, specifically the incorporation of long skip connections, the substitution of standard convolutions with dilated convolutions and the replacement of a downsampling step with dilated convolutions. Regarding retinal vessel segmentation, the final approach achieved accuracy, sensitivity and AUC of 0.9575, 0.7938 and 0.9804, respectively.
This approach consists of a U-Net containing one strided convolution as the downsampling step and dilated convolutions with a dilation rate of 3, followed by a test-time data augmentation technique performed by a ConvLSTM. Regarding brain tumor segmentation, the proposed approach achieved Dice of 0.8944, 0.8051 and 0.7353 and HD95 of 6.79, 8.34 and 4.76 for the complete, core and enhanced regions, respectively. The final method consists of a DLA architecture with a long skip connection and dilated convolutions with a dilation rate of 2. For both tasks, the proposed approach is competitive with state-of-the-art methods.
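A dilated convolution, which this dissertation uses to widen the receptive field without extra downsampling, simply spaces the kernel taps apart; a minimal 1-D version (illustrative, not the dissertation's 2-D implementation):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Valid' 1-D convolution whose taps are `rate` samples apart."""
    k = len(kernel)
    span = (k - 1) * rate + 1       # input samples covered by one output
    out = [sum(kernel[j] * x[i + j * rate] for j in range(k))
           for i in range(len(x) - span + 1)]
    return np.array(out), span

x = np.arange(10, dtype=float)
y, rf = dilated_conv1d(x, kernel=[1.0, 1.0, 1.0], rate=3)
# A 3-tap kernel at dilation rate 3 covers (3 - 1) * 3 + 1 = 7 input samples.
```

The same kernel at rate 1 would cover only 3 samples, which is why dilation can replace a downsampling step while preserving resolution.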
Styles: APA, Harvard, Vancouver, ISO, etc.
25

(9178400), Sanchari Sen. "Efficient and Robust Deep Learning through Approximate Computing." Thesis, 2020.

Find the full text of the source
Abstract:

Deep Neural Networks (DNNs) have greatly advanced the state-of-the-art in a wide range of machine learning tasks involving image, video, speech and text analytics, and are deployed in numerous widely-used products and services. Improvements in the capabilities of hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators have been instrumental in enabling these advances as they have allowed more complex and accurate networks to be trained and deployed. However, the enormous computational and memory demands of DNNs continue to increase with growing data size and network complexity, posing a continuing challenge to computing system designers. For instance, state-of-the-art image recognition DNNs require hundreds of millions of parameters and hundreds of billions of multiply-accumulate operations while state-of-the-art language models require hundreds of billions of parameters and several trillion operations to process a single input instance. Another major obstacle in the adoption of DNNs, despite their impressive accuracies on a range of datasets, has been their lack of robustness. Specifically, recent efforts have demonstrated that small, carefully-introduced input perturbations can force a DNN to behave in unexpected and erroneous ways, which can have severe consequences in several safety-critical DNN applications like healthcare and autonomous vehicles. In this dissertation, we explore approximate computing as an avenue to improve the speed and energy efficiency of DNNs, as well as their robustness to input perturbations.

Approximate computing involves executing selected computations of an application in an approximate manner, while generating favorable trade-offs between computational efficiency and output quality. The intrinsic error resilience of machine learning applications makes them excellent candidates for approximate computing, allowing us to achieve execution time and energy reductions with minimal effect on the quality of outputs. This dissertation performs a comprehensive analysis of different approximate computing techniques for improving the execution efficiency of DNNs. Complementary to generic approximation techniques like quantization, it identifies approximation opportunities based on the specific characteristics of three popular classes of networks - Feed-forward Neural Networks (FFNNs), Recurrent Neural Networks (RNNs) and Spiking Neural Networks (SNNs), which vary considerably in their network structure and computational patterns.

First, in the context of feed-forward neural networks, we identify sparsity, or the presence of zero values in the data structures (activations, weights, gradients and errors), to be a major source of redundancy and therefore an easy target for approximations. We develop lightweight micro-architectural and instruction set extensions to a general-purpose processor core that enable it to dynamically detect zero values when they are loaded and skip future instructions that are rendered redundant by them. Next, we explore LSTMs (the most widely used class of RNNs), which map sequences from an input space to an output space. We propose hardware-agnostic approximations that dynamically skip redundant symbols in the input sequence and discard redundant elements in the state vector to achieve execution time benefits. Following that, we consider SNNs, which are an emerging class of neural networks that represent and process information in the form of sequences of binary spikes. Observing that spike-triggered updates along synaptic connections are the dominant operation in SNNs, we propose hardware and software techniques to identify connections that minimally impact the output quality and deactivate them dynamically, skipping any associated updates.
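The zero-skipping idea in the first technique can be illustrated with a dot product that bypasses multiply-accumulates for zero activations (a software sketch of behavior the dissertation implements in hardware):

```python
def dot_skipping_zeros(activations, weights):
    """Dot product that skips the MAC whenever the activation is zero,
    returning the result and the number of operations saved."""
    acc, skipped = 0.0, 0
    for a, w in zip(activations, weights):
        if a == 0.0:
            skipped += 1            # this MAC would contribute nothing
            continue
        acc += a * w
    return acc, skipped

y, saved = dot_skipping_zeros([0.0, 1.5, 0.0, 2.0], [0.3, 0.2, 0.9, 0.5])
# y is 1.3, with half of the multiply-accumulates skipped
```

In software the branch costs more than the multiply it avoids; the dissertation's point is that detecting the zero at load time in the micro-architecture makes the skip essentially free.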

The dissertation also delves into the efficacy of combining multiple approximate computing techniques to improve the execution efficiency of DNNs. In particular, we focus on the combination of quantization, which reduces the precision of DNN data-structures, and pruning, which introduces sparsity in them. We observe that the ability of pruning to reduce the memory demands of quantized DNNs decreases with precision as the overhead of storing non-zero locations alongside the values starts to dominate in different sparse encoding schemes. We analyze this overhead and the overall compression of three different sparse formats across a range of sparsity and precision values and propose a hybrid compression scheme that identifies the optimal sparse format for a pruned low-precision DNN.
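The precision-versus-sparsity trade-off described here is easy to reproduce with a back-of-the-envelope storage model (a COO-style format with hypothetical 16-bit indices; the dissertation analyzes several formats in more detail):

```python
def dense_bits(n, precision):
    """Storage for a dense vector of n values."""
    return n * precision

def sparse_bits(nnz, precision, index_bits=16):
    """COO-style storage: every non-zero keeps a value plus its index."""
    return nnz * (precision + index_bits)

n, nnz = 1000, 100                      # a 90%-pruned vector
for precision in (32, 8, 2):
    ratio = sparse_bits(nnz, precision) / dense_bits(n, precision)
    print(precision, ratio)             # index overhead dominates at low precision
```

At 32-bit precision the pruned vector needs 15% of the dense storage, but at 2-bit precision it needs 90%, because the fixed-size indices now dwarf the values they locate.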

Along with improved execution efficiency of DNNs, the dissertation explores an additional advantage of approximate computing in the form of improved robustness. We propose ensembles of quantized DNN models with different numerical precisions as a new approach to increase robustness against adversarial attacks. It is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. We overcome this limitation to achieve the best of both worlds, i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble.
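The ensembling idea can be sketched with uniform weight quantization and logit averaging (a one-layer toy model with made-up numbers, not the dissertation's networks):

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization to a (2**bits - 1)-level grid."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w = np.array([0.11, -0.52, 0.33])       # "full precision" weights (toy)
members = [w, quantize(w, 8), quantize(w, 2)]

x = np.array([1.0, 2.0, -1.0])          # one input instance
ensemble_logit = np.mean([m @ x for m in members])
```

The low-precision members respond differently to small input perturbations than the full-precision one, and averaging their outputs is what buys the robustness the abstract describes.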


In summary, this dissertation establishes approximate computing as a promising direction to improve the performance, energy efficiency and robustness of neural networks.

Styles: APA, Harvard, Vancouver, ISO, etc.
26

Sarvadevabhatla, Ravi Kiran. "Deep Learning for Hand-drawn Sketches: Analysis, Synthesis and Cognitive Process Models." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5351.

Full text of the source
Abstract:
Deep Learning-based object category understanding is an important and active area of research in Computer Vision. Most work in this area has predominantly focused on the portion of depiction spectrum consisting of photographic images. However, depictions at the other end of the spectrum, freehand sketches, are a fascinating visual representation and worthy of study in themselves. In this thesis, we present deep-learning approaches for sketch analysis, sketch synthesis and modelling sketch-driven cognitive processes. On the analysis front, we first focus on the problem of recognizing hand-drawn line sketches of objects. We propose a deep Recurrent Neural Network architecture with a novel loss formulation for sketch object recognition. Our approach achieves state-of-the-art results on a large-scale sketch dataset. We also show that the inherently online nature of our framework is especially suitable for on-the-fly recognition of objects as they are being drawn. We then move beyond object-level label prediction to the relatively harder problem of parsing sketched objects, i.e. given a freehand object sketch, determine its salient attributes (e.g. category, semantic parts, pose). To this end, we propose SketchParse, the first deep-network architecture for fully automatic parsing of freehand object sketches. We subsequently demonstrate SketchParse's abilities (i) on two challenging large-scale sketch datasets (ii) in parsing unseen, semantically related object categories (iii) in improving fine-grained sketch-based image retrieval. As a novel application, we also illustrate how SketchParse's output can be used to generate caption-style descriptions for hand-drawn sketches. On the synthesis front, we design generative models for sketches via Generative Adversarial Networks (GANs). Keeping the limited size of sketch datasets in mind, we propose DeLiGAN, a novel architecture for diverse and limited training data scenarios.
In our approach, we reparameterize the latent generative space as a mixture model and learn the mixture model's parameters along with those of GAN. This seemingly simple modification to the vanilla GAN framework is surprisingly effective and results in models which enable diversity in generated samples although trained with limited data. We show that DeLiGAN generates diverse samples not just for hand-drawn sketches but for other image modalities as well. To quantitatively characterize intra-class diversity of generated samples, we also introduce a modified version of "inception-score", a measure which has been found to correlate well with human assessment of generated samples. We subsequently present an approach for synthesizing minimally discriminative sketch-based object representations which we term category-epitomes. The synthesis procedure concurrently provides a natural measure for quantifying the sparseness underlying the original sketch, which we term epitome-score. We show that the category-level distribution of epitome-scores can be used to characterize level of detail required in general for recognizing object categories. On the cognitive process modelling front, we analyze the results of a free-viewing eye fixation study conducted on freehand sketches. The analysis reveals that eye fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. We show that the multi-level consistency in the fixation data can be exploited to predict a sketch's category given only its fixation sequence and to build a computational model which predicts part-labels underlying the eye fixations on objects. The ability of machine-based agents to play games in human-like fashion is considered a benchmark of progress in AI.
Motivated by this observation, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of the Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data and gathers open-ended guess-words from human guessers. To mimic humans playing Pictionary, we propose a deep neural model which generates guess-words in response to temporally evolving human-drawn sketches. The model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate the model on the large-scale guess-word dataset generated via the Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and our model. The promising experimental results demonstrate the challenges and opportunities in building computational models for Pictionary and similarly themed games.
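The latent-space reparameterization behind DeLiGAN amounts to sampling the generator's input from a mixture of Gaussians instead of a single standard normal; a sampling sketch (component count, dimensions and parameter values are illustrative, and in the real model the means and scales are learned jointly with the GAN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixture-of-Gaussians latent space: K components in a D-dim latent space.
K, D = 5, 2
mu = rng.normal(size=(K, D))            # component means (learnable)
sigma = np.full((K, D), 0.2)            # component scales (learnable)

def sample_latent(n):
    """Pick a mixture component per sample, then reparameterize:
    z = mu_k + sigma_k * eps, with eps ~ N(0, I)."""
    comp = rng.integers(K, size=n)
    eps = rng.normal(size=(n, D))
    return mu[comp] + sigma[comp] * eps

z = sample_latent(8)                    # a batch of generator inputs
```

Because the gradient flows through `mu` and `sigma` via the reparameterization, the mixture parameters can be trained with ordinary backpropagation alongside the generator.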
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Krueger, David. "Designing Regularizers and Architectures for Recurrent Neural Networks." Thèse, 2016. http://hdl.handle.net/1866/14019.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Lin, Ming Jang, and 林明璋. "Research on Dynamic Recurrent Neural Network." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/70522525556782624102.

Full text of the source
Abstract:
Master's thesis
National Chengchi University
Graduate Institute of Applied Mathematics
82
Our task in this paper is to discuss recurrent neural networks. We construct a single-layer neural network and apply three different learning rules to simulate a circular trajectory and a figure eight. We also present a proof of convergence.
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Agrawal, Harish. "Novel Neural Architectures based on Recurrent Connections and Symmetric Filters for Visual Processing." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6022.

Full text of the source
Abstract:
Artificial Neural Networks (ANN) have been very successful due to their ability to extract meaningful information without any need for pre-processing raw data. The first artificial neural networks were created, in essence, to understand how the human brain works. The expectation was that we would get a deeper understanding of brain functions and human cognition, which we cannot explain just by biological experiments or intuition. The field of ANNs has grown so much that they are no longer limited to the purpose for which they emerged, but are also exploited for their unmatched pattern-matching and learning capabilities in addressing many complex problems, problems which are difficult or impossible to solve by standard computational and statistical methods. The research has gone from ANNs being used only for understanding brain functions to creating new types of ANN based on the neuronal pathways present in the brain. This thesis proposes two novel neural network layers based on studies of the human brain. First is a type of Recurrent Convolutional Neural Network layer called a Long-Short-Term-Convolutional-Neural-Network (LST_CNN), and the other is a Symmetric Convolutional Neural Network layer based on Symmetric Filters. The current feedforward neural network models have been successful in visual processing. Because of this, lateral and feedback processing has been under-explored. Existing visual processing networks (Convolutional Neural Networks) lack the recurrent neuronal dynamics which are present in the ventral visual pathways of human and non-human primate brains. Ventral visual pathways contain similar densities of feedforward and feedback connections. Furthermore, current convolutional models are limited to learning spatial information, but we should also focus on learning temporal visual information, considering that the world is dynamic rather than static.
This motivates us to incorporate recurrence in convolutional neural networks. The layer we propose (LST_CNN) is not just limited to spatial learning but is also capable of exploiting temporal knowledge from the data due to the implicit presence of recurrence in the structure. The capability of LST_CNN’s spatiotemporal learning is examined by testing it on Object Detection and Tracking. Because LST_CNN is based on LSTM, we explicitly evaluate its spatial learning capabilities through experiments. The visual cortex in the human brain has evolved to detect patterns and hence has specialized in detecting the pervasive symmetry in Nature. When filter weights from deep SOTA networks are visualized, several of them are symmetric, similar to the features they represent. This inspires the idea of constraining standard convolutional filter weights to be symmetric. Given that the computational requirements for DNN training have doubled every few months, researchers have been trying to come up with NN architectural changes to combat this. In light of that, deploying symmetric filters reduces not only computational resources but also the memory footprint. Therefore, using symmetric filters is beneficial both for inference and during training. Despite the reduction in trainable parameters, the accuracy is comparable to the standard version, thus allowing us to infer that they prevent over-fitting. We establish the quintessence of symmetric filters in NN models.
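The parameter saving from a symmetry constraint is easiest to see in one dimension: a mirror-symmetric kernel of length 2k-1 has only k free values (a 1-D illustration; the thesis works with 2-D convolutional filters):

```python
import numpy as np

def symmetric_filter(half):
    """Build a left-right symmetric kernel from its unique half, so a
    length-5 kernel is parameterized by just 3 trainable values."""
    return np.concatenate([half, half[-2::-1]])

k = symmetric_filter(np.array([0.1, 0.4, 1.0]))
print(k)                                # [0.1 0.4 1.  0.4 0.1]
assert np.allclose(k, k[::-1])          # mirror-symmetric by construction
```

During training, only the unique half would be stored and updated, which is where the memory and compute savings come from.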
Styles: APA, Harvard, Vancouver, ISO, etc.
30

CHEN, HUNG-PEI, and 陳虹霈. "Integrating Convolutional Neural Network and Recurrent Neural Network for Automatic Text Classification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/4jqh8z.

Full text of the source
Abstract:
Master's thesis
Soochow University
Department of Mathematics
108
With the rapid development of big data research, the demand for processing textual information is increasing. Text classification remains a hot research topic in the field of natural language processing. The traditional text mining process often uses the "bag-of-words" model, which discards the order of the words in a sentence and is mainly concerned with the frequency of occurrence of the words. TF-IDF (term frequency–inverse document frequency) is one of the feature extraction techniques commonly used in text mining and classification. Therefore, we combine a convolutional neural network and a recurrent neural network to consider the semantics and the order of the words in a sentence for text classification. We use the 20 Newsgroups collection as our test dataset. The resulting model achieves an accuracy of 86.3% on the test set, an improvement of about 3% over the traditional model.
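The TF-IDF weighting mentioned above, in its classic form, can be computed in a few lines (toy corpus; real pipelines typically use a library implementation such as scikit-learn's):

```python
import math

corpus = [["neural", "network", "text"],
          ["text", "classification"],
          ["neural", "network", "network"]]

def tf_idf(term, doc, corpus):
    """Term frequency times inverse document frequency (classic variant)."""
    tf = doc.count(term) / len(doc)            # share of the doc's tokens
    df = sum(term in d for d in corpus)        # documents containing the term
    idf = math.log(len(corpus) / df)
    return tf * idf

score = tf_idf("network", corpus[2], corpus)   # frequent here, rarer elsewhere
```

A term that appears in every document gets idf = log(1) = 0, which is exactly the "discard uninformative words" behavior the bag-of-words pipeline relies on.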
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Yang, Neng-Jie, and 楊能傑. "An Optimal Recurrent Fuzzy Neural Network Controller." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/22893053061456487124.

Full text of the source
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
90
In this thesis, an optimal recurrent fuzzy neural network controller is designed by an adaptive genetic algorithm. The recurrent fuzzy neural network has recurrent connections representing memory elements and uses a generalized dynamic backpropagation algorithm to adjust fuzzy parameters on-line. Usually, the learning rate and the initial parameter values are chosen randomly or by experience, which is labor-intensive and inefficient. An adaptive genetic algorithm is used instead to optimize them. The adaptive genetic algorithm adjusts the probabilities of crossover and mutation adaptively according to fitness values, and can therefore avoid falling into local optima and speed up convergence. The optimal recurrent fuzzy neural network controller is applied to the simulation of a second-order linear system, a nonlinear system, and a highly nonlinear system with instantaneous loads. The simulation results show that the learning rate, as well as the other fuzzy parameters, is an important factor in the optimal design. With the optimal design, every simulation achieves the lowest sum of squared errors, and the design process is done automatically by computer programs.
Стилі APA, Harvard, Vancouver, ISO та ін.
32

LIN, CHENG-YANG, and 林政陽. "Recurrent Neural Network-based Microphone Howling Suppression." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hd839v.

Повний текст джерела
Анотація:
碩士
國立臺北科技大學
電子工程系
107
When singing with a karaoke system, the microphone is often too close to the loudspeaker and the amplifier gain is too large, causing positive feedback and howling that is unpleasant for both singer and listener. Microphone howling is usually suppressed by shifting the frequency to break the resonance, or by applying a band-stop filter after the fact, but both approaches may degrade sound quality. We therefore adopt an adaptive feedback cancellation approach: the input of the amplified loudspeaker serves as the reference signal for automatically estimating the feedback that may be recorded at different signal-to-noise ratios, and the offending gain is removed directly at the source before howling occurs. Based on this idea, this paper implements a howling-cancellation algorithm using the normalized least mean square (NLMS) method and, to handle the nonlinear distortion of the amplification system, proposes an advanced algorithm based on a recurrent neural network (RNN). The experiments compare time-domain and frequency-domain processing using either NLMS or RNN, four combinations in total, in terms of convergence speed, computational demand, and howling-suppression effect under different source signals and different room responses. The results show that (1) convergence is faster in the time domain, (2) the frequency-domain methods are more stable, and (3) the time-domain RNN cancels best but requires too much computation.
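The NLMS part of this scheme can be sketched directly (a time-domain toy with an artificial delayed echo path; the filter order and step size are illustrative, not the thesis settings):

```python
import numpy as np

def nlms_cancel(ref, mic, order=16, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive feedback cancellation.

    ref: loudspeaker input used as the reference signal
    mic: microphone signal containing feedback of ref
    Returns the error signal, i.e. the microphone signal with the
    estimated feedback removed.
    """
    w = np.zeros(order)
    err = np.zeros(len(mic))
    for n in range(order, len(mic)):
        x = ref[n - order:n][::-1]        # most recent reference samples
        y = w @ x                         # estimated feedback
        e = mic[n] - y
        w += mu * e * x / (x @ x + eps)   # step normalized by input power
        err[n] = e
    return err

rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
feedback = 0.8 * np.concatenate([np.zeros(3), ref[:-3]])  # delayed echo path
out = nlms_cancel(ref, feedback)
# after convergence the residual feedback energy is far below the input energy
```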
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Abdolzadeh, Vida. "Efficient Implementation of Recurrent Neural Network Accelerators." Tesi di dottorato, 2020. http://www.fedoa.unina.it/13225/1/Abdolzadeh_Vida_32.pdf.

Повний текст джерела
Анотація:
In this dissertation, we propose an accelerator for the implementation of the Long Short-Term Memory layer in Recurrent Neural Networks. We analyze the effect of quantization on the accuracy of the network and derive an architecture that improves the throughput and latency of the accelerator. The proposed technique requires only one training process, hence reducing the design time. We present the implementation results of the proposed accelerator; its performance compares favorably with other solutions presented in the literature. The goal of this thesis is to choose which circuit is better in terms of precision, area, and timing. In addition, to verify that the chosen circuit works correctly as an activation function, it is converted with Vivado HLS using C and then integrated into an LSTM layer. A speech-recognition application is used to test the system, and the results are compared with those computed with the same layer in Matlab to obtain the accuracy and to decide whether the precision of the non-linear functions is sufficient.
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Kurach, Karol. "Deep Neural Architectures for Algorithms and Sequential Data." Doctoral thesis, 2016. https://depotuw.ceon.pl/handle/item/1860.

Повний текст джерела
Анотація:
The first part of the dissertation describes two deep neural architectures with external memories: Neural Random-Access Machine (NRAM) and Hierarchical Attentive Memory (HAM). The NRAM architecture is inspired by Neural Turing Machines, but the crucial difference is that it can manipulate and dereference pointers to its random-access memory. This allows it to learn concepts that require pointer chasing, such as "linked list" or "binary tree". The HAM architecture is based on a binary tree with leaves corresponding to memory cells. This enables memory access in Θ(log n), a significant improvement over the Θ(n) access used in the standard attention mechanism. We show that Long Short-Term Memory (LSTM) augmented with HAM can successfully learn to solve a number of challenging algorithmic problems. In particular, it is the first architecture that learns from pure input/output examples to sort n numbers in time Θ(n log n), and the solution generalizes well to longer sequences. We also show that HAM is very generic and can be trained to act like classic data structures: a stack, a FIFO queue, and a priority queue. The second part of the dissertation describes three novel systems based on deep neural networks. The first one is a framework for finding computationally efficient versions of symbolic math expressions. By using a recursive neural network it can efficiently search the state space and quickly find identities with significantly better time complexity (e.g., Θ(n^2) instead of exponential time). Then, we present a system for predicting dangerous events from multivariate, non-stationary time series data based on recurrent neural networks. It requires almost no feature engineering and achieved very good results in two machine learning competitions. Finally, we describe Smart Reply, an end-to-end system for suggesting automatic responses to e-mails. The system is capable of handling hundreds of millions of messages daily.
Smart Reply was successfully deployed in Google Inbox and currently generates 10% of responses on mobile devices.
Pierwsza część pracy przedstawia dwie głębokie architektury neuronowe wykorzystujące pamięć zewnętrzną: Neural Random-Access Machine (NRAM) oraz Hierarchical Attentive Memory (HAM). Pomysł na architekturę NRAM jest inspirowany Neuronowymi Maszynami Turinga (NTM). NRAM, w przeciwieństwie do NTM, posiada mechanizmy umożliwiające wykorzystanie wskaźników do pamięci. To sprawia, że NRAM jest w stanie nauczyć się pojęć wymagających użycia wskaźników, takich jak „lista jednokierunkowa” albo „drzewo binarne”. Architektura HAM bazuje na pełnym drzewie binarnym, w którym liście odpowiadają elementom pamięci. Umożliwia to wykonywanie operacji na pamięci w czasie Θ(log n), co jest znaczącą poprawą względem dostępu w czasie Θ(n), standardowo używanym w implementacji mechanizmu „skupienia uwagi” (ang. attention) w sieciach rekurencyjnych. Pokazujemy, że sieć LSTM połączona z HAM jest w stanie rozwiązać wymagające zadania o charakterze algorytmicznym. W szczególności, jest to pierwsza architektura, która mając dane jedynie pary wejście/poprawne wyjście potrafi się nauczyć sortowania elementów działającego w złożoności Θ(n log n) i dobrze generalizującego się do dłuższych ciągów. Pokazujemy również, że HAM jest ogólną architekturą, która może zostać wytrenowana aby działała jak standardowe struktury danych, takie jak stos, kolejka lub kolejka priorytetowa. Druga część pracy przedstawia trzy nowatorskie systemy bazujące na głębokich sieciach neuronowych. Pierwszy z nich to system do znajdowania wydajnych obliczeniowo formuł matematycznych. Przy wykorzystaniu sieci rekursywnej system jest w stanie efektywnie przeszukiwać przestrzeń stanów i szybko znajdować tożsame formułyo istotnie lepszej złożoności asymptotycznej (przykładowo, Θ(n^2) zamiast złożoności wykładniczej). Następnie, prezentujemy oparty na rekurencyjnej sieci neuronowej system do przewidywania niebezpiecznych zdarzeń z wielowymiarowych, niestacjonarnych szeregów czasowych. 
Nasza metoda osiągnęła bardzo dobre wyniki w dwóch konkursach uczenia maszynowego. Jako ostatni opisany został Smart Reply – system do sugerowania automatycznych odpowiedzi na e-maile. Smart Reply został zaimplementowany w Google Inbox i codziennie przetwarza setki milionów wiadomości. Aktualnie, 10% wiadomości wysłanych z urządzeń mobilnych jest generowana przez ten system.
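The Θ(log n) memory access that distinguishes HAM from soft attention can be illustrated with a plain binary-tree walk (a toy sketch of the access pattern only, not the trained attention mechanism): reading one leaf touches only the internal nodes on a single root-to-leaf path.

```python
def leaf_path(n_leaves, leaf_index):
    """Return the internal-node path from the root to a given leaf of a
    complete binary tree with n_leaves leaves (n_leaves a power of two).

    The path has length log2(n_leaves), so a memory read that follows it
    costs Θ(log n), versus the Θ(n) of soft attention over all cells.
    Nodes use 1-based heap indexing: the root is node 1 and node k has
    children 2k and 2k+1.
    """
    path = []
    node = 1
    lo, hi = 0, n_leaves
    while hi - lo > 1:                 # descend until one leaf remains
        path.append(node)
        mid = (lo + hi) // 2
        if leaf_index < mid:
            node, hi = 2 * node, mid           # go left
        else:
            node, lo = 2 * node + 1, mid       # go right
    return path

p = leaf_path(8, 5)
# 8 leaves -> exactly 3 internal nodes on the path: [1, 3, 6]
```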
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Hong, Frank Shihong. "Structural knowledge in simple recurrent network?" 1999. https://scholarworks.umass.edu/theses/2348.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Wang, Hui-Hua, and 王慧華. "Adaptive Learning Rates in Diagonal Recurrent Neural Network." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/50105668211095009187.

Повний текст джерела
Анотація:
碩士
大同工學院
機械工程學系
84
In this paper, ideal adaptive learning rates are derived for the diagonal recurrent neural network. The adaptive learning rates are chosen to satisfy the error-convergence requirements, and these requirements are then discussed and modified for a practical control system. Finally, simulation results are shown for a diagonal recurrent neural network based control system with the modified adaptive learning rates.
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Liao, Yuan-Fu, and 廖元甫. "Isolated Mandarin Speech Recognition Using Recurrent Neural Network." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/68290588901248152864.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Thirion, Jan Willem Frederik. "Recurrent neural network-enhanced HMM speech recognition systems." Diss., 2002. http://hdl.handle.net/2263/29149.

Повний текст джерела
Анотація:
Please read the abstract in the section 00front of this document
Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2006.
Electrical, Electronic and Computer Engineering
unrestricted
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Tsai, Yao-Cheng, and 蔡曜丞. "Acoustic Echo Cancellation Based on Recurrent Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/jgk3ea.

Повний текст джерела
Анотація:
碩士
國立中央大學
通訊工程學系
107
Acoustic echo cancellation has long been a common problem in speech and signal processing, with application scenarios such as teleconferencing, hands-free handsets, and mobile communications. Adaptive filters were traditionally used for acoustic echo cancellation; today, deep learning can tackle its more complex aspects. The method proposed in this work treats acoustic echo cancellation as a speech-separation problem, instead of estimating the acoustic echo with a traditional adaptive filter, and trains the model with a recurrent neural network architecture. Since recurrent neural networks are good at modeling time-varying functions, they are well suited to the echo-cancellation problem. We train a bidirectional long short-term memory network and a bidirectional gated recurrent unit, extract features from single-talk and double-talk speech, adjust weights to control the ratio between double-talk and single-talk speech, and estimate an ideal ratio mask. The mask separates the signals and thereby removes the echo. Experimental results show that the method performs well in echo cancellation.
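The ideal ratio mask is a standard speech-separation target; its oracle value can be computed as below (the thesis network *estimates* this mask from features, here we only show what the target looks like on made-up magnitude spectrograms):

```python
import numpy as np

def ideal_ratio_mask(near_mag, echo_mag):
    """Ideal ratio mask in the magnitude-spectrogram domain.

    Each time-frequency bin gets the fraction of energy belonging to
    the near-end (desired) speech; multiplying the mixture spectrogram
    by this mask suppresses the echo-dominated bins.
    """
    return near_mag**2 / (near_mag**2 + echo_mag**2 + 1e-12)

near = np.array([[1.0, 0.1], [0.5, 2.0]])   # near-end magnitudes
echo = np.array([[0.1, 1.0], [0.5, 0.2]])   # echo magnitudes
mask = ideal_ratio_mask(near, echo)
# bins dominated by near-end speech get a mask close to 1,
# echo-dominated bins a mask close to 0, equal-energy bins 0.5
```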
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Hu, Hsiao-Chun, and 胡筱君. "Recurrent Neural Network based Collaborative Filtering Recommender System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ytva33.

Повний текст джерела
Анотація:
碩士
國立臺灣科技大學
資訊工程系
107
With the rapid development of e-commerce, collaborative filtering recommender systems have been widely applied to major network platforms. Accurately predicting customers' preferences through a recommender system can relieve users of information overload and reinforce their dependence on the platform. Since collaborative filtering can recommend products that are abstract or difficult to describe in words, research on it has attracted more and more attention. In this paper, we propose a deep learning framework for collaborative filtering recommendation. A Recurrent Neural Network is the most important part of this framework: it lets the model take into account the timestamp of each user's implicit feedback, which significantly improves performance on personalized item recommendation. In addition, we propose a training data format for the Recurrent Neural Network that makes our recommender, to our knowledge, the first RNN model able to consider both positive and negative implicit-feedback instances during training. Through experiments on two real-world datasets, MovieLens-1m and Pinterest, we verify that our model trains in less time and achieves better recommendation performance than current deep-learning-based collaborative filtering models.
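One plausible way to build timestamp-ordered training instances carrying both positive and negative implicit feedback is sketched below (a hypothetical format for illustration; the thesis defines its own data format, and the window size here is made up):

```python
def build_sequences(events, window=3):
    """Turn a user's timestamped implicit feedback into RNN training
    instances: the input is the last `window` (item, label) pairs in
    time order, the target is the next positively consumed item.

    events: list of (timestamp, item_id, label), where label 1 marks a
    positive interaction (e.g. a click) and label 0 a negative one
    (e.g. a skipped impression).
    """
    events = sorted(events)                    # order by timestamp
    samples = []
    for i in range(window, len(events)):
        ts, item, label = events[i]
        if label == 1:                         # only positives are targets
            history = [(e[1], e[2]) for e in events[i - window:i]]
            samples.append((history, item))
    return samples

log = [(1, "a", 1), (2, "b", 0), (3, "c", 1), (4, "d", 0), (5, "e", 1)]
data = build_sequences(log)
# one training sample: history [(b,0), (c,1), (d,0)] -> target "e"
```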
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Chiu, Yi-Feng, and 邱一峰. "STUDY ON SELF-CONSTRUCTING FUZZY NEURAL NETWORK CONTROLLER USING RECURRENT NEURAL NETWORK LEARNING STRATEGY." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/38808034711756082416.

Повний текст джерела
Анотація:
碩士
大同大學
電機工程學系(所)
101
In this thesis, a self-constructing fuzzy neural network (SCFNN) controller using a recurrent neural network (RNN) learning strategy is proposed. In the back-propagation (BP) algorithm of the SCFNN controller, the exact Jacobian of the plant cannot be computed, so the RNN learning strategy is proposed to replace the error term of the SCFNN controller. After training, the RNN fully captures the relation between the control signal and the output of the nonlinear plant. Moreover, the structure-learning and parameter-learning phases are performed concurrently and on-line in the SCFNN. The controller is designed to achieve tracking control of an electronic throttle. The proposed controller has two processes: a structure-learning phase based on the partition of the input space, and a parameter-learning phase based on the supervised gradient-descent method using the BP algorithm. The Mahalanobis distance (M-distance) method is employed as the criterion for deciding whether a Gaussian function is generated or eliminated. Finally, simulation results on the electronic throttle valve demonstrate the performance and effectiveness of the proposed controller.
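The Mahalanobis-distance criterion works by measuring input novelty against the existing Gaussian units, as in this sketch (the covariance and threshold values are illustrative, not those of the thesis):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of input x from a Gaussian unit:
    sqrt((x - m)^T C^-1 (x - m)).

    In a self-constructing network this distance is compared with a
    threshold; if every existing unit is farther than the threshold,
    a new Gaussian unit is generated for the current input.
    """
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.0], [0.0, 4.0]])     # larger variance on axis 2
d1 = mahalanobis(np.array([2.0, 0.0]), mean, cov)
d2 = mahalanobis(np.array([0.0, 2.0]), mean, cov)
# the same Euclidean offset counts less along the high-variance axis:
# d1 = 2.0 but d2 = 1.0, so only the first input looks "novel"
```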
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Wang, Chung-Hao, and 王仲豪. "STUDY ON SELF-CONSTRUCTING FUZZY NEURAL NETWORK CONTROLLER USING RECURRENT WAVELET NEURAL NETWORK LEARNING STRATEGY." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/66373384738532600320.

Повний текст джерела
Анотація:
碩士
大同大學
電機工程學系(所)
102
In this thesis, a self-constructing fuzzy neural network (SCFNN) controller using a recurrent wavelet neural network (RWNN) learning strategy is proposed. The SCFNN has been proven over the years to model the relationship between the input and output of nonlinear dynamic systems, but the method still suffers from slow training. Because the wavelet transform is formed through the dilation and translation of a mother wavelet, the RWNN can resolve signals in both time and scale and is well suited to describing nonlinear phenomena; incorporating the adaptive RWNN learning strategy therefore improves the learning capability of the SCFNN controller. The proposed controller has two learning phases: structure learning and parameter learning. In the former, the Mahalanobis distance method is used to decide whether a Gaussian function is generated or eliminated; in the latter, the parameters are updated by the gradient-descent method. Both phases are executed synchronously and in real time. In this study, the electronic throttle system, a nonlinear dynamic plant, is used as the control target to achieve throttle-angle control; simulations show that the proposed control method identifies the system well and is accurate.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Chang, Chun-Hung, and 張俊弘. "Pricing Euro Currency Options—Comparison of Back-Propagation Neural Network Modeland Recurrent Neural Network Model." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/08966045306928572228.

Повний текст джерела
Анотація:
碩士
中原大學
企業管理研究所
92
During the past four decades, options have become one of the most popular derivative products in the financial market, and the accuracy of option pricing has been an interesting topic since Black and Scholes' model of 1973. The target of this investigation is the Euro currency option. The study uses two artificial neural network models (back-propagation neural network and recurrent neural network) and four volatility variables (historical volatility, implied volatility, GARCH volatility, and no volatility) in order to compare the pricing performance of every combination, analyze the valuation abilities of the two network models and the applicability of the volatility variables, and verify whether volatility is a key input under the learning mechanism of these models. The empirical results show that both neural network models have limitations in forecasting accurate valuations over long horizons. After shortening the forecast period, the implied volatility variable produced the smallest error in both models, while the no-volatility variant resulted in the largest error of the four. Regarding the other two variables, GARCH volatility is second only to implied volatility under the back-propagation model, but historical volatility outperforms GARCH volatility under the recurrent model. In summary, the choice of volatility has a significant impact, so using an appropriate volatility seems more important than the choice of artificial neural network model.
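Two of the ingredients the study compares, historical volatility as a network input and the Black-Scholes benchmark, can be computed as follows (a standard textbook sketch with made-up numbers, not the thesis data):

```python
import math

def historical_volatility(prices, periods_per_year=252):
    """Annualized historical volatility: sample standard deviation of
    log returns scaled by sqrt(periods per year)."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var * periods_per_year)

def bs_call(s, k, t, r, sigma):
    """Black-Scholes (1973) European call price, the classical benchmark
    the neural pricing models are compared against."""
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(s / k) + (r + sigma**2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * n(d1) - k * math.exp(-r * t) * n(d2)

sigma = historical_volatility([100, 101, 100, 101, 100])
price = bs_call(s=100, k=100, t=0.5, r=0.02, sigma=0.2)
# an at-the-money half-year call at 20% vol is worth a few percent of spot
```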
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Li, Jyun-Hong, and 李俊宏. "Object Mask and Boundary Guided Recurrent Convolution Neural Network." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/cz2j2t.

Повний текст джерела
Анотація:
碩士
國立中央大學
資訊工程學系
104
Convolutional neural networks (CNN) have outstanding recognition performance: they improve not only whole-image classification but also local recognition tasks. The fully convolutional network (FCN) likewise advances semantic image segmentation, significantly improving accuracy over the traditional approach of region proposals combined with a support vector machine. In this paper we combine two networks to improve accuracy: one produces a mask, and the other classifies the label of each pixel. Our first proposal changes the joint images of the domain transform in DT-EdgeNet [19]. Because the joint images of DT-EdgeNet are edges, including edges of objects that do not belong to the training classes, we suspect that the result of [19] after the domain transform is influenced by those edges. Our mask net produces score maps for background, object, and boundary that exclude objects outside the training classes, so the influence of non-class objects is reduced; the mask also refines the spatial information. Our second proposal concatenates outputs at different pixel strides of OBG-FCN [18]; adding this concatenation layer to the training net enhances accuracy at object boundaries. Finally, we tested the proposed architecture on PASCAL VOC 2012 and obtained a mean IoU 6.6% higher than the baseline.
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Huang, Bo-Yuan, and 黃柏元. "The Composite Design of Recurrent Neural Network H∞ - Compensator." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35654951335458184154.

Повний текст джерела
Анотація:
碩士
國立成功大學
系統及船舶機電工程學系碩博士班
93
In this study, a composite Recurrent Neural Network (RNN) H∞ compensator is proposed for tracking a desired input. The composite control system consists of the H∞ compensator proposed by Hwang 【3】 and Doyle 【6】 and a back-propagation RNN compensator. To keep the controlled system robust, the H∞ control law is relatively conservative in its solution process; to speed up the convergence of the tracking errors and meet the prescribed performance, a recurrent neural network with a self-learning algorithm is used to improve the H∞ compensator. The back-propagation algorithm in the proposed RNN-H∞ compensator minimizes the time spent computing the predicted parameters. Computer simulations show that the desired performance is easily achieved with the proposed RNN-H∞ compensator in the presence of disturbances.
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Hau-Lung, Huang, and 黃浩倫. "Real Time Learning Recurrent Neural Network for Flow Estimation." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/90765984108789147121.

Повний текст джерела
Анотація:
碩士
國立臺灣大學
農業工程學研究所
87
This research presents an alternative Artificial Neural Network (ANN) approach to streamflow estimation. The Recurrent Neural Network (RNN) architecture we use provides dynamic internal feedback loops that let the system store information for later use. The Real-Time Recurrent Learning (RTRL) algorithm is employed to enhance learning efficiency; its main feature is that it does not need many historical examples for training. Combining the RNN and RTRL to model the watershed rainfall-runoff process complements traditional streamflow-estimation techniques.
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Peng, Chung-chi, and 彭中麒. "Recurrent Neural Network Control for a Synchronous Reluctance Motor." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/21986022062786916763.

Повний текст джерела
Анотація:
碩士
國立雲林科技大學
電機工程系碩士班
101
This thesis develops a digital signal processor (dSPACE DS1104) based synchronous reluctance motor (SynRM) drive system. An Elman neural network (ENN) controller and a modified ENN controller are proposed for the SynRM under parameter variations and external disturbances. Compared with a generic recurrent neural network (RNN), the ENN converges faster owing to its special recurrent structure. On-line parameter learning of the network uses the back-propagation (BP) algorithm, and a discrete-type Lyapunov function guarantees convergence of the output error. Finally, experimental results demonstrate the effectiveness of the proposed controller algorithms.
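The "special recurrent structure" of the Elman network is a set of context units holding a copy of the previous hidden activation, as in this forward-pass sketch (random placeholder weights, not a trained motor controller):

```python
import numpy as np

def elman_step(x, context, w_in, w_ctx, w_out):
    """One forward step of an Elman network: the hidden layer sees the
    current input plus the context units, which store the hidden
    activation from the previous step."""
    hidden = np.tanh(w_in @ x + w_ctx @ context)
    output = w_out @ hidden
    return output, hidden            # hidden becomes the next context

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 2, 4, 1
w_in = rng.standard_normal((n_hid, n_in))
w_ctx = rng.standard_normal((n_hid, n_hid))
w_out = rng.standard_normal((n_out, n_hid))

x = np.array([1.0, 0.0])
y1, c = elman_step(x, np.zeros(n_hid), w_in, w_ctx, w_out)
y2, _ = elman_step(x, c, w_in, w_ctx, w_out)
# the same input gives a different output the second time, because the
# context units remember the previous hidden state
```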
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Lu, Tsai-Wei, and 盧采威. "Tikhonov regularization for deep recurrent neural network acoustic modeling." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/70636533678066549649.

Повний текст джерела
Анотація:
碩士
國立交通大學
電信工程研究所
102
Deep learning has been widely demonstrated to achieve high performance in many classification tasks, and deep neural networks are now a new trend in automatic speech recognition. In this dissertation, we deal with model regularization in deep recurrent neural networks and develop deep acoustic models for speech recognition in noisy environments. Our idea is to compensate for the variations of the input speech data in the restricted Boltzmann machine (RBM), which is applied as a pre-training stage for feature learning and acoustic modeling. We implement Tikhonov regularization in the pre-training procedure to build invariance properties into the acoustic neural network model. Regularization based on weight decay is further combined with Tikhonov regularization to increase the mixing rate of the alternating Gibbs Markov chain, so that contrastive-divergence training tends to approximate maximum-likelihood learning. In addition, the backpropagation-through-time (BPTT) algorithm is developed in modified truncated minibatch training for the recurrent neural network; it is applied not only to the recurrent weights but also to the weights between the previous layer and the recurrent layer. In the experiments, we carry out the proposed methods using the open-source Kaldi toolkit. Results on the Resource Management (RM) and Aurora4 speech corpora show that the hybrid regularization and BPTT training do improve the performance of the deep neural network acoustic model for robust speech recognition.
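Weight decay simply adds an L2 penalty to the objective, and Tikhonov regularization can be approximated by training on noise-perturbed inputs, which penalizes sensitivity to small input variations. A schematic sketch of the combined update direction (not the Kaldi recipe used in the thesis; the decay and noise constants are illustrative):

```python
import numpy as np

def regularized_grad(w, data_grad, x, decay=1e-4, noise_std=0.1, rng=None):
    """Combine two regularizers on one training step:
    - weight decay: adds decay * w, the gradient of 0.5*decay*||w||^2;
    - a Tikhonov-flavored term realized by perturbing the input with
      Gaussian noise before it is fed to the model.
    Returns the regularized gradient and the perturbed input.
    """
    rng = rng or np.random.default_rng(0)
    x_noisy = x + noise_std * rng.standard_normal(x.shape)
    return data_grad + decay * w, x_noisy

w = np.ones(3)
g, x_noisy = regularized_grad(w, data_grad=np.zeros(3), x=np.zeros(3))
# with a zero data gradient only the decay term survives: g = decay * w
```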
Стилі APA, Harvard, Vancouver, ISO та ін.
49

CHEN, JYUN-HE, and 陳均禾. "System Identification and Classification Using Elman Recurrent Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/825p2n.

Повний текст джерела
Анотація:
碩士
國立雲林科技大學
電機工程系
107
In recent years, the fast development of Artificial Intelligence has driven technological progress through three major technologies: Machine Learning, Deep Learning, and Natural Language Processing, of which Machine Learning is the largest part. Through software, artificial neural networks let computers emulate learning abilities like those of the human brain. In this thesis, to understand how artificial neural networks learn classification problems and nonlinear system identification, an Elman neural network with a self-feedback factor is used. Six algorithms, i.e., RTRL, GA, PSO, BBO, IWO, and a hybrid IWO/BBO method, are utilized to learn the weights of the Elman neural network. To explore the effectiveness of the algorithms and network architectures, four classification problems are used (the Breast Cancer, Parkinsons, SPECT Heart, and Lung Cancer data sets) together with three nonlinear system-identification problems (a nonlinear plant, the Henon system, and the Mackey-Glass time series). Finally, MSE, STD, and the classification rate are used in the classification experiments, while MSE, STD, and NDEI are used to compare and analyze the system-identification problems.
Стилі APA, Harvard, Vancouver, ISO та ін.
50

CHEN, SHEN-CHI, and 陳順麒. "On the Recurrent Neural Network Based Intrusion Detection System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/75tb39.

Повний текст джерела
Анотація:
碩士
逢甲大學
資訊工程學系
107
With the advancement of modern science and technology, applications of the Internet of Things are developing ever faster. The smart grid is one example: it provides full communication, monitoring, and control of the components in a power system in order to meet the increasing demand for reliable energy. Since many components in such systems can be monitored and controlled remotely, they are vulnerable to malicious cyber-attacks if exploitable loopholes exist. In a power system, disturbances caused by cyber-attacks are mixed with those caused by natural events, so it is crucial for a smart-grid intrusion detection system to classify the types of disturbances and pinpoint attacks with high accuracy. The amount of information in a smart grid is much larger than before, and the computation on this big data grows accordingly. Many techniques have been proposed to extract useful information from these data, and deep learning is one of them: it can learn a model from a large set of training data and classify unknown events in subsequent data. In this paper, we apply the recurrent neural network (RNN) algorithm, as well as two of its variants, to train models for intrusion detection in the smart grid. Our experimental results show that the RNN achieves high accuracy and precision on a set of real data collected from an experimental power-system network.
Стилі APA, Harvard, Vancouver, ISO та ін.
Ми пропонуємо знижки на всі преміум-плани для авторів, чиї праці увійшли до тематичних добірок літератури. Зв'яжіться з нами, щоб отримати унікальний промокод!

До бібліографії