A selection of scholarly literature on the topic "Incremental neural network"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Incremental neural network".

Next to each source in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Incremental neural network"

1

Yang, Shuyuan, Min Wang, and Licheng Jiao. "Incremental constructive ridgelet neural network." Neurocomputing 72, no. 1-3 (December 2008): 367–77. http://dx.doi.org/10.1016/j.neucom.2008.01.001.

2

Siddiqui, Zahid Ali, and Unsang Park. "Progressive Convolutional Neural Network for Incremental Learning." Electronics 10, no. 16 (August 5, 2021): 1879. http://dx.doi.org/10.3390/electronics10161879.

Abstract:
In this paper, we present a novel incremental learning technique to solve the catastrophic forgetting problem observed in CNN architectures. We use a progressive deep neural network to incrementally learn new classes while keeping the performance of the network unchanged on old classes. The incremental training requires us to train the network only for the new classes and to fine-tune the final fully connected layer, without retraining the entire network, which significantly reduces the training time. We evaluate the proposed architecture extensively on the image classification task using the Fashion MNIST, CIFAR-100 and ImageNet-1000 datasets. Experimental results show that the proposed network architecture not only alleviates catastrophic forgetting but also leverages prior knowledge via lateral connections to previously learned classes and their features. In addition, the proposed scheme is easily scalable and does not require structural changes to the network trained on the old task, properties that are highly desirable in embedded systems.
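A minimal PyTorch sketch of the mechanism this abstract describes: freeze what has already been learned, attach a new head for the newly arriving classes, and update only that head together with the final classification stage. The class name, feature dimension, and training details below are our own assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ProgressiveClassifier(nn.Module):
        def __init__(self, backbone: nn.Module, feat_dim: int, n_old_classes: int):
            super().__init__()
            self.backbone = backbone  # feature extractor trained on old classes
            self.heads = nn.ModuleList([nn.Linear(feat_dim, n_old_classes)])

        def add_classes(self, n_new: int):
            # Freeze everything learned so far to avoid catastrophic forgetting.
            for p in self.parameters():
                p.requires_grad = False
            # Only this new head is trained for the new classes.
            self.heads.append(nn.Linear(self.heads[0].in_features, n_new))

        def forward(self, x):
            feats = self.backbone(x)
            # The concatenated heads act as the final fully connected layer.
            return torch.cat([head(feats) for head in self.heads], dim=1)

    # Usage sketch: after model.add_classes(5), optimize only what is trainable:
    # opt = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)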
3

Ho, Jiacang, and Dae-Ki Kang. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11 (November 17, 2020): 1929. http://dx.doi.org/10.3390/electronics9111929.

Abstract:
Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still face several open challenges, such as the incremental learning problem, overfitting, hyperparameter optimization, and a lack of flexibility and multitasking. In this paper, we focus on the incremental learning problem, which concerns machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, the simplest and most direct solution to this challenge is to retrain the entire neural network after adding the new labels to the output layer. Alternatively, transfer learning can be applied, but only if the domain of the new labels is related to the domain of the labels on which the neural network has already been trained. In this paper, we propose a novel network architecture, the Brick Assembly Network (BAN), which allows a new label to be assembled into (or dismantled from) a trained neural network without retraining the entire network. In BAN, we train each label individually with a sub-network (i.e., a simple neural network) and then assemble the converged sub-networks, each trained for a single label, into a full neural network. For each label trained in a sub-network of BAN, we introduce a new loss function that minimizes the loss of the network using data from only one class. Applying one loss function per class label is unique and differs from standard neural network architectures (e.g., AlexNet, ResNet, InceptionV3), which use the values of a loss function computed over multiple labels to minimize the error of the network. The difference between the loss functions of previous approaches and the one we introduce is that we compute the loss from the node values of the penultimate layer (which we name the characteristic layer) instead of the output layer, where the loss is computed between true and predicted labels. Experimental results on several benchmark datasets show that BAN has a strong capability for adding (and removing) a new label to (and from) a trained network compared with a standard neural network and previous work.
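The assemble/dismantle idea lends itself to a compact outline. In this hypothetical sketch the per-label scorer callables stand in for BAN's converged sub-networks and their characteristic-layer loss; it is not the paper's code.

    class BrickEnsemble:
        def __init__(self):
            self.bricks = {}  # label -> scoring function of a trained sub-network

        def assemble(self, label, scorer):
            self.bricks[label] = scorer   # no retraining of the other bricks

        def dismantle(self, label):
            self.bricks.pop(label, None)  # the other labels are untouched

        def predict(self, x):
            # The label whose sub-network fits the sample best wins.
            return max(self.bricks, key=lambda lbl: self.bricks[lbl](x))

Adding or removing a class is then a dictionary operation rather than a retraining run, which is the property the abstract emphasizes.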
4

Abramova, E. S., A. A. Orlov, and K. V. Makarov. "Possibilities of Using Neural Network Incremental Learning." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (November 2021): 19–27. http://dx.doi.org/10.14529/ctcr210402.

Abstract:
The present time is characterized by unprecedented growth in the volume of information flows. Information processing underlies the solution of many practical problems. The range of intelligent information systems applications is extremely extensive: from managing continuous technological processes in real time to solving commercial and administrative problems. The main property an intelligent information system should have is the ability to quickly process dynamically incoming data in real time. Intelligent information systems should also extract knowledge from previously solved problems. Incremental neural network training has become one of the topical issues in machine learning in recent years. Compared to traditional machine learning, incremental learning allows assimilating new knowledge that arrives gradually while preserving old knowledge gained from previous tasks. Such training should be useful in intelligent systems where data flows dynamically. Aim. To consider the concepts, problems, and methods of incremental neural network training, and to assess the possibility of using it in the development of intelligent systems. Materials and methods. The idea of incremental learning, drawn from the analysis of how a person learns throughout life, is considered. The terms used in the literature to describe incremental learning are presented. The obstacles that arise in achieving the goal of incremental learning are described. Three scenarios of incremental learning are described, among which class-incremental learning is distinguished. The methods of incremental learning are analyzed, grouped into families of techniques by how they solve the catastrophic forgetting problem. The possibilities offered by incremental learning versus traditional machine learning are presented. Results. The article attempts to assess the current state and applicability of incremental neural network learning and to identify its differences from traditional machine learning. Conclusion. Incremental learning is useful for future intelligent systems, as it allows maintaining existing knowledge while updating it, avoiding learning from scratch, and dynamically adjusting the model's ability to learn according to newly available data.
5

Tomimori, Haruka, Kui-Ting Chen, and Takaaki Baba. "A Convolutional Neural Network with Incremental Learning." Journal of Signal Processing 21, no. 4 (2017): 155–58. http://dx.doi.org/10.2299/jsp.21.155.

6

Shiotani, Shigetoshi, Toshio Fukuda, and Takanori Shibata. "A neural network architecture for incremental learning." Neurocomputing 9, no. 2 (October 1995): 111–30. http://dx.doi.org/10.1016/0925-2312(94)00061-v.

7

Mellado, Diego, Carolina Saavedra, Steren Chabert, Romina Torres, and Rodrigo Salas. "Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning." Algorithms 12, no. 10 (October 1, 2019): 206. http://dx.doi.org/10.3390/a12100206.

Abstract:
Deep learning models are part of the family of artificial neural networks and, as such, suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a deep convolutional neural network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while incurring gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes that are hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning.
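The pseudorehearsal loop at the heart of such systems is simple to state: before each incremental update, draw replayed samples of the old classes from the generator and mix them with the new real data, so the classifier is never trained on the new classes alone. A minimal sketch under our own assumptions (the classifier and generator are passed in as callables; this is not the SIGANN code):

    import numpy as np

    def pseudorehearsal_step(fit, sample_old, x_new, y_new, replay_ratio=1.0):
        # sample_old(n) returns n generated inputs and labels for old classes.
        n_replay = int(replay_ratio * len(x_new))
        x_old, y_old = sample_old(n_replay)
        x = np.concatenate([x_new, x_old])
        y = np.concatenate([y_new, y_old])
        idx = np.random.permutation(len(x))  # shuffle real and replayed data
        fit(x[idx], y[idx])                  # one incremental training step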
8

Heo, Kwang-Seung, and Kwee-Bo Sim. "Speaker Identification Based on Incremental Learning Neural Network." International Journal of Fuzzy Logic and Intelligent Systems 5, no. 1 (March 1, 2005): 76–82. http://dx.doi.org/10.5391/ijfis.2005.5.1.076.

9

Ciarelli, Patrick Marques, Elias Oliveira, and Evandro O. T. Salles. "An incremental neural network with a reduced architecture." Neural Networks 35 (November 2012): 70–81. http://dx.doi.org/10.1016/j.neunet.2012.08.003.

10

Zhang, Yansheng, Dong Ye, Yuanhong Liu, and Jianjun Xu. "Incremental LLE Based on Back Propagation Neural Network." IOP Conference Series: Earth and Environmental Science 170 (July 2018): 042051. http://dx.doi.org/10.1088/1755-1315/170/4/042051.


Dissertations on the topic "Incremental neural network"

1

Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.

Abstract:
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, VQ still retains some significance due to its much lower computational cost, especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of DTW in an application to handwritten signature recognition, at a much lower computational cost. Adding sensitivity to temporal patterns to a VQ algorithm could help improve such results further. SOTPAR2 is such an extension of Neural Gas, an artificial neural network algorithm for VQ. SOTPAR2 uses a conceptually simple approach, based on adding lateral connections between network nodes and creating "temporal activity" that diffuses through adjacent nodes. The activity in turn biases the nearest-neighbor classifier toward network nodes with high activity, and the SOTPAR2 authors report improvements over Neural Gas in an application to time series prediction. This report presents an investigation of how this same extension affects the quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm which automatically chooses a suitable codebook size and can also be used for clustering with arbitrary cluster shapes. The extension is found not to improve the performance of SOINN; in fact, it makes performance worse in all experiments attempted. A discussion of this result is provided, along with a discussion of the impact of the algorithm parameters, and possible future work to improve the results is suggested.
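The activity-diffusion mechanism described above can be illustrated in a few lines: each node carries an activity value that decays over time and diffuses to its graph neighbours, and winner selection subtracts a multiple of this activity from the input-to-node distance. The parameter names and update schedule below are our own assumptions, not SOTPAR2's.

    import numpy as np

    def biased_winner(x, nodes, activity, neighbours,
                      decay=0.9, diffusion=0.1, boost=1.0):
        # nodes: (N, d) prototype vectors; activity: (N,) temporal activity.
        dists = np.linalg.norm(nodes - x, axis=1)
        winner = int(np.argmin(dists - boost * activity))  # activity-biased match
        activity *= decay                  # temporal decay of all activities
        for j in neighbours[winner]:
            activity[j] += diffusion       # diffusion along lateral connections
        activity[winner] += 1.0            # the winner becomes the most active
        return winner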
2

Flores, João Henrique Ferreira. "ARMA-CIGMN: an Incremental Gaussian Mixture Network for time series analysis and forecasting." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/116126.

Abstract:
This work presents a new neural network model for time series analysis and forecasting, the ARMA-CIGMN (Autoregressive Moving Average Classical Incremental Gaussian Mixture Network) model, together with its analysis. This model is based on modifications made to a reformulated IGMN, the Classical IGMN (CIGMN). The CIGMN is similar to the original IGMN, but is based on a classical statistical approach. The modifications to the IGMN algorithm were made to better fit it to time series. The proposed ARMA-CIGMN model produces good forecasts, and the modeling procedure can also be aided by well-known statistical tools such as the autocorrelation (acf) and partial autocorrelation (pacf) functions, already used in classical statistical time series modeling and with the original IGMN models. The ARMA-CIGMN model was evaluated using known series and simulated data. The models used for comparison were the classical statistical ARIMA model and its variants, the original IGMN, and two modifications of the original IGMN: (i) one similar to a classical ARMA (Autoregressive Moving Average) model and (ii) one similar to an NOE (Nonlinear Output Error) model. A reformulated IGMN version based on a classical statistical approach, which is needed for the ARMA-CIGMN model, is also presented.
3

Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.

Abstract:
We are interested in the problem of continual learning of artificial neural networks in the case where the data are available for only one class at a time. To address the problem of catastrophic forgetting that restrains learning performance in these conditions, we propose an approach based on representing the data of a class by a normal distribution. The transformations associated with these representations are performed using invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features. In this setting, predicting the class of a sample corresponds to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it is no longer necessary to update it later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments performed on various datasets and show that our approach performs favorably compared to the state of the art. Subsequently, we propose to optimize our approach by reducing its memory footprint through factoring the network parameters. It is then possible to significantly reduce the storage cost of these networks with a limited performance loss. Finally, we also study strategies to produce efficient feature extractor models for continual learning and show their relevance compared to the networks traditionally used for continual learning.
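The one-independent-model-per-class scheme can be rendered in outline: each class keeps its own density model, nothing is updated when a new class arrives, and prediction picks the class whose model assigns the sample the highest log-likelihood. In this sketch a diagonal Gaussian stands in for the invertible network, purely for illustration.

    import numpy as np

    class PerClassDensities:
        def __init__(self):
            self.models = {}  # class label -> (mean, variance) of its data

        def fit_class(self, label, x):
            # Trained with the data of a single class; the others are untouched.
            self.models[label] = (x.mean(axis=0), x.var(axis=0) + 1e-6)

        def log_likelihood(self, x, mean, var):
            return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

        def predict(self, x):
            # The class whose model best fits the sample wins.
            return max(self.models,
                       key=lambda c: self.log_likelihood(x, *self.models[c]))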
4

Ronco, Eric. "Incremental polynomial controller networks: two self-organising non-linear controllers." Thesis, 1997. http://hdl.handle.net/1905/181.

5

Buttar, Sarpreet Singh. "Applying Artificial Neural Networks to Reduce the Adaptation Space in Self-Adaptive Systems : an exploratory work." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-87117.

Abstract:
Self-adaptive systems have limited time to adjust their configurations whenever their adaptation goals, i.e., quality requirements, are violated due to runtime uncertainties. Within the available time, they need to analyze their adaptation space, i.e., a set of configurations, to find the best adaptation option, i.e., configuration, that can achieve their adaptation goals. Existing formal analysis approaches find the best adaptation option by analyzing the entire adaptation space. However, exhaustive analysis requires time and resources and is therefore only efficient when the adaptation space is small. The size of the adaptation space is often in the hundreds or thousands, which makes formal analysis approaches inefficient in large-scale self-adaptive systems. In this thesis, we tackle this problem by presenting an online learning approach that enables formal analysis approaches to analyze large adaptation spaces efficiently. The approach integrates with the standard feedback loop and reduces the adaptation space to a subset of adaptation options that are relevant to the current runtime uncertainties. The subset is then analyzed by the formal analysis approaches, which allows them to complete the analysis faster and more efficiently within the available time. We evaluate our approach on two different instances of an Internet of Things application. The evaluation shows that our approach dramatically reduces the adaptation space and analysis time without compromising the adaptation goals.
6

Monica, Riccardo. "Deep Incremental Learning for Object Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12331/.

Abstract:
In recent years, deep learning techniques have received great attention in the field of information technology. These techniques have proved to be particularly useful and effective in domains like natural language processing, speech recognition, and computer vision, and in several real-world applications deep learning approaches have improved the state of the art. In the field of machine learning, deep learning was a real revolution, and a number of effective techniques have been proposed for supervised, unsupervised, and representation learning. This thesis focuses on deep learning for object recognition and, in particular, addresses incremental learning techniques. By incremental learning we denote approaches able to create an initial model from a small training set and to improve the model as new data become available. Using temporally coherent sequences has proved useful for incremental learning, since temporal coherence also allows operating in an unsupervised manner. A critical issue in incremental learning is forgetting: the risk of losing previously learned patterns as new data are presented. In the first chapters of this work we introduce the basic theory of neural networks, convolutional neural networks, and incremental learning. The CNN is today one of the most effective approaches to supervised object recognition; it is well accepted by the scientific community and widely used by large ICT players like Google and Facebook, with relevant applications such as Facebook face recognition and Google image search. The scientific community has several (large) datasets (e.g., ImageNet) for the development and evaluation of object recognition approaches, but very few temporally coherent datasets are available to study incremental approaches. For this reason we decided to collect a new dataset named TCD4R (Temporal Coherent Dataset For Robotics).
7

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Abstract:
This thesis' original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through the data. This algorithm, called the Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. The same function approximator was then employed to model the joint state and Q-value space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. The results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning compared with conventional neural networks.
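The single-pass statistics behind such incremental Gaussian mixture models can be shown with one diagonal component. The FIGMN itself creates and manages components on demand and is considerably more involved; this is an illustrative sketch, not the thesis code.

    import numpy as np

    class IncrementalGaussian:
        # Welford-style online estimate of mean and variance, one sample at a time.
        def __init__(self, dim):
            self.n = 0
            self.mean = np.zeros(dim)
            self.m2 = np.zeros(dim)

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n         # incremental mean update
            self.m2 += delta * (x - self.mean)  # accumulated squared deviations

        def var(self):
            return self.m2 / max(self.n - 1, 1)

Every sample is seen exactly once, which satisfies the single-pass requirement the abstract emphasizes.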
8

Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.

Abstract:
This work introduces novel neural network algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through the data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with locally weighted regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), an exponential moving average of the reconstructed input vector (Merge IGMN), and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive, and temporally capable, and that are therefore suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data flows, and requiring life-long learning while operating and giving predictions without separate learning and execution stages. The proposed algorithms are compared to other spatio-temporal neural networks on 8 time-series prediction tasks. Two of them show satisfactory performance, generally improving upon existing approaches. A general enhancement of the IGMN algorithm is also described, eliminating one of the algorithm's manually tunable parameters and giving better results.
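The first of the four extensions, a time-delay line, amounts to embedding the series into overlapping windows before feeding a memoryless learner. An illustrative helper under our own assumptions (not the thesis code):

    import numpy as np

    def time_delay_embed(series, delays=3):
        # Row i is [series[i], ..., series[i+delays-1]];
        # the regression target is the next value, series[i+delays].
        X = np.array([series[i:i + delays] for i in range(len(series) - delays)])
        y = series[delays:]
        return X, y

    # Example: X, y = time_delay_embed(np.sin(np.linspace(0, 20, 200)), delays=5)

Any online regressor trained on these (X, y) pairs then performs one-step-ahead prediction without needing internal temporal state.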
9

Glöde, Isabella. "Autonomous control of a mobile robot with incremental deep learning neural networks." Master's thesis, Pontificia Universidad Católica del Perú, 2021. http://hdl.handle.net/20.500.12404/18676.

Abstract:
Over the last few years, autonomous driving has had an increasingly strong impact on the automotive industry. This has created an increased need for artificial intelligence algorithms which allow computers to make human-like decisions. However, a compromise between the computational power drawn by these algorithms and their subsequent performance must be found to fulfil production requirements. In this thesis, incremental deep learning strategies are used for the control of a mobile robot such as a four-wheel-steering vehicle. This strategy is similar to the human approach to learning: in many small steps, the vehicle learns to achieve a specific goal. The use of incremental training leads to a growing knowledge base within the system. It also provides the opportunity to use earlier training achievements to improve the system when more training data is available. To demonstrate the capabilities of such an algorithm, two different models have been formulated: first, a simpler model with counter wheel steering, and second, a more complex, nonlinear model with independent steering. These two models are trained incrementally to follow different types of trajectories, and an algorithm was established to generate useful initial points. The incremental steps allow the robot to be positioned further and further away from the desired trajectory in the environment. Afterwards, the effects of different trajectory types on model behaviour are investigated in over one thousand simulation runs. To do this, path planning for straight lines and circles is introduced. This work demonstrates that even simple network structures can achieve high performance in simulation.
10

Thuv, Øyvin Halfdan. "Incrementally Evolving a Dynamic Neural Network for Tactile-Olfactory Insect Navigation." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8834.

Abstract:

This Master's thesis gives a thorough description of a study carried out in the Self-Organizing Systems group at NTNU. Much AI research in recent years has moved towards increased use of representationless strategies such as simulated neural networks. One technique for creating such networks is to evolve them using simulated Darwinian evolution. This is a powerful technique, but it is often limited by the computer resources available. One way to speed up evolution is to focus the evolutionary search on a narrower range of solutions. It is, for example, possible to favor evolution of a specific "species" by initializing the search with a specialized set of genes. A disadvantage of doing this is of course that many other solutions (or "species") are disregarded, so that good solutions may in theory be lost. It is therefore necessary to find focusing strategies that are generally applicable and (with a high probability) only disregard solutions that are considered unimportant. Three different ways of focusing evolutionary search for cognitive behaviours are merged and evaluated in this thesis: on a macro level, incremental evolution is applied to partition the evolutionary search; on a micro level, specific properties of the chosen neural network model (CTRNNs) are exploited. The two properties are seeding initial populations with center-crossing neural networks and/or bifurcative neurons. The techniques are compared to standard, naive evolutionary searches by applying them to the evolution of simulated neural networks for the walking and control of a six-legged mobile robot: a problem simple enough to be satisfactorily understood, but complex enough to be a challenge for a traditional evolutionary search.


Books on the topic "Incremental neural network"

1

Mundy, Peter. A Neural Networks, Information-Processing Model of Joint Attention and Social-Cognitive Development. Edited by Philip David Zelazo. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199958474.013.0010.

Abstract:
A neural networks approach to the development of joint attention can inform the study of the nature of human social cognition, learning, and symbolic thought processes. Joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one's own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of a distributed and integrated brain network involving frontal and parietal cortical systems. In this chapter I discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding, beginning in the first year of life. I also propose that with development joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs, the executive joint attention network makes vital contributions to the development of human social cognition and symbolic thinking.

Book chapters on the topic "Incremental neural network"

1

Shen, Shaofeng, Qiang Gan, Furao Shen, Chaomin Luo, and Jinxi Zhao. "An Incremental Network with Local Experts Ensemble." In Neural Information Processing, 515–22. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26555-1_58.

2

Alpaydın, Ethem. "Grow-and-Learn: An Incremental Method for Category Learning." In International Neural Network Conference, 761–64. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_69.

3

Kakemoto, Yoshitsugu, and Shinichi Nakasuka. "Dynamics of Incremental Learning by VSF-Network." In Artificial Neural Networks – ICANN 2009, 688–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_71.

4

Rizzi, A., M. Biancavilla, and F. M. Frattale Mascioli. "Incremental Min-Max Network. Part 1: Continuous Spaces." In Perspectives in Neural Computing, 371–76. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0811-5_43.

5

Zhang, Tianyue, Baile Xu, and Furao Shen. "Fuzzy Self-Organizing Incremental Neural Network for Fuzzy Clustering." In Neural Information Processing, 24–32. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70087-8_3.

6

Furao, Shen, and Osamu Hasegawa. "An Incremental Neural Network for Non-stationary Unsupervised Learning." In Neural Information Processing, 641–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30499-9_98.

7

Driff, Lydia Nahla, and Habiba Drias. "Artificial Neural Network for Incremental Data Mining." In Advances in Intelligent Systems and Computing, 133–43. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56535-4_14.

8

Shen, Furao, and Osamu Hasegawa. "Self-Organizing Incremental Neural Network and Its Application." In Artificial Neural Networks – ICANN 2010, 535–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_74.

9

Alfarozi, Syukron Abu Ishaq, Noor Akhmad Setiawan, Teguh Bharata Adji, Kuntpong Woraratpanya, Kitsuchart Pasupa, and Masanori Sugimoto. "Analytical Incremental Learning: Fast Constructive Learning Method for Neural Network." In Neural Information Processing, 259–68. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_30.

10

Wang, Xiaoyu, Lucian Gheorghe, and Jun-ichi Imura. "A Gaussian Process-Based Incremental Neural Network for Online Regression." In Neural Information Processing, 149–61. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63836-8_13.


Conference papers on the topic "Incremental neural network"

1

Kou, Jialiang, Shengwu Xiong, Shuzhen Wan, and Hongbing Liu. "The Incremental Probabilistic Neural Network." In 2010 Sixth International Conference on Natural Computation (ICNC). IEEE, 2010. http://dx.doi.org/10.1109/icnc.2010.5583589.

2

Mi, Fei, and Boi Faltings. "Memory Augmented Neural Model for Incremental Session-based Recommendation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/300.

Abstract:
Increasing concerns with privacy have stimulated interest in Session-based Recommendation (SR) using no personal data other than what is observed in the current browser session. Existing methods are evaluated in static settings, which rarely occur in real-world applications. To better address the dynamic nature of SR tasks, we study an incremental SR scenario, where new items and preferences appear continuously. We show that existing neural recommenders can be used in incremental SR scenarios with small incremental updates to alleviate computation overhead and catastrophic forgetting. More importantly, we propose a general framework called the Memory Augmented Neural model (MAN). MAN augments a base neural recommender with a continuously queried and updated nonparametric memory, and the predictions from the neural and memory components are combined through another lightweight gating network. We empirically show that MAN is well suited to the incremental SR task, and it consistently outperforms state-of-the-art neural and nonparametric methods. We analyze the results and demonstrate that it is particularly good at incrementally learning preferences for new and infrequent items.
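The combination the abstract describes (a parametric recommender, a nonparametric memory that is continuously queried and updated, and a gate that blends their predictions) can be outlined as follows. This is a hypothetical sketch; the running-average memory and the scalar sigmoid gate are our simplifications, not the paper's architecture.

    import numpy as np

    class SessionMemory:
        def __init__(self):
            self.store = {}  # item id -> running preference score

        def update(self, item, reward, lr=0.1):
            # Continuously updated as new interactions arrive.
            self.store[item] = (1 - lr) * self.store.get(item, 0.0) + lr * reward

        def scores(self, items):
            return np.array([self.store.get(i, 0.0) for i in items])

    def blended_scores(neural_scores, memory_scores, gate_logit=0.0):
        g = 1.0 / (1.0 + np.exp(-gate_logit))  # lightweight gating between parts
        return g * neural_scores + (1.0 - g) * memory_scores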
3

Wang, Jenq Haur, and Hsin Yang Wang. "Incremental Neural Network Construction for Text Classification." In 2014 International Symposium on Computer, Consumer and Control (IS3C). IEEE, 2014. http://dx.doi.org/10.1109/is3c.2014.254.

4

Okada, Shogo, and Toyoaki Nishida. "Incremental clustering of gesture patterns based on a self organizing incremental neural network." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178845.

5

Huang, Shin-Ying, Fang Yu, Rua-Huan Tsaih, and Yennun Huang. "Network-traffic anomaly detection with incremental majority learning." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280573.

6

Liu, Xiaojian. "Incremental Wavelet Neural Network based Prediction of Network Security Situation." In 2016 4th International Conference on Machinery, Materials and Computing Technology. Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/icmmct-16.2016.236.

7

Bakırlı, Gözde, Derya Birant, and Alp Kut. "INNA: Incremental Neural Network Algorithm and Performance Analysis." In Intelligent Systems and Control. Calgary, AB, Canada: ACTAPRESS, 2011. http://dx.doi.org/10.2316/p.2011.742-027.

8

Hebboul, Amel, Meriem Hacini, and Fella Hachouf. "An incremental parallel neural network for unsupervised classification." In 2011 7th International Workshop on Systems, Signal Processing and their Applications (WOSSPA). IEEE, 2011. http://dx.doi.org/10.1109/wosspa.2011.5931521.

9

Anowar, Farzana, and Samira Sadaoui. "Incremental Neural-Network Learning for Big Fraud Data." In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2020. http://dx.doi.org/10.1109/smc42975.2020.9283136.

10

Ter-Sarkisov, Alex, Holger Schwenk, Fethi Bougares, and Loïc Barrault. "Incremental Adaptation Strategies for Neural Network Language Models." In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.18653/v1/w15-4006.


Reports of organizations on the topic "Incremental neural network"

1

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information. An improved model for color sorting, which is stable and does not require recalibration for each season, was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.