Dissertations on the topic "Incremental neural network"

To view other types of publications on this topic, follow the link: Incremental neural network.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Explore the top 30 dissertations for research on the topic "Incremental neural network".

Next to every work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the online abstract of the work, if these details are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.

Full text of the source
Abstract:
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, VQ still retains some significance due to its much lower computational cost — especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of DTW in an application to handwritten signature recognition, at a much lower computational cost. Adding sensitivity to temporal patterns to a VQ algorithm could help improve such results further. SOTPAR2 is such an extension of Neural Gas, an Artificial Neural Network algorithm for VQ. SOTPAR2 uses a conceptually simple approach, based on adding lateral connections between network nodes and creating “temporal activity” that diffuses through adjacent nodes. The activity in turn makes the nearest-neighbor classifier biased toward network nodes with high activity, and the SOTPAR2 authors report improvements over Neural Gas in an application to time series prediction. This report presents an investigation of how this same extension affects quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm which automatically chooses a suitable codebook size and can also be used for clustering with arbitrary cluster shapes. This extension is found to not improve the performance of SOINN, in fact it makes performance worse in all experiments attempted. A discussion of this result is provided, along with a discussion of the impact of the algorithm parameters, and possible future work to improve the results is suggested.
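The activity-diffusion mechanism summarised above can be illustrated with a small sketch. The code below is not the SOTPAR2 or SOINN implementation from the thesis; it is a minimal, hypothetical NumPy example of the general idea: each codebook node carries a temporal activity value that decays and diffuses to laterally connected nodes, and the winner search is biased toward nodes with high activity. The parameters `decay`, `diffusion`, and `bias` are illustrative assumptions.

import numpy as np

# Minimal sketch of temporal-activity-biased vector quantization
# (illustrative only; not the SOTPAR2/SOINN code from the thesis).
rng = np.random.default_rng(0)

n_nodes, dim = 16, 2
codebook = rng.normal(size=(n_nodes, dim))           # node positions
neighbors = (rng.random((n_nodes, n_nodes)) < 0.2)   # random lateral connections
np.fill_diagonal(neighbors, False)
activity = np.zeros(n_nodes)

decay, diffusion, bias = 0.8, 0.1, 0.5               # assumed parameters

def quantize(x):
    """Pick the winning node: distance minus an activity bonus."""
    global activity
    dist = np.linalg.norm(codebook - x, axis=1)
    winner = np.argmin(dist - bias * activity)        # activity biases the search
    # The winner injects activity; activity decays and diffuses to neighbours.
    activity *= decay
    activity[winner] += 1.0
    activity += diffusion * (neighbors @ activity) / np.maximum(neighbors.sum(1), 1)
    return winner

stream = rng.normal(size=(100, dim))                  # stand-in for a time series
codes = [quantize(x) for x in stream]
print(codes[:10])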
APA, Harvard, Vancouver, ISO, and other styles
2

Flores, João Henrique Ferreira. "ARMA-CIGMN: an Incremental Gaussian Mixture Network for time series analysis and forecasting." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/116126.

Full text of the source
Abstract:
This work presents a new neural network model for time series analysis and forecasting, the ARMA-CIGMN (Autoregressive Moving Average Classical Incremental Gaussian Mixture Network), together with its analysis. The model is based on modifications made to a reformulated IGMN, the Classical IGMN (CIGMN). The CIGMN is similar to the original IGMN but is grounded in a classical statistical approach. The modifications to the IGMN algorithm were made to better fit it to time series. The proposed ARMA-CIGMN model delivers good forecasts, and the modeling procedure can also be aided by well-known statistical tools such as the autocorrelation (acf) and partial autocorrelation (pacf) functions, which are already used in classical statistical time series modeling and with the original IGMN models. The ARMA-CIGMN model was evaluated on known series and simulated data. The models used for comparison were the classical statistical ARIMA model and its variants, the original IGMN, and two modifications of the original IGMN: (i) a model similar to a classical ARMA (Autoregressive Moving Average) model and (ii) a model similar to the NOE (Nonlinear Output Error) model. A reformulated IGMN version based on a classical statistical approach, which is required for the ARMA-CIGMN model, is also presented.
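To make the autoregressive side of such a model concrete, the sketch below builds lagged (AR-style) inputs for a time series and uses the sample autocorrelation to flag significant lags (in practice the pacf is the usual guide for AR order; the acf is shown here only for brevity). Ordinary least squares stands in for the CIGMN regression of the thesis; all data and thresholds are synthetic assumptions.

import numpy as np

# Illustrative sketch only: lagged inputs for one-step-ahead prediction,
# with least squares standing in for the CIGMN regression.
rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(2, n):                        # synthetic AR(2) series
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.5)

def acf(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# Keep lags whose autocorrelation is outside an approximate 95% band.
max_lag = 10
band = 1.96 / np.sqrt(n)
lags = [k + 1 for k, r in enumerate(acf(y, max_lag)) if abs(r) > band]
p = max(lags) if lags else 1

# Design matrix of lagged values: predict y[t] from y[t-1..t-p].
X = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
target = y[p:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print("chosen order:", p, "AR coefficients:", np.round(coef, 2))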
APA, Harvard, Vancouver, ISO, and other styles
3

Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.

Full text of the source
Abstract:
We are interested in the problem of continual learning of artificial neural networks in the case where the data are available for only one class at a time. To address the problem of catastrophic forgetting that restricts learning performance under these conditions, we propose an approach based on representing the data of a class by a normal distribution. The transformations associated with these representations are performed using invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features. In this setting, predicting the class of a sample corresponds to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it no longer needs to be updated, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments performed on various datasets and show that our approach performs favorably compared to the state of the art. Subsequently, we propose to optimize our approach by reducing its memory footprint through factorization of the network parameters. It is then possible to significantly reduce the storage cost of these networks with a limited loss of performance. Finally, we also study strategies to produce efficient feature extractor models for continual learning and show their relevance compared to the networks traditionally used for continual learning.
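A minimal sketch of the one-model-per-class idea, under simplifying assumptions: instead of the invertible networks used in the thesis, each class is represented here by a single diagonal Gaussian fitted to its own data, and prediction picks the class whose model gives the highest log-likelihood. The key property illustrated is that adding a new class never touches the existing models. Class names and data are purely illustrative.

import numpy as np

# Sketch: class-incremental classification with one density model per class.
# A diagonal Gaussian stands in for the invertible network of the thesis.
class GaussianClassModel:
    def fit(self, x):
        self.mean = x.mean(axis=0)
        self.var = x.var(axis=0) + 1e-6
        return self

    def log_likelihood(self, x):
        return -0.5 * np.sum(np.log(2 * np.pi * self.var)
                             + (x - self.mean) ** 2 / self.var, axis=1)

models = {}                                   # one independent model per class

def add_class(label, data):
    """Train a new class without updating (or even reading) the old ones."""
    models[label] = GaussianClassModel().fit(data)

def predict(x):
    labels = list(models)
    scores = np.stack([models[c].log_likelihood(x) for c in labels])
    return [labels[i] for i in np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
add_class("a", rng.normal(0.0, 1.0, size=(200, 3)))
add_class("b", rng.normal(3.0, 1.0, size=(200, 3)))   # arrives later, class by class
print(predict(rng.normal(3.0, 1.0, size=(5, 3))))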
APA, Harvard, Vancouver, ISO, and other styles
4

Ronco, Eric. "Incremental polynomial controller networks: two self-organising non-linear controllers." Thesis, 1997. http://hdl.handle.net/1905/181.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Buttar, Sarpreet Singh. "Applying Artificial Neural Networks to Reduce the Adaptation Space in Self-Adaptive Systems : an exploratory work." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-87117.

Full text of the source
Abstract:
Self-adaptive systems have limited time to adjust their configurations whenever their adaptation goals, i.e., quality requirements, are violated due to some runtime uncertainties. Within the available time, they need to analyze their adaptation space, i.e., a set of configurations, to find the best adaptation option, i.e., configuration, that can achieve their adaptation goals. Existing formal analysis approaches find the best adaptation option by analyzing the entire adaptation space. However, exhaustive analysis requires time and resources and is therefore only efficient when the adaptation space is small. The size of the adaptation space is often in hundreds or thousands, which makes formal analysis approaches inefficient in large-scale self-adaptive systems. In this thesis, we tackle this problem by presenting an online learning approach that enables formal analysis approaches to analyze large adaptation spaces efficiently. The approach integrates with the standard feedback loop and reduces the adaptation space to a subset of adaptation options that are relevant to the current runtime uncertainties. The subset is then analyzed by the formal analysis approaches, which allows them to complete the analysis faster and efficiently within the available time. We evaluate our approach on two different instances of an Internet of Things application. The evaluation shows that our approach dramatically reduces the adaptation space and analysis time without compromising the adaptation goals.
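The general idea of the thesis, learning online which adaptation options are worth analysing so that the expensive formal analysis only sees a small subset, can be sketched as follows. This is not the thesis's actual architecture; the linear relevance scorer, the feature encoding, and all names (`reduced_adaptation_space`, `keep`, `lr`) are illustrative assumptions.

import numpy as np

# Sketch: learn online which adaptation options are relevant to the current
# runtime uncertainties, and hand only that subset to exhaustive analysis.
rng = np.random.default_rng(0)
n_options, n_features = 1000, 8
option_features = rng.random((n_options, n_features))   # one row per configuration

weights = np.zeros(n_features)                           # online linear scorer

def predict_relevance(uncertainty):
    return (option_features * uncertainty) @ weights     # higher = more promising

def update(uncertainty, chosen_idx, observed_quality, lr=0.05):
    """Online update from the quality actually observed after adapting."""
    global weights
    x = option_features[chosen_idx] * uncertainty
    weights = weights + lr * (observed_quality - x @ weights) * x

def reduced_adaptation_space(uncertainty, keep=50):
    scores = predict_relevance(uncertainty)
    return np.argsort(scores)[-keep:]                    # subset for formal analysis

# One feedback-loop cycle: reduce, analyse the subset exhaustively, adapt, learn.
uncertainty = rng.random(n_features)                     # current runtime conditions
subset = reduced_adaptation_space(uncertainty)
best = subset[0]                                         # stand-in for formal analysis
update(uncertainty, best, observed_quality=0.9)
print(len(subset), "of", n_options, "options analysed")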
APA, Harvard, Vancouver, ISO, and other styles
6

Monica, Riccardo. "Deep Incremental Learning for Object Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12331/.

Full text of the source
Abstract:
In recent years, deep learning techniques have received great attention in the field of information technology. These techniques have proved particularly useful and effective in domains such as natural language processing, speech recognition and computer vision, and in several real-world applications deep learning approaches have improved the state of the art. In the field of machine learning, deep learning was a real revolution, and a number of effective techniques have been proposed for supervised, unsupervised and representation learning. This thesis focuses on deep learning for object recognition, and in particular it addresses incremental learning techniques. By incremental learning we denote approaches able to create an initial model from a small training set and to improve the model as new data become available. Using temporally coherent sequences has proved useful for incremental learning, since temporal coherence also allows operating in an unsupervised manner. A critical issue in incremental learning is forgetting, i.e., the risk of losing previously learned patterns as new data are presented. The first chapters of this work introduce the basic theory of neural networks, Convolutional Neural Networks (CNNs) and incremental learning. The CNN is today one of the most effective approaches for supervised object recognition; it is well accepted by the scientific community and widely used by large ICT players such as Google and Facebook, with notable applications in Facebook face recognition and Google image search. The scientific community has several large datasets (e.g., ImageNet) for the development and evaluation of object recognition approaches, but very few temporally coherent datasets are available to study incremental approaches. For this reason we collected a new dataset named TCD4R (Temporal Coherent Dataset For Robotics).
APA, Harvard, Vancouver, ISO, and other styles
7

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Full text of the source
Abstract:
This thesis’ original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
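The first combination described above, a mixture-model function approximator feeding linear Q-learning, can be sketched in a toy setting. The code below is not the FIGMN algorithm: fixed radial-basis features stand in for the incrementally learned Gaussian components, and the one-dimensional "walk to the right end" task, step sizes, and learning rates are invented for illustration only.

import numpy as np

# Sketch of linear Q-learning over Gaussian features of the state space.
# Fixed RBF centres stand in for the incrementally learned FIGMN here.
rng = np.random.default_rng(0)
centres = np.linspace(0.0, 1.0, 10)
width = 0.1
n_actions = 2                                   # 0: step left, 1: step right

def features(state):
    return np.exp(-0.5 * ((state - centres) / width) ** 2)

W = np.zeros((n_actions, centres.size))         # linear Q-value weights

def q_values(state):
    return W @ features(state)

gamma, alpha, eps = 0.95, 0.1, 0.1
for episode in range(200):
    s = 0.0
    for step in range(100):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q_values(s)))
        s_next = np.clip(s + (0.05 if a == 1 else -0.05), 0.0, 1.0)
        reward, done = (1.0, True) if s_next >= 1.0 else (-0.01, False)
        target = reward + (0.0 if done else gamma * np.max(q_values(s_next)))
        W[a] += alpha * (target - q_values(s)[a]) * features(s)   # TD(0) update
        s = s_next
        if done:
            break
print("greedy action at s=0.5:", int(np.argmax(q_values(0.5))))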
APA, Harvard, Vancouver, ISO, and other styles
8

Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.

Full text of the source
Abstract:
This work introduces novel neural networks algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with locally weighted regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), exponential moving average of reconstructed input vector (Merge IGMN) and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive and have temporal capabilities, and therefore are suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data-flows, and that require life-long learning while operating and giving predictions without separated stages. The proposed algorithms are compared to other spatio-temporal neural networks in 8 time-series prediction tasks. Two of them show satisfactory performances, generally improving upon existing approaches. A general enhancement for the IGMN algorithm is also described, eliminating one of the algorithm’s manually tunable parameters and giving better results.
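The simplest of the four extensions, time-delay lines, can be shown generically. The sketch below turns a scalar stream into delay-embedded input vectors so that any static regressor can be used for one-step-ahead prediction; ordinary least squares stands in for the IGMN regression, and the sine series and embedding order are illustrative assumptions, not the thesis's benchmarks.

import numpy as np

# Sketch of the time-delay idea: delay-embed a stream so a static regressor
# (IGMN in the thesis; least squares here) can predict the next value.
def delay_embed(series, order):
    """Rows are [x_t, x_{t-1}, ..., x_{t-order+1}]; targets are x_{t+1}."""
    rows = [series[t - order + 1:t + 1][::-1] for t in range(order - 1, len(series) - 1)]
    targets = series[order:]
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(0)
t = np.arange(600)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

X, y = delay_embed(series, order=5)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # stand-in for the IGMN regression
pred = X[-1] @ coef
print("next-value prediction:", round(float(pred), 3), "true:", round(float(series[-1]), 3))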
APA, Harvard, Vancouver, ISO, and other styles
9

Glöde, Isabella. "Autonomous control of a mobile robot with incremental deep learning neural networks." Master's thesis, Pontificia Universidad Católica del Perú, 2021. http://hdl.handle.net/20.500.12404/18676.

Full text of the source
Abstract:
Over the last few years autonomous driving has had an increasingly strong impact on the automotive industry. This has created a growing need for artificial intelligence algorithms that allow computers to make human-like decisions. However, a compromise between the computational power drawn by these algorithms and their resulting performance must be found to fulfil production requirements. In this thesis, incremental deep learning strategies are used for the control of a mobile robot such as a four-wheel-steering vehicle. This strategy is similar to the way humans learn: in many small steps, the vehicle learns to achieve a specific goal. The use of incremental training leads to a growing knowledge base within the system. It also provides the opportunity to reuse earlier training achievements to improve the system when more training data become available. To demonstrate the capabilities of such an algorithm, two different models have been formulated: first, a simpler model with counter wheel steering, and second, a more complex, nonlinear model with independent steering. These two models are trained incrementally to follow different types of trajectories, for which an algorithm was established to generate useful initial points. The incremental steps allow the robot to be positioned further and further away from the desired trajectory in the environment. The effects of different trajectory types on model behaviour are then investigated in over one thousand simulation runs, for which path planning for straight lines and circles is introduced. This work demonstrates that even simulations with simple network structures can achieve high performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Thuv, Øyvin Halfdan. "Incrementally Evolving a Dynamic Neural Network for Tactile-Olfactory Insect Navigation." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8834.

Full text of the source
Abstract:

This Master's thesis gives a thorough description of a study carried out in the Self-Organizing Systems group at NTNU. Much AI research in recent years has moved towards increased use of representationless strategies such as simulated neural networks. One technique for creating such networks is to evolve them using simulated Darwinian evolution. This is a powerful technique, but it is often limited by the computing resources available. One way to speed up evolution is to focus the evolutionary search on a narrower range of solutions. It is, for example, possible to favor the evolution of a specific "species" by initializing the search with a specialized set of genes. A disadvantage of doing this is, of course, that many other solutions (or "species") are disregarded, so that good solutions may in theory be lost. It is therefore necessary to find focusing strategies that are generally applicable and that (with high probability) only disregard solutions considered unimportant. Three different ways of focusing evolutionary search for cognitive behaviours are merged and evaluated in this thesis. On a macro level, incremental evolution is applied to partition the evolutionary search. On a micro level, specific properties of the chosen neural network model (CTRNNs) are exploited: seeding initial populations with center-crossing neural networks and/or bifurcative neurons. The techniques are compared to standard, naive evolutionary searches by applying them to the evolution of simulated neural networks for the walking and control of a six-legged mobile robot, a problem simple enough to be satisfactorily understood, but complex enough to challenge a traditional evolutionary search.
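For readers unfamiliar with the network model, the sketch below shows the standard CTRNN dynamics under Euler integration, together with a "center-crossing" initialization in which each bias is set to minus half the sum of its incoming weights so that every neuron's operating range is centred. This is a generic illustration of the concepts named in the abstract, not the evolutionary setup of the thesis; the weight ranges, time constants, and network size are assumptions.

import numpy as np

# Sketch of CTRNN dynamics with a center-crossing bias initialization.
rng = np.random.default_rng(0)
n = 6                                        # e.g. one neuron per leg controller
W = rng.uniform(-5, 5, size=(n, n))          # weights W[i, j]: from neuron j to i
tau = rng.uniform(0.5, 2.0, size=n)          # time constants
theta = -W.sum(axis=1) / 2.0                 # center-crossing biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(y, external_input, dt=0.01):
    """Euler step of tau_i dy_i/dt = -y_i + sum_j W[i,j] sigma(y_j + theta_j) + I_i."""
    dydt = (-y + W @ sigmoid(y + theta) + external_input) / tau
    return y + dt * dydt

y = np.zeros(n)
for _ in range(1000):
    y = step(y, external_input=np.zeros(n))
print(np.round(sigmoid(y + theta), 3))       # neuron outputs after settling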

APA, Harvard, Vancouver, ISO, and other styles
11

Johansson, Philip. "Incremental Learning of Deep Convolutional Neural Networks for Tumour Classification in Pathology Images." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158225.

Full text of the source
Abstract:
Understaffing of medical doctors is becoming a pressing problem in many healthcare systems. This problem can be alleviated by utilising Computer-Aided Diagnosis (CAD) systems to substitute for doctors in different tasks, for instance histopathological image classification. The recent surge of deep learning has allowed CAD systems to perform this task with very competitive performance. However, a major challenge with this task is the need to periodically update the models with new data and/or new classes or diseases. These periodic updates result in catastrophic forgetting, as Convolutional Neural Networks typically require the entire data set beforehand and tend to lose knowledge about old data when trained on new data. Incremental learning methods were proposed to alleviate this problem in deep learning. In this thesis, two incremental learning methods, Learning without Forgetting (LwF) and a generative rehearsal-based method, are investigated. They are evaluated on two criteria: first, the capability of incrementally adding new classes to a pre-trained model, and second, the ability to update the current model with a new unbalanced data set. Experiments show that LwF does not retain knowledge properly in either case; further experiments are needed to draw any definite conclusions, for instance using another training approach for the classes and trying different combinations of losses. The generative rehearsal-based method, on the other hand, tends to work for one class, showing good potential if better-quality images were generated. Additional experiments are also required to investigate new architectures and approaches for more stable training.
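As background on the first method, Learning without Forgetting combines the ordinary loss on the new classes with a knowledge-distillation term that keeps the updated model's outputs on the old classes close to the recorded outputs of the frozen old model. The NumPy sketch below shows only that loss computation; the temperature T=2 and the weighting `lam` are common illustrative choices, and this is not the thesis's training code.

import numpy as np

# Sketch of the Learning-without-Forgetting objective: cross-entropy on the new
# classes plus distillation of the old-class outputs toward the frozen model.
def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def lwf_loss(new_logits_old_classes, old_logits_old_classes,
             new_logits_new_classes, new_labels, T=2.0, lam=1.0):
    # Distillation on old classes (temperature-scaled soft targets).
    p_old = softmax(old_logits_old_classes, T)            # recorded before training
    p_new = softmax(new_logits_old_classes, T)
    distill = -np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=1))
    # Ordinary cross-entropy on the new classes.
    p = softmax(new_logits_new_classes)
    ce = -np.mean(np.log(p[np.arange(len(new_labels)), new_labels] + 1e-12))
    return ce + lam * distill

rng = np.random.default_rng(0)
batch, n_old, n_new = 8, 5, 3
loss = lwf_loss(rng.normal(size=(batch, n_old)), rng.normal(size=(batch, n_old)),
                rng.normal(size=(batch, n_new)), rng.integers(n_new, size=batch))
print(round(float(loss), 3))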
APA, Harvard, Vancouver, ISO, and other styles
12

Soares, Sérgio Aurélio Ferreira. "Spatial interpolation and geostatistic simulation with the incremental Gaussian mixture network." Repositório Institucional da UFSC, 2016. https://repositorio.ufsc.br/xmlui/handle/123456789/178581.

Full text of the source
Abstract:
Master's thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2016.
Abstract : Geostatistics aggregates a set of tools designed to deal with spatially correlated data. Two significant problems that Geostatistics tackles are the spatial interpolation and geostatistical simulation. Kriging and Sequential Gaussian Simulation (SGS) are two examples of traditional geostatistical tools used for these kinds of problems. These methods perform well when the provided Variogram is well modeled. The problem is that modeling the Variogram requires expert knowledge and a certain familiarity with the dataset. This complexity might make Geostatistics tools the last choice of a non-expert. On the other hand, an important feature present in neural networks is their ability to learn from data, even when the user does not have much information about the particular dataset. However, traditional models, such as Multilayer Perceptron (MLP), do not perform well in spatial interpolation problems due to their difficulty in accurately modeling the spatial correlation between samples. With this motivation in mind, we adapted the Incremental Gaussian Mixture Network (IGMN) model for spatial interpolation and geostatistical simulation applications. The three most important contributions of this work are: 1. An improvement in the IGMN estimation process for spatial interpolation problems with sparse datasets; 2. An algorithm to perform Sequential Gaussian Simulation using IGMN instead of Kriging; 3. An algorithm that mixes the Direct Sampling (DS) method and IGMN for cluster-based Multiple Point Simulation (MPS) with training images. Results show that our approach outperforms MLP and the original IGMN in spatial interpolation problems, especially in anisotropic and sparse datasets (in terms of RMSE and CC). Also, our algorithm for sequential simulation using IGMN instead of Kriging can generate equally probable realizations of the defined simulation grid for unconditioned simulations. Finally, our algorithm that mixes the DS method and IGMN can produce better quality simulations and runs much faster than the original DS. To the best of our knowledge, this is the first time a Neural Network model is specialized for spatial interpolation applications and can perform a geostatistical simulation.

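The overall shape of a sequential simulation, the second contribution above, can be sketched generically: visit grid nodes along a random path, estimate a local conditional mean and variance from nearby conditioning data, draw a value, and add it to the conditioning set. In the sketch below a crude inverse-distance estimator stands in for the IGMN (or kriging) of the thesis, the start is unconditional, and all grid sizes and constants are assumptions.

import numpy as np

# Sketch of a sequential (Gaussian-style) simulation loop with a generic
# local estimator; not the thesis's IGMN-based algorithm.
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)
known_xy = np.empty((0, 2))
known_val = np.empty(0)

def local_estimate(p, k=8):
    if known_xy.shape[0] == 0:
        return 0.0, 1.0                       # prior mean and variance
    d = np.linalg.norm(known_xy - p, axis=1) + 1e-9
    idx = np.argsort(d)[:k]
    w = 1.0 / d[idx]
    w = w / w.sum()
    mean = np.dot(w, known_val[idx])
    var = max(np.dot(w, (known_val[idx] - mean) ** 2), 1e-3)
    return mean, var

simulated = np.zeros(len(grid))
for node in rng.permutation(len(grid)):       # random path over the grid
    m, v = local_estimate(grid[node])
    value = rng.normal(m, np.sqrt(v))         # draw from the local distribution
    simulated[node] = value
    known_xy = np.vstack([known_xy, grid[node][None, :]])
    known_val = np.append(known_val, value)
print(simulated[:5])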
APA, Harvard, Vancouver, ISO, and other styles
13

Heinen, Milton Roberto. "A connectionist approach for incremental function approximation and on-line tasks." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/29015.

Full text of the source
Abstract:
This work proposes IGMN (standing for Incremental Gaussian Mixture Network), a new connectionist approach for incremental function approximation and real-time tasks. It is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with some unique features that are not present in most ANN models such as MLP, RBF and GRNN. Moreover, IGMN is based on strong statistical principles (Gaussian mixture models) and asymptotically converges to the optimal regression surface as more training data arrive. The main advantages of IGMN over other ANN models are: (i) IGMN learns incrementally using a single scan over the training data (each training pattern can be immediately used and discarded); (ii) it can produce reasonable estimates based on few training data; (iii) the learning process can proceed perpetually as new training data arrive (there are no separate phases for learning and recalling); (iv) IGMN can handle the stability-plasticity dilemma and does not suffer from catastrophic interference; (v) the neural network topology is defined automatically and incrementally (new units are added whenever necessary); (vi) IGMN is not sensitive to initialization conditions (in fact there is no random initialization/decision in IGMN); (vii) the same neural network can be used to solve both forward and inverse problems (the information flow is bidirectional), even in regions where the target data are multi-valued; and (viii) IGMN can provide the confidence levels of its estimates. Another relevant contribution of this thesis is the use of IGMN in some important state-of-the-art machine learning and robotic tasks, such as model identification, incremental concept formation, reinforcement learning, robotic mapping and time series prediction. In fact, the efficiency of IGMN and its representational power expand the set of potential tasks in which neural networks can be applied, thus opening new research directions in which important contributions can be made. Through several experiments using the proposed model it is demonstrated that IGMN is also robust to overfitting, does not require fine-tuning of its configuration parameters and has very good computational performance, thus allowing its use in real-time control applications. Therefore, IGMN is a very useful machine learning tool for incremental function approximation and on-line prediction.
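To illustrate the single-scan, add-units-when-needed behaviour described above, the sketch below implements a deliberately simplified incremental Gaussian mixture: each data point either updates the closest component with incremental mean/variance formulas or, if it is too unlikely under all components, spawns a new one. Diagonal covariances, the novelty threshold, and the update rules are simplifying assumptions, not the published IGMN equations.

import numpy as np

# Simplified sketch in the spirit of an incremental Gaussian mixture.
class IncrementalGaussianMixture:
    def __init__(self, dim, novelty=9.0, init_var=1.0):
        self.novelty, self.init_var, self.dim = novelty, init_var, dim
        self.means, self.vars, self.counts = [], [], []

    def learn(self, x):
        if self.means:
            d2 = [np.sum((x - m) ** 2 / v) for m, v in zip(self.means, self.vars)]
            j = int(np.argmin(d2))
            if d2[j] < self.novelty:                       # update component j
                self.counts[j] += 1
                w = 1.0 / self.counts[j]
                delta = x - self.means[j]
                self.means[j] = self.means[j] + w * delta
                self.vars[j] = (1 - w) * self.vars[j] + w * delta ** 2
                return
        # otherwise create a new component centred on x
        self.means.append(x.astype(float))
        self.vars.append(np.full(self.dim, self.init_var))
        self.counts.append(1)

rng = np.random.default_rng(0)
mix = IncrementalGaussianMixture(dim=2)
for x in np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(6, 1, (300, 2))]):
    mix.learn(x)                                           # single pass, no replay
print("components:", len(mix.means))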
APA, Harvard, Vancouver, ISO, and other styles
14

Pereira, Renato de Pontes. "HIGMN: an IGMN-based hierarchical architecture and its applications for robotic tasks." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/80752.

Full text of the source
Abstract:
The recent field of Deep Learning has introduced to Machine Learning new methods based on distributed abstract representations of the training data throughout hierarchical structures. The hierarchical organization of layers allows these methods to store distributed information on sensory signals and to create concepts with different abstraction levels to represent the input data. This work investigates the impact of a hierarchical structure inspired by ideas from Deep Learning and based on the Incremental Gaussian Mixture Network (IGMN), a probabilistic neural network with on-line and incremental learning that is especially suitable for robotic tasks. As a result, a hierarchical architecture, called the Hierarchical Incremental Gaussian Mixture Network (HIGMN), was developed, which combines two levels of IGMNs. The HIGMN first-level layers are able to learn concepts from data of different domains, which are then related in the second-level layer. The proposed model was compared with the IGMN on robotic tasks, in particular the task of learning and reproducing a wall-following behavior, based on a Learning from Demonstration (LfD) approach. The experiments showed how the HIGMN can perform three different tasks in parallel (concept learning, behavior segmentation, and learning and reproducing behaviors) and demonstrated its ability to learn a wall-following behavior and to perform it in unknown environments with new sensory information. The HIGMN could reproduce the wall-following behavior after a single, simple, and short demonstration of the behavior. Moreover, it acquired different types of knowledge: information on the environment, the robot kinematics, and the target behavior.
APA, Harvard, Vancouver, ISO, and other styles
15

Cholet, Stéphane. "Evaluation automatique des états émotionnels et dépressifs : vers un système de prévention des risques psychosociaux." Thesis, Antilles, 2019. http://www.theses.fr/2019ANTI0388/document.

Full text of the source
Abstract:
Psychosocial risks are a major public health issue because of the disorders they can trigger: stress, mood swings, burn-out, etc. Although a proper diagnosis can only be made by a healthcare professional, Affective Computing can make a contribution by improving the understanding of these phenomena. Affective Computing is a multidisciplinary field involving concepts from Artificial Intelligence, psychology and psychiatry, among others. In this research, we are interested in two elements that can be subject to disorders: the emotional state and the depressive state of individuals. The concept of emotion covers a wide range of definitions and models, most of which are based on work in psychiatry or psychology. A famous example is Russell's circumplex, which defines an emotion as the combination of two affective dimensions, called valence and arousal. Valence denotes an individual's sad or joyful character, while arousal denotes their passive or active character. The automatic evaluation of emotional states has generated a significant revival of interest in the last decade. Methods from Artificial Intelligence make it possible to achieve interesting performance from data captured in a non-invasive manner, such as videos. However, one aspect has received little study: emotional intensities and the possibility of recognizing them. In this thesis, we have explored this aspect using visualization and classification methods to show that the use of emotional intensity classes, rather than continuous values, benefits both automatic recognition and the interpretation of states. The concept of depression has a stricter framing, as it is a recognized illness. It affects individuals regardless of age, gender or occupation, but varies in the intensity or nature of symptoms. For this reason, its study, both for detection and for monitoring, is of major interest for the prevention of psychosocial risks. However, its diagnosis is made difficult by the sometimes innocuous nature of the symptoms and by the often delicate process of consulting a specialist. Beck's scale and the associated score allow, by means of a questionnaire, the severity of an individual's depressive state to be evaluated. The system we have developed is able to automatically recognize an individual's depressive score from videos. It includes, on the one hand, a low-level visual spatio-temporal descriptor that quantifies micro and macro facial movements and, on the other hand, neural methods from the cognitive sciences. Its speed allows applications for real-time recognition of depressive states, and its performance is interesting with regard to the state of the art. The fusion of visual and auditory modalities has also been studied, showing that the use of these two sensory channels benefits the recognition of depressive states. Beyond performance and originality, one of the strong points of this thesis is the interpretability of the methods. Indeed, in a multidisciplinary context such as that of Affective Computing, improving knowledge and understanding of the studied phenomena is a key point that usual computer methods implemented as "black boxes" cannot handle.
APA, Harvard, Vancouver, ISO, and other styles
16

Börthas, Lovisa, and Sjölander Jessica Krange. "Machine Learning Based Prediction and Classification for Uplift Modeling." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266379.

Full text of the source
Abstract:
The desire to model the true gain from targeting an individual for marketing purposes has led to the common use of uplift modeling. Uplift modeling requires the existence of a treatment group as well as a control group, and the objective hence becomes estimating the difference between the success probabilities in the two groups. Efficient methods for estimating the probabilities in uplift models are statistical machine learning methods. In this project the uplift modeling approaches Subtraction of Two Models, Modeling Uplift Directly and the Class Variable Transformation are investigated. The statistical machine learning methods applied are Random Forests and Neural Networks, along with the standard method Logistic Regression. The data are collected from a well-established retail company, and the purpose of the project is thus to investigate which uplift modeling approach and statistical machine learning method yield the best performance given the data used in this project. The variable selection step was shown to be a crucial component in the modeling process, as was the amount of control data in each data set. For the uplift modeling to be successful, the method of choice should be either Modeling Uplift Directly using Random Forests, or the Class Variable Transformation using Logistic Regression. Neural network-based approaches are sensitive to uneven class distributions and were hence not able to obtain stable models given the data used in this project. Furthermore, the Subtraction of Two Models approach did not perform well, due to the fact that each model tended to focus too much on modeling the class in the two data sets separately instead of modeling the difference between the class probabilities. The conclusion is hence to use an approach that models the uplift directly, and also to use a large amount of control data in each data set.
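One of the approaches named above, the Class Variable Transformation, is compact enough to sketch: define z = 1 for treated responders and for control non-responders, and with a roughly 50/50 treatment/control split the uplift can be estimated as 2*P(z=1|x) - 1. The example below uses synthetic data and scikit-learn's logistic regression as the standard-method stand-in; the feature construction and true uplift are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the Class Variable Transformation (CVT) for uplift modelling.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
treated = rng.integers(2, size=n)                      # 50/50 assignment
base = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.2)))        # baseline response probability
lift = 0.15 * (X[:, 1] > 0)                            # true uplift for one subgroup
y = (rng.random(n) < base + treated * lift).astype(int)

z = y * treated + (1 - y) * (1 - treated)              # class variable transform
model = LogisticRegression().fit(X, z)
uplift_hat = 2 * model.predict_proba(X)[:, 1] - 1

print("mean predicted uplift, X1>0 :", round(float(uplift_hat[X[:, 1] > 0].mean()), 3))
print("mean predicted uplift, X1<=0:", round(float(uplift_hat[X[:, 1] <= 0].mean()), 3))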
APA, Harvard, Vancouver, ISO, and other styles
17

Hamid, Muhammed Hamed. "Hyperspectral Image Generation, Processing and Analysis." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5905.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Osório, Fernando Santos. "Inss : un système hybride neuro-symbolique pour l'apprentissage automatique constructif." Grenoble INPG, 1998. https://tel.archives-ouvertes.fr/tel-00004899.

Full text of the source
Abstract:
Various Artificial Intelligence methods have been developed to reproduce intelligent human behaviour. These methods allow some human reasoning processes to be reproduced using the available knowledge. Each method has its advantages, but also some drawbacks. Hybrid systems combine different approaches in order to take advantage of their respective strengths. These hybrid intelligent systems also have the ability to acquire new knowledge from different sources and thus to improve their application performance. This thesis presents our research in the field of hybrid neuro-symbolic systems, and in particular the study of machine learning tools used for constructive knowledge acquisition. We are interested in the automatic acquisition of theoretical knowledge (rules) and empirical knowledge (examples). We present a new hybrid system we implemented: INSS - Incremental Neuro-Symbolic System. This system allows knowledge transfer from the symbolic module to the connectionist module (Artificial Neural Network - ANN) through the compilation of symbolic rules into an ANN. The initial ANN knowledge can then be refined through neural learning using a set of examples. The incremental ANN learning method used, the Cascade-Correlation algorithm, allows us to change or add new knowledge to the network. The system can also extract modified (or new) symbolic rules from the ANN and validate them. INSS is a hybrid machine learning system that implements a constructive knowledge acquisition method. We conclude by showing the results we obtained with this system in different application domains: artificial ANN problems (The Monk's Problems), computer-aided medical diagnosis (toxic comas), a cognitive modelling task (The Balance Scale Problem) and autonomous robot control. The results show the improved performance of INSS and its advantages over other hybrid neuro-symbolic systems.
19

Besedin, Andrey. "Continual forgetting-free deep learning from high-dimensional data streams." Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1263.

Full text of the source
Abstract:
In this thesis, we propose a new deep-learning-based approach for the classification of high-dimensional data streams. In recent years, neural networks have become the primary building block of state-of-the-art methods in various machine learning problems. Most neural-network-based methods, however, are designed to solve static learning problems, where all data are available at once at training time. Performing deep learning online is a difficult task. The main difficulty is that neural-network classifiers usually rely on the assumption that the sequence of data batches used during training is stationary, or in other words, that the class distribution is the same for all batches (the i.i.d. assumption). When this assumption does not hold, neural networks tend to forget the concepts that are temporarily unavailable in the stream; in the literature, this phenomenon is known as catastrophic forgetting. The approaches proposed in this thesis aim to guarantee the i.i.d. nature of each batch coming from the stream and to compensate for the lack of historical data. To do this, we train generative and pseudo-generative models capable of producing synthetic samples from the classes that are absent or under-represented in the stream, and complete the stream's batches with these samples. We test our approaches in an incremental learning scenario and in a specific type of continual learning. They perform classification on dynamic data streams with an accuracy close to that obtained in the static setting, where all data are available for the whole duration of training. In addition, we demonstrate the ability of our methods to adapt to previously unseen data classes and to new instances of already known categories, while avoiding forgetting the previously acquired knowledge.
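The batch-completion mechanism described above can be sketched as follows; the per-class "generators" here are Gaussian stand-ins rather than the trained generative models of the thesis, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generators": one Gaussian sampler per already-seen class.
# In the thesis these would be trained generative models; here they are assumptions.
generators = {
    0: lambda n: rng.normal(loc=-2.0, scale=1.0, size=(n, 16)),
    1: lambda n: rng.normal(loc=+2.0, scale=1.0, size=(n, 16)),
}

def complete_batch(x_batch, y_batch, known_classes, per_class):
    """Append synthetic samples for classes missing from the incoming stream batch."""
    xs, ys = [x_batch], [y_batch]
    for c in known_classes:
        if c not in set(y_batch.tolist()):
            xs.append(generators[c](per_class))
            ys.append(np.full(per_class, c))
    return np.vstack(xs), np.concatenate(ys)

# A stream batch containing only class 1 gets completed with replayed class-0 samples.
x_new = rng.normal(2.0, 1.0, size=(32, 16))
y_new = np.ones(32, dtype=int)
x_full, y_full = complete_batch(x_new, y_new, known_classes=[0, 1], per_class=32)
print(x_full.shape, np.bincount(y_full))   # (64, 16) [32 32]
```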
20

Jiech-Chyn, Wu, and 吳戒秦. "Surface Data Compression by Incremental Approximation of RBF Neural Network." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/85819174739885685144.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
84 (ROC academic year)
The measurement of a three-dimensional surface by scanning usually generates a huge amount of data that occupies a very large amount of storage. Instead of saving the dimensional data directly, this thesis proposes to approximate the surface by learning the measured data with a neural network and then simply saving the weights of the network. This approach is found to achieve approximation, smoothing and, more importantly, compression of the surface data simultaneously. The Radial Basis Function (RBF) neural network is chosen as the approximator because of its accuracy. The learning algorithm is developed on the basis of the Recursive Least Squares Method (RLSM) so that incremental learning of the surface can be achieved. The capability of incremental learning allows the approximation to improve as new measurements of the surface are added. Since the accuracy of approximating a surface with a network of fixed numbers of layers and nodes is closely related to the complexity, or order, of the target surface, an adaptive segmentation algorithm is developed to split the surface data into segments, each with a complexity suitable for approximation. Given a desired approximation accuracy, it is shown that the RBF-network-based approximator can achieve a large data-compression ratio. Several numerical examples are demonstrated.
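A minimal sketch of the core idea, an RBF approximator whose output weights are updated by recursive least squares as new surface measurements arrive, is given below; the choice of centres, widths and the test surface are assumptions, not the thesis's adaptive segmentation scheme:

```python
import numpy as np

# Sketch of incremental RBF surface approximation with recursive least squares (RLS).
# Centres and widths are fixed assumptions, not the thesis's method for choosing them.
rng = np.random.default_rng(1)
centers = rng.uniform(0, 1, size=(25, 2))        # RBF centres on the (x, y) plane
width = 0.15

def phi(p):
    d2 = ((centers - p) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))

n_basis = centers.shape[0]
w = np.zeros(n_basis)
P = np.eye(n_basis) * 1e3                        # RLS covariance, large initial value

def rls_update(p, z, lam=1.0):
    """Fold one new surface measurement z = f(p) into the RBF output weights."""
    global w, P
    f = phi(p)
    k = P @ f / (lam + f @ P @ f)
    w = w + k * (z - f @ w)
    P = (P - np.outer(k, f @ P)) / lam

surface = lambda p: np.sin(3 * p[0]) * np.cos(3 * p[1])     # example target surface
for _ in range(2000):
    p = rng.uniform(0, 1, size=2)
    rls_update(p, surface(p))

test = rng.uniform(0, 1, size=(5, 2))
print([round(float(phi(t) @ w - surface(t)), 3) for t in test])   # errors at test points
```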
21

Іванова, Є. В. "Самоорганізовна інкрементна нейронна мережа для кластерування масивів даних". Thesis, 2018. https://openarchive.nure.ua/handle/document/20022.

Full text of the source
Abstract:
This paper presents a self-organizing incremental neural network for multidimensional data sets. It is proposed to accomplish the clustering task and to represent the topological structure of the input data. The self-organizing incremental neural network finds typical prototypes for large-scale data sets and is robust to noise.
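The node-insertion rule at the heart of such a network can be sketched as follows; this is a heavily simplified toy version with a fixed similarity threshold, omitting SOINN's adaptive thresholds, edge aging, denoising and two-layer structure:

```python
import numpy as np

# Heavily simplified SOINN-like insertion rule: grow a new prototype when an input lies
# outside the similarity threshold of its two nearest nodes, otherwise adapt the winner.
class TinySOINN:
    def __init__(self, threshold=0.5):
        self.nodes = []           # prototype vectors
        self.wins = []            # win counts, used to shrink the learning rate
        self.threshold = threshold

    def fit_one(self, x):
        if len(self.nodes) < 2:
            self.nodes.append(x.copy()); self.wins.append(1)
            return
        d = [np.linalg.norm(x - n) for n in self.nodes]
        i1, i2 = np.argsort(d)[:2]
        if d[i1] > self.threshold and d[i2] > self.threshold:
            self.nodes.append(x.copy()); self.wins.append(1)           # new cluster seed
        else:
            self.wins[i1] += 1
            self.nodes[i1] += (x - self.nodes[i1]) / self.wins[i1]     # move the winner

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(c, 0.1, size=(200, 2)) for c in ((0, 0), (2, 2), (0, 3))])
rng.shuffle(data)
net = TinySOINN(threshold=0.6)
for x in data:
    net.fit_one(x)
print(len(net.nodes), "prototypes found")
```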
22

Van, der Stockt Stefan Aloysius Gert. "A generic neural network framework using design patterns." Diss., 2008. http://hdl.handle.net/2263/27614.

Full text of the source
Abstract:
Designing object-oriented software is hard, and designing reusable object-oriented software is even harder. This task is even more daunting for a developer of computational intelligence applications, as optimising one design objective tends to make others inefficient or even impossible. Classic examples in computer science include 'storage vs. time' and 'simplicity vs. flexibility.' Neural network requirements are by their very nature very tightly coupled: a required design change in one area of an existing application tends to have severe effects in other areas, making the change impossible or inefficient. Often this situation leads to a major redesign of the system and, in many cases, a completely rewritten application. Many commercial and open-source packages do exist, but these cannot always be extended to support input from other fields of computational intelligence, either for proprietary reasons or because they fail to take all design requirements into consideration. Design patterns make a science out of writing software that is modular, extensible and efficient, as well as easy to read and understand. The essence of a design pattern is to avoid repeatedly solving the same design problem from scratch by reusing a solution that solves the core problem. This pattern, or template for the solution, has well-understood prerequisites, structure, properties, behaviour and consequences. CILib is a framework that allows developers to develop new computational intelligence applications quickly and efficiently. Flexibility, reusability and clear separation between components are maximised through the use of design patterns. Reliability is also ensured, as the framework is open source and thus has many people who collaborate to ensure that the framework is well designed and error-free. This dissertation discusses the design and implementation of a generic neural network framework that allows users to design, implement and use any possible neural network models and algorithms in such a way that they can reuse, and be reused by, any other computational intelligence algorithm in the rest of the framework, or any external applications. This is achieved by using object-oriented design patterns in the design of the framework.
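As a small illustration of the design-pattern approach (not CILib's actual API, and in Python rather than the framework's own language), the strategy pattern lets an activation function be swapped without modifying the neuron class:

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import math

# Illustrative strategy-pattern sketch: the neuron delegates its activation to an
# interchangeable strategy object, so new activations can be added without touching
# the Neuron class itself. All names here are assumptions, not CILib identifiers.
@dataclass
class Activation:
    forward: Callable[[float], float]

sigmoid = Activation(lambda z: 1.0 / (1.0 + math.exp(-z)))
relu = Activation(lambda z: max(0.0, z))

@dataclass
class Neuron:
    weights: Sequence[float]
    bias: float
    activation: Activation

    def output(self, inputs: Sequence[float]) -> float:
        z = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return self.activation.forward(z)

n = Neuron(weights=[0.5, -0.25], bias=0.1, activation=sigmoid)
print(n.output([1.0, 2.0]))
n.activation = relu                 # swap the strategy without changing Neuron
print(n.output([1.0, 2.0]))
```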
Dissertation (MSc)--University of Pretoria, 2008.
Computer Science
unrestricted
23

De, Wet Anton Petrus Christiaan. "An incremental learning system for artificial neural networks." Thesis, 2014. http://hdl.handle.net/10210/12024.

Full text of the source
Abstract:
M.Ing. (Electrical and Electronic Engineering)
This dissertation describes the development of a system of Artificial Neural Networks that enables the incremental training of feedforward neural networks using supervised training algorithms such as backpropagation. It is argued that incremental learning is fundamental to the adaptive learning behaviour observed in human intelligence and constitutes an imperative step towards artificial cognition. The importance of developing incremental learning as a system of ANNs is stressed before the complete system is presented. Details of the development and implementation of the system are complemented by the description of two case studies. In conclusion, the role of the incremental learning system as a basis for further development of fundamental elements of cognition is projected.
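A minimal sketch of the underlying idea, resuming supervised backpropagation training of the same feedforward network whenever a new chunk of labelled data arrives, is shown below; the architecture and toy data are assumptions, not the dissertation's system:

```python
import numpy as np

# Minimal incremental supervised training sketch: the same feedforward network is
# updated whenever a new chunk of labelled data arrives, instead of being retrained
# from scratch. Architecture, learning rate and data are illustrative assumptions.
rng = np.random.default_rng(3)
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))

def train_increment(X, y, epochs=400, lr=0.5):
    """Continue backpropagation training on one newly arrived data chunk."""
    global W1, b1, W2, b2
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        H, p = forward(X)
        g_out = (p - y) / len(X)                 # cross-entropy gradient w.r.t. output logits
        gW2, gb2 = H.T @ g_out, g_out.sum(0)
        g_h = (g_out @ W2.T) * (1 - H ** 2)      # backprop through the tanh hidden layer
        gW1, gb1 = X.T @ g_h, g_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Two successive data chunks of a toy XOR-like problem arrive over time.
X_all = rng.uniform(-1, 1, (400, 2))
y_all = (X_all[:, 0] * X_all[:, 1] > 0).astype(float)
train_increment(X_all[:200], y_all[:200])
train_increment(X_all[200:], y_all[200:])        # incremental update on the new chunk
_, p = forward(X_all)
print("accuracy:", ((p.ravel() > 0.5) == y_all).mean())
```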
24

Wu, Yi-Lian, and 吳易璉. "Integrated the Validation Incremental Neural Networks and Radial-Basis Function Neural Networks for Segmenting Prostate." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/92540595174319775086.

Full text of the source
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Computer Science and Information Engineering
97 (ROC academic year)
Recently, transrectal ultrasonography (TRUS) imaging has been widely used to diagnose prostate disease. Before a physician can diagnose prostate lesions, the contour of the prostate in TRUS images must be outlined manually. However, manual segmentation is time-consuming and inefficient, so automatic segmentation of the prostate in TRUS images is necessary. Among segmentation methods, the active contour model (ACM) is a successful contour-detection method, but its shortcoming is that the initial contour must be determined manually. In this paper, an automatic neural-network-based prostate segmentation method for TRUS images is therefore proposed, which omits the complicated step of determining the initial contour. The proposed system combines a Validation Incremental Neural Network with Radial-Basis Function Neural Networks for prostate segmentation. Experimental results show that the proposed method achieves higher accuracy than the Active Contour Model (ACM).
25

Obenauff, Alexander. "A progressive learning method for classification of manufacturing errors based on machine data." Master's thesis, 2019. http://hdl.handle.net/10362/76579.

Full text of the source
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
Manufacturing companies face significant market pressure in today's globalised world. Fierce global competition and product individualisation mean that production systems require continuous optimisation, so automation, flexibility and efficiency have all become vital for manufacturers. In this paper, a method based on incremental classification of manufacturing errors is presented. The analysis and classification focus on binary data collected in real time from a machine control unit during manufacturing operation. Several methods that can learn from data incrementally and autonomously are applied. Training starts with the least amount of data possible, and other important steps, such as data preprocessing, are reviewed from the perspective of incremental learning.
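A hedged sketch of such incremental classification, using scikit-learn's partial_fit interface on binary machine-signal vectors, is given below; the signal layout, the toy error condition and the chunk sizes are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Sketch of incremental error classification on binary machine-control signals using
# scikit-learn's partial_fit interface. The signal layout and error labels are
# illustrative assumptions, not the thesis's actual data.
rng = np.random.default_rng(4)

def make_chunk(n=100, n_signals=32):
    X = rng.integers(0, 2, size=(n, n_signals)).astype(float)   # binary control signals
    y = X[:, 3].astype(int) & (1 - X[:, 17].astype(int))        # toy error condition
    return X, y

classes = np.array([0, 1])
clf = SGDClassifier(loss="log_loss", random_state=0)

# Training starts with a small first chunk and continues as new data arrives.
X0, y0 = make_chunk(n=50)
clf.partial_fit(X0, y0, classes=classes)
for _ in range(10):
    Xi, yi = make_chunk(n=100)
    clf.partial_fit(Xi, yi)                    # incremental update, no full retraining

X_test, y_test = make_chunk(n=500)
print("held-out accuracy:", clf.score(X_test, y_test))
```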
26

"Incremental Learning With Sample Generation From Pretrained Networks." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57207.

Full text of the source
Abstract:
In the last decade, deep-learning-based models have revolutionized machine learning and computer vision applications. However, these models are data-hungry and training them is time-consuming. In addition, when deep neural networks are updated to augment their prediction space with new data, they run into the problem of catastrophic forgetting, where the model forgets previously learned knowledge as it overfits to the newly available data. Incremental learning algorithms enable deep neural networks to prevent catastrophic forgetting by retaining knowledge of previously observed data while also learning from newly available data. This thesis presents three models for incremental learning: (i) an algorithm for generative incremental learning using a pre-trained deep neural network classifier; (ii) a hashing-based clustering algorithm for efficient incremental learning; (iii) a student-teacher coupled neural network that distills knowledge for incremental learning. The proposed algorithms were evaluated on popular vision datasets for classification tasks. The thesis concludes with a discussion of the feasibility of using these techniques to transfer information between networks and for incremental learning applications.
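The student-teacher idea mentioned in point (iii) can be sketched as a combined loss, a cross-entropy term on the new classes plus a temperature-softened distillation term that keeps the student close to a frozen teacher on the old classes; the logits below are random stand-ins, not outputs of the thesis's networks:

```python
import numpy as np

# Sketch of a distillation-style incremental-learning loss: cross-entropy on the new
# data's labels plus a temperature-softened term that keeps the student's outputs on
# the old classes close to a frozen teacher's.
def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def incremental_loss(student_logits, teacher_logits, labels, n_old, T=2.0, alpha=0.5):
    """alpha * CE(new labels) + (1 - alpha) * distillation on the n_old old classes."""
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    p_s = softmax(student_logits[:, :n_old], T)
    p_t = softmax(teacher_logits[:, :n_old], T)
    distill = -(p_t * np.log(p_s + 1e-12)).sum(axis=1).mean() * T * T
    return alpha * ce + (1 - alpha) * distill

rng = np.random.default_rng(5)
batch, n_old, n_new = 8, 5, 2
student = rng.normal(size=(batch, n_old + n_new))
teacher = rng.normal(size=(batch, n_old))                  # frozen teacher knows old classes only
labels = rng.integers(n_old, n_old + n_new, size=batch)    # batch carries new-class labels
print("loss:", round(float(incremental_loss(student, teacher, labels, n_old)), 4))
```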
Dissertation/Thesis
Master's Thesis, Computer Science, 2020
27

Mohamed, Shakir. "Dynamic protein classification: Adaptive models based on incremental learning strategies." Thesis, 2008. http://hdl.handle.net/10539/4678.

Full text of the source
Abstract:
One of the major problems in computational biology is the inability of existing classification models to incorporate expanding and new domain knowledge. This thesis addresses the problem of static classification models by introducing incremental learning for problems in bioinformatics. The tools developed are applied to the problem of classifying proteins into a number of primary and putative families. This type of classification is of particular relevance due to its role in drug-discovery programmes and the cost and time savings it brings to that process. As a secondary problem, multi-class classification is also addressed. The standard approach to protein family classification is based on committees of binary classifiers. This one-vs-all approach is not ideal, and the classification systems presented here consist of classifiers that perform all-vs-all classification. Two incremental learning techniques are presented. The first is a novel algorithm based on the fuzzy ARTMAP classifier and an evolutionary strategy; the second applies the incremental learning algorithm Learn++. The two systems are tested on three datasets: data from the Structural Classification of Proteins (SCOP) database, the G-Protein Coupled Receptors (GPCR) database, and enzymes from the Protein Data Bank. The results show that the two techniques are comparable with each other, giving classification abilities comparable to those of single batch-trained classifiers, with the added ability of incremental learning. Both techniques are shown to be useful for protein family classification, but they are also applicable to problems outside this area, with applications in proteomics, including the prediction of functions and of secondary and tertiary structures, and in genomics, such as promoter and splice-site prediction and the classification of gene microarrays.
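A compact sketch of the Learn++ idea, one new base classifier per arriving data chunk combined by weighted majority voting, is given below; the base learner, the weighting formula and the toy data are simplified assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Compact sketch of the Learn++ idea: every new data chunk trains an extra base
# classifier, and classification is a weighted majority vote over all members.
# The weighting (log((1 - err) / err)) and the base learner are simplified stand-ins.
class IncrementalEnsemble:
    def __init__(self):
        self.members, self.weights, self.classes_ = [], [], None

    def add_chunk(self, X, y):
        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        err = max(1e-6, 1.0 - clf.score(X, y))
        self.members.append(clf)
        self.weights.append(np.log((1.0 - err) / err))
        self.classes_ = np.unique(y) if self.classes_ is None \
            else np.unique(np.concatenate([self.classes_, y]))

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)))
        for clf, w in zip(self.members, self.weights):
            for i, c in enumerate(clf.predict(X)):
                votes[i, np.searchsorted(self.classes_, c)] += w
        return self.classes_[votes.argmax(axis=1)]

rng = np.random.default_rng(6)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
ens = IncrementalEnsemble()
for chunk in range(3):                                   # three successive data chunks
    s = slice(chunk * 150, (chunk + 1) * 150)
    ens.add_chunk(X[s], y[s])
print("accuracy on held-out data:", (ens.predict(X[450:]) == y[450:]).mean())
```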
28

HUANG, HONG-WEI, and 黃弘偉. "Abnormal Moving Object Detection under Various Enviroments Using Self-Organizing Incremental Neural Networks." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/c54ss4.

Full text of the source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
100 (ROC academic year)
Abnormal moving object detection is an essential issue in video surveillance. To judge whether the behaviour of objects is abnormal (for example, pedestrians walking back and forth or crossing the street, or scooters driving the wrong way), the main approach is to use computer vision techniques to analyse objects such as pedestrians and cars in the video. Traditional abnormal moving object detection relies on detection rules predefined for particular circumstances or requirements, which restricts its applicability. Moreover, if numerous abnormal moving objects must be detected at the same time, the surveillance system becomes computationally overloaded. For these reasons, this thesis aims to design a learning model that does not require predefined abnormality rules and can automatically detect a variety of abnormal moving objects in different environments. To achieve this goal, moving objects in the video are detected first: the proposed method uses a Gaussian Mixture Model (GMM) to detect foreground objects and removes their shadows with a shadow-removal step. An adaptive mean-shift algorithm combined with a Kalman filter is then used to track the moving objects, and the Kalman filter also smooths the resulting trajectories. After the trajectories of moving objects have been collected, the abnormal moving object detection process proceeds: a Self-Organizing Incremental Neural Network (SOINN) learns from the trajectory information and builds a normal-trajectory model, which serves as the basis for deciding whether subsequent moving objects are abnormal. The average learning time is 7 to 55 seconds. The experiments monitor and analyse different environments, namely a school campus, roads, and a one-way street. The system based on the proposed method detects abnormal moving objects with an accuracy of 100% on the school campus, 98.3% on roads, and 98.8% on the one-way street. The overall execution time is short, about 0.033 to 0.067 seconds, so the system can run in real time.
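The detection-and-tracking front end of such a pipeline can be sketched with OpenCV as follows; shadow handling is reduced to discarding MOG2's shadow label, the mean-shift step and the SOINN trajectory model are omitted, and "video.avi" is a placeholder path:

```python
import cv2
import numpy as np

# Sketch of the detection-and-tracking front end: MOG2 background subtraction to find
# moving objects and a Kalman filter to smooth the largest object's centroid trajectory.
cap = cv2.VideoCapture("video.avi")
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

trajectory = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask[mask == 127] = 0                         # MOG2 marks shadow pixels with value 127
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        kf.predict()
        est = kf.correct(np.array([[x + w / 2], [y + h / 2]], dtype=np.float32))
        trajectory.append((float(est[0, 0]), float(est[1, 0])))

cap.release()
print(len(trajectory), "smoothed trajectory points")   # these would feed the SOINN normal model
```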
29

Chang, Da-Yuan, and 張大元. "A Study on Water Stage Increment of the River due to Reservoir Drainage by Artificial Neural Network." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/20190379759396040336.

Full text of the source
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Civil Engineering
92 (ROC academic year)
In Taiwan, roughly 78% of the yearly rainfall is concentrated in summer and autumn because of the island's particular climate and geography. During typhoon periods, reservoir operators often face a dilemma: retaining more floodwater risks failure of the dam, while releasing excess floodwater risks drought. The most difficult task of reservoir operation is to balance all the functions of the reservoir. To this end, forecasting reservoir inflows and simulating the downstream water stage caused by reservoir drainage are essential to operators. The watershed of the Shihmen Reservoir and the Da-han River are taken as a case study in this work. An Artificial Neural Network (ANN) simulates the characteristics of the river basin, and a rainfall-runoff model is established; once the model is built, inflows several hours ahead can be predicted. In addition, the relationship between drainage and water stage is derived. Some important results and a three-dimensional plot of drainage, water stage and tide are presented. The plot illustrates the relationships between drainage and water stage under different rainfall-intensity and tide-level conditions. It is expected that this research will be used for online reservoir operation in the future.
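A hedged sketch of a rainfall-runoff style forecaster, a small neural network mapping lagged rainfall and the previous inflow to the next hour's inflow, is shown below; the synthetic data and the lag structure are assumptions, not the calibrated model of the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of a rainfall-runoff ANN: lagged rainfall and the current inflow are mapped to
# the next hour's reservoir inflow. Data are synthetic and the lag structure (3 rainfall
# lags + 1 inflow lag) is an assumption.
rng = np.random.default_rng(7)
hours = 2000
rain = rng.gamma(0.3, 10.0, size=hours)                              # synthetic hourly rainfall
inflow = np.zeros(hours)
for t in range(1, hours):                                            # toy watershed response
    inflow[t] = 0.7 * inflow[t - 1] + 0.5 * rain[t - 1] + 0.2 * rain[max(t - 2, 0)]

X, y = [], []
for t in range(3, hours - 1):
    X.append([rain[t], rain[t - 1], rain[t - 2], inflow[t]])
    y.append(inflow[t + 1])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 3))      # one-hour-ahead forecast
```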
30

Santos, Veríssimo Manuel Brandão Lima. "Deteção automática de lesões no intestino delgado por análise de imagens obtidas por cápsula endoscópica." Doctoral thesis, 2020. http://hdl.handle.net/1822/76137.

Full text of the source
Abstract:
Doctoral thesis in Biomedical Engineering
Over the past two decades, numerous methodologies for the automatic detection of lesions by analysis of capsule endoscopy images have been proposed to automate the time-consuming image reviewing process, using a wide variety of preprocessing methods and classifiers. A significant contribution to more effective lesion detection can be obtained by using high-performance classifiers. Capsule endoscopy datasets often exhibit multimodal distributions, which make the classification boundaries complex and diverse. For this reason, it was hypothesized that using an ensemble classifier that decomposes the original problem into smaller and simpler subproblems would lead to the best results. Existing ensembles were analysed, the reason why they lose performance in incremental learning mode was identified, and a new ensemble structure suited to this mode of adaptation was proposed. Experimental results confirmed the efficiency of the proposed method and showed the advantage of using incremental learning, although in the latter case the limitations of the available database did not allow more significant differences to be observed, differences that are nevertheless seen in many other engineering areas, such as automatic speech recognition.
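The divide-and-conquer intuition, splitting a multimodal problem into simpler regions and training one small classifier per region, can be sketched as follows; the k-means partitioning, the logistic-regression experts and the synthetic data are generic stand-ins, not the ensemble structure proposed in the thesis:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Sketch of the divide-and-conquer ensemble idea: a multimodal problem is split into
# simpler regions (here with k-means) and one small classifier is trained per region;
# each sample is routed to its region's expert at prediction time.
rng = np.random.default_rng(8)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
X_parts, y_parts = [], []
for c in centers:                                   # four modes, each with its own local rule
    Xc = rng.normal(c, 0.6, size=(200, 2))
    X_parts.append(Xc)
    y_parts.append((Xc[:, 0] > c[0]).astype(int))
X, y = np.vstack(X_parts), np.hstack(y_parts)

regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
experts = {r: LogisticRegression().fit(X[regions.labels_ == r], y[regions.labels_ == r])
           for r in range(4)}

def predict(X_new):
    r_new = regions.predict(X_new)
    return np.array([int(experts[r].predict(x.reshape(1, -1))[0])
                     for x, r in zip(X_new, r_new)])

print("training accuracy:", (predict(X) == y).mean())
```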