Journal articles on the topic 'Incremental neural network'

To see the other types of publications on this topic, follow the link: Incremental neural network.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Incremental neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yang, Shuyuan, Min Wang, and Licheng Jiao. "Incremental constructive ridgelet neural network." Neurocomputing 72, no. 1-3 (December 2008): 367–77. http://dx.doi.org/10.1016/j.neucom.2008.01.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Siddiqui, Zahid Ali, and Unsang Park. "Progressive Convolutional Neural Network for Incremental Learning." Electronics 10, no. 16 (August 5, 2021): 1879. http://dx.doi.org/10.3390/electronics10161879.

Full text
Abstract:
In this paper, we present a novel incremental learning technique to solve the catastrophic forgetting problem observed in CNN architectures. We use a progressive deep neural network to incrementally learn new classes while keeping the performance of the network unchanged on old classes. The incremental training requires us to train the network only for the new classes and fine-tune the final fully connected layer, without needing to train the entire network again, which significantly reduces the training time. We evaluate the proposed architecture extensively on the image classification task using the Fashion MNIST, CIFAR-100 and ImageNet-1000 datasets. Experimental results show that the proposed network architecture not only alleviates catastrophic forgetting but also leverages prior knowledge, via lateral connections, from previously learned classes and their features. In addition, the proposed scheme is easily scalable and does not require structural changes to the network trained on the old task, which are highly desirable properties in embedded systems.
APA, Harvard, Vancouver, ISO, and other styles
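The entry above describes freezing the previously trained network, adding capacity for the new classes, and fine-tuning only the final fully connected layer. A minimal PyTorch sketch of that general idea follows; the module names, sizes, and the single lateral connection are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: keep the old network frozen, add a new branch for the new classes,
# and fine-tune only the new parameters plus the final classifier.
import torch
import torch.nn as nn

class ProgressiveClassifier(nn.Module):
    def __init__(self, old_backbone: nn.Module, feat_dim: int, n_old: int, n_new: int):
        super().__init__()
        self.old_backbone = old_backbone          # network trained on the old classes
        for p in self.old_backbone.parameters():  # freeze old knowledge
            p.requires_grad = False
        self.new_branch = nn.Sequential(          # learns features for the new classes
            nn.Linear(feat_dim, feat_dim), nn.ReLU()
        )
        # the final fully connected layer covers old + new classes and is fine-tuned
        self.classifier = nn.Linear(2 * feat_dim, n_old + n_new)

    def forward(self, x):
        old_feat = self.old_backbone(x)           # frozen features
        new_feat = self.new_branch(old_feat)      # lateral connection to old features
        return self.classifier(torch.cat([old_feat, new_feat], dim=1))

# Only the new branch and the classifier receive gradients, e.g.:
# optimizer = torch.optim.SGD(
#     list(model.new_branch.parameters()) + list(model.classifier.parameters()), lr=1e-3)
```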
3

Ho, Jiacang, and Dae-Ki Kang. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11 (November 17, 2020): 1929. http://dx.doi.org/10.3390/electronics9111929.

Full text
Abstract:
Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still face several open challenges, such as the incremental learning problem, overfitting, hyperparameter optimization, and lack of flexibility and multitasking. In this paper, we focus on the incremental learning problem, which concerns machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, a simple and direct solution to this challenge is to retrain the entire neural network after adding the new labels to the output layer. Besides that, transfer learning can be applied only if the domain of the new labels is related to the domain of the labels already trained into the neural network. In this paper, we propose a novel network architecture, namely the Brick Assembly Network (BAN), which allows a new label to be assembled into (or dismantled from) a trained neural network without retraining the entire network. In BAN, we train each label individually with a sub-network (i.e., a simple neural network) and then assemble the converged sub-networks, each trained for a single label, to form a full neural network. For each label trained in a sub-network of BAN, we introduce a new loss function that minimizes the loss of the network using data from only one class. Applying one loss function per class label is unique and different from standard neural network architectures (e.g., AlexNet, ResNet, InceptionV3, etc.), which use the values of a loss function computed over multiple labels to minimize the error of the network. The difference between the loss functions of previous approaches and the one we introduce is that we compute the loss from the node values of the penultimate layer (which we call the characteristic layer) instead of the output layer, where the loss is computed between true and predicted labels. Experimental results on several benchmark datasets show that BAN has a strong capability of adding (and removing) a new label to a trained network compared with a standard neural network and previous work.
APA, Harvard, Vancouver, ISO, and other styles
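The abstract above leaves the exact form of the one-class loss on the "characteristic layer" open. The sketch below is a hedged numpy interpretation: as an assumption, each per-label sub-network pulls its characteristic-layer activations toward a fixed target vector, and the assembled network predicts the label whose sub-network matches its target best. None of the names or choices here come from the paper.

```python
# Hedged per-label sub-network sketch; the fixed one-class target is an assumption.
import numpy as np

class SubNetwork:
    def __init__(self, in_dim, hid_dim, rng):
        self.W = rng.normal(scale=0.1, size=(in_dim, hid_dim))
        self.target = np.ones(hid_dim)            # assumed one-class target

    def characteristic(self, X):
        return np.tanh(X @ self.W)                # penultimate ("characteristic") layer

    def train(self, X, lr=0.01, epochs=100):
        for _ in range(epochs):                   # gradient descent on the one-class loss
            H = self.characteristic(X)
            grad_H = 2 * (H - self.target) / len(X)
            self.W -= lr * (X.T @ (grad_H * (1 - H ** 2)))

def assemble_predict(subnets, x):
    # Assembled network: the label whose sub-network matches its target best wins.
    scores = [np.linalg.norm(net.characteristic(x[None]) - net.target) for net in subnets]
    return int(np.argmin(scores))
```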
4

Abramova, E. S., A. A. Orlov, and K. V. Makarov. "Possibilities of Using Neural Network Incremental Learning." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (November 2021): 19–27. http://dx.doi.org/10.14529/ctcr210402.

Full text
Abstract:
The present time is characterized by unprecedented growth in the volume of information flows. Information processing underlies the solution of many practical problems. The range of intelligent information system applications is extremely broad: from managing continuous technological processes in real time to solving commercial and administrative problems. A key property of intelligent information systems is the ability to quickly process dynamically incoming data in real time; they should also be able to extract knowledge from previously solved problems. Incremental neural network training has become one of the topical issues in machine learning in recent years. Compared to traditional machine learning, incremental learning allows assimilating new knowledge that arrives gradually while preserving old knowledge gained from previous tasks. Such training should be useful in intelligent systems where data arrive dynamically. Aim. Consider the concepts, problems, and methods of incremental neural network training, and assess the possibility of using it in the development of intelligent systems. Materials and methods. The idea of incremental learning, drawn from the analysis of how a person learns throughout life, is considered. The terms used in the literature to describe incremental learning are presented. The obstacles that arise in achieving the goal of incremental learning are described. A description of three scenarios of incremental learning, among which class-incremental learning is distinguished, is given. An analysis of incremental learning methods, grouped into families of techniques by how they solve the catastrophic forgetting problem, is given. The possibilities offered by incremental learning versus traditional machine learning are presented. Results. The article attempts to assess the current state and applicability of incremental neural network learning and to identify differences from traditional machine learning. Conclusion. Incremental learning is useful for future intelligent systems, as it makes it possible to maintain existing knowledge while updating it, avoid learning from scratch, and dynamically adjust the model's ability to learn according to newly available data.
APA, Harvard, Vancouver, ISO, and other styles
5

Tomimori, Haruka, Kui-Ting Chen, and Takaaki Baba. "A Convolutional Neural Network with Incremental Learning." Journal of Signal Processing 21, no. 4 (2017): 155–58. http://dx.doi.org/10.2299/jsp.21.155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shiotani, Shigetoshi, Toshio Fukuda, and Takanori Shibata. "A neural network architecture for incremental learning." Neurocomputing 9, no. 2 (October 1995): 111–30. http://dx.doi.org/10.1016/0925-2312(94)00061-v.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mellado, Diego, Carolina Saavedra, Steren Chabert, Romina Torres, and Rodrigo Salas. "Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning." Algorithms 12, no. 10 (October 1, 2019): 206. http://dx.doi.org/10.3390/a12100206.

Full text
Abstract:
Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture which prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system which can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a deep convolutional neural network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge, with gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes that are hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning.
APA, Harvard, Vancouver, ISO, and other styles
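The core of the pseudorehearsal strategy described above is mixing generator-produced samples of old classes with real samples of new classes before each update. A short illustrative sketch follows; `generator_sample` and `classifier_fit` are stand-in callables, not SIGANN's actual modules.

```python
# Illustrative pseudorehearsal step under assumed interfaces for the generator and classifier.
import numpy as np

def pseudorehearsal_step(classifier_fit, generator_sample, new_X, new_y, n_replay, rng):
    """One incremental step: replay old classes from the generator, then fit."""
    old_X, old_y = generator_sample(n_replay)      # pseudo-samples of previously learned classes
    X = np.concatenate([old_X, new_X])
    y = np.concatenate([old_y, new_y])
    idx = rng.permutation(len(X))                  # shuffle replayed and new data together
    classifier_fit(X[idx], y[idx])                 # update the classifier on the mixture
```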
8

Heo, Kwang-Seung, and Kwee-Bo Sim. "Speaker Identification Based on Incremental Learning Neural Network." International Journal of Fuzzy Logic and Intelligent Systems 5, no. 1 (March 1, 2005): 76–82. http://dx.doi.org/10.5391/ijfis.2005.5.1.076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ciarelli, Patrick Marques, Elias Oliveira, and Evandro O. T. Salles. "An incremental neural network with a reduced architecture." Neural Networks 35 (November 2012): 70–81. http://dx.doi.org/10.1016/j.neunet.2012.08.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Yansheng, Dong Ye, Yuanhong Liu, and Jianjun Xu. "Incremental LLE Based on Back Propagation Neural Network." IOP Conference Series: Earth and Environmental Science 170 (July 2018): 042051. http://dx.doi.org/10.1088/1755-1315/170/4/042051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

HUNG, CHENG-AN, and SHENG-FUU LIN. "AN INCREMENTAL LEARNING NEURAL NETWORK FOR PATTERN CLASSIFICATION." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 06 (September 1999): 913–28. http://dx.doi.org/10.1142/s0218001499000501.

Full text
Abstract:
A neural network architecture that incorporates a supervised mechanism into a fuzzy adaptive Hamming net (FAHN) is presented. The FAHN constructs hyper-rectangles that represent template weights in an unsupervised learning paradigm. Learning in the FAHN consists of creating and adjusting hyper-rectangles in feature space. By aggregating multiple hyper-rectangles into a single class, we can build a classifier, henceforth termed a supervised fuzzy adaptive Hamming net (SFAHN), that discriminates between nonconvex and even discontinuous classes. The SFAHN can operate at a fast learning rate in online (incremental) or offline (batch) applications without becoming unstable. The performance of the SFAHN is tested on the Fisher iris data and on an online character recognition problem.
APA, Harvard, Vancouver, ISO, and other styles
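To make the hyper-rectangle mechanism mentioned above concrete, here is a minimal sketch in numpy: each category is a box in feature space that is created or expanded to cover new points, and a class is the union of several boxes. The expansion limit and nearest-box prediction rule are assumptions for illustration, not the FAHN/SFAHN equations.

```python
# Minimal hyper-rectangle learner; parameters and rules are illustrative assumptions.
import numpy as np

class HyperRectangleClassifier:
    def __init__(self, max_size=0.5):
        self.boxes = []            # list of (lower corner, upper corner, label)
        self.max_size = max_size   # largest edge length allowed after expansion

    def fit_point(self, x, label):
        for i, (lo, hi, lab) in enumerate(self.boxes):
            if lab != label:
                continue
            new_lo, new_hi = np.minimum(lo, x), np.maximum(hi, x)
            if np.all(new_hi - new_lo <= self.max_size):   # expand an existing box
                self.boxes[i] = (new_lo, new_hi, lab)
                return
        self.boxes.append((x.copy(), x.copy(), label))      # otherwise create a new box

    def predict(self, x):
        # nearest box wins (distance is 0 if the point lies inside the box)
        dists = [np.linalg.norm(np.maximum(lo - x, 0) + np.maximum(x - hi, 0))
                 for lo, hi, _ in self.boxes]
        return self.boxes[int(np.argmin(dists))][2]
```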
12

Zhang, Hongwei, Xiong Xiao, and Osamu Hasegawa. "A Load-Balancing Self-Organizing Incremental Neural Network." IEEE Transactions on Neural Networks and Learning Systems 25, no. 6 (June 2014): 1096–105. http://dx.doi.org/10.1109/tnnls.2013.2287884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Olmez, Tamer, Ertugrul Yazgan, and Okan K. Ersoy. "A multilayer incremental neural network architecture for classification." Neural Processing Letters 2, no. 2 (March 1995): 5–9. http://dx.doi.org/10.1007/bf02312348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Hao, and Xiao-juan Ban. "Clustering by growing incremental self-organizing neural network." Expert Systems with Applications 42, no. 11 (July 2015): 4965–81. http://dx.doi.org/10.1016/j.eswa.2015.02.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sarwar, Syed Shakib, Aayush Ankit, and Kaushik Roy. "Incremental Learning in Deep Convolutional Neural Networks Using Partial Network Sharing." IEEE Access 8 (2020): 4615–28. http://dx.doi.org/10.1109/access.2019.2963056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Attoh-Okine, Nii O. "Modeling incremental pavement roughness using functional network." Canadian Journal of Civil Engineering 32, no. 5 (October 1, 2005): 899–905. http://dx.doi.org/10.1139/l05-050.

Full text
Abstract:
Incremental roughness prediction is a critical component of decision making in any pavement management system; therefore, proper estimation is of paramount importance. This paper presents the application of functional equations and networks to incremental roughness prediction of flexible pavement. In functional networks, neuron functions are multivariate and multiargumentative. Functional equations form the basis of functional networks; therefore, established theorems in functional equations are easily applicable in the analysis. The model is developed from a validated set of incremental and interactive pavement distress functions in the highway design and maintenance standard models (HDM). The models proposed and developed are intended for use in infrastructure management (pavement) applications and as a performance model for pavement design. The paper presents a computational procedure for the functional network using the serial associative functional network. The functional equation and network approach uses domain knowledge and data to develop the roughness models. Key words: pavement roughness, functional equation, functional networks, neural networks, pavement management.
APA, Harvard, Vancouver, ISO, and other styles
17

Chefrour, Aida, Labiba Souici-Meslati, Iness Difi, and Nesrine Bakkouche. "A Novel Incremental Learning Algorithm Based on Incremental Vector Support Machina and Incremental Neural Network Learn++." Revue d'Intelligence Artificielle 33, no. 3 (October 10, 2019): 181–88. http://dx.doi.org/10.18280/ria.330303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Yu, Feng, Jinglong Fang, Bin Chen, and Yanli Shao. "An Incremental Learning Based Convolutional Neural Network Model for Large-Scale and Short-Term Traffic Flow." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 143–51. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1027.

Full text
Abstract:
Traffic flow prediction is very important for smooth road conditions in cities and convenient travel for residents. With the explosive growth of traffic flow data, traditional machine learning algorithms cannot fit large-scale training data effectively, deep learning algorithms suffer from huge training and update costs, and prediction accuracy may still need to be improved when an emergency affecting traffic occurs. In this study, an incremental learning based convolutional neural network model, TF-net, is proposed to achieve efficient and accurate prediction of large-scale, short-term traffic flow. The key idea is to introduce uncertainty features into the model without increasing the training cost, so as to improve prediction accuracy. Meanwhile, based on the idea of combining incremental learning with active learning, a certain percentage of typical samples in the historical traffic flow data are selected to fine-tune the prediction model, further improving prediction accuracy for special situations and meeting the real-time requirement. The experimental results show that the proposed traffic flow prediction model performs better than existing methods.
APA, Harvard, Vancouver, ISO, and other styles
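The fine-tuning idea summarized above, selecting a fixed percentage of "typical" historical samples and combining them with new data, can be sketched as below. The selection criterion used here (largest prediction error) is only an assumption standing in for the paper's active-learning criterion, and the model interface is hypothetical.

```python
# Hedged sketch of incremental fine-tuning on selected historical samples plus new data.
import numpy as np

def select_typical(model_predict, hist_X, hist_y, fraction=0.1):
    errors = np.abs(model_predict(hist_X) - hist_y)   # assumed proxy for "typical/informative"
    k = max(1, int(fraction * len(hist_X)))
    idx = np.argsort(errors)[-k:]                     # keep the hardest historical samples
    return hist_X[idx], hist_y[idx]

def incremental_finetune(model_fit, model_predict, hist_X, hist_y, new_X, new_y):
    typ_X, typ_y = select_typical(model_predict, hist_X, hist_y)
    model_fit(np.concatenate([typ_X, new_X]), np.concatenate([typ_y, new_y]))
```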
19

Gupta, Sharad, and Sudip Sanyal. "INNAMP: An incremental neural network architecture with monitor perceptron." AI Communications 31, no. 4 (June 11, 2018): 339–53. http://dx.doi.org/10.3233/aic-180767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Tsiligaridis, John. "DECISION TREES ALGORITHMS AND CLASSIFICATION WITH INCREMENTAL NEURAL NETWORK." International Journal of Digital Information and Wireless Communications 5, no. 3 (2015): 203–9. http://dx.doi.org/10.17781/p001710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nadir Kurnaz, Mehmet, Zümray Dokur, and Tamer Ölmez. "Segmentation of remote-sensing images by incremental neural network." Pattern Recognition Letters 26, no. 8 (June 2005): 1096–104. http://dx.doi.org/10.1016/j.patrec.2004.10.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Pratama, Mahardhika, Jie Lu, Sreenatha Anavatti, Edwin Lughofer, and Chee-Peng Lim. "An incremental meta-cognitive-based scaffolding fuzzy neural network." Neurocomputing 171 (January 2016): 89–105. http://dx.doi.org/10.1016/j.neucom.2015.06.022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Su, Mu-Chun, Jonathan Lee, and Kuo-Lung Hsieh. "A new ARTMAP-based neural network for incremental learning." Neurocomputing 69, no. 16-18 (October 2006): 2284–300. http://dx.doi.org/10.1016/j.neucom.2005.06.020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Fuangkhon, Piyabute. "An incremental learning preprocessor for feed-forward neural network." Artificial Intelligence Review 41, no. 2 (January 6, 2012): 183–210. http://dx.doi.org/10.1007/s10462-011-9304-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Mei, Liu, Quan Taifan, and Yao Tianbin. "Tracking maneuvering target based on neural fuzzy network with incremental neural learning." Journal of Systems Engineering and Electronics 17, no. 2 (June 2006): 343–49. http://dx.doi.org/10.1016/s1004-4132(06)60060-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ding, Xue, and Hong Hong Yang. "A Study on the Image Classification Techniques Based on Wavelet Artificial Neural Network Algorithm." Applied Mechanics and Materials 602-605 (August 2014): 3512–14. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3512.

Full text
Abstract:
With ever-changing educational information technology, it is a big problem for universities and colleges to classify the thousands of images produced during the art examination marking process. This paper explores the application of artificial intelligence techniques to classify a large number of images accurately within a limited time with the help of a computer; the results of applying the method in actual work show that the proposed approach is feasible. Artificial neural network training methods come in two main styles, incremental training and batch training, distinguished by how much of the training task the network handles at once. In incremental training [1], the network adjusts its connection weights and thresholds every time it receives an input vector and target vector; it is an online learning method. In batch training [2], the connections are not adjusted immediately but in bulk, after a given volume of input vectors and target vectors has been presented. Both training methods can be applied to static or dynamic neural networks, and different results are obtained depending on the training method used. When using artificial neural networks to solve specific problems, the learning method, training method and network function should be selected according to the expected results, the type of problem and its specific requirements [3-4]. The selection of wavelet neural network parameters and adaptive learning are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
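The distinction drawn above between incremental and batch training is easy to show for a single linear neuron: the incremental version updates the weights after every input/target pair, while the batch version makes one bulk adjustment per pass over the data. The learning rates and epoch counts below are illustrative.

```python
# Simple numpy contrast of incremental (online) vs. batch training for a linear neuron.
import numpy as np

def incremental_training(X, t, lr=0.01, epochs=10):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):          # update after each (input, target) pair
            w += lr * (t_i - x_i @ w) * x_i
    return w

def batch_training(X, t, lr=0.01, epochs=10):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                 # one bulk adjustment per pass over the data
        w += lr * X.T @ (t - X @ w) / len(X)
    return w
```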
27

Deguchi, Toshinori, Toshiki Takahashi, and Naohiro Ishii. "On Temporal Summation in Chaotic Neural Network with Incremental Learning." International Journal of Software Innovation 2, no. 4 (October 2014): 72–84. http://dx.doi.org/10.4018/ijsi.2014100106.

Full text
Abstract:
Incremental learning is a method for composing an associative memory using a chaotic neural network, and it provides larger capacity than correlative learning at the cost of a large amount of computation. A chaotic neuron performs spatiotemporal summation, and the temporal summation makes the learning stable against input noise. When there is no noise in the input, the neuron may not need temporal summation. In this paper, to reduce the computations, a simplified network without temporal summation is introduced and investigated through computer simulations, in comparison with the network used in the past, which is called here the usual network. It turns out that the simplified network has the same capacity as the usual network and can learn faster, but it loses the learning ability on noisy inputs. To improve this ability, the parameters of the chaotic neural network are adjusted.
APA, Harvard, Vancouver, ISO, and other styles
28

MUHAMMED, HAMED HAMID. "UNSUPERVISED FUZZY CLUSTERING USING WEIGHTED INCREMENTAL NEURAL NETWORKS." International Journal of Neural Systems 14, no. 06 (December 2004): 355–71. http://dx.doi.org/10.1142/s0129065704002121.

Full text
Abstract:
A new, more efficient variant of a recently developed algorithm for unsupervised fuzzy clustering is introduced. A Weighted Incremental Neural Network (WINN) is introduced and used for this purpose. The new approach is called FC-WINN (Fuzzy Clustering using WINN). The WINN algorithm produces a net of nodes connected by edges, which reflects and preserves the topology of the input data set. Additional weights, proportional to the local densities in input space, are associated with the resulting nodes and edges to store useful information about the topological relations in the given input data set. A fuzziness factor, proportional to the connectedness of the net, is introduced into the system. A watershed-like procedure is used to cluster the resulting net, and the number of resulting clusters is determined by this procedure. Only two parameters must be chosen by the user of the FC-WINN algorithm, to determine the resolution and the connectedness of the net. The other parameters that must be specified are those required by the underlying incremental neural network, which is a modified version of the Growing Neural Gas (GNG) algorithm. The FC-WINN algorithm is computationally efficient when compared to other approaches for clustering large high-dimensional data sets.
APA, Harvard, Vancouver, ISO, and other styles
29

Srilakshmi, V., K. Anuradha, and C. Shoba Bindu. "Incremental text categorization based on hybrid optimization-based deep belief neural network." Journal of High Speed Networks 27, no. 2 (July 7, 2021): 183–202. http://dx.doi.org/10.3233/jhs-210659.

Full text
Abstract:
One of the effective text categorization methods for learning from large-scale and accumulating data is incremental learning. The major challenge in incremental learning is improving accuracy, as a text document consists of numerous terms. In this research, an incremental text categorization method is developed using the proposed Spider Grasshopper Crow Optimization Algorithm based Deep Belief Neural network (SGrC-based DBN) for providing optimal text categorization results. The proposed method comprises pre-processing, feature extraction, feature selection, text categorization, and incremental learning. Initially, the database is pre-processed and fed into a vector space model for feature extraction. Once the features are extracted, feature selection is carried out based on mutual information. Then, text categorization is performed using the proposed SGrC-based DBN method, which is developed by integrating spider monkey optimization (SMO) with the Grasshopper Crow Optimization Algorithm (GCOA). Finally, incremental text categorization is performed based on a hybrid weight bounding model that includes SGrC and the Range degree model; in particular, the optimal weights of the Range degree model are selected based on SGrC. The proposed text categorization method is evaluated on data from the Reuters database and the 20 Newsgroups database. The comparative analysis is based on performance metrics such as precision, recall and accuracy. The proposed SGrC algorithm obtained a maximum accuracy of 0.9626, a maximum precision of 0.9681 and a maximum recall of 0.9600 when compared with existing incremental text categorization methods.
APA, Harvard, Vancouver, ISO, and other styles
30

ALPAYDIN, ETHEM. "GAL: NETWORKS THAT GROW WHEN THEY LEARN AND SHRINK WHEN THEY FORGET." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 01 (February 1994): 391–414. http://dx.doi.org/10.1142/s021800149400019x.

Full text
Abstract:
Learning limited to the modification of some parameters has a limited scope; the capability to modify the system structure is also needed to widen the range of what can be learned. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e. the number of hidden layers and units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as is usually done, be determined by trial-and-error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of current connectionist literature is given on this line of thought. "Grow and Learn" (GAL) is a new algorithm that learns an association in one shot because it is incremental and uses a local representation. During the so-called "sleep" phase, units that were previously stored but are no longer necessary due to recent modifications are removed to minimize network complexity. The incrementally constructed network can later be fine-tuned off-line to improve performance. Another proposed method that greatly increases recognition accuracy is to train a number of networks and vote over their responses. The algorithm and its variants were tested on the recognition of handwritten numerals and seem promising, especially in terms of learning speed. This makes the algorithm attractive for on-line learning tasks, e.g. in robotics. The biological plausibility of incremental learning is also discussed briefly.
APA, Harvard, Vancouver, ISO, and other styles
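The grow-and-learn and "sleep" ideas summarized above can be illustrated with a nearest-prototype learner: a misclassified sample is stored as a new unit in one shot, and the sleep phase prunes units that the remaining units already classify correctly. The distance measure and pruning test below are illustrative assumptions, not GAL's exact rules.

```python
# Hedged nearest-prototype sketch of one-shot growth plus sleep-phase pruning.
import numpy as np

class GrowAndLearn:
    def __init__(self):
        self.protos, self.labels = [], []

    def _predict_with(self, protos, labels, x):
        d = [np.linalg.norm(p - x) for p in protos]
        return labels[int(np.argmin(d))]

    def predict(self, x):
        return self._predict_with(self.protos, self.labels, x)

    def learn(self, x, y):
        if not self.protos or self.predict(x) != y:    # grow only on mistakes (one shot)
            self.protos.append(x.copy())
            self.labels.append(y)

    def sleep(self):
        # remove units whose stored sample is still classified correctly without them
        keep = []
        for i, (p, y) in enumerate(zip(self.protos, self.labels)):
            others = [q for j, q in enumerate(self.protos) if j != i]
            olabels = [l for j, l in enumerate(self.labels) if j != i]
            if not others or self._predict_with(others, olabels, p) != y:
                keep.append(i)
        self.protos = [self.protos[i] for i in keep]
        self.labels = [self.labels[i] for i in keep]
```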
31

Ma’sum, Muhammad Anwar. "Intelligent Clustering and Dynamic Incremental Learning to Generate Multi-Codebook Fuzzy Neural Network for Multi-Modal Data Classification." Symmetry 12, no. 4 (April 24, 2020): 679. http://dx.doi.org/10.3390/sym12040679.

Full text
Abstract:
Classification of multi-modal data is one of the challenges in the machine learning field. Multi-modal data need special treatment, as their features are distributed over several regions. This study proposes multi-codebook fuzzy neural networks that use intelligent clustering and dynamic incremental learning for multi-modal data classification. We utilize intelligent K-means clustering based on anomalous patterns and intelligent K-means clustering based on histogram information. Clustering is used to generate codebook candidates before the training process, while incremental learning is utilized when the condition to generate a new codebook is met. The condition to generate a new codebook in incremental learning is based on the similarity of the winner class and other classes. The proposed method was evaluated on synthetic and benchmark datasets. The experimental results show that the proposed multi-codebook fuzzy neural networks that use dynamic incremental learning achieve significant improvements over the original fuzzy neural networks: 15.65%, 5.31% and 11.42% on the synthetic dataset, the benchmark dataset, and the average of all datasets, respectively, for incremental version 1, and 21.08%, 4.63% and 14.35%, respectively, for incremental learning version 2. The multi-codebook fuzzy neural networks that use intelligent clustering also show significant improvements over the original fuzzy neural networks, achieving 23.90%, 2.10% and 15.02% improvements on the synthetic dataset, the benchmark dataset, and the average of all datasets, respectively.
APA, Harvard, Vancouver, ISO, and other styles
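The abstract above states that a new codebook is generated when the winner of the true class is too similar to the winners of other classes. A hedged LVQ-style sketch of that trigger follows; the margin criterion and the simple nearest-codebook prediction are assumptions, not the paper's fuzzy formulation.

```python
# Hedged sketch of dynamic codebook generation for a multi-codebook classifier.
import numpy as np

class MultiCodebookClassifier:
    def __init__(self, margin=0.1):
        self.codebooks = {}          # class label -> list of codebook vectors
        self.margin = margin         # assumed similarity margin

    def _best_dist(self, label, x):
        return min(np.linalg.norm(c - x) for c in self.codebooks[label])

    def learn(self, x, y):
        if y not in self.codebooks:
            self.codebooks[y] = [x.copy()]
            return
        own = self._best_dist(y, x)
        others = [self._best_dist(l, x) for l in self.codebooks if l != y]
        # incremental step: if the true class's winner is not clearly the closest,
        # generate a new codebook vector for that class
        if others and own > min(others) - self.margin:
            self.codebooks[y].append(x.copy())

    def predict(self, x):
        return min(self.codebooks, key=lambda l: self._best_dist(l, x))
```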
32

Oki, Isao, Takeshi Haida, Yoshio Izui, and Seiji Kobayashi. "Incremental Cluster Learning Neural Network Application to GIS Internal Diagnostics." IEEJ Transactions on Power and Energy 116, no. 6 (1996): 731–40. http://dx.doi.org/10.1541/ieejpes1990.116.6_731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hebboul, Amel, Fella Hachouf, and Amel Boulemnadjel. "A new incremental neural network for simultaneous clustering and classification." Neurocomputing 169 (December 2015): 89–99. http://dx.doi.org/10.1016/j.neucom.2015.02.084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Meir, R., and V. E. Maiorov. "On the optimality of neural-network approximation using incremental algorithms." IEEE Transactions on Neural Networks 11, no. 2 (March 2000): 323–37. http://dx.doi.org/10.1109/72.839004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Jenq-Haur, Hsin-Yang Wang, Yen-Lin Chen, and Chuan-Ming Liu. "A constructive algorithm for unsupervised learning with incremental neural network." Journal of Applied Research and Technology 13, no. 2 (April 2015): 188–96. http://dx.doi.org/10.1016/j.jart.2015.06.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kurnaz, Mehmet Nadir, Zümray Dokur, and Tamer Ölmez. "An incremental neural network for tissue segmentation in ultrasound images." Computer Methods and Programs in Biomedicine 85, no. 3 (March 2007): 187–95. http://dx.doi.org/10.1016/j.cmpb.2006.10.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Dokur, Zümray. "Respiratory sound classification by using an incremental supervised neural network." Pattern Analysis and Applications 12, no. 4 (June 10, 2008): 309–19. http://dx.doi.org/10.1007/s10044-008-0125-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Wiwatcharakoses, Chayut, and Daniel Berrar. "A self-organizing incremental neural network for continual supervised learning." Expert Systems with Applications 185 (December 2021): 115662. http://dx.doi.org/10.1016/j.eswa.2021.115662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Shen, Furao, Hui Yu, Keisuke Sakurai, and Osamu Hasegawa. "An incremental online semi-supervised active learning algorithm based on self-organizing incremental neural network." Neural Computing and Applications 20, no. 7 (August 6, 2010): 1061–74. http://dx.doi.org/10.1007/s00521-010-0428-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Olaru, Adrian, Serban Olaru, Dan Paune, and Adrian Ghionea. "Assisted Research of the Neural Network." Advanced Materials Research 463-464 (February 2012): 1098–101. http://dx.doi.org/10.4028/www.scientific.net/amr.463-464.1098.

Full text
Abstract:
In the optimization of the trajectory or guidance of mobile robots, one of the most important requirements is to ensure a small difference between the output of the system and the target. This paper shows how a convergence path to the target can be established online, without any influence from the input data or the initial conditions of the weights or biases. The paper presents the general components and the mathematical model of some of the more important neurons, together with a numerical simulation of the linear neural network. The least mean square (LMS) error algorithm was used for adjusting the weights and biases, with incremental training at different training rates, in order to obtain a minimum error with respect to the target.
APA, Harvard, Vancouver, ISO, and other styles
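The training scheme described above, incremental adjustment of weights and biases with the LMS rule at a chosen training rate, can be sketched for a single linear (ADALINE-style) neuron as follows; the learning rate and epoch count are illustrative.

```python
# Small numpy sketch of incremental LMS training for a linear neuron with bias.
import numpy as np

def lms_train(X, targets, lr=0.05, epochs=50):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):     # incremental update per sample
            e = t - (x @ w + b)          # error between target and network output
            w += lr * e * x              # LMS weight adjustment
            b += lr * e                  # LMS bias adjustment
    return w, b
```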
41

Kubo, Masao, Akihiro Yamaguchi, Sadayoshi Mikami, and Mitsuo Wada. "Logistic Chaos Protects Evolution against Environment Noise." Journal of Robotics and Mechatronics 10, no. 4 (August 20, 1998): 350–57. http://dx.doi.org/10.20965/jrm.1998.p0350.

Full text
Abstract:
We propose a neuron with a noise generator, intended specifically for the evolutionary robotics approach to incremental knowledge acquisition. Genetically evolving neural networks are modified continuously by genetic operations. When a neural network acts as a robotic controller, incrementing its knowledge becomes difficult because the network may behave unlike it did in the past, disturbed by neurons added through genetic operators. To evolve a network that is robust against such internal noise, we propose adding noise generators to neurons. We show the effectiveness of applying a logistic chaos noise generator to neurons by comparing several noise generators.
APA, Harvard, Vancouver, ISO, and other styles
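A neuron whose internal noise comes from a logistic map, as proposed above, can be sketched in a few lines; the logistic parameter, gain, and tanh activation are assumed values for illustration rather than the paper's model.

```python
# Illustrative neuron with a logistic-map noise generator; parameters are assumptions.
import math

class LogisticNoiseNeuron:
    def __init__(self, r=4.0, x0=0.3, gain=0.1):
        self.r, self.x, self.gain = r, x0, gain

    def noise(self):
        self.x = self.r * self.x * (1.0 - self.x)    # logistic map iteration (chaotic for r = 4)
        return self.gain * (self.x - 0.5)             # zero-centred chaotic noise

    def activate(self, net_input):
        return math.tanh(net_input + self.noise())    # noise injected into the activation
```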
42

HABIBI, MUHAMMAD NIZAR, DIMAS NUR PRAKOSO, NOVIE AYUB WINDARKO, and ANANG TJAHJONO. "Perbaikan MPPT Incremental Conductance menggunakan ANN pada Berbayang Sebagian dengan Hubungan Paralel." ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 8, no. 3 (August 27, 2020): 546. http://dx.doi.org/10.26760/elkomika.v8i3.546.

Full text
Abstract:
The Incremental Conductance (IC) algorithm can be implemented in Maximum Power Point Tracking (MPPT) systems to obtain the maximum power from solar panels. However, the MPPT IC algorithm cannot work under partial shading conditions, because partial shading produces more than one power maximum. An Artificial Neural Network (ANN) can identify the characteristic curve under partial shading conditions and locate the true maximum power point. The inputs of the ANN are the short-circuit current and the open-circuit voltage of the solar panel, and its output is the duty cycle value used as the initial tracking position of the MPPT IC. The learning data are obtained by manually changing the duty cycle value of the MPPT system under various irradiation conditions. The test results show that the proposed algorithm can increase the harvested energy by 5.79% - 13.32% compared with ANN-Perturb and Observe and ANN-Incremental Resistance, over a duration of 0.6 seconds. Keywords: Maximum Power Point Tracking, Incremental Conductance, Artificial Neural Network, Partial Shading, Parallel Connection
APA, Harvard, Vancouver, ISO, and other styles
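The scheme above combines an ANN-provided initial duty cycle with the classic incremental conductance rule, which compares dI/dV with -I/V to decide which way to move. A hedged sketch follows; the ANN is treated as a black-box function, the step size and bounds are assumptions, and the sign of the duty-cycle correction depends on the converter topology.

```python
# Hedged sketch: ANN gives the initial duty cycle, incremental conductance then tracks the MPP.
def incremental_conductance_step(v, i, v_prev, i_prev, duty, step=0.005):
    dv, di = v - v_prev, i - i_prev
    # Sign convention below assumes that decreasing the duty cycle raises the panel voltage.
    if dv == 0:
        if di > 0:
            duty -= step
        elif di < 0:
            duty += step
    elif di / dv > -i / v:         # operating point left of the maximum power point
        duty -= step
    elif di / dv < -i / v:         # operating point right of the maximum power point
        duty += step
    return min(max(duty, 0.0), 1.0)

def mppt_with_ann(ann_duty, i_sc, v_oc, read_panel, n_steps=100):
    duty = ann_duty(i_sc, v_oc)    # ANN output used as the initial tracking position
    v_prev, i_prev = read_panel(duty)
    for _ in range(n_steps):
        v, i = read_panel(duty)    # read panel voltage and current at the current duty cycle
        duty = incremental_conductance_step(v, i, v_prev, i_prev, duty)
        v_prev, i_prev = v, i
    return duty
```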
43

Pelagotti, Andrea, and Vincenzo Piuri. "Entropic Analysis and Incremental Synthesis of Multilayered Feedforward Neural Networks." International Journal of Neural Systems 08, no. 05n06 (October 1997): 647–59. http://dx.doi.org/10.1142/s0129065797000574.

Full text
Abstract:
Neural network architecture optimization is often a critical issue, particularly when VLSI implementation is considered. This paper proposes a new minimization method for multilayered feedforward ANNs and an original approach to their synthesis, both based on the analysis of the information quantity (entropy) flowing through the network. A layer is described as an information filter which selects the relevant characteristics until the complete classification is performed. The basic incremental synthesis method, including the supervised training procedure, is derived to design application-tailored neural paradigms with good generalization capability.
APA, Harvard, Vancouver, ISO, and other styles
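The entropy analysis mentioned above treats each layer as an information filter. One simple way to estimate the information flowing through a layer is the Shannon entropy of its quantized activation patterns over a data set; the binarization used below is an assumption, not the paper's exact measure.

```python
# Minimal numpy sketch: entropy (in bits) of binarized activation patterns of one layer.
import numpy as np
from collections import Counter

def layer_entropy(activations, threshold=0.0):
    """Entropy of binarized activation patterns; `activations` has one row per input sample."""
    patterns = [tuple((row > threshold).astype(int)) for row in activations]
    counts = Counter(patterns)
    total = len(patterns)
    probs = np.array([c / total for c in counts.values()])
    return float(-(probs * np.log2(probs)).sum())
```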
44

Xu, Mingyuan, Tong Zhu, and John Z. H. Zhang. "Automated Construction of Neural Network Potential Energy Surface: The Enhanced Self-Organizing Incremental Neural Network Deep Potential Method." Journal of Chemical Information and Modeling 61, no. 11 (November 9, 2021): 5425–37. http://dx.doi.org/10.1021/acs.jcim.1c01125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xiang, Zhiyang, Zhu Xiao, Dong Wang, and Hassana Maigary Georges. "Incremental semi-supervised kernel construction with self-organizing incremental neural network and application in intrusion detection." Journal of Intelligent & Fuzzy Systems 31, no. 2 (July 22, 2016): 815–23. http://dx.doi.org/10.3233/jifs-169013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ohta, Hiroyuki, and Yukio Pegio Gunji. "Recurrent neural network architecture with pre-synaptic inhibition for incremental learning." Neural Networks 19, no. 8 (October 2006): 1106–19. http://dx.doi.org/10.1016/j.neunet.2006.06.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Furao, Shen, Tomotaka Ogura, and Osamu Hasegawa. "An enhanced self-organizing incremental neural network for online unsupervised learning." Neural Networks 20, no. 8 (October 2007): 893–903. http://dx.doi.org/10.1016/j.neunet.2007.07.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Xing, Youlu, Xiaofeng Shi, Furao Shen, Ke Zhou, and Jinxi Zhao. "A Self-Organizing Incremental Neural Network based on local distribution learning." Neural Networks 84 (December 2016): 143–60. http://dx.doi.org/10.1016/j.neunet.2016.08.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Roy, Deboleena, Priyadarshini Panda, and Kaushik Roy. "Tree-CNN: A hierarchical Deep Convolutional Neural Network for incremental learning." Neural Networks 121 (January 2020): 148–60. http://dx.doi.org/10.1016/j.neunet.2019.09.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

KOBAYASHI, Masataka, Seiichi OZAWA, and Shigeo ABE. "Incremental Learning Algorithm for Feedforward Neural Network with Long-Term Memory." Transactions of the Society of Instrument and Control Engineers 38, no. 9 (2002): 792–99. http://dx.doi.org/10.9746/sicetr1965.38.792.

Full text
APA, Harvard, Vancouver, ISO, and other styles