Journal articles on the topic "Class-incremental learning"

Consult the top 50 journal articles for your research on the topic "Class-incremental learning".

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Kim, Taehoon, Jaeyoo Park, and Bohyung Han. "Cross-Class Feature Augmentation for Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13168–76. http://dx.doi.org/10.1609/aaai.v38i12.29216.

Abstract:
We propose a novel class-incremental learning approach that incorporates a feature augmentation technique motivated by adversarial attacks. We employ a classifier learned in the past to complement the training examples of previous tasks. The proposed approach offers a unique perspective on utilizing previous knowledge in class-incremental learning, since it augments features of arbitrary target classes using examples from other classes via adversarial attacks on a previously learned classifier. Through Cross-Class Feature Augmentation (CCFA), each class in the old tasks conveniently populates samples in the feature space, which alleviates the collapse of decision boundaries caused by sample deficiency for the previous tasks, especially when the number of stored exemplars is small. This idea can easily be incorporated into existing class-incremental learning algorithms without any architecture modification. Extensive experiments on the standard benchmarks show that our method consistently outperforms existing class-incremental learning methods by significant margins in various scenarios, especially under an extremely limited memory budget.
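
The mechanism can be illustrated with a short sketch: a targeted adversarial update in feature space against a frozen classifier from the previous task. This is a minimal illustration, not the authors' implementation; the attack schedule, shapes, and the linear head are assumptions.

import torch
import torch.nn.functional as F

def cross_class_augment(old_classifier, feats, target_class, steps=5, step_size=0.1):
    """Perturb features drawn from other classes so that the frozen old
    classifier assigns them to target_class (an FGSM-style targeted attack)."""
    feats = feats.clone().detach().requires_grad_(True)
    targets = torch.full((feats.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        loss = F.cross_entropy(old_classifier(feats), targets)
        grad, = torch.autograd.grad(loss, feats)
        # descend the loss: move the features toward the target old class
        feats = (feats - step_size * grad.sign()).detach().requires_grad_(True)
    return feats.detach()

# illustrative usage: a frozen linear head from the previous task
old_head = torch.nn.Linear(64, 10)
for p in old_head.parameters():
    p.requires_grad_(False)
augmented = cross_class_augment(old_head, torch.randn(32, 64), target_class=3)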
2

Park, Ju-Youn, and Jong-Hwan Kim. "Incremental Class Learning for Hierarchical Classification." IEEE Transactions on Cybernetics 50, no. 1 (January 2020): 178–89. http://dx.doi.org/10.1109/tcyb.2018.2866869.

3

Qin, Yuping, Hamid Reza Karimi, Dan Li, Shuxian Lun, and Aihua Zhang. "A Mahalanobis Hyperellipsoidal Learning Machine Class Incremental Learning Algorithm." Abstract and Applied Analysis 2014 (2014): 1–5. http://dx.doi.org/10.1155/2014/894246.

Abstract:
A Mahalanobis hyperellipsoidal learning machine class-incremental learning algorithm is proposed. For each class, a hyperellipsoid that encloses as many of the class's samples as possible while pushing outlier samples away is trained in the feature space. In the process of incremental learning, only one sub-classifier is trained on the new class samples; the old models of the classifier are not affected and can be reused. In the process of classification, taking into account the distribution of samples in the feature space, the Mahalanobis distances from the mapped sample to the center of each hyperellipsoid are used to decide the class of the sample. The experimental results show that the proposed method achieves higher classification precision and classification speed.
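
As a rough sketch of the classification rule (not the paper's training procedure), each class keeps its own mean and inverse covariance, a new class adds one model without touching the others, and a sample goes to the nearest center in Mahalanobis distance; the covariance regularizer is an assumption.

import numpy as np

def fit_class_models(X, y):
    """One (mean, inverse covariance) pair per class, i.e. one hyperellipsoid."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        models[c] = (Xc.mean(axis=0), np.linalg.inv(cov))
    return models

def mahalanobis_predict(models, x):
    """Assign x to the class whose center is nearest in Mahalanobis distance."""
    def dist(c):
        mu, inv_cov = models[c]
        d = x - mu
        return float(d @ inv_cov @ d)
    return min(models, key=dist)

X, y = np.random.randn(100, 4), np.repeat([0, 1], 50)
models = fit_class_models(X, y)          # adding a class later only adds an entry
print(mahalanobis_predict(models, X[0]))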
4

Pang, Shaoning, Lei Zhu, Gang Chen, Abdolhossein Sarrafzadeh, Tao Ban, and Daisuke Inoue. "Dynamic class imbalance learning for incremental LPSVM." Neural Networks 44 (August 2013): 87–100. http://dx.doi.org/10.1016/j.neunet.2013.02.007.

5

Liu, Yaoyao, Yingying Li, Bernt Schiele, and Qianru Sun. "Online Hyperparameter Optimization for Class-Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8906–13. http://dx.doi.org/10.1609/aaai.v37i7.26070.

Abstract:
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase by phase. An inherent challenge of CIL is the stability-plasticity tradeoff: CIL models should remain stable to retain old knowledge and remain plastic to absorb new knowledge. However, none of the existing CIL models can achieve the optimal tradeoff in different data-receiving settings, where typically the training-from-half (TFH) setting needs more stability, while training-from-scratch (TFS) needs more plasticity. To this end, we design an online learning method that can adaptively optimize the tradeoff without knowing the setting a priori. Specifically, we first introduce the key hyperparameters that influence the tradeoff, e.g., knowledge distillation (KD) loss weights, learning rates, and classifier types. Then, we formulate the hyperparameter optimization process as an online Markov Decision Process (MDP) problem and propose a specific algorithm to solve it. We apply local estimated rewards and the classic bandit algorithm Exp3 to address the issues that arise when applying online MDP methods to the CIL protocol. Our method consistently improves top-performing CIL methods in both TFH and TFS settings, e.g., boosting the average accuracy of TFH and TFS by 2.2 percentage points on ImageNet-Full compared to the state of the art. Code is provided at https://class-il.mpi-inf.mpg.de/online/
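
The Exp3 routine the authors apply is a standard adversarial-bandit update; here is a minimal sketch in which the reward function stands in for a locally estimated accuracy and all constants are illustrative.

import numpy as np

def exp3(n_arms, reward_fn, T, gamma=0.1, seed=0):
    """Exp3: keep weights over arms (here, candidate hyperparameter settings)
    and update them with importance-weighted rewards in [0, 1]."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_arms)
    for _ in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=p)
        r = reward_fn(arm)                       # e.g. a local accuracy estimate
        w[arm] *= np.exp(gamma * (r / p[arm]) / n_arms)
    return w / w.sum()

# toy usage: arm 2 yields the best reward, so its weight should dominate
print(exp3(4, lambda arm: 1.0 if arm == 2 else 0.2, T=500))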
6

Zhang, Lijuan, Xiaokang Yang, Kai Zhang, Yong Li, Fu Li, Jun Li, and Dongming Li. "Class-Incremental Learning Based on Anomaly Detection." IEEE Access 11 (2023): 69423–38. http://dx.doi.org/10.1109/access.2023.3293524.

7

Liang, Sen, Kai Zhu, Wei Zhai, Zhiheng Liu, and Yang Cao. "Hypercorrelation Evolution for Video Class-Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3315–23. http://dx.doi.org/10.1609/aaai.v38i4.28117.

Abstract:
Video class-incremental learning aims to recognize new actions while limiting the catastrophic forgetting of old ones, whose representative samples can only be saved in limited memory. Semantically variable subactions are susceptible to class confusion due to data imbalance. While existing methods address the problem by estimating and distilling spatio-temporal knowledge, we further show that refining hierarchical correlations is crucial for aligning spatio-temporal features. To enhance adaptability to evolved actions, we propose a hierarchical aggregation strategy in which hierarchical matching matrices are combined and jointly optimized to selectively store and retrieve relevant features from previous tasks. Meanwhile, a correlation refinement mechanism is presented to reinforce the bias toward informative exemplars according to the online hypercorrelation distribution. Experimental results demonstrate the effectiveness of the proposed method on three standard video class-incremental learning benchmarks, outperforming state-of-the-art methods. Code is available at: https://github.com/Lsen991031/HCE
8

Xu, Shixiong, Gaofeng Meng, Xing Nie, Bolin Ni, Bin Fan, and Shiming Xiang. "Defying Imbalanced Forgetting in Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 16211–19. http://dx.doi.org/10.1609/aaai.v38i14.29555.

Abstract:
For the first time, we observe a high level of imbalance in the accuracy of different learned classes within the same old task. This intriguing phenomenon, discovered in replay-based class-incremental learning (CIL), highlights the imbalanced forgetting of learned classes, since their accuracy is similar before catastrophic forgetting occurs. This discovery previously went unnoticed due to the reliance on average incremental accuracy as the measurement for CIL, which assumes that the accuracy of classes within the same task is similar. However, this assumption is invalid in the face of catastrophic forgetting. Further empirical studies indicate that this imbalanced forgetting is caused by conflicts in representation between semantically similar old and new classes. These conflicts are rooted in the data imbalance present in replay-based CIL methods. Building on these insights, we propose CLass-Aware Disentanglement (CLAD) as a means to predict the old classes that are more likely to be forgotten and enhance their accuracy. Importantly, CLAD can be seamlessly integrated into existing CIL methods. Extensive experiments demonstrate that CLAD consistently improves current replay-based methods, resulting in performance gains of up to 2.56%.
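
The quantity at stake is easy to make concrete: per-class accuracy within one old task and the spread between its best- and worst-retained classes. The sketch below is a simple numpy illustration, not the paper's exact metric.

import numpy as np

def per_class_accuracy(y_true, y_pred, classes):
    """Accuracy of each class separately; averaging over classes hides imbalance."""
    return {c: float(np.mean(y_pred[y_true == c] == c)) for c in classes}

def forgetting_imbalance(acc_by_class):
    """Spread between the best- and worst-retained classes of one old task."""
    accs = list(acc_by_class.values())
    return max(accs) - min(accs)

# two old classes with very different retention after learning a new task
acc = per_class_accuracy(np.array([0, 0, 1, 1]), np.array([0, 0, 2, 1]), [0, 1])
print(acc, forgetting_imbalance(acc))   # {0: 1.0, 1: 0.5} 0.5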
9

Guo, Jiaqi, Guanqiu Qi, Shuiqing Xie, and Xiangyuan Li. "Two-Branch Attention Learning for Fine-Grained Class Incremental Learning." Electronics 10, no. 23 (December 1, 2021): 2987. http://dx.doi.org/10.3390/electronics10232987.

Abstract:
As a long-standing research area, class-incremental learning (CIL) aims to effectively learn a unified classifier as the number of classes grows. Due to its small inter-class variances and large intra-class variances, fine-grained visual categorization (FGVC) is a challenging visual task that has not attracted enough attention in CIL. The localization of critical regions specialized for fine-grained object recognition plays a crucial role in FGVC, and it is important to learn fine-grained features from critical regions in fine-grained CIL for the recognition of new object classes. This paper designs a network architecture named the two-branch attention learning network (TBAL-Net) for fine-grained CIL. TBAL-Net can localize critical regions and learn fine-grained feature representations using a lightweight attention module. An effective training framework for fine-grained CIL is proposed by integrating TBAL-Net into an effective CIL process. The framework is tested on three popular fine-grained object datasets: CUB-200-2011, FGVC-Aircraft, and Stanford-Car. The comparative experimental results demonstrate that the proposed framework achieves state-of-the-art performance on the three fine-grained object datasets.
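
The abstract does not specify the attention design; the sketch below uses a generic squeeze-and-excitation-style channel attention in PyTorch as a stand-in for a lightweight attention module that re-weights backbone features.

import torch
import torch.nn as nn

class LightweightAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: cheap to bolt onto a
    backbone, it re-weights channels to emphasize critical-region features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # channel-wise re-weighting

feat = torch.randn(2, 64, 14, 14)            # a backbone feature map
print(LightweightAttention(64)(feat).shape)  # torch.Size([2, 64, 14, 14])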
10

Qin, Zhili, Wei Han, Jiaming Liu, Rui Zhang, Qingli Yang, Zejun Sun, and Junming Shao. "Rethinking few-shot class-incremental learning: A lazy learning baseline." Expert Systems with Applications 250 (September 2024): 123848. http://dx.doi.org/10.1016/j.eswa.2024.123848.

11

Dong, Na, Yongqiang Zhang, Mingli Ding, and Gim Hee Lee. "Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 543–51. http://dx.doi.org/10.1609/aaai.v37i1.25129.

Abstract:
Incremental few-shot object detection aims at detecting novel classes without forgetting knowledge of the base classes, using only a few labeled training samples from the novel classes. Most related prior work addresses incremental object detection and relies on the availability of abundant training samples per novel class, which substantially limits scalability to real-world settings where novel data can be scarce. In this paper, we propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector. To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision from additional object proposals generated using Selective Search as pseudo labels. We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without forgetting the base classes. Extensive experiments conducted on standard incremental object detection and incremental few-shot object detection settings show that our approach significantly outperforms state-of-the-art methods by a large margin. Our source code is available at https://github.com/dongnana777/Incremental-DETR.
12

Wang, Ruixiang, Yong Luo, Yitao Ren, and Keming Mao. "Discrimination Correction and Balance for Class-Incremental Learning." Journal of Physics: Conference Series 2347, no. 1 (September 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2347/1/012024.

Abstract:
Catastrophic forgetting is the key issue in incremental learning: with continuously incoming tasks and limited storage, the model performs worse on old classes than on new ones. To balance model plasticity and stability, this paper proposes a novel class-incremental learning method, DCBIL, which adopts discrimination correction and balance to avoid model bias. First, three components with identical structure are designed: Conservative, Balancer, and Opener. Conservative and Opener are biased toward old and new classes, respectively, while Balancer modifies the fully connected layer and replaces the base model for the next training round. Moreover, a stochastic perturbation probability synergy is incorporated for final output fusion. Experimental evaluations on MNIST, CIFAR-10, CIFAR-100, and TinyImageNet demonstrate the effectiveness of DCBIL. DCBIL is portable: it can be used as a plug-in to strengthen the performance of existing incremental learning models.
13

Dong, Qi, Shaogang Gong, and Xiatian Zhu. "Imbalanced Deep Learning by Minority Class Incremental Rectification." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 6 (June 1, 2019): 1367–81. http://dx.doi.org/10.1109/tpami.2018.2832629.

14

Guo, Lei, Gang Xie, Xinying Xu, and Jinchang Ren. "Exemplar-Supported Representation for Effective Class-Incremental Learning." IEEE Access 8 (2020): 51276–84. http://dx.doi.org/10.1109/access.2020.2980386.

15

Guan, Sheng-Uei, Chunyu Bao, and Ru-Tian Sun. "Hierarchical Incremental Class Learning with Reduced Pattern Training." Neural Processing Letters 24, no. 2 (September 20, 2006): 163–77. http://dx.doi.org/10.1007/s11063-006-9019-4.

16

Palit, Sanchar, Biplab Banerjee, and Subhasis Chaudhuri. "Prototypical Quadruplet for Few-Shot Class Incremental Learning." Procedia Computer Science 222 (2023): 25–34. http://dx.doi.org/10.1016/j.procs.2023.08.141.

17

Tian, Songsong, Lusi Li, Weijun Li, Hang Ran, Xin Ning, and Prayag Tiwari. "A survey on few-shot class-incremental learning." Neural Networks 169 (January 2024): 307–24. http://dx.doi.org/10.1016/j.neunet.2023.10.039.

18

Shen, Mingge, Dehu Chen, Silan Hu, and Gang Xu. "Class incremental learning of remote sensing images based on class similarity distillation." PeerJ Computer Science 9 (September 27, 2023): e1583. http://dx.doi.org/10.7717/peerj-cs.1583.

Abstract:
When a well-trained model learns a new class, the distribution differences between the new and old classes inevitably cause catastrophic forgetting as the model adapts to perform better on the new class. This behavior differs from human learning. In this article, we propose a class-incremental object detection method for remote sensing images to address the catastrophic forgetting caused by distribution differences among classes. First, we introduce a class similarity distillation (CSD) loss based on the similarity between new- and old-class prototypes, ensuring the model's plasticity to learn new classes and its stability to detect old classes. Second, to better extract class similarity features, we propose a global similarity distillation (GSD) loss that maximizes the mutual information between new-class features and old-class features. Additionally, we present a region proposal network (RPN)-based method that assigns positive and negative labels to prevent mislearning issues. Experiments demonstrate that our method is more accurate for class-incremental learning on the public DOTA and DIOR datasets and significantly improves training efficiency compared to state-of-the-art class-incremental object detection methods.
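
The abstract defines the CSD loss only conceptually; one plausible reading, sketched below, distills each sample's similarity profile over old-class prototypes from the old model's features into the new one. The KL formulation, temperature, and shapes are assumptions.

import torch
import torch.nn.functional as F

def class_similarity_distillation(feats_new, feats_old, prototypes, T=2.0):
    """Keep each sample's similarity profile over old-class prototypes stable
    between the old (teacher) and updated (student) feature extractors."""
    def profile(f):
        sims = F.normalize(f, dim=1) @ F.normalize(prototypes, dim=1).t()
        return F.log_softmax(sims / T, dim=1)
    p_old = profile(feats_old).exp()            # teacher distribution
    return F.kl_div(profile(feats_new), p_old, reduction="batchmean")

# feats_old: old-model features, feats_new: current-model features (toy shapes)
loss = class_similarity_distillation(torch.randn(8, 128), torch.randn(8, 128),
                                     prototypes=torch.randn(5, 128))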
19

Chen, Xinzhe, and Hong Liang. "An Optimized Class Incremental Learning Network with Dynamic Backbone Based on Sonar Images." Journal of Marine Science and Engineering 11, no. 9 (September 12, 2023): 1781. http://dx.doi.org/10.3390/jmse11091781.

Abstract:
Class incremental learning with sonar images introduces a new dimension to underwater target recognition. Directly applying networks designed for optical images to our constructed sonar image dataset (SonarImage20) results in significant catastrophic forgetting. To address this problem, our study carefully selects the Dynamically Expandable Representation (DER)—recognized for its superior performance—as the baseline. We combine the intrinsic properties of sonar images with deep learning theories and optimize both the backbone and the class incremental training strategies of DER. The culmination of this optimization is the introduction of DER-Sonar, a class incremental learning network tailored for sonar images. Evaluations on SonarImage20 underscore the power of DER-Sonar. It outperforms competing class incremental learning networks with an impressive average recognition accuracy of 96.30%, a significant improvement of 7.43% over the baseline.
20

Dong, Songlin, Xiaopeng Hong, Xiaoyu Tao, Xinyuan Chang, Xing Wei, and Yihong Gong. "Few-Shot Class-Incremental Learning via Relation Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1255–63. http://dx.doi.org/10.1609/aaai.v35i2.16213.

Abstract:
In this paper, we focus on the challenging few-shot class-incremental learning (FSCIL) problem, which requires transferring knowledge from old tasks to new ones while overcoming catastrophic forgetting. We propose the exemplar relation distillation incremental learning framework to balance the tasks of preserving old knowledge and adapting to new knowledge. First, we construct an exemplar relation graph to represent the knowledge learned by the original network and update it gradually as new tasks are learned. Then an exemplar relation loss function for discovering the relation knowledge between different classes is introduced to learn and transfer the structural information in the relation graph. A large number of experiments demonstrate that relation knowledge does exist in the exemplars and that our approach outperforms other state-of-the-art class-incremental learning methods on the CIFAR100, miniImageNet, and CUB200 datasets.
21

Xu, Xin, and Wei Wang. "An Incremental Gray Relational Analysis Algorithm for Multi-Class Classification and Outlier Detection." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 06 (September 2012): 1250011. http://dx.doi.org/10.1142/s0218001412500115.

Abstract:
An incremental classifier saves significant computational cost by learning incrementally on continuously growing training data. However, existing classification algorithms are problematic when applied to incremental learning for multi-class classification. First, some algorithms, such as neural networks and SVMs, are expensive to train incrementally due to their complex architectures; when applied to multi-class classification, the computational cost rises dramatically as the number of classes increases. Second, existing incremental classification algorithms are usually based on heuristic schemes and are sensitive to the input order of the training data. In addition, when a test instance is an outlier belonging to none of the existing classes, few classification algorithms are able to detect it. Finally, the feature selection and weighting schemes commonly used are generally at risk of the "siren pitfall" in multi-class classification tasks. To address these problems, we put forward an incremental gray relational analysis algorithm (IGRA). Experimental results showed that, when applied to incremental multi-class classification, IGRA is stable in its output, robust to the input order of the training data, superior in computational efficiency, and also capable of detecting outliers and alleviating the "siren pitfall".
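
The gray relational machinery itself is standard and compact; the sketch below computes the gray relational grade of a test instance against per-class reference sequences. Adding a class appends one row, and a uniformly low grade can flag an outlier, mirroring the properties claimed for IGRA.

import numpy as np

def gray_relational_grade(reference, candidates, rho=0.5):
    """Gray relational grade of each candidate sequence against a reference.
    Higher grade = more similar; an outlier scores low against every class."""
    delta = np.abs(candidates - reference)          # per-feature deviations
    d_min, d_max = delta.min(), delta.max()
    coef = (d_min + rho * d_max) / (delta + rho * d_max)
    return coef.mean(axis=1)                        # average over features

x = np.array([0.4, 0.7, 0.1])                       # normalized test instance
class_refs = np.array([[0.5, 0.6, 0.2],             # one row per class
                       [0.9, 0.1, 0.8]])
print(gray_relational_grade(x, class_refs))         # class 0 scores higher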
22

Yao, Chengfei, Jie Zou, Yanan Luo, Tao Li, and Gang Bai. "A Class-Incremental Learning Method Based on One Class Support Vector Machine." Journal of Physics: Conference Series 1267 (July 2019): 012007. http://dx.doi.org/10.1088/1742-6596/1267/1/012007.

23

Shim, Dongsub, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, and Jongseong Jang. "Online Class-Incremental Continual Learning with Adversarial Shapley Value." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9630–38. http://dx.doi.org/10.1609/aaai.v35i11.17159.

Abstract:
As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need for methods that continually learn from data while minimizing memory footprint and power consumption. While memory replay techniques have shown exceptional promise for this task of continual learning, the best method for selecting which buffered images to replay is still an open question. In this paper, we focus on the online class-incremental setting, where a model needs to learn new classes continually from an online data stream. To this end, we contribute a novel Adversarial Shapley value scoring method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of the classes currently being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that our proposed ASER method provides competitive or improved performance compared to state-of-the-art replay-based continual learning methods on a variety of datasets.
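
ASER combines Shapley values computed against different evaluation sets; as background, this is a sketch of the closed-form KNN Shapley value of Jia et al. for a single evaluation point, which such scoring builds on. The adversarial combination itself is out of scope here.

import numpy as np

def knn_shapley(X_train, y_train, x_eval, y_eval, K=3):
    """Exact Shapley value of each training point for a KNN classifier on one
    evaluation point, via the backward recursion over distance-sorted points."""
    N = len(X_train)
    order = np.argsort(np.linalg.norm(X_train - x_eval, axis=1))  # nearest first
    match = (y_train[order] == y_eval).astype(float)
    s = np.zeros(N)
    s[N - 1] = match[N - 1] / N
    for i in range(N - 2, -1, -1):
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)
    return s[np.argsort(order)]            # back to the original training order

vals = knn_shapley(np.random.randn(10, 2), np.random.randint(0, 2, 10),
                   np.zeros(2), 1, K=3)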
24

Shevchyk, Anja, Rui Hu, Kevin Thandiackal, Michael Heizmann, and Thomas Brunschwiler. "Privacy preserving synthetic respiratory sounds for class incremental learning." Smart Health 23 (March 2022): 100232. http://dx.doi.org/10.1016/j.smhl.2021.100232.

25

Sancho, José-Luis, William E. Pierson, Batu Ulug, Aníbal R. Figueiras-Vidal, and Stanley C. Ahalt. "Class separability estimation and incremental learning using boundary methods." Neurocomputing 35, no. 1-4 (November 2000): 3–26. http://dx.doi.org/10.1016/s0925-2312(00)00293-9.

26

Zhao, Zhongtang, Zhenyu Chen, Yiqiang Chen, Shuangquan Wang, and Hongan Wang. "A Class Incremental Extreme Learning Machine for Activity Recognition." Cognitive Computation 6, no. 3 (April 3, 2014): 423–31. http://dx.doi.org/10.1007/s12559-014-9259-y.

27

Zhou, Yanzhao, Binghao Liu, Yiran Liu, and Jianbin Jiao. "Filter Bank Networks for Few-Shot Class-Incremental Learning." Computer Modeling in Engineering & Sciences 137, no. 1 (2023): 647–68. http://dx.doi.org/10.32604/cmes.2023.026745.

28

Zajdel, Roman. "Epoch-incremental reinforcement learning algorithms." International Journal of Applied Mathematics and Computer Science 23, no. 3 (September 1, 2013): 623–35. http://dx.doi.org/10.2478/amcs-2013-0047.

Abstract:
In this article, a new class of epoch-incremental reinforcement learning algorithms is proposed. In the incremental mode, the fundamental TD(0) or TD(λ) algorithm is performed and an environment model is created. In the epoch mode, the distances of past-active states to the terminal state are computed on the basis of the environment model. These distances and the terminal-state reinforcement signal are used to improve the agent's policy.
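
A minimal sketch of the incremental mode, assuming a tabular problem with caller-supplied env_step and policy functions (both hypothetical): TD(0) updates run while observed transitions are recorded as the environment model that the epoch mode later traverses.

from collections import defaultdict

def td0_with_model(env_step, start_state, policy, episodes=200,
                   alpha=0.1, gamma=0.95, max_steps=1000):
    """Tabular TD(0) plus a transition model recorded for the epoch mode."""
    V = defaultdict(float)
    model = {}                                    # (s, a) -> (s', r, done)
    for _ in range(episodes):
        s = start_state
        for _ in range(max_steps):                # step cap per episode
            a = policy(s)
            s2, r, done = env_step(s, a)
            model[(s, a)] = (s2, r, done)         # environment model
            target = r + (0.0 if done else gamma * V[s2])
            V[s] += alpha * (target - V[s])       # TD(0) update
            s = s2
            if done:
                break
    return V, model

# epoch mode (idea only): a breadth-first pass over `model` from the terminal
# state yields each past-active state's distance to the terminal state, which
# is then combined with the terminal reinforcement to refine the policy.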
29

Chountas, Panagiotis, Mustafa Hajmohammed, and Ismael Rhemat. "On OWA, Machine Learning and Big Data: The case for IFS over universes." Notes on Intuitionistic Fuzzy Sets 30, no. 2 (July 1, 2024): 113–20. http://dx.doi.org/10.7546/nifs.2024.30.2.113-120.

Abstract:
This paper provides a holistic view of open-world machine learning by investigating class discovery and class-incremental learning under OWA. The challenges, principles, and limitations of current methodologies are discussed in detail. Finally, we position IFS over multiple universes as a formalism to capture the evolution in Big Data as part of incremental learning.
30

Chen, Xinzhe, Hong Liang, and Weiyu Xu. "Research on a class-incremental learning method based on sonar images." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 41, no. 2 (April 2023): 303–9. http://dx.doi.org/10.1051/jnwpu/20234120303.

Abstract:
Due to the low resolution of sonar images and the small number of samples, existing class-incremental learning networks suffer from serious catastrophic forgetting of historical task targets, resulting in a low average recognition rate across all task targets. Based on the generative replay framework, an improved class-incremental learning network is proposed in this paper. A new deep convolutional generative adversarial network is designed to replace the variational autoencoder as the reconstruction model of the generative replay incremental network, improving image reconstruction; a new convolutional neural network is constructed to replace the multi-layer perceptron as the recognition network, improving classification and recognition performance. The results show that the improved generative replay incremental network alleviates the catastrophic forgetting of historical task targets, and the average recognition rate across all task targets is significantly improved.
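
The generative replay pattern can be sketched independently of the specific DCGAN and CNN architectures; below, generator and old_labeler stand in for the previous task's trained models, and all sizes are placeholders.

import torch
from torch.utils.data import TensorDataset, DataLoader

def build_replay_loader(generator, old_labeler, new_x, new_y,
                        n_replay=256, batch_size=32, z_dim=100):
    """Generative replay sketch: synthesize old-task samples with the previous
    generator, label them with the previous recognizer, and mix them with the
    real data of the new task."""
    with torch.no_grad():
        z = torch.randn(n_replay, z_dim)
        replay_x = generator(z)                       # reconstructed old samples
        replay_y = old_labeler(replay_x).argmax(dim=1)
    x = torch.cat([new_x, replay_x])
    y = torch.cat([new_y, replay_y])
    return DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)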
31

Sagar, K. Manoj. "MultiClass Text Classification Using Support Vector Machine." International Journal of Scientific Research in Engineering and Management 07, no. 12 (December 1, 2023): 1–10. http://dx.doi.org/10.55041/ijsrem27465.

Abstract:
Support vector machines (SVMs) were initially designed for binary classification. To solve multi-class SVM problems more efficiently, a framework called class-incremental learning (CIL) has been proposed: CIL reuses the old models of the classifier and learns only one binary sub-classifier, with an additional phase of feature selection, when a new class arrives. In text classification, where computers sort text documents into categories, keeping up with new information can be tricky, and traditional methods need extensive retraining to adapt. Incremental learning for multi-class SVMs offers a solution: it lets us update the model with new data while remembering what it learned before. In this project, we explore how incremental learning makes multi-class SVMs better at handling changing data and even learning about new categories as they appear. The challenge lies in integrating new classes while maintaining classification accuracy on both existing and new classes. The main goal of this project is to create a method that effectively adapts the multi-class SVM to evolving data distributions while minimizing the impact on previously learned classes and optimizing resource utilization. Keywords: feature extraction, support vector machine, multi-class incremental learning, Gaussian kernel.
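
A toy sketch of the one-sub-classifier-per-new-class idea with scikit-learn; in the CIL setting described, X_rest would be stored representatives of the old classes rather than their full data, and the feature-selection phase is omitted.

import numpy as np
from sklearn.svm import LinearSVC

class OneVsRestCIL:
    """Class-incremental one-vs-rest SVM: a new class trains only one extra
    binary sub-classifier; the old sub-classifiers are reused untouched."""
    def __init__(self):
        self.heads = {}                          # class label -> fitted LinearSVC

    def add_class(self, X_new, X_rest, label):
        X = np.vstack([X_new, X_rest])
        y = np.r_[np.ones(len(X_new)), np.zeros(len(X_rest))]
        self.heads[label] = LinearSVC(dual=False).fit(X, y)

    def predict(self, X):
        labels = list(self.heads)
        scores = np.stack([self.heads[c].decision_function(X) for c in labels])
        return np.array(labels)[np.argmax(scores, axis=0)]

clf = OneVsRestCIL()
clf.add_class(np.random.randn(20, 5) + 2, np.random.randn(20, 5), label=0)
clf.add_class(np.random.randn(20, 5) - 2, np.random.randn(20, 5), label=1)
print(clf.predict(np.random.randn(3, 5)))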
32

Wu, Yanan, Tengfei Liang, Songhe Feng, Yi Jin, Gengyu Lyu, Haojun Fei, and Yang Wang. "MetaZSCIL: A Meta-Learning Approach for Generalized Zero-Shot Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10408–16. http://dx.doi.org/10.1609/aaai.v37i9.26238.

Abstract:
Generalized zero-shot learning (GZSL) aims to recognize samples whose categories may not have been seen at training. Standard GZSL cannot handle the dynamic addition of new seen and unseen classes. To address this limitation, some recent attempts have been made to develop continual GZSL methods. However, these methods require end-users to continuously collect and annotate numerous seen class samples, which is unrealistic and hampers applicability in the real world. Accordingly, in this paper, we propose a more practical and challenging setting named Generalized Zero-Shot Class Incremental Learning (CI-GZSL). Our setting aims to incrementally learn unseen classes without any training samples, while recognizing all classes previously encountered. We further propose a bi-level meta-learning based method called MetaZSCIL to directly optimize the network to learn how to learn incrementally. Specifically, we sample sequential tasks from seen classes during offline training to simulate the incremental learning process. For each task, the model is learned using a meta-objective such that it is capable of fast adaptation without forgetting. Note that our optimization can be flexibly equipped with most existing generative methods to tackle CI-GZSL. This work introduces a feature generative framework that leverages visual feature distribution alignment to produce replayed samples of previously seen classes to reduce catastrophic forgetting. Extensive experiments conducted on five widely used benchmarks demonstrate the superiority of our proposed method.
33

Wang, Ye, Yaxiong Wang, Guoshuai Zhao, and Xueming Qian. "Learning to complement: Relation complementation network for few-shot class-incremental learning." Knowledge-Based Systems 282 (December 2023): 111130. http://dx.doi.org/10.1016/j.knosys.2023.111130.

34

Ran, Hang, Weijun Li, Lusi Li, Songsong Tian, Xin Ning, and Prayag Tiwari. "Learning optimal inter-class margin adaptively for few-shot class-incremental learning via neural collapse-based meta-learning." Information Processing & Management 61, no. 3 (May 2024): 103664. http://dx.doi.org/10.1016/j.ipm.2024.103664.

35

Gu, Ziqi, Zihan Lu, Cao Han, and Chunyan Xu. "Few Shot Class Incremental Learning via Grassmann Manifold and Information Entropy." Electronics 12, no. 21 (November 2, 2023): 4511. http://dx.doi.org/10.3390/electronics12214511.

Abstract:
Few-shot class-incremental learning is a challenging problem in machine learning. It requires models to gradually learn new knowledge from a few samples while retaining the knowledge of old classes. However, the limited data available for new classes not only leads to significant overfitting but also exacerbates catastrophic forgetting in the incremental learning process. To address these two issues, we propose a novel framework named Grassmann Manifold and Information Entropy for Few-Shot Class-Incremental Learning (GMIE-FSCIL). Unlike existing methods that model parameters in Euclidean space, our method optimizes the incremental learning network on the Grassmann manifold. More specifically, we incorporate the acquired knowledge of each class on the Grassmann manifold, preserving its inherent geometric properties through a Grassmann Metric Learning (GML) module. Acknowledging the interconnected relationships of knowledge, we use information entropy to create a neighborhood graph on the Grassmann manifold that maintains inter-class structural information through a Graph Information Preserving (GIP) module, thus mitigating catastrophic forgetting of learned knowledge. In our evaluation on the CIFAR100, miniImageNet, and CUB200 datasets, we achieved significant improvements in average accuracy compared to mainstream methods, with increases of at least 2.72%, 1.21%, and 1.27%, respectively.
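
The geometry behind such methods rests on principal angles between subspaces; a small numpy sketch of the Grassmann geodesic distance follows. How each class's feature subspace is built is an assumption left out here.

import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between the subspaces span(A) and span(B) on the
    Grassmann manifold, via the principal angles obtained from an SVD."""
    Qa, _ = np.linalg.qr(A)                       # orthonormal bases
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(sigma, -1.0, 1.0))  # principal angles
    return float(np.linalg.norm(theta))

# two 3-dimensional subspaces of R^10, e.g. per-class feature subspaces
d = grassmann_distance(np.random.randn(10, 3), np.random.randn(10, 3))
print(d)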
36

Wei, Kun, Cheng Deng, Xu Yang, and Maosen Li. "Incremental Embedding Learning via Zero-Shot Translation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10254–62. http://dx.doi.org/10.1609/aaai.v35i11.17229.

Abstract:
Modern deep learning methods have achieved great success in machine learning and computer vision by learning from sets of pre-defined datasets. However, these methods perform unsatisfactorily when applied to real-world situations. The reason is that learning new tasks leads the trained model to quickly forget the knowledge of old tasks, which is referred to as catastrophic forgetting. Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks but ignore the problem in embedding networks, which are the basic networks for image retrieval, face recognition, zero-shot learning, etc. Different from traditional incremental classification networks, the semantic gap between the embedding spaces of two adjacent tasks is the main challenge for embedding networks in the incremental learning setting. Thus, we propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI), which leverages zero-shot translation to estimate and compensate for the semantic gap without any exemplars. We then learn a unified representation for two adjacent tasks in the sequential learning process, which precisely captures the relationships between previous classes and current classes. In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks. We conduct extensive experiments on CUB-200-2011 and CIFAR100, and the results prove the effectiveness of our method. The code has been released at https://github.com/Drkun/ZSTCI.
37

Chen, Guangyao, Peixi Peng, Yangru Huang, Mengyue Geng, and Yonghong Tian. "Adaptive Discovering and Merging for Incremental Novel Class Discovery." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11276–84. http://dx.doi.org/10.1609/aaai.v38i10.29006.

Abstract:
One important desideratum of lifelong learning is to discover novel classes from unlabelled data in a continuous manner. The central challenge is twofold: discovering and learning novel classes while mitigating catastrophic forgetting of established knowledge. To this end, we introduce a new paradigm called Adaptive Discovering and Merging (ADM) to discover novel categories adaptively in the incremental stage and integrate novel knowledge into the model without affecting the original knowledge. To discover novel classes adaptively, we decouple representation learning from novel class discovery, and use Triple Comparison (TC) and Probability Regularization (PR) to constrain the probability discrepancy and diversity for adaptive category assignment. To merge the learned novel knowledge adaptively, we propose a hybrid structure with base and novel branches, named Adaptive Model Merging (AMM), which reduces the interference of the novel branch on the old classes to preserve the previous knowledge, and merges the novel branch into the base model without performance loss or parameter growth. Extensive experiments on several datasets show that ADM significantly outperforms existing class-incremental Novel Class Discovery (class-iNCD) approaches. Moreover, our AMM also benefits the class-incremental learning (class-IL) task by alleviating the catastrophic forgetting problem. The source code is included in the supplementary materials.
38

Wang, Dong, and Shi Huan Xiong. "An Algorithm of Incremental Bayesian Classifier Based on K-Nearest Neighbor." Advanced Materials Research 403-408 (November 2011): 1455–59. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1455.

Abstract:
The learning sequence is an important factor affecting the effectiveness of an incremental Bayesian classifier; a reasonable learning sequence helps strengthen the knowledge reserve of the classifier. This article proposes an incremental learning algorithm based on the k-nearest neighbor method. By calculating the k most similar distances between the test set and the training set, it divides and constructs the sequence of class counts and the sequence of sums of class weights. According to the fluctuation degree of these sequences, the instances containing stronger class information are chosen to enter the learning process first. The experimental results indicate that the algorithm is effective and feasible.
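
One way to read the ordering rule is as neighbor-label purity: instances whose k nearest training neighbors agree most strongly on a single class carry stronger class information and should enter learning first. The sketch below is that simplified reading, not the paper's exact sequence construction.

import numpy as np

def learning_order(train_X, train_y, pool_X, k=5):
    """Order candidate instances for incremental learning: higher neighbor
    label purity (agreement of the k nearest training neighbors) goes first."""
    scores = []
    for x in pool_X:
        idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
        _, counts = np.unique(train_y[idx], return_counts=True)
        scores.append(counts.max() / k)          # neighbor label purity
    return np.argsort(scores)[::-1]              # most confident instances first

order = learning_order(np.random.randn(50, 3), np.random.randint(0, 2, 50),
                       np.random.randn(10, 3))
print(order)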
39

Huang, Libo, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, and Yongjun Xu. "eTag: Class-Incremental Learning via Embedding Distillation and Task-Oriented Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12591–99. http://dx.doi.org/10.1609/aaai.v38i11.29153.

Abstract:
Class-incremental learning (CIL) aims to solve the notorious forgetting problem: once the network is updated on a new task, its performance on previously learned tasks degenerates catastrophically. Most successful CIL methods store exemplars (samples of learned tasks) to train a feature extractor incrementally, or store prototypes (features of learned tasks) to estimate the incremental feature distribution. However, stored exemplars raise data privacy concerns, while fixed prototypes might not remain consistent with the incremental feature distribution, hindering the exploration of real-world CIL applications. In this paper, we propose a data-free CIL method with embedding distillation and Task-oriented generation (eTag), which requires neither exemplars nor prototypes. Embedding distillation prevents the feature extractor from forgetting by distilling the outputs of the network's intermediate blocks. Task-oriented generation enables a lightweight generator to produce dynamic features that fit the needs of the top incremental classifier. Experimental results confirm that the proposed eTag considerably outperforms state-of-the-art methods on several benchmark datasets.
40

Cui, Bo, Guyue Hu, and Shan Yu. "DeepCollaboration: Collaborative Generative and Discriminative Models for Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1175–83. http://dx.doi.org/10.1609/aaai.v35i2.16204.

Abstract:
An important challenge for neural networks is to learn incrementally, i.e., to learn new classes without catastrophic forgetting. To overcome this problem, the generative replay technique has been suggested, which can generate samples belonging to learned classes while learning new ones. However, such generative models usually suffer from increasing distribution mismatch between the generated and original samples along the learning process. In this work, we propose DeepCollaboration (D-Collab), a collaborative framework of deep generative and discriminative models to solve this problem effectively. We develop a discriminative learning model to incrementally update the latent feature space for continual classification. At the same time, a generative model is introduced to achieve conditional generation using the latent feature distribution produced by the discriminative model. Importantly, the generative and discriminative models are connected through bidirectional training to enforce cycle-consistency of the mappings between feature and image domains. Furthermore, a domain alignment module is used to eliminate the divergence between the feature distributions of generated and real images. This module, together with the discriminative model, can perform effective sample mining to facilitate incremental learning. Extensive experiments on several visual recognition datasets show that our system can achieve state-of-the-art performance.
41

Li, Depeng, Tianqi Wang, Junwei Chen, Kenji Kawaguchi, Cheng Lian, and Zhigang Zeng. "Multi-view class incremental learning." Information Fusion, September 2023, 102021. http://dx.doi.org/10.1016/j.inffus.2023.102021.

42

Qiu, Zihuan, Linfeng Xu, Zhichuan Wang, Qingbo Wu, Fanman Meng, and Hongliang Li. "ISM-Net: Mining incremental semantics for class incremental learning." Neurocomputing, December 2022. http://dx.doi.org/10.1016/j.neucom.2022.12.029.

43

Qiu, Zihuan, Linfeng Xu, Zhichuan Wang, Qingbo Wu, Fanman Meng, and Hongliang Li. "Ism-Net: Mining Incremental Semantics for Class Incremental Learning." SSRN Electronic Journal, 2022. http://dx.doi.org/10.2139/ssrn.4179872.

44

Liu, Hao, Zhaoyu Yan, Bing Liu, Jiaqi Zhao, Yong Zhou, and Abdulmotaleb El Saddik. "Distilled Meta-Learning for Multi-Class Incremental Learning." ACM Transactions on Multimedia Computing, Communications, and Applications, January 17, 2023. http://dx.doi.org/10.1145/3576045.

Abstract:
Meta-learning approaches have recently achieved promising performance in multi-class incremental learning. However, meta-learners still suffer from catastrophic forgetting, i.e., they tend to forget the knowledge learned from old tasks when they focus on rapidly adapting to the new classes of the current task. To solve this problem, we propose a novel distilled meta-learning (DML) framework for multi-class incremental learning, which seamlessly integrates meta-learning with knowledge distillation in each incremental stage. Specifically, during inner-loop training, knowledge distillation is incorporated into the DML to overcome catastrophic forgetting. During outer-loop training, a meta-update rule is designed for the meta-learner to learn across tasks and quickly adapt to new tasks. By virtue of this bilevel optimization, our model is encouraged to reach a balance between the retention of old knowledge and the learning of new knowledge. Experimental results on four benchmark datasets demonstrate the effectiveness of our proposal and show that our method significantly outperforms other state-of-the-art incremental learning methods.
45

Mazumder, Pratik, Mohammed Asad Karim, Indu Joshi, and Pravendra Singh. "Leveraging joint incremental learning objective with data ensemble for class incremental learning." Neural Networks, January 2023. http://dx.doi.org/10.1016/j.neunet.2023.01.017.

46

Yang, Yang, Zhen-Qiang Sun, Hengshu Zhu, Yanjie Fu, Yuanchun Zhou, Hui Xiong, and Jian Yang. "Learning Adaptive Embedding Considering Incremental Class." IEEE Transactions on Knowledge and Data Engineering, 2021, 1. http://dx.doi.org/10.1109/tkde.2021.3109131.

47

Wang, Shaokun, Weiwei Shi, Songlin Dong, Xinyuan Gao, Xiang Song, and Yihong Gong. "Semantic Knowledge Guided Class-Incremental Learning." IEEE Transactions on Circuits and Systems for Video Technology, 2023, 1. http://dx.doi.org/10.1109/tcsvt.2023.3262739.

48

Sun, Zhenfeng, Rui Feng, and Yanwei Fu. "Class-Incremental Generalized Zero-Shot Learning." Multimedia Tools and Applications, August 3, 2023. http://dx.doi.org/10.1007/s11042-023-16316-7.

49

Xiao, Jue, XueMing Tang, and SongFeng Lu. "Privacy-Preserving Federated Class-Incremental Learning." IEEE Transactions on Machine Learning in Communications and Networking, 2023, 1. http://dx.doi.org/10.1109/tmlcn.2023.3344074.

50

Li, Jiashuo, Songlin Dong, Yihong Gong, Yuhang He, and Xing Wei. "Analogical Learning-Based Few-Shot Class-Incremental Learning." IEEE Transactions on Circuits and Systems for Video Technology, 2024, 1. http://dx.doi.org/10.1109/tcsvt.2024.3350913.
