Scientific literature on the topic "Class-incremental learning"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Class-incremental learning".

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Class-incremental learning"

1. Kim, Taehoon, Jaeyoo Park, and Bohyung Han. "Cross-Class Feature Augmentation for Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13168–76. http://dx.doi.org/10.1609/aaai.v38i12.29216.

Abstract:
We propose a novel class incremental learning approach, which incorporates a feature augmentation technique motivated by adversarial attacks. We employ a classifier learned in the past to complement training examples of previous tasks. The proposed approach takes a unique perspective on utilizing previous knowledge in class incremental learning, since it augments features of arbitrary target classes using examples from other classes via adversarial attacks on a previously learned classifier. By allowing these Cross-Class Feature Augmentations (CCFA), each class in the old tasks conveniently populates samples in the feature space, which alleviates the collapse of the decision boundaries caused by sample deficiency for the previous tasks, especially when the number of stored exemplars is small. This idea can be easily incorporated into existing class incremental learning algorithms without any architecture modification. Extensive experiments on the standard benchmarks show that our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios, especially under an environment with an extremely limited memory budget.
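Below is a minimal, illustrative sketch of the cross-class augmentation idea described in this abstract, assuming a PyTorch setup in which `old_classifier` is the frozen classifier from previous tasks and `features` are penultimate-layer feature vectors; the function name, step count and step size are assumptions, not the authors' exact CCFA procedure.

```python
import torch
import torch.nn.functional as F

def cross_class_feature_augmentation(features, target_class, old_classifier,
                                     steps=10, step_size=0.1):
    """Turn features of arbitrary classes into synthetic features for
    `target_class` by attacking a classifier learned on previous tasks."""
    aug = features.clone().detach().requires_grad_(True)
    target = torch.full((features.size(0),), target_class,
                        dtype=torch.long, device=features.device)
    for _ in range(steps):
        logits = old_classifier(aug)
        # Gradient steps on the cross-entropy toward the target class push
        # the features into that class's decision region.
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, aug)
        aug = (aug - step_size * grad.sign()).detach().requires_grad_(True)
    return aug.detach()
```

Under this reading, the resulting feature vectors would be mixed into the replay batch of the corresponding old class when the new task is trained.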
2. Park, Ju-Youn, and Jong-Hwan Kim. "Incremental Class Learning for Hierarchical Classification." IEEE Transactions on Cybernetics 50, no. 1 (January 2020): 178–89. http://dx.doi.org/10.1109/tcyb.2018.2866869.
3. Qin, Yuping, Hamid Reza Karimi, Dan Li, Shuxian Lun, and Aihua Zhang. "A Mahalanobis Hyperellipsoidal Learning Machine Class Incremental Learning Algorithm." Abstract and Applied Analysis 2014 (2014): 1–5. http://dx.doi.org/10.1155/2014/894246.

Abstract:
A Mahalanobis hyperellipsoidal learning machine class incremental learning algorithm is proposed. For each class of samples, a hyperellipsoid that encloses as many of them as possible while pushing outlier samples away is trained in the feature space. In the process of incremental learning, only one subclassifier is trained with the new class samples; the old models of the classifier are not affected and can be reused. In the process of classification, taking into account the distribution of the samples in the feature space, the Mahalanobis distances from the mapped sample to the center of each hyperellipsoid are used to decide the class of the sample. The experimental results show that the proposed method has higher classification precision and classification speed.
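A toy sketch of the classification rule described in the abstract, assuming each class is summarized only by the mean and a regularized covariance of its samples in feature space; the hyperellipsoid training itself is omitted and the helper names are hypothetical.

```python
import numpy as np

def mahalanobis_distance(x, mean, cov_inv):
    """Mahalanobis distance from sample x to one class center."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def fit_class_model(samples):
    """Summarize one class by its center and inverse covariance.
    A new class can be added later by calling this on its own samples only,
    leaving previously fitted class models untouched (the incremental step)."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mean, np.linalg.inv(cov)

def predict(x, class_models):
    """Assign x to the class whose center is closest in Mahalanobis distance.
    class_models maps a label to the (mean, cov_inv) pair from fit_class_model."""
    return min(class_models,
               key=lambda c: mahalanobis_distance(x, *class_models[c]))
```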
4. Pang, Shaoning, Lei Zhu, Gang Chen, Abdolhossein Sarrafzadeh, Tao Ban, and Daisuke Inoue. "Dynamic class imbalance learning for incremental LPSVM." Neural Networks 44 (August 2013): 87–100. http://dx.doi.org/10.1016/j.neunet.2013.02.007.
5. Liu, Yaoyao, Yingying Li, Bernt Schiele, and Qianru Sun. "Online Hyperparameter Optimization for Class-Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8906–13. http://dx.doi.org/10.1609/aaai.v37i7.26070.

Abstract:
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase. An inherent challenge of CIL is the stability-plasticity tradeoff, i.e., CIL models should stay stable to retain old knowledge and stay plastic to absorb new knowledge. However, none of the existing CIL models can achieve the optimal tradeoff in different data-receiving settings, where typically the training-from-half (TFH) setting needs more stability, but the training-from-scratch (TFS) setting needs more plasticity. To this end, we design an online learning method that can adaptively optimize the tradeoff without knowing the setting a priori. Specifically, we first introduce the key hyperparameters that influence the tradeoff, e.g., knowledge distillation (KD) loss weights, learning rates, and classifier types. Then, we formulate the hyperparameter optimization process as an online Markov Decision Process (MDP) problem and propose a specific algorithm to solve it. We apply local estimated rewards and the classic bandit algorithm Exp3 to address the issues that arise when applying online MDP methods to the CIL protocol. Our method consistently improves top-performing CIL methods in both TFH and TFS settings, e.g., boosting the average accuracy of TFH and TFS by 2.2 percentage points on ImageNet-Full, compared to the state of the art. Code is provided at https://class-il.mpi-inf.mpg.de/online/
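The abstract mentions the classic Exp3 bandit for choosing hyperparameter configurations online. The sketch below shows plain Exp3 over a small set of candidate configurations, with the phase validation accuracy used as a placeholder reward; it is only a sketch of the bandit component, not the authors' full online-MDP formulation, and the example configurations are invented.

```python
import math
import random

class Exp3:
    """Exp3 adversarial bandit over a small set of hyperparameter configurations."""

    def __init__(self, n_arms, gamma=0.1):
        self.n_arms = n_arms
        self.gamma = gamma
        self.weights = [1.0] * n_arms

    def probabilities(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n_arms
                for w in self.weights]

    def select(self):
        return random.choices(range(self.n_arms), weights=self.probabilities())[0]

    def update(self, arm, reward):
        """reward should be scaled to [0, 1], e.g. phase validation accuracy."""
        p = self.probabilities()[arm]
        estimated = reward / p                      # importance-weighted reward
        self.weights[arm] *= math.exp(self.gamma * estimated / self.n_arms)

# Hypothetical use: each arm is a (KD-loss weight, learning rate) pair.
configs = [(0.5, 0.01), (1.0, 0.01), (2.0, 0.005)]
bandit = Exp3(n_arms=len(configs))
arm = bandit.select()               # pick a configuration for this phase
# ... train the incremental phase with configs[arm], measure accuracy ...
bandit.update(arm, reward=0.73)     # feed back the observed (scaled) accuracy
```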
6. Zhang, Lijuan, Xiaokang Yang, Kai Zhang, Yong Li, Fu Li, Jun Li, and Dongming Li. "Class-Incremental Learning Based on Anomaly Detection." IEEE Access 11 (2023): 69423–38. http://dx.doi.org/10.1109/access.2023.3293524.
7. Liang, Sen, Kai Zhu, Wei Zhai, Zhiheng Liu, and Yang Cao. "Hypercorrelation Evolution for Video Class-Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3315–23. http://dx.doi.org/10.1609/aaai.v38i4.28117.

Abstract:
Video class-incremental learning aims to recognize new actions while restricting the catastrophic forgetting of old ones, whose representative samples can only be saved in limited memory. Semantically variable subactions are susceptible to class confusion due to data imbalance. While existing methods address the problem by estimating and distilling the spatio-temporal knowledge, we further find that the refinement of hierarchical correlations is crucial for the alignment of spatio-temporal features. To enhance adaptability to evolved actions, we propose a hierarchical aggregation strategy, in which hierarchical matching matrices are combined and jointly optimized to selectively store and retrieve relevant features from previous tasks. Meanwhile, a correlation refinement mechanism is presented to reinforce the bias toward informative exemplars according to the online hypercorrelation distribution. Experimental results demonstrate the effectiveness of the proposed method on three standard video class-incremental learning benchmarks, outperforming state-of-the-art methods. Code is available at: https://github.com/Lsen991031/HCE
8. Xu, Shixiong, Gaofeng Meng, Xing Nie, Bolin Ni, Bin Fan, and Shiming Xiang. "Defying Imbalanced Forgetting in Class Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 16211–19. http://dx.doi.org/10.1609/aaai.v38i14.29555.

Abstract:
We observe a high level of imbalance in the accuracy of different learned classes in the same old task for the first time. This intriguing phenomenon, discovered in replay-based Class Incremental Learning (CIL), highlights the imbalanced forgetting of learned classes, as their accuracy is similar before the occurrence of catastrophic forgetting. This discovery remains previously unidentified due to the reliance on average incremental accuracy as the measurement for CIL, which assumes that the accuracy of classes within the same task is similar. However, this assumption is invalid in the face of catastrophic forgetting. Further empirical studies indicate that this imbalanced forgetting is caused by conflicts in representation between semantically similar old and new classes. These conflicts are rooted in the data imbalance present in replay-based CIL methods. Building on these insights, we propose CLass-Aware Disentanglement (CLAD) as a means to predict the old classes that are more likely to be forgotten and enhance their accuracy. Importantly, CLAD can be seamlessly integrated into existing CIL methods. Extensive experiments demonstrate that CLAD consistently improves current replay-based methods, resulting in performance gains of up to 2.56%.
9. Guo, Jiaqi, Guanqiu Qi, Shuiqing Xie, and Xiangyuan Li. "Two-Branch Attention Learning for Fine-Grained Class Incremental Learning." Electronics 10, no. 23 (December 1, 2021): 2987. http://dx.doi.org/10.3390/electronics10232987.

Abstract:
As a long-standing research area, class incremental learning (CIL) aims to effectively learn a unified classifier along with the growth of the number of classes. Due to the small inter-class variances and large intra-class variances, fine-grained visual categorization (FGVC) as a challenging visual task has not attracted enough attention in CIL. Therefore, the localization of critical regions specialized for fine-grained object recognition plays a crucial role in FGVC. Additionally, it is important to learn fine-grained features from critical regions in fine-grained CIL for the recognition of new object classes. This paper designs a network architecture named two-branch attention learning network (TBAL-Net) for fine-grained CIL. TBAL-Net can localize critical regions and learn fine-grained feature representation by a lightweight attention module. An effective training framework is proposed for fine-grained CIL by integrating TBAL-Net into an effective CIL process. This framework is tested on three popular fine-grained object datasets, including CUB-200-2011, FGVC-Aircraft, and Stanford-Car. The comparative experimental results demonstrate that the proposed framework can achieve the state-of-the-art performance on the three fine-grained object datasets.
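The abstract does not spell out the attention module, so the snippet below is only a generic lightweight channel-attention block in the squeeze-and-excitation style, of the kind commonly used to highlight critical regions in fine-grained architectures; it is illustrative only and the layer sizes are placeholders, not the TBAL-Net design.

```python
import torch
import torch.nn as nn

class LightweightChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global-pool the feature
    map, pass it through a small bottleneck MLP, and reweight the channels so
    that informative channels (and hence regions) are emphasized."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        squeezed = x.mean(dim=(2, 3))           # global average pooling -> (N, C)
        weights = self.fc(squeezed)             # per-channel gates in (0, 1)
        return x * weights.unsqueeze(-1).unsqueeze(-1)
```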
10. Qin, Zhili, Wei Han, Jiaming Liu, Rui Zhang, Qingli Yang, Zejun Sun, and Junming Shao. "Rethinking few-shot class-incremental learning: A lazy learning baseline." Expert Systems with Applications 250 (September 2024): 123848. http://dx.doi.org/10.1016/j.eswa.2024.123848.

Theses on the topic "Class-incremental learning"

1. Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks." Thesis, Université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.

Abstract:
We are interested in the problem of continual learning of artificial neural networks in the case where the data are available for only one class at a time. To address the problem of catastrophic forgetting that restrains learning performance in these conditions, we propose an approach based on the representation of the data of a class by a normal distribution. The transformations associated with these representations are performed using invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that will model its features. In this setting, predicting the class of a sample corresponds to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it is no longer necessary to update it later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments performed on various datasets and show that our approach performs favorably compared to the state of the art. Subsequently, we propose to optimize our approach by reducing its memory footprint by factorizing the network parameters. It is then possible to significantly reduce the storage cost of these networks with a limited performance loss. Finally, we also study strategies to produce efficient feature extractor models for continual learning, and we show their relevance compared to the networks traditionally used for continual learning.
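A heavily simplified sketch of the "one independent model per class" idea from this abstract: the thesis trains one invertible network (normalizing flow) per class, which is replaced here by a per-class Gaussian density purely for illustration. The class and method names are assumptions; the key property shown is that adding a class never requires touching previously trained models.

```python
import numpy as np
from scipy.stats import multivariate_normal

class PerClassDensityClassifier:
    """One independent density model per class: prediction picks the class
    whose model assigns the sample the highest log-likelihood."""

    def __init__(self):
        self.models = {}                      # label -> fitted density

    def add_class(self, label, features):
        """Learn a new class from its own samples only (class-incremental step);
        previously stored class models are left untouched."""
        mean = features.mean(axis=0)
        cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        self.models[label] = multivariate_normal(mean=mean, cov=cov)

    def predict(self, x):
        """Return the class whose density best explains x."""
        return max(self.models, key=lambda c: self.models[c].logpdf(x))
```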
2. Júnior, João Roberto Bertini. "Classificação de dados estacionários e não estacionários baseada em grafos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15032011-102039/.

Abstract:
Graph-based methods are a powerful form of data representation and abstraction which provides, among other advantages, the ability to represent topological relations, visualize structures, represent groups of data with distinct formats, and supply alternative measures to characterize data. Such approaches have been increasingly considered for solving machine learning problems, mainly in unsupervised learning, such as clustering, and more recently in semi-supervised learning. However, graph-based solutions for supervised learning tasks remain underexplored in the literature. This work presents a non-parametric graph-based algorithm suitable for classification problems with stationary distributions, as well as its extension to cope with non-stationary data distributions. The developed algorithm relies on the following concepts: 1) a graph structure called the optimal K-associated graph, which represents the training set as a sparse graph separated into components; and 2) a purity measure for each component, which uses the graph structure to determine the local mixture level of the data with respect to their classes. This work also considers classification problems whose data distribution changes along the data stream. This setting characterizes concept drift and degrades the performance of any static classifier. Hence, in order to maintain accuracy, the classifier must keep learning during the application phase, for example, through incremental learning. Experimental results suggest that both algorithms present advantages over the compared algorithms on data classification tasks.
3. Ngo, Ho Anh Khoi. "Méthodes de classifications dynamiques et incrémentales : application à la numérisation cognitive d'images de documents." Thesis, Tours, 2015. http://www.theses.fr/2015TOUR4006/document.

Abstract:
This research contributes to the field of dynamic learning and classification in stationary and non-stationary environments. The goal of this PhD is to define a new classification framework able to deal with very small learning datasets at the beginning of the process and able to adjust itself according to the variability of the data arriving in a stream. For that purpose, we propose a solution based on a combination of independent one-class SVM classifiers, each having its own incremental learning procedure. Consequently, each classifier is not sensitive to cross-influences that can emanate from the configuration of the other classifiers' models. The originality of our proposal comes from the use of the previous knowledge kept in the SVM models (represented by all the support vectors found so far) and its combination with the new data arriving incrementally from the stream. The proposed classification model (mOC-iSVM) is exploited through three variants that differ in how the existing models are used at each time step. Our contribution fills a gap in the state of the art, where no solution currently handles, at the same time, concept drift, the addition or deletion of concepts, and the fusion or division of concepts, while offering a convenient framework for interaction with the user. Within the DIGIDOC project, our approach was applied to several scenarios of classification of image streams that can occur in real digitization campaigns. These scenarios allowed us to validate an interactive use of our incremental classification solution to classify images arriving in a stream in order to improve the quality of the digitized images.
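A rough sketch of one plausible reading of this abstract: independent one-class SVMs, one per concept, each updated incrementally by refitting on its previously retained support vectors together with the new data. This is not the exact mOC-iSVM algorithm; the class name and the `nu`/`gamma` values are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

class MultiOneClassIncrementalSVM:
    """One independent one-class SVM per class/concept."""

    def __init__(self, nu=0.1, gamma="scale"):
        self.nu, self.gamma = nu, gamma
        self.models = {}                     # label -> fitted OneClassSVM

    def partial_fit(self, label, new_samples):
        """Incremental step: keep only the old support vectors as the memory
        of past data, and refit this class's model on memory + new samples."""
        if label in self.models:
            memory = self.models[label].support_vectors_
            data = np.vstack([memory, np.asarray(new_samples)])
        else:
            data = np.asarray(new_samples)
        model = OneClassSVM(kernel="rbf", nu=self.nu, gamma=self.gamma)
        self.models[label] = model.fit(data)

    def predict(self, x):
        """Assign x to the class whose one-class SVM scores it highest."""
        x = np.asarray(x).reshape(1, -1)
        return max(self.models,
                   key=lambda c: self.models[c].decision_function(x)[0])
```

Because each classifier is refit only from its own support vectors and new samples, the other classes' models are never touched, which mirrors the independence property emphasized in the abstract.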
4. Daou, Andrea. "Real-time Indoor Localization with Embedded Computer Vision and Deep Learning." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR002.

Abstract:
The need to determine the location of individuals or objects in indoor environments has become an essential requirement. The Global Navigation Satellite System, a predominant outdoor localization solution, encounters limitations when applied indoors due to signal reflections and attenuation caused by obstacles. To address this, various indoor localization solutions have been explored. Wireless-based indoor localization methods exploit wireless signals to determine a device's indoor location. However, signal interference, often caused by physical obstructions, reflections, and competing devices, can lead to inaccuracies in location estimation. Additionally, these methods require access points deployment, incurring associated costs and maintenance efforts. An alternative approach is dead reckoning, which estimates a user's movement using a device's inertial sensors. However, this method faces challenges related to sensor accuracy, user characteristics, and temporal drift. Other indoor localization techniques exploit magnetic fields generated by the Earth and metal structures. These techniques depend on the used devices and sensors as well as the user's surroundings.The goal of this thesis is to provide an indoor localization system designed for professionals, such as firefighters, police officers, and lone workers, who require precise and robust positioning solutions in challenging indoor environments. In this thesis, we propose a vision-based indoor localization system that leverages recent advances in computer vision to determine the location of a person within indoor spaces. We develop a room-level indoor localization system based on Deep Learning (DL) and built-in smartphone sensors combining visual information with smartphone magnetic heading. To achieve localization, the user captures an image of the indoor surroundings using a smartphone, equipped with a camera, an accelerometer, and a magnetometer. The captured image is then processed using our proposed multiple direction-driven Convolutional Neural Networks to accurately predict the specific indoor room. The proposed system requires minimal infrastructure and provides accurate localization. In addition, we highlight the importance of ongoing maintenance of the vision-based indoor localization system. This system necessitates regular maintenance to adapt to changing indoor environments, particularly when new rooms have to be integrated into the existing localization framework. Class-Incremental Learning (Class-IL) is a computer vision approach that allows deep neural networks to incorporate new classes over time without forgetting the knowledge previously learned. In the context of vision-based indoor localization, this concept must be applied to accommodate new rooms. The selection of representative samples is essential to control memory limits, avoid forgetting, and retain knowledge from previous classes. We develop a coherence-based sample selection method for Class-IL, bringing forward the advantages of the coherence measure to a DL framework. The relevance of the methodology and algorithmic contributions of this thesis is rigorously tested and validated through comprehensive experimentation and evaluations on real datasets
5. Bruni, Matteo. "Incremental Learning of Stationary Representations." Doctoral thesis, 2021. http://hdl.handle.net/2158/1237986.

Abstract:
Humans and animals continuously acquire new knowledge over the course of their lives as they make new experiences. They learn new concepts without forgetting what they have already learned, they typically need only a few training examples (e.g., a child can recognize a giraffe after seeing a single picture), and they are able to discern what is known from what is unknown (e.g., unknown faces). In contrast, current supervised learning systems work under the assumption that all data are known and available during learning, training is performed offline, and a test dataset is typically required. What is missing in current research is a way to bring these human learning capabilities into an artificial learning system in which learning is performed incrementally from a data stream of infinite length (i.e., lifelong learning). This is a challenging task that is not sufficiently studied in the literature. Accordingly, in this thesis we investigate different aspects of Deep Neural Network (DNN) models in order to obtain stationary representations. Like fixed representations, these representations can remain compatible between learning steps and are therefore well suited for incremental learning. Specifically, in the first part of the thesis, we propose a memory-based approach that collects and preserves all the past visual information observed so far, building a comprehensive and cumulative representation. We exploit a pre-trained fixed representation for the task of learning the appearance of face identities from unconstrained video streams, leveraging temporal coherence as a form of self-supervision. In this task, the representation allows us to learn from a few images and to detect unknown subjects, similar to how humans learn. As the proposed approach makes use of a pre-trained fixed representation, learning is somewhat limited. This is due to the fact that the features stored in the memory bank remain fixed (i.e., they do not undergo learning) and only the memory bank is learned. To address this issue, in the second part of the thesis, we propose a representation learning approach that can be exploited to learn both the features and the memory without their joint learning, which would be computationally prohibitive. The intuition is that every time the internal feature representation changes, the memory bank must be relearned from scratch. The proposed method mitigates the need for feature relearning by keeping the features compatible between learning steps thanks to feature stationarity. We show that the stationarity of the internal representation can be achieved with a fixed classifier by setting the classifier weights according to values taken from the coordinate vertices of the regular polytopes in high-dimensional space. In the last part of the thesis, we apply the stationary representation method to the task of class-incremental learning. We show that the method is as effective as the standard approaches while exhibiting novel stationarity properties of the internal feature representation that are otherwise non-existent. The approach exploits future unseen classes as negative examples and learns features that do not change their geometric configuration as novel classes are incorporated into the learning model. We show that a large number of classes can be learned with no loss of accuracy, allowing the method to meet the underlying assumptions of incremental lifelong learning.
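A small sketch of the fixed-classifier idea mentioned above: the last layer's weights are placed on the vertices of a regular simplex (one of the regular polytopes the thesis refers to) and never trained, so only the backbone learns and the class-prototype geometry stays stationary as classes are added. The padding to the feature dimension and the module names are assumptions.

```python
import torch
import torch.nn as nn

def regular_simplex_vertices(num_classes):
    """Vertices of a regular simplex with `num_classes` vertices, obtained by
    centering the standard basis vectors of R^{num_classes} and normalizing."""
    eye = torch.eye(num_classes)
    vertices = eye - eye.mean(dim=0, keepdim=True)
    return vertices / vertices.norm(dim=1, keepdim=True)

class FixedSimplexClassifier(nn.Module):
    """Linear classifier whose weights are fixed (non-trainable) simplex
    vertices. Only the backbone producing `features` is learned, so the
    class prototypes never move when new classes arrive."""

    def __init__(self, feature_dim, num_classes):
        super().__init__()
        weights = regular_simplex_vertices(num_classes)     # (C, C)
        # Pad to the feature dimension (assumption: feature_dim >= num_classes).
        w = torch.zeros(num_classes, feature_dim)
        w[:, :num_classes] = weights
        self.register_buffer("weight", w)                   # fixed, not a Parameter

    def forward(self, features):
        return features @ self.weight.t()                   # (N, C) logits
```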
6. Mandal, Devraj. "Cross-Modal Retrieval and Hashing." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4685.

Abstract:
The objective of cross-modal retrieval is to retrieve relevant items from one modality (say, images), given a query from another modality (say, textual documents). Cross-modal retrieval has various applications, such as matching image-sketch, audio-visual, or near infrared-RGB data. Different feature representations of the two modalities, the absence of paired correspondences, etc. make this a very challenging problem. In this thesis, we have extensively looked at the cross-modal retrieval problem from different aspects and proposed methodologies to address them. • In the first work, we propose a novel framework which can work with unpaired data from the two modalities. The method has two steps, consisting of a hash code learning stage followed by a hash function learning stage. The method can also generate unified hash representations in a post-processing stage for even better performance. Finally, we investigate, formulate and address the cross-modal hashing problem in the presence of missing similarity information between the data items. • In the second work, we investigate how to make cross-modal hashing algorithms scalable so that they can handle large amounts of training data, and propose two solutions. The first approach builds on a mini-batch realization of the previously formulated objective, and the second is based on matrix factorization. We also investigate whether it is possible to build a hashing-based approach without the need to learn a hash function, as is typically done in the literature. Finally, we propose a strategy so that an already trained cross-modal approach can be adapted and updated to take into account the real-life scenario of an increasing label space, without retraining the entire model from scratch. • In the third work, we explore semi-supervised approaches for cross-modal retrieval. We first propose a novel framework which can predict the labels of the unlabeled data using complementary information from the different modalities. The framework can be used as an add-on with any baseline cross-modal algorithm. The second approach estimates the labels of the unlabeled data using a nearest neighbor strategy, and then trains a network with skip connections to predict the true labels. • In the fourth work, we investigate the cross-modal problem in an incremental multiclass scenario, where new data may contain previously unseen categories. We propose a novel incremental cross-modal hashing algorithm which can adapt itself to handle incoming data of new categories. At every stage, a small amount of old-category data, termed exemplars, is used, so as not to forget the old data while trying to learn from the new incoming data. • Finally, we investigate the effect of label corruption on cross-modal algorithms. We first study recently proposed training paradigms that focus on small-loss samples to build noise-resistant image classification models, and improve upon them using techniques like self-supervision and relabeling of large-loss samples. Next, we extend this work to cross-modal retrieval under noisy data.

Book chapters on the topic "Class-incremental learning"

1. Tao, Xiaoyu, Xinyuan Chang, Xiaopeng Hong, Xing Wei, and Yihong Gong. "Topology-Preserving Class-Incremental Learning." In Computer Vision – ECCV 2020, 254–70. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58529-7_16.
2. Liu, Xialei, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li, and Ming-Ming Cheng. "Long-Tailed Class Incremental Learning." In Lecture Notes in Computer Science, 495–512. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19827-4_29.
3. de Carvalho, Marcus, Mahardhika Pratama, Jie Zhang, and Yajuan Sun. "Class-Incremental Learning via Knowledge Amalgamation." In Machine Learning and Knowledge Discovery in Databases, 36–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_3.
4. Yang, Dejie, Minghang Zheng, Weishuai Wang, Sizhe Li, and Yang Liu. "Recent Advances in Class-Incremental Learning." In Lecture Notes in Computer Science, 212–24. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46308-2_18.
5. Belouadah, Eden, Adrian Popescu, Umang Aggarwal, and Léo Saci. "Active Class Incremental Learning for Imbalanced Datasets." In Computer Vision – ECCV 2020 Workshops, 146–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65414-6_12.
6. Ayromlou, Sana, Purang Abolmaesumi, Teresa Tsang, and Xiaoxiao Li. "Class Impression for Data-Free Incremental Learning." In Lecture Notes in Computer Science, 320–29. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16440-8_31.
7. Zhang, Zhenyao, and Lijun Zhang. "NeCa: Network Calibration for Class Incremental Learning." In Lecture Notes in Computer Science, 385–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47634-1_29.
8. Er, Meng Joo, Vijaya Krishna Yalavarthi, Ning Wang, and Rajasekar Venkatesan. "A Novel Incremental Class Learning Technique for Multi-class Classification." In Advances in Neural Networks – ISNN 2016, 474–81. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40663-3_54.
9. Elskhawy, Abdelrahman, Aneta Lisowska, Matthias Keicher, Joseph Henry, Paul Thomson, and Nassir Navab. "Continual Class Incremental Learning for CT Thoracic Segmentation." In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning, 106–16. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60548-3_11.
10. Lei, Cheng-Hsun, Yi-Hsin Chen, Wen-Hsiao Peng, and Wei-Chen Chiu. "Class-Incremental Learning with Rectified Feature-Graph Preservation." In Computer Vision – ACCV 2020, 358–74. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69544-6_22.

Conference papers on the topic "Class-incremental learning"

1. Luo, Zilin, Yaoyao Liu, Bernt Schiele, and Qianru Sun. "Class-Incremental Exemplar Compression for Class-Incremental Learning." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01094.
2. Mi, Fei, Lingjing Kong, Tao Lin, Kaicheng Yu, and Boi Faltings. "Generalized Class Incremental Learning." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00128.
3. Tao, Qingyi, Chen Change Loy, Jianfei Cad, Zongyuan Get, and Simon See. "Retrospective Class Incremental Learning." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428257.
4. Dong, Jiahua, Lixu Wang, Zhen Fang, Gan Sun, Shichao Xu, Xiao Wang, and Qi Zhu. "Federated Class-Incremental Learning." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00992.
5. Tao, Xiaoyu, Xiaopeng Hong, Xinyuan Chang, Songlin Dong, Xing Wei, and Yihong Gong. "Few-Shot Class-Incremental Learning." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.01220.
6. Lechat, Alexis, Stephane Herbin, and Frederic Jurie. "Semi-Supervised Class Incremental Learning." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9413225.
7. Mittal, Sudhanshu, Silvio Galesso, and Thomas Brox. "Essentials for Class Incremental Learning." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00390.
8. Pian, Weiguo, Shentong Mo, Yunhui Guo, and Yapeng Tian. "Audio-Visual Class-Incremental Learning." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00717.
9. Wang, Wei, Zhiying Zhang, and Jielong Guo. "Brain-inspired Class Incremental Learning." In 2022 IEEE 5th International Conference on Information Systems and Computer Aided Education (ICISCAE). IEEE, 2022. http://dx.doi.org/10.1109/iciscae55891.2022.9927584.
10. Han, Ruizhi, C. L. Philip Chen, and Shuang Feng. "Broad Learning System for Class Incremental Learning." In 2018 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC). IEEE, 2018. http://dx.doi.org/10.1109/spac46244.2018.8965551.