To see the other types of publications on this topic, follow the link: Multi-labels.

Journal articles on the topic 'Multi-labels'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Multi-labels.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Seongmin, Hyunsik Jeon, and U. Kang. "Multi-EPL: Accurate multi-source domain adaptation." PLOS ONE 16, no. 8 (August 5, 2021): e0255754. http://dx.doi.org/10.1371/journal.pone.0255754.

Full text
Abstract:
Given multiple source datasets with labels, how can we train a target model with no labeled data? Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets that differ from a target dataset, in the absence of target data labels. MSDA is a crucial problem applicable to many practical cases where labels for the target data are unavailable due to privacy issues. Existing MSDA frameworks are limited since they align data without considering the labels of the features of each domain. They also do not fully utilize the unlabeled target data and rely on limited feature extraction with a single extractor. In this paper, we propose Multi-EPL, a novel method for MSDA. Multi-EPL exploits label-wise moment matching to align the conditional distributions of the features for the labels, uses pseudolabels for the unavailable target labels, and introduces an ensemble of multiple feature extractors for accurate domain adaptation. Extensive experiments show that Multi-EPL achieves state-of-the-art performance on MSDA tasks in both image and text domains, improving accuracy by up to 13.20%.
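The label-wise moment matching idea mentioned in this abstract can be illustrated with a minimal first-order sketch: for each class, compare the mean feature vector of source samples against that of pseudo-labeled target samples. This is a generic illustration, not the Multi-EPL implementation; all function and variable names are invented here.

```python
import numpy as np

def labelwise_moment_distance(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """First-order, label-wise moment matching: for each class c, compare the
    mean feature vector of source samples labeled c with that of target
    samples pseudo-labeled c, and average the squared distances."""
    total, counted = 0.0, 0
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) == 0 or len(t) == 0:   # skip classes absent from a batch
            continue
        total += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
        counted += 1
    return total / max(counted, 1)
```

In a full method, higher moments (e.g., covariances) would also be matched per class, and this distance would be added to the classification loss as an alignment term.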
APA, Harvard, Vancouver, ISO, and other styles
2

Hao, Pingting, Kunpeng Liu, and Wanfu Gao. "Double-Layer Hybrid-Label Identification Feature Selection for Multi-View Multi-Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12295–303. http://dx.doi.org/10.1609/aaai.v38i11.29120.

Full text
Abstract:
Multi-view multi-label feature selection aims to select informative features when the data are collected from multiple sources and annotated with multiple interdependent class labels. To fully exploit multi-view information, most prior works focus mainly on the common part under ideal circumstances. However, the inconsistent part hidden in each view, including noise and specific elements, may affect the quality of the mapping between labels and feature representations. Meanwhile, ignoring the specific part may lead to a suboptimal result, as each label is supposed to possess specific characteristics of its own. To deal with both problems in multi-view multi-label feature selection, we propose a unified loss function that fully splits the observed labels into hybrid labels, that is, common labels, view-to-all specific labels, and noisy labels; the view-to-all specific labels are further split into specific labels for each view. The proposed method simultaneously considers the consistency and complementarity of different views. By exploring the feature weights of the hybrid labels, the mapping relationships between labels and features can be established sequentially based on their attributes. Additionally, the interrelatedness among hybrid labels is also investigated and injected into the function. For the specific labels of each view, we construct a novel regularization paradigm incorporating logic operations. Finally, the convergence of the result is proved after applying the multiplicative update rules. Experiments on six datasets demonstrate the effectiveness and superiority of our method compared with state-of-the-art methods.
3

Sun, Kai-Wei, Chong Ho Lee, and Xiao-Feng Xie. "MLHN: A Hypernetwork Model for Multi-Label Classification." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 06 (August 12, 2015): 1550020. http://dx.doi.org/10.1142/s0218001415500202.

Full text
Abstract:
Multi-label classification has attracted significant attention in machine learning. In multi-label classification, exploiting correlations among labels is an essential but nontrivial task. First, labels may be correlated to various degrees. Second, scalability may suffer when there are many labels, because the number of label combinations grows exponentially as the number of labels increases. In this paper, a multi-label hypernetwork (MLHN) is proposed to deal with these problems. By extending the traditional hypernetwork model, MLHN can represent arbitrary-order correlations among labels. The classification model of MLHN is simple, and its computational complexity is linear with respect to the number of labels, which contributes to MLHN's good scalability. We perform experiments on a variety of datasets. The results illustrate that the proposed MLHN achieves competitive performance against state-of-the-art multi-label classification algorithms in terms of both effectiveness and scalability with respect to the number of labels.
4

Guo, Hai-Feng, Lixin Han, Shoubao Su, and Zhou-Bao Sun. "Deep Multi-Instance Multi-Label Learning for Image Annotation." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 03 (November 22, 2017): 1859005. http://dx.doi.org/10.1142/s021800141859005x.

Full text
Abstract:
Multi-Instance Multi-Label learning (MIML) is a popular framework for supervised classification where an example is described by multiple instances and associated with multiple labels. Previous MIML approaches have focused on predicting labels for instances, the idea being to identify an equivalent problem in the traditional supervised learning framework. Motivated by recent advances in deep learning, in this paper we again consider the problem of predicting labels and attempt to incorporate deep learning into the MIML framework. The proposed approach enables us to train a deep convolutional neural network with images from social networks, where images are well labeled, even with several labels or uncorrelated labels. Experiments on real-world datasets demonstrate the effectiveness of our proposed approach.
5

Xing, Yuying, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang, and Maozu Guo. "Multi-View Multi-Instance Multi-Label Learning Based on Collaborative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5508–15. http://dx.doi.org/10.1609/aaai.v33i01.33015508.

Full text
Abstract:
Multi-view Multi-instance Multi-label Learning (M3L) deals with complex objects encompassing diverse instances, represented with different feature views and annotated with multiple labels. Existing M3L solutions only partially explore the inter- or intra-relations between objects (or bags), instances, and labels, which can convey important contextual information for M3L. As such, they may have compromised performance. In this paper, we propose a collaborative matrix factorization based solution called M3Lcmf. M3Lcmf first uses a heterogeneous network composed of nodes of bags, instances, and labels to encode different types of relations via multiple relational data matrices. To preserve the intrinsic structure of the data matrices, M3Lcmf collaboratively factorizes them into low-rank matrices, explores the latent relationships between bags, instances, and labels, and selectively merges the data matrices. An aggregation scheme is further introduced to aggregate instance-level labels into bag-level labels and to guide the factorization. An empirical study on benchmark datasets shows that M3Lcmf outperforms other related competitive solutions in both instance-level and bag-level prediction.
6

Li, Lei, Yuqi Chu, Guanfeng Liu, and Xindong Wu. "Multi-Objective Optimization-Based Networked Multi-Label Active Learning." Journal of Database Management 30, no. 2 (April 2019): 1–26. http://dx.doi.org/10.4018/jdm.2019040101.

Full text
Abstract:
Along with the fast development of network applications, network research has attracted more and more attention; one of the most important research directions is networked multi-label classification, in which unknown labels of nodes can be inferred from the known labels of nodes in the neighborhood. As both the scale and complexity of networks increase, previously neglected problems of system overhead are becoming more and more serious. In this article, a novel multi-objective optimization-based networked multi-label seed node selection algorithm (named MOSS) is proposed to improve both the prediction accuracy for unknown labels of nodes inferred from the labels of seed nodes during classification and the system overhead for mining the labels of seed nodes with third parties before classification. Compared with other algorithms on several real networked data sets, the MOSS algorithm not only greatly reduces the system overhead before classification but also improves the prediction accuracy during classification.
7

Chen, Tianshui, Tao Pu, Hefeng Wu, Yuan Xie, and Liang Lin. "Structured Semantic Transfer for Multi-Label Recognition with Partial Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 339–46. http://dx.doi.org/10.1609/aaai.v36i1.19910.

Full text
Abstract:
Multi-label image recognition is a fundamental yet practical task because real-world images inherently possess multiple semantic labels. However, it is difficult to collect large-scale multi-label annotations due to the complexity of both the input images and output label spaces. To reduce the annotation cost, we propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels, i.e., merely some labels are known while the others are missing (also called unknown labels) for each image. The framework consists of two complementary transfer modules that explore within-image and cross-image semantic correlations to transfer knowledge of known labels and generate pseudo labels for unknown labels. Specifically, an intra-image semantic transfer module learns an image-specific label co-occurrence matrix and maps the known labels to complement unknown labels based on this matrix. Meanwhile, a cross-image transfer module learns category-specific feature similarities and helps complement unknown labels with high similarities. Finally, both known and generated labels are used to train the multi-label recognition models. Extensive experiments on the Microsoft COCO, Visual Genome and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms. Codes are available at https://github.com/HCPLab-SYSU/HCP-MLR-PL.
8

Huang, Jun, Linchuan Xu, Kun Qian, Jing Wang, and Kenji Yamanishi. "Multi-label learning with missing and completely unobserved labels." Data Mining and Knowledge Discovery 35, no. 3 (March 12, 2021): 1061–86. http://dx.doi.org/10.1007/s10618-021-00743-x.

Full text
Abstract:
Multi-label learning deals with data examples which are associated with multiple class labels simultaneously. Despite the success of existing approaches to multi-label learning, there is still a problem neglected by researchers: not only are some of the values of observed labels missing, but some labels are also completely unobserved in the training data. We refer to this problem as multi-label learning with missing and completely unobserved labels, and argue that it is necessary to discover these completely unobserved labels in order to mine useful knowledge and gain a deeper understanding of what is behind the data. In this paper, we propose a new approach named MCUL to solve multi-label learning with Missing and Completely Unobserved Labels. We try to discover the unobserved labels of a multi-label data set with a clustering-based regularization term and describe their semantic meanings based on the label-specific features learned by MCUL, and we overcome the problem of missing labels by exploiting label correlations. The proposed method MCUL can predict both the observed and newly discovered labels simultaneously for unseen data examples. Experimental results validated over ten benchmark datasets demonstrate that the proposed method can outperform other state-of-the-art approaches on observed labels and obtain an acceptable performance on the newly discovered labels as well.
9

Chen, Ze-Sen, Xuan Wu, Qing-Guo Chen, Yao Hu, and Min-Ling Zhang. "Multi-View Partial Multi-Label Learning with Graph-Based Disambiguation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3553–60. http://dx.doi.org/10.1609/aaai.v34i04.5761.

Full text
Abstract:
In multi-view multi-label learning (MVML), each training example is represented by different feature vectors and associated with multiple labels simultaneously. Nonetheless, the labeling quality of training examples tends to be affected by annotation noise. In this paper, the problem of multi-view partial multi-label learning (MVPML) is studied, where the set of associated labels is assumed to be candidate labels that are only partially valid. To solve the MVPML problem, a two-stage graph-based disambiguation approach is proposed. First, the ground-truth labels of each training example are estimated by disambiguating the candidate labels with a fused similarity graph. After that, the predictive model for each label is learned from embedding features generated by disambiguation-guided clustering analysis. Extensive experimental studies clearly validate the effectiveness of the proposed approach in solving the MVPML problem.
10

Huang, Jun, Haowei Rui, Guorong Li, Xiwen Qu, Tao Tao, and Xiao Zheng. "Multi-Label Learning With Hidden Labels." IEEE Access 8 (2020): 29667–76. http://dx.doi.org/10.1109/access.2020.2972599.

Full text
11

Tan, Z. M., J. Y. Liu, Q. Li, D. Y. Wang, and C. Y. Wang. "An approach to error label discrimination based on joint clustering." Journal of Physics: Conference Series 2294, no. 1 (June 1, 2022): 012018. http://dx.doi.org/10.1088/1742-6596/2294/1/012018.

Full text
Abstract:
Inaccurate multi-label learning aims at dealing with multi-label data with wrong labels. Wrong labels in data sets usually result in cognitive bias about objects, so discriminating and correcting wrong labels is a significant issue in multi-label learning. In this paper, a joint discrimination model based on fuzzy C-means (FCM) and possibilistic C-means (PCM) is proposed to find wrong labels in data sets. In this model, the connection between samples and their labels is analyzed based on the assumption of consistency between samples and their labels. Samples and labels are clustered by considering this connection in the joint FCM-PCM clustering model, and an inconsistency measure between a sample and its label is established to recognize wrong labels. A series of simulated experiments are comparatively implemented on several real multi-label data sets, and the experimental results show the superior performance of the proposed model in comparison with two state-of-the-art methods for mislabeling correction.
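For background on the clustering machinery this abstract builds on, the standard fuzzy C-means membership update can be sketched as follows. This is generic FCM, not the paper's joint FCM-PCM model; the function name and parameters are illustrative.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    """One fuzzy C-means membership update: u[i, k] is the degree to which
    sample i belongs to cluster k, computed from relative distances to all
    cluster centers; memberships of each sample sum to 1."""
    # pairwise distances between samples and centers, with eps to avoid /0
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```

A full FCM loop alternates this update with recomputing centers as membership-weighted means; PCM relaxes the sum-to-one constraint so that outliers receive low membership everywhere.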
12

Liu, Xinda, and Lili Wang. "Multi-granularity sequence generation for hierarchical image classification." Computational Visual Media 10, no. 2 (January 3, 2024): 243–60. http://dx.doi.org/10.1007/s41095-022-0332-2.

Full text
Abstract:
Hierarchical multi-granularity image classification is a challenging task that aims to tag each given image with multiple granularity labels simultaneously. Existing methods tend to overlook that different image regions contribute differently to label prediction at different granularities, and also insufficiently consider relationships between the hierarchical multi-granularity labels. We introduce a sequence-to-sequence mechanism to overcome these two problems and propose a multi-granularity sequence generation (MGSG) approach for the hierarchical multi-granularity image classification task. Specifically, we introduce a transformer architecture to encode the image into visual representation sequences. Next, we traverse the taxonomic tree, organize the multi-granularity labels into sequences, vectorize them, and add positional information. The proposed method builds a decoder that takes visual representation sequences and semantic label embeddings as inputs and outputs the predicted multi-granularity label sequence. The decoder models dependencies and correlations between multi-granularity labels through a masked multi-head self-attention mechanism, and relates visual information to the semantic label information through a cross-modality attention mechanism. In this way, the proposed method preserves the relationships between labels at different granularity levels and takes into account the influence of different image regions on labels of different granularities. Evaluations on six public benchmarks qualitatively and quantitatively demonstrate the advantages of the proposed method. Our project is available at https://github.com/liuxindazz/mgsg.
13

Xie, Ming-Kun, and Sheng-Jun Huang. "Partial Multi-Label Learning with Noisy Label Identification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6454–61. http://dx.doi.org/10.1609/aaai.v34i04.6117.

Full text
Abstract:
Partial multi-label learning (PML) deals with problems where each instance is assigned a candidate label set, which contains multiple relevant labels and some noisy labels. Recent studies usually solve PML problems with the disambiguation strategy, which recovers ground-truth labels from the candidate label set by simply assuming that the noisy labels are generated randomly. In real applications, however, noisy labels are usually caused by ambiguous contents of the example. Based on this observation, we propose a partial multi-label learning approach that simultaneously recovers the ground-truth information and identifies the noisy labels. The two objectives are formalized in a unified framework with trace norm and ℓ1 norm regularizers. Under the supervision of the observed noise-corrupted label matrix, the multi-label classifier and the noisy label identifier are jointly optimized by incorporating label correlation exploitation and a feature-induced noise model. Extensive experiments on synthetic as well as real-world data sets validate the effectiveness of the proposed approach.
14

Peng, Cheng, Ke Chen, Lidan Shou, and Gang Chen. "CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14581–89. http://dx.doi.org/10.1609/aaai.v38i13.29374.

Full text
Abstract:
Multi-modal multi-label emotion recognition (MMER) aims to identify relevant emotions from multiple modalities. The challenge of MMER is how to effectively capture discriminative features for multiple labels from heterogeneous data. Recent studies are mainly devoted to exploring various fusion strategies to integrate multi-modal information into a unified representation for all labels. However, such a learning scheme not only overlooks the specificity of each modality but also fails to capture individual discriminative features for different labels. Moreover, dependencies of labels and modalities cannot be effectively modeled. To address these issues, this paper presents ContrAstive feature Reconstruction and AggregaTion (CARAT) for the MMER task. Specifically, we devise a reconstruction-based fusion mechanism to better model fine-grained modality-to-label dependencies by contrastively learning modal-separated and label-specific features. To further exploit the modality complementarity, we introduce a shuffle-based aggregation strategy to enrich co-occurrence collaboration among labels. Experiments on two benchmark datasets, CMU-MOSEI and M3ED, demonstrate the effectiveness of CARAT over state-of-the-art methods. Code is available at https://github.com/chengzju/CARAT.
15

Zhang, Ping, Wanfu Gao, Juncheng Hu, and Yonghao Li. "Multi-Label Feature Selection Based on High-Order Label Correlation Assumption." Entropy 22, no. 7 (July 21, 2020): 797. http://dx.doi.org/10.3390/e22070797.

Full text
Abstract:
Multi-label data often involve features with high dimensionality and complicated label correlations, posing a great challenge for multi-label learning. Feature selection plays an important role in multi-label learning, and exploring label correlations is crucial for multi-label feature selection. Previous information-theoretic methods employ a cumulative-summation approximation strategy to evaluate candidate features, which considers only low-order label correlations. In fact, high-order label correlations exist in the label set: labels naturally cluster into several groups, similar labels tend to fall into the same group, and different labels belong to different groups. However, the cumulative-summation approximation strategy tends to select features related to the groups containing more labels while ignoring the classification information of groups containing fewer labels. As a result, many features related to similar labels are selected, which leads to poor classification performance. To this end, a Max-Correlation term that considers high-order label correlations is proposed. Additionally, we combine the Max-Correlation term with a feature redundancy term to ensure that the selected features are relevant to different label groups. Finally, a new method named Multi-label Feature Selection considering Max-Correlation (MCMFS) is proposed. Experimental results demonstrate the classification superiority of MCMFS in comparison to eight state-of-the-art multi-label feature selection methods.
16

Lidén, Mats, Ola Hjelmgren, Jenny Vikgren, and Per Thunberg. "Multi-Reader–Multi-Split Annotation of Emphysema in Computed Tomography." Journal of Digital Imaging 33, no. 5 (August 10, 2020): 1185–93. http://dx.doi.org/10.1007/s10278-020-00378-2.

Full text
Abstract:
Emphysema is visible on computed tomography (CT) as low-density lesions representing the destruction of the pulmonary alveoli. To train a machine learning model on the emphysema extent in CT images, labeled image data is needed. The provision of these labels requires trained readers, who are a limited resource. The purpose of the study was to test the reading time, inter-observer reliability and validity of the multi-reader–multi-split method for acquiring CT image labels from radiologists. The approximately 500 slices of each stack of lung CT images were split into 1-cm chunks, with 17 thin axial slices per chunk. The chunks were randomly distributed to 26 readers, radiologists and radiology residents. Each chunk was given a quick score concerning emphysema type and severity in the left and right lung separately. A cohort of 102 subjects, with varying degrees of visible emphysema in the lung CT images, was selected from the SCAPIS pilot, performed in 2012 in Gothenburg, Sweden. In total, the readers created 9050 labels for 2881 chunks. Image labels were compared with regional annotations already provided at the SCAPIS pilot inclusion. The median reading time per chunk was 15 s. The inter-observer Krippendorff’s alpha was 0.40 and 0.53 for emphysema type and score, respectively, and higher in the apical part than in the basal part of the lungs. The multi-split emphysema scores were generally consistent with regional annotations. In conclusion, the multi-reader–multi-split method provided reasonably valid image labels, with an estimation of the inter-observer reliability.
17

Wu, Xingyu, Bingbing Jiang, Kui Yu, Huanhuan Chen, and Chunyan Miao. "Multi-Label Causal Feature Selection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6430–37. http://dx.doi.org/10.1609/aaai.v34i04.6114.

Full text
Abstract:
Multi-label feature selection has received considerable attention during the past decade. However, existing algorithms do not attempt to uncover the underlying causal mechanism; they solve different types of variable relationships individually, ignoring the mutual effects between them. Furthermore, these algorithms lack interpretability: they can only select features for all labels, but cannot explain the correlation between a selected feature and a certain label. To address these problems, in this paper we theoretically study the causal relationships in multi-label data and propose a novel Markov blanket based multi-label causal feature selection (MB-MCF) algorithm. MB-MCF first mines the causal mechanism of labels and features to obtain a complete representation of the information about labels. Based on the causal relationships, MB-MCF then selects predictive features and simultaneously distinguishes common features shared by multiple labels from label-specific features owned by single labels. Experiments on real-world data sets validate that MB-MCF can automatically determine the number of selected features and simultaneously achieve the best performance compared with state-of-the-art methods. An experiment on the Emotions data set further demonstrates the interpretability of MB-MCF.
18

Fang, Jun-Peng, and Min-Ling Zhang. "Partial Multi-Label Learning via Credible Label Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3518–25. http://dx.doi.org/10.1609/aaai.v33i01.33013518.

Full text
Abstract:
In partial multi-label learning (PML), each training example is associated with multiple candidate labels that are only partially valid. The task of PML naturally arises in learning scenarios with inaccurate supervision, and the goal is to induce a multi-label predictor that can assign a set of proper labels to an unseen instance. When learning from PML training examples, the training procedure is prone to being misled by the false positive labels concealed in the candidate label set. In light of this major difficulty, a novel two-stage PML approach is proposed that works by eliciting credible labels from the candidate label set for model induction. In this way, most false positive labels are expected to be excluded from the training procedure. Specifically, in the first stage, the labeling confidence of each candidate label for each PML training example is estimated via iterative label propagation. In the second stage, by utilizing credible labels with high labeling confidence, a multi-label predictor is induced via pairwise label ranking with virtual label splitting or maximum a posteriori (MAP) reasoning. Extensive experiments on synthetic as well as real-world data sets clearly validate the effectiveness of credible label elicitation in learning from PML examples.
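The first-stage idea, estimating labeling confidences by iterative propagation over a similarity graph restricted to each example's candidate set, can be sketched generically. This is not the paper's exact procedure; the matrix W, the mixing weight alpha, and the update rule are illustrative assumptions.

```python
import numpy as np

def propagate_confidence(W, candidates, iters=20, alpha=0.8):
    """Sketch of confidence estimation by label propagation: F holds each
    example's confidence over labels, restricted to its candidate set.
    W is a row-normalized similarity matrix between examples; candidates
    is a 0/1 matrix marking each example's candidate labels."""
    F = candidates / candidates.sum(axis=1, keepdims=True)
    init = F.copy()
    for _ in range(iters):
        F = alpha * (W @ F) + (1 - alpha) * init  # absorb neighbors' beliefs
        F = F * candidates                        # zero out non-candidate labels
        F = F / F.sum(axis=1, keepdims=True)      # renormalize per example
    return F
```

Labels whose converged confidence exceeds a threshold would then be treated as credible and passed to the second-stage predictor.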
19

Zhu, Yue, Kai Ming Ting, and Zhi-Hua Zhou. "Multi-Label Learning with Emerging New Labels." IEEE Transactions on Knowledge and Data Engineering 30, no. 10 (October 1, 2018): 1901–14. http://dx.doi.org/10.1109/tkde.2018.2810872.

Full text
20

Zhu, Pengfei, Qian Xu, Qinghua Hu, Changqing Zhang, and Hong Zhao. "Multi-label feature selection with missing labels." Pattern Recognition 74 (February 2018): 488–502. http://dx.doi.org/10.1016/j.patcog.2017.09.036.

Full text
21

Lin, Yaojin, Qinghua Hu, Jia Zhang, and Xindong Wu. "Multi-label feature selection with streaming labels." Information Sciences 372 (December 2016): 256–75. http://dx.doi.org/10.1016/j.ins.2016.08.039.

Full text
22

Xu, Miao, Yu-Feng Li, and Zhi-Hua Zhou. "Multi-Label Learning with PRO Loss." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 998–1004. http://dx.doi.org/10.1609/aaai.v27i1.8689.

Full text
Abstract:
Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desirable to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however, cannot be met because most existing methods were designed to optimize existing criteria, and no criterion encodes the aforementioned requirement. In this paper, we present a new criterion, Pro Loss, concerning the prediction on all labels as well as the rankings of only the relevant labels. We then propose ProSVM, which optimizes Pro Loss efficiently using the alternating direction method of multipliers. We further improve its efficiency with an upper approximation that reduces the number of constraints from O(T²) to O(T), where T is the number of labels. Experiments show that our proposals are not only superior on Pro Loss but also highly competitive on existing evaluation criteria.
23

ZHANG, Yongwei. "Learning Label Correlations for Multi-Label Online Passive Aggressive Classification Algorithm." Wuhan University Journal of Natural Sciences 29, no. 1 (February 2024): 51–58. http://dx.doi.org/10.1051/wujns/2024291051.

Full text
Abstract:
Label correlation is an essential technique in data mining that addresses possible correlations between different labels in multi-label classification. Although this technique is widely used in multi-label classification, most issues are handled by batch learning, which consumes substantial time and space resources. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale datasets. However, existing online learning research has done little to consider correlations between labels. Building on existing research, this paper proposes a multi-label online learning algorithm based on label correlations that maximizes the margin between related and unrelated labels in multi-label samples. We evaluate the performance of the proposed algorithm on several public datasets. Experiments show the effectiveness of our algorithm.
24

Wang, Xiujuan, and Yuchen Zhou. "Multi-Label Feature Selection with Conditional Mutual Information." Computational Intelligence and Neuroscience 2022 (October 8, 2022): 1–13. http://dx.doi.org/10.1155/2022/9243893.

Full text
Abstract:
Feature selection is an important way to optimize the efficiency and accuracy of classifiers. However, traditional feature selection methods cannot handle many kinds of real-world data, such as multi-label data. To overcome this challenge, multi-label feature selection has been developed. Multi-label feature selection plays an irreplaceable role in pattern recognition and data mining, and can improve the efficiency and accuracy of multi-label classification. However, traditional multi-label feature selection based on mutual information does not fully consider the effect of redundancy among labels. This deficiency may lead to repeated computation of mutual information and leaves room to enhance the accuracy of multi-label feature selection. To deal with this challenge, this paper proposes a multi-label feature selection method based on conditional mutual information among labels (CRMIL). First, we analyze how to reduce the redundancy among features based on existing papers. Second, we propose a new approach to diminish the redundancy among labels. This method takes label sets as conditions when calculating the relevance between features and labels, which can weaken the impact of label redundancy on feature selection results. Finally, we analyze this algorithm and balance the effects of relevance and redundancy in the evaluation function. To test CRMIL, we compare it with eight other multi-label feature selection algorithms on ten datasets and use four evaluation criteria to examine the results. Experimental results illustrate that CRMIL performs better than existing algorithms.
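As background for the information-theoretic scoring such methods build on, here is a minimal sketch of discrete mutual information and a naive top-k feature ranking. This is not the CRMIL algorithm itself: the conditioning on label sets and the redundancy terms are omitted, and all names are illustrative.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Discrete mutual information I(X; Y) in nats, estimated from counts."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        # p_ab * log(p_ab / (p_a * p_b)), with probabilities as counts / n
        mi += p_ab * np.log(p_ab * n * n / (px[a] * py[b]))
    return mi

def rank_features(X, Y, k):
    """Score each (discrete) feature column of X by its summed MI with every
    label column of Y, and return the indices of the top-k features."""
    scores = [sum(mutual_information(X[:, j], Y[:, l]) for l in range(Y.shape[1]))
              for j in range(X.shape[1])]
    return sorted(np.argsort(scores)[-k:].tolist())
```

Methods in the CRMIL family replace the plain relevance score with conditional mutual information given already-selected labels or features, precisely to discount the redundancy this naive ranking ignores.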
25

Pu, Tao, Tianshui Chen, Hefeng Wu, and Liang Lin. "Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2091–98. http://dx.doi.org/10.1609/aaai.v36i2.20105.

Abstract:
Training multi-label image recognition models with partial labels, in which merely some labels are known while the others are unknown for each image, is a considerably challenging and practical task. To address it, current algorithms mainly depend on pre-trained classification or similarity models to generate pseudo labels for the unknown labels. However, these algorithms require sufficient multi-label annotations to train such models, leading to poor performance especially at low known-label proportions. In this work, we propose to blend category-specific representations across different images to transfer information of known labels to complement unknown labels, which dispenses with pre-trained models and thus does not depend on sufficient annotations. To this end, we design a unified semantic-aware representation blending (SARB) framework that exploits instance-level and prototype-level semantic representations to complement unknown labels via two complementary modules: 1) an instance-level representation blending (ILRB) module blends the representations of the known labels in one image into the representations of the unknown labels in another image to complement those unknown labels; 2) a prototype-level representation blending (PLRB) module learns more stable representation prototypes for each category and blends the representations of unknown labels with the prototypes of the corresponding labels to complement those labels. Extensive experiments on the MS-COCO, Visual Genome, and Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors at all known-label proportion settings, i.e., with mAP improvements of 4.6%, 4.6%, and 2.2% on these three datasets when the known-label proportion is 10%. Codes are available at https://github.com/HCPLab-SYSU/HCP-MLR-PL.
26

Kolber, Anna, and Oliver Meixner. "Effects of Multi-Level Eco-Labels on the Product Evaluation of Meat and Meat Alternatives—A Discrete Choice Experiment." Foods 12, no. 15 (August 3, 2023): 2941. http://dx.doi.org/10.3390/foods12152941.

Abstract:
Eco-labels are an instrument for enabling informed food choices and supporting a demand-side change towards an urgently needed sustainable food system. Lately, novel eco-labels that depict a product's environmental life cycle assessment on a multi-level scale are being tested across Europe's retailers. This study elicits consumers' preferences and willingness to pay (WTP) for a multi-level eco-label. A Discrete Choice Experiment was conducted with a sample (n = 536) representative of the Austrian population, targeted via an online survey. Individual part-worth utilities were estimated by means of Hierarchical Bayes estimation. The results show a higher WTP for a positively evaluated multi-level label, revealing consumers' perceived benefits of colorful multi-level labels over binary black-and-white designs. Even a negatively evaluated multi-level label was associated with a higher WTP than no label at all, pointing towards the limited effectiveness of eco-labels. Respondents' preferences for eco-labels were independent of their subjective eco-label knowledge, health consciousness, and environmental concern. The attribute "protein source" was the most important, and preference for an animal-based protein source (beef) was strongly correlated with consumers' meat attachment, implying that a shift towards more sustainable protein sources is challenging and that sustainability labels have only a small impact on the average consumer's choice of meat products.
27

Song, Hwanjun, Minseok Kim, and Jae-Gil Lee. "Toward Robustness in Multi-Label Classification: A Data Augmentation Strategy against Imbalance and Noise." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21592–601. http://dx.doi.org/10.1609/aaai.v38i19.30157.

Abstract:
Multi-label classification poses challenges due to imbalanced and noisy labels in training data. In this paper, we propose a unified data augmentation method, named BalanceMix, to address these challenges. Our approach includes two samplers for imbalanced labels, generating minority-augmented instances with high diversity. It also refines multi-labels at the label-wise granularity, categorizing noisy labels as clean, re-labeled, or ambiguous for robust optimization. Extensive experiments on three benchmark datasets demonstrate that BalanceMix outperforms existing state-of-the-art methods. We release the code at https://github.com/DISL-Lab/BalanceMix.
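The minority-augmentation idea in the abstract can be illustrated with a simple inverse-frequency sampler. This is a hedged sketch of the general principle, not the BalanceMix sampler itself; the weighting rule (weight each instance by the rarity of its rarest label) is an assumption:

```python
from collections import Counter

def minority_sampling_weights(label_sets):
    """Weight each instance by the inverse frequency of its rarest label,
    so instances carrying tail labels are drawn more often."""
    freq = Counter(label for s in label_sets for label in s)
    return [max(1.0 / freq[label] for label in s) if s else 0.0
            for s in label_sets]
```

These weights can then be passed to a weighted sampler (e.g., `random.choices(data, weights=...)`) to generate the minority-oversampled half of each mixed training batch.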
28

Wang, Zhen, Yiqun Duan, Liu Liu, and Dacheng Tao. "Multi-label Few-shot Learning with Semantic Inference (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15917–18. http://dx.doi.org/10.1609/aaai.v35i18.17955.

Abstract:
Few-shot learning can adapt a classification model to new labels with only a few labeled examples. Previous studies mainly focus on the scenario of a single category label per example and have not effectively addressed the more challenging multi-label scenario, with its exponential-sized output space and scarce data. In this paper, we propose a semantic-aware meta-learning model for multi-label few-shot learning. Our approach learns and infers the semantic correlation between unseen labels and historical labels to quickly adapt to multi-label tasks from only a few examples. Specifically, features are mapped into the semantic embedding space via label word vectors to explore and exploit label correlation, and thus cope with the overwhelming size of the output space. A novel semantic inference mechanism is then designed to leverage prior knowledge learned from historical labels, producing good generalization on new labels to alleviate the low-data problem. Finally, extensive empirical results show that the proposed method significantly outperforms existing state-of-the-art methods on multi-label few-shot learning tasks.
29

Cui, Zijun, Yong Zhang, and Qiang Ji. "Label Error Correction and Generation through Label Relationships." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3693–700. http://dx.doi.org/10.1609/aaai.v34i04.5778.

Abstract:
For multi-label supervised learning, the quality of the label annotation is important. However, in many real-world multi-label classification applications, label annotations often lack quality, particularly when annotation requires special expertise, such as annotating fine-grained labels. The relationships among labels, on the other hand, are usually stable and robust to errors. For this reason, we propose to capture and leverage label relationships at different levels to improve fine-grained label annotation quality and to generate labels. Two levels of labels are considered: object-level labels and property-level labels. The object-level labels characterize the object category based on its overall appearance, while the property-level labels describe specific local object properties. A Bayesian network (BN) is learned to capture the relationships among the multiple labels at the two levels. A MAP inference is then performed to identify the most stable and consistent label relationships, which are then used to improve data annotations for the same dataset and to generate labels for a new dataset. Experimental evaluations on six benchmark databases for two different tasks (facial action unit and object attribute classification) demonstrate the effectiveness of the proposed method in improving data annotation and in generating effective new labels.
30

Rottoli, Giovanni Daian, and Carlos Casanova. "Multi-criteria and Multi-expert Requirement Prioritization using Fuzzy Linguistic Labels." ParadigmPlus 3, no. 1 (February 8, 2022): 1–18. http://dx.doi.org/10.55969/paradigmplus.v3n1a1.

Abstract:
Requirement prioritization in Software Engineering is the activity of selecting and ordering the requirements to be implemented in each software development process iteration. Thus, requirement prioritization assists the decision-making process during iteration management. This work presents a requirement prioritization method that considers many experts' opinions on multiple decision criteria, provided as fuzzy linguistic labels, a tool that captures the imprecision of each expert's judgment. These opinions are then aggregated using the fuzzy aggregation operator MLIOWA, with different weights for each expert. An order for the requirements is then derived from the aggregated opinions, with different weights for each evaluated dimension or criterion. The proposed method has been implemented and demonstrated on a synthetic dataset, and a statistical evaluation of the results obtained with different t-norms was carried out.
31

Siringoringo, Rimbun, Jamaluddin Jamaluddin, and Resianta Perangin-angin. "TEXT MINING DAN KLASIFIKASI MULTI LABEL MENGGUNAKAN XGBOOST." METHOMIKA Jurnal Manajemen Informatika dan Komputerisasi Akuntansi 6, no. 6 (October 31, 2022): 234–38. http://dx.doi.org/10.46880/jmika.vol6no2.pp234-238.

Abstract:
The conventional classification process is applied to find a single criterion or label. The multi-label classification process is more complex because a large number of labels results in more classes. Another aspect that must be considered in multi-label classification is the existence of mutual dependencies between data labels. In traditional binary classification, analysis only aims to determine whether the label of a text is positive or negative; this approach is sub-optimal because relationships between labels cannot be captured. To overcome this weakness, multi-label classification is one solution for data labeling: it allows many labels per document, with semantic correlations among them. This research performs multi-label classification on research article texts using an ensemble classifier, XGBoost. Classification performance is evaluated with confusion-matrix metrics, accuracy, and F1 score, and the model is also compared against Logistic Regression. Using train-test split and cross-validation, Logistic Regression obtained an average training and testing accuracy of 0.81 and an average F1 score of 0.47, while XGBoost obtained an average accuracy of 0.88 and an average F1 score of 0.78. The results show that the XGBoost classifier model achieves good classification performance.
32

Xiao, Lin, Xiangliang Zhang, Liping Jing, Chi Huang, and Mingyang Song. "Does Head Label Help for Long-Tailed Multi-Label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14103–11. http://dx.doi.org/10.1609/aaai.v35i16.17660.

Abstract:
Multi-label text classification (MLTC) aims to annotate documents with the most relevant labels from a number of candidate labels. In real applications, the distribution of label frequency often exhibits a long tail, i.e., a few labels are associated with a large number of documents (a.k.a. head labels), while a large fraction of labels are associated with a small number of documents (a.k.a. tail labels). To address the challenge of insufficient training data on tail label classification, we propose a Head-to-Tail Network (HTTN) to transfer the meta-knowledge from the data-rich head labels to data-poor tail labels. The meta-knowledge is the mapping from few-shot network parameters to many-shot network parameters, which aims to promote the generalizability of tail classifiers. Extensive experimental results on three benchmark datasets demonstrate that HTTN consistently outperforms the state-of-the-art methods. The code and hyper-parameter settings are released for reproducibility.
33

Jiang, Ting, Deqing Wang, Leilei Sun, Huayi Yang, Zhengyang Zhao, and Fuzhen Zhuang. "LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7987–94. http://dx.doi.org/10.1609/aaai.v35i9.16974.

Abstract:
Extreme multi-label text classification (XMC) is the task of finding the most relevant labels from a large label set. Deep learning-based methods have shown significant success in XMC. However, existing methods (e.g., AttentionXML and X-Transformer) still suffer from 1) combining several models for training and prediction on one dataset, and 2) sampling negative labels statically while training the label ranking model, both of which harm the performance and accuracy of the model. To address these problems, we propose LightXML, which adopts end-to-end training and dynamic negative label sampling. In LightXML, we use GAN-like networks to recall and rank labels: the label recalling part generates negative and positive labels, and the label ranking part distinguishes the positive labels from them. Based on these networks, negative labels are sampled dynamically during training of the label ranking part. By feeding both the label recalling and ranking parts the same text representation, LightXML reaches high performance. Extensive experiments show that LightXML outperforms state-of-the-art methods on five extreme multi-label datasets with much smaller model size and lower computational complexity. In particular, on the Amazon dataset with 670K labels, LightXML reduces the model size by up to 72% compared to AttentionXML. Our code is available at http://github.com/kongds/LightXML.
34

Mu, Dejun, Junhong Duan, Xiaoyu Li, Hang Dai, Xiaoyan Cai, and Lantian Guo. "Expede Herculem: Learning Multi Labels From Single Label." IEEE Access 6 (2018): 61410–18. http://dx.doi.org/10.1109/access.2018.2876014.

35

Ma, Jianghong, Zhaoyang Tian, Haijun Zhang, and Tommy W. S. Chow. "Multi-Label Low-dimensional Embedding with Missing Labels." Knowledge-Based Systems 137 (December 2017): 65–82. http://dx.doi.org/10.1016/j.knosys.2017.09.005.

36

Frasca, Marco, Simone Bassis, and Giorgio Valentini. "Learning node labels with multi-category Hopfield networks." Neural Computing and Applications 27, no. 6 (June 23, 2015): 1677–92. http://dx.doi.org/10.1007/s00521-015-1965-1.

37

Yu, Tianyu, Cuiwei Liu, Zhuo Yan, and Xiangbin Shi. "A Multi-Task Framework for Action Prediction." Information 11, no. 3 (March 16, 2020): 158. http://dx.doi.org/10.3390/info11030158.

Abstract:
Predicting the categories of actions in partially observed videos is a challenging task in the computer vision field. The temporal progress of an ongoing action is of great importance for action prediction, since actions can present different characteristics at different temporal stages. To this end, we propose a novel multi-task deep forest framework, which treats temporal progress analysis as a relevant task to action prediction and takes advantage of observation ratio labels of incomplete videos during training. The proposed multi-task deep forest is a cascade structure of random forests and multi-task random forests. Unlike the traditional single-task random forests, multi-task random forests are built upon incomplete training videos annotated with action labels as well as temporal progress labels. Meanwhile, incorporating both random forests and multi-task random forests can increase the diversity of classifiers and improve the discriminative power of the multi-task deep forest. Experiments on the UT-Interaction and the BIT-Interaction datasets demonstrate the effectiveness of the proposed multi-task deep forest.
38

Xu, Pengyu, Lin Xiao, Bing Liu, Sijin Lu, Liping Jing, and Jian Yu. "Label-Specific Feature Augmentation for Long-Tailed Multi-Label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10602–10. http://dx.doi.org/10.1609/aaai.v37i9.26259.

Abstract:
Multi-label text classification (MLTC) involves tagging a document with the most relevant subset of labels from a label set. In real applications, labels usually follow a long-tailed distribution, where most labels (called tail labels) are associated with only a small number of documents, limiting the performance of MLTC. To alleviate this low-resource problem, researchers have introduced a simple but effective strategy, data augmentation (DA). However, most existing DA approaches struggle in multi-label settings, mainly because the augmented documents for one label may inevitably influence the other co-occurring labels and further exaggerate the long-tailed problem. To mitigate this issue, we propose a new pair-level augmentation framework for MLTC, called Label-Specific Feature Augmentation (LSFA), which augments positive feature-label pairs only for the tail labels. LSFA contains two main parts: the first learns label-specific document representations in a high-level latent space; the second augments tail-label features in that latent space by transferring the documents' second-order statistics (intra-class semantic variations) from head labels to tail labels. Finally, we design a new loss function for adjusting classifiers trained on the augmented datasets. The whole learning procedure can be trained effectively. Comprehensive experiments on benchmark datasets show that the proposed LSFA outperforms state-of-the-art counterparts.
39

Liu, Tianci, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, and Jing Gao. "SimFair: A Unified Framework for Fairness-Aware Multi-Label Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14338–46. http://dx.doi.org/10.1609/aaai.v37i12.26677.

Abstract:
Recent years have witnessed increasing concern over unfair decisions made by machine learning algorithms. To improve fairness in model decisions, various fairness notions have been proposed and many fairness-aware methods developed. However, most existing definitions and methods focus only on single-label classification; fairness for multi-label classification, where each instance is associated with more than one label, has yet to be established. To fill this gap, we study fairness-aware multi-label classification in this paper. We start by extending Demographic Parity (DP) and Equalized Opportunity (EOp), two popular fairness notions, to multi-label classification scenarios. Through a systematic study, we show that on multi-label data, because of unevenly distributed labels, EOp usually fails to construct a reliable estimate on labels with few instances. We then propose a new framework named Similarity-induced Fairness (sγ-SimFair), which utilizes data with similar labels when estimating fairness on a particular label group for better stability, and which can unify DP and EOp. Theoretical analysis and experimental results on real-world datasets together demonstrate the advantage of sγ-SimFair over existing methods on multi-label classification tasks.
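Demographic parity extends to the multi-label case one label at a time: for each label, compare positive-prediction rates across demographic groups. A minimal sketch follows (the function name and the max-minus-min gap formulation are assumptions, not the paper's sγ-SimFair estimator):

```python
def per_label_dp_gap(preds, groups, label):
    """DP gap for one label: spread of P(label predicted | group)
    across demographic groups. preds is a list of predicted label sets."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, pg in zip(preds, groups) if pg == g]
        rates[g] = sum(label in p for p in member_preds) / len(member_preds)
    return max(rates.values()) - min(rates.values())
```

On a label with few instances the per-group rate estimates are noisy, which is exactly the instability that pooling over similarly-labeled data is meant to address.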
40

Zhang, Yi, Zhecheng Zhang, Mingyuan Chen, Hengyang Lu, Lei Zhang, and Chongjun Wang. "LAMB: A novel algorithm of label collaboration based multi-label learning." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1229–45. http://dx.doi.org/10.3233/ida-215946.

Abstract:
Exploiting label correlation is crucially important in multi-label learning, where each instance is associated with multiple labels simultaneously. Multi-label learning is more complex than single-label learning because the labels tend to be correlated. Traditional multi-label learning algorithms learn independent classifiers for each label and apply ranking or thresholding to the classification results. Most existing methods take label correlation as prior knowledge and have worked well, but they fail to make full use of label dependency. As a result, the real relationships among labels may not be correctly characterized and the final predictions are not explicitly correlated. To address these problems, we propose a novel high-order multi-label learning algorithm, Label collAboration based Multi-laBel learning (LAMB). For each label, LAMB exploits the collaboration between its own prediction and the predictions of the other labels. Extensive experiments on various datasets demonstrate that our proposed LAMB algorithm achieves superior performance over existing state-of-the-art algorithms. In addition, one real-world dataset of channelrhodopsin chimeras is assessed, which would be of great value as a pre-screen for membrane protein function.
41

Khandagale, Sujay, Han Xiao, and Rohit Babbar. "Bonsai: diverse and shallow trees for extreme multi-label classification." Machine Learning 109, no. 11 (August 23, 2020): 2099–119. http://dx.doi.org/10.1007/s10994-020-05888-2.

Abstract:
Extreme multi-label classification (XMC) refers to supervised multi-label learning involving hundreds of thousands or even millions of labels. In this paper, we develop a suite of algorithms, called Bonsai, which generalizes the notion of label representation in XMC and partitions the labels in the representation space to learn shallow trees. We show three concrete realizations of this label representation space: (i) the input space, spanned by the input features; (ii) the output space, spanned by label vectors based on their co-occurrence with other labels; and (iii) the joint space, combining the input and output representations. Furthermore, the constraint-free multi-way partitions learnt iteratively in these spaces lead to shallow trees. By combining the effect of shallow trees and generalized label representation, Bonsai achieves the best of both worlds: fast training, comparable to state-of-the-art tree-based XMC methods, and much better prediction accuracy, particularly on tail labels. On the benchmark Amazon-3M dataset with 3 million labels, Bonsai outperforms a state-of-the-art one-vs-rest method in prediction accuracy while being approximately 200 times faster to train. The code for Bonsai is available at https://github.com/xmc-aalto/bonsai.
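The output-space representation described in the abstract, where each label is represented by its co-occurrence with the other labels, can be sketched directly. This is a simplified illustration under assumed function and variable names; Bonsai additionally normalizes such vectors and partitions them iteratively into a shallow tree:

```python
def label_cooccurrence_vectors(label_sets, labels):
    """Represent each label by how often it co-occurs with every other label
    across a collection of per-instance label sets."""
    index = {label: j for j, label in enumerate(labels)}
    vecs = {label: [0] * len(labels) for label in labels}
    for s in label_sets:
        for l in s:
            for m in s:
                if m != l:
                    vecs[l][index[m]] += 1
    return vecs
```

Labels with similar co-occurrence vectors can then be clustered (e.g., by k-means) to form the multi-way partitions at each tree node.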
42

Wang, Yejiang, Yuhai Zhao, Zhengkui Wang, Wen Shan, and Xingwei Wang. "Limited-Supervised Multi-Label Learning with Dependency Noise." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15662–70. http://dx.doi.org/10.1609/aaai.v38i14.29494.

Abstract:
Limited-supervised multi-label learning (LML) leverages weak or noisy supervision to train multi-label classification models on data with label noise, i.e., missing and/or redundant labels. Existing studies usually solve LML problems by assuming that label noise is independent of the input features and class labels, ignoring the fact that in many real-world applications noisy labels may depend on the input features (instance-dependent) and on the classes (label-dependent). In this paper, we propose limited-supervised Multi-label Learning with Dependency Noise (MLDN), which simultaneously identifies instance-dependent and label-dependent label noise by factorizing the noise matrix as the output of a mapping from the feature and label representations. Meanwhile, we regularize the problem with a manifold constraint on the noise matrix to preserve local relationships and uncover the manifold structure. Theoretically, we bound the noise recovery error for the resulting problem. We solve the problem using a first-order scheme based on the proximal operator, whose convergence rate is at least sub-linear. Extensive experiments conducted on various datasets demonstrate the superiority of our proposed method.
43

Paul, Dipanjyoti, Rahul Kumar, Sriparna Saha, and Jimson Mathew. "Multi-objective Cuckoo Search-based Streaming Feature Selection for Multi-label Dataset." ACM Transactions on Knowledge Discovery from Data 15, no. 6 (May 19, 2021): 1–24. http://dx.doi.org/10.1145/3447586.

Abstract:
Feature selection is the process of retaining only relevant features by removing irrelevant or redundant ones from the large number of features used to represent data. Nowadays, many application domains, especially social media networks, generate new features continuously at different time stamps. In such a scenario, where features arrive in an online fashion, the selection task must also be a continuous process to cope with their continuous arrival. Therefore, a streaming feature selection approach has to be adopted: every time a new feature or group of features arrives, the feature selection process is invoked. In recent years, many application domains also generate data whose samples may belong to more than one class, called multi-label data. The multiple labels associated with an instance may have dependencies among themselves, and finding the correlation among the class labels helps select features that are discriminative across multiple labels. In this article, we develop streaming feature selection methods for multi-label data in which the multiple labels are reduced to a lower-dimensional space. Similar labels are grouped together before the selection method is performed, to improve the selection quality and make the model time-efficient. A multi-objective version of the cuckoo search approach is used to select the optimal feature set. The proposed method has two versions of the streaming feature selection procedure: (i) when the features arrive individually and (ii) when the features arrive in batches. Various multi-label datasets from domains such as text, biology, and audio have been used to test the developed methods. The proposed methods are compared with many previous feature selection methods, and the comparison establishes the superiority of using multiple objectives and label correlation in the feature selection process.
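The "selection as a continuous process" idea can be captured with a tiny top-k filter over an incoming feature stream. This is a deliberately simplified sketch: the paper uses multi-objective cuckoo search over grouped labels, whereas the single scalar score and the top-k rule here are assumptions for illustration:

```python
from heapq import heappush, heapreplace

def streaming_select(feature_stream, score_fn, k):
    """Maintain the k best-scoring features as they arrive one at a time.

    feature_stream yields (name, values) pairs; score_fn maps values to a
    relevance score; a min-heap keeps the current top-k cheaply."""
    best = []  # min-heap of (score, feature_name)
    for name, values in feature_stream:
        score = score_fn(values)
        if len(best) < k:
            heappush(best, (score, name))
        elif score > best[0][0]:
            heapreplace(best, (score, name))
    return {name for _, name in best}
```

Each arriving feature is scored once and either admitted (possibly evicting the current weakest feature) or discarded, so memory stays bounded at k regardless of how many features stream in.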
44

Xu, Ning, Yun-Peng Liu, and Xin Geng. "Partial Multi-Label Learning with Label Distribution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6510–17. http://dx.doi.org/10.1609/aaai.v34i04.6124.

Abstract:
Partial multi-label learning (PML) aims to learn from training examples each associated with a set of candidate labels, among which only a subset are valid. The common strategy to induce a predictive model is to disambiguate the candidate label set, e.g., by identifying the ground-truth label via the confidence of each candidate label or by estimating the noisy labels in the candidate label sets. Nonetheless, these strategies ignore the essential label distribution corresponding to each instance, since the label distribution is not explicitly available in the training set. In this paper, a new partial multi-label learning strategy named Pml-ld is proposed to learn from partial multi-label examples via label enhancement. Specifically, label distributions are recovered by leveraging the topological information of the feature space and the correlations among the labels. After that, a multi-class predictive model is learned by fitting a regularized multi-output regressor to the recovered label distributions. Experimental results on synthetic as well as real-world datasets clearly validate the effectiveness of Pml-ld for solving PML problems.
45

Li, Xinran, Wuyin Jin, Xiangyang Xu, and Hao Yang. "A Domain-Adversarial Multi-Graph Convolutional Network for Unsupervised Domain Adaptation Rolling Bearing Fault Diagnosis." Symmetry 14, no. 12 (December 15, 2022): 2654. http://dx.doi.org/10.3390/sym14122654.

Abstract:
Transfer learning based on unsupervised domain adaptation (UDA) has been broadly utilized in research on fault diagnosis under variable working conditions, with promising results. However, traditional UDA methods pay more attention to extracting information from the class labels and domain labels of data, ignoring the influence of data structure information on the extracted features. Therefore, we propose a domain-adversarial multi-graph convolutional network (DAMGCN) for UDA. A multi-graph convolutional network (MGCN), integrating three graph convolutional layers (a multi-receptive field graph convolutional (MRFConv) layer, a local extreme value convolutional (LEConv) layer, and a graph attention convolutional (GATConv) layer), is used to mine data structure information. Domain discriminators and classifiers are utilized to model domain labels and class labels, respectively, and the data structure differences are aligned through the correlation alignment (CORAL) index. Two example validations show that the classification and feature extraction ability of the DAMGCN is significantly enhanced compared with other UDA algorithms, effectively achieving cross-domain rolling bearing fault diagnosis.
46

Shao, Zhenfeng, Ke Yang, and Weixun Zhou. "Performance Evaluation of Single-Label and Multi-Label Remote Sensing Image Retrieval Using a Dense Labeling Dataset." Remote Sensing 10, no. 6 (June 16, 2018): 964. http://dx.doi.org/10.3390/rs10060964.

Abstract:
Benchmark datasets are essential for developing and evaluating remote sensing image retrieval (RSIR) approaches. However, most existing datasets are single-labeled, with each image annotated by a single label representing its most significant semantic content. This is sufficient for simple problems, such as distinguishing between a building and a beach, but multiple labels, and sometimes even dense (pixel-level) labels, are required for more complex problems such as RSIR and semantic segmentation. We therefore extended the existing multi-labeled dataset collected for multi-label RSIR and present a dense labeling remote sensing dataset termed "DLRSD". DLRSD contains a total of 17 classes, and the pixels of each image are assigned 17 pre-defined labels. We used DLRSD to evaluate the performance of RSIR methods ranging from traditional handcrafted-feature-based methods to deep learning-based ones. More specifically, we evaluated RSIR methods from both single-label and multi-label perspectives. The results demonstrate the advantages of multiple labels over single labels for interpreting complex remote sensing images. DLRSD provides the literature with a benchmark for RSIR and other pixel-based problems such as semantic segmentation.
47

Li, Yu-Feng, Ju-Hua Hu, Yuang Jiang, and Zhi-Hua Zhou. "Towards Discovering What Patterns Trigger What Labels." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1012–18. http://dx.doi.org/10.1609/aaai.v26i1.8285.

Abstract:
In many real applications, especially those involving data objects with complicated semantics, it is generally desirable to discover the relation between patterns in the input space and labels corresponding to different semantics in the output space. This task becomes feasible with MIML (Multi-Instance Multi-Label learning), a recently developed learning framework in which each data object is represented by multiple instances and is allowed to be associated with multiple labels simultaneously. In this paper, we propose KISAR, an MIML algorithm that is able to discover what instances trigger what labels. Based on the observation that highly relevant labels usually share some patterns, we develop a convex optimization formulation and provide an alternating optimization solution. Experiments show that KISAR discovers reasonable relations between input patterns and output labels, and achieves performance highly competitive with many state-of-the-art MIML algorithms.
48

Almi, Stefano, Marco Morandotti, and Francesco Solombrino. "A multi-step Lagrangian scheme for spatially inhomogeneous evolutionary games." Journal of Evolution Equations 21, no. 2 (April 24, 2021): 2691–733. http://dx.doi.org/10.1007/s00028-021-00702-5.

Abstract:
A multi-step Lagrangian scheme at discrete times is proposed for the approximation of a nonlinear continuity equation arising as a mean-field limit of spatially inhomogeneous evolutionary games, which describes the evolution of a system of spatially distributed agents with strategies, or labels, whose payoff also depends on the agents' current positions. The scheme is Lagrangian, as it traces the evolution of positions and labels along characteristics, and it is a multi-step scheme, as it proceeds in two stages: first, the distribution of strategies or labels is updated according to a best-performance criterion, and then the agents use this to evolve their positions. A general convergence result is provided in the space of probability measures. In the special cases of replicator-type systems and reversible Markov chains, variants of the scheme in which the explicit step in the evolution of the labels is replaced by an implicit one are also considered, and convergence results are provided.
49

Gao, Zijun, Jun Wang, Guoxian Yu, Zhongmin Yan, Carlotta Domeniconi, and Jinglin Zhang. "Long-Tail Cross Modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7642–50. http://dx.doi.org/10.1609/aaai.v37i6.25927.

Abstract:
Existing Cross Modal Hashing (CMH) methods are mainly designed for balanced data, whereas imbalanced data with a long-tail distribution is more common in the real world. Several long-tail hashing methods have been proposed, but they cannot adapt to multi-modal data due to the complex interplay between labels and the individuality and commonality information of multi-modal data. Furthermore, CMH methods mostly mine the commonality of multi-modal data to learn hash codes, which may override tail labels encoded by the individuality of the respective modalities. In this paper, we propose LtCMH (Long-tail CMH) to handle imbalanced multi-modal data. LtCMH first adopts auto-encoders to mine the individuality and commonality of different modalities by minimizing the dependency between the individuality of the respective modalities and by enhancing their commonality. It then dynamically combines the individuality and commonality with direct features extracted from the respective modalities to create meta features that enrich the representation of tail labels, and binarizes the meta features to generate hash codes. LtCMH significantly outperforms state-of-the-art baselines on long-tail datasets and achieves better (or comparable) performance on datasets with balanced labels.
50

Feng, Lei, Bo An, and Shuo He. "Collaboration Based Multi-Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3550–57. http://dx.doi.org/10.1609/aaai.v33i01.33013550.

Abstract:
It is well-known that exploiting label correlations is crucially important to multi-label learning. Most of the existing approaches take label correlations as prior knowledge, which may not correctly characterize the real relationships among labels. Besides, label correlations are normally used to regularize the hypothesis space, while the final predictions are not explicitly correlated. In this paper, we suggest that for each individual label, the final prediction involves the collaboration between its own prediction and the predictions of other labels. Based on this assumption, we first propose a novel method to learn the label correlations via sparse reconstruction in the label space. Then, by seamlessly integrating the learned label correlations into model training, we propose a novel multi-label learning approach that aims to explicitly account for the correlated predictions of labels while training the desired model simultaneously. Extensive experimental results show that our approach outperforms the state-of-the-art counterparts.
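The sparse reconstruction step described above can, in generic terms, be realized by regressing each label's column of the label matrix on the remaining columns under an l1 penalty, so that each label's correlation weights are sparse. A minimal NumPy sketch of that generic idea (an ISTA-based lasso; the helper `label_correlations` is hypothetical and not the authors' exact formulation):

```python
import numpy as np

def label_correlations(Y, lam=0.1, n_iter=500):
    """Reconstruct each label column of Y from the remaining columns via
    l1-regularized least squares (solved by ISTA), a generic stand-in for
    sparse reconstruction in the label space.
    Y: (n_samples, n_labels) binary label matrix.
    Returns an (n_labels, n_labels) weight matrix with zero diagonal."""
    n, q = Y.shape
    W = np.zeros((q, q))
    for j in range(q):
        idx = [k for k in range(q) if k != j]
        A, b = Y[:, idx], Y[:, j]
        w = np.zeros(len(idx))
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)  # 1/L, L = sigma_max^2
        for _ in range(n_iter):
            g = A.T @ (A @ w - b)                          # gradient step
            w = w - step * g
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
        W[idx, j] = w
    return W

# A label that duplicates another gets a large reconstruction weight from it.
rng = np.random.default_rng(1)
y0 = rng.integers(0, 2, 100)
y1 = rng.integers(0, 2, 100)
Y = np.column_stack([y0, y1, y0]).astype(float)  # label 2 copies label 0
W = label_correlations(Y)
print(W[0, 2] > 0.5)   # strong weight from label 0 to label 2
```

The learned weights can then serve as the correlated-prediction coefficients that the abstract describes integrating into model training.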