Journal articles on the topic "Multi-multi instance learning"

Consult the top 50 journal articles for your research on the topic "Multi-multi instance learning".

1

Zhou, Zhi-Hua, Min-Ling Zhang, Sheng-Jun Huang, and Yu-Feng Li. "Multi-instance multi-label learning." Artificial Intelligence 176, no. 1 (January 2012): 2291–320. http://dx.doi.org/10.1016/j.artint.2011.10.002.
2

Briggs, Forrest, Xiaoli Z. Fern, Raviv Raich, and Qi Lou. "Instance Annotation for Multi-Instance Multi-Label Learning." ACM Transactions on Knowledge Discovery from Data 7, no. 3 (September 1, 2013): 1–30. http://dx.doi.org/10.1145/2513092.2500491.
3

Briggs, Forrest, Xiaoli Z. Fern, Raviv Raich, and Qi Lou. "Instance Annotation for Multi-Instance Multi-Label Learning." ACM Transactions on Knowledge Discovery from Data 7, no. 3 (September 2013): 1–30. http://dx.doi.org/10.1145/2500491.
4

Huang, Sheng-Jun, Wei Gao, and Zhi-Hua Zhou. "Fast Multi-Instance Multi-Label Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 11 (November 1, 2019): 2614–27. http://dx.doi.org/10.1109/tpami.2018.2861732.
5

Pei, Yuanli, and Xiaoli Z. Fern. "Constrained instance clustering in multi-instance multi-label learning." Pattern Recognition Letters 37 (February 2014): 107–14. http://dx.doi.org/10.1016/j.patrec.2013.07.002.
6

Xing, Yuying, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang, and Maozu Guo. "Multi-View Multi-Instance Multi-Label Learning Based on Collaborative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5508–15. http://dx.doi.org/10.1609/aaai.v33i01.33015508.

Abstract:
Multi-view Multi-instance Multi-label Learning (M3L) deals with complex objects encompassing diverse instances, represented with different feature views, and annotated with multiple labels. Existing M3L solutions only partially explore the inter or intra relations between objects (or bags), instances, and labels, which can convey important contextual information for M3L. As such, they may have a compromised performance. In this paper, we propose a collaborative matrix factorization based solution called M3Lcmf. M3Lcmf first uses a heterogeneous network composed of nodes of bags, instances, and labels, to encode different types of relations via multiple relational data matrices. To preserve the intrinsic structure of the data matrices, M3Lcmf collaboratively factorizes them into low-rank matrices, explores the latent relationships between bags, instances, and labels, and selectively merges the data matrices. An aggregation scheme is further introduced to aggregate the instance-level labels into bag-level labels and to guide the factorization. An empirical study on benchmark datasets shows that M3Lcmf outperforms other related competitive solutions in both instance-level and bag-level prediction.
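The collaborative low-rank factorization at the core of M3Lcmf can be illustrated with a minimal sketch. This is not the authors' implementation: it factorizes a single toy bag-label relation matrix `R` as `U @ V.T` by plain gradient descent, whereas the paper jointly factorizes several relational matrices and selectively merges them; the function name and toy data are invented for the example.

```python
import numpy as np

def factorize(R, rank=2, lr=0.02, steps=2000, seed=0):
    """Approximate a relational matrix R (bags x labels) by low-rank
    factors U, V with R ~= U @ V.T, via gradient descent on the
    squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    for _ in range(steps):
        E = R - U @ V.T          # reconstruction residual
        U += lr * E @ V          # gradient step for U
        V += lr * E.T @ U        # gradient step for V
    return U, V

# Toy bag-label relation matrix: 4 bags x 3 labels
R = np.array([[1., 0., 1.],
              [1., 0., 0.],
              [0., 1., 1.],
              [0., 1., 0.]])
U, V = factorize(R)
print(np.linalg.norm(R - U @ V.T))  # small residual: latent structure captured
```

The low-rank factors `U` and `V` play the role of the latent bag and label representations whose interactions the paper exploits.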
7

Ohkura, Kazuhiro, and Ryota Washizaki. "Robust Instance-Based Reinforcement Learning for Multi-Robot Systems (Multi-agent and Learning, Session: TP2-A)." Abstracts of the international conference on advanced mechatronics: toward evolutionary fusion of IT and mechatronics: ICAM 2004.4 (2004): 51. http://dx.doi.org/10.1299/jsmeicam.2004.4.51_1.
8

Sun, Yu-Yin, Michael Ng, and Zhi-Hua Zhou. "Multi-Instance Dimensionality Reduction." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 587–92. http://dx.doi.org/10.1609/aaai.v24i1.7700.

Abstract:
Multi-instance learning deals with problems that treat bags of instances as training examples. In single-instance learning problems, dimensionality reduction is an essential step for high-dimensional data analysis and has been studied for years. The curse of dimensionality also exists in multi-instance learning tasks, yet this difficult task has not been studied before. Direct application of existing single-instance dimensionality reduction objectives to multi-instance learning tasks may not work well since it ignores the characteristic of multi-instance learning that the labels of bags are known while the labels of instances are unknown. In this paper, we propose an effective model and develop an efficient algorithm to solve the multi-instance dimensionality reduction problem. We formulate the objective as an optimization problem by considering orthonormality and sparsity constraints in the projection matrix for dimensionality reduction, and then solve it by gradient descent along the tangent space of the orthonormal matrices. We also propose an approximation for improving the efficiency. Experimental results validate the effectiveness of the proposed method.
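Gradient descent constrained to orthonormal projection matrices can be sketched in a few lines. The sketch below is a simplification, not the paper's algorithm: it uses a toy bag-mean separation objective and a QR retraction in place of the tangent-space descent and sparsity constraint; the function name and data are invented.

```python
import numpy as np

def mi_dim_reduce(pos_bags, neg_bags, d_out=1, lr=0.1, steps=200, seed=0):
    """Toy multi-instance dimensionality reduction: find an orthonormal
    projection P (d_in x d_out) that preserves the separation between
    the mean instance of positive bags and that of negative bags.
    A QR retraction keeps P orthonormal after each ascent step."""
    rng = np.random.default_rng(seed)
    d_in = pos_bags[0].shape[1]
    P, _ = np.linalg.qr(rng.normal(size=(d_in, d_out)))
    mu_pos = np.mean([b.mean(axis=0) for b in pos_bags], axis=0)
    mu_neg = np.mean([b.mean(axis=0) for b in neg_bags], axis=0)
    diff = (mu_pos - mu_neg)[:, None]        # direction to preserve
    for _ in range(steps):
        grad = 2 * diff @ (diff.T @ P)       # d/dP of ||P^T diff||^2
        P, _ = np.linalg.qr(P + lr * grad)   # ascent step + retraction
    return P

# Positive bags shifted along the first axis; negative bags near the origin
pos_bags = [np.array([[3.0, 0.0, 0.0], [3.2, 0.1, 0.0]]), np.array([[2.8, 0.0, 0.1]])]
neg_bags = [np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]]), np.array([[-0.1, 0.1, 0.0]])]
P = mi_dim_reduce(pos_bags, neg_bags)
print(P[:, 0])  # aligned (up to sign) with the discriminative first axis
```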
9

JIANG, Yuan, Zhi-Hua ZHOU, and Yue ZHU. "Multi-instance multi-label new label learning." SCIENTIA SINICA Informationis 48, no. 12 (December 1, 2018): 1670–80. http://dx.doi.org/10.1360/n112018-00143.
10

Wang, Wei, and ZhiHua Zhou. "Learnability of multi-instance multi-label learning." Chinese Science Bulletin 57, no. 19 (April 28, 2012): 2488–91. http://dx.doi.org/10.1007/s11434-012-5133-z.
11

Hsu, Yen-Chi, Cheng-Yao Hong, Ming-Sui Lee, and Tyng-Luh Liu. "Query-Driven Multi-Instance Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4158–65. http://dx.doi.org/10.1609/aaai.v34i04.5836.

Abstract:
We introduce a query-driven approach (qMIL) to multi-instance learning where the queries aim to uncover the class labels embodied in a given bag of instances. Specifically, it solves a multi-instance multi-label learning (MIML) problem with a more challenging setting than the conventional one. Each MIML bag in our formulation is annotated only with a binary label indicating whether the bag contains the instance of a certain class and the query is specified by the word2vec of a class label/name. To learn a deep-net model for qMIL, we construct a network component that achieves a generalized compatibility measure for query-visual co-embedding and yields proper instance attentions to the given query. The bag representation is then formed as the attention-weighted sum of the instances' weights, and passed to the classification layer at the end of the network. In addition, the qMIL formulation is flexible for extending the network to classify unseen class labels, leading to a new technique to solve the zero-shot MIML task through an iterative querying process. Experimental results on action classification over video clips and three MIML datasets from MNIST, CIFAR10 and Scene are provided to demonstrate the effectiveness of our method.
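The query-conditioned attention pooling described above can be sketched minimally. This is an illustration of the general mechanism, not qMIL itself: the learned co-embedding networks are omitted, compatibility is a raw dot product, and the function name and toy vectors are invented.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def query_attention_pool(instances, query):
    """Attention-pool a bag of instance features against a query vector:
    instances scoring higher against the query dominate the bag
    representation."""
    scores = instances @ query      # query-instance compatibility
    attn = softmax(scores)          # instance attentions
    return attn @ instances, attn   # attention-weighted bag vector

bag = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.1, 0.9]])
query = np.array([0.0, 5.0])        # "asks" for the second feature
bag_vec, attn = query_attention_pool(bag, query)
print(attn.argmax())                # instance 1 matches the query best
```

The bag vector is then what a classification layer would consume; changing the query re-weights the same bag toward a different class.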
12

Wang, Hua, Feiping Nie, and Heng Huang. "Learning Instance Specific Distance for Multi-Instance Classification." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 507–12. http://dx.doi.org/10.1609/aaai.v25i1.7893.

Abstract:
Multi-Instance Learning (MIL) deals with problems where each training example is a bag, and each bag contains a set of instances. Multi-instance representation is useful in many real world applications, because it is able to capture more structural information than traditional flat single-instance representation. However, it also brings new challenges. Specifically, the distance between data objects in MIL is a set-to-set distance, which is harder to estimate than vector distances used in single-instance data. Moreover, because in MIL labels are assigned to bags instead of instances, although a bag belongs to a class, some, or even most, of its instances may not be truly related to the class. In order to address these difficulties, in this paper we propose a novel Instance Specific Distance (ISD) method for MIL, which computes the Class-to-Bag (C2B) distance by further considering the relevances of training instances with respect to their labeled classes. Taking into account the outliers caused by the weak label association in MIL, we learn ISD by solving an l0+-norm minimization problem. An efficient algorithm to solve the optimization problem is presented, together with the rigorous proof of its convergence. The promising results on five benchmark multi-instance data sets and two real world multi-instance applications validate the effectiveness of the proposed method.
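The Class-to-Bag distance with instance relevance weights can be sketched as follows. Here the relevance weights are simply given, not learned via the paper's l0+-norm minimization, and the function name and toy data are invented for the example.

```python
import numpy as np

def c2b_distance(class_prototypes, bag, relevance):
    """Class-to-Bag distance: for each class prototype instance, take the
    distance to its nearest instance in the bag, weighted by the
    prototype's relevance to its labeled class. Down-weighting outlier
    prototypes (weak label association) shrinks their influence."""
    d = 0.0
    for w, p in zip(relevance, class_prototypes):
        nearest = min(np.linalg.norm(p - x) for x in bag)
        d += w * nearest
    return d

protos = np.array([[0.0, 0.0], [10.0, 10.0]])   # second prototype is an outlier
bag = np.array([[0.0, 1.0]])
d_weighted = c2b_distance(protos, bag, relevance=[1.0, 0.0])
d_uniform = c2b_distance(protos, bag, relevance=[1.0, 1.0])
print(d_weighted, d_uniform)  # down-weighting the outlier shrinks the distance
```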
13

Pham, Anh T., Raviv Raich, and Xiaoli Z. Fern. "Dynamic Programming for Instance Annotation in Multi-Instance Multi-Label Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 39, no. 12 (December 1, 2017): 2381–94. http://dx.doi.org/10.1109/tpami.2017.2647944.
14

Liu, Chanjuan, Tongtong Chen, Xinmiao Ding, Hailin Zou, and Yan Tong. "A multi-instance multi-label learning algorithm based on instance correlations." Multimedia Tools and Applications 75, no. 19 (April 6, 2016): 12263–84. http://dx.doi.org/10.1007/s11042-016-3494-z.
15

Guo, Hai-Feng, Lixin Han, Shoubao Su, and Zhou-Bao Sun. "Deep Multi-Instance Multi-Label Learning for Image Annotation." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 03 (November 22, 2017): 1859005. http://dx.doi.org/10.1142/s021800141859005x.

Abstract:
Multi-Instance Multi-Label learning (MIML) is a popular framework for supervised classification where an example is described by multiple instances and associated with multiple labels. Previous MIML approaches have focused on predicting labels for instances. The idea of tackling the problem is to identify its equivalence in the traditional supervised learning framework. Motivated by the recent advancement in deep learning, in this paper, we still consider the problem of predicting labels and attempt to model deep learning in MIML learning framework. The proposed approach enables us to train deep convolutional neural network with images from social networks where images are well labeled, even labeled with several labels or uncorrelated labels. Experiments on real-world datasets demonstrate the effectiveness of our proposed approach.
16

Shen, Yi, and Jian-ping Fan. "Multi-task multi-label multiple instance learning." Journal of Zhejiang University SCIENCE C 11, no. 11 (November 2010): 860–71. http://dx.doi.org/10.1631/jzus.c1001005.
17

Yin, Ying, Yuhai Zhao, Chengguang Li, and Bin Zhang. "Improving Multi-Instance Multi-Label Learning by Extreme Learning Machine." Applied Sciences 6, no. 6 (May 24, 2016): 160. http://dx.doi.org/10.3390/app6060160.
18

Lin, Yi, and Honggang Zhang. "Regularized Instance Embedding for Deep Multi-Instance Learning." Applied Sciences 10, no. 1 (December 20, 2019): 64. http://dx.doi.org/10.3390/app10010064.

Abstract:
In the era of Big Data, multi-instance learning, as a weakly supervised learning framework, has various applications since it is helpful to reduce the cost of the data-labeling process. Due to this weakly supervised setting, learning effective instance representation/embedding is challenging. To address this issue, we propose an instance-embedding regularizer that can boost the performance of both instance- and bag-embedding learning in a unified fashion. Specifically, the crux of the instance-embedding regularizer is to maximize correlation between instance-embedding and underlying instance-label similarities. The embedding-learning framework was implemented using a neural network and optimized in an end-to-end manner using stochastic gradient descent. In experiments, various applications were studied, and the results show that the proposed instance-embedding-regularization method is highly effective, having state-of-the-art performance.
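The crux of the regularizer, maximizing correlation between instance-embedding similarities and label similarities, can be sketched outside a neural network. The sketch assumes known instance labels and a dot-product similarity (in the paper, labels are only weakly available and the term is optimized end-to-end); the function name is invented.

```python
import numpy as np

def embedding_regularizer(emb, labels):
    """Sketch of an instance-embedding regularizer: the negated Pearson
    correlation between pairwise embedding similarities and pairwise
    label agreement, over all instance pairs. Minimizing it pushes
    same-label instances toward similar embeddings."""
    sim = emb @ emb.T                                        # embedding similarity
    agree = (labels[:, None] == labels[None, :]).astype(float)
    iu = np.triu_indices(len(labels), k=1)                   # unordered pairs
    s, a = sim[iu], agree[iu]
    s = (s - s.mean()) / s.std()
    a = (a - a.mean()) / a.std()
    return -np.mean(s * a)                                   # -correlation

labels = np.array([0, 0, 1, 1])
tight = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])  # same-label close
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # labels shuffled
print(embedding_regularizer(tight, labels), embedding_regularizer(mixed, labels))
```

A label-consistent embedding yields a lower (more negative) penalty than a shuffled one, which is the behavior the regularizer rewards during training.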
19

Feng, Wen Gang, and Xue Chen. "Synergetic Multi-Semantic Multi-Instance Learning for Scene Recognition." Applied Mechanics and Materials 220-223 (November 2012): 2188–91. http://dx.doi.org/10.4028/www.scientific.net/amm.220-223.2188.

Abstract:
In this paper, the problem of scene representation is modeled by simultaneously considering stimulus-driven and instance-related factors in a probabilistic framework. In this framework, a stimulus-driven component simulates the low-level processes in the human vision system using semantic constraints, while an instance-related component simulates the high-level processes to bias the competition of the input features. We evaluate synergetic multi-semantic multi-instance learning on the five-scene database of the LabelMe benchmark, and validate scene classification on the fifteen-scene database via SVM inference, with comparison to state-of-the-art methods.
20

Ji, Ruyi, Zeyu Liu, Libo Zhang, Jianwei Liu, Xin Zuo, Yanjun Wu, Chen Zhao, Haofeng Wang, and Lin Yang. "Multi-peak Graph-based Multi-instance Learning for Weakly Supervised Object Detection." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2s (June 10, 2021): 1–21. http://dx.doi.org/10.1145/3432861.

Abstract:
Weakly supervised object detection (WSOD), aiming to detect objects with only image-level annotations, has become one of the research hotspots over the past few years. Recently, much effort has been devoted to WSOD for its simple yet effective architecture, and remarkable improvements have been achieved. Existing approaches using multiple-instance learning usually pay more attention to the proposals individually, ignoring relation information between proposals. Besides, to obtain pseudo-ground-truth boxes for WSOD, MIL-based methods tend to select the region with the highest confidence score and regard those with small overlap as background category, which leads to mislabeled instances. As a result, these methods suffer from mislabeled instances and a lack of relations between proposals, degrading the performance of WSOD. To tackle these issues, this article introduces a multi-peak graph-based model for WSOD. Specifically, we use the instance graph to model the relations between proposals, which reinforces the multiple-instance learning process. In addition, a multi-peak discovery strategy is designed to avert mislabeling instances. The proposed model is trained by a stochastic gradient descent optimizer using back-propagation in an end-to-end manner. Extensive quantitative and qualitative evaluations on two publicly challenging benchmarks, PASCAL VOC 2007 and PASCAL VOC 2012, demonstrate the superiority and effectiveness of the proposed approach.
21

Sun, Yu-Yin, Yin Zhang, and Zhi-Hua Zhou. "Multi-Label Learning with Weak Label." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 593–98. http://dx.doi.org/10.1609/aaai.v24i1.7699.

Abstract:
Multi-label learning deals with data associated with multiple labels simultaneously. Previous work on multi-label learning assumes that the "full" label set associated with each training instance is given by users. In many applications, however, getting the full label set for each instance is difficult and only a "partial" set of labels is available. In such cases, the appearance of a label means that the instance is associated with this label, while the absence of a label does not imply that this label is not proper for the instance. We call this kind of problem the "weak label" problem. In this paper, we propose the WELL (WEak Label Learning) method to solve the weak label problem. We consider that the classification boundary for each label should go across low density regions, and that each label generally has a much smaller number of positive examples than negative examples. The objective is formulated as a convex optimization problem which can be solved efficiently. Moreover, we exploit the correlation between labels by assuming that there is a group of low-rank base similarities, and the appropriate similarities between instances for different labels can be derived from these base similarities. Experiments validate the performance of WELL.
22

Tang, Jingjing, Dewei Li, and Yingjie Tian. "Image classification with multi-view multi-instance metric learning." Expert Systems with Applications 189 (March 2022): 116117. http://dx.doi.org/10.1016/j.eswa.2021.116117.
23

苏, 可政. "Multi-View Multi-Label Learning by Exploiting Instance Correlations." Computer Science and Application 12, no. 04 (2022): 785–96. http://dx.doi.org/10.12677/csa.2022.124080.
24

Gu, Zhiwei, Tao Mei, Xian-Sheng Hua, Jinhui Tang, and Xiuqing Wu. "Multi-Layer Multi-Instance Learning for Video Concept Detection." IEEE Transactions on Multimedia 10, no. 8 (December 2008): 1605–16. http://dx.doi.org/10.1109/tmm.2008.2007290.
25

Cano, Alberto. "An ensemble approach to multi-view multi-instance learning." Knowledge-Based Systems 136 (November 2017): 46–57. http://dx.doi.org/10.1016/j.knosys.2017.08.022.
26

Shen, Yi, Jinye Peng, Xiaoyi Feng, and Jianping Fan. "Multi-label multi-instance learning with missing object tags." Multimedia Systems 19, no. 1 (August 14, 2012): 17–36. http://dx.doi.org/10.1007/s00530-012-0290-0.
27

Lin, Tiancheng, Hongteng Xu, Canqian Yang, and Yi Xu. "Interventional Multi-Instance Learning with Deconfounded Instance-Level Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1601–9. http://dx.doi.org/10.1609/aaai.v36i2.20051.

Abstract:
When applying multi-instance learning (MIL) to make predictions for bags of instances, the prediction accuracy of an instance often depends on not only the instance itself but also its context in the corresponding bag. From the viewpoint of causal inference, such bag contextual prior works as a confounder and may result in model robustness and interpretability issues. Focusing on this problem, we propose a novel interventional multi-instance learning (IMIL) framework to achieve deconfounded instance-level prediction. Unlike traditional likelihood-based strategies, we design an Expectation-Maximization (EM) algorithm based on causal intervention, providing a robust instance selection in the training phase and suppressing the bias caused by the bag contextual prior. Experiments on pathological image analysis demonstrate that our IMIL method substantially reduces false positives and outperforms state-of-the-art MIL methods.
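The EM-style alternation between selecting instances and re-estimating a model can be sketched in a toy form. This is a generic EM-flavoured key-instance selection, not the paper's causal-intervention procedure; the centroid model, function name, and data are invented for the example.

```python
import numpy as np

def em_instance_selection(bags, bag_labels, iters=10):
    """Toy EM-style instance selection for MIL: the E-step scores each
    instance in a positive bag against the current positive centroid and
    keeps the best one; the M-step re-estimates the centroid from the
    selected key instances. Instances in negative bags are all negative."""
    neg = np.vstack([b for b, y in zip(bags, bag_labels) if y == 0])
    mu_neg = neg.mean(axis=0)
    pos_bags = [b for b, y in zip(bags, bag_labels) if y == 1]
    all_pos = np.vstack(pos_bags)
    # Initialize the positive centroid with the instance farthest from the
    # negative mean (a crude guess at a truly positive instance).
    mu_pos = all_pos[np.argmax(np.linalg.norm(all_pos - mu_neg, axis=1))]
    for _ in range(iters):
        keys = []
        for b in pos_bags:                       # E-step: pick key instances
            scores = -np.linalg.norm(b - mu_pos, axis=1)
            keys.append(b[np.argmax(scores)])
        mu_pos = np.mean(keys, axis=0)           # M-step: refit the model
    return mu_pos

# Each positive bag mixes one truly positive instance (near [5, 5]) with noise
bags = [np.array([[0.0, 0.0], [5.0, 5.0]]),
        np.array([[0.2, 0.0], [4.8, 5.2]]),
        np.array([[0.0, 0.1], [0.1, 0.0]])]
bag_labels = [1, 1, 0]
mu = em_instance_selection(bags, bag_labels)
print(mu)  # close to the true positive region around [5, 5]
```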
28

M, Kavitha, and Jasmin Thomas. "Survey of Multi Instance Learning Algorithms." IJARCCE 7, no. 8 (August 30, 2018): 52–56. http://dx.doi.org/10.17148/ijarcce.2018.7811.
29

Huang, Shiluo, Zheng Liu, Wei Jin, and Ying Mu. "Bag dissimilarity regularized multi-instance learning." Pattern Recognition 126 (June 2022): 108583. http://dx.doi.org/10.1016/j.patcog.2022.108583.
30

Qiao, Maoying, Liu Liu, Jun Yu, Chang Xu, and Dacheng Tao. "Diversified dictionaries for multi-instance learning." Pattern Recognition 64 (April 2017): 407–16. http://dx.doi.org/10.1016/j.patcog.2016.08.026.
31

Gan, Rui, and Jian Yin. "Feature selection in multi-instance learning." Neural Computing and Applications 23, no. 3-4 (July 7, 2012): 907–12. http://dx.doi.org/10.1007/s00521-012-1015-1.
32

Wei, Xiu-Shen, Jianxin Wu, and Zhi-Hua Zhou. "Scalable Algorithms for Multi-Instance Learning." IEEE Transactions on Neural Networks and Learning Systems 28, no. 4 (April 2017): 975–87. http://dx.doi.org/10.1109/tnnls.2016.2519102.
33

Wang, Ke, Jiayong Liu, and Daniel González. "Domain transfer multi-instance dictionary learning." Neural Computing and Applications 28, S1 (June 11, 2016): 983–92. http://dx.doi.org/10.1007/s00521-016-2406-5.
34

Zhou, Zhi-Hua. "Multi-Instance Learning from Supervised View." Journal of Computer Science and Technology 21, no. 5 (September 2006): 800–809. http://dx.doi.org/10.1007/s11390-006-0800-7.
35

Zhou, Zhi-Hua, Kai Jiang, and Ming Li. "Multi-Instance Learning Based Web Mining." Applied Intelligence 22, no. 2 (March 2005): 135–47. http://dx.doi.org/10.1007/s10489-005-5602-z.
36

Xu, Xinzheng, Qiaoyu Guo, Zhongnian Li, and Dechun Li. "Uncertainty Ordinal Multi-Instance Learning for Breast Cancer Diagnosis." Healthcare 10, no. 11 (November 17, 2022): 2300. http://dx.doi.org/10.3390/healthcare10112300.

Abstract:
Ordinal multi-instance learning (OMIL) deals with the weak supervision scenario wherein instances in each training bag are not only multi-class but also have rank order relationships between classes. A case in point is breast cancer, which has become one of the most frequent diseases in women. Most existing work has classified the region of interest (mass or microcalcification) on a mammogram as either benign or malignant, while ignoring the normal mammogram class. Early screening for breast disease is particularly important for further diagnosis. Since early benign lesion areas on a mammogram are very similar to normal tissue, a three-class classification of mammograms is necessary for improved screening of early benign lesions. In OMIL, an expert will only label the set of instances (bag), instead of labeling every instance. When labeling efforts are focused on the class of bags, the ordinal classes of the instances inside a bag are not labeled. However, recent work on ordinal multi-instance learning has used the traditional support vector machine to solve the multi-classification problem without utilizing the ordinal information regarding the instances in the bag. In this paper, we propose a method that explicitly models the ordinal class information for bags and instances in bags. Specifically, we specify a key instance from the bag as a positive instance of the bag, and design an ordinal minimum uncertainty loss to iteratively optimize the selected key instances from the bags. The extensive experimental results clearly prove the effectiveness of the proposed ordinal instance-learning approach, which achieves 52.021% accuracy, 61.471% sensitivity, 47.206% specificity, 57.895% precision, and a 59.629% F1 score on a DDSM dataset.
37

Li, Bing, Chunfeng Yuan, Weihua Xiong, Weiming Hu, Houwen Peng, Xinmiao Ding, and Steve Maybank. "Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 39, no. 12 (December 1, 2017): 2554–60. http://dx.doi.org/10.1109/tpami.2017.2669303.
38

WU, Jiansheng, Mao ZHENG, Haifeng HU, Weijian WU, and Jun WANG. "Protein function prediction through multi-instance multi-label transfer learning." SCIENTIA SINICA Informationis 47, no. 11 (November 1, 2017): 1538–50. http://dx.doi.org/10.1360/n112017-00090.
39

Lin, Ying, Feng Guo, Liujuan Cao, and Jinlin Wang. "Person re-identification based on multi-instance multi-label learning." Neurocomputing 217 (December 2016): 19–26. http://dx.doi.org/10.1016/j.neucom.2016.04.060.
40

Zhang, Min-Ling, and Zhi-Jian Wang. "MIMLRBF: RBF neural networks for multi-instance multi-label learning." Neurocomputing 72, no. 16-18 (October 2009): 3951–56. http://dx.doi.org/10.1016/j.neucom.2009.07.008.
41

He, Jianjun, Hong Gu, and Zhelong Wang. "Bayesian multi-instance multi-label learning using Gaussian process prior." Machine Learning 88, no. 1-2 (March 10, 2012): 273–95. http://dx.doi.org/10.1007/s10994-012-5283-x.
42

Qiu, Sichao, Mengyi Wang, Yuanlin Yang, Guoxian Yu, Jun Wang, Zhongmin Yan, Carlotta Domeniconi, and Maozu Guo. "Meta Multi-Instance Multi-Label learning by heterogeneous network fusion." Information Fusion 94 (June 2023): 272–83. http://dx.doi.org/10.1016/j.inffus.2023.02.010.
43

Birant, Kokten Ulas, and Derya Birant. "Multi-Objective Multi-Instance Learning: A New Approach to Machine Learning for eSports." Entropy 25, no. 1 (December 23, 2022): 28. http://dx.doi.org/10.3390/e25010028.

Abstract:
The aim of this study is to develop a new approach to be able to correctly predict the outcome of electronic sports (eSports) matches using machine learning methods. Previous research has emphasized player-centric prediction and has used standard (single-instance) classification techniques. However, a team-centric classification is required since team cooperation is essential in completing game missions and achieving final success. To bridge this gap, in this study, we propose a new approach, called Multi-Objective Multi-Instance Learning (MOMIL). It is the first study that applies the multi-instance learning technique to make win predictions in eSports. The proposed approach jointly considers the objectives of the players in a team to capture relationships between players during the classification. In this study, entropy was used as a measure to determine the impurity (uncertainty) of the training dataset when building decision trees for classification. The experiments that were carried out on a publicly available eSports dataset show that the proposed multi-objective multi-instance classification approach outperforms the standard classification approach in terms of accuracy. Unlike the previous studies, we built the models on season-based data. Our approach is up to 95% accurate for win prediction in eSports. Our method achieved higher performance than the state-of-the-art methods tested on the same dataset.
44

Ping, Wei, Ye Xu, Kexin Ren, Chi-Hung Chi, and Furao Shen. "Non-I.I.D. Multi-Instance Dimensionality Reduction by Learning a Maximum Bag Margin Subspace." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 551–56. http://dx.doi.org/10.1609/aaai.v24i1.7653.

Abstract:
Multi-instance learning, as other machine learning tasks, also suffers from the curse of dimensionality. Although dimensionality reduction methods have been investigated for many years, multi-instance dimensionality reduction methods remain untouched. On the other hand, most algorithms in the multi-instance framework treat instances in each bag as independently and identically distributed samples, which fails to utilize the structure information conveyed by instances in a bag. In this paper, we propose a multi-instance dimensionality reduction method, which treats instances in each bag as non-i.i.d. samples. We regard every bag as a whole entity and define a bag margin objective function. By maximizing the margin of positive and negative bags, we learn a subspace to obtain a more salient representation of the original data. Experiments demonstrate the effectiveness of the proposed method.
45

Lyu, Gengyu, Xiang Deng, Yanan Wu, and Songhe Feng. "Beyond Shared Subspace: A View-Specific Fusion for Multi-View Multi-Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7647–54. http://dx.doi.org/10.1609/aaai.v36i7.20731.

Abstract:
In multi-view multi-label learning (MVML), each instance is described by several heterogeneous feature representations and associated with multiple valid labels simultaneously. Although diverse MVML methods have been proposed over the last decade, most previous studies focus on leveraging the shared subspace across different views to represent the multi-view consensus information, while it is still an open issue whether such shared subspace representation is necessary when formulating the desired MVML model. In this paper, we propose a DeepGCN based View-Specific MVML method (D-VSM) which can bypass seeking for the shared subspace representation, and instead directly encoding the feature representation of each individual view through the deep GCN to couple with the information derived from the other views. Specifically, we first construct all instances under different feature representations into the corresponding feature graphs respectively, and then integrate them into a unified graph by integrating the different feature representations of each instance. Afterwards, the graph attention mechanism is adopted to aggregate and update all nodes on the unified graph to form structural representation for each instance, where both intra-view correlations and inter-view alignments have been jointly encoded to discover the underlying semantic relations. Finally, we derive a label confidence score for each instance by averaging the label confidence of its different feature representations with the multi-label soft margin loss. Extensive experiments have demonstrated that our proposed method significantly outperforms state-of-the-art methods.
46

Alam, Fardina Fathmiul, and Amarda Shehu. "Unsupervised multi-instance learning for protein structure determination." Journal of Bioinformatics and Computational Biology 19, no. 01 (February 2021): 2140002. http://dx.doi.org/10.1142/s0219720021400023.

Abstract:
Many regions of the protein universe remain inaccessible by wet-laboratory or computational structure determination methods. A significant challenge in elucidating these dark regions in silico relates to the ability to discriminate relevant structure(s) among many structures/decoys computed for a protein of interest, a problem known as decoy selection. Clustering decoys based on geometric similarity remains popular. However, it is unclear how exactly to exploit the groups of decoys revealed via clustering to select individual structures for prediction. In this paper, we provide an intuitive formulation of the decoy selection problem as an instance of unsupervised multi-instance learning. We address the problem in three stages, first organizing given decoys of a protein molecule into bags, then identifying relevant bags, and finally drawing individual instances from these bags to offer as prediction. We propose both non-parametric and parametric algorithms for drawing individual instances. Our evaluation utilizes two datasets, one benchmark dataset of ensembles of decoys for a varied list of protein molecules, and a dataset of decoy ensembles for targets drawn from recent CASP competitions. A comparative analysis with state-of-the-art methods reveals that the proposed approach outperforms existing methods, thus warranting further investigation of multi-instance learning to advance our treatment of decoy selection.
APA, Harvard, Vancouver, ISO, etc. styles
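The bag-based formulation above can be illustrated with a hypothetical Python sketch (this is a generic toy, not the authors' algorithms): decoys are greedily grouped into bags by geometric similarity, the largest bag is assumed relevant, and its medoid is offered as the prediction. The distance function, threshold, and one-dimensional decoy scores are all invented for the example.

```python
def group_into_bags(decoys, dist, threshold):
    """Greedy grouping: each decoy joins the first bag whose representative
    (first member) lies within `threshold`; otherwise it starts a new bag."""
    bags = []  # list of lists of decoy indices
    for i in range(len(decoys)):
        for bag in bags:
            if dist(decoys[i], decoys[bag[0]]) <= threshold:
                bag.append(i)
                break
        else:
            bags.append([i])
    return bags

def select_decoy(decoys, bags, dist):
    """Non-parametric selection: take the largest bag and return its medoid,
    i.e. the instance closest to all other members of that bag."""
    largest = max(bags, key=len)
    return min(largest, key=lambda i: sum(dist(decoys[i], decoys[j]) for j in largest))

# Toy data: each decoy summarized by a single geometric score
decoys = [0.1, 0.12, 0.11, 0.9, 0.95, 0.5]
d = lambda a, b: abs(a - b)
bags = group_into_bags(decoys, d, threshold=0.1)   # → [[0, 1, 2], [3, 4], [5]]
best = select_decoy(decoys, bags, d)               # medoid of the largest bag
```

In the paper itself the grouping comes from clustering full decoy conformations by geometric similarity; the greedy single-pass grouping here only stands in for that step.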
47

Soleimani, Hossein, and David J. Miller. « Semisupervised, Multilabel, Multi-Instance Learning for Structured Data ». Neural Computation 29, no. 4 (April 2017): 1053–102. http://dx.doi.org/10.1162/neco_a_00939.

Full text
Abstract:
Many classification tasks require both labeling objects and determining label associations for parts of each object. Example applications include labeling segments of images or determining relevant parts of a text document when the training labels are available only at the image or document level. This task is usually referred to as multi-instance (MI) learning, where the learner typically receives a collection of labeled (or sometimes unlabeled) bags, each containing several segments (instances). We propose a semisupervised MI learning method for multilabel classification. Most MI learning methods treat instances in each bag as independent and identically distributed samples. However, in many practical applications, instances are related to each other and should not be considered independent. Our model discovers a latent low-dimensional space that captures structure within each bag. Further, unlike many other MI learning methods, which are primarily developed for binary classification, we model multiple classes jointly, thus also capturing possible dependencies between different classes. We develop our model within a semisupervised framework, which leverages both labeled and, typically, a larger set of unlabeled bags for training. We develop several efficient inference methods for our model. We first introduce a Markov chain Monte Carlo method for inference, which can handle arbitrary relations between bag labels and instance labels, including the standard hard-max MI assumption. We also develop an extension of our model that uses stochastic variational Bayes methods for inference, and thus scales better to massive data sets. Experiments show that our approach outperforms several MI learning and standard classification methods on both bag-level and instance-level label prediction. All code for replicating our experiments is available from https://github.com/hsoleimani/MLTM.
APA, Harvard, Vancouver, ISO, etc. styles
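The hard-max MI assumption mentioned in this abstract can be sketched in a few lines; this is a generic multilabel illustration, not the paper's model, and the instance scores and threshold below are invented.

```python
import numpy as np

def bag_labels_hard_max(instance_scores, threshold=0.5):
    """Hard-max MI assumption for multilabel bags: the bag-level score for
    each class is the max over its instances, and a class is assigned to the
    bag iff at least one instance exceeds the threshold.

    instance_scores: (n_instances, n_classes) per-instance class scores.
    Returns (bag_scores, bag_labels)."""
    scores = np.asarray(instance_scores)
    bag_scores = scores.max(axis=0)  # max-pool over the bag's instances
    return bag_scores, bag_scores >= threshold

# A bag of 3 instances scored against 4 classes
scores = [[0.9, 0.1, 0.3, 0.2],
          [0.2, 0.8, 0.1, 0.4],
          [0.1, 0.2, 0.2, 0.3]]
bag_scores, labels = bag_labels_hard_max(scores)
# bag_scores → [0.9, 0.8, 0.3, 0.4]; labels → [True, True, False, False]
```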
48

Foulds, James, and Eibe Frank. « A review of multi-instance learning assumptions ». Knowledge Engineering Review 25, no. 1 (March 2010): 1–25. http://dx.doi.org/10.1017/s026988890999035x.

Full text
Abstract:
Multi-instance (MI) learning is a variant of inductive machine learning, where each learning example contains a bag of instances instead of a single feature vector. The term commonly refers to the supervised setting, where each bag is associated with a label. This type of representation is a natural fit for a number of real-world learning scenarios, including drug activity prediction and image classification, and many MI learning algorithms have therefore been proposed. Any MI learning method must relate instances to bag-level class labels, but many types of relationships between instances and class labels are possible. All early work in MI learning assumed a specific MI concept class known to be appropriate for a drug activity prediction domain; however, this ‘standard MI assumption’ is not guaranteed to hold in other domains. Much of the recent work in MI learning has instead concentrated on a relaxed view of the MI problem, in which the standard MI assumption is dropped and alternative assumptions are considered. However, it is often not clearly stated which particular assumption is used and how it relates to other assumptions that have been proposed. In this paper, we aim to clarify the use of alternative MI assumptions by reviewing the work done in this area.
APA, Harvard, Vancouver, ISO, etc. styles
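The ‘standard MI assumption’ reviewed above is simple enough to state directly in code; a minimal sketch, not tied to any particular algorithm:

```python
def bag_label_standard_mi(instance_labels):
    """Standard MI assumption: a bag is labeled positive iff it contains
    at least one positive instance; an all-negative bag is negative."""
    return any(instance_labels)

# One active instance suffices to make the bag positive
print(bag_label_standard_mi([0, 0, 1]))  # → True
print(bag_label_standard_mi([0, 0, 0]))  # → False
```

The relaxed assumptions surveyed in the paper replace this OR-rule with other mappings from instance labels to the bag label (e.g. count- or proportion-based rules).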
49

Huo, Jing, Yang Gao, Wanqi Yang and Hujun Yin. « Multi-Instance Dictionary Learning for Detecting Abnormal Events in Surveillance Videos ». International Journal of Neural Systems 24, no. 03 (19 February 2014): 1430010. http://dx.doi.org/10.1142/s0129065714300101.

Full text
Abstract:
In this paper, a novel method termed Multi-Instance Dictionary Learning (MIDL) is presented for detecting abnormal events in crowded video scenes. In the multi-instance learning setting, each event (video clip) is modeled as a bag containing several sub-events (local observations), and each sub-event is regarded as an instance. MIDL jointly learns a dictionary for sparse representations of sub-events (instances) and multi-instance classifiers that classify events as normal or abnormal. We further adopt three different multi-instance models, yielding Max-Pooling-based MIDL (MP-MIDL), Instance-based MIDL (Inst-MIDL) and Bag-based MIDL (Bag-MIDL), for detecting both global and local abnormalities. MP-MIDL classifies observed events using bag features extracted via max-pooling over sparse representations. Inst-MIDL and Bag-MIDL classify observed events by the predicted values of the corresponding instances. The proposed MIDL is evaluated and compared with state-of-the-art methods for abnormal event detection on the UMN (global abnormalities) and UCSD (local abnormalities) datasets, and the results show that MP-MIDL and Bag-MIDL achieve comparable or improved detection performance. MIDL is also compared with other multi-instance learning methods on this task, with the MP-MIDL scheme obtaining superior results.
APA, Harvard, Vancouver, ISO, etc. styles
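The max-pooling step that MP-MIDL uses to form bag features can be illustrated with a generic sketch; the dictionary size and sparse codes below are invented, and this is not the paper's implementation:

```python
import numpy as np

def bag_feature_max_pool(sparse_codes):
    """Form a bag-level feature by max-pooling the sparse codes of its
    sub-events (instances): for each dictionary atom, keep its strongest
    activation anywhere in the clip."""
    return np.abs(np.asarray(sparse_codes)).max(axis=0)

# Hypothetical sparse codes: 4 sub-events coded over a 6-atom dictionary
codes = np.array([[0.0, 0.7, 0.0, 0.1, 0.0, 0.0],
                  [0.3, 0.0, 0.0, 0.0, 0.5, 0.0],
                  [0.0, 0.2, 0.0, 0.0, 0.0, 0.9],
                  [0.1, 0.0, 0.4, 0.0, 0.0, 0.0]])
bag = bag_feature_max_pool(codes)
# bag → [0.3, 0.7, 0.4, 0.1, 0.5, 0.9]
```

The resulting fixed-length bag vector can then be fed to an ordinary classifier, which is what makes the pooled representation convenient for event-level decisions.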
50

Chen, Sisi, and Liangxiao Jiang. « An Empirical Study on Multi-instance Learning ». International Journal on Advances in Information Sciences and Service Sciences 4, no. 6 (30 April 2012): 193–202. http://dx.doi.org/10.4156/aiss.vol4.issue6.23.

Full text
APA, Harvard, Vancouver, ISO, etc. styles