Academic literature on the topic 'Multi-multi instance learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-multi instance learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-multi instance learning"

1

Zhou, Zhi-Hua, Min-Ling Zhang, Sheng-Jun Huang, and Yu-Feng Li. "Multi-instance multi-label learning." Artificial Intelligence 176, no. 1 (January 2012): 2291–2320. http://dx.doi.org/10.1016/j.artint.2011.10.002.

2

Briggs, Forrest, Xiaoli Z. Fern, Raviv Raich, and Qi Lou. "Instance Annotation for Multi-Instance Multi-Label Learning." ACM Transactions on Knowledge Discovery from Data 7, no. 3 (September 2013): 1–30. http://dx.doi.org/10.1145/2500491.

3

Huang, Sheng-Jun, Wei Gao, and Zhi-Hua Zhou. "Fast Multi-Instance Multi-Label Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 11 (November 1, 2019): 2614–27. http://dx.doi.org/10.1109/tpami.2018.2861732.

4

Pei, Yuanli, and Xiaoli Z. Fern. "Constrained instance clustering in multi-instance multi-label learning." Pattern Recognition Letters 37 (February 2014): 107–14. http://dx.doi.org/10.1016/j.patrec.2013.07.002.

5

Xing, Yuying, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang, and Maozu Guo. "Multi-View Multi-Instance Multi-Label Learning Based on Collaborative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5508–15. http://dx.doi.org/10.1609/aaai.v33i01.33015508.

Abstract:
Multi-view Multi-instance Multi-label Learning (M3L) deals with complex objects encompassing diverse instances, represented with different feature views, and annotated with multiple labels. Existing M3L solutions only partially explore the inter or intra relations between objects (or bags), instances, and labels, which can convey important contextual information for M3L. As such, they may have compromised performance. In this paper, we propose a collaborative matrix factorization based solution called M3Lcmf. M3Lcmf first uses a heterogeneous network composed of nodes of bags, instances, and labels, to encode different types of relations via multiple relational data matrices. To preserve the intrinsic structure of the data matrices, M3Lcmf collaboratively factorizes them into low-rank matrices, explores the latent relationships between bags, instances, and labels, and selectively merges the data matrices. An aggregation scheme is further introduced to aggregate the instance-level labels into bag-level labels and to guide the factorization. An empirical study on benchmark datasets shows that M3Lcmf outperforms other related competitive solutions in both instance-level and bag-level prediction.
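
The aggregation step described in this abstract can be illustrated with a brief sketch. The snippet below is an illustrative simplification only: the max/mean pooling rules and the function name are assumptions made for exposition, not M3Lcmf's actual learned aggregation.

```python
# Hedged sketch: aggregating instance-level label scores to bag level.
# M3Lcmf's real aggregation is learned jointly with the factorization;
# here we simply pool per-instance scores within each bag.
import numpy as np

def aggregate_to_bag_level(instance_scores, bag_index, rule="max"):
    """instance_scores: (n_instances, n_labels) predicted scores.
    bag_index: bag id for each instance. Returns (n_bags, n_labels) scores."""
    bag_ids = sorted(set(bag_index))
    bag_scores = np.zeros((len(bag_ids), instance_scores.shape[1]))
    for b, bag in enumerate(bag_ids):
        rows = instance_scores[[i for i, g in enumerate(bag_index) if g == bag]]
        bag_scores[b] = rows.max(axis=0) if rule == "max" else rows.mean(axis=0)
    return bag_scores

scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.1, 0.3]])     # 3 instances, 2 labels
print(aggregate_to_bag_level(scores, bag_index=[0, 0, 1]))  # 2 bags x 2 labels
```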
6

Ohkura, Kazuhiro, and Ryota Washizaki. "Robust Instance-Based Reinforcement Learning for Multi-Robot Systems(Multi-agent and Learning,Session: TP2-A)." Abstracts of the international conference on advanced mechatronics : toward evolutionary fusion of IT and mechatronics : ICAM 2004.4 (2004): 51. http://dx.doi.org/10.1299/jsmeicam.2004.4.51_1.

7

Sun, Yu-Yin, Michael Ng, and Zhi-Hua Zhou. "Multi-Instance Dimensionality Reduction." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 587–92. http://dx.doi.org/10.1609/aaai.v24i1.7700.

Abstract:
Multi-instance learning deals with problems that treat bags of instances as training examples. In single-instance learning problems, dimensionality reduction is an essential step for high-dimensional data analysis and has been studied for years. The curse of dimensionality also exists in multi-instance learning tasks, yet this difficult task has not been studied before. Direct application of existing single-instance dimensionality reduction objectives to multi-instance learning tasks may not work well since it ignores the characteristic of multi-instance learning that the labels of bags are known while the labels of instances are unknown. In this paper, we propose an effective model and develop an efficient algorithm to solve the multi-instance dimensionality reduction problem. We formulate the objective as an optimization problem by considering orthonormality and sparsity constraints in the projection matrix for dimensionality reduction, and then solve it by gradient descent along the tangent space of the orthonormal matrices. We also propose an approximation for improving the efficiency. Experimental results validate the effectiveness of the proposed method.
8

JIANG, Yuan, Zhi-Hua ZHOU, and Yue ZHU. "Multi-instance multi-label new label learning." SCIENTIA SINICA Informationis 48, no. 12 (December 1, 2018): 1670–80. http://dx.doi.org/10.1360/n112018-00143.

9

Wang, Wei, and ZhiHua Zhou. "Learnability of multi-instance multi-label learning." Chinese Science Bulletin 57, no. 19 (April 28, 2012): 2488–91. http://dx.doi.org/10.1007/s11434-012-5133-z.


Dissertations / Theses on the topic "Multi-multi instance learning"

1

Foulds, James Richard. "Learning Instance Weights in Multi-Instance Learning." The University of Waikato, 2008. http://hdl.handle.net/10289/2460.

Abstract:
Multi-instance (MI) learning is a variant of supervised machine learning, where each learning example contains a bag of instances instead of just a single feature vector. MI learning has applications in areas such as drug activity prediction, fruit disease management and image classification. This thesis investigates the case where each instance has a weight value determining the level of influence that it has on its bag's class label. This is a more general assumption than most existing approaches use, and thus is more widely applicable. The challenge is to accurately estimate these weights in order to make predictions at the bag level. An existing approach known as MILES is retroactively identified as an algorithm that uses instance weights for MI learning, and is evaluated using a variety of base learners on benchmark problems. New algorithms for learning instance weights for MI learning are also proposed and rigorously evaluated on both artificial and real-world datasets. The new algorithms are shown to achieve better root mean squared error rates than existing approaches on artificial data generated according to the algorithms' underlying assumptions. Experimental results also demonstrate that the new algorithms are competitive with existing approaches on real-world problems.
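
The weighted-instance setting studied in this thesis can be made concrete with a small sketch. The weighted-average rule and the example numbers below are our own illustration of the general idea, not the thesis's algorithms, which focus on learning such weights from data.

```python
# Illustrative sketch (assumed rule, not from the thesis): each instance has a
# weight controlling its influence on the bag label; the bag prediction is a
# normalized weighted average of per-instance positiveness scores.

def predict_bag(instance_scores, instance_weights, threshold=0.5):
    total = sum(instance_weights)
    score = sum(s * w for s, w in zip(instance_scores, instance_weights)) / total
    return int(score >= threshold), score

scores = [0.9, 0.2, 0.4]      # per-instance positive-class scores
weights = [5.0, 1.0, 1.0]     # the first instance dominates the bag label
print(predict_bag(scores, weights))  # -> (1, ~0.73)
```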
2

Dong, Lin. "A Comparison of Multi-instance Learning Algorithms." The University of Waikato, 2006. http://hdl.handle.net/10289/2453.

Abstract:
Motivated by various challenging real-world applications, such as drug activity prediction and image retrieval, multi-instance (MI) learning has attracted considerable interest in recent years. Compared with standard supervised learning, the MI learning task is more difficult as the label information of each training example is incomplete. Many MI algorithms have been proposed. Some of them are specifically designed for MI problems whereas others have been upgraded or adapted from standard single-instance learning algorithms. Most algorithms have been evaluated on only one or two benchmark datasets, and there is a lack of systematic comparisons of MI learning algorithms. This thesis presents a comprehensive study of MI learning algorithms that aims to compare their performance and find a suitable way to properly address different MI problems. First, it briefly reviews the history of research on MI learning. Then it discusses five general classes of MI approaches that cover a total of 16 MI algorithms. After that, it presents empirical results for these algorithms that were obtained from 15 datasets which involve five different real-world application domains. Finally, some conclusions are drawn from these results: (1) applying suitable standard single-instance learners to MI problems can often generate the best result on the datasets that were tested, (2) algorithms exploiting the standard asymmetric MI assumption do not show significant advantages over approaches using the so-called collective assumption, and (3) different MI approaches are suitable for different application domains, and no MI algorithm works best on all MI problems.
3

Xu, Xin. "Statistical Learning in Multiple Instance Problems." The University of Waikato, 2003. http://hdl.handle.net/10289/2328.

Abstract:
Multiple instance (MI) learning is a relatively new topic in machine learning. It is concerned with supervised learning but differs from normal supervised learning in two points: (1) it has multiple instances in an example (and there is only one instance in an example in standard supervised learning), and (2) only one class label is observable for all the instances in an example (whereas each instance has its own class label in normal supervised learning). In MI learning there is a common assumption regarding the relationship between the class label of an example and the "unobservable" class labels of the instances inside it. This assumption, which is called the "MI assumption" in this thesis, states that "an example is positive if at least one of its instances is positive and negative otherwise". In this thesis, we first categorize current MI methods into a new framework. According to our analysis, there are two main categories of MI methods, instance-based and metadata-based approaches. Then we propose a new assumption for MI learning, called the "collective assumption". Although this assumption has been used in some previous MI methods, it has never been explicitly stated (in fact, for some of these methods it is actually claimed that they use the standard MI assumption stated above), and this is the first time that it is formally specified. Using this new assumption we develop new algorithms, more specifically two instance-based methods and one metadata-based method. All of these methods build probabilistic models and thus implement statistical learning algorithms. The exact generative models underlying these methods are explicitly stated and illustrated so that one may clearly understand the situations to which they can best be applied. The empirical results presented in this thesis show that they are competitive on standard benchmark datasets. Finally, we explore some practical applications of MI learning, both existing and new ones. This thesis makes three contributions: a new framework for MI learning, new MI methods based on this framework, and experimental results for new applications of MI learning.
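
A minimal sketch of the two bag-labelling assumptions contrasted in this abstract is given below; the averaging form of the collective assumption is one common simplification used here purely for illustration, not the exact generative models developed in the thesis.

```python
# Standard MI assumption vs. a simple (average-based) collective assumption.
# Bags are lists of per-instance positive-class probabilities.

def bag_label_standard_mi(instance_probs, threshold=0.5):
    """A bag is positive iff at least one instance is positive."""
    return int(any(p >= threshold for p in instance_probs))

def bag_label_collective(instance_probs, threshold=0.5):
    """The bag label reflects the average positiveness of all its instances."""
    return int(sum(instance_probs) / len(instance_probs) >= threshold)

bag = [0.9, 0.1, 0.05, 0.1]              # one strong positive among negatives
print(bag_label_standard_mi(bag))         # 1: positive under the MI assumption
print(bag_label_collective(bag))          # 0: negative under the collective view
```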
4

Wang, Wei. "Event Detection and Extraction from News Articles." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82238.

Abstract:
Event extraction is a type of information extraction (IE) that works on extracting specific knowledge of certain incidents from texts. Nowadays the amount of available information (such as news, blogs, and social media) grows exponentially. Therefore, it becomes imperative to develop algorithms that automatically extract machine-readable information from large volumes of text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges in current large-scale event encoding systems. (2) The second problem involves event detection and critical information extraction from news articles. (3) Third, the efforts concentrate on event encoding, which aims to extract event extent and arguments from texts. We start by investigating the two large-scale event extraction systems (ICEWS and GDELT) in the political science domain. We design a set of experiments to evaluate the quality of the extracted events from the two target systems, in terms of reliability and correctness. The results show that there exist significant discrepancies between the outputs of the automated systems and the hand-coded system, and that the accuracy of both systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction. Inspired by the successful application of deep learning in Natural Language Processing (NLP), we propose a Multi-Instance Convolutional Neural Network (MI-CNN) model for event detection and critical sentence extraction without sentence-level labels. To evaluate the model, we run a set of experiments on a real-world protest event dataset. The results show that our model outperforms strong baseline models and extracts meaningful key sentences without domain knowledge or manually designed features. We also extend the MI-CNN model and propose an MIMTRNN model for event extraction with distant supervision, to overcome the lack of fine-level labels and the small size of training data. The proposed MIMTRNN model systematically integrates RNNs, Multi-Instance Learning, and Multi-Task Learning into a unified framework. The RNN module aims to encode into the representation of entity mentions the sequential information as well as the dependencies between event arguments, which are very useful in the event extraction task. The Multi-Instance Learning paradigm means the system does not require precise labels at the entity-mention level, making it well suited to work with distant supervision for event extraction. The Multi-Task Learning module in our approach is designed to alleviate the potential overfitting problem caused by the relatively small size of the training data. The results of the experiments on two real-world datasets (Cyber-Attack and Civil Unrest) show that our model benefits from each component and significantly outperforms other baseline methods.
5

Wang, Xiaoguang. "Design and Analysis of Techniques for Multiple-Instance Learning in the Presence of Balanced and Skewed Class Distributions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32184.

Abstract:
With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, the Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Existing knowledge discovery and data analysis techniques have shown great success in many real-world applications, such as applying Automatic Target Recognition (ATR) methods to detect targets of interest in imagery, drug activity prediction, computer vision recognition, and so on. Among these techniques, Multiple-Instance (MI) learning is different from standard classification since it uses a set of bags containing many instances as input. The instances in each bag are not labeled; instead, the bags themselves are labeled. Much work and progress has been made in this area, but some problems remain uncovered. In this thesis, we focus on two topics of MI learning: (1) investigating the relationship between MI learning and other multiple pattern learning methods, which include multi-view learning, data fusion methods and multi-kernel SVM; (2) dealing with the class imbalance problem of MI learning. In the first topic, three different learning frameworks are presented for general MI learning. The first uses multiple-view approaches to deal with the MI problem, the second is a data fusion framework, and the third framework, which is an extension of the first, uses multiple-kernel SVM. Experimental results show that the approaches presented work well on solving the MI problem. The second topic is concerned with the imbalanced MI problem. Here we investigate the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. For this problem, we propose three solution frameworks: a data re-sampling framework, a cost-sensitive boosting framework and an adaptive instance-weighted boosting SVM (named IB_SVM) for MI learning. Experimental results, on both benchmark datasets and application datasets, show that the proposed frameworks are effective solutions for the imbalanced problem of MI learning.
6

Melki, Gabriella A. "Novel Support Vector Machines for Diverse Learning Paradigms." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5630.

Abstract:
This dissertation introduces novel support vector machines (SVM) for the following traditional and non-traditional learning paradigms: online classification, multi-target regression, multiple-instance classification, and data stream classification. Three multi-target support vector regression (SVR) models are first presented. The first involves building independent, single-target SVR models for each target. The second builds an ensemble of randomly chained models using the first single-target method as a base model. The third calculates the targets' correlations and forms a maximum correlation chain, which is used to build a single chained SVR model, improving the model's prediction performance while reducing computational complexity. Under the multi-instance paradigm, a novel SVM multiple-instance formulation and an algorithm with a bag-representative selector, named Multi-Instance Representative SVM (MIRSVM), are presented. The contribution trains the SVM based on bag-level information and is able to identify instances that highly impact classification, i.e. bag-representatives, for both positive and negative bags, while finding the optimal class separation hyperplane. Unlike other multi-instance SVM methods, this approach eliminates possible class imbalance issues by allowing both positive and negative bags to have at most one representative, which constitute the most contributing instances to the model. Due to the shortcomings of current popular SVM solvers, especially in the context of large-scale learning, the third contribution presents a novel stochastic, i.e. online, learning algorithm for solving the L1-SVM problem in the primal domain, dubbed OnLine Learning Algorithm using Worst-Violators (OLLAWV). This algorithm, unlike other stochastic methods, provides a novel stopping criterion and eliminates the need for a regularization term, using early stopping instead. Because of these characteristics, OLLAWV was proven to efficiently produce sparse models while maintaining competitive accuracy. OLLAWV's online nature and success for traditional classification inspired its implementation, as well as that of its predecessor, OnLine Learning Algorithm - List 2 (OLLA-L2), under the batch data stream classification setting. Unlike other existing methods, these two algorithms were chosen because their properties are a natural remedy for the time and memory constraints that arise from the data stream problem. OLLA-L2's low space complexity deals with the memory constraints imposed by the data stream setting, and OLLAWV's fast run time, early self-stopping capability, and ability to produce sparse models address both memory and time constraints. The preliminary results for OLLAWV showed superior performance over its predecessor, and it was chosen to be used in the final set of experiments against current popular data stream methods. Rigorous experimental studies and statistical analyses over various metrics and datasets were conducted in order to comprehensively compare the proposed solutions against modern, widely used methods from all paradigms. The experimental studies and analyses confirm that the proposals achieve better performance and more scalable solutions than the methods compared, making them competitive in their respective fields.
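
The bag-representative mechanism mentioned in this abstract can be illustrated as follows; the alternating selection-and-retraining loop below is a rough, assumed simplification built on scikit-learn's SVC, not the MIRSVM formulation itself.

```python
# Rough sketch of the bag-representative idea: pick one representative instance
# per bag and fit a standard SVM on the representatives, alternating between
# selection and training. The selection rule and initialization are assumptions.
import numpy as np
from sklearn.svm import SVC

def select_representatives(bags, model=None):
    reps = []
    for bag in bags:
        if model is None:
            reps.append(bag.mean(axis=0))                        # start from bag centroids
        else:
            reps.append(bag[np.argmax(model.decision_function(bag))])
    return np.vstack(reps)

def train_representative_svm(bags, labels, rounds=3):
    model = None
    for _ in range(rounds):
        X = select_representatives(bags, model)
        model = SVC(kernel="linear").fit(X, labels)
    return model

bags = [np.random.randn(6, 2) + 2.0, np.random.randn(5, 2) - 2.0]  # two toy bags
model = train_representative_svm(bags, labels=[1, 0])
print(model.predict([[2.0, 2.0], [-2.0, -2.0]]))
```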
7

Quispe, Sonia Castelo. "Uma abordagem visual para apoio ao aprendizado multi-instâncias." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11012016-095352/.

Abstract:
Multiple-instance learning (MIL) is a machine learning paradigm that aims at classifying sets (bags) of objects (instances), assigning labels only to the bags. In MIL, only the labels of bags are available for training, while the labels of the instances inside the bags are unknown. This problem is often addressed by selecting an instance to represent each bag, transforming the MIL problem into standard supervised learning. However, there is little user support for assessing this process. In this work, we propose a multi-scale tree-based visualization called MILTree that supports users in tasks related to MIL, as well as two new instance selection methods, MILTree-SI and MILTree-Med, to improve MIL models. MILTree is a two-level tree layout in which the first level projects the bags and the second level projects the instances belonging to each bag, allowing the user to explore and analyze multi-instance data in an intuitive way. The instance selection methods define an instance prototype for each bag, a crucial step for achieving high accuracy in multi-instance classification. Both methods use the MILTree layout to visually update the instance prototypes and can handle binary and multi-class datasets. To classify the bags, we use an SVM (Support Vector Machine) classifier. Moreover, with the support of the MILTree layout, one can also update the classification model by changing the training set in order to obtain a better classifier. Experimental results validate the effectiveness of our approach, showing that visual mining with MILTree can help users in multi-instance classification scenarios.
8

Zoghlami, Manel. "Multiple instance learning for sequence data : Application on bacterial ionizing radiation resistance prediction." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC078.

Abstract:
In the Multiple Instance Learning (MIL) problem for sequence data, the instances inside the bags are sequences. In some real-world applications such as bioinformatics, comparing a random couple of sequences makes no sense. In fact, each instance may have a structural and/or functional relationship with instances of other bags. Thus, the classification task should take this across-bag relationship into account. In this thesis, we present two novel MIL approaches for sequence data classification, named ABClass and ABSim. ABClass extracts motifs from related instances and uses them to encode sequences. A discriminative classifier is then applied to compute a partial classification result for each set of related sequences. ABSim uses a similarity measure to discriminate the related instances and to compute a scores matrix. For both approaches, an aggregation method is applied in order to generate the final classification result. We applied both approaches to the problem of bacterial ionizing radiation resistance prediction. The experimental results were satisfactory.
9

Dickens, James. "Depth-Aware Deep Learning Networks for Object Detection and Image Segmentation." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42619.

Abstract:
The rise of convolutional neural networks (CNNs) in the context of computer vision has occurred in tandem with the advancement of depth sensing technology. Depth cameras are capable of yielding two-dimensional arrays storing at each pixel the distance from objects and surfaces in a scene from a given sensor, aligned with a regular color image, obtaining so-called RGBD images. Inspired by prior models in the literature, this work develops a suite of RGBD CNN models to tackle the challenging tasks of object detection, instance segmentation, and semantic segmentation. Prominent architectures for object detection and image segmentation are modified to incorporate dual backbone approaches inputting RGB and depth images, combining features from both modalities through the use of novel fusion modules. For each task, the models developed are competitive with state-of-the-art RGBD architectures. In particular, the proposed RGBD object detection approach achieves 53.5% mAP on the SUN RGBD 19-class object detection benchmark, while the proposed RGBD semantic segmentation architecture yields 69.4% accuracy with respect to the SUN RGBD 37-class semantic segmentation benchmark. An original 13-class RGBD instance segmentation benchmark is introduced for the SUN RGBD dataset, for which the proposed model achieves 38.4% mAP. Additionally, an original depth-aware panoptic segmentation model is developed, trained, and tested for new benchmarks conceived for the NYUDv2 and SUN RGBD datasets. These benchmarks offer researchers a baseline for the task of RGBD panoptic segmentation on these datasets, where the novel depth-aware model outperforms a comparable RGB counterpart.
10

Huebner, Uwe. "Workshop: INFRASTRUKTUR DER 'DIGITALEN UNIVERSIAET'." Universitätsbibliothek Chemnitz, 2000. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200000692.

Abstract:
Joint workshop of the University Computing Centre and the Chair of Computer Networks and Distributed Systems (Faculty of Computer Science) of TU Chemnitz. Workshop topic: infrastructure of the 'Digital University'.

Book chapters on the topic "Multi-multi instance learning"

1

Vluymans, Sarah. "Multi-instance Learning." In Dealing with Imbalanced and Weakly Labelled Data in Machine Learning using Fuzzy and Rough Set Methods, 131–87. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04663-7_6.

2

Fürnkranz, Johannes, Philip K. Chan, Susan Craw, Claude Sammut, William Uther, Adwait Ratnaparkhi, Xin Jin, et al. "Multi-Instance Learning." In Encyclopedia of Machine Learning, 701–10. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_569.

3

Ray, Soumya, Stephen Scott, and Hendrik Blockeel. "Multi-Instance Learning." In Encyclopedia of Machine Learning and Data Mining, 864–75. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_955.

4

Herrera, Francisco, Sebastián Ventura, Rafael Bello, Chris Cornelis, Amelia Zafra, Dánel Sánchez-Tarragó, and Sarah Vluymans. "Multi-instance Classification." In Multiple Instance Learning, 35–66. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47759-6_3.

5

Herrera, Francisco, Sebastián Ventura, Rafael Bello, Chris Cornelis, Amelia Zafra, Dánel Sánchez-Tarragó, and Sarah Vluymans. "Multi-instance Regression." In Multiple Instance Learning, 127–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47759-6_6.

6

Retz, Robert, and Friedhelm Schwenker. "Active Multi-Instance Multi-Label Learning." In Analysis of Large and Complex Data, 91–101. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-25226-1_8.

7

Herrera, Francisco, Sebastián Ventura, Rafael Bello, Chris Cornelis, Amelia Zafra, Dánel Sánchez-Tarragó, and Sarah Vluymans. "Imbalanced Multi-instance Data." In Multiple Instance Learning, 191–208. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47759-6_9.

8

Li, Chenguang, Ying Yin, Yuhai Zhao, Guang Chen, and Libo Qin. "Multi-instance Multi-label Learning by Extreme Learning Machine." In Proceedings in Adaptation, Learning and Optimization, 325–34. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28373-9_28.

9

Zhou, Zhi-Hua, and Min-Ling Zhang. "Ensembles of Multi-instance Learners." In Machine Learning: ECML 2003, 492–502. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39857-8_44.

10

Tibo, Alessandro, Paolo Frasconi, and Manfred Jaeger. "A Network Architecture for Multi-Multi-Instance Learning." In Machine Learning and Knowledge Discovery in Databases, 737–52. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71249-9_44.


Conference papers on the topic "Multi-multi instance learning"

1

Zhang, Ya-Lin, and Zhi-Hua Zhou. "Multi-Instance Learning with Key Instance Shift." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/481.

Abstract:
Multi-instance learning (MIL) deals with the tasks where each example is represented by a bag of instances. A bag is positive if it contains at least one positive instance, and negative otherwise. The positive instances are also called key instances. Only bag labels are observed, whereas specific instance labels are not available in MIL. Previous studies typically assume that training and test data follow the same distribution, which may be violated in many real-world tasks. In this paper, we address the problem that the distribution of key instances varies between training and test phase. We refer to this problem as MIL with key instance shift and solve it by proposing an embedding based method MIKI. Specifically, to transform the bags into informative vectors, we propose a weighted multi-class model to select the instances with high positiveness as instance prototypes. Then we learn the importance weights for transformed bag vectors and incorporate original instance weights into them to narrow the gap between training/test distributions. Experimental results validate the effectiveness of our approach when key instance shift occurs.
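
The bag-to-vector transformation described in this abstract follows the general prototype-embedding idea used in several MIL methods; the sketch below illustrates only that general idea (Gaussian similarities to fixed, given prototypes), not MIKI's weighted multi-class prototype selection or its importance weighting.

```python
# Illustrative MILES-style embedding: represent a bag by its maximum similarity
# to each instance prototype. Prototype selection is assumed to happen elsewhere.
import numpy as np

def embed_bag(bag, prototypes, sigma=1.0):
    """bag: (n_instances, n_features); prototypes: (n_prototypes, n_features).
    Returns one similarity value per prototype."""
    dists = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return np.exp(-(dists ** 2) / (sigma ** 2)).max(axis=0)

prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
bag = np.array([[0.1, 0.0], [0.9, 1.1]])
print(embed_bag(bag, prototypes))   # informative vector for this bag
```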
2

Xing, Yuying, Guoxian Yu, Jun Wang, Carlotta Domeniconi, and Xiangliang Zhang. "Weakly-Supervised Multi-view Multi-instance Multi-label Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/432.

Abstract:
Multi-view, Multi-instance, and Multi-label Learning (M3L) can model complex objects (bags), which are represented with different feature views, made of diverse instances, and annotated with discrete non-exclusive labels. Existing M3L approaches assume a complete correspondence between bags and views, and also assume a complete annotation for training. However, in practice, neither the correspondence between bags, nor the bags' annotations are complete. To tackle such a weakly-supervised M3L task, a solution called WSM3L is introduced. WSM3L adapts multimodal dictionary learning to learn a shared dictionary (representational space) across views and individual encoding vectors of bags for each view. The label similarity and feature similarity of encoded bags are jointly used to match bags across views. In addition, it replenishes the annotations of a bag based on the annotations of its neighborhood bags, and introduces a dispatch and aggregation term to dispatch bag-level annotations to instances and to reversely aggregate instance-level annotations to bags. WSM3L unifies these objectives and processes in a joint objective function to predict the instance-level and bag-level annotations in a coordinated fashion, and it further introduces an alternative solution for the objective function optimization. Extensive experimental results show the effectiveness of WSM3L on benchmark datasets.
3

Huang, Sheng-Jun, Nengneng Gao, and Songcan Chen. "Multi-instance multi-label active learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/262.

Abstract:
Multi-instance multi-label learning (MIML) has been successfully applied to many real-world applications. Along with the enhanced expressive power, the cost of labelling a MIML example increases significantly, and it thus becomes important to train an effective MIML model with as few labelled examples as possible. Active learning, which actively selects the most valuable data to query their labels, is a main approach to reducing labeling cost. Existing active learning methods have achieved great success in traditional learning tasks, but cannot be directly applied to MIML problems. In this paper, we propose a MIML active learning algorithm which exploits diversity and uncertainty in both the input and output space to query the most valuable information. This algorithm designs a novel query strategy specifically for MIML objects and acquires more precise information from the oracle without additional cost. Based on the queried information, the MIML model is then effectively trained by simultaneously optimizing the relative rank among instances and labels.
4

Pham, Anh T., and Raviv Raich. "Kernel-based instance annotation in multi-instance multi-label learning." In 2014 IEEE 24th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2014. http://dx.doi.org/10.1109/mlsp.2014.6958876.

5

Xu, Ye, Wei Ping, and Andrew T. Campbell. "Multi-instance Metric Learning." In 2011 IEEE 11th International Conference on Data Mining (ICDM). IEEE, 2011. http://dx.doi.org/10.1109/icdm.2011.106.

6

Blockeel, Hendrik, David Page, and Ashwin Srinivasan. "Multi-instance tree learning." In Proceedings of the 22nd International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1102351.1102359.

7

Wei, Xiu-Shen, Jianxin Wu, and Zhi-Hua Zhou. "Scalable Multi-instance Learning." In 2014 IEEE International Conference on Data Mining (ICDM). IEEE, 2014. http://dx.doi.org/10.1109/icdm.2014.16.

8

Wang, Qifan, Gal Chechik, Chen Sun, and Bin Shen. "Instance-Level Label Propagation with Multi-Instance Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/410.

Abstract:
Label propagation is a popular semi-supervised learning technique that transfers information from labeled examples to unlabeled examples through a graph. Most label propagation methods construct a graph based on example-to-example similarity, assuming that the resulting graph connects examples that share similar labels. Unfortunately, example-level similarity is sometimes badly defined. For instance, two images may contain two different objects but have similar overall appearance due to a large, similar background. In this case, computing similarities based on whole images would fail to propagate information to the right labels. This paper proposes a novel Instance-Level Label Propagation (ILLP) approach that integrates label propagation with multi-instance learning. Each example is treated as containing multiple instances, as in the case of an image consisting of multiple regions. We first construct a graph based on instance-level similarity and then simultaneously identify the instances carrying the labels and propagate the labels across instances in the graph. Optimization is based on an iterative Expectation Maximization (EM) algorithm. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed approach over several state-of-the-art methods.
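
For context, the classical graph-based label propagation iteration that approaches like ILLP build on can be sketched in a few lines; the snippet below shows only that generic iteration with an assumed affinity matrix, not ILLP's instance graph, EM procedure, or bag constraints.

```python
# Generic label propagation: F <- alpha * S @ F + (1 - alpha) * Y, where S is
# the symmetrically normalized affinity matrix and Y holds the known labels.
import numpy as np

def propagate(W, Y, alpha=0.8, iters=50):
    """W: (n, n) symmetric affinity matrix; Y: (n, c) one-hot rows for labelled
    nodes, zeros elsewhere. Returns propagated label scores."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))        # D^{-1/2} W D^{-1/2} normalization
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

W = np.array([[0.0, 1.0, 0.2], [1.0, 0.0, 0.1], [0.2, 0.1, 0.0]])
Y = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # nodes 0 and 2 labelled
print(propagate(W, Y).argmax(axis=1))                 # predicted class per node
```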
9

Herman, Gunawan, Getian Ye, Yang Wang, Jie Xu, and Bang Zhang. "Multi-instance learning with relational information of instances." In 2009 Workshop on Applications of Computer Vision (WACV). IEEE, 2009. http://dx.doi.org/10.1109/wacv.2009.5403078.

10

Pham, Anh T., Raviv Raich, and Xiaoli Z. Fern. "Efficient instance annotation in multi-instance learning." In 2014 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2014. http://dx.doi.org/10.1109/ssp.2014.6884594.


Reports on the topic "Multi-multi instance learning"

1

Ray, Jaideep, Fulton Wang, and Christopher Young. A Multi-Instance learning Framework for Seismic Detectors. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1673169.

2

Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-017.

Abstract:
Riprap rock and aggregates are extensively used in structural, transportation, geotechnical, and hydraulic engineering applications. Field determination of morphological properties of aggregates such as size and shape can greatly facilitate the quality assurance/quality control (QA/QC) process for proper aggregate material selection and engineering use. Many aggregate imaging approaches have been developed to characterize the size and morphology of individual aggregates by computer vision. However, 3D field characterization of aggregate particle morphology is challenging both during the quarry production process and at construction sites, particularly for aggregates in stockpile form. This research study presents a 3D reconstruction-segmentation-completion approach based on deep learning techniques by combining three developed research components: field 3D reconstruction procedures, 3D stockpile instance segmentation, and 3D shape completion. The approach was designed to reconstruct aggregate stockpiles from multi-view images, segment the stockpile into individual instances, and predict the unseen side of each instance (particle) based on the partial visible shapes. Based on the dataset constructed from individual aggregate models, a state-of-the-art 3D instance segmentation network and a 3D shape completion network were implemented and trained, respectively. The application of the integrated approach was demonstrated on re-engineered stockpiles and field stockpiles. The validation of results using ground-truth measurements showed satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. The algorithms are integrated into a software application with a user-friendly graphical user interface. Based on the findings of this study, this stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site QA/QC tasks of riprap rock and aggregate stockpiles.