Journal articles on the topic 'Multi-view machine learning'

Consult the top 50 journal articles for your research on the topic 'Multi-view machine learning.'


1

Wang, Zhe, Mingzhe Lu, Zengxin Niu, Xiangyang Xue, and Daqi Gao. "Cost-Sensitive Multi-View Learning Machine." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 03 (May 2014): 1451004. http://dx.doi.org/10.1142/s0218001414510045.

Abstract:
Multi-view learning aims to learn effectively from data represented by multiple independent sets of attributes, where each set is taken as one view of the original data. In real-world applications, the views are often acquired at unequal cost. Taking web-page classification as an example, it is cheaper to obtain the words on the page itself (view one) than the words contained in the anchor texts of inbound hyperlinks (view two). However, almost all existing multi-view learning methods consider neither the cost of acquiring the views nor the cost of evaluating them. In this paper, we argue that different views should adopt different representations and incur different acquisition costs. We therefore introduce a new view-dependent cost, distinct from the existing class-dependent and example-dependent costs. To this end, we generalize the multi-view learning framework with cost-sensitive techniques and propose a Cost-sensitive Multi-View Learning Machine, named CMVLM for short. In implementation, we measure both the acquisition cost and the discriminant scatter of each view. Then, after eliminating useless views with a predefined threshold, we use the reserved views to train the final classifier. Experimental results on a broad range of data sets, including benchmark UCI, image, and bioinformatics data sets, validate that the proposed algorithm effectively reduces the total cost and achieves competitive or even better classification performance. The contributions of this paper are: (1) proposing, for the first time, a view-dependent cost; (2) establishing a cost-sensitive multi-view learning framework; (3) developing a wrapper technique that is applicable to most multiple kernel based classifiers.
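The view-selection step described above can be illustrated with a minimal sketch (not the authors' CMVLM): each view gets a crude discriminant-scatter score divided by its acquisition cost, views below a threshold are dropped, and a standard SVM is trained on the reserved views. The scoring function, threshold, costs, and toy data are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def view_utility(X_view, y):
    """Crude discriminant-scatter score for a binary problem: squared distance
    between class means over the pooled within-class variance."""
    m0, m1 = X_view[y == 0].mean(axis=0), X_view[y == 1].mean(axis=0)
    within = X_view[y == 0].var(axis=0).sum() + X_view[y == 1].var(axis=0).sum()
    return np.sum((m0 - m1) ** 2) / (within + 1e-12)

def select_views(views, y, costs, threshold=0.1):
    """Keep views whose scatter-per-unit-cost exceeds a threshold (assumed rule)."""
    return [i for i, (V, c) in enumerate(zip(views, costs))
            if view_utility(V, y) / c >= threshold]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
# Two synthetic views: view 0 is informative and cheap, view 1 is noisy and expensive.
views = [y[:, None] + 0.5 * rng.normal(size=(200, 5)),
         rng.normal(size=(200, 8))]
costs = [1.0, 4.0]

kept = select_views(views, y, costs)
X = np.hstack([views[i] for i in kept])
clf = SVC(kernel="rbf").fit(X, y)
print("reserved views:", kept, "training accuracy:", clf.score(X, y))
```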
2

Wang, Qiang, Yong Dou, Xinwang Liu, Qi Lv, and Shijie Li. "Multi-view clustering with extreme learning machine." Neurocomputing 214 (November 2016): 483–94. http://dx.doi.org/10.1016/j.neucom.2016.06.035.

3

Sun, Shiliang. "A survey of multi-view machine learning." Neural Computing and Applications 23, no. 7-8 (February 17, 2013): 2031–38. http://dx.doi.org/10.1007/s00521-013-1362-6.

4

Karaaba, Mahir Faik, Lambert Schomaker, and Marco Wiering. "Machine learning for multi-view eye-pair detection." Engineering Applications of Artificial Intelligence 33 (August 2014): 69–79. http://dx.doi.org/10.1016/j.engappai.2014.04.008.

5

Zhang, Yongshan, Jia Wu, Chuan Zhou, Zhihua Cai, Jian Yang, and Philip S. Yu. "Multi-View Fusion with Extreme Learning Machine for Clustering." ACM Transactions on Intelligent Systems and Technology 10, no. 5 (November 14, 2019): 1–23. http://dx.doi.org/10.1145/3340268.

6

Tang, Jingjing, Dewei Li, Yingjie Tian, and Dalian Liu. "Multi-view learning based on nonparallel support vector machine." Knowledge-Based Systems 158 (October 2018): 94–108. http://dx.doi.org/10.1016/j.knosys.2018.05.036.

7

Zhu, Changming, Chao Chen, Rigui Zhou, Lai Wei, and Xiafen Zhang. "A new multi-view learning machine with incomplete data." Pattern Analysis and Applications 23, no. 3 (February 11, 2020): 1085–116. http://dx.doi.org/10.1007/s10044-020-00863-y.

8

Wan, Zhibin, Changqing Zhang, Pengfei Zhu, and Qinghua Hu. "Multi-View Information-Bottleneck Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10085–92. http://dx.doi.org/10.1609/aaai.v35i11.17210.

Abstract:
In real-world applications, clustering or classification can usually be improved by fusing information from different views. Therefore, unsupervised representation learning on multi-view data becomes a compelling topic in machine learning. In this paper, we propose a novel and flexible unsupervised multi-view representation learning model termed Collaborative Multi-View Information Bottleneck Networks (CMIB-Nets), which comprehensively explores the common latent structure and the view-specific intrinsic information, and discards the superfluous information in the data significantly improving the generalization capability of the model. Specifically, our proposed model relies on the information bottleneck principle to integrate the shared representation among different views and the view-specific representation of each view, prompting the multi-view complete representation and flexibly balancing the complementarity and consistency among multiple views. We conduct extensive experiments (including clustering analysis, robustness experiment, and ablation study) on real-world datasets, which empirically show promising generalization ability and robustness compared to state-of-the-arts.
9

姚, 瑞. "Semi-Supervised Learning Machine Based on Multi-View Twin Support Vector Machine." Operations Research and Fuzziology 09, no. 02 (2019): 177–88. http://dx.doi.org/10.12677/orf.2019.92021.

10

Li, Yanchao, Yongli Wang, Junlong Zhou, and Xiaohui Jiang. "Robust Transductive Support Vector Machine for Multi-View Classification." Journal of Circuits, Systems and Computers 27, no. 12 (June 22, 2018): 1850185. http://dx.doi.org/10.1142/s0218126618501852.

Abstract:
Semi-Supervised Learning (SSL) aims to improve the performance of models trained with a small set of labeled data and a large collection of unlabeled data. Learning multi-view representations from different perspectives of the data has proved very effective for improving generalization performance. However, existing semi-supervised multi-view learning methods tend to ignore the specific difficulty of different unlabeled examples, such as outliers and noise, leading to error-prone classification. To address this problem, this paper proposes the Robust Transductive Support Vector Machine (RTSVM), which introduces the margin distribution into TSVM and is robust to outliers and noise. Specifically, the first-order (margin mean) and second-order (margin variance) statistics are regularized into TSVM in order to achieve strong generalization performance. Then, we impose a global similarity constraint between distinct RTSVMs, each trained from one view of the data. Moreover, our algorithm converges quickly by using the concave–convex procedure. Finally, we validate the proposed method on a variety of multi-view datasets, and the experimental results demonstrate that it is effective. By exploiting a large number of unlabeled examples and being robust to outliers and noise among different views, our method shows generalization performance superior to single-view learning and other semi-supervised multi-view learning methods.
11

Wang, Zhe, Jin Xu, Songcan Chen, and Daqi Gao. "Regularized multi-view learning machine based on response surface technique." Neurocomputing 97 (November 2012): 201–13. http://dx.doi.org/10.1016/j.neucom.2012.05.027.

12

Zhu, Changming, Duoqian Miao, Rigui Zhou, and Lai Wei. "Weight-and-Universum-based semi-supervised multi-view learning machine." Soft Computing 24, no. 14 (December 3, 2019): 10657–79. http://dx.doi.org/10.1007/s00500-019-04572-5.

13

Gao, Jingyue, Xiting Wang, Yasha Wang, and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.

Abstract:
Recommender systems have been playing an increasingly important role in our daily life due to the explosive growth of information. Accuracy and explainability are two core aspects when we evaluate a recommendation model and have become one of the fundamental trade-offs in machine learning. In this paper, we propose to alleviate the trade-off between accuracy and explainability by developing an explainable deep model that combines the advantages of deep learning-based models and existing explainable methods. The basic idea is to build an initial network based on an explainable deep hierarchy (e.g., Microsoft Concept Graph) and improve the model accuracy by optimizing key variables in the hierarchy (e.g., node importance and relevance). To ensure accurate rating prediction, we propose an attentive multi-view learning framework. The framework enables us to handle sparse and noisy data by co-regularizing among different feature levels and combining predictions attentively. To mine readable explanations from the hierarchy, we formulate personalized explanation generation as a constrained tree node selection problem and propose a dynamic programming algorithm to solve it. Experimental results show that our model outperforms state-of-the-art methods in terms of both accuracy and explainability.
14

Wang, Huiru, and Zhijian Zhou. "Multi-view learning based on maximum margin of twin spheres support vector machine." Journal of Intelligent & Fuzzy Systems 40, no. 6 (June 21, 2021): 11273–86. http://dx.doi.org/10.3233/jifs-202427.

Abstract:
Multi-view learning utilizes information from multiple representations to improve classification performance. Most multi-view learning algorithms based on support vector machines seek separating hyperplanes in different feature spaces, which may be unreasonable in practical applications. Besides, most of them are designed for balanced data, which may lead to poor performance on imbalanced data. In this work, a novel multi-view learning algorithm based on the maximum margin of twin spheres support vector machine (MvMMTSSVM) is introduced. The proposed method follows both the maximum margin principle and the consensus principle. Following the maximum margin principle, it constructs two homocentric spheres and tries to maximize the margin between the two spheres for each view separately. To realize the consensus principle, consistency constraints between the two views are introduced into the constraint conditions. Therefore, the method not only deals with multi-view class-imbalanced data effectively, but also has fast calculation efficiency. To verify the validity and rationality of MvMMTSSVM, we conduct experiments on 24 binary datasets and use the Friedman test to verify its effectiveness.
15

Wang, Qiang, Yong Dou, Xinwang Liu, Fei Xia, Qi Lv, and Ke Yang. "Local kernel alignment based multi-view clustering using extreme learning machine." Neurocomputing 275 (January 2018): 1099–111. http://dx.doi.org/10.1016/j.neucom.2017.09.060.

16

Iosifidis, Alexandros, Anastasios Tefas, and Ioannis Pitas. "Regularized extreme learning machine for multi-view semi-supervised action recognition." Neurocomputing 145 (December 2014): 250–62. http://dx.doi.org/10.1016/j.neucom.2014.05.036.

17

Iosifidis, Alexandros, Anastasios Tefas, and Ioannis Pitas. "Human Action Recognition Based on Multi-View Regularized Extreme Learning Machine." International Journal on Artificial Intelligence Tools 24, no. 05 (October 2015): 1540020. http://dx.doi.org/10.1142/s0218213015400205.

Abstract:
In this paper, we employ multiple Single-hidden Layer Feedforward Neural Networks for multi-view action recognition. We propose an extension of the Extreme Learning Machine algorithm that is able to exploit multiple action representations and scatter information in the corresponding ELM spaces for the calculation of the networks’ parameters and the determination of optimized network combination weights. The proposed algorithm is evaluated by using two state-of-the-art action video representation approaches on five publicly available action recognition databases designed for different application scenarios. Experimental comparison of the proposed approach with three commonly used video representation combination approaches and relating classification schemes illustrates that ELM networks employing a supervised view combination scheme generally outperform those exploiting unsupervised combination approaches, as well as that the exploitation of scatter information in ELM-based neural network training enhances the network’s performance.
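As a rough illustration of the ingredients named above, the following sketch builds one basic extreme learning machine per view (random hidden layer, ridge-regression output weights) and combines the per-view score matrices with simple supervised weights. It is not the authors' scatter-regularized formulation; the activation, ridge parameter, accuracy-based weighting, and toy data are assumptions.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=100, ridge=1e-2, seed=0):
    """Basic ELM: random hidden layer, ridge-regression output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_scores(X, params):
    W, b, beta = params
    return np.tanh(X @ W + b) @ beta

# Toy data: two noisy views of the same 3-class problem.
rng = np.random.default_rng(1)
y = rng.integers(0, 3, size=300)
Y = np.eye(3)[y]                                   # one-hot targets
views = [np.eye(3)[y] + rng.normal(scale=0.8, size=(300, 3)),
         np.eye(3)[y] + rng.normal(scale=1.2, size=(300, 3))]

models = [elm_fit(V, Y, seed=k) for k, V in enumerate(views)]
# Combination weights from per-view training accuracy (a simple supervised heuristic).
accs = np.array([np.mean(elm_scores(V, m).argmax(1) == y) for V, m in zip(views, models)])
weights = accs / accs.sum()
fused = sum(w * elm_scores(V, m) for w, V, m in zip(weights, views, models))
print("fused training accuracy:", np.mean(fused.argmax(1) == y))
```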
18

Ding, Yulian, Xiujuan Lei, Bo Liao, and Fang-Xiang Wu. "Machine learning approaches for predicting biomolecule–disease associations." Briefings in Functional Genomics 20, no. 4 (February 8, 2021): 273–87. http://dx.doi.org/10.1093/bfgp/elab002.

Abstract:
Abstract Biomolecules, such as microRNAs, circRNAs, lncRNAs and genes, are functionally interdependent in human cells, and all play critical roles in diverse fundamental and vital biological processes. The dysregulations of such biomolecules can cause diseases. Identifying the associations between biomolecules and diseases can uncover the mechanisms of complex diseases, which is conducive to their diagnosis, treatment, prognosis and prevention. Due to the time consumption and cost of biologically experimental methods, many computational association prediction methods have been proposed in the past few years. In this study, we provide a comprehensive review of machine learning-based approaches for predicting disease–biomolecule associations with multi-view data sources. Firstly, we introduce some databases and general strategies for integrating multi-view data sources in the prediction models. Then we discuss several feature representation methods for machine learning-based prediction models. Thirdly, we comprehensively review machine learning-based prediction approaches in three categories: basic machine learning methods, matrix completion-based methods and deep learning-based methods, while discussing their advantages and disadvantages. Finally, we provide some perspectives for further improving biomolecule–disease prediction methods.
19

Khan, Muhammad Raza, and Joshua E. Blumenstock. "Multi-GCN: Graph Convolutional Networks for Multi-View Networks, with Applications to Global Poverty." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 606–13. http://dx.doi.org/10.1609/aaai.v33i01.3301606.

Abstract:
With the rapid expansion of mobile phone networks in developing countries, large-scale graph machine learning has gained sudden relevance in the study of global poverty. Recent applications range from humanitarian response and poverty estimation to urban planning and epidemic containment. Yet the vast majority of computational tools and algorithms used in these applications do not account for the multi-view nature of social networks: people are related in myriad ways, but most graph learning models treat relations as binary. In this paper, we develop a graph-based convolutional network for learning on multi-view networks. We show that this method outperforms state-of-the-art semi-supervised learning algorithms on three different prediction tasks using mobile phone datasets from three different developing countries. We also show that, while designed specifically for use in poverty research, the algorithm also outperforms existing benchmarks on a broader set of learning tasks on multi-view networks, including node labelling in citation networks.
20

Zhang, Zhi Cong, Kai Shun Hu, Hui Yu Huang, Shuai Li, and Shao Yong Zhao. "A Multi-Step Reinforcement Learning Algorithm." Applied Mechanics and Materials 44-47 (December 2010): 3611–15. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.3611.

Abstract:
Reinforcement learning (RL) is a state- or action-value based machine learning method which approximately solves large-scale Markov Decision Processes (MDP) or Semi-Markov Decision Processes (SMDP). A multi-step RL algorithm called Sarsa(λ,k) is proposed, which is a compromise between Sarsa and Sarsa(λ). It is equivalent to Sarsa if k is 1 and equivalent to Sarsa(λ) if k is infinite. Sarsa(λ,k) adjusts its performance by setting the value of k. Two forms of Sarsa(λ,k), forward-view Sarsa(λ,k) and backward-view Sarsa(λ,k), are constructed and proved equivalent in off-line updating.
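For readers unfamiliar with this family of algorithms, the sketch below implements a generic tabular n-step (forward-view) Sarsa update together with a toy chain environment. It is the textbook multi-step Sarsa scheme rather than the authors' Sarsa(λ,k); the environment interface and hyper-parameters are placeholders.

```python
import numpy as np

def n_step_sarsa(env, n=3, episodes=200, alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    """Generic tabular n-step Sarsa. env is assumed to expose reset() -> state,
    step(a) -> (state, reward, done), and the attributes n_states / n_actions."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.n_states, env.n_actions))

    def policy(s):
        return rng.integers(env.n_actions) if rng.random() < eps else int(Q[s].argmax())

    for _ in range(episodes):
        s = env.reset()
        states, actions, rewards = [s], [policy(s)], []
        T, t = np.inf, 0
        while True:
            if t < T:
                s, r, done = env.step(actions[-1])
                rewards.append(r)
                if done:
                    T = t + 1
                else:
                    states.append(s)
                    actions.append(policy(s))
            tau = t - n + 1                       # time step whose estimate is updated
            if tau >= 0:
                end = int(min(tau + n, T))        # T may still be inf; tau + n is finite
                G = sum(gamma ** (i - tau) * rewards[i] for i in range(tau, end))
                if tau + n < T:
                    G += gamma ** n * Q[states[tau + n], actions[tau + n]]
                Q[states[tau], actions[tau]] += alpha * (G - Q[states[tau], actions[tau]])
            if tau == T - 1:
                break
            t += 1
    return Q

class ChainEnv:
    """Tiny deterministic chain MDP used only to exercise the sketch."""
    n_states, n_actions = 6, 2
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + 1, 5) if a == 1 else max(self.s - 1, 0)
        done = self.s == 5
        return self.s, (1.0 if done else 0.0), done

print(n_step_sarsa(ChainEnv()).round(2))
```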
21

Liu, Bo, Haowen Zhong, and Yanshan Xiao. "New Multi-View Classification Method with Uncertain Data." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–23. http://dx.doi.org/10.1145/3458282.

Abstract:
Multi-view classification aims at designing a multi-view learning strategy to train a classifier from multi-view data, which are easily collected in practice. Most existing works focus on multi-view classification by assuming the multi-view data are collected with precise information. However, in real-life applications we often collect uncertain multi-view data, because the collection process is corrupted by noise. In this case, this article proposes a novel approach, called uncertain multi-view learning with support vector machine (UMV-SVM), to cope with the problem of multi-view learning with uncertain data. The method first enforces agreement among all the views to seek complementary information in the multi-view data, and takes the uncertainty of the data into consideration by modeling the reachability area of the noise. It then proposes an iterative framework to solve the UMV-SVM model and obtain the multi-view classifier for prediction. Extensive experiments on real-life datasets have shown that the proposed UMV-SVM achieves better performance for uncertain multi-view classification than state-of-the-art multi-view classification methods.
22

Zhang, Yan, Danjv Lv, and Yili Zhao. "Multiple-View Active Learning for Environmental Sound Classification." International Journal of Online Engineering (iJOE) 12, no. 12 (December 25, 2016): 49. http://dx.doi.org/10.3991/ijoe.v12i12.6458.

Abstract:
Multi-view learning with multiple distinct feature sets is a rapidly growing direction in machine learning, boosting the performance of supervised classification when few labeled data are available. The paper proposes Multi-view Simple Disagreement Sampling (MV-SDS) and Multi-view Entropy Priority Sampling (MV-EPS) as sample-selection strategies in multi-view active learning. For the given environmental sound data, the CELP features in 10 dimensions and the MFCC features in 13 dimensions serve as the two views. Experiments with a single-view single classifier (SVML), MV-SDS, and MV-EPS on the two views extracted from the environmental sound data, CELP & MFCC, are carried out to illustrate the proposed methods, and their performances are compared under different percentages of training examples. The experimental results show that multi-view active learning can effectively improve classification performance for environmental sound data, and that the MV-EPS method outperforms MV-SDS.
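The two sampling strategies named above can be sketched in a few lines: disagreement sampling queries pool items on which per-view classifiers disagree, and entropy-priority sampling queries the items with the highest averaged predictive entropy. The SVM view classifiers, batch size, and synthetic two-view pool below are placeholders, not the paper's CELP/MFCC pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def disagreement_sampling(clfs, pool_views, batch=10):
    """Pick unlabeled items on which the per-view classifiers disagree."""
    preds = np.array([c.predict(V) for c, V in zip(clfs, pool_views)])
    disagree = np.where(preds.min(axis=0) != preds.max(axis=0))[0]
    return disagree[:batch]

def entropy_priority_sampling(clfs, pool_views, batch=10):
    """Pick items with the highest mean predictive entropy across views."""
    probs = np.mean([c.predict_proba(V) for c, V in zip(clfs, pool_views)], axis=0)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:batch]

# Toy two-view pool; the 10- and 13-dimensional blocks only mimic the view sizes.
rng = np.random.default_rng(0)
y_lab = rng.integers(0, 2, size=60)
lab_views = [y_lab[:, None] + rng.normal(size=(60, 10)),
             y_lab[:, None] + rng.normal(size=(60, 13))]
pool_views = [rng.normal(size=(200, 10)), rng.normal(size=(200, 13))]

clfs = [SVC(probability=True).fit(V, y_lab) for V in lab_views]
print("disagreement picks:", disagreement_sampling(clfs, pool_views))
print("entropy picks:", entropy_priority_sampling(clfs, pool_views))
```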
23

Zhu, Jinlong, Xiujian Hu, Chao Zhang, and Guanglei Sheng. "Multi-View Modeling Method for Functional MRI Images." Journal of Medical Imaging and Health Informatics 11, no. 2 (February 1, 2021): 432–36. http://dx.doi.org/10.1166/jmihi.2021.3300.

Abstract:
This paper proposes a new unsupervised fuzzy feature mapping method based on fMRI data and combines it with multi-view support vector machine to construct a classification model for computer-aided diagnosis of autism. Firstly, a multi-output TSK fuzzy system is adopted to map the original feature data to the linear separable high-dimensional space. Then a manifold regularization learning framework is introduced, and a new method of unsupervised fuzzy feature learning is proposed. Finally, a multi-view SVM algorithm is used for classification tasks. The experimental results show that the method in this paper can effectively extract important features from the fMRI data in the resting state and improve the model's interpretability on the premise of ensuring the superior and stable classification performance of the model.
24

Huang, Yanquan, Haoliang Yuan, and Loi Lei Lai. "Latent multi-view semi-supervised classification by using graph learning." International Journal of Wavelets, Multiresolution and Information Processing 18, no. 05 (June 20, 2020): 2050039. http://dx.doi.org/10.1142/s0219691320500393.

Abstract:
Multi-view learning is a hot research direction in the field of machine learning and pattern recognition, which is attracting more and more attention recently. In the real world, the available data commonly include a small number of labeled samples and a large number of unlabeled samples. In this paper, we propose a latent multi-view semi-supervised classification method by using graph learning. This work recovers a latent intact representation to utilize the complementary information of the multi-view data. In addition, an adaptive graph learning technique is adopted to explore the local structure of this latent intact representation. To fully use this latent intact representation to discover the label information of the unlabeled data, we consider to unify the procedures of computing the latent intact representation and the labels of unlabeled data as a whole. An alternating optimization algorithm is designed to effectively solve the optimization of the proposed method. Extensive experimental results demonstrate the effectiveness of our proposed method.
25

Yu, Miao. "Regularized K-means Clustering for Multi-View Data." Journal of Physics: Conference Series 2381, no. 1 (December 1, 2022): 012036. http://dx.doi.org/10.1088/1742-6596/2381/1/012036.

Abstract:
Clustering is a common problem in machine learning. Most of the data currently processed are single-view, while multi-view data can better express the integrity of the data. Therefore, clustering analysis of multi-view data has attracted a lot of attention, and how to improve its accuracy has become a hot topic. We propose the regularized K-means (RKM) algorithm to deal with multi-view data: building on the K-means algorithm, we add regularization terms, which helps avoid overfitting. Numerical analysis shows that RKM significantly improves clustering performance compared with the comparison methods.
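The abstract does not spell out the regularizer, so the following is only one plausible reading, given as a hedged sketch: Lloyd-style K-means on concatenated views with an L2 penalty that shrinks each centroid toward the global mean. The penalty form, its weight, and the toy data are assumptions.

```python
import numpy as np

def regularized_kmeans(X, k=3, lam=0.5, iters=100, seed=0):
    """K-means with an L2 penalty on centroids: minimizes
    sum_i ||x_i - c_{z_i}||^2 + lam * sum_j ||c_j - mean(X)||^2 (assumed form)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    global_mean = X.mean(axis=0)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        z = d.argmin(axis=1)
        new_centers = centers.copy()
        for j in range(k):
            members = X[z == j]
            if len(members):
                # Closed-form update of the penalized objective (ridge-shrunk centroid).
                new_centers[j] = (members.sum(axis=0) + lam * global_mean) / (len(members) + lam)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return z, centers

# Multi-view use: the simplest strategy is to concatenate the per-view feature blocks.
rng = np.random.default_rng(1)
view1 = rng.normal(size=(150, 4)) + np.repeat(np.arange(3), 50)[:, None]
view2 = rng.normal(size=(150, 6)) + np.repeat(np.arange(3), 50)[:, None]
labels, _ = regularized_kmeans(np.hstack([view1, view2]), k=3)
print(np.bincount(labels))
```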
26

Xu, Jiacan, Hao Zheng, Jianhui Wang, Donglin Li, and Xiaoke Fang. "Recognition of EEG Signal Motor Imagery Intention Based on Deep Multi-View Feature Learning." Sensors 20, no. 12 (June 20, 2020): 3496. http://dx.doi.org/10.3390/s20123496.

Abstract:
Recognition of motor imagery intention is one of the current hot research focuses of brain-computer interface (BCI) studies. It can help patients with physical dyskinesia to convey their movement intentions. In recent years, breakthroughs have been made in recognizing motor imagery tasks using deep learning, but if important features related to motor imagery are ignored, the recognition performance of the algorithm may decline. This paper proposes a new deep multi-view feature learning method for the classification of motor imagery electroencephalogram (EEG) signals. In order to obtain more representative motor imagery features from EEG signals, we introduce a multi-view feature representation based on the characteristics of EEG signals and the differences between different features. Different feature extraction methods are used to extract the time-domain, frequency-domain, time-frequency-domain, and spatial features of EEG signals, so that they cooperate and complement each other. Then, a deep restricted Boltzmann machine (RBM) network improved by t-distributed stochastic neighbor embedding (t-SNE) is adopted to learn the multi-view features of the EEG signals, so that the algorithm removes feature redundancy while taking into account the global characteristics of the multi-view feature sequence, reduces the dimension of the multi-view features, and enhances their recognizability. Finally, a support vector machine (SVM) is chosen to classify the deep multi-view features. Applying our proposed method to the BCI Competition IV 2a dataset, we obtained excellent classification results. The results show that the deep multi-view feature learning method further improves the classification accuracy of motor imagery tasks.
27

Koutris, Aristotelis, Theodoros Siozos, Yannis Kopsinis, Aggelos Pikrakis, Timon Merk, Matthias Mahlig, Stylianos Papaharalabos, and Peter Karlsson. "Deep Learning-Based Indoor Localization Using Multi-View BLE Signal." Sensors 22, no. 7 (April 2, 2022): 2759. http://dx.doi.org/10.3390/s22072759.

Abstract:
In this paper, we present a novel Deep Neural Network-based indoor localization method that estimates the position of a Bluetooth Low Energy (BLE) transmitter (tag) by using the received signals’ characteristics at multiple Anchor Points (APs). We use the received signal strength indicator (RSSI) value and the in-phase and quadrature-phase (IQ) components of the received BLE signals at a single time instance to simultaneously estimate the angle of arrival (AoA) at all APs. Through supervised learning on simulated data, various machine learning (ML) architectures are trained to perform AoA estimation using varying subsets of anchor points. In the final stage of the system, the estimated AoA values are fed to a positioning engine which uses the least squares (LS) algorithm to estimate the position of the tag. The proposed architectures are trained and rigorously tested on several simulated room scenarios and are shown to achieve a localization accuracy of 70 cm. Moreover, the proposed systems possess generalization capabilities by being robust to modifications in the room’s content or anchors’ configuration. Additionally, some of the proposed architectures have the ability to distribute the computational load over the APs.
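Only the final positioning step lends itself to a compact sketch: given angle-of-arrival estimates at anchors with known 2-D positions, each measurement defines a line, and the tag position follows from linear least squares. The DNN-based AoA estimation itself is not reproduced; the planar geometry and noise-free angles are assumptions.

```python
import numpy as np

def position_from_aoa(anchors, angles):
    """Least-squares 2-D position from angle-of-arrival measurements.
    Each anchor/angle pair defines a line; solve the stacked line constraints."""
    anchors = np.asarray(anchors, dtype=float)
    A = np.column_stack([-np.sin(angles), np.cos(angles)])   # line normals
    b = np.einsum("ij,ij->i", A, anchors)                     # n_i . p_i
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Toy check: three anchors, a known tag position, noise-free angles.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
tag = np.array([4.0, 3.0])
angles = np.array([np.arctan2(tag[1] - ay, tag[0] - ax) for ax, ay in anchors])
print(position_from_aoa(anchors, angles))   # ~ [4. 3.]
```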
28

Hu, Menglei, and Songcan Chen. "One-Pass Incomplete Multi-View Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3838–45. http://dx.doi.org/10.1609/aaai.v33i01.33013838.

Abstract:
Real data are often with multiple modalities or from multiple heterogeneous sources, thus forming so-called multi-view data, which receives more and more attentions in machine learning. Multi-view clustering (MVC) becomes its important paradigm. In real-world applications, some views often suffer from instances missing. Clustering on such multi-view datasets is called incomplete multi-view clustering (IMC) and quite challenging. To date, though many approaches have been developed, most of them are offline and have high computational and memory costs especially for large scale datasets. To address this problem, in this paper, we propose an One-Pass Incomplete Multi-view Clustering framework (OPIMC). With the help of regularized matrix factorization and weighted matrix factorization, OPIMC can relatively easily deal with such problem. Different from the existing and sole online IMC method, OPIMC can directly get clustering results and effectively determine the termination of iteration process by introducing two global statistics. Finally, extensive experiments conducted on four real datasets demonstrate the efficiency and effectiveness of the proposed OPIMC method.
29

Zhang, Yu, and Qiang Yang. "An overview of multi-task learning." National Science Review 5, no. 1 (September 1, 2017): 30–43. http://dx.doi.org/10.1093/nsr/nwx105.

Abstract:
Abstract As a promising area in machine learning, multi-task learning (MTL) aims to improve the performance of multiple related learning tasks by leveraging useful information among them. In this paper, we give an overview of MTL by first giving a definition of MTL. Then several different settings of MTL are introduced, including multi-task supervised learning, multi-task unsupervised learning, multi-task semi-supervised learning, multi-task active learning, multi-task reinforcement learning, multi-task online learning and multi-task multi-view learning. For each setting, representative MTL models are presented. In order to speed up the learning process, parallel and distributed MTL models are introduced. Many areas, including computer vision, bioinformatics, health informatics, speech, natural language processing, web applications and ubiquitous computing, use MTL to improve the performance of the applications involved and some representative works are reviewed. Finally, recent theoretical analyses for MTL are presented.
30

Duan, Yiqiang, Haoliang Yuan, Chun Sing Lai, and Loi Lei Lai. "Fusing Local and Global Information for One-Step Multi-View Subspace Clustering." Applied Sciences 12, no. 10 (May 18, 2022): 5094. http://dx.doi.org/10.3390/app12105094.

Abstract:
Multi-view subspace clustering has drawn significant attention in the pattern recognition and machine learning research community. However, most of the existing multi-view subspace clustering methods are still limited in two aspects. (1) The subspace representation yielded by the self-expression reconstruction model ignores the local structure information of the data. (2) The construction of subspace representation and clustering are used as two individual procedures, which ignores their interactions. To address these problems, we propose a novel multi-view subspace clustering method fusing local and global information for one-step multi-view clustering. Our contribution lies in three aspects. First, we merge the graph learning into the self-expression model to explore the local structure information for constructing the specific subspace representations of different views. Second, we consider the multi-view information fusion by integrating these specific subspace representations into one common subspace representation. Third, we combine the subspace representation learning, multi-view information fusion, and clustering into a joint optimization model to realize the one-step clustering. We also develop an effective optimization algorithm to solve the proposed method. Comprehensive experimental results on nine popular multi-view data sets confirm the effectiveness and superiority of the proposed method by comparing it with many state-of-the-art multi-view clustering methods.
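The self-expression reconstruction model that the abstract builds on can be sketched in its plainest single-view, ridge-regularized form: learn Z minimizing ||X - XZ||^2 + lambda*||Z||^2, which has a closed form, then spectral-cluster the symmetrized affinity |Z| + |Z^T|. The ridge regularizer, lambda, and toy subspace data are assumptions; the paper's graph-regularized, one-step multi-view fusion is not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expression_affinity(X, lam=0.1):
    """Closed-form ridge self-expression with samples as columns of X (d x n):
    Z = (X^T X + lam I)^{-1} X^T X. Returns a symmetric affinity matrix."""
    G = X.T @ X                                   # n x n Gram matrix
    Z = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    return np.abs(Z) + np.abs(Z.T)

# Toy data: two noisy low-dimensional subspaces, samples stored as columns.
rng = np.random.default_rng(0)
S1 = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 40))
S2 = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 40))
X = np.hstack([S1, S2]) + 0.01 * rng.normal(size=(20, 80))

W = self_expression_affinity(X)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```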
31

Geng, Wanxuan, Weixun Zhou, and Shuanggen Jin. "Multi-View Urban Scene Classification with a Complementary-Information Learning Model." Photogrammetric Engineering & Remote Sensing 88, no. 1 (January 1, 2022): 65–72. http://dx.doi.org/10.14358/pers.21-00062r2.

Abstract:
Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss consisting of cross entropy and contrastive losses is exploited to force the network to be more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train the support vector machine classifier for classification. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
32

Wei, Bin, and Christopher Pal. "Heterogeneous Transfer Learning with RBMs." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 531–36. http://dx.doi.org/10.1609/aaai.v25i1.7925.

Abstract:
A common approach in machine learning is to use a large amount of labeled data to train a model. Usually this model can then only be used to classify data in the same feature space. However, labeled data are often expensive to obtain. A number of strategies have been developed by the machine learning community in recent years to address this problem, including semi-supervised learning, domain adaptation, multi-task learning, and self-taught learning. While training and test data may have different distributions, they must remain in the same feature set. Furthermore, all the above methods work in the same feature space. In this paper, we consider an extreme case of transfer learning called heterogeneous transfer learning, where the feature spaces of the source task and the target tasks are disjoint. Previous approaches mostly fall in the multi-view learning category, where co-occurrence data from both feature spaces are required. We generalize the previous work on cross-lingual adaptation and propose a multi-task strategy for the task. We also propose the use of a restricted Boltzmann machine (RBM), a special type of probabilistic graphical model, as an implementation. We present experiments on two tasks: action recognition and cross-lingual sentiment classification.
33

Cao, R., W. Tu, J. Cai, T. Zhao, J. Xiao, J. Cao, Q. Gao, and H. Su. "MACHINE LEARNING-BASED ECONOMIC DEVELOPMENT MAPPING FROM MULTI-SOURCE OPEN GEOSPATIAL DATA." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-4-2022 (May 18, 2022): 259–66. http://dx.doi.org/10.5194/isprs-annals-v-4-2022-259-2022.

Abstract:
Abstract. Timely and accurate socioeconomic indicators are the prerequisite for smart social governance. For example, the level of economic development and the structure of population are important statistics for regional or national policy-making. However, the collection of these characteristics usually depends on demographic and social surveys, which are time- and labor-intensive. To address these issues, we propose a machine learning-based approach to estimate and map the economic development from multi-source open available geospatial data, including remote sensing imagery and OpenStreetMap road networks. Specifically, we first extract knowledge-based features from different data sources; then the multi-view graphs are constructed through different perspectives of spatial adjacency and feature similarity; and a multi-view graph neural network (MVGNN) model is built on them and trained in a self-supervised learning manner. Then, the handcrafted features and the learned graph representations are combined to estimate the regional economic development indicators via random forest models. Taking China’s county-level gross domestic product (GDP) as an example, extensive experiments have been conducted and the results demonstrate the effectiveness of the proposed method, and the combination of the knowledge-based and learning-based features can significantly outperform baseline methods. Our proposed approach can advance the goal of acquiring timely and accurate socioeconomic variables through widely accessible geospatial data, which has the potential to extend to more social indicators and other geographic regions to support smart governance and policy-making in the future.
34

Misbah, Anass, and Ahmed Ettalbi. "Towards Machine Learning Models as a Key Mean to Train and Optimize Multi-view Web Services Proxy Security Layer." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 6, no. 4 (December 19, 2018): 65. http://dx.doi.org/10.3991/ijes.v6i4.9883.

Abstract:
Multi-view Web services have brought many advantages regarding the early abstraction of end users' needs and constraints. Security has thus been positively impacted by this paradigm, particularly within the Web services application area, and hence Multi-view Web services. In our previous work, we introduced the concept of Multi-view Web services to the Internet of Things architecture within a Cloud infrastructure by proposing a Proxy Security Layer, which consists of Multi-view Web services allowing the identification and categorization of all interacting IoT objects and applications, so as to increase the level of security and improve the control of transactions. Besides, Artificial Intelligence, and especially Machine Learning, is growing fast and makes it possible to simulate human intelligence in many domains; consequently, it is increasingly possible to automatically process large amounts of data in order to make decisions, bring new insights, or even detect new threats and opportunities that we were not able to detect before by simple human means. In this work, we bring together the power of Machine Learning models and the Multi-view Web services Proxy Security Layer so as to permanently verify the consistency of the access rules, detect suspicious intrusions, update the policy, and also optimize the Multi-view Web services for a better performance of the whole Internet of Things architecture.
35

Seeland, Marco, and Patrick Mäder. "Multi-view classification with convolutional neural networks." PLOS ONE 16, no. 1 (January 12, 2021): e0245230. http://dx.doi.org/10.1371/journal.pone.0245230.

Abstract:
Humans’ decision making process often relies on utilizing visual information from different views or perspectives. However, in machine-learning-based image classification we typically infer an object’s class from just a single image showing an object. Especially for challenging classification problems, the visual information conveyed by a single image may be insufficient for an accurate decision. We propose a classification scheme that relies on fusing visual information captured through images depicting the same object from multiple perspectives. Convolutional neural networks are used to extract and encode visual features from the multiple views and we propose strategies for fusing these information. More specifically, we investigate the following three strategies: (1) fusing convolutional feature maps at differing network depths; (2) fusion of bottleneck latent representations prior to classification; and (3) score fusion. We systematically evaluate these strategies on three datasets from different domains. Our findings emphasize the benefit of integrating information fusion into the network rather than performing it by post-processing of classification scores. Furthermore, we demonstrate through a case study that already trained networks can be easily extended by the best fusion strategy, outperforming other approaches by large margin.
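Two of the three fusion strategies can be contrasted on pre-extracted view features standing in for CNN embeddings: feature-level fusion concatenates the representations before a single classifier, while score fusion averages per-view class probabilities. The logistic-regression classifiers and synthetic embeddings below are placeholders, not the paper's networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=600)
# Stand-ins for embeddings of the same object seen from two perspectives.
views = [np.eye(3)[y] + rng.normal(scale=1.0, size=(600, 3)),
         np.eye(3)[y] + rng.normal(scale=1.5, size=(600, 3))]

idx_tr, idx_te = train_test_split(np.arange(600), test_size=0.3, random_state=0)

# (a) Feature-level fusion: concatenate view representations, one classifier.
Xf = np.hstack(views)
feat_clf = LogisticRegression(max_iter=1000).fit(Xf[idx_tr], y[idx_tr])
acc_feature = feat_clf.score(Xf[idx_te], y[idx_te])

# (b) Score fusion: one classifier per view, average the predicted probabilities.
clfs = [LogisticRegression(max_iter=1000).fit(V[idx_tr], y[idx_tr]) for V in views]
probs = np.mean([c.predict_proba(V[idx_te]) for c, V in zip(clfs, views)], axis=0)
acc_score = (probs.argmax(axis=1) == y[idx_te]).mean()

print(f"feature fusion: {acc_feature:.3f}  score fusion: {acc_score:.3f}")
```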
36

Charoenkwan, Phasit, Chanin Nantasenamat, Md Mehedi Hasan, Mohammad Ali Moni, Pietro Lio’, and Watshara Shoombuatong. "iBitter-Fuse: A Novel Sequence-Based Bitter Peptide Predictor by Fusing Multi-View Features." International Journal of Molecular Sciences 22, no. 16 (August 19, 2021): 8958. http://dx.doi.org/10.3390/ijms22168958.

Abstract:
Accurate identification of bitter peptides is of great importance for better understanding their biochemical and biophysical properties. To date, machine learning-based methods have become effective approaches for providing a good avenue for identifying potential bitter peptides from large-scale protein datasets. Although few machine learning-based predictors have been developed for identifying the bitterness of peptides, their prediction performances could be improved. In this study, we developed a new predictor (named iBitter-Fuse) for achieving more accurate identification of bitter peptides. In the proposed iBitter-Fuse, we have integrated a variety of feature encoding schemes for providing sufficient information from different aspects, namely consisting of compositional information and physicochemical properties. To enhance the predictive performance, the customized genetic algorithm utilizing self-assessment-report (GA-SAR) was employed for identifying informative features followed by inputting optimal ones into a support vector machine (SVM)-based classifier for developing the final model (iBitter-Fuse). Benchmarking experiments based on both 10-fold cross-validation and independent tests indicated that the iBitter-Fuse was able to achieve more accurate performance as compared to state-of-the-art methods. To facilitate the high-throughput identification of bitter peptides, the iBitter-Fuse web server was established and made freely available online. It is anticipated that the iBitter-Fuse will be a useful tool for aiding the discovery and de novo design of bitter peptides.
37

Zhu, Linli, Gang Hua, and Adnan Aslam. "Ontology learning algorithm using weak functions." Open Physics 16, no. 1 (December 31, 2018): 910–16. http://dx.doi.org/10.1515/phys-2018-0112.

Abstract:
Ontology is widely used in information retrieval, image processing, and various other disciplines. This article discusses how to use a machine learning approach to solve the essential similarity calculation problem in the multi-dividing ontology setting. The ontology function is regarded as a combination of several weak ontology functions, and the optimal ontology function is obtained by an iterative algorithm. In addition, the performance of the algorithm is analyzed from a theoretical point of view by statistical methods, and several results are obtained.
38

Gonçalves, Carlos Adriano, Adrián Seara Vieira, Célia Talma Gonçalves, Rui Camacho, Eva Lorenzo Iglesias, and Lourdes Borrajo Diz. "A Novel Multi-View Ensemble Learning Architecture to Improve the Structured Text Classification." Information 13, no. 6 (June 1, 2022): 283. http://dx.doi.org/10.3390/info13060283.

Abstract:
Multi-view ensemble learning exploits the information of data views. To test its efficiency for full text classification, a technique has been implemented where the views correspond to the document sections. For classification and prediction, we use a stacking generalization based on the idea that different learning algorithms provide complementary explanations of the data. The present study implements the stacking approach using support vector machine algorithms as the baseline and a C4.5 implementation as the meta-learner. Views are created with OHSUMED biomedical full text documents. Experimental results lead to the sustained conclusion that the application of multi-view techniques to full texts significantly improves the task of text classification, providing a significant contribution for the biomedical text mining research. We also have evidence to conclude that enriched datasets with text from certain sections are better than using only titles and abstracts.
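A hedged sketch of the stacking setup described above: per-view SVM base learners produce out-of-fold class probabilities, which become the meta-features of a decision-tree meta-learner (CART here, standing in for C4.5). The placeholder feature blocks, fold count, and tree depth are assumptions, not the OHSUMED configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)
# Placeholder "views", e.g. feature blocks for different document sections.
views = [y[:, None] + rng.normal(size=(400, 20)),
         y[:, None] + rng.normal(size=(400, 30)),
         rng.normal(size=(400, 25))]

# Level 0: one SVM per view; out-of-fold probabilities become meta-features.
bases = [SVC(probability=True, random_state=0) for _ in views]
meta_X = np.column_stack([
    cross_val_predict(b, V, y, cv=5, method="predict_proba")[:, 1]
    for b, V in zip(bases, views)
])

# Level 1: decision-tree meta-learner (stand-in for C4.5).
meta = DecisionTreeClassifier(max_depth=3, random_state=0).fit(meta_X, y)
for b, V in zip(bases, views):          # refit bases on all data for deployment
    b.fit(V, y)
print("meta-learner training accuracy:", meta.score(meta_X, y))
```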
39

You, Cong-Zhe, Zhen-Qiu Shu, and Hong-Hui Fan. "Non-negative sparse Laplacian regularized latent multi-view subspace clustering." Journal of Algorithms & Computational Technology 15 (January 2021): 174830262110249. http://dx.doi.org/10.1177/17483026211024904.

Abstract:
Recently, in the area of artificial intelligence and machine learning, subspace clustering of multi-view data is a research hotspot. The goal is to divide data samples from different sources into different groups. We proposed a new subspace clustering method for multi-view data which termed as Non-negative Sparse Laplacian regularized Latent Multi-view Subspace Clustering (NSL2MSC) in this paper. The method proposed in this paper learns the latent space representation of multi view data samples, and performs the data reconstruction on the latent space. The algorithm can cluster data in the latent representation space and use the relationship of different views. However, the traditional representation-based method does not consider the non-linear geometry inside the data, and may lose the local and similar information between the data in the learning process. By using the graph regularization method, we can not only capture the global low dimensional structural features of data, but also fully capture the nonlinear geometric structure information of data. The experimental results show that the proposed method is effective and its performance is better than most of the existing alternatives.
40

Kyono, Trent, Fiona J. Gilbert, and Mihaela Van Der Schaar. "Triage of 2D Mammographic Images Using Multi-view Multi-task Convolutional Neural Networks." ACM Transactions on Computing for Healthcare 2, no. 3 (July 2021): 1–24. http://dx.doi.org/10.1145/3453166.

Abstract:
With an aging and growing population, the number of women receiving mammograms is increasing. However, existing techniques for autonomous diagnosis do not surpass a well-trained radiologist. Therefore, to reduce the number of mammograms that require examination by a radiologist, subject to preserving the diagnostic accuracy observed in current clinical practice, we develop Man and Machine Mammography Oracle (MAMMO)—a clinical decision support system capable of determining whether its predicted diagnoses require further radiologist examination. We first introduce a novel multi-view convolutional neural network (CNN) trained using multi-task learning (MTL) to diagnose mammograms and predict the radiological assessments known to be associated with cancer. MTL improves diagnostic performance and triage efficiency while providing an additional layer of model interpretability. Furthermore, we introduce a novel triage network that takes as input the radiological assessment and diagnostic predictions of the multi-view CNN and determines whether the radiologist or CNN will most likely provide the correct diagnosis. Results obtained on a dataset of over 7,000 patients show that MAMMO reduced the number of diagnostic mammograms requiring radiologist reading by 42.8% while improving the overall diagnostic accuracy in comparison to readings done by radiologists alone.
41

Lee, Seung Jun, Byeong Hak Kim, and Min Young Kim. "Multi-Saliency Map and Machine Learning Based Human Detection for the Embedded Top-View Imaging System." IEEE Access 9 (2021): 70671–82. http://dx.doi.org/10.1109/access.2021.3078623.

42

Szűcs, Gábor, and Marcell Németh. "Double-View Matching Network for Few-Shot Learning to Classify Covid-19 in X-ray images." Infocommunications journal 13, no. 1 (2021): 26–34. http://dx.doi.org/10.36244/icj.2021.1.4.

Abstract:
The research topic presented in this paper belongs to the small-training-data problem in machine learning (especially in deep learning); it is intended to help those working in medicine by analyzing pathological X-ray recordings using only very few images. This scenario is a particularly hot issue nowadays: how could a new disease, for which only limited data are available, be diagnosed using features of previous diseases? In this problem, so-called few-shot learning, the difficulty of the classification task is to learn the unique feature characteristics associated with the classes. Solutions exist, but if the images come from different views, they do not handle these views well. We propose an improved method, the Double-View Matching Network (DVMN, based on a deep neural network), which solves the few-shot learning problem as well as the different views of the pathological recordings in the images. Its main contributions are the convolutional neural network for feature extraction and the handling of multiple views in the image representation. Our method was tested on the classification of images showing unknown COVID-19 symptoms in an environment designed for learning from few samples, with prior meta-learning on images of other diseases only. The results show that DVMN reaches better accuracy on a multi-view dataset than a simple Matching Network without multi-view handling.
43

He, Pengfei, Yingjie Zhu, Bo Chen, Yanning Lu, Yongling Yao, Jie Xiao, and Jiasheng Si. "Condenser Vacuum Degree Prediction Model with Multi-View Information Fusion." Journal of Physics: Conference Series 2294, no. 1 (June 1, 2022): 012032. http://dx.doi.org/10.1088/1742-6596/2294/1/012032.

Abstract:
Vacuum degree is a crucial factor in the operation of a thermoelectric generating set. Existing approaches typically use machine learning algorithms to link unit operating data to condenser vacuum degree by focusing on the temporal information within the data, while ignoring the frequency information implied in the historical condenser vacuum degree. To make full use of frequency information and further improve prediction accuracy, we propose a novel condenser vacuum degree prediction model with multi-view information fusion. Specifically, the implicit frequency information in the historical vacuum degree sequence is explored via a combination of Variational Mode Decomposition (VMD) and a Convolutional Neural Network (CNN). Furthermore, a Transformer encoder is used to extract the temporal information from the unit operating data. Finally, the information from the two views is fused for condenser vacuum degree prediction. Extensive experiments conducted on real data collected from a power plant demonstrate the superiority of the proposed method over several state-of-the-art methods.
44

Kiranoglu, Volkan, Derya Birant, and Goksu Tuysuzoglu. "Multi-view multi-depth soil temperature prediction (MV-MD-STP): a new approach using machine learning and time series methods." International Journal of Intelligent Engineering Informatics 10, no. 1 (2022): 74. http://dx.doi.org/10.1504/ijiei.2022.10048525.

45

Iatrou, Miltiadis, Christos Karydas, George Iatrou, Ioannis Pitsiorlas, Vassilis Aschonitis, Iason Raptis, Stelios Mpetas, Kostas Kravvas, and Spiros Mourelatos. "Topdressing Nitrogen Demand Prediction in Rice Crop Using Machine Learning Systems." Agriculture 11, no. 4 (April 2, 2021): 312. http://dx.doi.org/10.3390/agriculture11040312.

Abstract:
This research is an outcome of the R&D activities of Ecodevelopment S.A. (steadily supported by the Hellenic Agricultural Organization—Demeter) towards offering precision farming services to rice growers. Within this framework, a new methodology for topdressing nitrogen prediction was developed based on machine learning. Nitrogen is a key element in rice culture and its rational management can increase productivity, reduce costs, and prevent environmental impacts. A multi-source, multi-temporal, and multi-scale dataset was collected, including optical and radar imagery, soil data, and yield maps by monitoring a 110 ha pilot rice farm in Thessaloniki Plain, Greece, for four consecutive years. RapidEye imagery underwent image segmentation to delineate management zones (ancillary, visual interpretation of unmanned aerial system scenes was employed, too); Sentinel-1 (SAR) imagery was modelled with Computer Vision to detect inundated fields and (through this) indicate the exact growth stage of the crop; and Sentinel-2 image data were used to map leaf nitrogen concentration (LNC) exactly before topdressing applications. Several machine learning algorithms were configured to predict yield for various nitrogen levels, with the XGBoost model resulting in the highest accuracy. Finally, yield curves were used to select the nitrogen dose maximizing yield, which was thus recommended to the grower. Inundation mapping proved to be critical in the prediction process. Currently, Ecodevelopment S.A. is expanding the application of the new method in different study areas, with a view to further empower its generality and operationality.
46

Imangaliyev, Sultan, Jörg Schlötterer, Folker Meyer, and Christin Seifert. "Diagnosis of Inflammatory Bowel Disease and Colorectal Cancer through Multi-View Stacked Generalization Applied on Gut Microbiome Data." Diagnostics 12, no. 10 (October 17, 2022): 2514. http://dx.doi.org/10.3390/diagnostics12102514.

Abstract:
Most of the microbiome studies suggest that using ensemble models such as Random Forest results in best predictive power. In this study, we empirically evaluate a more powerful ensemble learning algorithm, multi-view stacked generalization, on pediatric inflammatory bowel disease and adult colorectal cancer patients’ cohorts. We aim to check whether stacking would lead to better results compared to using a single best machine learning algorithm. Stacking achieves the best test set Average Precision (AP) on inflammatory bowel disease dataset reaching AP = 0.69, outperforming both the best base classifier (AP = 0.61) and the baseline meta learner built on top of base classifiers (AP = 0.63). On colorectal cancer dataset, the stacked classifier also outperforms (AP = 0.81) both the best base classifier (AP = 0.79) and the baseline meta learner (AP = 0.75). Stacking achieves best predictive performance on test set outperforming the best classifiers on both patient cohorts. Application of the stacking solves the issue of choosing the most appropriate machine learning algorithm by automating the model selection procedure. Clinical application of such a model is not limited to diagnosis task only, but it also can be extended to biomarker selection thanks to feature selection procedure.
47

Xie, Hui, Li Wei, Dong Liu, and Luda Wang. "Task Scheduling in Heterogeneous Computing Systems Based on Machine Learning Approach." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 12 (May 11, 2020): 2051012. http://dx.doi.org/10.1142/s021800142051012x.

Abstract:
The task scheduling problem of heterogeneous computing systems (HCS), which are increasingly popular, has become a research hotspot in this domain. The problem, which essentially consists of assigning tasks to the proper processors for execution, has been shown to be NP-complete. However, existing scheduling algorithms suffer from an inherent limitation: the lack of a global view. Here, we report a novel task scheduling algorithm based on Multi-Logistic Regression theory (called MLRS) for heterogeneous computing environments. First, we collect the best scheduling plans as a historical training set; then a scheduling model is established with which we can predict the following scheduling action. The analysis of the experimental results shows that the proposed algorithm has a better optimization effect and robustness.
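The core idea, learning from historical best schedules a multinomial logistic regression that maps task features to the processor most likely to be chosen, can be sketched as follows. The task features, the synthetic "best processor" labels, and the three-processor system are placeholders invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder historical training set: task features -> processor chosen by the
# best (e.g. exhaustively searched) schedule. Columns: [task length, data size, deadline].
X_hist = rng.uniform(size=(500, 3))
best_proc = (X_hist @ np.array([2.0, -1.0, 0.5]) > 0.7).astype(int) + \
            (X_hist[:, 1] > 0.8).astype(int)        # three synthetic processor classes

scheduler = LogisticRegression(max_iter=1000).fit(X_hist, best_proc)

new_tasks = rng.uniform(size=(5, 3))
print("predicted processor per task:", scheduler.predict(new_tasks))
```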
48

Rao, Bing, Chen Zhou, Guoying Zhang, Ran Su, and Leyi Wei. "ACPred-Fuse: fusing multi-view information improves the prediction of anticancer peptides." Briefings in Bioinformatics 21, no. 5 (November 12, 2019): 1846–55. http://dx.doi.org/10.1093/bib/bbz088.

Abstract:
Abstract Fast and accurate identification of the peptides with anticancer activity potential from large-scale proteins is currently a challenging task. In this study, we propose a new machine learning predictor, namely, ACPred-Fuse, that can automatically and accurately predict protein sequences with or without anticancer activity in peptide form. Specifically, we establish a feature representation learning model that can explore class and probabilistic information embedded in anticancer peptides (ACPs) by integrating a total of 29 different sequence-based feature descriptors. In order to make full use of various multiview information, we further fused the class and probabilistic features with handcrafted sequential features and then optimized the representation ability of the multiview features, which are ultimately used as input for training our prediction model. By comparing the multiview features and existing feature descriptors, we demonstrate that the fused multiview features have more discriminative ability to capture the characteristics of ACPs. In addition, the information from different views is complementary for the performance improvement. Finally, our benchmarking comparison results showed that the proposed ACPred-Fuse is more precise and promising in the identification of ACPs than existing predictors. To facilitate the use of the proposed predictor, we built a web server, which is now freely available via http://server.malab.cn/ACPred-Fuse.
49

Di, Wei, and Melba M. Crawford. "Active Learning via Multi-View and Local Proximity Co-Regularization for Hyperspectral Image Classification." IEEE Journal of Selected Topics in Signal Processing 5, no. 3 (June 2011): 618–28. http://dx.doi.org/10.1109/jstsp.2011.2123077.

Abstract:
A novel co-regularization framework for active learning is proposed for hyperspectral image classification. The first regularizer explores the intrinsic multi-view information embedded in the hyperspectral data. By adaptively and quantitatively measuring the disagreement level, it focuses only on samples with high uncertainty and builds a contention pool which is a small subset of the overall unlabeled data pool, thereby mitigating the computational cost. The second regularizer is based on the "consistency assumption" and designed on a spatial or the spectral based manifold space. It serves to further focus on the most informative samples within the contention pool by penalizing rapid changes in the classification function evaluated on proximally close samples in a local region. Such changes may be due to the lack of capability of the current learner to describe the unlabeled data. Incorporating manifold learning into the active learning process enforces the clustering assumption and avoids the degradation of the distance measure associated with the original high-dimensional spectral features. One spatial and two local spectral embedding methods are considered in this study, in conjunction with the support vector machine (SVM) classifier implemented with a radial basis function (RBF) kernel. Experiments show excellent performance on AVIRIS and Hyperion hyperspectral data as compared to random sampling and the state-of-the-art SVMSIMPLE.
50

Syed, Khajamoinuddin, William C. Sleeman, Michael Hagan, Jatinder Palta, Rishabh Kapoor, and Preetam Ghosh. "Multi-View Data Integration Methods for Radiotherapy Structure Name Standardization." Cancers 13, no. 8 (April 9, 2021): 1796. http://dx.doi.org/10.3390/cancers13081796.

Abstract:
Standardization of radiotherapy structure names is essential for developing data-driven personalized radiotherapy treatment plans. Different types of data are associated with radiotherapy structures, such as the physician-given text labels, geometric (image) data, and Dose-Volume Histograms (DVH). Prior work on structure name standardization used just one type of data. We present novel approaches to integrate complementary types (views) of structure data to build better-performing machine learning models. We present two methods, namely (a) intermediate integration and (b) late integration, to combine physician-given textual structure name features and geometric information of structures. The dataset consisted of 709 prostate cancer and 752 lung cancer patients across 40 radiotherapy centers administered by the U.S. Veterans Health Administration (VA) and the Department of Radiation Oncology, Virginia Commonwealth University (VCU). We used randomly selected data from 30 centers for training and ten centers for testing. We also used the VCU data for testing. We observed that the intermediate integration approach outperformed the models with a single view of the dataset, while late integration showed comparable performance with single-view results. Thus, we demonstrate that combining different views (types of data) helps build better models for structure name standardization to enable big data analytics in radiation oncology.
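The two integration schemes can be contrasted on tiny placeholder stand-ins for the paper's views (character n-grams of structure names versus simple geometric descriptors): intermediate integration concatenates the feature sets before one classifier, while late integration trains one classifier per view and averages their predicted probabilities. All names, labels, and numbers below are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder data: physician-given structure labels (text view) and
# simple geometric descriptors (volume, centroid offset) for each structure.
names = ["ptv high", "ptv low", "bladder", "blad", "rectum", "rect wall"]
y = np.array([0, 0, 1, 1, 2, 2])                 # standardized structure classes
geom = np.array([[310.0, 0.1], [295.0, 0.2], [140.0, 3.1],
                 [150.0, 2.9], [60.0, 4.0], [55.0, 4.2]])

text_X = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(names).toarray()

# (a) Intermediate integration: join the views, then fit a single classifier.
inter = LogisticRegression(max_iter=1000).fit(np.hstack([text_X, geom]), y)

# (b) Late integration: per-view classifiers, average their probabilities.
clf_text = LogisticRegression(max_iter=1000).fit(text_X, y)
clf_geom = LogisticRegression(max_iter=1000).fit(geom, y)
late_probs = (clf_text.predict_proba(text_X) + clf_geom.predict_proba(geom)) / 2

print("intermediate predictions:", inter.predict(np.hstack([text_X, geom])))
print("late-integration predictions:", late_probs.argmax(axis=1))
```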