
Journal articles on the topic 'Multi-class classifiers'



Consult the top 50 journal articles for your research on the topic 'Multi-class classifiers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Bourke, Chris, Kun Deng, Stephen D. Scott, Robert E. Schapire, and N. V. Vinodchandran. "On reoptimizing multi-class classifiers." Machine Learning 71, no. 2-3 (April 16, 2008): 219–42. http://dx.doi.org/10.1007/s10994-008-5056-8.

2

Lin, Hung-Yi. "Efficient classifiers for multi-class classification problems." Decision Support Systems 53, no. 3 (June 2012): 473–81. http://dx.doi.org/10.1016/j.dss.2012.02.014.

3

Siedlecki, Wojciech W. "A formula for multi-class distributed classifiers." Pattern Recognition Letters 15, no. 8 (August 1994): 739–42. http://dx.doi.org/10.1016/0167-8655(94)90001-9.

4

Kang, Seokho, Sungzoon Cho, and Pilsung Kang. "Multi-class classification via heterogeneous ensemble of one-class classifiers." Engineering Applications of Artificial Intelligence 43 (August 2015): 35–43. http://dx.doi.org/10.1016/j.engappai.2015.04.003.

5

Bo, Shukui, and Yongju Jing. "Data Distribution Partitioning for One-Class Extraction from Remote Sensing Imagery." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 09 (February 16, 2017): 1754018. http://dx.doi.org/10.1142/s0218001417540180.

Abstract:
One-class extraction from remotely sensed imagery using multi-class classifiers is investigated in this paper. With two supervised multi-class classifiers, the Bayesian classifier and the nearest neighbor (NN) classifier, we first analyzed the effect of data distribution partitioning on one-class extraction from remote sensing images. Data distribution partitioning refers to the way the data set is partitioned before classification. As a parametric method, the Bayesian classifier achieved good classification performance when the data distribution was partitioned appropriately, whereas the NN classifier, a nonparametric method, did not require a detailed partitioning of the data distribution. For simplicity, the data set can be partitioned into two classes, the class of interest and the remainder, to extract the specific class. With appropriate partitioning of the data set, the specific class of interest was extracted well from remotely sensed imagery in the experiments. This study will be helpful for one-class extraction from remote sensing imagery with multi-class classifiers, and it provides a way to improve one-class classification from the perspective of data distribution partitioning.
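The two-way partitioning described in the abstract (class of interest vs. remainder) can be sketched with a minimal nearest-neighbour extractor. The function `extract_one_class` and the toy pixel data are illustrative assumptions, not taken from the paper:

```python
import math

def extract_one_class(train, labels, target, pixels):
    """Partition the training data into 'target' vs. 'remainder' and
    extract the class of interest with a 1-nearest-neighbour rule
    (the nonparametric case, which needs no finer partitioning)."""
    binary = [lab if lab == target else "remainder" for lab in labels]
    flags = []
    for p in pixels:
        # 1-NN over the two-way partition of the training set
        nearest = min(range(len(train)), key=lambda i: math.dist(p, train[i]))
        flags.append(binary[nearest] == target)
    return flags

# toy 2-band "image": water pixels cluster near the origin
train = [(0, 0), (0, 1), (10, 10), (10, 11), (5, 9)]
labels = ["water", "water", "urban", "urban", "vegetation"]
mask = extract_one_class(train, labels, "water", [(0, 0.5), (10, 10.5), (5, 8)])
```

Only the membership of the target class matters here; how finely the "remainder" is subdivided would only affect a parametric model such as the Bayesian classifier.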
6

Liu, Jinfu, Mingliang Bai, Na Jiang, Ran Cheng, Xianling Li, Yifang Wang, and Daren Yu. "Interclass Interference Suppression in Multi-Class Problems." Applied Sciences 11, no. 1 (January 5, 2021): 450. http://dx.doi.org/10.3390/app11010450.

Abstract:
Multi-class classifiers are widely applied to many practical problems, but features that significantly discriminate a certain class from the others are often deleted during the feature selection process of multi-class classifiers, which seriously degrades generalization ability. This paper refers to this phenomenon as interclass interference in multi-class problems and analyzes its cause in detail. It then summarizes three interclass interference suppression methods, based respectively on all features, on one-class classifiers and on binary classifiers, and compares their effects on interclass interference via 10-fold cross-validation experiments on 14 UCI datasets. Experiments show that the method based on binary classifiers suppresses interclass interference efficiently and obtains the best classification accuracy among the three methods. Further experiments compared the suppression effect of the two binary-classifier methods, one-versus-one and one-versus-all. Results show that the one-versus-one method obtains a better suppression effect on interclass interference and better classification accuracy. By proposing the concept of interclass interference and studying its suppression methods, this paper significantly improves the generalization ability of multi-class classifiers.
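The one-versus-one decomposition favoured by these experiments can be sketched as follows; the nearest-centroid base learner is an illustrative stand-in for whatever binary classifier is actually used:

```python
import math
from itertools import combinations
from collections import Counter

def centroid_fit(X, y):
    """Illustrative binary base learner: predict the nearest class centroid."""
    cents = {}
    for c in set(y):
        pts = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = tuple(sum(v) / len(pts) for v in zip(*pts))
    return lambda x: min(cents, key=lambda c: math.dist(x, cents[c]))

def one_vs_one(X, y):
    """Train one binary learner per pair of classes, so features that
    separate a pair are never diluted by the remaining classes;
    predict by majority vote over all pairwise learners."""
    models = []
    for a, b in combinations(sorted(set(y)), 2):
        pair = [(x, lab) for x, lab in zip(X, y) if lab in (a, b)]
        models.append(centroid_fit([x for x, _ in pair], [lab for _, lab in pair]))
    return lambda x: Counter(m(x) for m in models).most_common(1)[0][0]

X = [(0, 0), (0, 1), (5, 0), (5, 1), (0, 5), (1, 5)]
y = ["a", "a", "b", "b", "c", "c"]
predict = one_vs_one(X, y)
```

Because each pairwise learner sees only two classes, a feature (or here, a region of the space) that discriminates that pair cannot be suppressed by the other classes, which is the interference-suppression effect the abstract describes.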
7

Maximov, Yu, and D. Reshetova. "Tight risk bounds for multi-class margin classifiers." Pattern Recognition and Image Analysis 26, no. 4 (October 2016): 673–80. http://dx.doi.org/10.1134/s105466181604009x.

8

D’Andrea, Eleonora, and Beatrice Lazzerini. "A hierarchical approach to multi-class fuzzy classifiers." Expert Systems with Applications 40, no. 9 (July 2013): 3828–40. http://dx.doi.org/10.1016/j.eswa.2012.12.097.

9

Abdallah, Loai, Murad Badarna, Waleed Khalifa, and Malik Yousef. "MultiKOC: Multi-One-Class Classifier Based K-Means Clustering." Algorithms 14, no. 5 (April 23, 2021): 134. http://dx.doi.org/10.3390/a14050134.

Abstract:
In the computational biology community, many problems are treated as multi-one-class classification problems; examples include the classification of multiple tumor types, protein fold recognition and the molecular classification of multiple cancer types. In all of these cases, appropriately characterized real-world negative cases or outliers are impractical to obtain, and the positive cases might consist of different clusters, which in turn might lead to accuracy degradation. In this paper we present a novel algorithm named MultiKOC, a multi-one-class classifier based on K-means. The main idea is to run a clustering algorithm over the positive samples to capture the hidden sub-data of the given positive data, and then build a one-class classifier for each cluster's examples separately; in other words, to train a one-class (OC) classifier on each piece of sub-data. For a given new sample, the generated classifiers are applied; if the sample is rejected by all of those classifiers, it is considered a negative sample, otherwise it is a positive sample. The results of MultiKOC are compared with traditional one-class, multi-one-class, ensemble one-class and two-class methods, yielding a significant improvement over the one-class methods and performance comparable to the two-class methods.
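The cluster-then-train idea can be sketched in a few lines. The distance-threshold model used per cluster below is a hypothetical stand-in for the paper's one-class classifiers, and the deterministic k-means initialisation is a simplification:

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic init (first k points)."""
    cents = [points[i] for i in range(k)]
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: math.dist(p, cents[i]))].append(p)
        # recompute centroids, keeping the old one if a group goes empty
        cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else cents[i]
                 for i, g in enumerate(groups)]
    return cents, groups

def multikoc_fit(positives, k):
    """Cluster the positive samples, then build one one-class model per
    cluster (here: a distance threshold around the cluster centroid).
    A new sample is positive iff at least one cluster model accepts it."""
    cents, groups = kmeans(positives, k)
    radii = [max((math.dist(p, c) for p in g), default=0.0)
             for c, g in zip(cents, groups)]
    return lambda x: any(math.dist(x, c) <= r for c, r in zip(cents, radii))

# positives form two separate clusters; no negatives are needed to train
positives = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
is_positive = multikoc_fit(positives, k=2)
```

Training one model per cluster avoids the accuracy degradation a single one-class model suffers when the positive data is multi-modal.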
10

Krawczyk, Bartosz, Mikel Galar, Michał Woźniak, Humberto Bustince, and Francisco Herrera. "Dynamic ensemble selection for multi-class classification with one-class classifiers." Pattern Recognition 83 (November 2018): 34–51. http://dx.doi.org/10.1016/j.patcog.2018.05.015.

11

Sultana, Jabeen, and Abdul Khader Jilani. "Predicting Breast Cancer Using Logistic Regression and Multi-Class Classifiers." International Journal of Engineering & Technology 7, no. 4.20 (November 28, 2018): 22. http://dx.doi.org/10.14419/ijet.v7i4.20.22115.

Abstract:
Early identification and prediction of the type of cancer should be a priority in cancer research, in order to assist and supervise patients. The significance of classifying cancer patients into high- or low-risk groups has led many research teams, from the biomedical and bioinformatics fields, to study the application of machine learning (ML) approaches. Logistic regression and multi-class classifiers are proposed here to predict breast cancer and to produce reliable predictions on new breast cancer data. This paper explores the different data mining classification approaches that can be applied to breast cancer data, and identifies the best-performing model by evaluating the dataset on various classifiers. The breast cancer dataset, collected from the UCI machine learning repository, has 569 instances with 31 attributes. The data set is pre-processed first and fed to various classifiers: simple logistic regression, IBk, K-star, Multi-Layer Perceptron (MLP), Random Forest, decision table, decision trees (DT), PART, multi-class classifiers and REP Tree. 10-fold cross-validation is applied, and training is performed so that new models are developed and tested. The results are evaluated on various parameters: accuracy, RMSE, sensitivity, specificity, F-measure, ROC curve area, Kappa statistic and the time taken to build the model. The analysis reveals that, among all the classifiers, simple logistic regression yields the best model with the highest accuracy, followed by IBk (a nearest neighbor classifier), K-star (an instance-based classifier) and MLP (a neural network). The other methods obtained lower accuracy than the logistic regression method.
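The evaluation protocol here, k-fold cross-validation over interchangeable classifiers, can be sketched generically. The threshold "classifier" and the toy data below are placeholders for the listed methods and the UCI dataset:

```python
def k_fold_accuracy(X, y, fit, k=10):
    """Mean accuracy over k folds; `fit(train_X, train_y)` must return a
    predict function, so any classifier can be plugged in."""
    folds = [list(range(i, len(X), k)) for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in range(len(X)) if i not in fold]
        predict = fit([X[i] for i in train], [y[i] for i in train])
        accs.append(sum(predict(X[i]) == y[i] for i in fold) / len(fold))
    return sum(accs) / k

def fit_threshold(Xt, yt):
    # toy learner: threshold a single feature at the training mean
    # (the labels yt are unused by this deliberately crude model)
    cut = sum(Xt) / len(Xt)
    return lambda x: 0 if x < cut else 1

X = list(range(20))
y = [0] * 10 + [1] * 10
acc = k_fold_accuracy(X, y, fit_threshold, k=5)
```

Swapping `fit_threshold` for another `fit` function is all that is needed to compare classifiers under the same protocol, which is essentially what the paper does across its ten methods.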
12

Mei, Kuizhi, Ji Zhang, Guohui Li, Bao Xi, Nanning Zheng, and Jianping Fan. "Training more discriminative multi-class classifiers for hand detection." Pattern Recognition 48, no. 3 (March 2015): 785–97. http://dx.doi.org/10.1016/j.patcog.2014.09.001.

13

Vluymans, Sarah, Dánel Sánchez Tarragó, Yvan Saeys, Chris Cornelis, and Francisco Herrera. "Fuzzy rough classifiers for class imbalanced multi-instance data." Pattern Recognition 53 (May 2016): 36–45. http://dx.doi.org/10.1016/j.patcog.2015.12.002.

14

Kang, Seokho, Sungzoon Cho, and Pilsung Kang. "Constructing a multi-class classifier using one-against-one approach with different binary classifiers." Neurocomputing 149 (February 2015): 677–82. http://dx.doi.org/10.1016/j.neucom.2014.08.006.

15

Fragoso, Rogério C. P., George D. C. Cavalcanti, Roberto H. W. Pinheiro, and Luiz S. Oliveira. "Dynamic selection and combination of one-class classifiers for multi-class classification." Knowledge-Based Systems 228 (September 2021): 107290. http://dx.doi.org/10.1016/j.knosys.2021.107290.

16

SHYU, MEI-LING, CHAO CHEN, and SHU-CHING CHEN. "MULTI-CLASS CLASSIFICATION VIA SUBSPACE MODELING." International Journal of Semantic Computing 05, no. 01 (March 2011): 55–78. http://dx.doi.org/10.1142/s1793351x1100116x.

Abstract:
Aiming to build a satisfactory supervised classifier, this paper proposes a Multi-class Subspace Modeling (MSM) classification framework. The framework consists of three parts: a Principal Component Classifier Training Array, a Principal Component Classifier Testing Array, and a Label Coordinator. The role of the Training Array is to obtain a set of optimized parameters and principal components from each subspace-based training classifier and pass them to the corresponding subspace-based testing classifier in the Testing Array. In each subspace-based training classifier, the instances are projected from the original space into a principal component (PC) subspace, where a PC selection method is developed and applied to construct the subspace. In the Testing Array, each subspace-based testing classifier uses the parameters and PCs from its corresponding training classifier to decide whether to assign its class label to an instance. Since one instance may be assigned zero labels or more than one label by the Testing Array, the Label Coordinator is designed to decide the final class label of an instance according to its Attaching Proportion (AP) values towards multiple classes. To evaluate classification accuracy, 10 rounds of 3-fold cross-validation are conducted, with many popular classification algorithms (SVM, decision trees, multi-layer perceptron, logistic regression, etc.) serving as comparative peers. Experimental results show that the proposed MSM classification framework outperforms the compared classifiers on 10 data sets, on 8 of which the difference is significant at a confidence level above 99.5%. In addition, the framework shows an ability to handle imbalanced data sets. Finally, a demo is built to display the accuracy and detailed information of the classification.
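The per-class PC subspace idea can be illustrated in miniature with one principal component per class, fitted by power iteration on 2-D toy data. Assigning the label of the subspace with the smallest reconstruction residual is a simplified stand-in for the framework's testing array and Label Coordinator:

```python
def leading_pc(points):
    """Mean and leading principal component of 2-D points,
    via power iteration on the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    v = (1.0, 0.0)
    for _ in range(100):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return (mx, my), v

def msm_classify(train_by_class, x):
    """Toy one-component analogue of per-class PC subspaces: assign x to
    the class whose subspace reconstructs it with the smallest residual."""
    best, best_err = None, float("inf")
    for c, pts in train_by_class.items():
        (mx, my), (vx, vy) = leading_pc(pts)
        dx, dy = x[0] - mx, x[1] - my
        t = dx * vx + dy * vy                       # projection coefficient
        err = ((dx - t * vx) ** 2 + (dy - t * vy) ** 2) ** 0.5  # residual
        if err < best_err:
            best, best_err = c, err
    return best

train = {"h": [(0, 0), (1, 0.1), (2, -0.1), (3, 0)],   # spread along x
         "v": [(0, 0), (1, 1), (2, 2), (3, 3)]}        # spread along y=x
```

The real framework selects several PCs per class and resolves multiple or empty label assignments with AP values; this sketch keeps only the core subspace-residual decision.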
17

Qin, Yu Ping, Peng Da Qin, Yi Wang, and Shu Xian Lun. "A New Optimal Binary Tree SVM Multi-Class Classification Algorithm." Applied Mechanics and Materials 373-375 (August 2013): 1085–88. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.1085.

Abstract:
An improved binary tree SVM multi-class classification algorithm is proposed. First, a minimum hyper-ellipsoid is constructed for each class of samples in the feature space; an optimal binary tree is then generated according to hyper-ellipsoid volume, and a sub-classifier is trained for every non-leaf node of the tree. To classify a sample, the sub-classifiers are applied from the root node down to a leaf node, and the class corresponding to that leaf node is the class of the sample. Experiments on the Statlog database show that the algorithm improves classification precision and speed, especially when the number of classes is large and their distribution areas are approximately equal.
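The tree-building step can be sketched under strong simplifying assumptions: the product of per-dimension standard deviations stands in for the hyper-ellipsoid volume, and a nearest-centroid rule stands in for each node's SVM sub-classifier. Everything below is illustrative, not the paper's construction:

```python
import math

def volume_proxy(points):
    """Product of per-dimension standard deviations, a crude stand-in
    for the hyper-ellipsoid volume used to order classes in the tree."""
    def std(vals):
        m = sum(vals) / len(vals)
        return (sum((x - m) ** 2 for x in vals) / len(vals)) ** 0.5
    prod = 1.0
    for dim in zip(*points):
        prod *= std(list(dim)) or 1e-9
    return prod

def build_tree(train_by_class):
    """Peel classes off in decreasing volume order; each non-leaf node
    separates one class from the remainder (nearest-centroid rule)."""
    order = sorted(train_by_class, key=lambda c: -volume_proxy(train_by_class[c]))
    def centroid(pts):
        return tuple(sum(v) / len(pts) for v in zip(*pts))
    def classify(x):
        rest = list(order)
        while len(rest) > 1:
            c = rest[0]
            cc = centroid(train_by_class[c])
            oc = centroid([p for r in rest[1:] for p in train_by_class[r]])
            if math.dist(x, cc) <= math.dist(x, oc):
                return c            # leaf reached: accept class c
            rest = rest[1:]         # descend into the remainder subtree
        return rest[0]
    return classify

train = {"A": [(0, 0), (4, 4), (0, 4), (4, 0)],
         "B": [(10, 10), (10.5, 10.5), (10, 10.5), (10.5, 10)],
         "C": [(20, 0), (20.2, 0.2), (20, 0.2), (20.2, 0)]}
classify = build_tree(train)
```

Each test sample visits at most (number of classes − 1) sub-classifiers, which is the source of the speed gain the abstract reports.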
18

Pal, Mahendra, Thorkild Rasmussen, and Alok Porwal. "Optimized Lithological Mapping from Multispectral and Hyperspectral Remote Sensing Images Using Fused Multi-Classifiers." Remote Sensing 12, no. 1 (January 3, 2020): 177. http://dx.doi.org/10.3390/rs12010177.

Abstract:
Most available studies in lithological mapping using spaceborne multispectral and hyperspectral remote sensing images employ different classification and spectral matching algorithms for this task; however, our experiments reveal that no single algorithm renders satisfactory results. Therefore, a new approach based on an ensemble of classifiers is presented in this paper for lithological mapping using remote sensing images, which returns enhanced accuracy. The proposed method uses a weighted pooling approach for lithological mapping at the pixel level, based on the agreement of the class accuracy, overall accuracy and kappa coefficient of the multiple classifiers of an image. The technique is implemented in four steps: (1) classification images are generated using a variety of classifiers; (2) accuracy assessments are performed for each class and for the overall classification, and the kappa coefficient is estimated for every classifier; (3) an overall within-class accuracy index is estimated by weighting class accuracy, overall accuracy and kappa coefficient for each class and every classifier; (4) finally, each pixel is assigned to the class for which it has the highest overall within-class accuracy index among all classes and all classifiers. To demonstrate the strength of the developed approach, four supervised classifiers (minimum distance (MD), spectral angle mapper (SAM), spectral information divergence (SID), support vector machine (SVM)) are applied to one hyperspectral image (Hyperion) and two multispectral images (ASTER, Landsat 8-OLI) for mapping lithological units of the Udaipur area, Rajasthan, western India. The method is found to be significantly effective in increasing the accuracy of lithological mapping.
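Step (4), the per-pixel fusion, can be sketched as follows. The equal weighting of class accuracy, overall accuracy and kappa is an assumption for illustration; the paper derives its own weighting:

```python
def fused_label(pixel_preds, class_acc, overall_acc, kappa):
    """pixel_preds: {classifier: predicted class for this pixel}.
    Each candidate class is scored by a weighted index of its classifier's
    per-class accuracy, overall accuracy and kappa coefficient
    (equal weights here, an illustrative assumption)."""
    best, best_score = None, -1.0
    for clf, cls in pixel_preds.items():
        score = (class_acc[clf][cls] + overall_acc[clf] + kappa[clf]) / 3
        if score > best_score:
            best, best_score = cls, score
    return best

# hypothetical accuracy statistics for three of the paper's classifiers
preds = {"MD": "granite", "SAM": "basalt", "SVM": "basalt"}
class_acc = {"MD": {"granite": 0.90}, "SAM": {"basalt": 0.70}, "SVM": {"basalt": 0.95}}
overall_acc = {"MD": 0.80, "SAM": 0.75, "SVM": 0.90}
kappa = {"MD": 0.70, "SAM": 0.60, "SVM": 0.85}
label = fused_label(preds, class_acc, overall_acc, kappa)
```

The pixel takes the label backed by the most trustworthy classifier-class pair rather than a plain majority vote, so a classifier that is weak overall can still win for a class it maps well.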
19

Sidiq, S. Jahangeer, Majid Zaman, and Muheet Butt. "An Empirical Comparison of Classifiers for Multi-Class Imbalance Learning." International Journal of Data Mining And Emerging Technologies 8, no. 1 (2018): 115. http://dx.doi.org/10.5958/2249-3220.2018.00013.7.

20

Shiraishi, Yuichi, and Kenji Fukumizu. "Statistical approaches to combining binary classifiers for multi-class classification." Neurocomputing 74, no. 5 (February 2011): 680–88. http://dx.doi.org/10.1016/j.neucom.2010.09.004.

21

Viéville, Thierry, and Sylvie Crahay. "Using an Hebbian Learning Rule for Multi-Class SVM Classifiers." Journal of Computational Neuroscience 17, no. 3 (November 2004): 271–87. http://dx.doi.org/10.1023/b:jcns.0000044873.20850.9c.

22

Pahikkala, Tapio, Antti Airola, Fabian Gieseke, and Oliver Kramer. "On Unsupervised Training of Multi-Class Regularized Least-Squares Classifiers." Journal of Computer Science and Technology 29, no. 1 (January 2014): 90–104. http://dx.doi.org/10.1007/s11390-014-1414-0.

23

Yang, Jie, Yi-Xuan Wang, Yuan-Yuan Qiao, Xiao-Xing Zhao, Fang Liu, and Gang Cheng. "On Evaluating Multi-class Network Traffic Classifiers Based on AUC." Wireless Personal Communications 83, no. 3 (March 4, 2015): 1731–50. http://dx.doi.org/10.1007/s11277-015-2473-4.

24

Mo, Lingfei, Lujie Zeng, Shaopeng Liu, and Robert X. Gao. "Multi-Sensor Activity Monitoring: Combination of Models with Class-Specific Voting." Information 10, no. 6 (June 4, 2019): 197. http://dx.doi.org/10.3390/info10060197.

Abstract:
This paper presents a multi-sensor model combination system with class-specific voting for physical activity (PA) monitoring, which combines multiple classifiers obtained by splicing sensor data from different nodes into new data frames to improve the diversity of model inputs. Data obtained from a wearable multi-sensor wireless integrated measurement system (WIMS), consisting of two accelerometers and one ventilation sensor, were analysed to identify 10 activity types of varying intensities performed by 110 voluntary participants. It is noted that each classifier performs better on some specific activity classes. Through class-specific weighted majority voting, the recognition accuracy for the 10 PA types improved from 86% to 92% compared with the non-combination approach. Furthermore, the combination method is effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition and performs better in monitoring physical activities of varying intensities than traditional homogeneous classifiers.
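Class-specific weighted majority voting can be sketched directly: each classifier's vote for a class is weighted by how well that classifier recognises that particular class. The weights and classifier names below are hypothetical:

```python
from collections import defaultdict

def class_specific_vote(preds, per_class_acc):
    """preds: {classifier: predicted activity}.
    per_class_acc[clf][cls]: clf's accuracy on class cls, used as its
    vote weight, so a classifier strong on a class dominates for it."""
    scores = defaultdict(float)
    for clf, cls in preds.items():
        scores[cls] += per_class_acc[clf].get(cls, 0.0)
    return max(scores, key=scores.get)

# one strong walking-specialist outvotes two weak running detectors
preds = {"acc_1": "walk", "acc_2": "run", "vent": "run"}
per_class_acc = {"acc_1": {"walk": 0.95},
                 "acc_2": {"run": 0.40},
                 "vent": {"run": 0.50}}
activity = class_specific_vote(preds, per_class_acc)
```

This is how a single classifier that happens to be reliable for one activity can override a numerical majority of less reliable ones, which a plain unweighted vote cannot do.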
25

Ropelewska, Ewa. "The Application of Computer Image Analysis Based on Textural Features for the Identification of Barley Kernels Infected with Fungi of the Genus Fusarium." Agricultural Engineering 22, no. 3 (September 1, 2018): 49–56. http://dx.doi.org/10.1515/agriceng-2018-0026.

Abstract:
The aim of this study was to develop discrimination models based on textural features for the identification of barley kernels infected with fungi of the genus Fusarium and healthy kernels. Infected barley kernels with altered shape and discoloration and healthy barley kernels were scanned. Textures were computed using MaZda software. The kernels were classified as infected and healthy with the use of the WEKA application. In the case of RGB, Lab and XYZ color models, the classification accuracies based on 10 selected textures with the highest discriminative power ranged from 95 to 100%. The lowest result (95%) was noted in XYZ color model and Multi Class Classifier for the textures selected using the Ranker method and the OneR attribute evaluator. Selected classifiers were characterized by 100% accuracy in the case of all color models and selection methods. The highest number of 100% results was obtained for the Lab color model with Naive Bayes, LDA, IBk, Multi Class Classifier and J48 classifiers in the Best First selection method with the CFS subset evaluator.
26

XU, XINYU, and BAOXIN LI. "MULTIPLE CLASS MULTIPLE-INSTANCE LEARNING AND ITS APPLICATION TO IMAGE CATEGORIZATION." International Journal of Image and Graphics 07, no. 03 (July 2007): 427–44. http://dx.doi.org/10.1142/s021946780700274x.

Abstract:
We propose a Multiple Class Multiple-Instance (MCMI) learning approach and demonstrate its application to the problem of image categorization. Our method extends the binary Multiple-Instance learning approach for image categorization. Instead of constructing a set of binary classifiers (each trained to separate one category from the rest) and making the final decision based on the winner among them, our method directly computes a multi-class classifier by first projecting each training image onto a multi-class feature space and then simultaneously minimizing the multi-class objective function in a Support Vector Machine framework. The multi-class feature space is constructed from the instance prototypes obtained by Multiple-Instance learning, which treats an image as a set of instances with training labels associated with images rather than instances. Experimental results on two challenging data sets demonstrate that our method achieves better classification accuracy and is less sensitive to the training sample size compared with traditional one-versus-the-rest binary MI classification methods.
27

Xiao, Jie, Yunpeng Wang, and Hua Su. "Combining Support Vector Machines with Distance-based Relative Competence Weighting for Remote Sensing Image Classification: A Case Study." Journal of Imaging Science and Technology 64, no. 1 (January 1, 2020): 10503–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.1.010503.

Abstract:
A classification problem involving multi-class samples is typically divided into a set of two-class sub-problems, and the pairwise probabilities produced by the binary classifiers are subsequently combined to generate a final result. However, only the binary classifiers that have been trained on the unknown real class of an unlabeled sample are relevant to the multi-class problem. A distance-based relative competence weighting (DRCW) combination mechanism can estimate the competence of the binary classifiers. In this work, we adapt the DRCW mechanism to the support vector machine (SVM) approach for the classification of remote sensing images. Applying DRCW allows the competence of a binary classifier to be estimated from the spectral information, making it possible to distinguish the relevant and irrelevant binary classifiers. The SVM+DRCW classification approach is applied to analyzing the land-use/land-cover patterns in Guangzhou, China from remotely sensed Landsat-5 TM and SPOT-5 images. The results show that the SVM+DRCW approach can achieve higher classification accuracies than the conventional SVM and SVMs combined with other combination mechanisms such as weighted voting (WV) and probability estimates by pairwise coupling (PE).
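A distance-based competence weighting of pairwise outputs can be sketched as follows; the nearest-training-point distance and the toy probabilities are illustrative assumptions, not the exact DRCW formula:

```python
import math

def drcw_combine(pair_probs, x, class_points):
    """pair_probs[(a, b)]: P(class a) from the a-vs-b binary classifier
    (so P(b) = 1 - that). Each pairwise score is weighted by the relative
    closeness of x to the two classes, so binary classifiers not trained
    on the true class get little influence on the final decision."""
    def dist(c):
        return min(math.dist(x, p) for p in class_points[c])
    scores = {c: 0.0 for c in class_points}
    for (a, b), p_ab in pair_probs.items():
        da, db = dist(a), dist(b)
        wa = db / (da + db)          # close to a => high competence for a
        scores[a] += wa * p_ab
        scores[b] += (1 - wa) * (1 - p_ab)
    return max(scores, key=scores.get)

# toy training pixels per class and toy pairwise probabilities
class_points = {"A": [(0, 0)], "B": [(5, 5)], "C": [(10, 0)]}
pair_probs = {("A", "B"): 0.9, ("A", "C"): 0.8, ("B", "C"): 0.5}
winner = drcw_combine(pair_probs, (0.5, 0), class_points)
```

Note how the B-vs-C classifier, irrelevant for a sample near class A, contributes only weakly weighted scores, which is the behaviour DRCW is designed to produce.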
28

Alhudhaif, Adi. "A novel multi-class imbalanced EEG signals classification based on the adaptive synthetic sampling (ADASYN) approach." PeerJ Computer Science 7 (May 14, 2021): e523. http://dx.doi.org/10.7717/peerj-cs.523.

Abstract:
Background: Brain signals (EEG, electroencephalography) are a gold standard frequently used in epilepsy prediction. It is crucial to predict epilepsy, which is common in the community; early diagnosis is essential to shorten the treatment process of the disease and to keep the process healthier.
Methods: In this study, a five-class dataset was used: EEG signals from different individuals, healthy EEG signals from the tumor document, EEG signals with epilepsy, EEG signals with eyes closed, and EEG signals with eyes open. Four different methods were proposed to classify the five classes of EEG signals. In the first approach, the EEG signal was divided into four bands (beta, alpha, theta, and delta), 25 time-domain features were extracted from each band, and the main EEG signal and these extracted features were combined to obtain 125 time-domain features; using the Random Forests classifier, EEG activities were classified into five classes. In the second approach, the 125-attribute problem was split pairwise into ten parts with the One-Against-One (OVO) approach, each part was classified with the Random Forests classifier, and a majority voting scheme combined the decisions of the ten classifiers. In the third approach, the problem was divided into five parts with the One-Against-All (OVA) approach, each part was classified with the Random Forests classifier, and majority voting combined the decisions of the five classifiers. In the fourth approach, the problem was again divided into five parts with the OVA approach; since each part had an imbalanced data distribution, an adaptive synthetic (ADASYN) sampling approach was used to balance each part, each balanced part was then classified with the Random Forests classifier, and majority voting combined the decisions.
Results: The first approach achieved 71.90% classification success in classifying five-class EEG signals, the second 91.08%, the third 89%, and the fourth 91.72%. These results show that the fourth approach (the combination of the ADASYN sampling approach and the Random Forests classifier) achieved the best success in classifying five-class EEG signals, and this method could be used to detect epilepsy events in EEG signals.
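The balancing step in the fourth approach can be sketched with a crude stand-in for ADASYN. Real ADASYN adaptively concentrates synthesis on minority samples surrounded by majority neighbours; the sketch below keeps only the interpolation idea and omits that density weighting:

```python
import random

def adasyn_like_balance(X, y, minority, seed=0):
    """Crude stand-in for ADASYN: synthesise minority samples by linear
    interpolation between random minority pairs until the minority class
    matches the majority class in size. The adaptive density weighting of
    real ADASYN is deliberately omitted."""
    rng = random.Random(seed)
    minor = [x for x, lab in zip(X, y) if lab == minority]
    need = (len(X) - len(minor)) - len(minor)   # majority count - minority count
    Xb, yb = list(X), list(y)
    for _ in range(need):
        a, b = rng.choice(minor), rng.choice(minor)
        t = rng.random()
        Xb.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
        yb.append(minority)
    return Xb, yb

# imbalanced toy OVA piece: 8 "maj" samples vs. 2 "min" samples
X = [(5, 5), (6, 5), (5, 6), (6, 6), (7, 5), (5, 7), (7, 7), (6, 7), (0, 0), (1, 1)]
y = ["maj"] * 8 + ["min"] * 2
Xb, yb = adasyn_like_balance(X, y, "min")
```

Each OVA piece would be balanced this way before training its Random Forest, so the "rest" class can no longer swamp the single target class.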
29

Iosifidis, Alexandros, and Moncef Gabbouj. "Multi-class Support Vector Machine classifiers using intrinsic and penalty graphs." Pattern Recognition 55 (July 2016): 231–46. http://dx.doi.org/10.1016/j.patcog.2016.02.002.

30

Sánchez-Monedero, Javier, Pedro A. Gutiérrez, F. Fernández-Navarro, and C. Hervás-Martínez. "Weighting Efficient Accuracy and Minimum Sensitivity for Evolving Multi-Class Classifiers." Neural Processing Letters 34, no. 2 (May 28, 2011): 101–16. http://dx.doi.org/10.1007/s11063-011-9186-9.

31

Takenouchi, Takashi, and Shin Ishii. "Binary classifiers ensemble based on Bregman divergence for multi-class classification." Neurocomputing 273 (January 2018): 424–34. http://dx.doi.org/10.1016/j.neucom.2017.08.004.

32

Diri, Banu, and Songul Albayrak. "Visualization and analysis of classifiers performance in multi-class medical data." Expert Systems with Applications 34, no. 1 (January 2008): 628–34. http://dx.doi.org/10.1016/j.eswa.2006.10.016.

33

Lorrentz, P., W. G. J. Howells, and K. D. Mcdonald-Maier. "An advanced combination strategy for multi-classifiers employed in large multi-class problem domains." Applied Soft Computing 11, no. 2 (March 2011): 2151–63. http://dx.doi.org/10.1016/j.asoc.2010.07.014.

34

Lv, Feng, Ni Du, and Hai Lian Du. "A Method of Multi-Classifier Combination Based on Dempster-Shafer Evidence Theory and the Application in the Fault Diagnosis." Advanced Materials Research 490-495 (March 2012): 1402–6. http://dx.doi.org/10.4028/www.scientific.net/amr.490-495.1402.

Abstract:
A problem arises in multi-classifier systems: normally each classifier is considered equally important when evidence is combined, which conflicts with the knowledge that different classifiers have different performance due to their diversity. Therefore, how to determine the weights of individual classifiers in order to obtain more accurate results becomes a question that needs to be solved. An optimal weight learning method is presented in this paper. First, the training samples are input into the multi-classifier system based on Dempster-Shafer theory to obtain the output vector. The error is then calculated as the distance between the output vector and the class vector of the corresponding training sample, and the objective function is defined as the mean squared error over all training samples. The optimal weight vector is obtained by minimizing the objective function. Finally, new samples are classified according to the optimal weight vector. The effectiveness of this method is illustrated on UCI standard data sets and in an electric actuator fault diagnosis experiment.
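The weight-learning step, minimising the mean squared error between the weighted combined output and the one-hot class vector, can be sketched with plain gradient descent. This is an illustrative optimiser, not the paper's Dempster-Shafer combination itself:

```python
def learn_classifier_weights(outputs, targets, lr=0.1, steps=500):
    """outputs[n][m]: per-class score vector of classifier m on training
    sample n; targets[n]: one-hot class vector. Learns one scalar weight
    per classifier by gradient descent on the mean squared error of the
    weighted combination."""
    m = len(outputs[0])
    w = [1.0 / m] * m                       # start from equal importance
    for _ in range(steps):
        grad = [0.0] * m
        for vecs, t in zip(outputs, targets):
            comb = [sum(w[j] * vecs[j][k] for j in range(m))
                    for k in range(len(t))]
            for j in range(m):
                grad[j] += sum(2 * (comb[k] - t[k]) * vecs[j][k]
                               for k in range(len(t))) / len(outputs)
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# classifier 0 always agrees with the target, classifier 1 is always wrong
outputs = [[(1, 0), (0, 1)], [(0, 1), (1, 0)]]
targets = [(1, 0), (0, 1)]
w = learn_classifier_weights(outputs, targets)
```

As expected, the reliable classifier's weight converges toward 1 and the unreliable one's toward 0, which is exactly the unequal-importance behaviour the abstract argues for.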
35

Eslami, Elham, and Hae-Bum Yun. "Attention-Based Multi-Scale Convolutional Neural Network (A+MCNN) for Multi-Class Classification in Road Images." Sensors 21, no. 15 (July 29, 2021): 5137. http://dx.doi.org/10.3390/s21155137.

Abstract:
Automated pavement distress recognition is a key step in smart infrastructure assessment. Advances in deep learning and computer vision have improved the automated recognition of pavement distresses in road surface images. This task remains challenging due to the high variation of defects in shapes and sizes, demanding a better incorporation of contextual information into deep networks. In this paper, we show that an attention-based multi-scale convolutional neural network (A+MCNN) improves the automated classification of common distress and non-distress objects in pavement images by (i) encoding contextual information through multi-scale input tiles and (ii) employing a mid-fusion approach with an attention module for heterogeneous image contexts from different input scales. A+MCNN is trained and tested with four distress classes (crack, crack seal, patch, pothole), five non-distress classes (joint, marker, manhole cover, curbing, shoulder), and two pavement classes (asphalt, concrete). A+MCNN is compared with four deep classifiers that are widely used in transportation applications and a generic CNN classifier (as the control model). The results show that A+MCNN consistently outperforms the baselines by 1∼26% on average in terms of the F-score. A comprehensive discussion is also presented regarding how these classifiers perform differently on different road objects, which has been rarely addressed in the existing literature.
36

Zhou, Shuang, Evgueni Nikolaevich Smirnov, and Ralf Peeters. "Conformal Region Classification with Instance-Transfer Boosting." International Journal on Artificial Intelligence Tools 24, no. 06 (December 2015): 1560002. http://dx.doi.org/10.1142/s0218213015600027.

Full text
Abstract:
Conformal region classification focuses on developing region classifiers; i.e., classifiers that output regions (sets) of classes for new test instances.2,13,16 Conformal region classifiers have been proven to be valid for any significance level ε, in the sense that the probability that the class regions do not contain the true instances' classes does not exceed ε. In practice, however, conformal region classifiers also need to be efficient; i.e., they have to output non-empty and relatively small class regions. In this paper we show that conformal region classification can benefit from instance transfer learning. Our new approach consists of the basic conformal region classifier with a nonconformity function that implements instance transfer. We propose to learn such a function using a new multi-class Transfer AdaBoost.M1 algorithm. The function and its relation to conformal region classification are theoretically justified. The experiments showed that our approach is valid for any significance level ε and that its efficiency can be improved with instance transfer.
APA, Harvard, Vancouver, ISO, and other styles
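The basic conformal mechanism the abstract builds on — a nonconformity function turned into per-class p-values, with a class kept in the region when its p-value exceeds the significance level — can be sketched as a split-conformal classifier. This is a generic illustration with our own names, not the paper's transfer-boosted nonconformity function.

```python
def class_region(x, calib, epsilon, nonconformity):
    # calib: list of (example, label) calibration pairs.
    # For each candidate label y, the p-value is the (smoothed) rank of the
    # test nonconformity score among the calibration scores for y; the label
    # enters the region when its p-value exceeds epsilon.
    labels = sorted({y for _, y in calib})
    region = set()
    for y in labels:
        scores = [nonconformity(z, y) for z, lab in calib if lab == y]
        test_score = nonconformity(x, y)
        p = (sum(1 for s in scores if s >= test_score) + 1) / (len(scores) + 1)
        if p > epsilon:
            region.add(y)
    return region
```

Validity comes from the p-value construction; efficiency (small, non-empty regions) depends on how informative the nonconformity function is, which is exactly what the paper's instance-transfer boosting aims to improve.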
37

Valverde-Albacete, Francisco J., and Carmen Peláez-Moreno. "Two information-theoretic tools to assess the performance of multi-class classifiers." Pattern Recognition Letters 31, no. 12 (September 2010): 1665–71. http://dx.doi.org/10.1016/j.patrec.2010.05.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Uchiyama, Emiko, Tomoyuki Maekawa, Ikuo Kusajima, Wataru Takano, and Yoshihiko Nakamura. "Comparing performance of multi-class classifiers for grasping patterns from EEG data." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2016 (2016): 1P1–12b3. http://dx.doi.org/10.1299/jsmermd.2016.1p1-12b3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Farid, Dewan Md, Li Zhang, Chowdhury Mofizur Rahman, M. A. Hossain, and Rebecca Strachan. "Hybrid decision tree and naïve Bayes classifiers for multi-class classification tasks." Expert Systems with Applications 41, no. 4 (March 2014): 1937–46. http://dx.doi.org/10.1016/j.eswa.2013.08.089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Maximov, Yury, Massih-Reza Amini, and Zaid Harchaoui. "Rademacher Complexity Bounds for a Penalized Multi-class Semi-supervised Algorithm." Journal of Artificial Intelligence Research 61 (April 11, 2018): 761–86. http://dx.doi.org/10.1613/jair.5638.

Full text
Abstract:
We propose Rademacher complexity bounds for multi-class classifiers trained with a two-step semi-supervised model. In the first step, the algorithm partitions the partially labeled data and then identifies dense clusters containing k predominant classes using the labeled training examples, such that the proportion of their non-predominant classes is below a fixed threshold that stands for clustering consistency. In the second step, a classifier is trained by minimizing a margin empirical loss over the labeled training set and a penalization term measuring the disability of the learner to predict the k predominant classes of the identified clusters. The resulting data-dependent generalization error bound involves the margin distribution of the classifier, the stability of the clustering technique used in the first step and Rademacher complexity terms corresponding to partially labeled training data. Our theoretical results exhibit convergence rates extending those proposed in the literature for the binary case, and experimental results on different multi-class classification problems show empirical evidence that supports the theory.
APA, Harvard, Vancouver, ISO, and other styles
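The first step of the two-step model — keep only clusters whose labeled points are dominated by one class, up to a consistency threshold — can be sketched directly. The representation (clusters as lists of labels, `None` for unlabeled points) and the function name are our own simplifications.

```python
from collections import Counter

def consistent_clusters(clusters, threshold):
    # clusters: list of lists of labels (None marks an unlabeled point).
    # A cluster is kept when the fraction of labeled points NOT belonging
    # to its predominant class is below `threshold`.
    # Returns (cluster_index, predominant_label) pairs.
    kept = []
    for i, cluster in enumerate(clusters):
        labels = [y for y in cluster if y is not None]
        if not labels:
            continue
        top_label, top_count = Counter(labels).most_common(1)[0]
        if (len(labels) - top_count) / len(labels) < threshold:
            kept.append((i, top_label))
    return kept
```

The second step then penalizes the classifier for disagreeing with the predominant label of each kept cluster, which is what the bound's penalization term measures.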
41

La, Lei, Qiao Guo, Dequan Yang, and Qimin Cao. "Multiclass Boosting with Adaptive Group-Based kNN and Its Application in Text Categorization." Mathematical Problems in Engineering 2012 (2012): 1–24. http://dx.doi.org/10.1155/2012/793490.

Full text
Abstract:
AdaBoost is an excellent committee-based tool for classification. However, its effectiveness and efficiency in multiclass categorization face challenges from methods based on support vector machines (SVM), neural networks (NN), naïve Bayes, and k-nearest neighbor (kNN). This paper uses a novel multi-class AdaBoost algorithm to avoid reducing the multi-class classification problem to multiple two-class classification problems. This novel method is more effective. In addition, it keeps the accuracy advantage of existing AdaBoost. An adaptive group-based kNN method is proposed in this paper to build more accurate weak classifiers and in this way keep the number of base classifiers in an acceptable range. To further enhance the performance, weak classifiers are combined into a strong classifier through a doubly iterative weighting scheme, constructing an adaptive group-based kNN boosting algorithm (AGkNN-AdaBoost). We implement AGkNN-AdaBoost in a Chinese text categorization system. Experimental results showed that the classification algorithm proposed in this paper has better performance, both in precision and recall, than many other text categorization methods including traditional AdaBoost. In addition, the processing speed is significantly higher than that of the original AdaBoost and many other classic categorization algorithms.
APA, Harvard, Vancouver, ISO, and other styles
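A standard way to boost weak learners directly in the multi-class setting, without reducing to two-class problems, is the SAMME extension of AdaBoost. The sketch below uses a trivial decision stump on 1-D data as the weak learner; the paper's AGkNN variant uses an adaptive group-based kNN weak learner instead, and all names here are ours.

```python
import math

def train_stump(xs, ys, w, classes):
    # Exhaustive weighted decision stump on 1-D inputs:
    # a threshold plus one class label on each side.
    best = None
    for t in sorted(set(xs)):
        for left in classes:
            for right in classes:
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (left if xi <= t else right) != yi)
                if best is None or err < best[0]:
                    best = (err, t, left, right)
    err, t, left, right = best
    return (lambda x, t=t, l=left, r=right: l if x <= t else r), err

def samme(xs, ys, rounds=5):
    # Multi-class AdaBoost (SAMME): a weak learner only needs accuracy
    # better than 1/k, thanks to the extra log(k - 1) term in alpha.
    classes = sorted(set(ys))
    k, n = len(classes), len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        h, err = train_stump(xs, ys, w, classes)
        if err >= 1 - 1.0 / k:  # no better than random guessing: stop
            break
        err = max(err, 1e-10)
        alpha = math.log((1 - err) / err) + math.log(k - 1)
        ensemble.append((alpha, h))
        # Up-weight misclassified examples, then renormalise.
        w = [wi * math.exp(alpha) if h(xi) != yi else wi
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]

    def predict(x):
        votes = {c: 0.0 for c in classes}
        for alpha, h in ensemble:
            votes[h(x)] += alpha
        return max(votes, key=votes.get)
    return predict
```

Each round a stump can only separate two of the three clusters, but the weighted vote over three rounds classifies all of them, which is the committee effect the abstract relies on.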
42

SU, XIAOYUAN, and TAGHI M. KHOSHGOFTAAR. "COLLABORATIVE FILTERING FOR MULTI-CLASS DATA USING BAYESIAN NETWORKS." International Journal on Artificial Intelligence Tools 17, no. 01 (February 2008): 71–85. http://dx.doi.org/10.1142/s0218213008003789.

Full text
Abstract:
As one of the most successful recommender system techniques, collaborative filtering (CF) algorithms are required to deal with high sparsity and demanding scalability requirements, among other challenges. Bayesian networks (BNs), one of the most frequently used classifiers, can be used for CF tasks. Previous works on applying BNs to CF tasks were mainly focused on binary-class data, and used simple or basic Bayesian classifiers.1,2 In this work, we apply advanced BN models to CF tasks instead of simple ones, and work on real-world multi-class CF data instead of synthetic binary-class data. Empirical results show that with their ability to deal with incomplete data, the extended logistic regression on tree augmented naïve Bayes (TAN-ELR)3 CF model consistently performs better than the traditional Pearson correlation-based CF algorithm for rating data that have few items or high missing rates. In addition, the ELR-optimized BN CF models are robust in terms of the ability to make predictions, while the robustness of the Pearson correlation-based CF algorithm degrades as the sparseness of the data increases.
APA, Harvard, Vancouver, ISO, and other styles
43

DE STEFANO, CLAUDIO, CIRO D'ELIA, ALESSANDRA SCOTTO DI FRECA, and ANGELO MARCELLI. "CLASSIFIER COMBINATION BY BAYESIAN NETWORKS FOR HANDWRITING RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 05 (August 2009): 887–905. http://dx.doi.org/10.1142/s0218001409007387.

Full text
Abstract:
In the field of handwriting recognition, classifier combination has received much more interest than the study of powerful individual classifiers. This is mainly due to the enormous variability among the patterns to be classified, which typically requires the definition of complex high dimensional feature spaces: as the overall complexity increases, the risk of inconsistency in the decision of the classifier increases as well. In this framework, we propose a new combining method based on the use of a Bayesian Network. In particular, we suggest to reformulate the classifier combination problem as a pattern recognition one, in which each input pattern is associated with a feature vector composed of the outputs of the classifiers to be combined. A Bayesian Network is then used to automatically infer the probability distribution for each class and eventually to perform the final classification. Experiments have been performed by using two different pools of classifiers, namely an ensemble of Learning Vector Quantization neural networks and an ensemble of Back Propagation neural networks, and handwritten specimens from the UCI Machine Learning Repository. The obtained performance has been compared with that exhibited by multi-classifier systems adopting the same classifiers but combining them by three of the most effective and widely used combining rules: the Majority Vote, the Weighted Majority Vote and the Borda Count.
APA, Harvard, Vancouver, ISO, and other styles
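The reformulation the abstract proposes — treat the vector of base-classifier decisions as a new feature vector and learn the class distribution over it — can be sketched with a naive Bayes combiner. The paper learns a full Bayesian network, which additionally models dependencies between classifiers; the independence assumption and all names below are our simplifications.

```python
import math
from collections import Counter

def train_nb_combiner(base_outputs, labels, alpha=1.0):
    # Each training pattern is re-described by the vector of base-classifier
    # decisions; we learn P(class) and P(decision_j | class) with Laplace
    # smoothing, and classify new decision vectors by maximum posterior.
    classes = sorted(set(labels))
    decisions = sorted({d for row in base_outputs for d in row})
    prior = Counter(labels)
    cond = Counter()  # (classifier index, decision, true class) -> count
    for row, y in zip(base_outputs, labels):
        for j, d in enumerate(row):
            cond[(j, d, y)] += 1

    def predict(row):
        best, best_lp = None, None
        for c in classes:
            lp = math.log(prior[c] / len(labels))
            for j, d in enumerate(row):
                lp += math.log((cond[(j, d, c)] + alpha)
                               / (prior[c] + alpha * len(decisions)))
            if best_lp is None or lp > best_lp:
                best, best_lp = c, lp
        return best

    return predict
```

Unlike Majority Vote or Borda Count, this combiner can learn that one base classifier is systematically wrong on a class and discount it, which is the advantage the paper's Bayesian network exploits more fully.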
44

Emamipour, Sajad, Rasoul Sali, and Zahra Yousefi. "A Multi-Objective Ensemble Method for Class Imbalance Learning." International Journal of Big Data and Analytics in Healthcare 2, no. 1 (January 2017): 16–34. http://dx.doi.org/10.4018/ijbdah.2017010102.

Full text
Abstract:
This article describes how class imbalance learning has attracted great attention in recent years, as many real world domain applications suffer from this problem. Imbalanced class distribution occurs when the number of training examples for one class far surpasses the training examples of the other class, often the one that is of more interest. This problem may produce an important deterioration of the classifier performance, in particular with patterns belonging to the less represented classes. Toward this end, the authors developed a hybrid model to address class imbalance learning with a focus on binary class problems. This model combines the benefits of ensemble classifiers with a multi-objective feature selection technique to achieve higher classification performance. The authors' model also proposes non-dominated sets of features. Then they evaluate the performance of the proposed model by comparing its results with notable algorithms for solving the imbalanced data problem. Finally, the authors utilize the proposed model in the medical domain of predicting life expectancy in post-operative thoracic surgery patients.
APA, Harvard, Vancouver, ISO, and other styles
45

Hadjadji, Bilal, Youcef Chibani, and Yasmine Guerbai. "Combining diverse one-class classifiers by means of dynamic weighted average for multi-class pattern classification." Intelligent Data Analysis 21, no. 3 (June 29, 2017): 515–35. http://dx.doi.org/10.3233/ida-150420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Riri, Hicham, Mohammed Ed-Dhahraouy, Abdelmajid Elmoutaouakkil, Abderrahim Beni-Hssane, and Farid Bourzgui. "Extracted features based multi-class classification of orthodontic images." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 4 (August 1, 2020): 3558. http://dx.doi.org/10.11591/ijece.v10i4.pp3558-3567.

Full text
Abstract:
The purpose of this study is to investigate computer vision and machine learning methods for the classification of orthodontic images, in order to provide orthodontists with a solution for multi-class classification of patients' images to evaluate the evolution of their treatment. To this end, we proposed three algorithms based on extracted features, such as facial features and skin colour using the YCbCr colour space, assigned to nodes of a decision tree to classify orthodontic images: an algorithm for intra-oral images, an algorithm for mould images and an algorithm for extra-oral images. Then, we compared our method by implementing the Local Binary Pattern (LBP) algorithm to extract textural features from images. After that, we applied the principal component analysis (PCA) algorithm to reduce the redundant parameters in order to classify LBP features with six classifiers: Quadratic Support Vector Machine (SVM), Cubic SVM, Radial Basis Function SVM, Cosine K-Nearest Neighbours (KNN), Euclidean KNN, and Linear Discriminant Analysis (LDA). The presented algorithms have been evaluated on a dataset of images of 98 different patients, and experimental results demonstrate the good performance of our proposed method, with a high accuracy compared with the machine learning algorithms, among which the LDA classifier achieves an accuracy of 84.5%.
APA, Harvard, Vancouver, ISO, and other styles
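The LBP textural features used in the comparison can be sketched for the basic 3×3 case: each interior pixel is re-coded as an 8-bit number whose bits record which neighbours are at least as bright as the centre. This is the textbook operator, not the paper's full pipeline (which additionally applies PCA and the six classifiers); the image-as-nested-lists layout is our simplification.

```python
def lbp_image(img):
    # img: 2-D list of grayscale values. Returns the LBP code of every
    # interior pixel; classifiers typically consume the histogram of
    # these codes rather than the codes themselves.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= img[r][c]:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes
```

A flat region yields the all-ones code and a local maximum yields 0, so the code histogram summarises local texture independent of absolute brightness.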
47

CHATELAIN, CLEMENT, SEBASTIEN ADAM, YVES LECOURTIER, LAURENT HEUTTE, and THIERRY PAQUET. "NONCOST SENSITIVE SVM TRAINING USING MULTIPLE MODEL SELECTION." Journal of Circuits, Systems and Computers 19, no. 01 (February 2010): 231–42. http://dx.doi.org/10.1142/s0218126610005937.

Full text
Abstract:
In this paper, we propose a multi-objective optimization framework for SVM hyperparameter tuning. The key idea is to manage a population of classifiers optimizing both the False Positive and True Positive rates, rather than a single classifier optimizing a scalar criterion. Hence, each classifier in the population optimizes a particular trade-off between the objectives. Within the context of two-class classification problems, our work introduces the "receiver operating characteristics (ROC) front" concept, depicting a population of SVM classifiers, as an alternative to the ROC curve representation. The proposed framework leads to non-cost-sensitive SVM training relying on the pool of classifiers. The comparison with a traditional scalar optimization technique based on an AUC criterion shows promising results on UCI datasets.
APA, Harvard, Vancouver, ISO, and other styles
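The "ROC front" is the Pareto-optimal subset of a classifier population in (FPR, TPR) space: a classifier survives unless another one is at least as good on both rates and strictly better on one. A minimal sketch of that filter (our own function name):

```python
def roc_front(points):
    # points: list of (fpr, tpr) pairs, one per trained classifier.
    # Keep the non-dominated pairs: nothing else has lower-or-equal FPR
    # and higher-or-equal TPR with at least one strict improvement.
    front = []
    for i, (fpr, tpr) in enumerate(points):
        dominated = any(
            f2 <= fpr and t2 >= tpr and (f2 < fpr or t2 > tpr)
            for j, (f2, t2) in enumerate(points) if j != i)
        if not dominated:
            front.append((fpr, tpr))
    return sorted(front)
```

Each surviving point is a concrete SVM with its own hyperparameters, so an operator can pick a trade-off after training instead of fixing misclassification costs beforehand, which is the sense in which the training is non-cost-sensitive.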
48

Liu, Shuang, Peng Chen, and Keqiu Li. "Multiple sub-hyper-spheres support vector machine for multi-class classification." International Journal of Wavelets, Multiresolution and Information Processing 12, no. 03 (May 2014): 1450035. http://dx.doi.org/10.1142/s0219691314500350.

Full text
Abstract:
Support vector machine (SVM) was originally proposed to solve the binary classification problem. Multi-class classification is solved by combining multiple binary classifiers, which leads to high computation cost by introducing many quadratic programming (QP) problems. To decrease the computation cost, hyper-sphere SVM was put forward to compute a class-specific hyper-sphere for each class. If all resulting hyper-spheres are independent, all training and test samples can be correctly classified. When some of the hyper-spheres intersect, new decision rules should be adopted. To solve this problem, a multiple sub-hyper-sphere SVM is put forward in this paper. The new algorithm first computes hyper-spheres for all classes by the SMO algorithm, and then obtains the position relationships between the hyper-spheres. If hyper-spheres belong to the intersection set, an overlap coefficient is computed based on a map of key-value indices and the mother hyper-spheres are partitioned into a series of sub-hyper-spheres. For the new intersecting hyper-spheres, a similarity function, a same-error sub-hyper-sphere, or a different-error sub-hyper-sphere is used as the decision rule. If hyper-spheres belong to the inclusion set, the hyper-sphere with the larger radius is partitioned into sub-hyper-spheres. If hyper-spheres belong to the independence set, a decision function is defined for classification. Experimental results compared to other hyper-sphere SVMs show that our new proposed algorithm improves the performance of the resulting classifier and decreases the computation complexity of the decision on both artificial and benchmark data sets.
APA, Harvard, Vancouver, ISO, and other styles
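The decision rule for non-intersecting hyper-spheres can be sketched with a crude per-class sphere: centre at the class mean, radius reaching the farthest member. The paper obtains tighter spheres by solving a QP with SMO; the centroid version and all names below are our own illustration of the geometry only.

```python
import math

def dist(p, q):
    # Euclidean distance between two points given as tuples.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fit_spheres(data):
    # data: dict mapping class label -> list of points.
    # One (centre, radius) sphere per class.
    spheres = {}
    for label, points in data.items():
        dim = len(points[0])
        centre = [sum(p[i] for p in points) / len(points) for i in range(dim)]
        radius = max(dist(p, centre) for p in points)
        spheres[label] = (centre, radius)
    return spheres

def classify(x, spheres):
    # Assign x to the class whose sphere boundary is nearest;
    # the signed margin is negative when x lies inside the sphere.
    return min(spheres, key=lambda c: dist(x, spheres[c][0]) - spheres[c][1])
```

When spheres intersect, this signed-margin rule becomes ambiguous inside the overlap, which is precisely the region the paper's sub-hyper-sphere partitioning is designed to handle.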
49

Uchiyama, Emiko, Wataru Takano, and Yoshihiko Nakamura. "Multi-class grasping classifiers using EEG data and a common spatial pattern filter." Advanced Robotics 31, no. 9 (January 27, 2017): 468–81. http://dx.doi.org/10.1080/01691864.2017.1279569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Patel, Jitendra, and Anurag Jain. "Hybrid Genetic and Dempster Shafer Theory based Classifiers for Multi-Class Classification Tasks." International Journal of Computer Applications 137, no. 2 (March 17, 2016): 5–9. http://dx.doi.org/10.5120/ijca2016908679.

Full text
APA, Harvard, Vancouver, ISO, and other styles