Journal articles on the topic 'Large margin classifiers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Large margin classifiers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Leite, Saul C., and Raul Fonseca Neto. "Incremental margin algorithm for large margin classifiers." Neurocomputing 71, no. 7-9 (March 2008): 1550–60. http://dx.doi.org/10.1016/j.neucom.2007.05.002.

2. Qi, Zhengling, and Yufeng Liu. "Convex Bidirectional Large Margin Classifiers." Technometrics 61, no. 2 (September 12, 2018): 176–86. http://dx.doi.org/10.1080/00401706.2018.1497544.

3. Wu, Yichao, and Yufeng Liu. "Adaptively Weighted Large Margin Classifiers." Journal of Computational and Graphical Statistics 22, no. 2 (April 2013): 416–32. http://dx.doi.org/10.1080/10618600.2012.680866.

4. Domeniconi, C., D. Gunopulos, and J. Peng. "Large Margin Nearest Neighbor Classifiers." IEEE Transactions on Neural Networks 16, no. 4 (July 2005): 899–909. http://dx.doi.org/10.1109/tnn.2005.849821.

5. Wang, J., X. Shen, and Y. Liu. "Probability estimation for large-margin classifiers." Biometrika 95, no. 1 (January 31, 2008): 149–67. http://dx.doi.org/10.1093/biomet/asm077.

6. Gottlieb, Lee-Ad, Eran Kaufman, and Aryeh Kontorovich. "Apportioned margin approach for cost sensitive large margin classifiers." Annals of Mathematics and Artificial Intelligence 89, no. 12 (October 8, 2021): 1215–35. http://dx.doi.org/10.1007/s10472-021-09776-w.

7. Fu, Sheng, Sanguo Zhang, and Yufeng Liu. "Adaptively weighted large-margin angle-based classifiers." Journal of Multivariate Analysis 166 (July 2018): 282–99. http://dx.doi.org/10.1016/j.jmva.2018.03.004.

8. Cevikalp, Hakan, Bill Triggs, Hasan Serhan Yavuz, Yalçın Küçük, Mahide Küçük, and Atalay Barkana. "Large margin classifiers based on affine hulls." Neurocomputing 73, no. 16-18 (October 2010): 3160–68. http://dx.doi.org/10.1016/j.neucom.2010.06.018.

9. Huang, Kaizhu, Haiqin Yang, I. King, and M. R. Lyu. "Maxi–Min Margin Machine: Learning Large Margin Classifiers Locally and Globally." IEEE Transactions on Neural Networks 19, no. 2 (February 2008): 260–72. http://dx.doi.org/10.1109/tnn.2007.905855.

10. Bermejo, Sergio, and Joan Cabestany. "Oriented principal component analysis for large margin classifiers." Neural Networks 14, no. 10 (December 2001): 1447–61. http://dx.doi.org/10.1016/s0893-6080(01)00106-x.

11. Haffner, Patrick. "Scaling large margin classifiers for spoken language understanding." Speech Communication 48, no. 3-4 (March 2006): 239–61. http://dx.doi.org/10.1016/j.specom.2005.06.008.

12. Neumann, Julia, Christoph Schnörr, and Gabriele Steidl. "Efficient wavelet adaptation for hybrid wavelet–large margin classifiers." Pattern Recognition 38, no. 11 (November 2005): 1815–30. http://dx.doi.org/10.1016/j.patcog.2005.01.024.

13. Hu, Qinghua, Pengfei Zhu, Yongbin Yang, and Daren Yu. "Large-margin nearest neighbor classifiers via sample weight learning." Neurocomputing 74, no. 4 (January 2011): 656–60. http://dx.doi.org/10.1016/j.neucom.2010.09.006.

14. Liu, Sheng, Yixin Chen, and Dawn Wilkins. "Large margin classifiers and Random Forests for integrated biological prediction." International Journal of Bioinformatics Research and Applications 8, no. 1/2 (2012): 38. http://dx.doi.org/10.1504/ijbra.2012.045975.

15. Ladeira Marques, Marcelo, Saulo Moraes Villela, and Carlos Cristiano Hasenclever Borges. "Large margin classifiers to generate synthetic data for imbalanced datasets." Applied Intelligence 50, no. 11 (June 22, 2020): 3678–94. http://dx.doi.org/10.1007/s10489-020-01719-y.

16. Wu, Qiang, and Ding-Xuan Zhou. "SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming." Neural Computation 17, no. 5 (May 1, 2005): 1160–87. http://dx.doi.org/10.1162/0899766053491896.

Abstract:
Support vector machine (SVM) soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for a large data setting. Linear programming SVM classifiers are especially efficient for very large size samples. But little is known about their convergence, compared with the well-understood quadratic programming SVM classifier. In this article, we point out the difficulty and provide an error analysis. Our analysis shows that the convergence behavior of the linear programming SVM is almost the same as that of the quadratic programming SVM. This is implemented by setting a stepping-stone between the linear programming SVM and the classical 1-norm soft margin classifier. An upper bound for the misclassification error is presented for general probability distributions. Explicit learning rates are derived for deterministic and weakly separable distributions, and for distributions satisfying some Tsybakov noise condition.
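
For readers comparing the two programs named in this abstract, the standard textbook formulations (a sketch for orientation, not quoted from the paper) for training data (x_i, y_i) with y_i in {-1, +1} are:

```latex
% Classical 1-norm soft margin (QP) SVM:
\min_{w,b,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\,(\langle w,\phi(x_i)\rangle + b) \ge 1 - \xi_i,\ \xi_i \ge 0.

% Linear programming (LP) SVM: the quadratic regularizer is replaced by an
% l1 penalty on the coefficients of the kernel expansion f(x) = \sum_j \alpha_j K(x, x_j) + b:
\min_{\alpha,b,\xi}\ \sum_{j=1}^{n}|\alpha_j| + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\,f(x_i) \ge 1 - \xi_i,\ \xi_i \ge 0.
```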

17. Chen, Yen-Lun, Yuan F. Zheng, and Yi Liu. "Margin and Domain Integrated Classification for Images." International Journal of Information Acquisition 8, no. 1 (March 2011): 1–16. http://dx.doi.org/10.1142/s0219878911002343.

Abstract:
Multi-category classification is an ongoing research topic in image acquisition and processing for numerous applications. In this paper, a novel approach called margin and domain integrated classifier (MDIC) is addressed. It merges the conventional support vector machine (SVM) and support vector domain description (SVDD) classifiers, and handles multi-class problems as a combination of several target classes plus outliers. The basic idea behind the proposed approach is that target classes possess structured characteristics while outliers scatter around in the feature space. In our approach, the domain description and large-margin discrimination are adjustable and therefore yield higher classification accuracy which leads to better performance than conventional classifiers. The properties of MDIC are analyzed and the performance comparisons using synthetic and real data are presented.

18. Artemiou, Andreas. "Using adaptively weighted large margin classifiers for robust sufficient dimension reduction." Statistics 53, no. 5 (July 4, 2019): 1037–51. http://dx.doi.org/10.1080/02331888.2019.1636050.

19. Chen, Guanhua, Yufeng Liu, Dinggang Shen, and Michael R. Kosorok. "Composite large margin classifiers with latent subclasses for heterogeneous biomedical data." Statistical Analysis and Data Mining: The ASA Data Science Journal 9, no. 2 (January 8, 2016): 75–88. http://dx.doi.org/10.1002/sam.11300.

20. Drosou, Krystallenia, Andreas Artemiou, and Christos Koukouvinos. "A comparative study of the use of large margin classifiers on seismic data." Journal of Applied Statistics 42, no. 1 (July 21, 2014): 180–201. http://dx.doi.org/10.1080/02664763.2014.938619.

21. Shen, Jianqiang, and Thomas G. Dietterich. "A family of large margin linear classifiers and its application in dynamic environments." Statistical Analysis and Data Mining: The ASA Data Science Journal 2, no. 5-6 (November 17, 2009): 328–45. http://dx.doi.org/10.1002/sam.10055.

22. Linial, Nati, and Adi Shraibman. "Learning Complexity vs Communication Complexity." Combinatorics, Probability and Computing 18, no. 1-2 (March 2009): 227–45. http://dx.doi.org/10.1017/s0963548308009656.

Abstract:
This paper has two main focal points. We first consider an important class of machine learning algorithms: large margin classifiers, such as Support Vector Machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas. In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity. Communication is a key ingredient in many types of learning. This explains the relations between the field of learning theory and that of communication complexity [6, 10, 16, 26]. The results of this paper constitute another link in this rich web of relations. These new results have already been applied toward the solution of several open problems in communication complexity [18, 20, 29].
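
In the standard notation of this literature (a sketch for orientation, not quoted from the paper), the margin of a sign matrix A in {-1, +1}^{m x n} realized by unit vectors, and the margin complexity, are:

```latex
m(A) = \sup_{\substack{x_1,\dots,x_m,\ y_1,\dots,y_n \\ \|x_i\| = \|y_j\| = 1}}
       \ \min_{i,j}\ a_{ij}\,\langle x_i, y_j\rangle,
\qquad
\mathrm{mc}(A) = \frac{1}{m(A)},
```

and the paper's first result can be read as mc(A) = Theta(1/disc(A)), up to a small multiplicative constant.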

23. Song, Yunyan, Wenxin Zhu, Yingyuan Xiao, and Ping Zhong. "Robust relative margin support vector machines." Journal of Algorithms & Computational Technology 11, no. 2 (November 30, 2016): 186–91. http://dx.doi.org/10.1177/1748301816680503.

Abstract:
Recently, a class of classifiers, called relative margin machine, has been developed. Relative margin machine has shown significant improvements over the large margin counterparts on real-world problems. In binary classification, the most widely used loss function is the hinge loss, which results in the hinge loss relative margin machine. The hinge loss relative margin machine is sensitive to outliers. In this article, we proposed to change maximizing the shortest distance used in relative margin machine into maximizing the quantile distance, the pinball loss which is related to quantiles was used in classification. The proposed method is less sensitive to noise, especially the feature noise around the decision boundary. Meanwhile, the computational complexity of the proposed method is similar to that of the relative margin machine.
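
A minimal sketch of the two margin losses contrasted above, written on the functional margin u = y*f(x) and following the common pin-SVM convention (the paper's exact parameterization may differ):

```python
import numpy as np

def hinge_loss(u):
    # Classical hinge loss: only margin violations (u < 1) are penalized,
    # so the solution is driven by the points closest to the boundary.
    return np.maximum(0.0, 1.0 - u)

def pinball_loss(u, tau=0.1):
    # Pinball (quantile) loss: points beyond the margin (u > 1) also pay a
    # small slope tau, so the classifier maximizes a quantile distance and
    # becomes less sensitive to feature noise around the decision boundary.
    return np.where(u < 1.0, 1.0 - u, tau * (u - 1.0))
```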

24. Feng, Yunlong, Yuning Yang, Xiaolin Huang, Siamak Mehrkanoon, and Johan A. K. Suykens. "Robust Support Vector Machines for Classification with Nonconvex and Smooth Losses." Neural Computation 28, no. 6 (June 2016): 1217–47. http://dx.doi.org/10.1162/neco_a_00837.

Abstract:
This letter addresses the robustness problem when learning a large margin classifier in the presence of label noise. In our study, we achieve this purpose by proposing robustified large margin support vector machines. The robustness of the proposed robust support vector classifiers (RSVC), which is interpreted from a weighted viewpoint in this work, is due to the use of nonconvex classification losses. Besides the robustness, we also show that the proposed RSVC is simultaneously smooth, which again benefits from using smooth classification losses. The idea of proposing RSVC comes from M-estimation in statistics since the proposed robust and smooth classification losses can be taken as one-sided cost functions in robust statistics. Its Fisher consistency property and generalization ability are also investigated. Besides the robustness and smoothness, another nice property of RSVC lies in the fact that its solution can be obtained by solving weighted squared hinge loss–based support vector machine problems iteratively. We further show that in each iteration, it is a quadratic programming problem in its dual space and can be solved by using state-of-the-art methods. We thus propose an iteratively reweighted type algorithm and provide a constructive proof of its convergence to a stationary point. Effectiveness of the proposed classifiers is verified on both artificial and real data sets.
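
A hedged sketch of the iteratively reweighted scheme the abstract describes, with a weighted squared-hinge SVM solved in each round; the weighting function here is an illustrative choice, not the paper's:

```python
import numpy as np
from sklearn.svm import LinearSVC

def robust_svc(X, y, n_iter=10, C=1.0, sigma=1.0):
    # y must be in {-1, +1}. Each round solves a weighted squared-hinge SVM,
    # then down-weights points with large margin errors, which mimics the
    # effect of a robust (nonconvex) classification loss.
    weights = np.ones(len(y))
    clf = LinearSVC(C=C, loss="squared_hinge")
    for _ in range(n_iter):
        clf.fit(X, y, sample_weight=weights)
        err = np.maximum(0.0, 1.0 - y * clf.decision_function(X))
        weights = np.exp(-(err ** 2) / sigma)  # gross outliers get weight ~ 0
    return clf
```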

25. Steyrl, David, Reinhold Scherer, Josef Faller, and Gernot R. Müller-Putz. "Random forests in non-invasive sensorimotor rhythm brain-computer interfaces: a practical and convenient non-linear classifier." Biomedical Engineering / Biomedizinische Technik 61, no. 1 (February 1, 2016): 77–86. http://dx.doi.org/10.1515/bmt-2014-0117.

Abstract:
There is general agreement in the brain-computer interface (BCI) community that although non-linear classifiers can provide better results in some cases, linear classifiers are preferable, particularly as non-linear classifiers often involve a number of parameters that must be carefully chosen. However, new non-linear classifiers were developed over the last decade. One of them is the random forest (RF) classifier. Although popular in other fields of science, RFs are not common in BCI research. In this work, we address three open questions regarding RFs in sensorimotor rhythm (SMR) BCIs: parametrization, online applicability, and performance compared to regularized linear discriminant analysis (LDA). We found that the performance of RF is constant over a large range of parameter values. We demonstrate – for the first time – that RFs are applicable online in SMR-BCIs. Further, we show in an offline BCI simulation that RFs statistically significantly outperform regularized LDA by about 3%. These results confirm that RFs are practical and convenient non-linear classifiers for SMR-BCIs. Taking into account further properties of RFs, such as independence from feature distributions, maximum margin behavior, multiclass and advanced data mining capabilities, we argue that RFs should be taken into consideration for future BCIs.
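
A minimal sketch of the comparison reported above; generic synthetic features stand in for the study's band-power features, and the parameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Random forest: performance is reported to be flat over a wide parameter range.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
# Regularized (shrinkage) LDA, the usual linear baseline in SMR-BCIs.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")

print("RF :", cross_val_score(rf, X, y, cv=5).mean())
print("LDA:", cross_val_score(lda, X, y, cv=5).mean())
```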

26. Guermeur, Yann, André Elisseeff, and Dominique Zelus. "A comparative study of multi-class support vector machines in the unifying framework of large margin classifiers." Applied Stochastic Models in Business and Industry 21, no. 2 (2005): 199–214. http://dx.doi.org/10.1002/asmb.534.

27. Huerta, Ramón, Shankar Vembu, José M. Amigó, Thomas Nowotny, and Charles Elkan. "Inhibition in Multiclass Classification." Neural Computation 24, no. 9 (September 2012): 2473–507. http://dx.doi.org/10.1162/neco_a_00321.

Abstract:
The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions, that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a classification function that embodies unselective inhibition and train it in the large margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform better on larger areas of appropriate hyperparameters when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound to probabilistic exponential models and is Bayes consistent for 3-class problems. These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant comparative advantage to other multiclass SVM approaches.

28. Xu, Kaiquan, Stephen Shaoyi Liao, Raymond Y. K. Lau, and J. Leon Zhao. "Effective Active Learning Strategies for the Use of Large-Margin Classifiers in Semantic Annotation: An Optimal Parameter Discovery Perspective." INFORMS Journal on Computing 26, no. 3 (August 2014): 461–83. http://dx.doi.org/10.1287/ijoc.2013.0578.

29. Song, Fengxi, Jane You, David Zhang, and Yong Xu. "Impact of Full Rank Principal Component Analysis on Classification Algorithms for Face Recognition." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 3 (May 2012): 1256005. http://dx.doi.org/10.1142/s0218001412560058.

Abstract:
Full rank principal component analysis (FR-PCA) is a special form of principal component analysis (PCA) which retains all nonzero components of PCA. Generally speaking, it is hard to estimate how the accuracy of a classifier will change after data are compressed by PCA. However, this paper reveals an interesting fact that the transformation by FR-PCA does not change the accuracy of many well-known classification algorithms. It predicates that people can safely use FR-PCA as a preprocessing tool to compress high-dimensional data without deteriorating the accuracies of these classifiers. The main contribution of the paper is that it theoretically proves that the transformation by FR-PCA does not change accuracies of the k nearest neighbor, the minimum distance, support vector machine, large margin linear projection, and maximum scatter difference classifiers. In addition, through extensive experimental studies conducted on several benchmark face image databases, this paper demonstrates that FR-PCA can greatly promote the efficiencies of above-mentioned five classification algorithms in appearance-based face recognition.
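
The paper's central claim is easy to check empirically; a small sketch (the dataset and classifier choices here are ours, not the paper's):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
Z = PCA(n_components=None).fit_transform(X)  # retain all components (FR-PCA-like)

knn = KNeighborsClassifier(n_neighbors=1)
# Keeping every nonzero component amounts to a rotation (after centering),
# which preserves pairwise distances, so 1-NN accuracy should not change.
print(cross_val_score(knn, X, y, cv=5).mean())
print(cross_val_score(knn, Z, y, cv=5).mean())
```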

30. Wang, Liwei, Masashi Sugiyama, Cheng Yang, Kohei Hatano, and Jufu Feng. "Theory and Algorithm for Learning with Dissimilarity Functions." Neural Computation 21, no. 5 (May 2009): 1459–84. http://dx.doi.org/10.1162/neco.2008.08-06-805.

Abstract:
We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.

31. Cherry, Colin, Saif M. Mohammad, and Berry De Bruijn. "Binary Classifiers and Latent Sequence Models for Emotion Detection in Suicide Notes." Biomedical Informatics Insights 5s1 (January 2012): BII.S8933. http://dx.doi.org/10.4137/bii.s8933.

Abstract:
This paper describes the National Research Council of Canada's submission to the 2011 i2b2 NLP challenge on the detection of emotions in suicide notes. In this task, each sentence of a suicide note is annotated with zero or more emotions, making it a multi-label sentence classification task. We employ two distinct large-margin models capable of handling multiple labels. The first uses one classifier per emotion, and is built to simplify label balance issues and to allow extremely fast development. This approach is very effective, scoring an F-measure of 55.22 and placing fourth in the competition, making it the best system that does not use web-derived statistics or re-annotated training data. Second, we present a latent sequence model, which learns to segment the sentence into a number of emotion regions. This model is intended to gracefully handle sentences that convey multiple thoughts and emotions. Preliminary work with the latent sequence model shows promise, resulting in comparable performance using fewer features.
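
A hedged sketch of the first system's design (one large-margin binary classifier per emotion, i.e., binary relevance); the data here are synthetic placeholders, not the i2b2 features:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, Y = make_multilabel_classification(n_samples=500, n_classes=6, random_state=0)

# One independent margin classifier per label: a sentence may receive
# zero, one, or several emotion labels.
model = OneVsRestClassifier(LinearSVC()).fit(X, Y)
print(model.predict(X[:3]))  # rows of a binary indicator matrix
```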

32. Phetkaew, Thimaporn, Wanchai Rivepiboon, and Boonserm Kijsirikul. "Reordering Adaptive Directed Acyclic Graphs for Multiclass Support Vector Machines." Journal of Advanced Computational Intelligence and Intelligent Informatics 7, no. 3 (October 20, 2003): 315–21. http://dx.doi.org/10.20965/jaciii.2003.p0315.

Abstract:
The problem of extending binary support vector machines (SVMs) for multiclass classification is still an ongoing research issue. Ussivakul and Kijsirikul proposed the Adaptive Directed Acyclic Graph (ADAG) approach that provides accuracy comparable to that of the standard algorithm-Max Wins and requires low computation. However, different sequences of nodes in the ADAG may provide different accuracy. In this paper we present a new method for multiclass classification, Reordering ADAG, which is the modification of the original ADAG method. We show examples to exemplify that the margin (or 2/|w| value) between two classes of each binary SVM classifier affects the accuracy of classification, and this margin indicates the magnitude of confusion between the two classes. In this paper, we propose an algorithm to choose an optimal sequence of nodes in the ADAG by considering the |w| values of all classifiers to be used in data classification. We then compare our performance with previous methods including the ADAG and the Max Wins algorithm. Experimental results demonstrate that our method gives higher accuracy. Moreover it runs faster than Max Wins, especially when the number of classes and/or the number of dimensions are relatively large.
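
A small sketch of the quantity the reordering heuristic is built on: for a fitted linear SVM on one class pair, the geometric margin is 2/|w|, so a large |w| flags a confusable pair (the node-ordering algorithm itself is described in the paper):

```python
import numpy as np
from sklearn.svm import SVC

def pair_margin(X, y):
    # Fit a binary linear SVM for one pair of classes and return its
    # geometric margin 2 / ||w||; smaller values indicate more confusion.
    clf = SVC(kernel="linear").fit(X, y)
    return 2.0 / np.linalg.norm(clf.coef_.ravel())
```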

33. Chung, SueYeon, Uri Cohen, Haim Sompolinsky, and Daniel D. Lee. "Learning Data Manifolds with a Cutting Plane Method." Neural Computation 30, no. 10 (October 2018): 2593–615. http://dx.doi.org/10.1162/neco_a_01119.

Abstract:
We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom. Conventional data augmentation methods rely on sampling large numbers of training examples from these manifolds. Instead, we propose an iterative algorithm, [Formula: see text], based on a cutting plane approach that efficiently solves a quadratic semi-infinite programming problem to find the maximum margin solution. We provide a proof of convergence as well as a polynomial bound on the number of iterations required for a desired tolerance in the objective function. The efficiency and performance of [Formula: see text] are demonstrated in high-dimensional simulations and on image manifolds generated from the ImageNet data set. Our results indicate that [Formula: see text] is able to rapidly learn good classifiers and shows superior generalization performance compared with conventional maximum margin methods using data augmentation methods.

34. Zeng, Hong, Chen Yang, Hua Zhang, Zhenhua Wu, Jiaming Zhang, Guojun Dai, Fabio Babiloni, and Wanzeng Kong. "A LightGBM-Based EEG Analysis Method for Driver Mental States Classification." Computational Intelligence and Neuroscience 2019 (September 9, 2019): 1–11. http://dx.doi.org/10.1155/2019/3761203.

Abstract:
Fatigue driving can easily lead to road traffic accidents and bring great harm to individuals and families. Recently, electroencephalography- (EEG-) based physiological and brain activities for fatigue detection have been increasingly investigated. However, how to find an effective method or model to timely and efficiently detect the mental states of drivers still remains a challenge. In this paper, we combine common spatial pattern (CSP) and propose a light-weighted classifier, LightFD, which is based on gradient boosting framework for EEG mental states identification. The comparable results with traditional classifiers, such as support vector machine (SVM), convolutional neural network (CNN), gated recurrent unit (GRU), and large margin nearest neighbor (LMNN), show that the proposed model could achieve better classification performance, as well as the decision efficiency. Furthermore, we also test and validate that LightFD has better transfer learning performance in EEG classification of driver mental states. In summary, our proposed LightFD classifier has better performance in real-time EEG mental state prediction, and it is expected to have broad application prospects in practical brain-computer interaction (BCI).
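
A minimal sketch of a LightGBM-style classifier in the spirit of LightFD; generic synthetic features stand in for the paper's CSP features of EEG, and the parameters are illustrative:

```python
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)  # gradient boosting
clf.fit(Xtr, ytr)
print(accuracy_score(yte, clf.predict(Xte)))
```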

35. Kristiansen, Stein, Konstantinos Nikolaidis, Thomas Plagemann, Vera Goebel, Gunn Marit Traaen, Britt Øverland, Lars Aakerøy, et al. "Machine Learning for Sleep Apnea Detection with Unattended Sleep Monitoring at Home." ACM Transactions on Computing for Healthcare 2, no. 2 (March 2021): 1–25. http://dx.doi.org/10.1145/3433987.

Abstract:
Sleep apnea is a common and strongly under-diagnosed severe sleep-related respiratory disorder with periods of disrupted or reduced breathing during sleep. To diagnose sleep apnea, sleep data are collected with either polysomnography or polygraphy and scored by a sleep expert. We investigate in this work the use of supervised machine learning to automate the analysis of polygraphy data from the A3 study containing more than 7,400 hours of sleep monitoring data from 579 patients. We conduct a systematic comparative study of classification performance and resource use with different combinations of 27 classifiers and four sleep signals. The classifiers achieve up to 0.8941 accuracy (kappa: 0.7877) when using all four signal types simultaneously and up to 0.8543 accuracy (kappa: 0.7080) with only one signal, i.e., oxygen saturation. Methods based on deep learning outperform other methods by a large margin. All deep learning methods achieve nearly the same maximum classification performance even when they have very different architectures and sizes. When jointly accounting for classification performance, resource consumption and the ability to achieve with less training data high classification performance, we find that convolutional neural networks substantially outperform the other classifiers.

36. Herath, Herath Mudiyanselage Dhammike Piyumal Madhurajith, Weraniyagoda Arachchilage Sahanaka Anuththara Weraniyagoda, Rajapakshage Thilina Madhushan Rajapaksha, Patikiri Arachchige Don Shehan Nilmantha Wijesekara, Kalupahana Liyanage Kushan Sudheera, and Peter Han Joo Chong. "Automatic Assessment of Aphasic Speech Sensed by Audio Sensors for Classification into Aphasia Severity Levels to Recommend Speech Therapies." Sensors 22, no. 18 (September 14, 2022): 6966. http://dx.doi.org/10.3390/s22186966.

Abstract:
Aphasia is a type of speech disorder that can cause speech defects in a person. Identifying the severity level of the aphasia patient is critical for the rehabilitation process. In this research, we identify ten aphasia severity levels motivated by specific speech therapies based on the presence or absence of identified characteristics in aphasic speech in order to give more specific treatment to the patient. In the aphasia severity level classification process, we experiment on different speech feature extraction techniques, lengths of input audio samples, and machine learning classifiers toward classification performance. Aphasic speech is required to be sensed by an audio sensor and then recorded and divided into audio frames and passed through an audio feature extractor before feeding into the machine learning classifier. According to the results, the mel frequency cepstral coefficient (MFCC) is the most suitable audio feature extraction method for the aphasic speech level classification process, as it outperformed the classification performance of all mel-spectrogram, chroma, and zero crossing rates by a large margin. Furthermore, the classification performance is higher when 20 s audio samples are used compared with 10 s chunks, even though the performance gap is narrow. Finally, the deep neural network approach resulted in the best classification performance, which was slightly better than both K-nearest neighbor (KNN) and random forest classifiers, and it was significantly better than decision tree algorithms. Therefore, the study shows that aphasia level classification can be completed with accuracy, precision, recall, and F1-score values of 0.99 using MFCC for 20 s audio samples using the deep neural network approach in order to recommend corresponding speech therapy for the identified level. A web application was developed for English-speaking aphasia patients to self-diagnose the severity level and engage in speech therapies.
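
A minimal sketch of the winning feature pipeline (MFCCs over 20 s chunks, as the abstract recommends); the file name and the pooling step are illustrative assumptions:

```python
import librosa

# Load a fixed-length 20 s chunk of an aphasic speech recording.
y, sr = librosa.load("aphasic_sample.wav", duration=20.0)  # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # shape (13, n_frames)
features = mfcc.mean(axis=1)  # one simple way to pool frames into a vector
```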

37. Van Gestel, T., J. A. K. Suykens, G. Lanckriet, A. Lambrechts, B. De Moor, and J. Vandewalle. "Bayesian Framework for Least-Squares Support Vector Machine Classifiers, Gaussian Processes, and Kernel Fisher Discriminant Analysis." Neural Computation 14, no. 5 (May 1, 2002): 1115–47. http://dx.doi.org/10.1162/089976602753633411.

Abstract:
The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by mapping the input vector first in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performances.
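
For orientation, the LS-SVM classifier discussed above is trained by solving a linear system in the dual variables; in the standard form (a sketch, not quoted from the article):

```latex
% Dual system of the LS-SVM classifier, with \Omega_{kl} = y_k y_l K(x_k, x_l):
\begin{bmatrix} 0 & y^{\top} \\ y & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ \mathbf{1} \end{bmatrix},
\qquad
f(x) = \operatorname{sign}\Big(\sum_{k} \alpha_k\, y_k\, K(x, x_k) + b\Big).
```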

38. Cevikalp, Hakan, and Bill Triggs. "Hyperdisk based large margin classifier." Pattern Recognition 46, no. 6 (June 2013): 1523–31. http://dx.doi.org/10.1016/j.patcog.2012.11.004.

39. Yang, Hongxin, Xubing Yang, Fuquan Zhang, Qiaolin Ye, and Xijian Fan. "Infinite norm large margin classifier." International Journal of Machine Learning and Cybernetics 10, no. 9 (October 29, 2018): 2449–57. http://dx.doi.org/10.1007/s13042-018-0881-y.

40. Alexandridis, Georgios, Iraklis Varlamis, Konstantinos Korovesis, George Caridakis, and Panagiotis Tsantilas. "A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media." Information 12, no. 8 (August 18, 2021): 331. http://dx.doi.org/10.3390/info12080331.

Abstract:
As the amount of content that is created on social media is constantly increasing, more and more opinions and sentiments are expressed by people in various subjects. In this respect, sentiment analysis and opinion mining techniques can be valuable for the automatic analysis of huge textual corpora (comments, reviews, tweets etc.). Despite the advances in text mining algorithms, deep learning techniques, and text representation models, the results in such tasks are very good for only a few high-density languages (e.g., English) that possess large training corpora and rich linguistic resources; nevertheless, there is still room for improvement for the other lower-density languages as well. In this direction, the current work employs various language models for representing social media texts and text classifiers in the Greek language, for detecting the polarity of opinions expressed on social media. The experimental results on a related dataset collected by the authors of the current work are promising, since various classifiers based on the language models (naive bayesian, random forests, support vector machines, logistic regression, deep feed-forward neural networks) outperform those of word or sentence-based embeddings (word2vec, GloVe), achieving a classification accuracy of more than 80%. Additionally, a new language model for Greek social media has also been trained on the aforementioned dataset, proving that language models based on domain specific corpora can improve the performance of generic language models by a margin of 2%. Finally, the resulting models are made freely available to the research community.

41. Tang, Liang, Qi Xuan, Rong Xiong, Tie-jun Wu, and Jian Chu. "A multi-class large margin classifier." Journal of Zhejiang University-SCIENCE A 10, no. 2 (February 2009): 253–62. http://dx.doi.org/10.1631/jzus.a0820122.

42. Wang, Yuru, Qiaoyuan Liu, Minghao Yin, and ShengSheng Wang. "Large margin classifier-based ensemble tracking." Journal of Electronic Imaging 25, no. 4 (July 12, 2016): 043006. http://dx.doi.org/10.1117/1.jei.25.4.043006.

43. Chai, Jing, Hongwei Liu, Bo Chen, and Zheng Bao. "Large margin nearest local mean classifier." Signal Processing 90, no. 1 (January 2010): 236–48. http://dx.doi.org/10.1016/j.sigpro.2009.06.015.

44. Barak, Omri, and Mattia Rigotti. "A Simple Derivation of a Bound on the Perceptron Margin Using Singular Value Decomposition." Neural Computation 23, no. 8 (August 2011): 1935–43. http://dx.doi.org/10.1162/neco_a_00152.

Abstract:
The perceptron is a simple supervised algorithm to train a linear classifier that has been analyzed and used extensively. The classifier separates the data into two groups using a decision hyperplane, with the margin between the data and the hyperplane determining the classifier's ability to generalize and its robustness to input noise. Exact results for the maximal size of the separating margin are known for specific input distributions, and bounds exist for arbitrary distributions, but both rely on lengthy statistical mechanics calculations carried out in the limit of infinite input size. Here we present a short analysis of perceptron classification using singular value decomposition. We provide a simple derivation of a lower bound on the margin and an explicit formula for the perceptron weights that converges to the optimal result for large separating margins.
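
A minimal sketch of the object the paper analyzes: a plain perceptron and the normalized margin of the hyperplane it finds (the stopping rule is an illustrative choice):

```python
import numpy as np

def perceptron_margin(X, y, n_epochs=100):
    # y in {-1, +1}. Run the classical perceptron updates, then report the
    # normalized margin min_i y_i <w, x_i> / ||w|| of the learned hyperplane,
    # the quantity whose maximum the paper bounds via the SVD of the inputs.
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # misclassified (or on the boundary)
                w += yi * xi
    return np.min(y * (X @ w)) / np.linalg.norm(w)
```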

45. Zhu, Rixing, Jianwu Fang, Hongke Xu, and Jianru Xue. "Progressive Temporal-Spatial-Semantic Analysis of Driving Anomaly Detection and Recounting." Sensors 19, no. 23 (November 21, 2019): 5098. http://dx.doi.org/10.3390/s19235098.

Abstract:
For analyzing the traffic anomaly within dashcam videos from the perspective of ego-vehicles, the agent should spatial-temporally localize the abnormal occasion and regions and give a semantically recounting of what happened. Most existing formulations concentrate on the former spatial-temporal aspect and mainly approach this goal by training normal pattern classifiers/regressors/dictionaries with large-scale availably labeled data. However, anomalies are context-related, and it is difficult to distinguish the margin of abnormal and normal clearly. This paper proposes a progressive unsupervised driving anomaly detection and recounting (D&R) framework. The highlights are three-fold: (1) We formulate driving anomaly D&R as a temporal-spatial-semantic (TSS) model, which achieves a coarse-to-fine focusing and generates convincing driving anomaly D&R. (2) This work contributes an unsupervised D&R without any training data while performing an effective performance. (3) We novelly introduce the traffic saliency, isolation forest, visual semantic causal relations of driving scene to effectively construct the TSS model. Extensive experiments on a driving anomaly dataset with 106 video clips (temporal-spatial-semantically labeled carefully by ourselves) demonstrate superior performance over existing techniques.
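
One ingredient named above, sketched in isolation: an isolation forest scoring per-frame descriptors for anomaly. The feature extraction, traffic saliency, and semantic recounting stages are beyond this snippet, and the data here are random placeholders:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

X = np.random.RandomState(0).randn(500, 16)  # placeholder frame descriptors

iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
scores = iso.score_samples(X)          # lower scores mark more anomalous frames
candidates = np.argsort(scores)[:10]   # coarse temporal localization
```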

46. Krishnan, Neeraja M., Mohanraj I, Janani Hariharan, and Binay Panda. "CAFE MOCHA: An Integrated Platform for Discovering Clinically Relevant Molecular Changes in Cancer—An Example of Distant Metastasis– and Recurrence-Linked Classifiers in Head and Neck Squamous Cell Carcinoma." JCO Clinical Cancer Informatics, no. 2 (December 2018): 1–11. http://dx.doi.org/10.1200/cci.17.00045.

Abstract:
Purpose: With large amounts of multidimensional molecular data on cancers generated and deposited into public repositories such as The Cancer Genome Atlas and International Cancer Genome Consortium, a cancer type agnostic and integrative platform will help to identify signatures with clinical relevance. We devised such a platform and showcase it by identifying a molecular signature for patients with metastatic and recurrent (MR) head and neck squamous cell carcinoma (HNSCC).

Methods: We devised a statistical framework accompanied by a graphical user interface–driven application, Clinical Association of Functionally Established MOlecular CHAnges (CAFE MOCHA; https://github.com/binaypanda/CAFEMOCHA), to discover molecular signatures linked to a specific clinical attribute in a cancer type. The platform integrates mutations and indels, gene expression, DNA methylation, and copy number variations to discover a classifier first and then to predict an incoming tumor for the same by pulling defined class variables into a single framework that incorporates a coordinate geometry–based algorithm called complete specificity margin-based clustering, which ensures maximum specificity. CAFE MOCHA classifies an incoming tumor sample using either its matched normal or a built-in database of normal tissues. The application is packed and deployed using the install4j multiplatform installer. We tested CAFE MOCHA in HNSCC tumors (n = 513) followed by validation in tumors from an independent cohort (n = 18) for discovering a signature linked to distant MR.

Results: CAFE MOCHA identified an integrated signature, MR44, associated with distant MR HNSCC, with 80% sensitivity and 100% specificity in the discovery stage and 100% sensitivity and 100% specificity in the validation stage.

Conclusion: CAFE MOCHA is a cancer type and clinical attribute agnostic statistical framework to discover integrated molecular signatures.

47. Spasic, Irena, and Kate Button. "Patient Triage by Topic Modeling of Referral Letters: Feasibility Study." JMIR Medical Informatics 8, no. 11 (November 6, 2020): e21252. http://dx.doi.org/10.2196/21252.

Abstract:
Background: Musculoskeletal conditions are managed within primary care, but patients can be referred to secondary care if a specialist opinion is required. The ever-increasing demand for health care resources emphasizes the need to streamline care pathways with the ultimate aim of ensuring that patients receive timely and optimal care. Information contained in referral letters underpins the referral decision-making process but is yet to be explored systematically for the purposes of treatment prioritization for musculoskeletal conditions.

Objective: This study aims to explore the feasibility of using natural language processing and machine learning to automate the triage of patients with musculoskeletal conditions by analyzing information from referral letters. Specifically, we aim to determine whether referral letters can be automatically assorted into latent topics that are clinically relevant, that is, considered relevant when prescribing treatments. Here, clinical relevance is assessed by posing 2 research questions. Can latent topics be used to automatically predict treatment? Can clinicians interpret latent topics as cohorts of patients who share common characteristics or experiences such as medical history, demographics, and possible treatments?

Methods: We used latent Dirichlet allocation to model each referral letter as a finite mixture over an underlying set of topics and model each topic as an infinite mixture over an underlying set of topic probabilities. The topic model was evaluated in the context of automating patient triage. Given a set of treatment outcomes, a binary classifier was trained for each outcome using previously extracted topics as the input features of the machine learning algorithm. In addition, a qualitative evaluation was performed to assess the human interpretability of topics.

Results: The prediction accuracy of binary classifiers outperformed the stratified random classifier by a large margin, indicating that topic modeling could be used to predict the treatment, thus effectively supporting patient triage. The qualitative evaluation confirmed the high clinical interpretability of the topic model.

Conclusions: The results established the feasibility of using natural language processing and machine learning to automate triage of patients with knee or hip pain by analyzing information from their referral letters.
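
A hedged sketch of the pipeline described (topic proportions as features for per-outcome binary classifiers); the corpus, labels, and the choice of logistic regression are placeholders, not the study's exact setup:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

letters = ["knee pain worse on stairs, physiotherapy tried",    # placeholder corpus
           "hip stiffness, awaiting x-ray, analgesia",
           "knee locking episodes, possible meniscal tear",
           "hip pain at night, steroid injection helped"]
outcome = [0, 0, 1, 1]                                          # one treatment outcome

counts = CountVectorizer().fit_transform(letters)
# Each letter becomes a mixture over latent topics ...
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
# ... and one binary classifier per treatment outcome is trained on the topics.
clf = LogisticRegression().fit(topics, outcome)
```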

48. Hochreiter, Sepp, and Klaus Obermayer. "Support Vector Machines for Dyadic Data." Neural Computation 18, no. 6 (June 2006): 1472–510. http://dx.doi.org/10.1162/neco.2006.18.6.1472.

Abstract:
We describe a new technique for the analysis of dyadic data, where two sets of objects (row and column objects) are characterized by a matrix of numerical values that describe their mutual relationships. The new technique, called potential support vector machine (P-SVM), is a large-margin method for the construction of classifiers and regression functions for the column objects. Contrary to standard support vector machine approaches, the P-SVM minimizes a scale-invariant capacity measure and requires a new set of constraints. As a result, the P-SVM method leads to a usually sparse expansion of the classification and regression functions in terms of the row rather than the column objects and can handle data and kernel matrices that are neither positive definite nor square. We then describe two complementary regularization schemes. The first scheme improves generalization performance for classification and regression tasks; the second scheme leads to the selection of a small, informative set of row support objects and can be applied to feature selection. Benchmarks for classification, regression, and feature selection tasks are performed with toy data as well as with several real-world data sets. The results show that the new method is at least competitive with but often performs better than the benchmarked standard methods for standard vectorial as well as true dyadic data sets. In addition, a theoretical justification is provided for the new approach.

49. Torres, L. C. B., C. L. Castro, F. Coelho, F. Sill Torres, and A. P. Braga. "Distance‐based large margin classifier suitable for integrated circuit implementation." Electronics Letters 51, no. 24 (November 2015): 1967–69. http://dx.doi.org/10.1049/el.2015.1644.

50. Peng, Xinjun, and Yifei Wang. "Geometric Algorithms to Large Margin Classifier Based on Affine Hulls." IEEE Transactions on Neural Networks and Learning Systems 23, no. 2 (February 2012): 236–46. http://dx.doi.org/10.1109/tnnls.2011.2179120.
