Academic literature on the topic 'Greedy algorithms; Kernel discrimination'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Greedy algorithms; Kernel discrimination.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Greedy algorithms; Kernel discrimination"

1

Zang, Miao, Huimin Xu, and Yongmei Zhang. "Kernel-Based Multiview Joint Sparse Coding for Image Annotation." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6727105.

Abstract:
Automatic image annotation remains a challenging task due to the semantic gap between visual features and semantic concepts. To reduce this gap, this paper puts forward a kernel-based multiview joint sparse coding (KMVJSC) framework for image annotation. In KMVJSC, different visual features as well as label information are treated as distinct views and are mapped to an implicit kernel space, in which the originally nonlinearly separable data become linearly separable. All the views are then integrated into a multiview joint sparse coding framework that adaptively finds a set of optimal sparse representations and discriminative dictionaries, effectively exploiting the complementary information of the different views. An optimization algorithm is presented by extending the K-singular value decomposition (KSVD) and accelerated proximal gradient (APG) algorithms to the kernel multiview framework. In addition, a label propagation scheme using sparse reconstruction and a weighted greedy label transfer algorithm is proposed. Comparative experiments on three datasets demonstrate the competitiveness of the proposed approach compared with other related methods.
2

Diethe, Tom. "An Empirical Study of Greedy Kernel Fisher Discriminants." Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/793986.

Abstract:
A sparse version of Kernel Fisher Discriminant Analysis using an approach based on Matching Pursuit (MPKFDA) has been shown to be competitive with Kernel Fisher Discriminant Analysis and Support Vector Machines on publicly available datasets, with additional experiments showing that MPKFDA on average outperforms these algorithms in extremely high-dimensional settings. In (nearly) all cases, the resulting classifier was sparser than the Support Vector Machine. Natural questions arise: what is the relative importance of the Fisher criterion for selecting bases versus the deflation step, and can the algorithm be sped up without degrading performance? Here we analyse the algorithm in more detail, providing alternatives to its optimisation criterion and deflation procedure, and also propose a stagewise version. We demonstrate empirically that these alternatives can considerably improve the computational complexity whilst maintaining the performance of the original algorithm (and in some cases improving it).
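The greedy basis selection at the heart of this approach can be illustrated with a minimal sketch. This is not the authors' implementation: it scores each candidate kernel column with a simple two-class Fisher ratio and omits the deflation step the abstract refers to; all function names, parameters, and the toy data below are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fisher_score(col, y):
    # Two-class Fisher ratio for one projected feature:
    # between-class separation over within-class spread.
    a, b = col[y == 0], col[y == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

def greedy_fisher_bases(X, y, n_bases=3, gamma=1.0):
    # Greedily pick training points whose kernel column best
    # separates the two classes; no deflation between picks.
    K = rbf_kernel(X, X, gamma)
    chosen = []
    for _ in range(n_bases):
        scores = [fisher_score(K[:, j], y) if j not in chosen else -np.inf
                  for j in range(X.shape[0])]
        chosen.append(int(np.argmax(scores)))
    return chosen

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
bases = greedy_fisher_bases(X, y, n_bases=3)
print(bases)  # three distinct training-point indices
```

The deflation step studied in the paper would orthogonalise the kernel matrix against each chosen basis before the next pick; the sketch above simply masks already-chosen indices instead.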
3

Srivastava, Ankur, and Andrew J. Meade. "A Sparse Greedy Self-Adaptive Algorithm for Classification of Data." Advances in Adaptive Data Analysis 2, no. 1 (January 2010): 97–114. http://dx.doi.org/10.1142/s1793536910000355.

Abstract:
Kernels have become an integral part of most data classification algorithms. However, the kernel parameters are generally not optimized during learning. In this work, a novel adaptive technique called Sequential Function Approximation (SFA) has been developed for classification, which determines the values of the control and kernel hyperparameters during learning. This tool constructs sparse radial basis function networks in a greedy fashion. Experiments were carried out on synthetic and real-world datasets, where SFA had performance comparable to other popular classification schemes whose parameters were optimized by an exhaustive grid search.
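The greedy construction of a sparse radial basis function network described above can be sketched roughly as follows. This is a simplified stand-in, not the published SFA algorithm: it adds one training point as a center per step, keeping whichever most reduces the least-squares residual, and it uses a fixed kernel width rather than adapting hyperparameters during learning as SFA does.

```python
import numpy as np

def rbf(X, center, width):
    return np.exp(-((X - center) ** 2).sum(-1) / (2 * width ** 2))

def greedy_rbf_fit(X, y, n_centers=3, width=2.0):
    # Grow a sparse RBF network greedily: each step adds the training
    # point (as a center) whose basis function most reduces the
    # squared residual of a least-squares fit to the labels.
    centers = []
    Phi = np.ones((len(X), 1))  # bias column
    for _ in range(n_centers):
        best = None
        for j in range(len(X)):
            if j in centers:
                continue
            P = np.hstack([Phi, rbf(X, X[j], width)[:, None]])
            w, *_ = np.linalg.lstsq(P, y, rcond=None)
            resid = ((P @ w - y) ** 2).sum()
            if best is None or resid < best[0]:
                best = (resid, j, P)
        _, j, Phi = best
        centers.append(j)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w, Phi

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (15, 2)), rng.normal(2, 0.5, (15, 2))])
y = np.array([-1.0] * 15 + [1.0] * 15)
centers, w, Phi = greedy_rbf_fit(X, y)
acc = (np.sign(Phi @ w) == y).mean()
print(acc)
```

On two well-separated clusters like these, a handful of greedily chosen centers is enough for near-perfect training accuracy, which is the sparsity argument the paper makes.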
4

Wenzel, Tizian, Gabriele Santin, and Bernard Haasdonk. "A novel class of stabilized greedy kernel approximation algorithms: Convergence, stability and uniform point distribution." Journal of Approximation Theory 262 (February 2021): 105508. http://dx.doi.org/10.1016/j.jat.2020.105508.

5

Bian, Lu Sha, Yong Fang Yao, Xiao Yuan Jing, Sheng Li, Jiang Yue Man, and Jie Sun. "Face Recognition Based on a Fast Kernel Discriminant Analysis Approach." Advanced Materials Research 433-440 (January 2012): 6205–11. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.6205.

Abstract:
The computational cost of kernel discrimination is usually higher than that of linear discrimination, making many kernel methods impractically slow. To overcome this disadvantage, several accelerated algorithms have been presented, which express the kernel discriminant vectors using a subset of the mapped training samples selected by some criterion. However, they still need to calculate a large kernel matrix using all training samples, so they save rather limited computing time. In this paper, we propose a fast and effective kernel discriminant approach based on mapped mean samples (MMS). It calculates a small kernel matrix by constructing a few mean samples in the input space, and then expresses the kernel discriminant vectors using the MMS. The proposed kernel approach is tested on the public AR and FERET face databases. Experimental results show that this approach both saves computing time and achieves favorable recognition results.
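The core cost-saving idea, computing a small kernel matrix against a few mean samples instead of an n × n matrix over all training points, can be sketched as follows. The partitioning scheme used to build the mean samples here is a hypothetical illustration, not the construction from the paper.

```python
import numpy as np

def class_mean_samples(X, y, per_class=3, seed=0):
    # Build a few "mean samples" per class by averaging random
    # partitions of that class's training points (hypothetical scheme).
    rng = np.random.default_rng(seed)
    means = []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))
        for part in np.array_split(idx, per_class):
            means.append(X[part].mean(axis=0))
    return np.array(means)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = np.tile([0, 1], 100)
M = class_mean_samples(X, y)                         # 6 mean samples
K = np.exp(-0.5 * ((X[:, None] - M[None]) ** 2).sum(-1))
print(K.shape)  # (200, 6) instead of a full (200, 200) kernel matrix
```

Downstream discriminant vectors are then expressed over these few columns, which is where the claimed speedup comes from.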
6

Schleif, F. M., Thomas Villmann, Barbara Hammer, and Petra Schneider. "Efficient Kernelized Prototype Based Classification." International Journal of Neural Systems 21, no. 6 (December 2011): 443–57. http://dx.doi.org/10.1142/s012906571100295x.

Abstract:
Prototype based classifiers are effective algorithms in modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully extended to kernels to improve the discrimination power by means of the kernel concept, prototype based classifiers are typically still used with Euclidean distance measures. Kernelized variants of prototype based classifiers are currently too complex to be applied for larger data sets. Here we propose an extension of Kernelized Generalized Learning Vector Quantization (KGLVQ) employing a sparsity and approximation technique to reduce the learning complexity. We provide generalization error bounds and experimental results on real world data, showing that the extended approach is comparable to SVM on different public data.
7

Song, Han, Feng Li, Peiwen Guang, Xinhao Yang, Huanyu Pan, and Furong Huang. "Detection of Aflatoxin B1 in Peanut Oil Using Attenuated Total Reflection Fourier Transform Infrared Spectroscopy Combined with Partial Least Squares Discriminant Analysis and Support Vector Machine Models." Journal of Food Protection 84, no. 8 (March 12, 2021): 1315–20. http://dx.doi.org/10.4315/jfp-20-447.

Abstract:
This study was conducted to establish a rapid and accurate method for identifying aflatoxin contamination in peanut oil. Attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy combined with either partial least squares discriminant analysis (PLS-DA) or a support vector machine (SVM) algorithm was used to construct discriminative models for distinguishing between uncontaminated and aflatoxin-contaminated peanut oil. Peanut oil samples containing various concentrations of aflatoxin B1 were examined with an ATR-FTIR spectrometer. Preprocessed spectral data were input to the PLS-DA and SVM algorithms to construct discriminative models for aflatoxin contamination in peanut oil. The SVM penalty and kernel function parameters were optimized using grid search, a genetic algorithm, and particle swarm optimization. The PLS-DA model established using the spectral data had an accuracy of 94.64% and better discrimination than models established on preprocessed data. The SVM model established after data normalization and grid search optimization, with a penalty parameter of 16 and a kernel function parameter of 0.0359, had the best discrimination, with 98.2143% accuracy. The discriminative models for aflatoxin contamination in peanut oil established by combining ATR-FTIR spectral data with a nonlinear SVM algorithm were superior to the linear PLS-DA models.
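The exhaustive grid search over a penalty parameter and a kernel parameter mentioned in the abstract can be illustrated with a small sketch. As a stand-in for the SVM, it scores a kernel ridge classifier on a held-out half of the data; the parameter grids, helper names, and toy data are all hypothetical.

```python
import numpy as np

def rbf_gram(X, Y, gamma):
    return np.exp(-gamma * ((X[:, None] - Y[None]) ** 2).sum(-1))

def grid_search(X, y, penalties, gammas):
    # Exhaustive grid search over a penalty-like regulariser and the
    # RBF kernel parameter, scored on a held-out half of the data.
    n = len(X) // 2
    Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]
    best = (-1.0, None, None)
    for lam in penalties:
        for g in gammas:
            K = rbf_gram(Xtr, Xtr, g)
            alpha = np.linalg.solve(K + lam * np.eye(n), ytr)  # kernel ridge
            acc = (np.sign(rbf_gram(Xte, Xtr, g) @ alpha) == yte).mean()
            if acc > best[0]:
                best = (acc, lam, g)
    return best

rng = np.random.default_rng(3)
A = rng.normal(-1.5, 0.6, (20, 2))
B = rng.normal(1.5, 0.6, (20, 2))
X = np.vstack([A[:10], B[:10], A[10:], B[10:]])  # both classes in each half
y = np.array([-1.0] * 10 + [1.0] * 10 + [-1.0] * 10 + [1.0] * 10)
acc, lam, g = grid_search(X, y, [1e-3, 1e-1, 1.0], [0.01, 0.1, 1.0])
print(acc, lam, g)
```

The genetic algorithm and particle swarm optimizers the study compares against explore the same (penalty, kernel parameter) space without enumerating every grid point.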
8

Xiao, Wendong, and Yingjie Lu. "Daily Human Physical Activity Recognition Based on Kernel Discriminant Analysis and Extreme Learning Machine." Mathematical Problems in Engineering 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/790412.

Abstract:
Wearable-sensor-based human physical activity recognition has extensive applications in many fields, such as physical training and health care. This paper focuses on the development of a highly efficient approach for daily human activity recognition using a triaxial accelerometer. In the proposed approach, a number of features, including the tilt angle, the signal magnitude area (SMA), and the wavelet energy, are extracted from the raw measurement signal via time-domain, frequency-domain, and time-frequency-domain analysis. A nonlinear kernel discriminant analysis (KDA) scheme is introduced to enhance the discrimination between different activities. Extreme learning machine (ELM) is proposed as a novel activity recognition algorithm. Experimental results show that the proposed KDA-based ELM classifier can achieve superior recognition performance, with higher accuracy and faster learning speed than the back-propagation (BP) and support vector machine (SVM) algorithms.
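An extreme learning machine in its basic form is easy to sketch: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, by least squares, which is where the fast learning speed comes from. This minimal two-class version omits the KDA preprocessing step the paper adds, and its names and toy data are hypothetical.

```python
import numpy as np

def elm_train(X, y, n_hidden=30, seed=0):
    # ELM: random input weights and biases, then a closed-form
    # least-squares solve for the output weights only.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                    # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.sign(np.tanh(X @ W + b) @ beta)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 0.7, (25, 3)), rng.normal(2, 0.7, (25, 3))])
y = np.array([-1.0] * 25 + [1.0] * 25)
W, b, beta = elm_train(X, y)
acc = (elm_predict(X, W, b, beta) == y).mean()
print(acc)
```

In the paper's pipeline, the rows of X would be KDA-projected accelerometer features rather than raw toy points.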
9

Kim, Jeonghun, and Ohbyung Kwon. "A Model for Rapid Selection and COVID-19 Prediction with Dynamic and Imbalanced Data." Sustainability 13, no. 6 (March 11, 2021): 3099. http://dx.doi.org/10.3390/su13063099.

Abstract:
The COVID-19 pandemic is threatening our quality of life and economic sustainability. The rapid spread of COVID-19 around the world requires each country or region to establish appropriate anti-proliferation policies in a timely manner. It is important, in making COVID-19-related health policy decisions, to predict the number of confirmed COVID-19 patients as accurately and quickly as possible. Predictions are already being made using several traditional models such as the susceptible, infected, and recovered (SIR) and susceptible, exposed, infected, and resistant (SEIR) frameworks, but these predictions may not be accurate due to the simplicity of the models, so a prediction model with more diverse input features is needed. However, it is difficult to propose a universal predictive model globally because there are differences in data availability by country and region. Moreover, the training data for predicting confirmed patients is typically an imbalanced dataset consisting mostly of normal data; this imbalance negatively affects the accuracy of prediction. Hence, the purposes of this study are to extract rules for selecting appropriate prediction algorithms and data imbalance resolution methods according to the characteristics of the datasets available for each country or region, and to predict the number of COVID-19 patients based on these algorithms. To this end, a decision tree-type rule was extracted to identify 13 data characteristics and a discrimination algorithm was selected based on those characteristics. With this system, we predicted the COVID-19 situation in four regions: Africa, China, Korea, and the United States. The proposed method has higher prediction accuracy than the random selection method, the ensemble method, or the greedy method of discriminant analysis, and prediction takes very little time.
10

Villa, Amalia, Abhijith Mundanad Narayanan, Sabine Van Huffel, Alexander Bertrand, and Carolina Varon. "Utility metric for unsupervised feature selection." PeerJ Computer Science 7 (April 21, 2021): e477. http://dx.doi.org/10.7717/peerj-cs.477.

Abstract:
Feature selection techniques are very useful approaches for dimensionality reduction in data analysis. They provide interpretable results by reducing the dimensions of the data to a subset of the original set of features. When the data lack annotations, unsupervised feature selectors are required for their analysis. Several algorithms for this aim exist in the literature, but despite their large applicability, they can be very inaccessible or cumbersome to use, mainly due to the need for tuning non-intuitive parameters and the high computational demands. In this work, a publicly available ready-to-use unsupervised feature selector is proposed, with comparable results to the state-of-the-art at a much lower computational cost. The suggested approach belongs to the methods known as spectral feature selectors. These methods generally consist of two stages: manifold learning and subset selection. In the first stage, the underlying structures in the high-dimensional data are extracted, while in the second stage a subset of the features is selected to replicate these structures. This paper suggests two contributions to this field, related to each of the stages involved. In the manifold learning stage, the effect of non-linearities in the data is explored, making use of a radial basis function (RBF) kernel, for which an alternative solution for the estimation of the kernel parameter is presented for cases with high-dimensional data. Additionally, the use of a backwards greedy approach based on the least-squares utility metric for the subset selection stage is proposed. The combination of these new ingredients results in the utility metric for unsupervised feature selection U2FS algorithm. The proposed U2FS algorithm succeeds in selecting the correct features in a simulation environment. In addition, the performance of the method on benchmark datasets is comparable to the state-of-the-art, while requiring less computational time. Moreover, unlike the state-of-the-art, U2FS does not require any tuning of parameters.
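The backwards greedy, least-squares-utility idea used in the subset selection stage can be sketched as follows: repeatedly drop the feature whose removal hurts a least-squares reconstruction the least. This toy version regresses a single target signal rather than the manifold structure U2FS replicates, and all names below are hypothetical.

```python
import numpy as np

def backward_greedy_select(X, t, n_keep):
    # Backwards greedy elimination: at each step drop the feature
    # whose removal increases the least-squares reconstruction error
    # of the target t the least (i.e. the lowest-utility feature).
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        errs = []
        for j in keep:
            cols = [k for k in keep if k != j]
            w, *_ = np.linalg.lstsq(X[:, cols], t, rcond=None)
            errs.append(((X[:, cols] @ w - t) ** 2).sum())
        keep.remove(keep[int(np.argmin(errs))])
    return keep

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 6))
t = 3 * X[:, 1] - 2 * X[:, 4] + 0.01 * rng.normal(size=100)
keep = backward_greedy_select(X, t, n_keep=2)
print(sorted(keep))  # → [1, 4], the two informative features
```

The utility metric in the paper avoids refitting from scratch for every candidate removal, which is one source of its lower computational cost.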

Dissertations / Theses on the topic "Greedy algorithms; Kernel discrimination"

1

Harper, Gavin. "The selection of compounds for screening in pharmaceutical research." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326003.

2

Nguyen, Minh-Lien Jeanne. "Estimation non paramétrique de densités conditionnelles : grande dimension, parcimonie et algorithmes gloutons." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS185/document.

Abstract:
We consider the problem of conditional density estimation in moderately large dimensions. Much more informative than regression functions, conditional densities are of main interest in recent methods, particularly in the Bayesian framework (studying the posterior distribution, finding its modes, etc.). After recalling the estimation issues in high dimension in the introduction, the two following chapters develop two methods which address the curse of dimensionality: being computationally efficient through a greedy iterative procedure, detecting the relevant variables under some suitably defined sparsity conditions, and converging at a quasi-optimal minimax rate. More precisely, both methods use kernel estimators well adapted to conditional density estimation and select a pointwise multivariate bandwidth by revisiting the greedy algorithm RODEO (Regularisation Of Derivative Expectation Operator). The first method has some initialization problems and extra logarithmic factors in its convergence rate; the second method solves these problems while adding adaptation to the smoothness. In the penultimate chapter, we discuss the calibration and numerical performance of these two procedures, before giving some comments and perspectives in the last chapter.