Journal articles on the topic 'Machine learning, kernel methods'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Machine learning, kernel methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Hofmann, Thomas, Bernhard Schölkopf, and Alexander J. Smola. "Kernel methods in machine learning." Annals of Statistics 36, no. 3 (June 2008): 1171–220. http://dx.doi.org/10.1214/009053607000000677.

2

Schaback, Robert, and Holger Wendland. "Kernel techniques: From machine learning to meshless methods." Acta Numerica 15 (May 2006): 543–639. http://dx.doi.org/10.1017/s0962492906270016.

Abstract:
Kernels are valuable tools in various fields of numerical analysis, including approximation, interpolation, meshless methods for solving partial differential equations, neural networks, and machine learning. This contribution explains why and how kernels are applied in these disciplines. It uncovers the links between them, in so far as they are related to kernel techniques. It addresses non-expert readers and focuses on practical guidelines for using kernels in applications.
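The common thread the abstract describes — from meshless methods to learning — can be made concrete with scattered-data interpolation: given samples (x_j, y_j), a kernel interpolant s(x) = Σ_j c_j k(x, x_j) is obtained by solving the linear system K c = y. A minimal numpy sketch; the Gaussian kernel choice, the node placement, and the tiny ridge term are illustrative assumptions, not the paper's prescription:

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise point sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rbf_interpolate(X_train, y_train, X_query, gamma=1.0, reg=1e-12):
    """Kernel interpolant s(x) = sum_j c_j k(x, x_j), where c solves K c = y.
    The tiny ridge term reg guards against an ill-conditioned kernel matrix."""
    K = gaussian_kernel(X_train, X_train, gamma)
    c = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return gaussian_kernel(X_query, X_train, gamma) @ c

# Recover f(x) = sin(x) from 10 equally spaced samples
X = np.linspace(0.0, 2.0 * np.pi, 10)[:, None]
y = np.sin(X).ravel()
s = rbf_interpolate(X, y, X)
err = np.max(np.abs(s - y))  # interpolant should reproduce the data
```

The same linear system underlies kernel methods for PDEs and kernel machines alike; only the choice of kernel and of evaluation points changes.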
3

Mengoni, Riccardo, and Alessandra Di Pierro. "Kernel methods in Quantum Machine Learning." Quantum Machine Intelligence 1, no. 3-4 (November 15, 2019): 65–71. http://dx.doi.org/10.1007/s42484-019-00007-4.

4

Zhang, Senyue, and Wenan Tan. "An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel." Discrete Dynamics in Nature and Society 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/7293278.

Abstract:
Because the performance of an extreme learning machine (ELM) is strongly correlated with its kernel function, a novel ELM based on a generalized triangular Hermite kernel function is proposed in this paper. First, the generalized triangular Hermite kernel is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel is proved to be a valid ELM kernel function. Then, the learning methodology of the ELM based on the proposed kernel is presented. The main advantage of the proposed kernel is that its parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information in the sample data. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The results demonstrate that the robustness and generalization performance of the proposed method exceed those of other extreme learning machines with different kernels, and that its learning speed is faster than that of support vector machine (SVM) methods.
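The kernel variant of ELM referenced here admits a closed-form solution: with kernel matrix K over the training set and one-hot targets T, the output weights are β = (I/C + K)⁻¹T. A sketch assuming an ordinary RBF kernel as a stand-in for the paper's triangular Hermite Dirichlet kernel; the class, data, and parameter values are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel ELM: the output weights have the closed form
    beta = (I/C + K)^(-1) T, so no iterative training is required."""
    def __init__(self, C=100.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]                   # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X):
        return (rbf_kernel(X, self.X, self.gamma) @ self.beta).argmax(1)

# toy two-class problem: one Gaussian blob per class
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (80, 2)) + np.where(np.arange(80)[:, None] < 40, -2.0, 2.0)
y = (np.arange(80) >= 40).astype(int)
acc = (KernelELM().fit(X, y).predict(X) == y).mean()
```

Swapping in a different kernel only changes `rbf_kernel`; the closed-form training step is unchanged, which is what makes the kernel parameter search the dominant cost the paper targets.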
5

INOKUCHI, RYO, and SADAAKI MIYAMOTO. "KERNEL METHODS FOR CLUSTERING: COMPETITIVE LEARNING AND c-MEANS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 14, no. 04 (August 2006): 481–93. http://dx.doi.org/10.1142/s0218488506004138.

Abstract:
Recently, kernel methods from support vector machines have been widely used in machine learning algorithms to obtain nonlinear models. Clustering is an unsupervised learning method that divides a whole data set into subgroups, and popular clustering algorithms such as c-means now employ kernel methods. Other kernel-based clustering algorithms have been inspired by kernel c-means. However, the formulation of kernel c-means has a high computational complexity. This paper gives an alternative formulation of kernel-based clustering algorithms derived from competitive learning clustering. The new formulation uses sequential updating, or on-line learning, to avoid the high computational complexity. We apply kernel methods to related algorithms: learning vector quantization and the self-organizing map. We moreover consider kernel methods for sequential c-means and its fuzzy version under the proposed formulation.
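The feature-space distance that drives kernel c-means expands entirely in kernel evaluations: ||φ(x_i) − m_c||² = K_ii − (2/|c|) Σ_{j∈c} K_ij + (1/|c|²) Σ_{j,l∈c} K_jl. A minimal batch sketch over a precomputed kernel matrix (the RBF kernel and toy data are illustrative; the paper's sequential/on-line update, which is its actual contribution, is not shown):

```python
import numpy as np

def kernel_cmeans(K, n_clusters, n_iter=20, seed=0):
    """Batch kernel c-means on a precomputed kernel matrix K, using
    ||phi(x_i) - m_c||^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                dist[:, c] = (np.diag(K)
                              - 2.0 * K[:, idx].mean(1)
                              + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(1)
    return labels

# two well-separated blobs, RBF kernel
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-3.0, 1.0, (30, 2)), rng.normal(3.0, 1.0, (30, 2))])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
labels = kernel_cmeans(np.exp(-0.1 * d2), 2)
```

The mean-over-cluster terms are exactly what the sequential formulation updates incrementally per sample, instead of recomputing them over the full kernel matrix each pass.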
6

Christmann, Andreas, Florian Dumpert, and Dao-Hong Xiang. "On extension theorems and their connection to universal consistency in machine learning." Analysis and Applications 14, no. 06 (October 25, 2016): 795–808. http://dx.doi.org/10.1142/s0219530516400029.

Abstract:
Statistical machine learning plays an important role in modern statistics and computer science. One main goal of statistical machine learning is to provide universally consistent algorithms, i.e. the estimator converges in probability or in some stronger sense to the Bayes risk or to the Bayes decision function. Kernel methods based on minimizing the regularized risk over a reproducing kernel Hilbert space (RKHS) belong to these statistical machine learning methods. It is in general unknown which kernel yields optimal results for a particular data set or for the unknown probability measure. Hence various kernel learning methods were proposed to choose the kernel and therefore also its RKHS in a data adaptive manner. Nevertheless, many practitioners often use the classical Gaussian RBF kernel or certain Sobolev kernels with good success. The goal of this paper is to offer one possible theoretical explanation for this empirical fact.
7

Saxena, Arti, and Vijay Kumar. "Bayesian Kernel Methods." International Journal of Big Data and Analytics in Healthcare 6, no. 1 (January 2021): 26–39. http://dx.doi.org/10.4018/ijbdah.20210101.oa3.

Abstract:
In the healthcare industry, providers look after diverse patients with a variety of diseases and complications. A great amount of data is therefore collected at the source, covering the status of the patients, the behaviour of their diseases, and more, and it becomes the practitioner's job to use the available data to diagnose diseases accurately and then prescribe the relevant treatment. Machine learning techniques are useful for dealing with large datasets, with the aim of producing meaningful information from raw data for the purpose of decision making. The heterogeneous behaviour of such data motivates the development of new tools that transform the available information into a form suitable for decision making. As the literature shows, patient healthcare can be analyzed through machine learning tools; accordingly, this article discusses a Bayesian kernel method for medical decision-making problems, which suits the purpose of researchers seeking to advance research in the domain of medical decision making.
8

Vidnerová, Petra, and Roman Neruda. "Air Pollution Modelling by Machine Learning Methods." Modelling 2, no. 4 (November 17, 2021): 659–74. http://dx.doi.org/10.3390/modelling2040035.

Abstract:
Precise environmental modelling of pollutant distributions is a key factor in addressing the issue of urban air pollution. Nowadays, urban air pollution monitoring is primarily carried out by sparse networks of spatially distributed fixed stations. The work in this paper aims to improve the situation by utilizing machine learning models to process the outputs of multi-sensor devices that are small and cheap, albeit less reliable, so that a massive urban deployment of such devices is possible. The main contribution of the paper is the design of a mathematical model providing sensor fusion to extract the information and transform it into the desired pollutant concentrations. Multi-sensor outputs are used as input to a machine learning model trained to produce the CO, NO2, and NOx concentration estimates. Several state-of-the-art machine learning methods, including original algorithms proposed by the authors, are utilized in this study: kernel methods, regularization networks, regularization networks with composite kernels, and deep neural networks. All methods are augmented with a proper hyper-parameter search to achieve the optimal performance for each model. All the methods considered achieved solid results; deep neural networks exhibited the best generalization ability, and regularization networks with product kernels achieved the best fit to the training set.
9

Rahmati, Marzie, and Mohammad Ali Zare Chahooki. "Improvement in bug localization based on kernel extreme learning machine." Journal of Communications Technology, Electronics and Computer Science 5 (April 30, 2016): 1. http://dx.doi.org/10.22385/jctecs.v5i0.77.

Abstract:
Bug localization uses bug reports received from users, developers, and testers to locate buggy files. Since finding a buggy file among thousands of files is time-consuming and tedious for developers, various methods based on information retrieval have been suggested to automate this process. In addition to information retrieval methods, machine learning methods are used for bug localization as well. Machine learning-based approaches improve the description of bug reports and program code by representing them as feature vectors. Learning with the extreme learning machine (ELM) has recently proved effective in many areas. This paper shows the effectiveness of the non-linear kernel ELM in bug localization. Furthermore, the effectiveness of different kernels in ELM is analyzed in comparison to other kernel-based learning methods. The experimental results on the Mozilla Firefox dataset show the effectiveness of kernel ELM for bug localization in software projects.
10

Price, Stanton R., Derek T. Anderson, Timothy C. Havens, and Steven R. Price. "Kernel Matrix-Based Heuristic Multiple Kernel Learning." Mathematics 10, no. 12 (June 11, 2022): 2026. http://dx.doi.org/10.3390/math10122026.

Abstract:
Kernel theory is a demonstrated tool that has made its way into nearly all areas of machine learning. However, a serious limitation of kernel methods is knowing which kernel is needed in practice. Multiple kernel learning (MKL) is an attempt to learn a new tailored kernel through the aggregation of a set of valid known kernels. There are generally three approaches to MKL: fixed rules, heuristics, and optimization. Optimization is the most popular; however, a shortcoming of most optimization approaches is that they are tightly coupled with the underlying objective function and overfitting occurs. Herein, we take a different approach to MKL. Specifically, we explore different divergence measures on the values in the kernel matrices and in the reproducing kernel Hilbert space (RKHS). Experiments on benchmark datasets and a computer vision feature learning task in explosive hazard detection demonstrate the effectiveness and generalizability of our proposed methods.
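One classic heuristic of the kind this abstract surveys is kernel-target alignment: each candidate kernel matrix is scored by its cosine similarity with the ideal kernel yy^T built from the training labels, and the clipped, normalized scores become mixing weights for the combined kernel. A small sketch; the candidate bandwidths and toy labels are illustrative, and this is alignment in its simplest uncentered form rather than the divergence measures the paper proposes:

```python
import numpy as np

def rbf(X, gamma):
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def alignment_weights(kernels, y):
    """Heuristic MKL via kernel-target alignment: score each base kernel by its
    cosine similarity with the ideal kernel y y^T, clip at zero, normalize."""
    yy = np.outer(y, y).astype(float)
    scores = np.array([(K * yy).sum() / (np.linalg.norm(K) * np.linalg.norm(yy))
                       for K in kernels])
    w = np.maximum(scores, 0.0)
    return w / w.sum()

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])                                 # labels in {-1, +1}
kernels = [rbf(X, g) for g in (0.01, 0.5, 10.0)]     # candidate bandwidths
w = alignment_weights(kernels, y)
K_combined = sum(wi * Ki for wi, Ki in zip(w, kernels))
```

Because the weights are non-negative, the combination of valid kernels remains a valid kernel, which is the property fixed-rule and heuristic MKL schemes rely on.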
11

Chen, Kai, Rongchun Li, Yong Dou, Zhengfa Liang, and Qi Lv. "Ranking Support Vector Machine with Kernel Approximation." Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4629534.

Abstract:
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
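Of the two approximations explored here, random Fourier features are the simpler to sketch: sample frequencies from the Gaussian kernel's spectral density, and the cosine feature map z satisfies z(x)·z(y) ≈ exp(−γ‖x−y‖²), so a linear model on z approximates the kernelized one without ever forming the kernel matrix. A numpy sketch; the dimensions and γ are illustrative:

```python
import numpy as np

def random_fourier_features(X, D, gamma, seed=0):
    """Map X to z(X) so that z(x) @ z(y) ~= exp(-gamma * ||x - y||^2)
    (Rahimi & Recht): sample frequencies from the kernel's spectral density."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), (X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
Z = random_fourier_features(X, D=4000, gamma=0.5)
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
approx_err = np.abs(Z @ Z.T - K_exact).max()  # shrinks as O(1/sqrt(D))
```

Training a pairwise ranking loss on the D-dimensional features then costs O(nD) per step instead of the O(n²) kernel evaluations of exact kernel RankSVM.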
12

Chiuso, A., and G. Pillonetto. "System Identification: A Machine Learning Perspective." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (May 3, 2019): 281–304. http://dx.doi.org/10.1146/annurev-control-053018-023744.

Abstract:
Estimation of functions from sparse and noisy data is a central theme in machine learning. In the last few years, many algorithms have been developed that exploit Tikhonov regularization theory and reproducing kernel Hilbert spaces. These are the so-called kernel-based methods, which include powerful approaches like regularization networks, support vector machines, and Gaussian regression. Recently, these techniques have also gained popularity in the system identification community. In both linear and nonlinear settings, kernels that incorporate information on dynamic systems, such as the smoothness and stability of the input–output map, can challenge consolidated approaches based on parametric model structures. In the classical parametric setting, the complexity of the model (the model order) needs to be chosen, typically from a finite family of alternatives, by trading bias and variance. This (discrete) model order selection step may be critical, especially when the true model does not belong to the model class. In regularization-based approaches, model complexity is controlled by tuning (continuous) regularization parameters, making the model selection step more robust. In this article, we review these new kernel-based system identification approaches and discuss extensions based on nuclear and [Formula: see text] norms.
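The regularization-based estimators reviewed here reduce, in the simplest case, to kernel ridge regression: minimizing the regularized risk over an RKHS gives, by the representer theorem, f(x) = Σ_i α_i k(x, x_i) with α = (K + λI)⁻¹y, and the continuous parameter λ plays the role that discrete model order plays in the parametric setting. A sketch on a noisy first-order step response; the RBF kernel, λ, and the toy system are illustrative (the stable-spline kernels used in system identification are not shown):

```python
import numpy as np

def krr_fit(X, y, gamma=1.0, lam=1e-2):
    """Kernel ridge regression: minimizing the regularized risk over the RKHS
    gives f(x) = sum_i alpha_i k(x, x_i) with alpha = (K + lam*I)^(-1) y."""
    K = np.exp(-gamma * ((X[:, None] - X[None]) ** 2).sum(-1))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_query, gamma=1.0):
    K_q = np.exp(-gamma * ((X_query[:, None] - X_train[None]) ** 2).sum(-1))
    return K_q @ alpha

# smooth the noisy step response of a first-order system, y(t) = 1 - exp(-t)
rng = np.random.default_rng(5)
t = np.linspace(0.0, 5.0, 60)[:, None]
y_true = 1.0 - np.exp(-t.ravel())
y_noisy = y_true + rng.normal(0.0, 0.05, 60)
alpha = krr_fit(t, y_noisy)
y_hat = krr_predict(t, alpha, t)
rmse = np.sqrt(np.mean((y_hat - y_true) ** 2))  # typically below the 0.05 noise level
```

Tuning λ (and γ) continuously, e.g. by marginal likelihood or cross-validation, is the robust counterpart of the discrete model order selection the review contrasts it with.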
13

Abdelhamid, Abdelaziz A., El-Sayed M. El-Kenawy, Abdelhameed Ibrahim, and Marwa M. Eid. "Intelligent Wheat Types Classification Model Using New Voting Classifier." Journal of Intelligent Systems and Internet of Things 7, no. 1 (2022): 30–39. http://dx.doi.org/10.54216/jisiot.070103.

Abstract:
When assessing quality in the grain supply chain, it is essential to identify and authenticate wheat types, as the process begins with the examination of seeds. Manual visual inspection is used for both grain identification and confirmation. Automatic classification methods based on machine learning and computer vision have made high-speed, low-effort options available. To this day, classification at the varietal level is still challenging. In this work, wheat seeds were classified using machine learning techniques. Seven physical parameters are used to categorize the seeds: wheat area, wheat perimeter, compactness, kernel length, kernel width, asymmetry coefficient, and kernel groove length. The dataset, compiled from the UCI repository, includes 210 instances of wheat kernels: 70 kernels selected at random from each of three varieties, Kama, Rosa, and Canadian. In the first stage, we use single machine learning models for classification, including multilayer neural networks, decision trees, and support vector machines. Each algorithm's output is measured against that of a machine learning ensemble method, which is optimized using the whale optimization and stochastic fractal search algorithms. In the end, the findings show that the proposed optimized ensemble achieves promising results compared to the single machine learning models.
14

Butnaru, Andrei-Mădălin. "Machine learning applied in natural language processing." ACM SIGIR Forum 54, no. 1 (June 2020): 1–3. http://dx.doi.org/10.1145/3451964.3451979.

Abstract:
Machine learning is present in our lives now more than ever. One of the most researched areas in machine learning focuses on creating systems that are able to understand natural language. Natural language processing is a broad domain with a vast number of applications and a significant impact on society. In the current era, we rely on tools that can ease our lives. We can search through thousands of documents to find something that we need, but this can take a lot of time; a system that can understand a simple query and return only relevant documents is more efficient. Although current approaches are well capable of understanding natural language, there is still space for improvement. This thesis studies multiple natural language processing tasks, presenting approaches to applications such as information retrieval, polarity detection, dialect identification [Butnaru and Ionescu, 2018], automatic essay scoring [Cozma et al., 2018], and methods that can help other systems understand documents better. Some of the approaches described in this thesis employ kernel methods, especially string kernels. A method based on string kernels that can determine the dialect in which a document is written is presented in this thesis. The approach treats texts at the character level, extracting features in the form of p-grams of characters and combining several kernels, including the presence-bits kernel and the intersection kernel. Kernel methods are also presented as a solution for measuring the complexity of a specific word. By combining multiple low-level features and high-level semantic features, the approach can determine whether a non-native speaker of a language perceives a word as complex. With one focus on string kernels, this thesis proposes two transductive methods that can improve the results obtained with string kernels.
One approach suggests using the pairwise string kernel similarities between samples from the training and test sets as features. The other method defines a simple self-training algorithm composed of two iterations. As usual, a classifier is trained over the training data and then used to predict the labels of the test samples. In the second iteration, the algorithm adds a predefined number of test samples to the training set for another round of training. These two transductive methods work by adapting the learning method to the test set. A novel cross-dialectal corpus is also presented in this thesis. The Moldavian versus Romanian Corpus (MOROCO) [Butnaru and Ionescu, 2019a] contains over 30,000 samples collected from the news domain, split across six categories. Several studies can be carried out on this corpus, such as binary classification between Romanian and Moldavian samples, intra-dialect multi-class categorization by topic, and cross-dialect multi-class classification by topic. Two baseline approaches are presented for this collection of texts. One method is based on a simple string kernel model. The second approach consists of a character-level deep neural network that includes several Squeeze-and-Excitation blocks (SE-blocks); to our knowledge, this is the first time an SE-block has been employed in a natural language processing context. This thesis also presents a method for German Dialect Identification based on a voting scheme that combines a character-level convolutional neural network, a long short-term memory network, and a model based on string kernels. Word sense disambiguation is still one of the challenges of the NLP domain. In this context, the thesis tackles the challenge and presents a novel disambiguation algorithm known as ShotgunWSD [Butnaru and Ionescu, 2019b].
By treating the global disambiguation problem as multiple local disambiguation problems, ShotgunWSD is capable of determining the sense of words in an unsupervised and deterministic way, using WordNet as a resource. For this method to work, three functions that compute the similarity between two word senses are defined. The disambiguation algorithm works as follows. The document is split into multiple windows of words of a specific size. For each window, a brute-force algorithm computes every combination of senses for the words within that window, and a score is calculated for every combination using one of the three similarity functions. The last step merges the windows using prefix and suffix matching to form longer, more relevant windows. In the end, the formed windows are ranked by length and score, and the top ones, based on a voting scheme, determine the sense of each word. Documents can contain a variable number of words, so employing them directly in machine learning can be hard. This thesis presents two novel approaches [Ionescu and Butnaru, 2019] that represent documents using a finite number of features. Both methods are inspired by computer vision, and they work by first transforming the words within documents into a word representation, such as word2vec. With words represented in this way, a k-means clustering algorithm is applied to the words, and the centroids of the resulting clusters are gathered into a vocabulary. Each word in a document is then represented by the closest centroid from the previously formed vocabulary. Up to this point, both methods share the same steps. The first approach computes the final representation of a document by calculating the frequency of each centroid found inside it; this method is named Bag of Super Word Embeddings (BOSWE) because each centroid can be viewed as a super word.
The second approach, known as Vector of Locally-Aggregated Word Embeddings (VLAWE), computes the document representation by accumulating the differences between each centroid and each word vector associated with that centroid. This thesis also describes a new way to score essays automatically by combining a low-level string kernel model with a high-level semantic feature representation, namely the BOSWE representation. The methods described in this thesis exhibit state-of-the-art performance on multiple tasks. One fact supporting this claim is that the string kernel method employed for Arabic Dialect Identification obtained first place two years in a row at the Fourth and Fifth Workshops on NLP for Similar Languages, Varieties, and Dialects (VarDial). The same string kernel model obtained fifth place in the German Dialect Identification Closed Shared Task at the VarDial Workshop of EACL 2017. Second, the Complex Word Identification model took third place in the CWI Shared Task of BEA-13 at NAACL 2018. Third, it is worth mentioning that the ShotgunWSD algorithm surpassed the MCS baseline on several datasets. Lastly, the model that combines string kernels and bag-of-super-word-embeddings obtained state-of-the-art performance on the Automated Student Assessment Prize dataset.
15

Khatri, Ajay, Shweta Agrawal, and Jyotir M. Chatterjee. "Wheat Seed Classification: Utilizing Ensemble Machine Learning Approach." Scientific Programming 2022 (February 2, 2022): 1–9. http://dx.doi.org/10.1155/2022/2626868.

Abstract:
Recognizing and authenticating wheat varieties is critical for quality evaluation in the grain supply chain, particularly for seed inspection. Recognition and verification of grains are traditionally carried out manually through direct visual examination. Automatic categorization techniques based on machine learning and computer vision offer fast and high-throughput solutions. Even so, categorization remains a complicated process at the varietal level. This paper utilizes machine learning approaches for classifying wheat seeds. The classification is performed based on 7 physical features: area of wheat, perimeter of wheat, compactness, length of the kernel, width of the kernel, asymmetry coefficient, and kernel groove length. The dataset, collected from the UCI repository, has 210 occurrences of wheat kernels from three wheat varieties, Kama, Rosa, and Canadian, with 70 kernels of each variety chosen at random for the experiment. In the first phase, K-nearest neighbor, classification and regression tree, and Gaussian naïve Bayes algorithms are implemented for classification. The results of these algorithms are compared with an ensemble approach. The results reveal that the accuracies of the KNN, decision tree, and naïve Bayes classifiers are 92%, 94%, and 92%, respectively. The highest accuracy, 95%, is achieved by the ensemble classifier, in which the decision is made by hard voting.
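The hard-voting scheme is simple to reproduce: train several base classifiers and give each test sample the majority label among their predictions. A self-contained numpy sketch with stand-in base learners (1-NN, nearest centroid, Gaussian naive Bayes) and synthetic 7-feature data in place of the UCI seeds set; all names, data, and parameters are illustrative:

```python
import numpy as np

class NearestCentroid:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None] - self.mu[None]) ** 2).sum(-1)
        return self.classes[d.argmin(1)]

class OneNN:
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        d = ((X[:, None] - self.X[None]) ** 2).sum(-1)
        return self.y[d.argmin(1)]

class GaussianNB:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        return self
    def predict(self, X):
        # per-class Gaussian log-likelihood (dropping constants)
        ll = -0.5 * (((X[:, None] - self.mu[None]) ** 2 / self.var[None])
                     + np.log(self.var[None])).sum(-1)
        return self.classes[ll.argmax(1)]

def hard_vote(models, X):
    """Majority vote over the base classifiers' predicted labels."""
    votes = np.stack([m.predict(X) for m in models])
    return np.array([np.bincount(col).argmax() for col in votes.T])

# hypothetical stand-in for the 7-feature seeds data: 3 synthetic varieties
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(m, 0.5, (70, 7)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 70)
models = [m.fit(X, y) for m in (OneNN(), NearestCentroid(), GaussianNB())]
acc = (hard_vote(models, X) == y).mean()
```

Hard voting only needs predicted labels, which is why heterogeneous base learners can be combined without calibrating their scores.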
16

Haghiabi, Amir Hamzeh, Ali Heidar Nasrolahi, and Abbas Parsaie. "Water quality prediction using machine learning methods." Water Quality Research Journal 53, no. 1 (January 19, 2018): 3–13. http://dx.doi.org/10.2166/wqrj.2018.025.

Abstract:
This study investigates the performance of artificial intelligence techniques, including the artificial neural network (ANN), group method of data handling (GMDH), and support vector machine (SVM), for predicting water quality components of the Tireh River, located in the southwest of Iran. To develop the ANN and SVM, different types of transfer and kernel functions were tested, respectively. The results indicated that both models perform suitably for predicting water quality components. During the development of the ANN and SVM, it was found that tansig and RBF, as transfer and kernel functions, performed best among those tested. Comparison of the GMDH model with the other applied models shows that, although it has acceptable performance for predicting the components of water quality, its accuracy is slightly lower than that of the ANN and SVM. Evaluation of the models' accuracy according to the error indexes showed that the SVM was the most accurate model. Examination of the results showed that all models had some over-estimation properties. Evaluating the results based on the DDR index, the lowest DDR value was associated with the performance of the SVM model.
17

Ren, Jinsheng, Ke Qin, Ying Ma, and Guangchun Luo. "On Software Defect Prediction Using Machine Learning." Journal of Applied Mathematics 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/785435.

Abstract:
This paper mainly deals with how kernel methods can be used for software defect prediction, since class imbalance can greatly reduce the performance of defect prediction. Two classifiers, namely the asymmetric kernel partial least squares classifier (AKPLSC) and the asymmetric kernel principal component analysis classifier (AKPCAC), are proposed for solving the class imbalance problem. This is achieved by applying a kernel function to the asymmetric partial least squares classifier and the asymmetric principal component analysis classifier, respectively. The kernel function used for both classifiers is the Gaussian function. Experiments conducted on NASA and SOFTLAB data sets using F-measure, Friedman's test, and Tukey's test confirm the validity of our methods.
18

Zhang, Chao, and Shaogao Lv. "An Efficient Kernel Learning Algorithm for Semisupervised Regression Problems." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/451947.

Abstract:
Kernel selection is a central issue in kernel methods of machine learning. In this paper, we investigate regularized learning schemes based on kernel design methods. Our ideal kernel is derived from a simple iterative procedure using large-scale unlabeled data in a semisupervised framework. Compared with most existing approaches, our algorithm avoids multiple rounds of optimization in the process of learning kernels, and its computation is as efficient as that of standard single kernel-based algorithms. Moreover, large amounts of information associated with the input space can be exploited, so generalization ability is improved accordingly. We provide some theoretical support for the least squares case in our setting; these advantages are also demonstrated by a simulation experiment and a real data analysis.
19

Stock, Michiel, Tapio Pahikkala, Antti Airola, Bernard De Baets, and Willem Waegeman. "A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression." Neural Computation 30, no. 8 (August 2018): 2245–83. http://dx.doi.org/10.1162/neco_a_01096.

Abstract:
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
20

Gao, Wen, Rong Yu, Zhaolei Yu, Zhuang Ma, and Md Masum. "Auxiliary Diagnosis Method of Chest Pain Based on Machine Learning." International Journal of Engineering and Technology 14, no. 4 (November 2022): 79–83. http://dx.doi.org/10.7763/ijet.2022.v14.1207.

Abstract:
Chest pain arises suddenly, and its pathological causes are complex and varied, ranging from non-fatal to fatal, so improving diagnostic accuracy is extremely important in prehospital and hospital emergency systems. We therefore propose introducing the decision tree, support vector machine, and KNN algorithms from machine learning into the auxiliary diagnosis of chest pain. We first select the better-performing algorithms among the decision tree, support vector machine, and KNN families, and then compare the classification performance of the CART algorithm, the support vector machine with a Gaussian kernel function, and the K-nearest neighbor algorithm with Euclidean distance. Analysis of the experimental results shows that the support vector machine with a Gaussian kernel function achieves the best detection time and diagnostic accuracy among the three algorithms and can assist medical staff in the emergency system in carrying out targeted chest pain diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
21

Yan, Yan, Hongzhong Ma, Dongdong Song, Yang Feng, and Dawei Duan. "OLTC Fault Diagnosis Method Based on Time Domain Analysis and Kernel Extreme Learning Machine." 電腦學刊 33, no. 6 (December 2022): 091–106. http://dx.doi.org/10.53106/199115992022123306008.

Full text
Abstract:
Aiming at the problems of limited feature information and low diagnosis accuracy in traditional on-load tap changer (OLTC) fault diagnosis, an OLTC fault diagnosis method based on time-domain analysis and kernel extreme learning machine (KELM) is proposed in this paper. Firstly, the time-frequency analysis method is used to analyze the collected OLTC vibration signal, extract the feature information, and form the feature matrix; then, the PCA algorithm is used to select effective features to build the initial optimal feature matrix; finally, a kernel extreme learning machine optimized by an improved grasshopper optimization algorithm (IGOA) is used to process the optimal feature matrix and classify fault patterns. Evaluation of algorithm performance in comparison with other existing methods indicates that the proposed method can improve the diagnostic accuracy by at least 7%.
APA, Harvard, Vancouver, ISO, and other styles
22

Deist, Timo M., Andrew Patti, Zhaoqi Wang, David Krane, Taylor Sorenson, and David Craft. "Simulation-assisted machine learning." Bioinformatics 35, no. 20 (March 23, 2019): 4072–80. http://dx.doi.org/10.1093/bioinformatics/btz199.

Full text
Abstract:
Motivation: In a predictive modeling setting, if sufficient details of the system behavior are known, one can build and use a simulation for making predictions. When sufficient system details are not known, one typically turns to machine learning, which builds a black-box model of the system using a large dataset of input sample features and outputs. We consider a setting which is between these two extremes: some details of the system mechanics are known but not enough for creating simulations that can be used to make high quality predictions. In this context we propose using approximate simulations to build a kernel for use in kernelized machine learning methods, such as support vector machines. The results of multiple simulations (under various uncertainty scenarios) are used to compute similarity measures between every pair of samples: sample pairs are given a high similarity score if they behave similarly under a wide range of simulation parameters. These similarity values, rather than the original high dimensional feature data, are used to build the kernel. Results: We demonstrate and explore the simulation-based kernel (SimKern) concept using four synthetic complex systems—three biologically inspired models and one network flow optimization model. We show that, when the number of training samples is small compared to the number of features, the SimKern approach dominates over no-prior-knowledge methods. This approach should be applicable in all disciplines where predictive models are sought and informative yet approximate simulations are available. Availability and implementation: The Python SimKern software, the demonstration models (in MATLAB, R), and the datasets are available at https://github.com/davidcraft/SimKern. Supplementary information: Supplementary data are available at Bioinformatics online.
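The SimKern idea of building the kernel from pairwise similarity scores rather than raw features can be sketched with scikit-learn's precomputed-kernel interface; here an RBF similarity is an illustrative stand-in for the paper's simulation-derived similarity matrix.

```python
# Sketch of the SimKern concept: build a kernel from pairwise similarity
# scores (here a stand-in Gaussian similarity; the paper derives these from
# repeated approximate simulations) and pass it to an SVM as precomputed.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

K = rbf_kernel(X, X)            # stand-in for simulation-derived similarities
clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)           # training accuracy on the precomputed kernel
```

Because only the n-by-n similarity matrix enters the learner, the original high-dimensional features never need to be seen by the SVM, which is the point of the approach.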
APA, Harvard, Vancouver, ISO, and other styles
23

Pilario, Karl Ezra, Mahmood Shafiee, Yi Cao, Liyun Lao, and Shuang-Hua Yang. "A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring." Processes 8, no. 1 (December 23, 2019): 24. http://dx.doi.org/10.3390/pr8010024.

Full text
Abstract:
Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work has identified 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue was discussed as to why they are important and how they were addressed through the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook into the future of kernel-based process monitoring, which can hopefully instigate more advanced yet practical solutions in the process industries.
APA, Harvard, Vancouver, ISO, and other styles
24

Yang, Tianbao, Mehrdad Mahdavi, Rong Jin, Jinfeng Yi, and Steven Hoi. "Online Kernel Selection: Algorithms and Evaluations." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1197–203. http://dx.doi.org/10.1609/aaai.v26i1.8298.

Full text
Abstract:
Kernel methods have been successfully applied to many machine learning problems. Nevertheless, since the performance of kernel methods depends heavily on the type of kernels being used, identifying good kernels among a set of given kernels is important to the success of kernel methods. A straightforward approach to address this problem is cross-validation by training a separate classifier for each kernel and choosing the best kernel classifier out of them. Another approach is Multiple Kernel Learning (MKL), which aims to learn a single kernel classifier from an optimal combination of multiple kernels. However, both approaches suffer from a high computational cost in computing the full kernel matrices and in training, especially when the number of kernels or the number of training examples is very large. In this paper, we tackle this problem by proposing an efficient online kernel selection algorithm. It incrementally learns a weight for each kernel classifier. The weight for each kernel classifier can help us to select a good kernel among a set of given kernels. The proposed approach is efficient in that (i) it is an online approach and therefore avoids computing all the full kernel matrices before training; (ii) it only updates a single kernel classifier each time by a sampling technique and therefore saves time on updating kernel classifiers with poor performance; (iii) it has a theoretically guaranteed performance compared to the best kernel predictor. Empirical studies on image classification tasks demonstrate the effectiveness of the proposed approach for selecting a good kernel among a set of kernels.
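The core of the approach, a per-kernel weight updated incrementally so that well-performing kernel classifiers dominate, resembles a Hedge-style multiplicative update, which can be sketched as follows; the per-round mistakes are synthetic stand-ins for actual classifier errors, and the paper's exact update and sampling scheme differ.

```python
# Sketch of the multiplicative-weight update at the heart of online kernel
# selection: each candidate kernel keeps a weight that is discounted whenever
# its classifier errs, so good kernels dominate over time.
import numpy as np

rng = np.random.default_rng(0)
n_kernels, rounds, eta = 3, 200, 0.5
weights = np.ones(n_kernels)
mistake_rates = np.array([0.1, 0.3, 0.5])    # kernel 0 errs least often

for _ in range(rounds):
    mistakes = rng.random(n_kernels) < mistake_rates  # simulated errors
    weights *= np.exp(-eta * mistakes)                # Hedge-style discount
    weights /= weights.sum()

best = int(np.argmax(weights))   # the lowest-error kernel wins the weight mass
```

Because only the selected classifier needs updating each round, the full kernel matrices never have to be materialized up front, which is the efficiency argument of the abstract.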
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Xinbiao, Yuxuan Du, Yong Luo, and Dacheng Tao. "Towards understanding the power of quantum kernels in the NISQ era." Quantum 5 (August 30, 2021): 531. http://dx.doi.org/10.22331/q-2021-08-30-531.

Full text
Abstract:
A key problem in the field of quantum computing is understanding whether quantum machine learning (QML) models implemented on noisy intermediate-scale quantum (NISQ) machines can achieve quantum advantages. Recently, Huang et al. [Nat Commun 12, 2631] partially answered this question through the lens of quantum kernel learning. Namely, they exhibited that quantum kernels can learn specific datasets with lower generalization error than the optimal classical kernel methods. However, most of their results are established in the ideal setting and ignore the caveats of near-term quantum machines. To this end, a crucial open question is: does the power of quantum kernels still hold under the NISQ setting? In this study, we fill this knowledge gap by exploring the power of quantum kernels when quantum system noise and sample error are considered. Concretely, we first prove that the advantage of quantum kernels vanishes for large datasets, small numbers of measurements, and large system noise. With the aim of preserving the superiority of quantum kernels in the NISQ era, we further devise an effective method via indefinite kernel learning. Numerical simulations accord with our theoretical results. Our work provides theoretical guidance for exploring advanced quantum kernels to attain quantum advantages on NISQ devices.
APA, Harvard, Vancouver, ISO, and other styles
26

Caraka, Rezzy Eko, Hasbi Yasin, and Adi Waridi Basyiruddin. "Peramalan Crude Palm Oil (CPO) Menggunakan Support Vector Regression Kernel Radial Basis." Jurnal Matematika 7, no. 1 (June 10, 2017): 43. http://dx.doi.org/10.24843/jmat.2017.v07.i01.p81.

Full text
Abstract:
Recently, approaches have been proposed for SVR in which, instead of selecting a single kernel, the weight of each kernel is optimized during training. Along this line of research, many pioneering kernel learning algorithms have been proposed. The use of kernels provides a powerful and principled approach to modeling nonlinear patterns through linear patterns in a feature space. Another benefit is that the design of kernels and linear methods can be decoupled, which greatly facilitates the modularity of machine learning methods. We perform experiments on a real data set of crude palm oil prices, using the radial basis kernel for application and illustration. The evaluation shows a good fit between predicted and actual values, demonstrating the validity and accuracy of the realized model based on MAPE and R2. Keywords: Crude Palm Oil; Forecasting; SVR; Radial Basis; Kernel
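A minimal sketch of radial-basis-kernel support vector regression of the kind used in this abstract, fitted to a toy noisy sine series rather than the CPO price data (which is not provided); the hyperparameters are illustrative.

```python
# Sketch of RBF-kernel support vector regression, fitted to a toy noisy sine
# series standing in for the crude palm oil price series.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(0, 4 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
r2 = model.score(X, y)   # coefficient of determination of the fit
```

In a forecasting setting, the model would of course be evaluated on held-out future observations (with MAPE and R2, as the abstract reports) rather than on the training fit shown here.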
APA, Harvard, Vancouver, ISO, and other styles
27

Yu, Xumin, Yan Feng, Yanlong Gao, Yingbiao Jia, and Shaohui Mei. "Dual-Weighted Kernel Extreme Learning Machine for Hyperspectral Imagery Classification." Remote Sensing 13, no. 3 (February 1, 2021): 508. http://dx.doi.org/10.3390/rs13030508.

Full text
Abstract:
Due to its excellent performance in high-dimensional space, the kernel extreme learning machine has been widely used in pattern recognition and machine learning fields. In this paper, we propose a dual-weighted kernel extreme learning machine for hyperspectral imagery classification. First, diverse spatial features are extracted by guided filtering. Then, the spatial features and spectral features are composited by a weighted kernel summation form. Finally, the weighted extreme learning machine is employed for the hyperspectral imagery classification task. This dual-weighted framework guarantees that the subtle spatial features are extracted, while the importance of minority samples is emphasized. Experiments carried out on three public data sets demonstrate that the proposed dual-weighted kernel extreme learning machine (DW-KELM) performs better than other kernel methods, in terms of accuracy of classification, and can achieve satisfactory results.
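The weighted kernel summation that composites spectral and spatial features can be sketched as K = w·K_spectral + (1 - w)·K_spatial; the feature blocks and the weight below are illustrative stand-ins for the hyperspectral data and the paper's learned weighting.

```python
# Sketch of the weighted kernel summation used to composite spectral and
# spatial features. A convex combination of positive semidefinite kernels is
# again positive semidefinite, so the composite is a valid kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_spectral = rng.normal(size=(50, 20))   # stand-in spectral features
X_spatial = rng.normal(size=(50, 8))     # stand-in spatial features

w = 0.6                                  # illustrative composite weight
K = w * rbf_kernel(X_spectral) + (1 - w) * rbf_kernel(X_spatial)
```

The composite matrix K can then be handed to any kernel machine (here a KELM in the paper) exactly as a single-source kernel would be.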
APA, Harvard, Vancouver, ISO, and other styles
28

Jiang, Hao, and Wai-Ki Ching. "Correlation Kernels for Support Vector Machines Classification with Applications in Cancer Data." Computational and Mathematical Methods in Medicine 2012 (2012): 1–7. http://dx.doi.org/10.1155/2012/205025.

Full text
Abstract:
High dimensional bioinformatics data sets provide an excellent and challenging research problem in the machine learning area. In particular, gene expression data generated by DNA microarrays are of high dimension with a significant level of noise. Supervised kernel learning with an SVM classifier has been successfully applied in biomedical diagnosis, such as discriminating between different kinds of tumor tissues. The correlation kernel has recently been applied to classification problems with Support Vector Machines (SVMs). In this paper, we develop a novel and parsimonious positive semidefinite kernel. The proposed kernel is shown experimentally to have better performance when compared to the usual correlation kernel. In addition, we propose a new kernel based on the correlation matrix that incorporates techniques for dealing with indefinite kernels. The resulting kernel is shown to be positive semidefinite and exhibits superior performance to the two kernels mentioned above. We then apply the proposed method to cancer data to discriminate between different tumor tissues, providing information for the diagnosis of diseases. Numerical experiments indicate that our method outperforms existing methods such as the decision tree method and the KNN method.
APA, Harvard, Vancouver, ISO, and other styles
29

Duan, Kangkang, Shuangyin Cao, Jinbao Li, and Chongfa Xu. "Prediction of Neutralization Depth of R.C. Bridges Using Machine Learning Methods." Crystals 11, no. 2 (February 20, 2021): 210. http://dx.doi.org/10.3390/cryst11020210.

Full text
Abstract:
Machine learning techniques have become a popular solution to prediction problems. These approaches show excellent performance without being explicitly programmed. In this paper, 448 sets of data were collected to predict the neutralization depth of concrete bridges in China. Random forest was used for parameter selection. In addition, four machine learning methods, including support vector machine (SVM), k-nearest neighbor (KNN), and XGBoost, were adopted to develop models. The results show that machine learning models obtain a high accuracy (>80%) and an acceptable macro recall rate (>80%) even with only four parameters. For SVM models, the radial basis function has a better performance than other kernel functions. The radial basis kernel SVM method has the highest verification accuracy (91%) and the highest macro recall rate (86%). The preferences of the different methods are also revealed in this study.
APA, Harvard, Vancouver, ISO, and other styles
30

Jiang, Qiangrong, and Jiajia Ma. "A novel graph kernel on chemical compound classification." Journal of Bioinformatics and Computational Biology 16, no. 06 (December 2018): 1850026. http://dx.doi.org/10.1142/s0219720018500269.

Full text
Abstract:
Considering the classification of compounds as a nonlinear problem, the use of kernel methods is a good choice. Graph kernels provide a nice framework combining machine learning methods with graph theory. Since the essence of graph kernels is to compare the substructures of two graphs, how to extract those substructures is a key question. In this paper, we propose a novel matrix-based graph kernel named the local block kernel, which can compare the similarity of partial substructures that contain any number of vertexes. The paper finally tests the efficacy of this novel graph kernel against a number of published mainstream methods on two datasets, NCI1 and NCI109, for the convenience of comparison.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhong, Li Yun, and Yu Ze Liu. "Using Kernel Fisher Criterion for Gaussian Kernel Optimization." Applied Mechanics and Materials 734 (February 2015): 534–38. http://dx.doi.org/10.4028/www.scientific.net/amm.734.534.

Full text
Abstract:
Empirical success of kernel-based learning methods is very much dependent on the kernel used. We propose an effective Gaussian kernel optimization approach for support vector machine (SVM). The key property of the proposed approach is that it adopts the kernel Fisher criterion (KFC) as the evaluation criterion to measure the goodness of the kernel used. After introducing a distance-based representation of KFC, we optimize the Gaussian kernel by using a gradient-based algorithm, which is based on the possibility of computing the gradient of KFC with respect to the width parameter of Gaussian kernel. The proposed approach is demonstrated with two popular UCI machine learning benchmark examples.
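A simplified sketch of the idea: score candidate Gaussian widths by a Fisher-style class-separability criterion computed directly from the kernel matrix, preferring widths under which within-class kernel values are large and between-class values are small. The criterion form and the coarse grid search below are illustrative stand-ins for the paper's exact KFC and its gradient-based optimization of the width parameter.

```python
# Sketch of Gaussian kernel width selection by a Fisher-style separability
# score: within-class kernel similarity minus between-class similarity.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, size=(40, 2)), rng.normal(1, 1, size=(40, 2))])
y = np.array([0] * 40 + [1] * 40)

def separability(sigma):
    K = rbf_kernel(X, X, gamma=1.0 / (2 * sigma**2))
    same = K[np.ix_(y == 0, y == 0)].mean() + K[np.ix_(y == 1, y == 1)].mean()
    cross = K[np.ix_(y == 0, y == 1)].mean()
    return same - 2 * cross   # large when classes are compact and separated

sigmas = [0.01, 0.1, 1.0, 10.0, 100.0]
best_sigma = max(sigmas, key=separability)
```

Extreme widths score poorly (a tiny sigma makes every point dissimilar to every other; a huge one makes everything similar), so the criterion peaks at an intermediate width matched to the class geometry, which is what the gradient ascent in the paper exploits.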
APA, Harvard, Vancouver, ISO, and other styles
32

Holmes, Irina, and Ambar N. Sengupta. "The Gaussian Radon transform and machine learning." Infinite Dimensional Analysis, Quantum Probability and Related Topics 18, no. 03 (September 2015): 1550019. http://dx.doi.org/10.1142/s0219025715500198.

Full text
Abstract:
There has been growing recent interest in probabilistic interpretations of kernel-based methods as well as learning in Banach spaces. The absence of a useful Lebesgue measure on an infinite-dimensional reproducing kernel Hilbert space is a serious obstacle for such stochastic models. We propose an estimation model for the ridge regression problem within the framework of abstract Wiener spaces and show how the support vector machine solution to such problems can be interpreted in terms of the Gaussian Radon transform.
APA, Harvard, Vancouver, ISO, and other styles
33

NIIJIMA, SATOSHI, and SATORU KUHARA. "MULTICLASS MOLECULAR CANCER CLASSIFICATION BY KERNEL SUBSPACE METHODS WITH EFFECTIVE KERNEL PARAMETER SELECTION." Journal of Bioinformatics and Computational Biology 03, no. 05 (October 2005): 1071–88. http://dx.doi.org/10.1142/s0219720005001491.

Full text
Abstract:
Microarray techniques provide new insights into molecular classification of cancer types, which is critical for cancer treatments and diagnosis. Recently, an increasing number of supervised machine learning methods have been applied to cancer classification problems using gene expression data. Support vector machines (SVMs), in particular, have become one of the most effective and leading methods. However, there exist few studies on the application of other kernel methods in the literature. We apply a kernel subspace (KS) method to multiclass cancer classification problems, and assess its validity by comparing it with multiclass SVMs. Our comparative study using seven multiclass cancer datasets demonstrates that the KS method has high performance that is comparable to multiclass SVMs. Furthermore, we propose an effective criterion for kernel parameter selection, which is shown to be useful for the computation of the KS method.
APA, Harvard, Vancouver, ISO, and other styles
34

Alpay, Daniel, Fabrizio Colombo, Kamal Diki, and Irene Sabadini. "An approach to the Gaussian RBF kernels via Fock spaces." Journal of Mathematical Physics 63, no. 11 (November 1, 2022): 113506. http://dx.doi.org/10.1063/5.0060342.

Full text
Abstract:
We use methods from the Fock space and Segal–Bargmann theories to prove several results on the Gaussian RBF kernel in complex analysis. The latter is one of the most used kernels in modern machine learning kernel methods and in support vector machine classification algorithms. Complex analysis techniques allow us to consider several notions linked to the radial basis function (RBF) kernels, such as the feature space and the feature map, using the so-called Segal–Bargmann transform. We also show how the RBF kernels can be related to some of the most used operators in quantum mechanics and time frequency analysis; specifically, we prove the connections of such kernels with creation, annihilation, Fourier, translation, modulation, and Weyl operators. For the Weyl operators, we also study a semigroup property in this case.
APA, Harvard, Vancouver, ISO, and other styles
35

Apsemidis, Anastasios, Stelios Psarakis, and Javier M. Moguerza. "A review of machine learning kernel methods in statistical process monitoring." Computers & Industrial Engineering 142 (April 2020): 106376. http://dx.doi.org/10.1016/j.cie.2020.106376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

FRANDI, EMANUELE, RICARDO ÑANCULEF, MARIA GRAZIA GASPARO, STEFANO LODI, and CLAUDIO SARTORI. "TRAINING SUPPORT VECTOR MACHINES USING FRANK–WOLFE OPTIMIZATION METHODS." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 03 (May 2013): 1360003. http://dx.doi.org/10.1142/s0218001413600033.

Full text
Abstract:
Training a support vector machine (SVM) requires the solution of a quadratic programming problem (QP) whose computational complexity becomes prohibitively expensive for large scale datasets. Traditional optimization methods cannot be directly applied in these cases, mainly due to memory restrictions. By adopting a slightly different objective function and under mild conditions on the kernel used within the model, efficient algorithms to train SVMs have been devised under the name of core vector machines (CVMs). This framework exploits the equivalence of the resulting learning problem with the task of building a minimal enclosing ball (MEB) problem in a feature space, where data is implicitly embedded by a kernel function. In this paper, we improve on the CVM approach by proposing two novel methods to build SVMs based on the Frank–Wolfe algorithm, recently revisited as a fast method to approximate the solution of a MEB problem. In contrast to CVMs, our algorithms do not require to compute the solutions of a sequence of increasingly complex QPs and are defined by using only analytic optimization steps. Experiments on a large collection of datasets show that our methods scale better than CVMs in most cases, sometimes at the price of a slightly lower accuracy. As CVMs, the proposed methods can be easily extended to machine learning problems other than binary classification. However, effective classifiers are also obtained using kernels which do not satisfy the condition required by CVMs, and thus our methods can be used for a wider set of problems.
APA, Harvard, Vancouver, ISO, and other styles
37

Ding, Yijie, Feng Chen, Xiaoyi Guo, Jijun Tang, and Hongjie Wu. "Identification of DNA-Binding Proteins by Multiple Kernel Support Vector Machine and Sequence Information." Current Proteomics 17, no. 4 (June 29, 2020): 302–10. http://dx.doi.org/10.2174/1570164616666190417100509.

Full text
Abstract:
Background: The binding of proteins to DNA is an important process in multiple biomolecular functions. However, traditional experimental methods for identifying DNA-binding proteins are still time-consuming and extremely expensive. Objective: In the past several years, various computational methods have been developed to detect DNA-binding proteins. However, most of them do not integrate multiple sources of information. Methods: In this study, we propose a novel computational method to predict DNA-binding proteins by a two-step Multiple Kernel Support Vector Machine (MK-SVM) and sequence information. Firstly, we extract several features and construct multiple kernels. Then, the multiple kernels are linearly combined by Multiple Kernel Learning (MKL). Finally, an SVM model built on the combined kernel is used to predict DNA-binding proteins. Results: The proposed method is tested on two benchmark data sets. Compared with other existing methods, our approach is comparable, and on some data sets even better. Conclusion: We can conclude that MK-SVM is more suitable than the common SVM as the classifier for DNA-binding protein identification.
APA, Harvard, Vancouver, ISO, and other styles
38

Bodó, Zalán, and Lehel Csató. "Hierarchical and Reweighting Cluster Kernels for Semi-Supervised Learning." International Journal of Computers Communications & Control 5, no. 4 (November 1, 2010): 469. http://dx.doi.org/10.15837/ijccc.2010.4.2496.

Full text
Abstract:
Recently, semi-supervised methods have gained increasing attention and many novel semi-supervised learning algorithms have been proposed. These methods exploit the information contained in the usually large unlabeled data set in order to improve classification or generalization performance. Using data-dependent kernels for kernel machines, one can build semi-supervised classifiers by constructing the kernel in such a way that feature space dot products incorporate the structure of the data set. In this paper we propose two such methods: one using specific hierarchical clustering, and another kernel for reweighting an arbitrary base kernel taking into account the cluster structure of the data.
APA, Harvard, Vancouver, ISO, and other styles
39

Zhang, Tong. "Learning Bounds for Kernel Regression Using Effective Data Dimensionality." Neural Computation 17, no. 9 (September 1, 2005): 2077–98. http://dx.doi.org/10.1162/0899766054323008.

Full text
Abstract:
Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted, and it has been used to argue that kernel learning can magically avoid the “curse-of-dimensionality” phenomenon encountered in statistical estimation problems. This letter shows that although using kernel representation, one can embed data into an infinite-dimensional feature space; the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergent rates are optimal under various circumstances.
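The letter's notion of a scale-sensitive effective dimension can be illustrated numerically with the standard ridge "effective degrees of freedom" of a kernel matrix (an assumption here, not necessarily the paper's exact algebraic definition): even though the RBF feature space is infinite-dimensional, only a handful of eigenvalues of the kernel matrix matter at any given regularization scale.

```python
# Sketch of a scale-sensitive effective dimension for a kernel matrix:
# d_eff(lambda) = sum_i mu_i / (mu_i + lambda), with mu_i the eigenvalues of
# K / n. Fast eigenvalue decay makes this far smaller than the sample size.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
mu = np.linalg.eigvalsh(rbf_kernel(X, X) / 200)

def effective_dimension(lam):
    return float(np.sum(mu / (mu + lam)))

d = effective_dimension(0.01)   # much smaller than the 200 samples
```

This is the quantitative sense in which the "curse-of-dimensionality" argument in the abstract fails: the learning complexity is governed by d_eff, not by the nominal feature dimension.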
APA, Harvard, Vancouver, ISO, and other styles
40

Huang, Shian-Chang, and Cheng-Feng Wu. "Energy Commodity Price Forecasting with Deep Multiple Kernel Learning." Energies 11, no. 11 (November 5, 2018): 3029. http://dx.doi.org/10.3390/en11113029.

Full text
Abstract:
Oil is an important energy commodity. The difficulties of forecasting oil prices stem from the nonlinearity and non-stationarity of their dynamics. However, the oil prices are closely correlated with global financial markets and economic conditions, which provides us with sufficient information to predict them. Traditional models are linear and parametric, and are not very effective in predicting oil prices. To address these problems, this study developed a new strategy. Deep (or hierarchical) multiple kernel learning (DMKL) was used to predict the oil price time series. Traditional methods from statistics and machine learning usually involve shallow models; however, they are unable to fully represent complex, compositional, and hierarchical data features. This explains why traditional methods fail to track oil price dynamics. This study aimed to solve this problem by combining deep learning and multiple kernel machines using information from oil, gold, and currency markets. DMKL is good at exploiting multiple information sources. It can effectively identify the relevant information and simultaneously select an apposite data representation. The kernels of DMKL were embedded in a directed acyclic graph (DAG), which is a deep model and efficient at representing complex and compositional data features. This provided a solid foundation for extracting the key features of oil price dynamics. By using real data for empirical testing, our new system robustly outperformed traditional models and significantly reduced the forecasting errors.
APA, Harvard, Vancouver, ISO, and other styles
41

Majumder, Irani, Naeem Hannoon, Niranjan Nayak, and Ranjeeta Bisoi. "Solar power forecasting using robust kernel extreme learning machine and decomposition methods." International Journal of Power and Energy Conversion 11, no. 3 (2020): 260. http://dx.doi.org/10.1504/ijpec.2020.10027381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Majumder, Irani, Ranjeeta Bisoi, Niranjan Nayak, and Naeem Hannoon. "Solar power forecasting using robust kernel extreme learning machine and decomposition methods." International Journal of Power and Energy Conversion 11, no. 3 (2020): 260. http://dx.doi.org/10.1504/ijpec.2020.107958.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Pillonetto, Gianluigi, Francesco Dinuzzo, Tianshi Chen, Giuseppe De Nicolao, and Lennart Ljung. "Kernel methods in system identification, machine learning and function estimation: A survey." Automatica 50, no. 3 (March 2014): 657–82. http://dx.doi.org/10.1016/j.automatica.2014.01.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Shutaywi, Meshal, and Nezamoddin N. Kachouie. "Silhouette Analysis for Performance Evaluation in Machine Learning with Applications to Clustering." Entropy 23, no. 6 (June 16, 2021): 759. http://dx.doi.org/10.3390/e23060759.

Full text
Abstract:
Grouping objects based on their similarities is an important common task in machine learning applications. Many clustering methods have been developed; among them, k-means-based clustering methods have been broadly used, and several extensions, such as k-means++ and kernel k-means, have been developed to improve the original k-means method. K-means is a linear clustering method; that is, it divides the objects into linearly separable groups, while kernel k-means is a non-linear technique. Kernel k-means projects the elements to a higher dimensional feature space using a kernel function, and then groups them. Different kernel functions may not perform similarly in clustering a given data set and, in turn, choosing the right kernel for an application can be challenging. In our previous work, we introduced a weighted majority voting method for clustering based on normalized mutual information (NMI). NMI is a supervised criterion, as the true labels for a training set are required to calculate it. In this study, we extend our previous work of aggregating clustering results to develop an unsupervised weighting function for settings where a training set is not available. The proposed weighting function is based on the Silhouette index, an unsupervised criterion that does not require a training set to compute. This makes the new method more sensible in terms of the clustering concept.
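The Silhouette-based weighting the abstract proposes can be sketched as follows; the candidate clusterings here come from plain k-means with different values of k, standing in for the paper's kernel-based ensemble, and no ground-truth labels enter the weighting.

```python
# Sketch of unsupervised weighting of clustering results: each candidate
# clustering is weighted by its Silhouette index, with no true labels needed.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

candidates = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
              for k in (2, 3, 5)}
scores = {k: silhouette_score(X, labels) for k, labels in candidates.items()}
weights = {k: s / sum(scores.values()) for k, s in scores.items()}
best_k = max(scores, key=scores.get)   # the clustering matching the 3 blobs wins
```

The weights can then aggregate the candidate clusterings by weighted majority voting, exactly as the NMI-based weights did in the authors' earlier supervised scheme.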
APA, Harvard, Vancouver, ISO, and other styles
45

Zhou, Ming, Shu He, and Yong Jun Cheng. "The Application of Kernel Methods for Image Classification." Advanced Materials Research 1044-1045 (October 2014): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1388.

Full text
Abstract:
Kernel methods are famous for their efficiency and robustness in processing non-linear machine learning problems in a high dimensional feature space, and are thus widely applied in image classification and detection. In this work, the proper principal components are selected for KPCA reconstruction according to the noise features, and the improved image is then obtained by performing the inverse transform. Experimental results show that the proposed method can suppress noise interference in remote sensing images while preserving the useful information of the original data more completely.
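The KPCA denoise-and-invert pipeline the abstract outlines can be sketched with scikit-learn's KernelPCA; a noisy two-dimensional toy set stands in for the remote sensing imagery, and the component count and kernel width below are illustrative.

```python
# Sketch of KPCA-based denoising: project noisy data onto the leading kernel
# principal components, then map back with the inverse (pre-image) transform.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, _ = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X_noisy = X + 0.1 * np.random.default_rng(0).normal(size=X.shape)

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=5.0,
                 fit_inverse_transform=True, alpha=0.1)
X_denoised = kpca.inverse_transform(kpca.fit_transform(X_noisy))
```

Discarding the trailing components removes the directions dominated by noise, which is the "proper principal components selected according to noise features" step described above.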
APA, Harvard, Vancouver, ISO, and other styles
46

Pal, Mahesh, N. K. Singh, and N. K. Tiwari. "Kernel methods for pier scour modeling using field data." Journal of Hydroinformatics 16, no. 4 (November 13, 2013): 784–96. http://dx.doi.org/10.2166/hydro.2013.024.

Full text
Abstract:
Three kernel-based modeling approaches are proposed to predict the local scour around bridge piers using field data. The modeling approaches comprise Gaussian process regression (GPR), relevance vector machines (RVM) and a kernelised extreme learning machine (KELM). A dataset consisting of 232 upstream pier scour measurements derived from the Bridge Scour Data Management System (BSDMS) was used. The radial basis kernel function was used with all three kernel-based approaches and results were compared with support vector regression and four empirical relations. Coefficient of determination values of 0.922, 0.922 and 0.900 (root mean square error, RMSE = 0.297, 0.310 and 0.343 m) were achieved by the GPR, RVM and KELM algorithms, respectively. Comparisons of the results with support vector regression and the Froehlich equation, Froehlich design, HEC-18 and HEC-18/Mueller predictive equations suggest an improved performance by the proposed approaches. Results using all three algorithms suggest better performance with dimensional data than with dimensionless data.
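One of the three models, Gaussian process regression with a radial basis kernel, can be sketched as follows; the BSDMS scour measurements are not bundled here, so a synthetic regression set with an illustrative target stands in for the field data.

```python
# Sketch of radial-basis-kernel Gaussian process regression of the kind used
# for pier scour prediction, on synthetic stand-in data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(120, 3))   # e.g. pier width, flow depth, velocity
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.05 * rng.normal(size=120)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), random_state=0)
gpr.fit(X, y)
r2 = gpr.score(X, y)   # coefficient of determination, as the paper reports
```

The WhiteKernel term lets the fitted noise level be learned from the data, which matters for noisy field measurements of the kind used in the study.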
APA, Harvard, Vancouver, ISO, and other styles
47

Gao, Hong Bing, Liao Yang, Xian Zhang, and Chen Cheng. "Application and Experimental Study of Support Vector Machine in Rolling Bearing Fault." Applied Mechanics and Materials 48-49 (February 2011): 241–45. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.241.

Full text
Abstract:
A brief introduction is given to the basic concepts of the classification margin, the optimal separating surface and support vectors; the derivation of the SVM based on the Lagrange optimization method is explained, along with the Sigmoid kernel function. Three methods based on the Sigmoid kernel function are described: C-SVM, V-SVM and least-squares SVM. Taking a bearing failure as an example, the SVM training results of the linear kernel, polynomial kernel and Sigmoid kernel functions are compared. The results show that satisfactory fault analysis demands appropriate kernel function selection. Among gearbox faults, bearing failure accounts for 19%, and the rate is as high as 30% in other rotating machinery system failures [1,2]. Thus, rolling bearing condition monitoring and fault diagnosis are very important to production safety, and many scholars have carried out numerous studies [3,4]. The support vector machine is a learning method based on the Vapnik-Chervonenkis dimension theory of statistical learning and on structural risk minimization [5,6].
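The kernel comparison described above can be sketched with scikit-learn's C-SVM implementation. The two-class data below are synthetic stand-ins for bearing fault features, and the hyperparameters are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class "fault vs. normal" data
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train a C-SVM with each kernel and record held-out accuracy
accuracies = {}
for kernel in ("linear", "poly", "sigmoid"):
    clf = SVC(kernel=kernel, C=1.0, gamma="scale")
    accuracies[kernel] = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

On real bearing data the ranking of kernels depends on the feature extraction, which is exactly the abstract's point about kernel selection.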
APA, Harvard, Vancouver, ISO, and other styles
48

Kurita, Takio. "Support Vector Machine and Generalization." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 2 (March 20, 2004): 84–92. http://dx.doi.org/10.20965/jaciii.2004.p0084.

Full text
Abstract:
The support vector machine (SVM) has been extended to build nonlinear classifiers using the kernel trick. As a learning model, it has among the best recognition performance of the methods currently known, because it is devised to obtain high performance on unseen data. This paper reviews how to enhance generalization when learning classifiers, centering on the SVM.
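A minimal illustration of the kernel trick the review centers on: a linear SVM cannot separate concentric circles, while an RBF kernel implicitly lifts the data into a feature space where a separating hyperplane exists. The dataset and gamma value are chosen purely for illustration:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the input space
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

# A linear SVM fails; the RBF-kernel SVM separates the classes
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf", gamma=2.0).fit(X, y).score(X, y)
```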
APA, Harvard, Vancouver, ISO, and other styles
49

Boueshagh, M., and M. Hasanlou. "ESTIMATING WATER LEVEL IN THE URMIA LAKE USING SATELLITE DATA: A MACHINE LEARNING APPROACH." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 219–26. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-219-2019.

Full text
Abstract:
Lakes play a pivotal role in the development of cities and have major impacts on the ecosystem balance of an area. Remote sensing techniques and advanced modeling methods make it possible to monitor natural phenomena such as lake water levels. The ecosystem of Urmia Lake is one of the most important in Iran; the lake is almost closed and has become a global environmental issue in recent years. One of the parameters affecting this lake's water level is snowfall, which plays a key role in the fluctuations of its level and in water resources management. Hence, the purpose of this paper is to estimate the water level of Urmia Lake during 2000–2006 using observed water level, snow cover, direct precipitation, and evaporation. For this purpose, Support Vector Regression (SVR), a prominent kernel method, has been used with various kernel types. Furthermore, four scenarios are considered with different variables as inputs, and the output of all scenarios is the water level of the lake. The results on training and testing data indicate the substantial impact of snow on retrieving the water level of Urmia Lake over the study period, and, due to the complexity of the data relationships, the Gaussian kernel generally gave better results. On the other hand, quadratic and cubic kernels did not work well. The fourth scenario, with the RBF kernel, has the best results [Training: R2 = 97% and RMSE = 0.09 m; Testing: R2 = 96.97% and RMSE = 0.08 m].
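The SVR setup can be sketched with scikit-learn on hypothetical stand-ins for the snow, precipitation, and evaporation inputs. The coefficients and noise level below are invented for illustration and do not reproduce the paper's scenarios or its reported scores:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
# Hypothetical predictors scaled to [0, 1]: snow cover, precipitation,
# evaporation; the target is a water-level deviation in metres
X = rng.uniform(0, 1, size=(300, 3))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 2] \
    + rng.normal(scale=0.05, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
pred = svr.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
```

Swapping `kernel="rbf"` for `"poly"` with `degree=2` or `degree=3` reproduces the quadratic and cubic variants the paper compares.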
APA, Harvard, Vancouver, ISO, and other styles
50

Chaouch, Hanen, Samia Charfeddine, Sondess Ben Aoun, Houssem Jerbi, and Víctor Leiva. "Multiscale Monitoring Using Machine Learning Methods: New Methodology and an Industrial Application to a Photovoltaic System." Mathematics 10, no. 6 (March 10, 2022): 890. http://dx.doi.org/10.3390/math10060890.

Full text
Abstract:
In this study, a multiscale monitoring method for nonlinear processes was developed. We introduced a machine learning tool for fault detection and isolation based on kernel principal component analysis (PCA) and the discrete wavelet transform. The principle of our proposal involved decomposing multivariate data into wavelet coefficients by employing the discrete wavelet transform. Then, kernel PCA was applied to every matrix of coefficients to detect defects. Only those scales whose squared prediction errors overran the control limits were considered in the data reconstruction phase. Thus, kernel PCA was applied to the reconstructed matrix for defect detection and isolation. This approach exploits the performance of kernel PCA for nonlinear process monitoring in combination with multiscale analysis when processing time-frequency scales. The proposed method was validated on a photovoltaic system related to a complex industrial process. A data matrix was determined from the variables that characterize this process: motor current, angular speed, converter output voltage, and power voltage system output. We tested the developed methodology on 1000 observations of photovoltaic variables. A comparison with monitoring methods based on neural PCA was established, proving the efficiency of the developed methodology.
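The decompose-then-monitor loop can be sketched with a one-level Haar transform and scikit-learn's kernel PCA. The process data are synthetic, and the 99% control limit and component count are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
# 1000 observations of 4 correlated process variables, standing in for
# motor current, angular speed, converter voltage and power output
normal = rng.normal(size=(1000, 4))
normal[:, 3] = 0.5 * normal[:, 0] + rng.normal(scale=0.1, size=1000)

# One-level Haar DWT along time: approximation and detail coefficients
approx = (normal[0::2] + normal[1::2]) / np.sqrt(2)
detail = (normal[0::2] - normal[1::2]) / np.sqrt(2)

# Kernel PCA per scale; the squared prediction error (SPE) flags a defect
# when it exceeds a control limit estimated from fault-free data
kpca = KernelPCA(n_components=2, kernel="rbf", fit_inverse_transform=True)
recon = kpca.inverse_transform(kpca.fit_transform(approx))
spe = ((approx - recon) ** 2).sum(axis=1)
limit = np.percentile(spe, 99)  # empirical 99% control limit
```

The same SPE-versus-limit test would be repeated on the `detail` coefficients, and only the scales that trip the limit enter the reconstruction phase.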
APA, Harvard, Vancouver, ISO, and other styles
