A selection of scholarly literature on the topic "Machine learning, kernel methods"

Format your citation in APA, MLA, Chicago, Harvard, and other styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Machine learning, kernel methods".

Next to every work in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Machine learning, kernel methods"

1

Hofmann, Thomas, Bernhard Schölkopf, and Alexander J. Smola. "Kernel methods in machine learning." Annals of Statistics 36, no. 3 (June 2008): 1171–220. http://dx.doi.org/10.1214/009053607000000677.

2

Schaback, Robert, and Holger Wendland. "Kernel techniques: From machine learning to meshless methods." Acta Numerica 15 (May 2006): 543–639. http://dx.doi.org/10.1017/s0962492906270016.

Abstract:
Kernels are valuable tools in various fields of numerical analysis, including approximation, interpolation, meshless methods for solving partial differential equations, neural networks, and machine learning. This contribution explains why and how kernels are applied in these disciplines. It uncovers the links between them, in so far as they are related to kernel techniques. It addresses non-expert readers and focuses on practical guidelines for using kernels in applications.
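To make the link between kernels and interpolation concrete, here is a minimal sketch of kernel (RBF) interpolation in Python; the Gaussian kernel, the jitter term, and the synthetic sine data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Pairwise Gaussian kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Interpolate f(x) = sin(x) from scattered samples
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(20, 1))
y = np.sin(X).ravel()

K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)  # tiny jitter for numerical stability

X_new = np.linspace(0, 2 * np.pi, 5).reshape(-1, 1)
y_new = gaussian_kernel(X_new, X) @ alpha  # interpolant s(x) = sum_i alpha_i k(x, x_i)
print(y_new - np.sin(X_new).ravel())       # errors of the kernel interpolant
```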
3

Mengoni, Riccardo, and Alessandra Di Pierro. "Kernel methods in Quantum Machine Learning." Quantum Machine Intelligence 1, no. 3-4 (November 15, 2019): 65–71. http://dx.doi.org/10.1007/s42484-019-00007-4.

4

Zhang, Senyue, and Wenan Tan. "An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel." Discrete Dynamics in Nature and Society 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/7293278.

Abstract:
Because the choice of kernel function strongly affects the performance of an extreme learning machine (ELM), this paper proposes a novel ELM based on a generalized triangular Hermitian kernel function. First, the generalized triangular Hermitian kernel is constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed function is proved to be a valid ELM kernel. Then, the learning methodology of the ELM based on the proposed kernel is presented. The biggest advantage of the proposed kernel is that its parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization while retaining more of the structural information in the sample data. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The results demonstrate that the proposed method outperforms other extreme learning machines with different kernels in robustness and generalization performance. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
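As a rough illustration of a kernel-based ELM with a product kernel, the sketch below uses the standard kernel ELM solution β = (I/C + K)⁻¹y; the two factor kernels (Gaussian and polynomial) are ordinary stand-ins, not the triangular and generalized Hermite Dirichlet kernels constructed in the paper.

```python
import numpy as np

def k_gauss(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def k_poly(X, Y, degree=2):
    return (X @ Y.T + 1.0) ** degree

def k_mixed(X, Y):
    # The product of two valid kernels is itself a valid kernel
    return k_gauss(X, Y) * k_poly(X, Y)

def kelm_fit(X, y, C=10.0):
    # Kernel ELM output weights: beta = (I/C + K)^(-1) y
    K = k_mixed(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)

def kelm_predict(X_train, beta, X_test):
    return k_mixed(X_test, X_train) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sign(X[:, 0] * X[:, 1])  # simple nonlinear labels
beta = kelm_fit(X, y)
print(np.mean(np.sign(kelm_predict(X, beta, X)) == y))  # training accuracy
```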
5

Inokuchi, Ryo, and Sadaaki Miyamoto. "Kernel Methods for Clustering: Competitive Learning and c-Means." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 14, no. 04 (August 2006): 481–93. http://dx.doi.org/10.1142/s0218488506004138.

Abstract:
Recently, the kernel methods used in support vector machines have been widely adopted in machine learning algorithms to obtain nonlinear models. Clustering is an unsupervised learning method that divides the whole data set into subgroups, and popular clustering algorithms such as c-means now employ kernel methods; other kernel-based clustering algorithms have been inspired by kernel c-means. However, the formulation of kernel c-means has a high computational complexity. This paper gives an alternative formulation of kernel-based clustering derived from competitive learning clustering. The new formulation uses sequential updating, or online learning, to avoid the high computational complexity. We apply kernel methods to the related algorithms of learning vector quantization and the self-organizing map, and we further derive kernel methods for sequential c-means and its fuzzy version from the proposed formulation.
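For orientation, the core computation in kernel c-means is the feature-space distance from a point to a cluster mean, expressed entirely through the kernel matrix. The sketch below shows that step in a batch, hard-assignment form with a Gaussian kernel; the sequential competitive-learning formulation proposed in the paper differs, so treat this only as background.

```python
import numpy as np

def kernel_cmeans(K, c, n_iter=20, seed=0):
    # K: precomputed n x n kernel matrix; c: number of clusters
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, c, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, c))
        for j in range(c):
            idx = np.flatnonzero(labels == j)
            if idx.size == 0:
                dist[:, j] = np.inf
                continue
            # ||phi(x_i) - m_j||^2 up to the constant term k(x_i, x_i)
            dist[:, j] = (-2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)
print(kernel_cmeans(K, c=2))
```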
6

Christmann, Andreas, Florian Dumpert, and Dao-Hong Xiang. "On extension theorems and their connection to universal consistency in machine learning." Analysis and Applications 14, no. 06 (October 25, 2016): 795–808. http://dx.doi.org/10.1142/s0219530516400029.

Abstract:
Statistical machine learning plays an important role in modern statistics and computer science. One main goal of statistical machine learning is to provide universally consistent algorithms, i.e. the estimator converges in probability or in some stronger sense to the Bayes risk or to the Bayes decision function. Kernel methods based on minimizing the regularized risk over a reproducing kernel Hilbert space (RKHS) belong to these statistical machine learning methods. It is in general unknown which kernel yields optimal results for a particular data set or for the unknown probability measure. Hence various kernel learning methods were proposed to choose the kernel and therefore also its RKHS in a data adaptive manner. Nevertheless, many practitioners often use the classical Gaussian RBF kernel or certain Sobolev kernels with good success. The goal of this paper is to offer one possible theoretical explanation for this empirical fact.
7

Saxena, Arti, and Vijay Kumar. "Bayesian Kernel Methods." International Journal of Big Data and Analytics in Healthcare 6, no. 1 (January 2021): 26–39. http://dx.doi.org/10.4018/ijbdah.20210101.oa3.

Abstract:
In the healthcare industry, providers care for many patients with diverse diseases and complications. A great amount of data is therefore collected at the source, covering the status of patients, the behaviour of diseases, and so on, and it becomes the practitioner's job to use the available data to diagnose diseases accurately and then prescribe the relevant treatment. Machine learning techniques are useful for dealing with such large datasets, with the aim of producing meaningful information from raw data for the purpose of decision making. The heterogeneous behaviour of the data motivates the development of new tools that distil the available information into a form usable for decisions. As the literature shows, patient healthcare can be analysed with machine learning tools; accordingly, this article discusses a Bayesian kernel method for medical decision-making problems, intended to help researchers advance their work in the domain of medical decision making.
8

Vidnerová, Petra, and Roman Neruda. "Air Pollution Modelling by Machine Learning Methods." Modelling 2, no. 4 (November 17, 2021): 659–74. http://dx.doi.org/10.3390/modelling2040035.

Abstract:
Precise environmental modelling of pollutant distributions is a key factor in addressing the issue of urban air pollution. Nowadays, urban air pollution monitoring is primarily carried out by sparse networks of spatially distributed fixed stations. The work in this paper aims to improve the situation by utilizing machine learning models to process the outputs of multi-sensor devices that are small and cheap, albeit less reliable, so that a massive urban deployment of such devices is possible. The main contribution of the paper is the design of a mathematical model providing sensor fusion to extract the information and transform it into the desired pollutant concentrations. Multi-sensor outputs are used as input to a machine learning model trained to produce CO, NO2, and NOx concentration estimates. Several state-of-the-art machine learning methods, including original algorithms proposed by the authors, are utilized in this study: kernel methods, regularization networks, regularization networks with composite kernels, and deep neural networks. All methods are augmented with a proper hyper-parameter search to achieve the optimal performance of each model. All the methods considered achieved viable results; deep neural networks exhibited the best generalization ability, and regularization networks with product kernels achieved the best fit to the training set.
9

Rahmati, Marzie, and Mohammad Ali Zare Chahooki. "Improvement in bug localization based on kernel extreme learning machine." Journal of Communications Technology, Electronics and Computer Science 5 (April 30, 2016): 1. http://dx.doi.org/10.22385/jctecs.v5i0.77.

Abstract:
Bug localization uses bug reports received from users, developers, and testers to locate buggy files. Since finding a buggy file among thousands of files is time-consuming and tedious for developers, various methods based on information retrieval have been suggested to automate this process. In addition to information retrieval methods, machine learning methods are used as well. Machine learning-based approaches improve the description of bug reports and program code by representing them as feature vectors. The Extreme Learning Machine (ELM) has recently proved effective in many areas. This paper shows the effectiveness of the non-linear kernel ELM for bug localization, and analyses the effectiveness of different kernels in ELM compared to other kernel-based learning methods. The experimental results on the Mozilla Firefox dataset show the effectiveness of kernel ELM for bug localization in software projects.
10

Price, Stanton R., Derek T. Anderson, Timothy C. Havens, and Steven R. Price. "Kernel Matrix-Based Heuristic Multiple Kernel Learning." Mathematics 10, no. 12 (June 11, 2022): 2026. http://dx.doi.org/10.3390/math10122026.

Abstract:
Kernel theory is a demonstrated tool that has made its way into nearly all areas of machine learning. However, a serious limitation of kernel methods is knowing which kernel is needed in practice. Multiple kernel learning (MKL) is an attempt to learn a new tailored kernel through the aggregation of a set of valid known kernels. There are generally three approaches to MKL: fixed rules, heuristics, and optimization. Optimization is the most popular; however, a shortcoming of most optimization approaches is that they are tightly coupled with the underlying objective function and overfitting occurs. Herein, we take a different approach to MKL. Specifically, we explore different divergence measures on the values in the kernel matrices and in the reproducing kernel Hilbert space (RKHS). Experiments on benchmark datasets and a computer vision feature learning task in explosive hazard detection demonstrate the effectiveness and generalizability of our proposed methods.
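For flavour, a common fixed-rule/heuristic MKL baseline weights each candidate kernel by its centred kernel-target alignment and forms a convex combination. The sketch below implements that generic heuristic; it is not the divergence-based weighting proposed in the paper.

```python
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K, y):
    # Centred kernel-target alignment: <Kc, Yc> / (||Kc|| * ||Yc||)
    Kc, Yc = center(K), center(np.outer(y, y))
    return (Kc * Yc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

def heuristic_mkl(kernels, y):
    a = np.array([max(alignment(K, y), 0.0) for K in kernels])
    w = a / a.sum()  # heuristic weights derived from alignment scores
    K_combined = sum(wi * Ki for wi, Ki in zip(w, kernels))
    return w, K_combined

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
kernels = [np.exp(-g * d2) for g in (0.1, 1.0, 10.0)] + [X @ X.T]
w, K = heuristic_mkl(kernels, y)
print(np.round(w, 3))
```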

Dissertations on the topic "Machine learning, kernel methods"

1

Tsang, Wai-Hung. "Kernel methods in supervised and unsupervised learning." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20TSANG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 46-49). Also available in electronic version. Access restricted to campus users.
2

Chen, Xiaoyi. "Transfer Learning with Kernel Methods." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0005.

Abstract:
Transfer learning aims to take advantage of source data to help the learning task on related but different target data. This thesis contributes to homogeneous transductive transfer learning, where no labeled target data are available. We relax the constraint on the conditional probability of labels required by covariate shift so as to handle increasingly general cases; the alignment of the marginal probabilities of the source and target observations then renders source and target similar. First, a maximum likelihood based approach is proposed. Second, the SVM is adapted to transfer learning with an extra MMD-like constraint, where the Maximum Mean Discrepancy (MMD) measures this similarity. Third, kernel PCA (KPCA) is used to align the data in an RKHS by minimizing the MMD. We further develop the KPCA-based approach so that a linear transformation in the input space suffices for a good and robust alignment in the RKHS. The experimental results show that the proposed approaches are very promising.
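The Maximum Mean Discrepancy used in the second and third contributions has a simple empirical estimator. The sketch below computes the biased estimate MMD²(S, T) = mean K_SS − 2 mean K_ST + mean K_TT with a Gaussian kernel on synthetic source and target samples; the kernel choice and data are illustrative assumptions.

```python
import numpy as np

def mmd2(S, T, gamma=1.0):
    # Biased empirical estimate of the squared Maximum Mean Discrepancy
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(S, S).mean() - 2.0 * k(S, T).mean() + k(T, T).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(100, 2))
target = rng.normal(1.0, 1.0, size=(100, 2))   # shifted target domain
print(mmd2(source, target))                     # large: the domains differ
print(mmd2(source - source.mean(0) + target.mean(0), target))  # smaller after mean alignment
```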
3

Wu, Zhili. "Kernel based learning methods for pattern and feature analysis." HKBU Institutional Repository, 2004. http://repository.hkbu.edu.hk/etd_ra/619.

4

Braun, Mikio Ludwig. "Spectral properties of the kernel matrix and their relation to kernel methods in machine learning." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=978607309.

5

Samo, Yves-Laurent Kom. "Advances in kernel methods : towards general-purpose and scalable models." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:e0ff5f8c-bc28-4d96-8ddb-2d49152b2eee.

Abstract:
A wide range of statistical and machine learning problems involve learning one or multiple latent functions, or properties thereof, from datasets. Examples include regression, classification, principal component analysis, optimisation, learning intensity functions of point processes, and reinforcement learning, to name but a few. For all these problems, positive semi-definite kernels (or simply kernels) provide a powerful tool for postulating flexible nonparametric hypothesis spaces over functions. Despite recent work on such kernel methods, parametric alternatives, such as deep neural networks, have been at the core of most artificial intelligence breakthroughs in recent years. In this thesis, both theoretical and methodological foundations are presented for constructing fully automated, scalable, and general-purpose kernel machines that perform very well over a wide range of input dimensions and sample sizes. This thesis aims to contribute towards bridging the gap between kernel methods and deep learning, and to propose methods that have the advantage over deep learning of performing well on both small and large scale problems. In Part I we provide a gentle introduction to kernel methods, review recent work, identify remaining gaps, and outline our contributions. In Part II we develop flexible and scalable Bayesian kernel methods in order to address gaps in methods capable of dealing with the special case of datasets exhibiting locally homogeneous patterns. We begin with two motivating applications. First we consider inferring the intensity function of an inhomogeneous point process in Chapter 2. This application is used to illustrate that often, by carefully adding some mild asymmetry in the dependency structure in Bayesian kernel methods, one may considerably scale up inference while improving flexibility and accuracy. In Chapter 3 we propose a scalable scheme for online forecasting of time series and fully-online learning of related model parameters, under a kernel-based generative model that is provably sufficiently flexible. This application illustrates that, for one-dimensional input spaces, restricting the degree of differentiability of the latent function of interest may considerably speed up inference without resorting to approximations and without any adverse effect on flexibility or accuracy. Chapter 4 generalizes these approaches and proposes a novel class of stochastic processes we refer to as string Gaussian processes (string GPs) that, when used as a functional prior in a Bayesian nonparametric framework, allow for inference in linear time complexity and linear memory requirement, without resorting to approximations. More importantly, the corresponding inference scheme, which we derive in Chapter 5, also allows flexible learning of locally homogeneous patterns and automated learning of model complexity, that is, automated learning of whether there are local patterns in the data in the first place, how many local patterns are present, and where they are located. In Part III we provide a broader discussion covering all types of patterns (homogeneous, locally homogeneous, or heterogeneous) and both Bayesian and frequentist kernel methods. In Chapter 6 we begin by discussing what properties a family of kernels should possess to enable fully automated kernel methods that are applicable to any type of dataset. In this chapter, we discuss a novel mathematical formalism for the notion of 'general-purpose' families of kernels, and we argue that existing families of kernels are not general-purpose. In Chapter 7 we derive weak sufficient conditions for families of kernels to be general-purpose, and we exhibit tractable such families that enjoy a suitable parametrisation, which we refer to as generalized spectral kernels (GSKs). In Chapter 8 we provide a scalable inference scheme for automated kernel learning using general-purpose families of kernels. The proposed inference scheme scales linearly with the sample size and enables automated learning of nonstationarity and model complexity from the data, in virtually any kernel method. Finally, we conclude with a discussion in Chapter 9, where we show that deep learning can be regarded as a particular type of kernel learning method, and we discuss possible extensions in Chapter 10.
6

Lee, Dong Ryeol. "A distributed kernel summation framework for machine learning and scientific applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44727.

Abstract:
The class of computational problems I consider in this thesis shares the common trait of requiring consideration of pairs (or higher-order tuples) of data points. I focus on the kernel summation operations ubiquitous in many data mining and scientific algorithms. In machine learning, kernel summations appear in popular kernel methods, which can model nonlinear structures in data. Kernel methods include many non-parametric methods such as kernel density estimation, kernel regression, Gaussian process regression, kernel PCA, and kernel support vector machines (SVM). In computational physics, kernel summations occur inside the classical N-body problem for simulating the positions of a set of celestial bodies or atoms. This thesis attempts to marry, for the first time, the best relevant techniques in parallel computing, where kernel summations are in low dimensions, with the best general-dimension algorithms from the machine learning literature. We provide a unified, efficient parallel kernel summation framework that can utilize: (1) various types of deterministic and probabilistic approximations that may be suitable for both low and high-dimensional problems with a large number of data points; (2) indexing the data using any multi-dimensional binary tree with both distributed memory (MPI) and shared memory (OpenMP/Intel TBB) parallelism; (3) a dynamic load balancing scheme to adjust work imbalances during the computation. I first summarize my previous research in serial kernel summation algorithms. This work started from Greengard and Rokhlin's earlier work on fast multipole methods for approximating the potential sums of many particles. The contributions of this part of the thesis include the following: (1) a reinterpretation of Greengard and Rokhlin's work for the computer science community; (2) the extension of the algorithms to use a larger class of approximation strategies, i.e. probabilistic error bounds via Monte Carlo techniques; (3) the multibody series expansion: a generalization of the theory of fast multipole methods to handle interactions of more than two entities; (4) the first O(N) proof of batch approximate kernel summation using a notion of intrinsic dimensionality. I then move on to the parallelization of kernel summations and to scaling two other kernel methods: Gaussian process regression (kernel matrix inversion) and kernel PCA (kernel matrix eigendecomposition). The artifacts of this thesis have contributed to an open-source machine learning package called MLPACK, first demonstrated at NIPS 2008 and subsequently at the NIPS 2011 Big Learning Workshop. Completing a portion of this thesis involved the use of high performance computing resources at XSEDE (eXtreme Science and Engineering Discovery Environment) and NERSC (National Energy Research Scientific Computing Center).
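For context, the kernel summation primitive that the thesis accelerates is, in its naive form, the O(N²) computation sketched below; the tree-based and Monte Carlo approximations described in the abstract replace this brute-force evaluation.

```python
import numpy as np

def naive_kernel_sum(queries, refs, weights, bandwidth=1.0):
    # For each query q: sum_j w_j * exp(-||q - r_j||^2 / (2 h^2))
    d2 = ((queries[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)) @ weights

rng = np.random.default_rng(0)
refs = rng.normal(size=(1000, 3))
queries = rng.normal(size=(5, 3))
w = np.full(len(refs), 1.0 / len(refs))  # uniform weights: a KDE up to a constant
print(naive_kernel_sum(queries, refs, w))
```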
7

Vishwanathan, S. V. N. "Kernel Methods: Fast Algorithms and Real Life Applications." Thesis, Indian Institute of Science, 2003. http://hdl.handle.net/2005/49.

Abstract:
Support Vector Machines (SVMs) have recently gained prominence in the field of machine learning and pattern classification (Vapnik, 1995, Herbrich, 2002, Scholkopf and Smola, 2002). Classification is achieved by finding a separating hyperplane in a feature space, which can be mapped back onto a non-linear surface in the input space. However, training an SVM involves solving a quadratic optimization problem, which tends to be computationally intensive. Furthermore, it can be subject to stability problems and is non-trivial to implement. This thesis proposes a fast iterative Support Vector training algorithm which overcomes some of these problems. Our algorithm, which we christen Simple SVM, works mainly for the quadratic soft margin loss (also called the l2 formulation). We also sketch an extension for the linear soft-margin loss (also called the l1 formulation). Simple SVM works by incrementally changing a candidate Support Vector set using a locally greedy approach, until the supporting hyperplane is found within a finite number of iterations. It is derived by a simple (yet computationally crucial) modification of the incremental SVM training algorithms of Cauwenberghs and Poggio (2001) which allows us to perform update operations very efficiently. Constant-time methods for initialization of the algorithm, and experimental evidence for the speed of the proposed algorithm compared to methods such as Sequential Minimal Optimization and the Nearest Point Algorithm, are given. We present results on a variety of real-life datasets to validate our claims. In many real-life applications, especially for the l2 formulation, the kernel matrix K ∈ R^(n×n) can be written as K = Z^T Z + Λ, where Z ∈ R^(n×m) with m << n, and Λ ∈ R^(n×n) is diagonal with nonnegative entries; hence the matrix K − Λ is rank-degenerate. Extending the work of Fine and Scheinberg (2001) and Gill et al. (1975), we propose an efficient factorization algorithm which can be used to find an LDL^T factorization of K in O(nm²) time. The modified factorization, after a rank-one update of K, can be computed in O(m²) time. We show how the Simple SVM algorithm can be sped up by taking advantage of this new factorization. We also demonstrate applications of our factorization to interior point methods, and show a close relation between the LDV factorization of a rectangular matrix and our LDL^T factorization (Gill et al., 1975). An important feature of SVMs is that they can work with data from any input domain as long as a suitable mapping into a Hilbert space can be found; in other words, given the input data we should be able to compute a positive semi-definite kernel matrix of the data (Scholkopf and Smola, 2002). In this thesis we propose kernels on a variety of discrete objects, such as strings, trees, Finite State Automata, and Pushdown Automata. We show that our kernels include as special cases the celebrated Pair-HMM kernels (Durbin et al., 1998, Watkins, 2000), the spectrum kernel (Leslie et al., 2002), convolution kernels for NLP (Collins and Duffy, 2001), graph diffusion kernels (Kondor and Lafferty, 2002), and various other string-matching kernels. Because of their widespread applications in bioinformatics and web-document-based algorithms, string kernels are of special practical importance. By intelligently using the matching statistics algorithm of Chang and Lawler (1994), we propose, perhaps, the first ever algorithm to compute string kernels in linear time. This obviates dynamic programming with its quadratic time complexity and makes string kernels a viable alternative for the practitioner. We also propose extensions of our string kernels to compute kernels on trees efficiently: this thesis presents a linear-time algorithm for ordered trees and a log-linear-time algorithm for unordered trees. In general, SVMs require time proportional to the number of Support Vectors for prediction. If the dataset is noisy, a large fraction of the data points become Support Vectors, and the time required for prediction increases. But in many applications, such as search engines or web document retrieval, the dataset is noisy, yet the speed of prediction is critical. We propose a method for string kernels by which the prediction time can be reduced to linear in the length of the sequence to be classified, regardless of the number of Support Vectors; we achieve this by using a weighted version of our string kernel algorithm. We also explore the relationship between dynamic systems and kernels. We define kernels on various kinds of dynamic systems, including Markov chains (both discrete and continuous), diffusion processes on graphs and Markov chains, Finite State Automata, and various linear time-invariant systems. Trajectories are used to define kernels on the initial conditions of the underlying dynamic system, and the same idea is extended to define kernels on a dynamic system with respect to a set of initial conditions. This framework leads to a large number of novel kernels and also generalizes many previously proposed kernels. Lack of adequate training data is a problem which plagues classifiers. We propose a new method to generate virtual training samples in the case of handwritten digit data. Our method uses the two-dimensional suffix tree representation of a set of matrices to encode an exponential number of virtual samples in linear space, thus leading to an increase in classification accuracy. This, in turn, leads us naturally to a compact, data-dependent representation of a test pattern which we call the description tree. We propose a new kernel for images and demonstrate a quadratic-time algorithm for computing it using the suffix tree representation of an image. We also describe a method to reduce the prediction time to quadratic in the size of the test image by using techniques similar to those used for string kernels.
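To illustrate the string kernels discussed above, here is a naive k-spectrum kernel (the inner product of k-mer count vectors); the linear-time algorithms in the thesis rely on suffix trees and matching statistics rather than this simple counting.

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    # k-spectrum kernel: inner product of k-mer count vectors
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs.keys() & ct.keys())

print(spectrum_kernel("GATTACA", "ATTACCA"))  # -> 3 (shared 3-mers: ATT, TTA, TAC)
```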
8

Chu, C. Y. C. "Pattern recognition and machine learning for magnetic resonance images with kernel methods." Thesis, University College London (University of London), 2009. http://discovery.ucl.ac.uk/18519/.

Abstract:
The aim of this thesis is to apply a particular category of machine learning and pattern recognition algorithms, namely kernel methods, to both functional and anatomical magnetic resonance images (MRI). This work focuses specifically on supervised learning methods. Both methodological and practical aspects are described in this thesis. Kernel methods have a computational advantage for high-dimensional data, and are therefore ideal for imaging data. The procedures can be broadly divided into two components: the construction of the kernels and the kernel algorithms themselves. Pre-processed functional or anatomical images can be turned into a linear kernel or a non-linear kernel. We introduce both kernel regression and kernel classification algorithms in two main categories: probabilistic methods and non-probabilistic methods. For practical applications, kernel classification methods were applied to decode the cognitive or sensory state of the subject from the fMRI signal, and also to discriminate patients with neurological diseases from normal people using anatomical MRI. Kernel regression methods were used to predict the regressors in the design of fMRI experiments, and clinical ratings from the anatomical scans.
9

Rowland, Mark. "Structure in machine learning : graphical models and Monte Carlo methods." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287479.

Abstract:
This thesis is concerned with two main areas: approximate inference in discrete graphical models, and random embeddings for dimensionality reduction and approximate inference in kernel methods. Approximate inference is a fundamental problem in machine learning and statistics, with strong connections to other domains such as theoretical computer science. At the same time, there has often been a gap between the success of many algorithms in this area in practice, and what can be explained by theory; thus, an important research effort is to bridge this gap. Random embeddings for dimensionality reduction and approximate inference have led to great improvements in scalability of a wide variety of methods in machine learning. In recent years, there has been much work on how the stochasticity introduced by these approaches can be better controlled, and what further computational improvements can be made. In the first part of this thesis, we study approximate inference algorithms for discrete graphical models. Firstly, we consider linear programming methods for approximate MAP inference, and develop our understanding of conditions for exactness of these approximations. Such guarantees of exactness are typically based on either structural restrictions on the underlying graph corresponding to the model (such as low treewidth), or restrictions on the types of potential functions that may be present in the model (such as log-supermodularity). We contribute two new classes of exactness guarantees: the first of these takes the form of particular hybrid restrictions on a combination of graph structure and potential types, whilst the second is given by excluding particular substructures from the underlying graph, via graph minor theory. We also study a particular family of transformation methods of graphical models, uprooting and rerooting, and their effect on approximate MAP and marginal inference methods. We prove new theoretical results on the behaviour of particular approximate inference methods under these transformations, in particular showing that the triplet relaxation of the marginal polytope is unique in being universally rooted. We also introduce a heuristic which quickly picks a rerooting, and demonstrate benefits empirically on models over several graph topologies. In the second part of this thesis, we study Monte Carlo methods for both linear dimensionality reduction and approximate inference in kernel machines. We prove the statistical benefit of coupling Monte Carlo samples to be almost-surely orthogonal in a variety of contexts, and study fast approximate methods of inducing this coupling. A surprising result is that these approximate methods can simultaneously offer improved statistical benefits, time complexity, and space complexity over i.i.d. Monte Carlo samples. We evaluate our methods on a variety of datasets, directly studying their effects on approximate kernel evaluation, as well as on downstream tasks such as Gaussian process regression.
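The random embeddings studied in the second part build on random Fourier features for shift-invariant kernels. The sketch below shows the standard i.i.d. construction approximating a Gaussian kernel; the thesis couples the samples to be almost-surely orthogonal, which this sketch deliberately does not do.

```python
import numpy as np

def rff(X, D=500, gamma=0.5, seed=0):
    # Random Fourier features: z(x).z(y) approximates exp(-gamma * ||x - y||^2)
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))
Z = rff(X)
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(Z @ Z.T - K_exact).max())  # approximation error shrinks as D grows
```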
10

Que, Qichao. "Integral Equations For Machine Learning Problems." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461258998.


Books on the topic "Machine learning, kernel methods"

1

Schölkopf, Bernhard, Christopher J. C. Burges, and Alexander J. Smola, eds. Advances in kernel methods: Support vector learning. Cambridge, Mass.: MIT Press, 1999.

2

Suzuki, Joe. Kernel Methods for Machine Learning with Math and R. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0398-4.

3

Suzuki, Joe. Kernel Methods for Machine Learning with Math and Python. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0401-1.

4

Camps-Valls, Gustavo. Kernel methods for remote sensing data analysis. Hoboken, NJ: Wiley, 2009.

5

Tranchevent, Léon-Charles, Bart De Moor, and Yves Moreau, eds. Kernel-based Data Fusion for Machine Learning: Methods and Applications in Bioinformatics and Text Mining. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

6

Hsieh, William Wei. Machine learning methods in the environmental sciences: Neural networks and kernels. Cambridge, UK: Cambridge University Press, 2009.

7

Machine learning methods in the environmental sciences: Neural networks and kernels. Cambridge, UK: Cambridge University Press, 2009.

8

Yu, Shi, Léon-Charles Tranchevent, Bart De Moor, and Yves Moreau. Kernel-based Data Fusion for Machine Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19406-1.

9

Herbrich, Ralf. Learning kernel classifiers: Theory and algorithms. Cambridge, Mass.: MIT Press, 2002.

10

Carbonell, Jaime G., ed. Machine learning: Paradigms and methods. Cambridge, Mass.: MIT Press, 1990.


Book chapters on the topic "Machine learning, kernel methods"

1

Mannor, Shie, Xin Jin, Jiawei Han, and Xinhua Zhang. "Kernel Methods." In Encyclopedia of Machine Learning, 566–70. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_430.

2

Smola, Alexander J., and Bernhard Schölkopf. "Bayesian Kernel Methods." In Advanced Lectures on Machine Learning, 65–117. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36434-x_3.

3

Zhang, Xinhua. "Kernel Methods." In Encyclopedia of Machine Learning and Data Mining, 1–5. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_144-1.

4

Zhang, Xinhua. "Kernel Methods." In Encyclopedia of Machine Learning and Data Mining, 690–95. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_144.

5

Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Reproducing Kernel Hilbert Spaces Regression and Classification Methods." In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 251–336. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_8.

Abstract:
The fundamentals of Reproducing Kernel Hilbert Spaces (RKHS) regression methods are described in this chapter. We first point out the virtues of RKHS regression methods and why these methods are gaining a lot of acceptance in statistical machine learning. Key elements for the construction of RKHS regression methods are provided, the kernel trick is explained in some detail, and the main kernel functions for building kernels are provided. This chapter explains some loss functions under a fixed model framework with examples of Gaussian, binary, and categorical response variables. We illustrate the use of mixed models with kernels by providing examples for continuous response variables. Practical issues for tuning the kernels are illustrated. We expand the RKHS regression methods under a Bayesian framework with practical examples applied to continuous and categorical response variables and by including in the predictor the main effects of environments, genotypes, and the genotype × environment interaction. We show examples of multi-trait RKHS regression methods for continuous response variables. Finally, some practical issues of kernel compression methods are provided, which are important for reducing the computational cost of implementing conventional RKHS methods.
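A compact way to see the kernel trick described in this chapter is kernel ridge regression, where every computation goes through the Gram matrix. The sketch below uses a Gaussian kernel on synthetic data; it illustrates the generic dual solution α = (K + λI)⁻¹y, not the book's genomic examples.

```python
import numpy as np

def krr_fit(X, y, gamma=1.0, lam=1e-2):
    # Dual ridge solution: alpha = (K + lam I)^(-1) y, with K_ij = k(x_i, x_j)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, gamma=1.0):
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(3 * X).ravel() + 0.1 * rng.normal(size=80)
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, np.array([[0.5]])), np.sin(1.5))  # prediction vs. true value
```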
6

Pronobis, Wiktor, and Klaus-Robert Müller. "Kernel Methods for Quantum Chemistry." In Machine Learning Meets Quantum Physics, 25–36. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40245-7_3.

7

Collins, Michael. "Tutorial: Machine Learning Methods in Natural Language Processing." In Learning Theory and Kernel Machines, 655. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45167-9_47.

8

Suzuki, Joe. "Kernel Computations." In Kernel Methods for Machine Learning with Math and R, 89–122. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0398-4_4.

9

Suzuki, Joe. "Kernel Computations." In Kernel Methods for Machine Learning with Math and Python, 91–128. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0401-1_4.

10

Suzuki, Joe. "Reproducing Kernel Hilbert Space." In Kernel Methods for Machine Learning with Math and R, 59–87. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0398-4_3.


Conference papers on the topic "Machine learning, kernel methods"

1

Ramazanli, Ilqar. "Nearest Neighbor outperforms Kernel-Kernel Methods for Distribution Regression." In 2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML). IEEE, 2022. http://dx.doi.org/10.1109/cacml55074.2022.00009.

2

Melacci, Stefano, and Marco Gori. "Kernel Methods for Minimum Entropy Encoding." In 2011 Tenth International Conference on Machine Learning and Applications (ICMLA). IEEE, 2011. http://dx.doi.org/10.1109/icmla.2011.83.

3

Xue, Hui, Yu Song, and Hai-Ming Xu. "Multiple Indefinite Kernel Learning for Feature Selection." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/448.

Abstract:
Multiple kernel learning for feature selection (MKL-FS) utilizes kernels to explore complex properties of features and performs better among embedded methods. However, the kernels in MKL-FS are generally limited to being positive definite. In fact, indefinite kernels often emerge in actual applications and can achieve better empirical performance. But due to the non-convexity of indefinite kernels, existing MKL-FS methods are usually inapplicable, and the corresponding research remains relatively scarce. In this paper, we propose a novel multiple indefinite kernel feature selection method (MIK-FS) based on the primal framework of the indefinite kernel support vector machine (IKSVM), which applies an indefinite base kernel to each feature and then exerts an l1-norm constraint on the kernel combination coefficients to select features automatically. A two-stage algorithm is further presented to optimize the coefficients of IKSVM and the kernel combination alternately. In the algorithm, we reformulate the non-convex optimization problem of primal IKSVM as a difference of convex functions (DC) programming problem and transform the non-convex problem into a convex one with the affine minorization approximation. Experiments on real-world datasets demonstrate that MIK-FS is superior to some related state-of-the-art methods in both feature selection and classification performance.
4

Trindade, Luis A., Hui Wang, William Blackburn, and Niall Rooney. "Text classification using word sequence kernel methods." In 2011 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2011. http://dx.doi.org/10.1109/icmlc.2011.6016983.

5

Deen, Anjna Jayant, and Manasi Gyanchandani. "Machine Learning Kernel Methods for Protein Function Prediction." In 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT). IEEE, 2019. http://dx.doi.org/10.1109/icssit46314.2019.8987852.

6

Jiang, Qingnan, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, and Lei Li. "Learning Kernel-Smoothed Machine Translation with Retrieved Examples." In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.emnlp-main.579.

7

Zheng, Da-Nian, Jia-Xin Wang, Yan-Nan Zhao, and Ze-Hong Yang. "Reduced sets and fast approximation for kernel methods." In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527681.

8

Xu, Yong, Bin Sun, Chong-yang Zhang, Zhong Jin, Chuan-cai Liu, and Jing-yu Yang. "An Implementation Framework for Kernel Methods with High-Dimensional Patterns." In 2006 International Conference on Machine Learning and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icmlc.2006.258439.

9

Zhao, Ziyi, Dan Shi, Hong Huo, and Tao Fang. "Feature Encoding Methods Evaluation based on Multiple kernel Learning." In ICMLC 2018: 2018 10th International Conference on Machine Learning and Computing. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3195106.3195152.

10

Nguyen, Khanh. "Nonparametric Online Machine Learning with Kernels." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/758.

Abstract:
Max-margin and kernel methods are dominant approaches for solving many tasks in machine learning. However, the paramount question is how to solve the model selection problem in these methods, and it becomes urgent in the online learning context. Grid search is a common approach, but it turns out to be highly problematic in real-world applications. Our approach is to view max-margin and kernel methods under a Bayesian setting, and then use Bayesian inference tools to learn model parameters and infer hyper-parameters in principled ways for both the batch and online settings.
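The Bayesian alternative to grid search mentioned in the abstract can be illustrated with the Gaussian-process log marginal likelihood, maximised over kernel hyper-parameters. The batch sketch below is a generic illustration and does not capture the paper's online setting.

```python
import numpy as np

def log_marginal_likelihood(X, y, gamma, noise=0.1):
    # GP regression evidence: log N(y | 0, K + noise^2 I)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=60)
for gamma in (0.01, 0.1, 1.0, 10.0):
    print(gamma, round(log_marginal_likelihood(X, y, gamma), 2))
# Pick the gamma with the highest evidence instead of grid-searching CV error
```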

Reports on the topic "Machine learning, kernel methods"

1

Xu, Yuesheng. Adaptive Kernel Based Machine Learning Methods. Fort Belvoir, VA: Defense Technical Information Center, October 2012. http://dx.doi.org/10.21236/ada588768.

2

Vesselinov, Velimir Valentinov. TensorDecompositions: Unsupervised machine learning methods. Office of Scientific and Technical Information (OSTI), February 2019. http://dx.doi.org/10.2172/1493534.

3

Zhang, Tong. Multi-Stage Convex Relaxation Methods for Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada580533.

4

Jesneck, Jonathan, and Joseph Lo. Modular Machine Learning Methods for Computer-Aided Diagnosis of Breast Cancer. Fort Belvoir, VA: Defense Technical Information Center, May 2004. http://dx.doi.org/10.21236/ada430017.

5

Hedyehzadeh, Mohammadreza, Shadi Yoosefian Dezfuli Nezhad, and Naser Safdarian. Evaluation of Conventional Machine Learning Methods for Brain Tumour Type Classification. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, June 2020. http://dx.doi.org/10.7546/crabs.2020.06.14.

6

Semen, Peter M. A Generalized Approach to Soil Strength Prediction With Machine Learning Methods. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada464726.

7

Chernozhukov, Victor, Kaspar Wüthrich, and Yinchu Zhu. Exact and robust conformal inference methods for predictive machine learning with dependent data. The IFS, March 2018. http://dx.doi.org/10.1920/wp.cem.2018.1618.

8

Hemphill, Geralyn M. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction. Office of Scientific and Technical Information (OSTI), September 2016. http://dx.doi.org/10.2172/1329544.

9

Mishra, Umakant, and Sagar Gautam. Improving and testing machine learning methods for benchmarking soil carbon dynamics representation of land surface models. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1891184.

10

Martinez, Carianne, John P. Korbin, Kevin Matthew Potter, Emily Donahue, Jeremy David Gamet, and Matthew David Smith. Investigating Machine Learning Based X-Ray Computed Tomography Reconstruction Methods to Enhance the Accuracy of CT Scans. Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1571551.

