Academic literature on the topic 'Fast kernel methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fast kernel methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Fast kernel methods"

1

Li, Jun-yi, and Jian-hua Li. "Fast Image Search with Locality-Sensitive Hashing and Homogeneous Kernels Map." Scientific World Journal 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/350676.

Abstract:
Fast image search with efficient additive kernels and kernel locality-sensitive hashing is proposed. To accommodate kernel functions, recent work has explored ways of constructing locality-sensitive hashing (LSH) schemes that keep query time linear; however, existing methods sacrifice search accuracy in order to allow fast queries. To improve search accuracy, we show how to apply explicit feature maps of homogeneous kernels, which aid feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods, making object classification and content-based retrieval faster and more accurate.
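The core idea of locality-sensitive hashing that this line of work builds on can be sketched with sign random projections for cosine similarity: nearby vectors receive bit codes with small Hamming distance, so buckets of matching bits can be probed instead of scanning the whole collection. This is a generic illustration of the principle, not the paper's kernelized, homogeneous-kernel-map variant:

```python
import numpy as np

def simhash_codes(X, n_bits=64, seed=0):
    """Sign random projections: similar vectors get similar bit codes."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((X.shape[1], n_bits))
    return X @ H > 0                             # (n, n_bits) boolean codes

def hamming(a, b):
    return int((a != b).sum())

rng = np.random.default_rng(6)
x = rng.standard_normal(128)
near = x + 0.05 * rng.standard_normal(128)       # slightly perturbed copy
far = rng.standard_normal(128)                   # unrelated vector
codes = simhash_codes(np.stack([x, near, far]))
print(hamming(codes[0], codes[1]), "<", hamming(codes[0], codes[2]))
```

The expected fraction of mismatched bits equals the angle between the vectors divided by pi, which is what makes bucketing by code prefix effective.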
2

Zhai, Yuejing, Zhouzheng Li, and Haizhong Liu. "Multi-Angle Fast Neural Tangent Kernel Classifier." Applied Sciences 12, no. 21 (October 26, 2022): 10876. http://dx.doi.org/10.3390/app122110876.

Abstract:
Multi-kernel learning methods are essential kernel learning methods. However, the base kernels in most multi-kernel learning methods are selected from kernel functions with shallow structures, which are weak for large-scale uneven data. We propose two types of acceleration models from a multidimensional perspective of the data: a multi-kernel learning method based on the neural tangent kernel (NTK), in which the NTK kernel regressor is shown to be equivalent to an infinitely wide neural network predictor and the NTK, with its deep structure, is used as the base kernel to enhance the learning ability of multi-kernel models; and a parallel computing kernel model based on data-partitioning techniques. An RBF- and POLY-based multi-kernel model is also proposed. All models use historical memory-based PSO (HMPSO) to search efficiently for the parameters within the model. Since the NTK has a multi-layer structure and thus significant computational complexity, using a Monotone Disjunctive Kernel (MDK) to store and train Boolean features in binary achieves a 15–60% training-time compression of NTK models on different datasets while obtaining a 1–25% accuracy improvement.
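A building block shared by multi-kernel models such as the RBF/POLY one above is a weighted combination of base kernels, which remains a valid kernel and can be plugged into kernel ridge regression. A minimal sketch in which the weights and hyperparameters are placeholders that a search procedure such as PSO would tune:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    return np.exp(-gamma * ((X[:, None] - Y[None]) ** 2).sum(-1))

def poly(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def multi_kernel(X, Y, w=(0.7, 0.3)):
    """A convex combination of base kernels is again a valid kernel."""
    return w[0] * rbf(X, Y) + w[1] * poly(X, Y)

# Kernel ridge regression with the combined kernel on a toy 1-D problem.
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])
alpha = np.linalg.solve(multi_kernel(X, X) + 1e-3 * np.eye(len(X)), y)
X_test = np.linspace(-1, 1, 5)[:, None]
print(multi_kernel(X_test, X) @ alpha)   # approximately sin(3 x)
```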
3

Wang, Shitong, Jun Wang, and Fu-lai Chung. "Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets." IEEE Transactions on Cybernetics 44, no. 1 (January 2014): 1–20. http://dx.doi.org/10.1109/tsmcb.2012.2236828.

4

Viljanen, Markus, Antti Airola, and Tapio Pahikkala. "Generalized vec trick for fast learning of pairwise kernel models." Machine Learning 111, no. 2 (January 28, 2022): 543–73. http://dx.doi.org/10.1007/s10994-021-06127-y.

Abstract:
Pairwise learning corresponds to the supervised learning setting where the goal is to make predictions for pairs of objects. Prominent applications include predicting drug-target or protein-protein interactions, or customer-product preferences. In this work, we present a comprehensive review of pairwise kernels that have been proposed for incorporating prior knowledge about the relationship between the objects. Specifically, we consider the standard, symmetric and anti-symmetric Kronecker product kernels, metric-learning, Cartesian, ranking, as well as linear, polynomial and Gaussian kernels. Recently, an O(nm + nq) time generalized vec trick algorithm, where n, m, and q denote the number of pairs, drugs and targets, was introduced for training kernel methods with the Kronecker product kernel. This was a significant improvement over previous O(n^2) training methods, since in most real-world applications m, q << n. In this work we show how all the reviewed kernels can be expressed as sums of Kronecker products, allowing the use of the generalized vec trick for speeding up their computation. In the experiments, we demonstrate how the introduced approach allows scaling pairwise kernels to much larger data sets than previously feasible, and provide an extensive comparison of the kernels on a number of biological interaction prediction tasks.
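The vec trick referred to above rests on the identity (B ⊗ A) vec(X) = vec(A X Bᵀ), which multiplies a vector by a Kronecker product without ever forming it. A minimal NumPy sketch of the basic identity (the paper's generalization further handles arbitrary subsets of sampled pairs):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (B kron A) @ x without forming the Kronecker product.

    Uses vec(A @ X @ B.T) == (B kron A) @ vec(X), with vec in column-major
    order, so the cost is two small matrix products instead of one huge one.
    """
    m, n = A.shape
    p, q = B.shape
    X = x.reshape(n, q, order="F")                   # un-vec
    return (A @ X @ B.T).reshape(m * p, order="F")   # re-vec

# Agrees with the explicit Kronecker product on a small instance.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((5, 2))
x = rng.standard_normal(4 * 2)
assert np.allclose(kron_matvec(A, B, x), np.kron(B, A) @ x)
```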
5

Trifonov, P. V. "Design and decoding of polar codes with large kernels: a survey." Проблемы передачи информации 59, no. 1 (December 15, 2023): 25–45. http://dx.doi.org/10.31857/s0555292323010035.

Abstract:
We present techniques for the construction of polar codes with large kernels and their decoding. A crucial problem in the implementation of the successive cancellation decoding algorithm and its derivatives is kernel processing, i.e., fast evaluation of the log-likelihood ratios for kernel input symbols. We discuss window and recursive trellis processing methods. We consider techniques for evaluation of the reliability of bit subchannels and for obtaining codes with improved distance properties.
6

Rejwer-Kosińska, Ewa, Liliana Rybarska-Rusinek, and Aleksandr Linkov. "On accuracy of translations by kernel independent fast multipole methods." Computers & Mathematics with Applications 124 (October 2022): 227–40. http://dx.doi.org/10.1016/j.camwa.2022.08.033.

7

Chen, Kai, Rongchun Li, Yong Dou, Zhengfa Liang, and Qi Lv. "Ranking Support Vector Machine with Kernel Approximation." Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4629534.

Abstract:
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
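Random Fourier features, one of the two approximations explored above, replace the RBF kernel k(x, y) = exp(−γ‖x − y‖²) with an explicit map z(x) such that z(x)ᵀz(y) ≈ k(x, y), so a linear model can be trained on z(X). A minimal sketch of the Rahimi–Recht construction (the feature dimension D and γ here are illustrative):

```python
import numpy as np

def random_fourier_features(X, gamma, D, seed=0):
    """Map X (n, d) to Z (n, D) with Z @ Z.T ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                           # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Usage: inner products of features approximate the exact RBF kernel.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Z = random_fourier_features(X, gamma=0.5, D=4096)
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(Z @ Z.T - K_exact).max())  # small; shrinks as D grows
```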
8

Bian, Lu Sha, Yong Fang Yao, Xiao Yuan Jing, Sheng Li, Jiang Yue Man, and Jie Sun. "Face Recognition Based on a Fast Kernel Discriminant Analysis Approach." Advanced Materials Research 433-440 (January 2012): 6205–11. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.6205.

Abstract:
The computational cost of kernel discrimination is usually higher than that of linear discrimination, making many kernel methods impractically slow. To overcome this disadvantage, several accelerated algorithms have been presented which express kernel discriminant vectors using a subset of the mapped training samples, selected by some criterion. However, they still need to calculate a large kernel matrix using all training samples, so they save only rather limited computing time. In this paper, we propose fast and effective kernel discriminations based on mapped mean samples (MMS). The approach calculates a small kernel matrix by constructing a few mean samples in the input space, then expresses the kernel discriminant vectors using MMS. The proposed kernel approach is tested on the public AR and FERET face databases. Experimental results show that this approach is effective both in saving computing time and in acquiring favorable recognition results.
9

Kriege, Nils M., Marion Neumann, Christopher Morris, Kristian Kersting, and Petra Mutzel. "A unifying view of explicit and implicit feature maps of graph kernels." Data Mining and Knowledge Discovery 33, no. 6 (September 17, 2019): 1505–47. http://dx.doi.org/10.1007/s10618-019-00652-0.

Abstract:
Non-linear kernel methods can be approximated by fast linear ones using suitable explicit feature maps, allowing their application to large-scale problems. We investigate how convolution kernels for structured data are composed from base kernels and construct corresponding feature maps. On this basis we propose exact and approximative feature maps for widely used graph kernels based on the kernel trick. We analyze for which kernels and graph properties computation by explicit feature maps is feasible and actually more efficient. In particular, we derive approximative, explicit feature maps for state-of-the-art kernels supporting real-valued attributes including the GraphHopper and graph invariant kernels. In extensive experiments we show that our approaches often achieve a classification accuracy close to the exact methods based on the kernel trick, but require only a fraction of their running time. Moreover, we propose and analyze algorithms for computing random walk, shortest-path and subgraph matching kernels by explicit and implicit feature maps. Our theoretical results are confirmed experimentally by observing a phase transition when comparing running time with respect to label diversity, walk lengths and subgraph size, respectively.
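The explicit-versus-implicit distinction drawn above can be made concrete with a toy vertex-label histogram kernel (not one of the paper's kernels): implicitly, k(G, H) counts pairs of equally labeled vertices across the two graphs; explicitly, each graph maps to a label-count vector and the kernel becomes a plain dot product, computable in linear rather than quadratic time per pair:

```python
from collections import Counter
import numpy as np

def label_histogram(labels, vocabulary):
    """Explicit feature map: vector of vertex-label counts."""
    c = Counter(labels)
    return np.array([c[v] for v in vocabulary], dtype=float)

def kernel_implicit(labels_g, labels_h):
    """Implicit form: count all pairs of equally labeled vertices."""
    return sum(1 for a in labels_g for b in labels_h if a == b)

g, h = ["A", "A", "B"], ["A", "B", "B", "C"]
vocab = sorted(set(g) | set(h))
phi_g, phi_h = label_histogram(g, vocab), label_histogram(h, vocab)
assert phi_g @ phi_h == kernel_implicit(g, h)  # 2*1 + 1*2 = 4
```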
10

Jiang, Shunhua, Yunze Man, Zhao Song, Zheng Yu, and Danyang Zhuo. "Fast Graph Neural Tangent Kernel via Kronecker Sketching." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 7033–41. http://dx.doi.org/10.1609/aaai.v36i6.20662.

Abstract:
Many deep learning tasks need to deal with graph data (e.g., social networks, protein structures, code ASTs). Due to the importance of these tasks, Graph Neural Networks (GNNs) have become the de facto method for machine learning on graph data and are widely applied because of their convincing performance. Unfortunately, one major barrier to using GNNs is that they require substantial time and resources to train. Recently, the Graph Neural Tangent Kernel (GNTK) has emerged as a new method for learning on graph data. GNTK is an application of the Neural Tangent Kernel (NTK), a kernel method, to graph data, and solving NTK regression is equivalent to using gradient descent to train an infinitely wide neural network. The key benefit of using GNTK is that, as with any kernel method, GNTK's parameters can be solved directly in a single step, avoiding time-consuming gradient descent. Meanwhile, sketching has become increasingly used for speeding up various optimization problems, including solving kernel regression. Given a kernel matrix of n graphs, using sketching in solving kernel regression can reduce the running time to o(n^3). Unfortunately, such methods usually require extensive knowledge about the kernel matrix beforehand, while in the case of GNTK we find that constructing the kernel matrix already takes O(n^2 N^4) time, assuming each graph has N nodes. The kernel matrix construction time can be a major performance bottleneck as the graph size N increases. A natural question to ask is thus whether we can speed up the kernel matrix construction to improve GNTK regression's end-to-end running time. This paper provides the first algorithm to construct the kernel matrix in o(n^2 N^3) running time.

Dissertations / Theses on the topic "Fast kernel methods"

1

Vishwanathan, S. V. N. "Kernel Methods: Fast Algorithms and Real Life Applications." Thesis, Indian Institute of Science, 2003. https://etd.iisc.ac.in/handle/2005/3923.

Abstract:
Support Vector Machines (SVMs) have recently gained prominence in the field of machine learning and pattern classification (Vapnik, 1995; Herbrich, 2002; Scholkopf and Smola, 2002). Classification is achieved by finding a separating hyperplane in a feature space, which can be mapped back onto a non-linear surface in the input space. However, training an SVM involves solving a quadratic optimization problem, which tends to be computationally intensive. Furthermore, it can be subject to stability problems and is non-trivial to implement. This thesis proposes a fast iterative Support Vector training algorithm which overcomes some of these problems. Our algorithm, which we christen Simple SVM, works mainly for the quadratic soft-margin loss (also called the l2 formulation). We also sketch an extension for the linear soft-margin loss (also called the l1 formulation). Simple SVM works by incrementally changing a candidate Support Vector set using a locally greedy approach, until the supporting hyperplane is found within a finite number of iterations. It is derived by a simple (yet computationally crucial) modification of the incremental SVM training algorithms of Cauwenberghs and Poggio (2001) which allows us to perform update operations very efficiently. Constant-time methods for initialization of the algorithm and experimental evidence for the speed of the proposed algorithm, compared to methods such as Sequential Minimal Optimization and the Nearest Point Algorithm, are given. We present results on a variety of real-life datasets to validate our claims. In many real-life applications, especially for the l2 formulation, the kernel matrix K ∈ R^{n×n} can be written as K = ZᵀZ + Λ, where Z ∈ R^{m×n} with m ≪ n and Λ ∈ R^{n×n} is diagonal with nonnegative entries. Hence the matrix K − Λ is rank-degenerate. Extending the work of Fine and Scheinberg (2001) and Gill et al. (1975), we propose an efficient factorization algorithm which can be used to find an LDLᵀ factorization of K in O(nm²) time. The modified factorization, after a rank-one update of K, can be computed in O(m²) time. We show how the Simple SVM algorithm can be sped up by taking advantage of this new factorization. We also demonstrate applications of our factorization to interior point methods. We show a close relation between the LDV factorization of a rectangular matrix and our LDLᵀ factorization (Gill et al., 1975). An important feature of SVMs is that they can work with data from any input domain as long as a suitable mapping into a Hilbert space can be found; in other words, given the input data we should be able to compute a positive semi-definite kernel matrix of the data (Scholkopf and Smola, 2002). In this thesis we propose kernels on a variety of discrete objects, such as strings, trees, Finite State Automata, and Pushdown Automata. We show that our kernels include as special cases the celebrated Pair-HMM kernels (Durbin et al., 1998; Watkins, 2000), the spectrum kernel (Leslie et al., 2002), convolution kernels for NLP (Collins and Duffy, 2001), graph diffusion kernels (Kondor and Lafferty, 2002) and various other string-matching kernels. Because of their widespread applications in bio-informatics and web-document-based algorithms, string kernels are of special practical importance. By intelligently using the matching statistics algorithm of Chang and Lawler (1994), we propose, perhaps, the first ever algorithm to compute string kernels in linear time.
This obviates dynamic programming with its quadratic time complexity and makes string kernels a viable alternative for the practitioner. We also propose extensions of our string kernels to compute kernels on trees efficiently. This thesis presents a linear-time algorithm for ordered trees and a log-linear-time algorithm for unordered trees. In general, SVMs require prediction time proportional to the number of Support Vectors. If the dataset is noisy, a large fraction of the data points become Support Vectors and thus the time required for prediction increases. But in many applications like search engines or web document retrieval, the dataset is noisy, yet the speed of prediction is critical. We propose a method for string kernels by which the prediction time can be reduced to linear in the length of the sequence to be classified, regardless of the number of Support Vectors. We achieve this by using a weighted version of our string kernel algorithm. We explore the relationship between dynamic systems and kernels. We define kernels on various kinds of dynamic systems including Markov chains (both discrete and continuous), diffusion processes on graphs and Markov chains, Finite State Automata, and various linear time-invariant systems. Trajectories are used to define kernels on the initial conditions of the underlying dynamic system. The same idea is extended to define kernels on a dynamic system with respect to a set of initial conditions. This framework leads to a large number of novel kernels and also generalizes many previously proposed kernels. Lack of adequate training data is a problem which plagues classifiers. We propose a new method to generate virtual training samples in the case of handwritten digit data. Our method uses the two-dimensional suffix tree representation of a set of matrices to encode an exponential number of virtual samples in linear space, thus leading to an increase in classification accuracy. This in turn leads us naturally to a compact, data-dependent representation of a test pattern which we call the description tree. We propose a new kernel for images and demonstrate a quadratic-time algorithm for computing it by using the suffix tree representation of an image. We also describe a method to reduce the prediction time to quadratic in the size of the test image by using techniques similar to those used for string kernels.
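The low-rank-plus-diagonal structure K = ZᵀZ + Λ exploited above can be illustrated with the Woodbury identity, which solves Kx = b in O(nm²) rather than O(n³) time; this sketches the structural idea only, not the thesis's LDLᵀ factorization:

```python
import numpy as np

def solve_low_rank_plus_diag(Z, lam, b):
    """Solve (Z.T @ Z + diag(lam)) x = b via the Woodbury identity.

    Z has shape (m, n) with m << n and lam is a positive length-n vector,
    so the only dense solve is an m x m system: O(n m^2) work overall.
    """
    m, _ = Z.shape
    ZDinv = Z / lam                       # Z @ diag(lam)^{-1}
    S = np.eye(m) + ZDinv @ Z.T           # small m x m capacitance matrix
    return b / lam - ZDinv.T @ np.linalg.solve(S, ZDinv @ b)

# Agrees with a direct dense solve on a small instance.
rng = np.random.default_rng(2)
Z = rng.standard_normal((3, 50))
lam = rng.uniform(0.5, 2.0, size=50)
b = rng.standard_normal(50)
K = Z.T @ Z + np.diag(lam)
assert np.allclose(solve_low_rank_plus_diag(Z, lam, b), np.linalg.solve(K, b))
```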
2

Vishwanathan, S. V. N. "Kernel Methods: Fast Algorithms and Real Life Applications." Thesis, Indian Institute of Science, 2003. http://hdl.handle.net/2005/49.

3

Plumlee, Matthew. "Fast methods for identifying high dimensional systems using observations." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53544.

Abstract:
This thesis proposes new analysis tools for simulation models in the presence of data. To achieve a representation close to reality, simulation models are typically endowed with a set of inputs, termed parameters, that represent several controllable, stochastic, or unknown components of the system. Because these models often utilize computationally expensive procedures, even modern supercomputers require a nontrivial amount of time, money, and energy to run them for complex systems. Existing statistical frameworks avoid repeated evaluations of deterministic models through an emulator, constructed by conducting an experiment on the code. In high-dimensional scenarios, the traditional framework for emulator-based analysis can fail due to the computational burden of inference. This thesis proposes a new class of experiments where inference from half a million observations is possible in seconds, versus the days required for the traditional technique. In a case study presented in this thesis, the parameter of interest is a function as opposed to a scalar or a set of scalars, meaning the problem exists in the high-dimensional regime. This work develops a new modeling strategy to nonparametrically study the functional parameter using Bayesian inference. Stochastic simulations are also investigated in the thesis. I describe the development of emulators through a framework termed quantile kriging, which allows for non-parametric representations of the stochastic behavior of the output, whereas previous work has focused on normally distributed outputs. Furthermore, this work studied asymptotic properties of this methodology that yielded practical insights. Under certain regularity conditions, the following result holds: by using an experiment that has the appropriate ratio of replications to sets of different inputs, we can achieve an optimal rate of convergence. Additionally, this method provided the basic tool for the study of defect patterns, and a case study is explored.
4

Lee, Dong Ryeol. "A distributed kernel summation framework for machine learning and scientific applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44727.

Abstract:
The class of computational problems I consider in this thesis shares the common trait of requiring consideration of pairs (or higher-order tuples) of data points. I focus on the problem of kernel summation operations ubiquitous in many data mining and scientific algorithms. In machine learning, kernel summations appear in popular kernel methods which can model nonlinear structures in data. Kernel methods include many non-parametric methods such as kernel density estimation, kernel regression, Gaussian process regression, kernel PCA, and kernel support vector machines (SVM). In computational physics, kernel summations occur inside the classical N-body problem for simulating positions of a set of celestial bodies or atoms. This thesis attempts to marry, for the first time, the best relevant techniques in parallel computing, where kernel summations are in low dimensions, with the best general-dimension algorithms from the machine learning literature. We provide a unified, efficient parallel kernel summation framework that can utilize: (1) various types of deterministic and probabilistic approximations that may be suitable for both low and high-dimensional problems with a large number of data points; (2) indexing the data using any multi-dimensional binary tree with both distributed memory (MPI) and shared memory (OpenMP/Intel TBB) parallelism; (3) a dynamic load balancing scheme to adjust work imbalances during the computation. I will first summarize my previous research in serial kernel summation algorithms. This work started from Greengard/Rokhlin's earlier work on fast multipole methods for the purpose of approximating potential sums of many particles. The contributions of this part of the thesis include the following: (1) reinterpretation of Greengard/Rokhlin's work for the computer science community; (2) the extension of the algorithms to use a larger class of approximation strategies, i.e. probabilistic error bounds via Monte Carlo techniques; (3) the multibody series expansion: the generalization of the theory of fast multipole methods to handle interactions of more than two entities; (4) the first O(N) proof of the batch approximate kernel summation using a notion of intrinsic dimensionality. Then I move on to the parallelization of the kernel summations and tackle the scaling of two other kernel methods, Gaussian process regression (kernel matrix inversion) and kernel PCA (kernel matrix eigendecomposition). The artifact of this thesis has contributed to an open-source machine learning package called MLPACK, which was first demonstrated at NIPS 2008 and subsequently at the NIPS 2011 Big Learning Workshop. Completing a portion of this thesis involved utilization of high performance computing resources at XSEDE (eXtreme Science and Engineering Discovery Environment) and NERSC (National Energy Research Scientific Computing Center).
5

Holmes, Michael P. "Multi-tree Monte Carlo methods for fast, scalable machine learning." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33865.

Abstract:
As modern applications of machine learning and data mining are forced to deal with ever more massive quantities of data, practitioners quickly run into difficulty with the scalability of even the most basic and fundamental methods. We propose to provide scalability through a marriage between classical, empirical-style Monte Carlo approximation and deterministic multi-tree techniques. This union entails a critical compromise: losing determinism in order to gain speed. In the face of large-scale data, such a compromise is arguably often not only the right but the only choice. We refer to this new approximation methodology as Multi-Tree Monte Carlo. In particular, we have developed the following fast approximation methods: 1. Fast training for kernel conditional density estimation, showing speedups as high as 10⁵ on up to 1 million points. 2. Fast training for general kernel estimators (kernel density estimation, kernel regression, etc.), showing speedups as high as 10⁶ on tens of millions of points. 3. Fast singular value decomposition, showing speedups as high as 10⁵ on matrices containing billions of entries. The level of acceleration we have shown represents improvement over the prior state of the art by several orders of magnitude. Such improvement entails a qualitative shift, a commoditization, that opens doors to new applications and methods that were previously invisible, outside the realm of practicality. Further, we show how these particular approximation methods can be unified in a Multi-Tree Monte Carlo meta-algorithm which lends itself as scaffolding to the further development of new fast approximation methods. Thus, our contribution includes not just the particular algorithms we have derived but also the Multi-Tree Monte Carlo methodological framework, which we hope will lead to many more fast algorithms that can provide the kind of scalability we have shown here to other important methods from machine learning and related fields.
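The empirical-style Monte Carlo approximation referred to above can be illustrated by estimating a kernel sum from a uniform subsample instead of all N points; this is a toy sketch of the principle, not of the multi-tree algorithms themselves:

```python
import numpy as np

def kernel_sum_exact(x, data, h):
    """Gaussian kernel sum over all N points: O(N) per query."""
    return np.exp(-(x - data) ** 2 / (2 * h * h)).sum()

def kernel_sum_mc(x, data, h, n_samples, seed=0):
    """Monte Carlo estimate: rescale the sum over a uniform subsample."""
    rng = np.random.default_rng(seed)
    sub = rng.choice(data, size=n_samples, replace=False)
    return len(data) / n_samples * np.exp(-(x - sub) ** 2 / (2 * h * h)).sum()

data = np.random.default_rng(3).standard_normal(100_000)
print(kernel_sum_exact(0.5, data, h=0.2))                 # exact, all points
print(kernel_sum_mc(0.5, data, h=0.2, n_samples=2_000))   # close, 50x cheaper
```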
6

Strengbom, Kristoffer. "Mobile Services Based Traffic Modeling." Thesis, Linköpings universitet, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-116459.

Abstract:
Traditionally, communication systems have been dominated by voice applications. Today, with the emergence of smartphones, focus has shifted towards packet-switched networks. The Internet provides a wide variety of services such as video streaming, web browsing, e-mail, etc., and IP traffic models are needed in all stages of product development, from early research to system tests. In this thesis, we propose a multi-level model of IP traffic where the user behavior and the actual IP traffic generated by different services are considered two independent random processes. The model is based on observations of IP packet header logs from live networks. In this way, models can be updated to reflect the ever-changing service and end-user equipment usage. The work can thus be divided into two parts. The first part is concerned with modeling the traffic from different services. A subscriber is interested in enjoying the services provided on the Internet, and traffic modeling should reflect the characteristics of these services. An underlying assumption is that different services generate their own characteristic patterns of data. The FFT is used to analyze the packet traces. We show that the traces contain strong periodicities and that some services are more or less deterministic. For some services this strong frequency content is due to the characteristics of the cellular network, and for others it is actually a programmed behavior of the service. The periodicities indicate that there are strong correlations between individual packets or bursts of packets. The second part is concerned with the user behavior, i.e. how the users access the different services in time. We propose a model based on a Markov renewal process and estimate the model parameters. In order to evaluate the model we compare it to two simpler models, using the model's ability to predict future observations as the selection criterion. We show that the proposed Markov renewal model is the best of the three models in this sense. The model selection framework can be used to evaluate future models.
7

Previti, Alberto <1985&gt. "Fast and accurate numerical solutions in some problems of particle and radiation transport: synthetic acceleration for the method of short characteristics, Doppler-broadened scattering kernel, remote sensing of the cryosphere." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6599/1/Previti_Alberto_tesi.pdf.

Abstract:
The aim of this work is to present various aspects of numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing an appreciable reduction in iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
8

Previti, Alberto <1985&gt. "Fast and accurate numerical solutions in some problems of particle and radiation transport: synthetic acceleration for the method of short characteristics, Doppler-broadened scattering kernel, remote sensing of the cryosphere." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6599/.

9

Jain, Prateek. "Large scale optimization methods for metric and kernel learning." Thesis, 2009. http://hdl.handle.net/2152/27132.

Abstract:
A large number of machine learning algorithms are critically dependent on the underlying distance/metric/similarity function. Learning an appropriate distance function is therefore crucial to the success of many methods. The class of distance functions that can be learned accurately is characterized by the amount and type of supervision available to the particular application. In this thesis, we explore a variety of such distance learning problems using different amounts/types of supervision and provide efficient and scalable algorithms to learn appropriate distance functions for each of these problems. First, we propose a generic regularized framework for Mahalanobis metric learning and prove that for a wide variety of regularization functions, metric learning can be used for efficiently learning a kernel function incorporating the available side-information. Furthermore, we provide a method for fast nearest neighbor search using the learned distance/kernel function. We show that a variety of existing metric learning methods are special cases of our general framework. Hence, our framework also provides a kernelization scheme and fast similarity search scheme for such methods. Second, we consider a variation of our standard metric learning framework where the side-information is incremental, streaming and cannot be stored. For this problem, we provide an efficient online metric learning algorithm that compares favorably to existing methods both theoretically and empirically. Next, we consider a contrasting scenario where the amount of supervision being provided is extremely small compared to the number of training points. For this problem, we consider two different modeling assumptions: 1) data lies on a low-dimensional linear subspace, 2) data lies on a low-dimensional non-linear manifold. The first assumption, in particular, leads to the problem of matrix rank minimization over polyhedral sets, which is a problem of immense interest in numerous fields including optimization, machine learning, computer vision, and control theory. We propose a novel online learning based optimization method for the rank minimization problem and provide provable approximation guarantees for it. The second assumption leads to our geometry-aware metric/kernel learning formulation, where we jointly model the metric/kernel over the data along with the underlying manifold. We provide an efficient alternating minimization algorithm for this problem and demonstrate its wide applicability and effectiveness by applying it to various machine learning tasks such as semi-supervised classification, colored dimensionality reduction, manifold alignment etc. Finally, we consider the task of learning distance functions under no supervision, which we cast as a problem of learning disparate clusterings of the data. To this end, we propose a discriminative approach and a generative model based approach and we provide efficient algorithms with convergence guarantees for both the approaches.
10

Negi, Yoginder Kumar. "Fast Solvers and Preconditioning Methods in Computational Electromagnetics." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/4509.

Abstract:
Method of Moments (MoM) is an integral-equation-based solver and one of the most popular computational techniques for solving complex 3D electromagnetic problems efficiently and accurately. Compared to conventional differential equation solvers, MoM does not require a volumetric discretization of the entire bounding box containing the structure, nor does it impose absorbing boundary conditions or a perfectly matched layer. However, due to Green's function interactions, the MoM matrix is dense, leading to quadratic matrix fill time and cubic solve time complexity. As the scale and complexity of the problem increase, the matrix size increases, with high storage and solve-time requirements. Thus, to further expedite the solve time and decrease the storage requirement, fast solvers are extremely important. This thesis addresses the above problem by developing fast solvers and proposing a couple of novel preconditioning approaches for improving the solution speed of the fast solvers. The recent development of kernel-independent fast solvers has gained popularity in the CEM community because of the ease of their implementation. In this thesis, first, a brief overview of kernel-dependent and kernel-independent fast solvers is presented along with their parallelization. A new compression method is introduced based on Adaptive Cross Approximation (ACA) sampling, where the pivot rows and columns yield QR-factorized orthogonal compressed matrices. These orthogonal matrices can be compressed further using Interpolative Decomposition (ID) or Singular Value Decomposition (SVD). The entire spectrum of fast solvers is highly dependent on the condition number of the matrices, specifically the spread of their eigenvalues. Hence, an ill-conditioned matrix jeopardizes the solution time and accuracy. This ill-conditioning is mostly due to the geometry, the meshing, or the frequency of operation. Preconditioning the system of equations is the most efficient way to overcome ill-conditioning. Conventional preconditioners are either difficult to parallelize or lack linear scaling with problem size. In this thesis, we propose new preconditioners which overcome the shortcomings of existing preconditioners. In the null-field preconditioner, near-field blocks are scaled to the diagonal blocks using the null-field method. The preconditioner is computed from all near-field blocks and selected far-field blocks in accordance with the null-field procedure. The final form of the preconditioner is block diagonal; it therefore generates no additional fill-in in its inverse and is also amenable to parallelization. A complexity analysis is presented to show the near-linear cost of preconditioner construction and usage in terms of computation time and memory. Numerical experiments with a sequential implementation demonstrate on average a 1.5-3x speed-up in iterative solution time over Incomplete LU (ILUT) based preconditioners, giving a robust and stable preconditioner better than ILUT. The main drawback of the null-field method is the improper scaling of the near-field blocks. The next preconditioning method proposed in the thesis is based on the Schur complement method, which diagonalizes the near-field blocks to a block-diagonal form. The fill-in blocks generated in the process can be compressed efficiently and used to scale the near-field blocks completely.
Further fill-in reduction can be achieved by arranging the near-field blocks using graph algorithms. Both these steps reduce the final matrix-vector product time in the iterative solver. Numerical experiments demonstrate a significant advantage over ILUT and recently published null-field based methods. For multiport systems or problems with multiple right-hand sides (RHS), an iterative fast solver may turn out to be costly, since each RHS may require a different number of iterations, taking more time to solve all RHSs; in such cases a direct solver is highly desirable. Since the complexity of a conventional direct solver is cubic, this thesis also proposes a fast direct solver based on extended sparsification of the FMM.
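The role of a block-diagonal preconditioner like those described above can be illustrated with a toy block-Jacobi preconditioner inside conjugate gradients; this is a generic sketch, not the thesis's null-field or Schur-complement construction:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(4)
n, blk = 400, 4
G = rng.standard_normal((n, n))
A = G @ G.T / n + 4.0 * np.eye(n)        # a symmetric positive definite system
b = rng.standard_normal(n)

# Invert each blk x blk diagonal block once; applying M^{-1} is then cheap.
starts = range(0, n, blk)
blocks_inv = [np.linalg.inv(A[i:i + blk, i:i + blk]) for i in starts]

def apply_Minv(r):
    return np.concatenate([Binv @ r[i:i + blk]
                           for Binv, i in zip(blocks_inv, starts)])

M = LinearOperator((n, n), matvec=apply_Minv)
x, info = cg(A, b, M=M)
assert info == 0
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # small relative residual
```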

Book chapters on the topic "Fast kernel methods"

1

Sun, Ning, Hai-xian Wang, Zhen-hai Ji, Cai-rong Zou, and Li Zhao. "A Fast Feature Extraction Method for Kernel 2DPCA." In Lecture Notes in Computer Science, 767–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11816157_93.

2

Gałkowski, Tomasz, and Adam Krzyżak. "Fast Estimation of Multidimensional Regression Functions by the Parzen Kernel-Based Method." In Communications in Computer and Information Science, 251–62. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1639-9_21.

3

Feng, Yajuan, Lina Wang, and Shiyin Qin. "A Robust Real-Time Tracking Method of Fast Video Object Based on Gaussian Kernel and Random Projection." In Lecture Notes in Computer Science, 376–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42057-3_48.

4

Abduljabbar, Mustafa, Mohammed Al Farhan, Rio Yokota, and David Keyes. "Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture." In Lecture Notes in Computer Science, 553–64. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64203-1_40.

5

"Newton Methods for Fast Semisupervised Linear SVMs." In Large-Scale Kernel Machines. The MIT Press, 2007. http://dx.doi.org/10.7551/mitpress/7496.003.0009.

6

Platt, John C. "Fast Training of Support Vector Machines Using Sequential Minimal Optimization." In Advances in Kernel Methods. The MIT Press, 1998. http://dx.doi.org/10.7551/mitpress/1130.003.0016.

7

"Fast Kernels for String and Tree Matching." In Kernel Methods in Computational Biology. The MIT Press, 2004. http://dx.doi.org/10.7551/mitpress/4057.003.0008.

8

Li, Mengmeng, Paola Pirinoli, Francesca Vipiana, and Giuseppe Vecchi. "Kernel-independent fast factorization methods for multiscale electromagnetic problems." In Integral Equations for Real-Life Multiscale Electromagnetic Problems, 125–77. Institution of Engineering and Technology, 2023. http://dx.doi.org/10.1049/sbew559e_ch4.

9

"Fast and efficient kernel machines using random kitchen sink and ensemble methods." In Emerging Trends in Engineering, Science and Technology for Society, Energy and Environment, 817–22. CRC Press, 2018. http://dx.doi.org/10.1201/9781351124140-128.

10

Qi, Xiangtao, and Li Zhu. "Research on Culinary Fruits and Vegetables and Fresh-Cut Vegetables Recognition Based on Convolutional Neural Network." In Advances in Transdisciplinary Engineering. IOS Press, 2024. http://dx.doi.org/10.3233/atde240099.

Abstract:
The recognition of fruits and vegetables plays an important role in cooking intelligence, and the preparation of fresh-cut vegetables is a key part of food processing, so recognizing fruits and vegetables and fresh-cut vegetables has high value. There are many different types of fruits and vegetables, the styles of fruits and vegetables and fresh-cut vegetables differ according to the cooking requirements, and the appearance of different types of fruits and vegetables processed into fresh-cut vegetables is very similar. A Convolutional Neural Network (CNN) based method for recognizing fruits and vegetables and fresh-cut vegetables is therefore proposed. A convolutional kernel training and learning model for fresh-cut vegetable recognition is designed, the convolutional kernel contour recognition matrix is optimized based on the convolutional recognition of fruit and vegetable ingredients, and the convolutional pooling step is compressed, improving the efficiency and accuracy of fresh-cut vegetable recognition. Experimental analysis shows that converged convolutional kernels and full feature sequence sets can be obtained after learning from a large amount of white-box training picture data of fruits and vegetables. The development of prepared vegetables is a new hot spot in the culinary industry, and the fresh-cut vegetable recognition method proposed in this paper offers high speed and accuracy, providing a high-quality modular production solution for the intelligent preparation of prepared vegetables.

Conference papers on the topic "Fast kernel methods"

1

Kudo, Taku, and Yuji Matsumoto. "Fast methods for kernel-based text analysis." In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Morristown, NJ, USA: Association for Computational Linguistics, 2003. http://dx.doi.org/10.3115/1075096.1075100.

2

Zheng, Da-Nian, Jia-Xin Wang, Yan-Nan Zhao, and Ze-Hong Yang. "Reduced sets and fast approximation for kernel methods." In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527681.

3

Haffner, Patrick. "Fast transpose methods for kernel learning on sparse data." In Proceedings of the 23rd International Conference on Machine Learning (ICML '06). New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1143844.1143893.

4

Liu, Yong, Hailun Lin, Lizhong Ding, Weiping Wang, and Shizhong Liao. "Fast Cross-Validation." In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/346.

Abstract:
Cross-validation (CV) is the most widely adopted approach for selecting the optimal model. However, the computation of CV has high complexity because the learner must be trained multiple times, making CV impractical for large-scale model selection. In this paper, we present an approximate approach to CV based on the theoretical notion of the Bouligand influence function (BIF) and the Nyström method for kernel methods. We first establish the relationship between the BIF and CV, and propose a method to approximate CV via the Taylor expansion of the BIF. Then, we provide a novel computing method to calculate the BIF for a general distribution, and evaluate the BIF for the sample distribution. Finally, we use the Nyström method to accelerate the computation of the BIF matrix, yielding the final approximate CV criterion. The proposed approximate CV requires training only once and is suitable for a wide variety of kernel methods. Experimental results on many datasets show that our approximate CV has no statistical discrepancy with the original CV, but can significantly improve efficiency.
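The Nyström step used above approximates a full kernel matrix from a few landmark columns, K ≈ K_nm K_mm⁻¹ K_nmᵀ. A minimal sketch with an RBF kernel (the landmark count and γ are illustrative):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    return np.exp(-gamma * ((X[:, None] - Y[None]) ** 2).sum(-1))

def nystrom_factor(X, m, gamma=0.5, seed=0):
    """Rank-m factor F with F @ F.T ~= K(X, X), built from m landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    Knm = rbf(X, X[idx], gamma)                        # n x m landmark columns
    w, U = np.linalg.eigh(rbf(X[idx], X[idx], gamma))  # K_mm eigendecomposition
    return Knm @ U / np.sqrt(np.maximum(w, 1e-12))     # apply K_mm^{-1/2}

X = np.random.default_rng(5).standard_normal((500, 3))
F = nystrom_factor(X, m=100)
print(np.abs(F @ F.T - rbf(X, X)).max())  # small for smooth kernels
```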
5

Ceola, Federico, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, and Lorenzo Natale. "Fast Object Segmentation Learning with Kernel-based Methods for Robotics." In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. http://dx.doi.org/10.1109/icra48506.2021.9561758.

6

Baczewski, A. D., M. R. Vikram, B. Shanker, and L. C. Kempel. "Fast methods for the evaluation of the diffusion kernel with potential extensions to the dissipative kernel." In 2008 IEEE Antennas and Propagation Society International Symposium and USNC/URSI National Radio Science Meeting. IEEE, 2008. http://dx.doi.org/10.1109/aps.2008.4619429.

7

Mak, Brian, and Roger Hsiao. "Robustness of several kernel-based fast adaptation methods on noisy LVCSR." In Interspeech 2007. ISCA: ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-118.

8

Wang, Yi, Nan Xue, Xin Fan, Jiebo Luo, Risheng Liu, Bin Chen, Haojie Li, and Zhongxuan Luo. "Fast Factorization-free Kernel Learning for Unlabeled Chunk Data Streams." In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/393.

Abstract:
Data stream analysis aims at extracting discriminative information for classification from continuously incoming samples. It is extremely challenging to detect novel data while updating the model in an efficient and stable fashion, especially for chunk data. This paper proposes a fast factorization-free kernel learning method that unifies novelty detection and incremental learning for unlabeled chunk data streams in one framework. The proposed method constructs a joint reproducing kernel Hilbert space from known class centers by solving a linear system in kernel space. Naturally, unlabeled data can be detected and classified among multiple classes by a single decision model. Projecting samples into the discriminative feature space turns out to be the product of two small-sized kernel matrices, with no need for time-consuming factorizations such as QR decomposition or singular value decomposition. Moreover, the insertion of a novel class can be treated as the addition of a new orthogonal basis to the existing feature space, resulting in fast and stable updating schemes. Both theoretical analysis and experimental validation on real-world datasets demonstrate that the proposed methods learn chunk data streams with significantly lower computational costs and comparable or superior accuracy relative to the state of the art.
9

Wang, Yongqing, Yongkang Zou, Suiwu Zheng, and Xinlan Guo. "Simpler Minimum Enclosing Ball: Fast approximate MEB algorithm for extensive kernel methods." In 2008 Chinese Control and Decision Conference (CCDC). IEEE, 2008. http://dx.doi.org/10.1109/ccdc.2008.4597996.

10

Wang, Suhang, Charu Aggarwal, and Huan Liu. "Randomized Feature Engineering as a Fast and Accurate Alternative to Kernel Methods." In KDD '17: The 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3097983.3098001.


Reports on the topic "Fast kernel methods"

1

Martinsson, P. G., and V. Rokhlin. An Accelerated Kernel-Independent Fast Multipole Method in One Dimension. Fort Belvoir, VA: Defense Technical Information Center, May 2006. http://dx.doi.org/10.21236/ada639971.

2

Gimbutas, Z., and V. Rokhlin. A Generalized Fast Multipole Method for Non-Oscillatory Kernels. Fort Belvoir, VA: Defense Technical Information Center, July 2000. http://dx.doi.org/10.21236/ada640378.

3

Haney, Richard H., Eric Darve, Mohammad P. Ansari, Rohit Pataki, AmirHossein AminFar, and Dale Shires. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method. Fort Belvoir, VA: Defense Technical Information Center, May 2015. http://dx.doi.org/10.21236/ada625090.
