
Journal articles on the topic 'Approximation de Nyström'


Consult the top 50 journal articles for your research on the topic 'Approximation de Nyström.'


1

Ding, Lizhong, Yong Liu, Shizhong Liao, Yu Li, Peng Yang, Yijie Pan, Chao Huang, Ling Shao, and Xin Gao. "Approximate Kernel Selection with Strong Approximate Consistency." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3462–69. http://dx.doi.org/10.1609/aaai.v33i01.33013462.

Abstract:
Kernel selection is fundamental to the generalization performance of kernel-based learning algorithms. Approximate kernel selection is an efficient kernel selection approach that exploits the convergence property of the kernel selection criteria and the computational virtue of kernel matrix approximation. The convergence property is measured by the notion of approximate consistency. For the existing Nyström approximations, whose sampling distributions are independent of the specific learning task at hand, it is difficult to establish the strong approximate consistency. They mainly focus on the quality of the low-rank matrix approximation, rather than the performance of the kernel selection criterion used in conjunction with the approximate matrix. In this paper, we propose a novel Nyström approximate kernel selection algorithm by customizing a criterion-driven adaptive sampling distribution for the Nyström approximation, which adaptively reduces the error between the approximate and accurate criteria. We theoretically derive the strong approximate consistency of the proposed Nyström approximate kernel selection algorithm. Finally, we empirically evaluate the approximate consistency of our algorithm as compared to state-of-the-art methods.
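The task-adaptive sampling is the paper's contribution; as background, here is a minimal NumPy sketch of the plain uniform-sampling Nyström approximation it modifies (the RBF kernel, `gamma`, and the landmark count are illustrative choices, not taken from the paper):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom(X, m, gamma=0.05, rng=None):
    # Plain Nyström: sample m landmark points uniformly at random, then
    # approximate K ≈ C pinv(W) C^T with C = K[:, S] and W = K[S, S].
    rng = np.random.default_rng(rng)
    S = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[S], gamma)
    W = rbf_kernel(X[S], X[S], gamma)
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(0).normal(size=(200, 5))
K = rbf_kernel(X, X)                 # exact 200 x 200 kernel matrix
K_hat = nystrom(X, m=50)             # low-rank approximation from 50 landmarks
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

A criterion-driven variant in the spirit of the abstract would replace the uniform `rng.choice` with a sampling distribution adapted to the kernel selection criterion.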
2

Wang, Ling, Hongqiao Wang, and Guangyuan Fu. "Multi-Nyström Method Based on Multiple Kernel Learning for Large Scale Imbalanced Classification." Computational Intelligence and Neuroscience 2021 (June 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/9911871.

Abstract:
Extensions of kernel methods for class imbalance problems have been extensively studied. Although they work well in coping with nonlinear problems, the high computation and memory costs severely limit their application to real-world imbalanced tasks. The Nyström method is an effective technique to scale kernel methods. However, the standard Nyström method needs to sample a sufficiently large number of landmark points to ensure an accurate approximation, which seriously affects its efficiency. In this study, we propose a multi-Nyström method based on mixtures of Nyström approximations to avoid the explosion of the subkernel matrix, while the optimization of the mixture weights is embedded into the model training process by multiple kernel learning (MKL) algorithms to yield a more accurate low-rank approximation. Moreover, we select subsets of landmark points according to the imbalance distribution to reduce the model’s sensitivity to skewness. We also provide a kernel stability analysis of our method and show that the model solution error is bounded by weighted approximation errors, which can help us improve the learning process. Extensive experiments on several large-scale datasets show that our method can achieve higher classification accuracy and a dramatic speedup of MKL algorithms.
3

Zhang, Kai, and James T. Kwok. "Density-Weighted Nyström Method for Computing Large Kernel Eigensystems." Neural Computation 21, no. 1 (January 2009): 121–46. http://dx.doi.org/10.1162/neco.2009.11-07-651.

Abstract:
The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observation, we extend the Nyström method to a more general, density-weighted version. We show that by introducing the probability density function as a natural weighting scheme, the approximation of the eigensystem can be greatly improved. An efficient algorithm is proposed to enforce such weighting in practice, which has the same complexity as the original Nyström method and hence is notably cheaper than several other alternatives. Experiments on kernel principal component analysis, spectral clustering, and image segmentation demonstrate the encouraging performance of our algorithm.
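For contrast with the density-weighted scheme, here is a sketch of the equal-weight Nyström eigensystem approximation that the paper generalizes (uniform sampling; the scaling follows the standard Nyström extension formulas, and the sampling scheme is illustrative):

```python
import numpy as np

def nystrom_eigensystem(K, m, rng=None):
    # Equal-weight Nyström: solve the small m x m eigenproblem on the
    # sampled landmarks, then extend the eigenvectors to all n points
    # through the sampled columns C.
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    S = rng.choice(n, size=m, replace=False)
    C, W = K[:, S], K[np.ix_(S, S)]
    lam, V = np.linalg.eigh(W)
    lam, V = lam[::-1], V[:, ::-1]        # sort eigenvalues descending
    keep = lam > 1e-10 * lam[0]           # drop numerically null directions
    U = np.sqrt(m / n) * (C @ (V[:, keep] / lam[keep]))
    return (n / m) * lam[keep], U         # approximate eigenpairs of K
```

When all n points are sampled, the formulas reduce to the exact eigendecomposition of K; the density-weighted version reweights the landmark contributions instead of treating them equally.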
4

Díaz de Alba, Patricia, Luisa Fermo, and Giuseppe Rodriguez. "Solution of second kind Fredholm integral equations by means of Gauss and anti-Gauss quadrature rules." Numerische Mathematik 146, no. 4 (November 18, 2020): 699–728. http://dx.doi.org/10.1007/s00211-020-01163-7.

Abstract:
This paper is concerned with the numerical approximation of Fredholm integral equations of the second kind. A Nyström method based on the anti-Gauss quadrature formula is developed and investigated in terms of stability and convergence in appropriate weighted spaces. The Nyström interpolants corresponding to the Gauss and the anti-Gauss quadrature rules are proved to furnish upper and lower bounds for the solution of the equation, under suitable assumptions which are easily verified for a particular weight function. Hence, an error estimate is available, and the accuracy of the solution can be improved by approximating it by an averaged Nyström interpolant. The effectiveness of the proposed approach is illustrated through different numerical tests.
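The anti-Gauss companion rule is the paper's refinement; the underlying Nyström discretization of a second-kind equation u(x) − ∫ k(x,t)u(t)dt = f(x) is simple enough to sketch (the kernel and right-hand side below are illustrative, chosen so the exact solution u(x) = x is known):

```python
import numpy as np

def nystrom_solve(kernel, f, n=12):
    # Nyström method on [0, 1]: replace the integral by an n-point
    # Gauss-Legendre rule and collocate at the quadrature nodes, giving
    # the linear system (I - K_h) u = f with (K_h)_ij = w_j * k(x_i, x_j).
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)                   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    A = np.eye(n) - kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# k(x,t) = x*t with f(x) = 2x/3 has exact solution u(x) = x.
x, u = nystrom_solve(lambda x, t: x * t, lambda x: 2.0 * x / 3.0)
```

Replacing `leggauss` by an anti-Gauss rule and averaging the two Nyström interpolants is the bracketing device the paper analyzes.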
5

Rudi, Alessandro, Leonard Wossnig, Carlo Ciliberto, Andrea Rocchetto, Massimiliano Pontil, and Simone Severini. "Approximating Hamiltonian dynamics with the Nyström method." Quantum 4 (February 20, 2020): 234. http://dx.doi.org/10.22331/q-2020-02-20-234.

Abstract:
Simulating the time-evolution of quantum mechanical systems is BQP-hard and expected to be one of the foremost applications of quantum computers. We consider classical algorithms for the approximation of Hamiltonian dynamics using subsampling methods from randomized numerical linear algebra. We derive a simulation technique whose runtime scales polynomially in the number of qubits and the Frobenius norm of the Hamiltonian. As an immediate application, we show that sample based quantum simulation, a type of evolution where the Hamiltonian is a density matrix, can be efficiently classically simulated under specific structural conditions. Our main technical contribution is a randomized algorithm for approximating Hermitian matrix exponentials. The proof leverages a low-rank, symmetric approximation via the Nyström method. Our results suggest that under strong sampling assumptions there exist classical poly-logarithmic time simulations of quantum computations.
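The key linear-algebra step can be sketched under the abstract's stated assumptions: if H ≈ U diag(lam) U^H with orthonormal columns U (e.g. from a symmetric Nyström approximation), then exp(-itH) acts as the identity off range(U). The helper below is an illustrative reconstruction of that observation, not the paper's algorithm:

```python
import numpy as np

def lowrank_expm(U, lam, t):
    # exp(-i t H) for H = U diag(lam) U^H with U^H U = I: the identity
    # on the orthogonal complement of range(U), and a phase rotation
    # e^{-i t lam_k} along each retained eigenvector.
    n = U.shape[0]
    phase = np.exp(-1j * t * lam) - 1.0
    return np.eye(n, dtype=complex) + (U * phase) @ U.conj().T
```

The cost is O(n²r) for r retained eigenpairs, instead of the O(n³) of a dense matrix exponential.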
6

Trokicić, Aleksandar, and Branimir Todorović. "Constrained spectral clustering via multi–layer graph embeddings on a grassmann manifold." International Journal of Applied Mathematics and Computer Science 29, no. 1 (March 1, 2019): 125–37. http://dx.doi.org/10.2478/amcs-2019-0010.

Abstract:
We present two algorithms in which constrained spectral clustering is implemented as unconstrained spectral clustering on a multi-layer graph, where constraints are represented as graph layers. Existing state-of-the-art solutions have time complexity quadratic in the number of graph vertices, while our technique, based on the Nyström approximation method, attains time and memory complexities that are linear in the number of data points, regardless of the number of constraints. Our algorithms achieve superior or comparable accuracy on real-world data sets compared with the existing state-of-the-art solutions. They also efficiently use both soft and hard constraints, since their time complexity does not depend on the size of the set of constraints.
7

Cai, Difeng, and Panayot S. Vassilevski. "Eigenvalue Problems for Exponential-Type Kernels." Computational Methods in Applied Mathematics 20, no. 1 (January 1, 2020): 61–78. http://dx.doi.org/10.1515/cmam-2018-0186.

Abstract:
We study approximations of eigenvalue problems for integral operators associated with kernel functions of exponential type. We show a convergence rate |λ_k − λ_{k,h}| ≤ C_k h² in the case of lowest-order approximation for both Galerkin and Nyström methods, where h is the mesh size, and λ_k and λ_{k,h} are the exact and approximate k-th largest eigenvalues, respectively. We prove that the two methods are numerically equivalent in the sense that |λ_{k,h}^(G) − λ_{k,h}^(N)| ≤ C h², where λ_{k,h}^(G) and λ_{k,h}^(N) denote the k-th largest eigenvalues computed by the Galerkin and Nyström methods, respectively, and C is an eigenvalue-independent constant. The theoretical results are accompanied by a series of numerical experiments.
8

He, Li, and Hong Zhang. "Kernel K-Means Sampling for Nyström Approximation." IEEE Transactions on Image Processing 27, no. 5 (May 2018): 2108–20. http://dx.doi.org/10.1109/tip.2018.2796860.

9

Wang, Shiyuan, Lujuan Dang, Guobing Qian, and Yunxiang Jiang. "Kernel recursive maximum correntropy with Nyström approximation." Neurocomputing 329 (February 2019): 424–32. http://dx.doi.org/10.1016/j.neucom.2018.10.064.

10

Laguardia, Anna Lucia, and Maria Grazia Russo. "A Nyström Method for 2D Linear Fredholm Integral Equations on Curvilinear Domains." Mathematics 11, no. 23 (December 3, 2023): 4859. http://dx.doi.org/10.3390/math11234859.

Abstract:
This paper is devoted to the numerical treatment of two-dimensional Fredholm integral equations, defined on general curvilinear domains of the plane. A Nyström method, based on a suitable Gauss-like cubature formula recently proposed in the literature, is developed. The convergence, stability and good conditioning of the method are proved in suitable subspaces of continuous functions of Sobolev type. The cubature formula on which the Nyström method is constructed has an error that behaves like the best polynomial approximation of the integrand function. Consequently, it is also shown how the Nyström method inherits this property and, hence, the proposed numerical strategy is fast when the involved known functions are smooth. Some numerical examples illustrate the efficiency of the method, also in comparison with other methods known in the literature.
11

Baker, T. S., J. R. Dormand, and P. J. Prince. "Continuous approximation with embedded Runge-Kutta-Nyström methods." Applied Numerical Mathematics 29, no. 2 (February 1999): 171–88. http://dx.doi.org/10.1016/s0168-9274(98)00065-8.

12

Allouch, C., M. Arrai, H. Bouda, and M. Tahrichi. "Legendre superconvergent degenerate kernel and Nyström methods for nonlinear integral equations." Ukrains’kyi Matematychnyi Zhurnal 75, no. 5 (May 24, 2023): 579–95. http://dx.doi.org/10.37863/umzh.v75i5.7039.

Abstract:
UDC 517.9. We study polynomially based superconvergent collocation methods for the approximation of solutions of nonlinear integral equations. The superconvergent degenerate kernel method is chosen for approximating the solutions of Hammerstein equations, while a superconvergent Nyström method is used for solving Urysohn equations. By applying interpolatory projections based on Legendre polynomials of degree ≤ n, we analyze the superconvergence of these methods and their iterated versions. Numerical results are presented to validate the theoretical results.
13

Didenko, Victor D., and Johan Helsing. "Features of the Nyström Method for the Sherman-Lauricella Equation on Piecewise Smooth Contours." East Asian Journal on Applied Mathematics 1, no. 4 (November 2011): 403–14. http://dx.doi.org/10.4208/eajam.240611.070811a.

Abstract:
The stability of the Nyström method for the Sherman-Lauricella equation on contours with corner points c_j, j = 0, 1, …, m relies on the invertibility of certain operators belonging to an algebra of Toeplitz operators. The operators do not depend on the shape of the contour, but only on the opening angle θ_j of the corresponding corner c_j and on parameters of the approximation method mentioned. They have a complicated structure, and there is no analytic tool to verify their invertibility. To study this problem, the original Nyström method is applied to the Sherman-Lauricella equation on a special model contour that has only one corner point with varying opening angle θ. In the interval (0.1π, 1.9π), it is found that there are 8 values of θ where the invertibility of the operator may fail, so the corresponding original Nyström method on any contour with corner points of such magnitude cannot be stable and requires modification.
14

Zhu, Changming, and Daqi Gao. "Improved multi-kernel classification machine with Nyström approximation technique." Pattern Recognition 48, no. 4 (April 2015): 1490–509. http://dx.doi.org/10.1016/j.patcog.2014.10.029.

15

Lin, Ming, Fei Wang, and Changshui Zhang. "Large-scale eigenvector approximation via Hilbert Space Embedding Nyström." Pattern Recognition 48, no. 5 (May 2015): 1904–12. http://dx.doi.org/10.1016/j.patcog.2014.11.017.

16

Mu Li, Wei Bi, James T. Kwok, and Bao-Liang Lu. "Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD." IEEE Transactions on Neural Networks and Learning Systems 26, no. 1 (January 2015): 152–64. http://dx.doi.org/10.1109/tnnls.2014.2359798.

17

Liu, Guohong, Hong Chen, Xiaoying Sun, and Robert C. Qiu. "Modified MUSIC Algorithm for DOA Estimation With Nyström Approximation." IEEE Sensors Journal 16, no. 12 (June 2016): 4673–74. http://dx.doi.org/10.1109/jsen.2016.2557488.

18

Wang, Zhe, Songcan Chen, and Daqi Gao. "A novel multi-view classifier based on Nyström approximation." Expert Systems with Applications 38, no. 9 (September 2011): 11193–200. http://dx.doi.org/10.1016/j.eswa.2011.02.166.

19

Mezzanotte, Domenico, Donatella Occorsio, and Maria Grazia Russo. "Combining Nyström Methods for a Fast Solution of Fredholm Integral Equations of the Second Kind." Mathematics 9, no. 21 (October 20, 2021): 2652. http://dx.doi.org/10.3390/math9212652.

Abstract:
In this paper, we propose a suitable combination of two different Nyström methods, both using the zeros of the same sequence of Jacobi polynomials, in order to approximate the solution of Fredholm integral equations on [−1,1]. The proposed procedure is cheaper than the Nyström scheme based on using only one of the described methods. Moreover, we can successfully manage functions with possible algebraic singularities at the endpoints and kernels with different pathologies. The error of the method is comparable with that of the best polynomial approximation in suitable spaces of functions, equipped with the weighted uniform norm. The convergence and the stability of the method are proved, and some numerical tests that confirm the theoretical estimates are given.
20

Martinsson, Per-Gunnar, and Joel A. Tropp. "Randomized numerical linear algebra: Foundations and algorithms." Acta Numerica 29 (May 2020): 403–572. http://dx.doi.org/10.1017/s0962492920000021.

Abstract:
This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
21

Kulkarni, Rekha P., and Gobinda Rakshit. "DISCRETE MODIFIED PROJECTION METHODS FOR URYSOHN INTEGRAL EQUATIONS WITH GREEN’S FUNCTION TYPE KERNELS." Mathematical Modelling and Analysis 25, no. 3 (May 13, 2020): 421–40. http://dx.doi.org/10.3846/mma.2020.11093.

Abstract:
In the present paper we consider discrete versions of the modified projection methods for solving a Urysohn integral equation with a kernel of the type of Green’s function. For r ≥ 0, a space of piecewise polynomials of degree ≤ r with respect to a uniform partition is chosen to be the approximating space. We define a discrete orthogonal projection onto this space and replace the Urysohn integral operator by a Nyström approximation. The order of convergence which we obtain for the discrete version indicates the choice of numerical quadrature which preserves the orders of convergence in the continuous modified projection methods. Numerical results are given for a specific example.
22

Pourkamali-Anaraki, Farhad. "Scalable Spectral Clustering With Nyström Approximation: Practical and Theoretical Aspects." IEEE Open Journal of Signal Processing 1 (2020): 242–56. http://dx.doi.org/10.1109/ojsp.2020.3039330.

23

Li, Jinfeng, Weifeng Liu, Yicong Zhou, Jun Yu, Dapeng Tao, and Changsheng Xu. "Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–18. http://dx.doi.org/10.1145/3487194.

Abstract:
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain. Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain. However, these algorithms will be infeasible when only a few labeled data exist in the source domain, and thus the performance decreases significantly. To address this challenge, we propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples. Firstly, DGL introduces the Nyström method to construct a plastic graph that shares similar geometric property with the target domain. Then, DGL flexibly employs the Nyström approximation error to measure the divergence between the plastic graph and source graph to formalize the distribution mismatch from the geometric perspective. Through minimizing the approximation error, DGL learns a domain-invariant geometric graph to bridge the source and target domains. Finally, we integrate the learned domain-invariant graph with semi-supervised learning and further propose an adaptive semi-supervised model to handle cross-domain problems. The results of extensive experiments on popular datasets verify the superiority of DGL, especially when only a few labeled source samples are available.
24

Chang, Lo-Bin, Zhidong Bai, Su-Yun Huang, and Chii-Ruey Hwang. "Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices." Journal of Multivariate Analysis 120 (September 2013): 102–19. http://dx.doi.org/10.1016/j.jmva.2013.05.006.

25

Redner, Oliver. "DISCRETE APPROXIMATION OF NON-COMPACT OPERATORS DESCRIBING CONTINUUM-OF-ALLELES MODELS." Proceedings of the Edinburgh Mathematical Society 47, no. 2 (June 2004): 449–72. http://dx.doi.org/10.1017/s0013091503000476.

Abstract:
We consider the eigenvalue equation for the largest eigenvalue of certain kinds of non-compact linear operators given as the sum of a multiplication and a kernel operator. It is shown that, under moderate conditions, such operators can be approximated arbitrarily well by operators of finite rank, which constitutes a discretization procedure. For this purpose, two standard methods of approximation theory, the Nyström and the Galerkin method, are generalized. The operators considered describe models for mutation and selection of an infinitely large population of individuals that are labelled by real numbers, commonly called continuum-of-alleles models. AMS 2000 Mathematics subject classification: Primary 47A58; 45C05. Secondary 47B34.
26

Kulkarni, Rekha P., and Akshay S. Rane. "ASYMPTOTIC EXPANSIONS FOR APPROXIMATE SOLUTIONS OF HAMMERSTEIN INTEGRAL EQUATIONS WITH GREEN'S FUNCTION TYPE KERNELS." Mathematical Modelling and Analysis 19, no. 1 (February 20, 2014): 127–43. http://dx.doi.org/10.3846/13926292.2014.893457.

Abstract:
We consider approximation of a nonlinear Hammerstein equation with a kernel of the type of Green's function using the Nyström method based on the composite midpoint and the composite modified Simpson rules associated with a uniform partition. We obtain asymptotic expansions for the approximate solution u_n at the node points as well as at the partition points and use Richardson extrapolation to obtain approximate solutions with higher orders of convergence. Numerical results are presented which confirm the theoretical orders of convergence.
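Richardson extrapolation itself is a generic device: combine two approximations of known order p computed with steps h and h/2 to cancel the leading error term. A sketch with the composite midpoint rule standing in for the paper's quadratures (the integral is illustrative):

```python
import numpy as np

def midpoint(f, a, b, n):
    # Composite midpoint rule with n subintervals; error is O(h^2).
    h = (b - a) / n
    x = a + h * (np.arange(n) + 0.5)
    return h * f(x).sum()

def richardson(A_h, A_h2, p):
    # Cancel the leading O(h^p) error term of two approximations
    # computed with steps h and h/2.
    return (2**p * A_h2 - A_h) / (2**p - 1)

exact = np.e - 1.0                        # integral of e^x over [0, 1]
coarse = midpoint(np.exp, 0.0, 1.0, 8)
fine = midpoint(np.exp, 0.0, 1.0, 16)
extrap = richardson(coarse, fine, p=2)    # now fourth-order accurate
```

In the paper the same cancellation is applied to the Nyström solution of the integral equation rather than to a single quadrature value.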
27

Zhu, Changming. "Improved multi-kernel classification machine with Nyström approximation technique and Universum data." Neurocomputing 175 (January 2016): 610–34. http://dx.doi.org/10.1016/j.neucom.2015.10.102.

28

Pourkamali-Anaraki, Farhad, and Stephen Becker. "Improved fixed-rank Nyström approximation via QR decomposition: Practical and theoretical aspects." Neurocomputing 363 (October 2019): 261–72. http://dx.doi.org/10.1016/j.neucom.2019.06.070.

29

Airola, Antti, Tapio Pahikkala, and Tapio Salakoski. "On Learning and Cross-Validation with Decomposed Nyström Approximation of Kernel Matrix." Neural Processing Letters 33, no. 1 (December 1, 2010): 17–30. http://dx.doi.org/10.1007/s11063-010-9159-4.

30

Chen, Kai, Rongchun Li, Yong Dou, Zhengfa Liang, and Qi Lv. "Ranking Support Vector Machine with Kernel Approximation." Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4629534.

Abstract:
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
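Both approximations reduce the kernel model to a linear one by building an explicit feature matrix Z with Z Z^T ≈ K. A sketch of the Nyström variant (the RBF kernel and the landmark choice are illustrative):

```python
import numpy as np

def nystrom_features(X, landmarks, gamma=0.5):
    # Nyström feature map Z = C W^{-1/2}, so Z @ Z.T approximates the
    # kernel matrix; a linear (Rank)SVM trained on Z then approximates
    # the kernel machine without ever forming K.
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    C = rbf(X, landmarks)
    lam, V = np.linalg.eigh(rbf(landmarks, landmarks))
    lam = np.maximum(lam, 1e-12)          # guard tiny/negative eigenvalues
    return C @ (V / np.sqrt(lam)) @ V.T   # C @ W^{-1/2}
```

Random Fourier features play the same role with cosines of random projections in place of landmark columns.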
31

GISBRECHT, ANDREJ, BASSAM MOKBEL, FRANK-MICHAEL SCHLEIF, XIBIN ZHU, and BARBARA HAMMER. "LINEAR TIME RELATIONAL PROTOTYPE BASED LEARNING." International Journal of Neural Systems 22, no. 05 (September 26, 2012): 1250021. http://dx.doi.org/10.1142/s0129065712500219.

Abstract:
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium-sized data sets. The contribution of this article is twofold: on the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
32

Wang, Yongguang, Shuzhen Yao, and Tian Xu. "Incremental Kernel Principal Components Subspace Inference With Nyström Approximation for Bayesian Deep Learning." IEEE Access 9 (2021): 36241–51. http://dx.doi.org/10.1109/access.2021.3062747.

33

Trokicic, Aleksandar, and Branimir Todorovic. "On expected error of randomized Nyström kernel regression." Filomat 34, no. 11 (2020): 3871–84. http://dx.doi.org/10.2298/fil2011871t.

Abstract:
Kernel methods are a class of machine learning algorithms which learn and discover patterns in a high (possibly infinite) dimensional feature space obtained by an often nonlinear, possibly infinite mapping of an input space. A major problem with kernel methods is their time complexity. For a data set with n input points, the time complexity of a kernel method is O(n³), which is intractable for a large data set. A method based on random Nyström features is an approximation method that is able to reduce the time complexity to O(np² + p³), where p is the number of randomly selected input data points. The O(p³) term comes from the fact that a spectral decomposition needs to be performed on a p × p Gram matrix, and if p is a large number even an approximate algorithm is time-consuming. In this paper we apply the randomized SVD method instead of the spectral decomposition and further reduce the time complexity. The input parameters of the randomized SVD algorithm are a p × p Gram matrix and a number m < p. In this case the time complexity is O(nm² + p²m + m³), and linear regression is performed on m-dimensional random features. We prove that the error of a predictor learned via this method is almost the same in expectation as the error of a kernel predictor. Additionally, we empirically show that this predictor is better than one that uses only the Nyström method.
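One standard way to realize the randomized SVD step described above is a Gaussian range sketch with oversampling, in the style of Halko, Martinsson, and Tropp (parameter names below are illustrative):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    # Sketch the range of A with a Gaussian test matrix, orthonormalize,
    # then take the exact SVD of the small projected matrix Q^T A.
    rng = np.random.default_rng(rng)
    Omega = rng.normal(size=(A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal range basis
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]
```

Applied to the p × p Gram matrix, this replaces the O(p³) spectral decomposition with work dominated by the p × (k + oversample) sketch.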
34

Zhao, Lin, and Alex Barnett. "Robust and Efficient Solution of the Drum Problem via Nyström Approximation of the Fredholm Determinant." SIAM Journal on Numerical Analysis 53, no. 4 (January 2015): 1984–2007. http://dx.doi.org/10.1137/140973992.

35

Wang, Jianzhong. "Mathematical analysis on out-of-sample extensions." International Journal of Wavelets, Multiresolution and Information Processing 16, no. 05 (September 2018): 1850042. http://dx.doi.org/10.1142/s021969131850042x.

Abstract:
Let [Formula: see text] be a data set in [Formula: see text], where [Formula: see text] is the training set and [Formula: see text] is the test one. Many unsupervised learning algorithms based on kernel methods have been developed to provide dimensionality reduction (DR) embedding for a given training set [Formula: see text] ([Formula: see text]) that maps the high-dimensional data [Formula: see text] to its low-dimensional feature representation [Formula: see text]. However, these algorithms do not straightforwardly produce DR of the test set [Formula: see text]. An out-of-sample extension method provides DR of [Formula: see text] using an extension of the existent embedding [Formula: see text], instead of re-computing the DR embedding for the whole set [Formula: see text]. Among various out-of-sample DR extension methods, those based on Nyström approximation are very attractive. Many papers have developed such out-of-sample extension algorithms and shown their validity by numerical experiments. However, the mathematical theory for the DR extension still needs further consideration. Utilizing the reproducing kernel Hilbert space (RKHS) theory, this paper develops a preliminary mathematical analysis on the out-of-sample DR extension operators. It treats an out-of-sample DR extension operator as an extension of the identity on the RKHS defined on [Formula: see text]. Then the Nyström-type DR extension turns out to be an orthogonal projection. In the paper, we also present the conditions for the exact DR extension and give the estimate for the error of the extension.
36

García García, Juan F., and Salvador E. Venegas-Andraca. "Region-based approach for the spectral clustering Nyström approximation with an application to burn depth assessment." Machine Vision and Applications 26, no. 2-3 (March 3, 2015): 353–68. http://dx.doi.org/10.1007/s00138-015-0664-3.

37

Du, Ke-Lin, M. N. S. Swamy, Zhang-Quan Wang, and Wai Ho Mow. "Matrix Factorization Techniques in Machine Learning, Signal Processing, and Statistics." Mathematics 11, no. 12 (June 12, 2023): 2674. http://dx.doi.org/10.3390/math11122674.

Abstract:
Compressed sensing is an alternative to Shannon/Nyquist sampling for acquiring sparse or compressible signals. Sparse coding represents a signal as a sparse linear combination of atoms, which are elementary signals derived from a predefined dictionary. Compressed sensing, sparse approximation, and dictionary learning are topics similar to sparse coding. Matrix completion is the process of recovering a data matrix from a subset of its entries, and it extends the principles of compressed sensing and sparse approximation. The nonnegative matrix factorization is a low-rank matrix factorization technique for nonnegative data. All of these low-rank matrix factorization techniques are unsupervised learning techniques, and can be used for data analysis tasks, such as dimension reduction, feature extraction, blind source separation, data compression, and knowledge discovery. In this paper, we survey a few emerging matrix factorization techniques that are receiving wide attention in machine learning, signal processing, and statistics. The treated topics are compressed sensing, dictionary learning, sparse representation, matrix completion and matrix recovery, nonnegative matrix factorization, the Nyström method, and CUR matrix decomposition in the machine learning framework. Some related topics, such as matrix factorization using metaheuristics or neurodynamics, are also introduced. A few topics are suggested for future investigation in this article.
APA, Harvard, Vancouver, ISO, and other styles
38

Ajinuhi, Joel Olusegun, Umaru Mohammed, Abdullah Idris Enagi, and Jimoh Omananyi Razaq. "A Novel Seventh-Order Implicit Block Hybrid Nyström-Type Method for Second- Order Boundary Value Problems." International Journal of Research and Scientific Innovation X, no. XI (2023): 26–44. http://dx.doi.org/10.51244/ijrsi.2023.1011003.

Full text
Abstract:
This paper introduces a novel approach for solving second-order nonlinear differential equations, with a primary focus on the Bratu problem, which holds significant importance in diverse scientific areas. Existing methods for solving this problem have limitations, prompting the development of the Block Hybrid Nyström-Type Method (BHNTM). BHNTM utilizes the Bhaskara points, derived using the Bhaskara cosine approximation formula. The method seeks a numerical solution in the form of a power-series polynomial, efficiently determining the coefficients. The paper discusses BHNTM's convergence, zero-stability, and consistency properties, substantiated through numerical experiments that highlight its accuracy as a solver for Bratu-type equations. This research contributes to the field of numerical analysis by offering an alternative, effective approach to complex second-order nonlinear differential equations, addressing critical challenges in various scientific domains.
APA, Harvard, Vancouver, ISO, and other styles
39

Hu, Jurong, Evans Baidoo, Lei Zhan, and Ying Tian. "Computationally Efficient Compressed Sensing-Based Method via FG Nyström in Bistatic MIMO Radar with Array Gain-Phase Error Effect." International Journal of Antennas and Propagation 2020 (March 10, 2020): 1–12. http://dx.doi.org/10.1155/2020/1586353.

Full text
Abstract:
In this paper, a robust angle estimator for uncorrelated targets that employs a compressed sensing (CS) scheme following a fast greedy (FG) computation is proposed to achieve improved computational efficiency and performance for the bistatic MIMO radar with unknown gain-phase errors. The algorithm avoids processing the entire received signal by first compiling a low-rank approximation through a greedy Nyström approach. Then, the approximated signal is transformed into a sparse representation, where the sparsity of the targets is exploited in the spatial domain. Finally, a CS method, Simultaneous Orthogonal Matching Pursuit with an inherent gradient descent method, is utilized to reconstruct the signal and estimate the angles and the unknown gain-phase errors. The proposed algorithm, aside from achieving closed-form resolution for automatically paired angle estimation, offers attractive computational competitiveness, specifically in large-array scenarios. Additionally, analyses of the computational complexity and the Cramér–Rao bounds for angle estimation are derived theoretically. Numerical experiments demonstrate the improvement and effectiveness of the proposed method against existing methods.
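The Orthogonal Matching Pursuit reconstruction step mentioned above follows the standard greedy template. Below is a minimal single-measurement OMP sketch in NumPy (the paper's simultaneous variant additionally interleaves a gradient-descent update for the gain-phase errors, which is omitted here; the names are ours):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit the selected atoms by least
    squares and update the residual."""
    residual, support = y.astype(float), []
    x_s = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

# Demo: with an orthonormal dictionary, OMP recovers a 3-sparse signal
# exactly in 3 iterations.
y = np.zeros(10)
y[2], y[5], y[7] = 3.0, -2.0, 1.0
x_hat = omp(np.eye(10), y, 3)
```

Because the residual is kept orthogonal to the selected atoms after each least-squares re-fit, no atom is picked twice.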
APA, Harvard, Vancouver, ISO, and other styles
40

Zhao, Yang, Yuan Yuan, and Qi Wang. "Fast Spectral Clustering for Unsupervised Hyperspectral Image Classification." Remote Sensing 11, no. 4 (February 15, 2019): 399. http://dx.doi.org/10.3390/rs11040399.

Full text
Abstract:
Hyperspectral image classification is a challenging and significant domain in the field of remote sensing with numerous applications in agriculture, environmental science, mineralogy, and surveillance. In the past years, a growing number of advanced hyperspectral remote sensing image classification techniques based on manifold learning, sparse representation and deep learning have been proposed and reported a good performance in accuracy and efficiency on state-of-the-art public datasets. However, most existing methods still face challenges in dealing with large-scale hyperspectral image datasets due to their high computational complexity. In this work, we propose an improved spectral clustering method for large-scale hyperspectral image classification without any prior information. The proposed algorithm introduces two efficient approximation techniques based on Nyström extension and anchor-based graph to construct the affinity matrix. We also propose an effective solution to solve the eigenvalue decomposition problem by multiplicative update optimization. Experiments on both the synthetic datasets and the hyperspectral image datasets were conducted to demonstrate the efficiency and effectiveness of the proposed algorithm.
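The Nyström-extension idea used above for the affinity matrix can be sketched in a few lines: eigenvectors computed on a sampled block A are extended to the remaining points through the cross-affinity block B. This is a bare one-shot version without the orthogonalization steps used in practice, and the names are ours, not the paper's.

```python
import numpy as np

def nystrom_eigvecs(A, B, k):
    """Nystrom extension: given the affinity block A (m x m) among sampled
    points and the cross block B (m x (n-m)) to the remaining points,
    return approximate top-k eigenvectors of the full affinity matrix."""
    vals, vecs = np.linalg.eigh(A)                 # ascending eigenvalues
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    U = np.vstack([vecs, B.T @ vecs / vals])       # extend to unsampled points
    return U, vals

# Demo on an exactly rank-3 PSD "affinity" matrix, where the extension
# recovers the matrix exactly from 8 sampled rows.
rng = np.random.default_rng(1)
Z = rng.standard_normal((40, 3))
K = Z @ Z.T
U, vals = nystrom_eigvecs(K[:8, :8], K[:8, 8:], 3)
```

The extended eigenvectors can then feed a k-means step exactly as in standard spectral clustering, at a cost governed by the sample size m rather than n.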
APA, Harvard, Vancouver, ISO, and other styles
41

Rubinstein, Yanir A., Gang Tian, and Kewei Zhang. "Basis divisors and balanced metrics." Journal für die reine und angewandte Mathematik (Crelles Journal) 2021, no. 778 (April 29, 2021): 171–218. http://dx.doi.org/10.1515/crelle-2021-0017.

Full text
Abstract:
Using log canonical thresholds and basis divisors, Fujita–Odaka introduced purely algebro-geometric invariants δ_m whose limit in m is now known to characterize uniform K-stability on a Fano variety. As shown by Blum–Jonsson, this carries over to a general polarization, and together with work of Berman, Boucksom, and Jonsson, it is now known that the limit of these δ_m-invariants characterizes uniform Ding stability. A basic question since Fujita–Odaka's work has been to find an analytic interpretation of these invariants. We show that each δ_m is the coercivity threshold of a quantized Ding functional on the m-th Bergman space and thus characterizes the existence of balanced metrics. This approach has a number of applications. The most basic one is that it provides an alternative way to compute these invariants, which is new even for ℙ^n. Second, it allows us to introduce algebraically defined invariants that characterize the existence of Kähler–Ricci solitons (and the more general g-solitons of Berman–Witt Nyström), as well as coupled versions thereof. Third, it leads to approximation results involving balanced metrics in the presence of automorphisms that extend some results of Donaldson.
APA, Harvard, Vancouver, ISO, and other styles
42

Bounaya, Mohammed Charif, Samir Lemita, Sami Touati, and Mohamed Zine Aissaoui. "Analytical and numerical approach for a nonlinear Volterra-Fredholm integro-differential equation." Boletim da Sociedade Paranaense de Matemática 41 (December 21, 2022): 1–14. http://dx.doi.org/10.5269/bspm.52191.

Full text
Abstract:
An approach for nonlinear Volterra–Fredholm integro-differential equations based on appropriate fixed-point theorems for existence and uniqueness is presented. The approximation of the solution is performed using the Nyström method in conjunction with a successive-approximations algorithm. Finally, we give a numerical example in order to verify the effectiveness of the proposed method with respect to the analytical study.
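As a minimal illustration of the Nyström discretization underlying such methods (shown here for a linear Fredholm equation of the second kind rather than the paper's nonlinear Volterra–Fredholm setting; the names and test problem are ours), a quadrature rule turns the integral equation into a linear system:

```python
import numpy as np

def nystrom_fredholm(kernel, f, a, b, n):
    """Nystrom method for the linear Fredholm equation of the second kind
    u(x) = f(x) + integral_a^b k(x, t) u(t) dt: replace the integral by a
    trapezoidal rule on n nodes and solve the resulting linear system."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] = w[-1] = w[0] / 2                          # trapezoidal weights
    A = np.eye(n) - kernel(x[:, None], x[None, :]) * w
    return x, np.linalg.solve(A, f(x))

# Test problem with k(x, t) = x*t and f(x) = 2x/3, whose exact solution
# is u(x) = x.
x, u = nystrom_fredholm(lambda x, t: x * t, lambda x: 2 * x / 3, 0.0, 1.0, 201)
```

For a nonlinear equation, the same discretized system would be solved iteratively, e.g. by the successive-approximations loop the paper pairs with the Nyström rule.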
APA, Harvard, Vancouver, ISO, and other styles
43

Adams, M., J. Finden, P. Phoncharon, and P. H. Muir. "Superconvergent interpolants for Gaussian collocation solutions of mixed order BVODE systems." AIMS Mathematics 7, no. 4 (2022): 5634–61. http://dx.doi.org/10.3934/math.2022312.

Full text
Abstract:
The high-quality COLSYS/COLNEW collocation software package is widely used for the numerical solution of boundary value ODEs (BVODEs), often through interfaces to computing environments such as Scilab, R, and Python. The continuous collocation solution returned by the code is much more accurate at a set of mesh points that partition the problem domain than it is elsewhere; the mesh point values are said to be superconvergent. In order to improve the accuracy of the continuous solution approximation at non-mesh points, when the BVODE is expressed in first order system form, an approach based on continuous Runge-Kutta (CRK) methods has been used to obtain a superconvergent interpolant (SCI) across the problem domain. Based on this approach, recent work has seen the development of a new, more efficient version of COLSYS/COLNEW that returns an error controlled SCI. However, most systems of BVODEs include higher derivatives, and a feature of COLSYS/COLNEW is that it can directly treat such mixed order BVODE systems, resulting in improved efficiency, continuity of the approximate solution, and user convenience. In this paper we generalize the approach mentioned above for first order systems to obtain SCIs for collocation solutions of mixed order BVODE systems. The main contribution of this paper is the derivation of generalizations of continuous Runge-Kutta-Nyström methods that form the basis for SCIs for this more general problem class. We provide numerical results that (i) show that the SCIs are much more accurate than the collocation solutions at non-mesh points, (ii) verify the order of accuracy of these SCIs, and (iii) show that the cost of utilizing the SCIs is a small fraction of the cost of computing the collocation solution upon which they are based.
APA, Harvard, Vancouver, ISO, and other styles
44

Vorozhtsov, E. V., and S. P. Kiselev. "Explicit higher-order schemes for molecular dynamics problems." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (May 12, 2021): 87–108. http://dx.doi.org/10.26089/nummet.v22r207.

Full text
Abstract:
The Runge–Kutta–Nyström (RKN) explicit symplectic difference schemes are considered with a number of stages from 1 to 5 for the numerical solution of molecular dynamics problems described by systems with separable Hamiltonians. For the numbers of stages 2 and 3, the parameters of the RKN schemes are obtained using the Gröbner basis technique. For the numbers of stages 4 and 5, new schemes were found using the Nelder–Mead numerical optimization method. In particular, four new schemes are obtained for the number of stages 4. For the number of stages 5, three new schemes are obtained in addition to the four schemes that are well known in the literature. For each specific number of stages, a scheme is found that is the best in terms of the minimum of the leading term of the approximation error. Verification of the schemes is carried out on a problem that has an exact solution. It is shown that the symplectic five-stage RKN scheme provides a more accurate conservation of the total energy balance of the particle system than schemes of lower orders of accuracy. The stability studies of the schemes were performed using the Mathematica software package.
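The simplest member of the symplectic family discussed above is the one-stage scheme, equivalent to velocity Verlet, for a separable Hamiltonian H(q, p) = p²/2 + V(q). A bare sketch for a unit-mass particle (ours, not one of the paper's higher-stage schemes):

```python
def verlet(force, q, p, dt, steps):
    """One-stage symplectic integration (velocity Verlet) for a separable
    Hamiltonian H(q, p) = p**2/2 + V(q) with unit mass:
    half kick, drift, half kick."""
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)   # half kick
        q = q + dt * p                # drift
        p = p + 0.5 * dt * force(q)   # half kick
    return q, p

# Harmonic oscillator, V(q) = q**2/2: the energy stays O(dt**2)-close
# to its initial value 0.5 over the whole trajectory.
q, p = verlet(lambda q: -q, 1.0, 0.0, 0.01, 1000)
energy = 0.5 * (q * q + p * p)
```

The bounded long-time energy drift seen here is the property that the paper's higher-stage RKN schemes sharpen to higher order.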
APA, Harvard, Vancouver, ISO, and other styles
45

Ramos, Higinio, Mufutau Ajani Rufai, and Bruno Carpentieri. "A Pair of Optimized Nyström Methods with Symmetric Hybrid Points for the Numerical Solution of Second-Order Singular Boundary Value Problems." Symmetry 15, no. 9 (September 7, 2023): 1720. http://dx.doi.org/10.3390/sym15091720.

Full text
Abstract:
This paper introduces an efficient approach for solving Lane–Emden–Fowler problems. Our method utilizes two Nyström schemes to perform the integration. To overcome the singularity at the left end of the interval, we combine an optimized scheme of Nyström type with a set of Nyström formulas that are used in the first subinterval. The optimized technique is obtained after imposing the vanishing of some of the local truncation errors, which results in a set of symmetric hybrid points. By solving an algebraic system of equations, our proposed approach generates simultaneous approximations at all grid points, resulting in a highly effective technique that outperforms several existing numerical methods in the literature. To assess the efficiency and accuracy of our approach, we perform some numerical tests on diverse real-world problems, including singular boundary value problems (SBVPs) from chemical kinetics.
APA, Harvard, Vancouver, ISO, and other styles
46

Ray, Archan, Nicholas Monath, Andrew McCallum, and Cameron Musco. "Sublinear Time Approximation of Text Similarity Matrices." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8072–80. http://dx.doi.org/10.1609/aaai.v36i7.20779.

Full text
Abstract:
We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing a similarity matrix for n data points requires Omega(n^2) similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity, often by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. Significant work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nystrom method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix, producing a rank-s approximation with just O(ns) similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in tasks of document classification, sentence similarity, and cross-document coreference.
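The CUR variant mentioned in the abstract, which also applies to indefinite similarity matrices, can be sketched directly. This is a plain pseudoinverse-based CUR, not the authors' tuned variant, and the names are ours.

```python
import numpy as np

def cur(S, rows, cols):
    """CUR approximation of a (possibly indefinite) similarity matrix S
    from a few exactly computed rows and columns: S ~ C @ U @ R, with
    U the pseudoinverse of the row/column intersection block."""
    C, R = S[:, cols], S[rows, :]
    U = np.linalg.pinv(S[np.ix_(rows, cols)])
    return C @ U @ R

# Demo on a symmetric, indefinite, exactly rank-4 similarity matrix:
# when the intersection block captures the full rank, CUR is exact.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 4))
S = (A * np.array([1.0, 1.0, -1.0, -1.0])) @ A.T
S_hat = cur(S, np.arange(8), np.arange(8))
```

Only the sampled rows and columns of S need to be computed, which is the source of the sublinear similarity-computation count discussed above.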
APA, Harvard, Vancouver, ISO, and other styles
47

Patil, Shailaja, and Mukesh Zaveri. "Localization in Sensor Network With Nystrom Approximation." International Journal of Wireless & Mobile Networks 3, no. 5 (October 30, 2011): 37–48. http://dx.doi.org/10.5121/ijwmn.2011.3504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Zhe, Wenbo Jie, and Daqi Gao. "A novel multiple Nyström-approximating kernel discriminant analysis." Neurocomputing 119 (November 2013): 385–98. http://dx.doi.org/10.1016/j.neucom.2013.03.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Xiong, Yunyang, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. "Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14138–48. http://dx.doi.org/10.1609/aaai.v35i16.17664.

Full text
Abstract:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences - a topic being actively studied in the community. To address this limitation, we propose Nyströmformer - a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at https://github.com/mlpen/Nystromformer.
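The core approximation in Nyströmformer replaces the n×n softmax attention matrix with a product of three thin softmax matrices built from landmark queries and keys. Below is a bare NumPy sketch of that idea, using an exact pseudoinverse and simple segment-mean landmarks (the paper instead uses an iterative pseudoinverse approximation and additional refinements):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m):
    """Nystrom-style approximation of softmax(Q K^T / sqrt(d)) V via three
    thin softmax factors built from m landmark queries/keys (segment
    means), giving O(n*m) instead of O(n^2) attention cost."""
    n, d = Q.shape
    s = np.sqrt(d)
    Q_l = Q.reshape(m, n // m, d).mean(axis=1)   # landmark queries
    K_l = K.reshape(m, n // m, d).mean(axis=1)   # landmark keys
    F = softmax(Q @ K_l.T / s)                   # n x m
    A = softmax(Q_l @ K_l.T / s)                 # m x m
    B = softmax(Q_l @ K.T / s)                   # m x n
    return F @ np.linalg.pinv(A) @ (B @ V)

rng = np.random.default_rng(3)
Q, K, V = rng.standard_normal((3, 16, 8))
out = nystrom_attention(Q, K, V, m=4)
```

Because only n×m and m×m softmax matrices are ever materialized, memory and time scale linearly in the sequence length for fixed m.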
APA, Harvard, Vancouver, ISO, and other styles
50

Xia, Jianlin. "Making the Nyström Method Highly Accurate for Low-Rank Approximations." SIAM Journal on Scientific Computing 46, no. 2 (March 29, 2024): A1076—A1101. http://dx.doi.org/10.1137/23m1585039.

Full text
APA, Harvard, Vancouver, ISO, and other styles