Journal articles on the topic "Approximation de Nyström"

Consult the top 50 journal articles on the topic "Approximation de Nyström".

1

Ding, Lizhong, Yong Liu, Shizhong Liao, Yu Li, Peng Yang, Yijie Pan, Chao Huang, Ling Shao, and Xin Gao. "Approximate Kernel Selection with Strong Approximate Consistency". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3462–69. http://dx.doi.org/10.1609/aaai.v33i01.33013462.

Abstract:
Kernel selection is fundamental to the generalization performance of kernel-based learning algorithms. Approximate kernel selection is an efficient kernel selection approach that exploits the convergence property of the kernel selection criteria and the computational virtue of kernel matrix approximation. The convergence property is measured by the notion of approximate consistency. For the existing Nyström approximations, whose sampling distributions are independent of the specific learning task at hand, it is difficult to establish the strong approximate consistency. They mainly focus on the quality of the low-rank matrix approximation, rather than the performance of the kernel selection criterion used in conjunction with the approximate matrix. In this paper, we propose a novel Nyström approximate kernel selection algorithm by customizing a criterion-driven adaptive sampling distribution for the Nyström approximation, which adaptively reduces the error between the approximate and accurate criteria. We theoretically derive the strong approximate consistency of the proposed Nyström approximate kernel selection algorithm. Finally, we empirically evaluate the approximate consistency of our algorithm as compared to state-of-the-art methods.
2

Wang, Ling, Hongqiao Wang, and Guangyuan Fu. "Multi-Nyström Method Based on Multiple Kernel Learning for Large Scale Imbalanced Classification". Computational Intelligence and Neuroscience 2021 (June 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/9911871.

Abstract:
Extensions of kernel methods for the class imbalance problems have been extensively studied. Although they work well in coping with nonlinear problems, the high computation and memory costs severely limit their application to real-world imbalanced tasks. The Nyström method is an effective technique to scale kernel methods. However, the standard Nyström method needs to sample a sufficiently large number of landmark points to ensure an accurate approximation, which seriously affects its efficiency. In this study, we propose a multi-Nyström method based on mixtures of Nyström approximations to avoid the explosion of the subkernel matrix, whereas the optimization of the mixture weights is embedded into the model training process by multiple kernel learning (MKL) algorithms to yield a more accurate low-rank approximation. Moreover, we select subsets of landmark points according to the imbalance distribution to reduce the model's sensitivity to skewness. We also provide a kernel stability analysis of our method and show that the model solution error is bounded by weighted approximate errors, which can help us improve the learning process. Extensive experiments on several large scale datasets show that our method can achieve a higher classification accuracy and a dramatic speedup of MKL algorithms.
3

Zhang, Kai, and James T. Kwok. "Density-Weighted Nyström Method for Computing Large Kernel Eigensystems". Neural Computation 21, no. 1 (January 2009): 121–46. http://dx.doi.org/10.1162/neco.2009.11-07-651.

Abstract:
The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observation, we extend the Nyström method to a more general, density-weighted version. We show that by introducing the probability density function as a natural weighting scheme, the approximation of the eigensystem can be greatly improved. An efficient algorithm is proposed to enforce such weighting in practice, which has the same complexity as the original Nyström method and hence is notably cheaper than several other alternatives. Experiments on kernel principal component analysis, spectral clustering, and image segmentation demonstrate the encouraging performance of our algorithm.
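For context alongside this abstract, the baseline being generalized here is the standard, equally weighted Nyström approximation: sample m landmark points uniformly and reconstruct the kernel matrix from the sampled blocks. The sketch below is illustrative only; the RBF kernel, the gamma value, and the sample size are assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=1.0, seed=0):
    """Standard (uniformly sampled) Nyström approximation K ~= C W^+ C^T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m block: all points vs. landmarks
    W = C[idx]                         # m x m block: landmarks vs. landmarks
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).normal(size=(200, 3))
K = rbf_kernel(X, X)
K_hat = nystrom(X, m=50)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Because K - K_hat is positive semidefinite for this construction, the relative error is always below 1; how small it gets depends on the spectral decay of K and on which landmarks are drawn, which is exactly the sensitivity the density-weighted variant targets.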
4

Díaz de Alba, Patricia, Luisa Fermo, and Giuseppe Rodriguez. "Solution of second kind Fredholm integral equations by means of Gauss and anti-Gauss quadrature rules". Numerische Mathematik 146, no. 4 (November 18, 2020): 699–728. http://dx.doi.org/10.1007/s00211-020-01163-7.

Abstract:
This paper is concerned with the numerical approximation of Fredholm integral equations of the second kind. A Nyström method based on the anti-Gauss quadrature formula is developed and investigated in terms of stability and convergence in appropriate weighted spaces. The Nyström interpolants corresponding to the Gauss and the anti-Gauss quadrature rules are proved to furnish upper and lower bounds for the solution of the equation, under suitable assumptions which are easily verified for a particular weight function. Hence, an error estimate is available, and the accuracy of the solution can be improved by approximating it by an averaged Nyström interpolant. The effectiveness of the proposed approach is illustrated through different numerical tests.
5

Rudi, Alessandro, Leonard Wossnig, Carlo Ciliberto, Andrea Rocchetto, Massimiliano Pontil, and Simone Severini. "Approximating Hamiltonian dynamics with the Nyström method". Quantum 4 (February 20, 2020): 234. http://dx.doi.org/10.22331/q-2020-02-20-234.

Abstract:
Simulating the time-evolution of quantum mechanical systems is BQP-hard and expected to be one of the foremost applications of quantum computers. We consider classical algorithms for the approximation of Hamiltonian dynamics using subsampling methods from randomized numerical linear algebra. We derive a simulation technique whose runtime scales polynomially in the number of qubits and the Frobenius norm of the Hamiltonian. As an immediate application, we show that sample based quantum simulation, a type of evolution where the Hamiltonian is a density matrix, can be efficiently classically simulated under specific structural conditions. Our main technical contribution is a randomized algorithm for approximating Hermitian matrix exponentials. The proof leverages a low-rank, symmetric approximation via the Nyström method. Our results suggest that under strong sampling assumptions there exist classical poly-logarithmic time simulations of quantum computations.
6

Trokicić, Aleksandar, and Branimir Todorović. "Constrained spectral clustering via multi-layer graph embeddings on a Grassmann manifold". International Journal of Applied Mathematics and Computer Science 29, no. 1 (March 1, 2019): 125–37. http://dx.doi.org/10.2478/amcs-2019-0010.

Abstract:
We present two algorithms in which constrained spectral clustering is implemented as unconstrained spectral clustering on a multi-layer graph where constraints are represented as graph layers. By using the Nyström approximation in one of the algorithms, we obtain time and memory complexities which are linear in the number of data points regardless of the number of constraints. Our algorithms achieve superior or comparable accuracy on real-world data sets, compared with the existing state-of-the-art solutions. However, the complexity of the existing algorithms is quadratic in the number of vertices, while our technique, based on the Nyström approximation method, has linear time complexity. The proposed algorithms efficiently use both soft and hard constraints, since the time complexity of the algorithms does not depend on the size of the set of constraints.
7

Cai, Difeng, and Panayot S. Vassilevski. "Eigenvalue Problems for Exponential-Type Kernels". Computational Methods in Applied Mathematics 20, no. 1 (January 1, 2020): 61–78. http://dx.doi.org/10.1515/cmam-2018-0186.

Abstract:
We study approximations of eigenvalue problems for integral operators associated with kernel functions of exponential type. We show the convergence rate $|\lambda_k-\lambda_{k,h}|\le C_k h^2$ in the case of lowest-order approximation for both Galerkin and Nyström methods, where $h$ is the mesh size, and $\lambda_k$ and $\lambda_{k,h}$ are the exact and approximate $k$th largest eigenvalues, respectively. We prove that the two methods are numerically equivalent in the sense that $|\lambda^{(G)}_{k,h}-\lambda^{(N)}_{k,h}|\le Ch^2$, where $\lambda^{(G)}_{k,h}$ and $\lambda^{(N)}_{k,h}$ denote the $k$th largest eigenvalues computed by the Galerkin and Nyström methods, respectively, and $C$ is an eigenvalue-independent constant. The theoretical results are accompanied by a series of numerical experiments.
8

He, Li, and Hong Zhang. "Kernel K-Means Sampling for Nyström Approximation". IEEE Transactions on Image Processing 27, no. 5 (May 2018): 2108–20. http://dx.doi.org/10.1109/tip.2018.2796860.

9

Wang, Shiyuan, Lujuan Dang, Guobing Qian, and Yunxiang Jiang. "Kernel recursive maximum correntropy with Nyström approximation". Neurocomputing 329 (February 2019): 424–32. http://dx.doi.org/10.1016/j.neucom.2018.10.064.

10

Laguardia, Anna Lucia, and Maria Grazia Russo. "A Nyström Method for 2D Linear Fredholm Integral Equations on Curvilinear Domains". Mathematics 11, no. 23 (December 3, 2023): 4859. http://dx.doi.org/10.3390/math11234859.

Abstract:
This paper is devoted to the numerical treatment of two-dimensional Fredholm integral equations, defined on general curvilinear domains of the plane. A Nyström method, based on a suitable Gauss-like cubature formula recently proposed in the literature, is developed. The convergence, stability and good conditioning of the method are proved in suitable subspaces of continuous functions of Sobolev type. The cubature formula, on which the Nyström method is constructed, has an error that behaves like the best polynomial approximation of the integrand function. Consequently, it is also shown how the Nyström method inherits this property and, hence, the proposed numerical strategy is fast when the involved known functions are smooth. Some numerical examples illustrate the efficiency of the method, also in comparison with other methods known in the literature.
11

Baker, T. S., J. R. Dormand, and P. J. Prince. "Continuous approximation with embedded Runge-Kutta-Nyström methods". Applied Numerical Mathematics 29, no. 2 (February 1999): 171–88. http://dx.doi.org/10.1016/s0168-9274(98)00065-8.

12

Allouch, C., M. Arrai, H. Bouda, and M. Tahrichi. "Legendre superconvergent degenerate kernel and Nyström methods for nonlinear integral equations". Ukrains’kyi Matematychnyi Zhurnal 75, no. 5 (May 24, 2023): 579–95. http://dx.doi.org/10.37863/umzh.v75i5.7039.

Abstract:
UDC 517.9 We study polynomially based superconvergent collocation methods for the approximation of solutions of nonlinear integral equations. The superconvergent degenerate kernel method is chosen for approximating the solutions of Hammerstein equations, while a superconvergent Nyström method is used for solving Urysohn equations. By applying interpolatory projections based on Legendre polynomials of degree ≤ n, we analyze the superconvergence of these methods and their iterated versions. Numerical results are presented to validate the theoretical results.
13

Didenko, Victor D., and Johan Heising. "Features of the Nyström Method for the Sherman-Lauricella Equation on Piecewise Smooth Contours". East Asian Journal on Applied Mathematics 1, no. 4 (November 2011): 403–14. http://dx.doi.org/10.4208/eajam.240611.070811a.

Abstract:
The stability of the Nyström method for the Sherman-Lauricella equation on contours with corner points c_j, j = 0, 1, …, m relies on the invertibility of certain operators belonging to an algebra of Toeplitz operators. The operators do not depend on the shape of the contour, but on the opening angle θ_j of the corresponding corner c_j and on parameters of the approximation method mentioned. They have a complicated structure, and there is no analytic tool to verify their invertibility. To study this problem, the original Nyström method is applied to the Sherman-Lauricella equation on a special model contour that has only one corner point with varying opening angle. In the interval (0.1π, 1.9π), it is found that there are 8 values of θ_j where the invertibility of the operator may fail, so the corresponding original Nyström method on any contour with corner points of such magnitude cannot be stable and requires modification.
14

Zhu, Changming, and Daqi Gao. "Improved multi-kernel classification machine with Nyström approximation technique". Pattern Recognition 48, no. 4 (April 2015): 1490–509. http://dx.doi.org/10.1016/j.patcog.2014.10.029.

15

Lin, Ming, Fei Wang, and Changshui Zhang. "Large-scale eigenvector approximation via Hilbert Space Embedding Nyström". Pattern Recognition 48, no. 5 (May 2015): 1904–12. http://dx.doi.org/10.1016/j.patcog.2014.11.017.

16

Li, Mu, Wei Bi, James T. Kwok, and Bao-Liang Lu. "Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD". IEEE Transactions on Neural Networks and Learning Systems 26, no. 1 (January 2015): 152–64. http://dx.doi.org/10.1109/tnnls.2014.2359798.

17

Liu, Guohong, Hong Chen, Xiaoying Sun, and Robert C. Qiu. "Modified MUSIC Algorithm for DOA Estimation With Nyström Approximation". IEEE Sensors Journal 16, no. 12 (June 2016): 4673–74. http://dx.doi.org/10.1109/jsen.2016.2557488.

18

Wang, Zhe, Songcan Chen, and Daqi Gao. "A novel multi-view classifier based on Nyström approximation". Expert Systems with Applications 38, no. 9 (September 2011): 11193–200. http://dx.doi.org/10.1016/j.eswa.2011.02.166.

19

Mezzanotte, Domenico, Donatella Occorsio, and Maria Grazia Russo. "Combining Nyström Methods for a Fast Solution of Fredholm Integral Equations of the Second Kind". Mathematics 9, no. 21 (October 20, 2021): 2652. http://dx.doi.org/10.3390/math9212652.

Abstract:
In this paper, we propose a suitable combination of two different Nyström methods, both using the zeros of the same sequence of Jacobi polynomials, in order to approximate the solution of Fredholm integral equations on [−1,1]. The proposed procedure is cheaper than the Nyström scheme based on using only one of the described methods. Moreover, we can successfully manage functions with possible algebraic singularities at the endpoints and kernels with different pathologies. The error of the method is comparable with that of the best polynomial approximation in suitable spaces of functions, equipped with the weighted uniform norm. The convergence and the stability of the method are proved, and some numerical tests that confirm the theoretical estimates are given.
20

Martinsson, Per-Gunnar, and Joel A. Tropp. "Randomized numerical linear algebra: Foundations and algorithms". Acta Numerica 29 (May 2020): 403–572. http://dx.doi.org/10.1017/s0962492920000021.

Abstract:
This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
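One of the survey's central primitives, the randomized range finder followed by an SVD of the reduced matrix, can be sketched in a few lines. The oversampling parameter p and the synthetic low-rank test matrix below are illustrative assumptions, not details from the survey.

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=0):
    """Basic randomized SVD: sketch the range of A, then factor a small matrix."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + p))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                # orthonormal basis for sketched range
    B = Q.T @ A                                   # small (k+p) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 60))  # exactly rank-8 matrix
U, s, Vt = randomized_svd(A, k=8)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
```

On an exactly rank-8 input, the sketched basis captures the range almost surely, so the reconstruction error is at the level of rounding error; on general matrices the error tracks the (k+1)-th singular value.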
21

Kulkarni, Rekha P., and Gobinda Rakshit. "Discrete Modified Projection Methods for Urysohn Integral Equations with Green’s Function Type Kernels". Mathematical Modelling and Analysis 25, no. 3 (May 13, 2020): 421–40. http://dx.doi.org/10.3846/mma.2020.11093.

Abstract:
In the present paper we consider discrete versions of the modified projection methods for solving a Urysohn integral equation with a kernel of the type of Green’s function. For r ≥ 0, a space of piecewise polynomials of degree ≤ r with respect to a uniform partition is chosen to be the approximating space. We define a discrete orthogonal projection onto this space and replace the Urysohn integral operator by a Nyström approximation. The order of convergence which we obtain for the discrete version indicates the choice of numerical quadrature which preserves the orders of convergence in the continuous modified projection methods. Numerical results are given for a specific example.
22

Pourkamali-Anaraki, Farhad. "Scalable Spectral Clustering With Nyström Approximation: Practical and Theoretical Aspects". IEEE Open Journal of Signal Processing 1 (2020): 242–56. http://dx.doi.org/10.1109/ojsp.2020.3039330.

23

Li, Jinfeng, Weifeng Liu, Yicong Zhou, Jun Yu, Dapeng Tao, and Changsheng Xu. "Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–18. http://dx.doi.org/10.1145/3487194.

Abstract:
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain. Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain. However, these algorithms will be infeasible when only a few labeled data exist in the source domain, and thus the performance decreases significantly. To address this challenge, we propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples. Firstly, DGL introduces the Nyström method to construct a plastic graph that shares similar geometric property with the target domain. Then, DGL flexibly employs the Nyström approximation error to measure the divergence between the plastic graph and source graph to formalize the distribution mismatch from the geometric perspective. Through minimizing the approximation error, DGL learns a domain-invariant geometric graph to bridge the source and target domains. Finally, we integrate the learned domain-invariant graph with the semi-supervised learning and further propose an adaptive semi-supervised model to handle the cross-domain problems. The results of extensive experiments on popular datasets verify the superiority of DGL, especially when only a few labeled source samples are available.
24

Chang, Lo-Bin, Zhidong Bai, Su-Yun Huang, and Chii-Ruey Hwang. "Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices". Journal of Multivariate Analysis 120 (September 2013): 102–19. http://dx.doi.org/10.1016/j.jmva.2013.05.006.

25

Redner, Oliver. "Discrete Approximation of Non-Compact Operators Describing Continuum-of-Alleles Models". Proceedings of the Edinburgh Mathematical Society 47, no. 2 (June 2004): 449–72. http://dx.doi.org/10.1017/s0013091503000476.

Abstract:
We consider the eigenvalue equation for the largest eigenvalue of certain kinds of non-compact linear operators given as the sum of a multiplication and a kernel operator. It is shown that, under moderate conditions, such operators can be approximated arbitrarily well by operators of finite rank, which constitutes a discretization procedure. For this purpose, two standard methods of approximation theory, the Nyström and the Galerkin method, are generalized. The operators considered describe models for mutation and selection of an infinitely large population of individuals that are labelled by real numbers, commonly called continuum-of-alleles models. AMS 2000 Mathematics subject classification: Primary 47A58; 45C05. Secondary 47B34.
26

Kulkarni, Rekha P., and Akshay S. Rane. "Asymptotic Expansions for Approximate Solutions of Hammerstein Integral Equations with Green's Function Type Kernels". Mathematical Modelling and Analysis 19, no. 1 (February 20, 2014): 127–43. http://dx.doi.org/10.3846/13926292.2014.893457.

Abstract:
We consider approximation of a nonlinear Hammerstein equation with a kernel of the type of Green's function using the Nyström method based on the composite midpoint and the composite modified Simpson rules associated with a uniform partition. We obtain asymptotic expansions for the approximate solution u_n at the node points as well as at the partition points and use Richardson extrapolation to obtain approximate solutions with higher orders of convergence. Numerical results are presented which confirm the theoretical orders of convergence.
27

Zhu, Changming. "Improved multi-kernel classification machine with Nyström approximation technique and Universum data". Neurocomputing 175 (January 2016): 610–34. http://dx.doi.org/10.1016/j.neucom.2015.10.102.

28

Pourkamali-Anaraki, Farhad, and Stephen Becker. "Improved fixed-rank Nyström approximation via QR decomposition: Practical and theoretical aspects". Neurocomputing 363 (October 2019): 261–72. http://dx.doi.org/10.1016/j.neucom.2019.06.070.

29

Airola, Antti, Tapio Pahikkala, and Tapio Salakoski. "On Learning and Cross-Validation with Decomposed Nyström Approximation of Kernel Matrix". Neural Processing Letters 33, no. 1 (December 1, 2010): 17–30. http://dx.doi.org/10.1007/s11063-010-9159-4.

30

Chen, Kai, Rongchun Li, Yong Dou, Zhengfa Liang, and Qi Lv. "Ranking Support Vector Machine with Kernel Approximation". Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4629534.

Abstract:
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
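Of the two approximations this abstract names, random Fourier features are the simpler to sketch. The following is a generic Rahimi-Recht-style construction for the RBF kernel, not the paper's specific pipeline; the data, the gamma value, and the feature count D are assumptions for illustration.

```python
import numpy as np

def random_fourier_features(X, D, gamma=1.0, seed=0):
    """Random Fourier feature map z with z(x) . z(y) ~= exp(-gamma*||x-y||^2).

    For the RBF kernel the spectral measure is Gaussian with per-dimension
    standard deviation sqrt(2*gamma); b adds a random phase per feature.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
Z = random_fourier_features(X, D=4000)
K_true = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
max_err = np.abs(Z @ Z.T - K_true).max()
```

After this mapping, a linear model (such as a linear RankSVM) on Z approximates its kernelized counterpart, which is exactly why the kernel matrix never needs to be formed during training; the entrywise error decays like 1/sqrt(D).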
31

Gisbrecht, Andrej, Bassam Mokbel, Frank-Michael Schleif, Xibin Zhu, and Barbara Hammer. "Linear Time Relational Prototype Based Learning". International Journal of Neural Systems 22, no. 05 (September 26, 2012): 1250021. http://dx.doi.org/10.1142/s0129065712500219.

Abstract:
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
32

Wang, Yongguang, Shuzhen Yao, and Tian Xu. "Incremental Kernel Principal Components Subspace Inference With Nyström Approximation for Bayesian Deep Learning". IEEE Access 9 (2021): 36241–51. http://dx.doi.org/10.1109/access.2021.3062747.

33

Trokicic, Aleksandar, and Branimir Todorovic. "On expected error of randomized Nyström kernel regression". Filomat 34, no. 11 (2020): 3871–84. http://dx.doi.org/10.2298/fil2011871t.

Abstract:
Kernel methods are a class of machine learning algorithms which learn and discover patterns in a high (possibly infinite) dimensional feature space obtained by an often nonlinear, possibly infinite mapping of an input space. A major problem with kernel methods is their time complexity. For a data set with n input points, the time complexity of a kernel method is O(n^3), which is intractable for a large data set. A method based on random Nyström features is an approximation method that is able to reduce the time complexity to O(np^2 + p^3), where p is the number of randomly selected input data points. The O(p^3) term comes from the fact that a spectral decomposition needs to be performed on a p × p Gram matrix, and if p is a large number even an approximate algorithm is time consuming. In this paper we apply the randomized SVD method instead of the spectral decomposition and further reduce the time complexity. The input parameters of the randomized SVD algorithm are the p × p Gram matrix and a number m < p. In this case the time complexity is O(nm^2 + p^2 m + m^3), and linear regression is performed on m-dimensional random features. We prove that the error of a predictor learned via this method is almost the same in expectation as the error of a kernel predictor. Additionally, we empirically show that this predictor is better than the one that uses only the Nyström method.
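A minimal sketch of the Nyström feature construction that this line of work accelerates: features phi(x) are built from the top eigenpairs of the p × p landmark Gram matrix, so that phi(X) phi(X)^T reproduces the low-rank kernel approximation. For simplicity the sketch uses an exact eigendecomposition (np.linalg.eigh) where the paper substitutes a randomized SVD; the RBF kernel, landmark count, and rank are illustrative assumptions.

```python
import numpy as np

def nystrom_features(X, landmarks, rank, gamma=1.0):
    """Nyström feature map phi(x) = S^{-1/2} U^T k(x, landmarks),
    with (U, S) the top eigenpairs of the landmark Gram matrix W."""
    K_nm = np.exp(-gamma * ((X[:, None] - landmarks[None, :]) ** 2).sum(-1))
    W = np.exp(-gamma * ((landmarks[:, None] - landmarks[None, :]) ** 2).sum(-1))
    s, U = np.linalg.eigh(W)                       # ascending eigenvalues
    s, U = s[::-1][:rank], U[:, ::-1][:, :rank]    # keep top `rank` eigenpairs
    return K_nm @ U / np.sqrt(np.maximum(s, 1e-12))

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))
L = X[rng.choice(300, size=40, replace=False)]     # p = 40 landmarks
Phi = nystrom_features(X, L, rank=30)              # m = 30 features per point
K_hat = Phi @ Phi.T                                # implicit low-rank kernel
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Linear regression on Phi then stands in for kernel regression at O(nm^2) cost; the eigh call on W is the O(p^3) step that a randomized SVD reduces.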
34

Zhao, Lin, and Alex Barnett. "Robust and Efficient Solution of the Drum Problem via Nyström Approximation of the Fredholm Determinant". SIAM Journal on Numerical Analysis 53, no. 4 (January 2015): 1984–2007. http://dx.doi.org/10.1137/140973992.

35

Wang, Jianzhong. "Mathematical analysis on out-of-sample extensions". International Journal of Wavelets, Multiresolution and Information Processing 16, no. 05 (September 2018): 1850042. http://dx.doi.org/10.1142/s021969131850042x.

Abstract:
Let [Formula: see text] be a data set in [Formula: see text], where [Formula: see text] is the training set and [Formula: see text] is the test one. Many unsupervised learning algorithms based on kernel methods have been developed to provide dimensionality reduction (DR) embedding for a given training set [Formula: see text] ([Formula: see text]) that maps the high-dimensional data [Formula: see text] to its low-dimensional feature representation [Formula: see text]. However, these algorithms do not straightforwardly produce DR of the test set [Formula: see text]. An out-of-sample extension method provides DR of [Formula: see text] using an extension of the existent embedding [Formula: see text], instead of re-computing the DR embedding for the whole set [Formula: see text]. Among various out-of-sample DR extension methods, those based on Nyström approximation are very attractive. Many papers have developed such out-of-sample extension algorithms and shown their validity by numerical experiments. However, the mathematical theory for the DR extension still needs further consideration. Utilizing the reproducing kernel Hilbert space (RKHS) theory, this paper develops a preliminary mathematical analysis on the out-of-sample DR extension operators. It treats an out-of-sample DR extension operator as an extension of the identity on the RKHS defined on [Formula: see text]. Then the Nyström-type DR extension turns out to be an orthogonal projection. In the paper, we also present the conditions for the exact DR extension and give the estimate for the error of the extension.
36

García García, Juan F., and Salvador E. Venegas-Andraca. "Region-based approach for the spectral clustering Nyström approximation with an application to burn depth assessment". Machine Vision and Applications 26, no. 2-3 (March 3, 2015): 353–68. http://dx.doi.org/10.1007/s00138-015-0664-3.

37

Du, Ke-Lin, M. N. S. Swamy, Zhang-Quan Wang, and Wai Ho Mow. "Matrix Factorization Techniques in Machine Learning, Signal Processing, and Statistics". Mathematics 11, no. 12 (June 12, 2023): 2674. http://dx.doi.org/10.3390/math11122674.

Abstract:
Compressed sensing is an alternative to Shannon/Nyquist sampling for acquiring sparse or compressible signals. Sparse coding represents a signal as a sparse linear combination of atoms, which are elementary signals derived from a predefined dictionary. Compressed sensing, sparse approximation, and dictionary learning are topics similar to sparse coding. Matrix completion is the process of recovering a data matrix from a subset of its entries, and it extends the principles of compressed sensing and sparse approximation. The nonnegative matrix factorization is a low-rank matrix factorization technique for nonnegative data. All of these low-rank matrix factorization techniques are unsupervised learning techniques, and can be used for data analysis tasks, such as dimension reduction, feature extraction, blind source separation, data compression, and knowledge discovery. In this paper, we survey a few emerging matrix factorization techniques that are receiving wide attention in machine learning, signal processing, and statistics. The treated topics are compressed sensing, dictionary learning, sparse representation, matrix completion and matrix recovery, nonnegative matrix factorization, the Nyström method, and CUR matrix decomposition in the machine learning framework. Some related topics, such as matrix factorization using metaheuristics or neurodynamics, are also introduced. A few topics are suggested for future investigation in this article.
38

Ajinuhi, Joel Olusegun, Umaru Mohammed, Abdullah Idris Enagi and Jimoh Omananyi Razaq. "A Novel Seventh-Order Implicit Block Hybrid Nyström-Type Method for Second-Order Boundary Value Problems". International Journal of Research and Scientific Innovation X, no. XI (2023): 26–44. http://dx.doi.org/10.51244/ijrsi.2023.1011003.

Abstract:
This paper introduces a novel approach for solving second-order nonlinear differential equations, with a primary focus on the Bratu problem, which holds significant importance in diverse scientific areas. Existing methods for solving this problem have limitations, prompting the development of the Block Hybrid Nyström-Type Method (BHNTM). BHNTM utilizes the Bhaskara points derived using the Bhaskara cosine approximation formula. The method seeks a numerical solution in the form of a power series polynomial, efficiently determining coefficients. The paper discusses BHNTM's convergence, zero stability, and consistency properties, substantiated through numerical experiments, highlighting its accuracy as a solver for Bratu-type equations. This research contributes to the field of numerical analysis by offering an alternative, effective approach to tackle complex second-order nonlinear differential equations, addressing critical challenges in various scientific domains.
39

Hu, Jurong, Evans Baidoo, Lei Zhan and Ying Tian. "Computationally Efficient Compressed Sensing-Based Method via FG Nyström in Bistatic MIMO Radar with Array Gain-Phase Error Effect". International Journal of Antennas and Propagation 2020 (March 10, 2020): 1–12. http://dx.doi.org/10.1155/2020/1586353.

Abstract:
In this paper, a robust angle estimator for uncorrelated targets that employs a compressed sensing (CS) scheme following a fast greedy (FG) computation is proposed to achieve improved computational efficiency and performance for the bistatic MIMO radar with unknown gain-phase errors. The algorithm first avoids computing the whole received signal by building a low-rank approximation through a greedy Nyström approach. Then, the approximated signal is transformed into a sparse signal representation where the sparsity of the target is exploited in the spatial domain. Finally, a CS method, Simultaneous Orthogonal Matching Pursuit with an inherent gradient descent method, is utilized to reconstruct the signal and estimate the angles and the unknown gain-phase errors. The proposed algorithm, besides achieving a closed-form resolution for automatically paired angle estimation, offers attractive computational competitiveness, specifically in large array scenarios. Additionally, analyses of the computational complexity and the Cramér–Rao bounds for angle estimation are derived theoretically. Numerical experiments demonstrate the improvement and effectiveness of the proposed method against existing methods.
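The reconstruction step above relies on greedy matching pursuit. As a point of reference, plain Orthogonal Matching Pursuit, a simpler relative of the simultaneous variant used in the paper, can be sketched as follows; all names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse recovery of x with y ~ D x."""
    r = y.copy()                      # residual
    support = []                      # indices of selected atoms
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))        # atom most correlated with residual
        if j not in support:
            support.append(j)
        # re-fit coefficients on the current support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x[support] = coef
    return x
```

For an orthonormal dictionary the selection step reads off the true support exactly; for generic dictionaries recovery holds under the usual incoherence conditions.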
40

Zhao, Yang, Yuan Yuan and Qi Wang. "Fast Spectral Clustering for Unsupervised Hyperspectral Image Classification". Remote Sensing 11, no. 4 (February 15, 2019): 399. http://dx.doi.org/10.3390/rs11040399.

Abstract:
Hyperspectral image classification is a challenging and significant domain in the field of remote sensing with numerous applications in agriculture, environmental science, mineralogy, and surveillance. In the past years, a growing number of advanced hyperspectral remote sensing image classification techniques based on manifold learning, sparse representation and deep learning have been proposed, reporting good accuracy and efficiency on state-of-the-art public datasets. However, most existing methods still face challenges in dealing with large-scale hyperspectral image datasets due to their high computational complexity. In this work, we propose an improved spectral clustering method for large-scale hyperspectral image classification without any prior information. The proposed algorithm introduces two efficient approximation techniques based on Nyström extension and anchor-based graph to construct the affinity matrix. We also propose an effective solution to solve the eigenvalue decomposition problem by multiplicative update optimization. Experiments on both the synthetic datasets and the hyperspectral image datasets were conducted to demonstrate the efficiency and effectiveness of the proposed algorithm.
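The Nyström extension used for the affinity matrix can be sketched as follows: eigenvectors are computed on a small landmark block W of the affinity matrix, then extended to all points via Ū = C U Λ⁻¹, where C is the affinity block between all points and the landmarks. A minimal NumPy sketch, not the authors' implementation; names are illustrative and the top-k eigenvalues are assumed nonzero.

```python
import numpy as np

def nystrom_eigenvectors(C, landmark_idx, k):
    """Extend the top-k eigenvectors of the landmark block W to all n points."""
    W = C[landmark_idx, :]                    # m x m landmark-landmark affinity block
    evals, U = np.linalg.eigh(W)              # eigendecomposition of the small block
    order = np.argsort(evals)[::-1][:k]       # top-k eigenvalues, descending
    lam, U = evals[order], U[:, order]
    return C @ U / lam                        # Nystrom extension: C U diag(lam)^-1
```

The extended vectors then feed the usual k-means step of spectral clustering, at a cost driven by m rather than n.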
41

Rubinstein, Yanir A., Gang Tian and Kewei Zhang. "Basis divisors and balanced metrics". Journal für die reine und angewandte Mathematik (Crelles Journal) 2021, no. 778 (April 29, 2021): 171–218. http://dx.doi.org/10.1515/crelle-2021-0017.

Abstract:
Using log canonical thresholds and basis divisors, Fujita–Odaka introduced purely algebro-geometric invariants δ_m whose limit in m is now known to characterize uniform K-stability on a Fano variety. As shown by Blum–Jonsson, this carries over to a general polarization, and together with work of Berman, Boucksom, and Jonsson, it is now known that the limit of these δ_m-invariants characterizes uniform Ding stability. A basic question since Fujita–Odaka's work has been to find an analytic interpretation of these invariants. We show that each δ_m is the coercivity threshold of a quantized Ding functional on the mth Bergman space and thus characterizes the existence of balanced metrics. This approach has a number of applications. The most basic one is that it provides an alternative way to compute these invariants, which is new even for ℙ^n. Second, it allows us to introduce algebraically defined invariants that characterize the existence of Kähler–Ricci solitons (and the more general g-solitons of Berman–Witt Nyström), as well as coupled versions thereof. Third, it leads to approximation results involving balanced metrics in the presence of automorphisms that extend some results of Donaldson.
42

Bounaya, Mohammed Charif, Samir Lemita, Sami Touati and Mohamed Zine Aissaoui. "Analytical and numerical approach for a nonlinear Volterra-Fredholm integro-differential equation". Boletim da Sociedade Paranaense de Matemática 41 (December 21, 2022): 1–14. http://dx.doi.org/10.5269/bspm.52191.

Abstract:
An approach for Volterra–Fredholm integro-differential equations, based on appropriate fixed point theorems, is presented to establish existence and uniqueness of the solution. The solution is then approximated using the Nyström method in conjunction with a successive approximations algorithm. Finally, we give a numerical example in order to verify the effectiveness of the proposed method with respect to the analytical study.
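The Nyström method referenced here replaces the integral by a quadrature rule, turning the integral equation into a linear system on the quadrature nodes. A minimal sketch for a linear Fredholm equation u(x) = f(x) + ∫_a^b k(x,t) u(t) dt with the composite trapezoidal rule; this is an illustration of the general technique, not the paper's nonlinear Volterra–Fredholm setting.

```python
import numpy as np

def nystrom_solve(kernel, f, a, b, n):
    """Nystrom method for u(x) = f(x) + int_a^b k(x,t) u(t) dt via trapezoidal rule."""
    x = np.linspace(a, b, n)                       # quadrature nodes
    w = np.full(n, (b - a) / (n - 1))              # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    Kmat = kernel(x[:, None], x[None, :]) * w      # quadrature-weighted kernel matrix
    u = np.linalg.solve(np.eye(n) - Kmat, f(x))    # (I - K W) u = f at the nodes
    return x, u
```

For the separable kernel k(x,t) = x·t with f(x) = 2x/3 the exact solution is u(x) = x, which the sketch recovers to quadrature accuracy.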
43

Adams, M., J. Finden, P. Phoncharon and P. H. Muir. "Superconvergent interpolants for Gaussian collocation solutions of mixed order BVODE systems". AIMS Mathematics 7, no. 4 (2022): 5634–61. http://dx.doi.org/10.3934/math.2022312.

Abstract:
The high quality COLSYS/COLNEW collocation software package is widely used for the numerical solution of boundary value ODEs (BVODEs), often through interfaces to computing environments such as Scilab, R, and Python. The continuous collocation solution returned by the code is much more accurate at a set of mesh points that partition the problem domain than it is elsewhere; the mesh point values are said to be superconvergent. In order to improve the accuracy of the continuous solution approximation at non-mesh points, when the BVODE is expressed in first order system form, an approach based on continuous Runge-Kutta (CRK) methods has been used to obtain a superconvergent interpolant (SCI) across the problem domain. Based on this approach, recent work has seen the development of a new, more efficient version of COLSYS/COLNEW that returns an error controlled SCI.

However, most systems of BVODEs include higher derivatives, and a feature of COLSYS/COLNEW is that it can directly treat such mixed order BVODE systems, resulting in improved efficiency, continuity of the approximate solution, and user convenience. In this paper we generalize the approach mentioned above for first order systems to obtain SCIs for collocation solutions of mixed order BVODE systems. The main contribution of this paper is the derivation of generalizations of continuous Runge-Kutta-Nyström methods that form the basis for SCIs for this more general problem class. We provide numerical results that (i) show that the SCIs are much more accurate than the collocation solutions at non-mesh points, (ii) verify the order of accuracy of these SCIs, and (iii) show that the cost of utilizing the SCIs is a small fraction of the cost of computing the collocation solution upon which they are based.
44

Ворожцов, Е. В., and С. П. Киселев. "Explicit higher-order schemes for molecular dynamics problems". Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (May 12, 2021): 87–108. http://dx.doi.org/10.26089/nummet.v22r207.

Abstract:
The Runge–Kutta–Nyström (RKN) explicit symplectic difference schemes with 1 to 5 stages are considered for the numerical solution of molecular dynamics problems described by systems with separable Hamiltonians. For 2 and 3 stages, the parameters of the RKN schemes are obtained using the Gröbner basis technique. For 4 and 5 stages, new schemes are found using the Nelder–Mead numerical optimization method. In particular, four new schemes are obtained for 4 stages, and three new schemes are obtained for 5 stages in addition to the four schemes known in the literature. For each number of stages, the scheme that is best in terms of the minimum of the leading term of the approximation error is identified. Verification of the schemes is carried out on a problem that has an exact solution. It is shown that the symplectic five-stage RKN scheme provides a more accurate conservation of the total energy balance of the particle system than schemes of lower orders of accuracy. The stability studies of the schemes were performed using the Mathematica software package.
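The simplest member of this family is the one-stage explicit symplectic scheme, the Störmer–Verlet (leapfrog) method, whose energy error stays bounded at O(Δt²) instead of drifting. A minimal sketch for a one-dimensional system q̈ = a(q); this is a textbook illustration of the scheme class, not the paper's four- and five-stage methods.

```python
import numpy as np

def verlet(acc, q0, p0, dt, steps):
    """Stormer-Verlet: one-stage explicit symplectic integrator for q'' = acc(q)."""
    q, p = float(q0), float(p0)
    traj = [(q, p)]
    for _ in range(steps):
        p_half = p + 0.5 * dt * acc(q)   # half kick
        q = q + dt * p_half              # full drift
        p = p_half + 0.5 * dt * acc(q)   # half kick
        traj.append((q, p))
    return np.array(traj)
```

For the harmonic oscillator a(q) = -q the total energy (p² + q²)/2 oscillates within a band of width O(Δt²) around its initial value, which is the hallmark of symplectic schemes that the higher-stage RKN methods above refine.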
45

Ramos, Higinio, Mufutau Ajani Rufai and Bruno Carpentieri. "A Pair of Optimized Nyström Methods with Symmetric Hybrid Points for the Numerical Solution of Second-Order Singular Boundary Value Problems". Symmetry 15, no. 9 (September 7, 2023): 1720. http://dx.doi.org/10.3390/sym15091720.

Abstract:
This paper introduces an efficient approach for solving Lane–Emden–Fowler problems. Our method utilizes two Nyström schemes to perform the integration. To overcome the singularity at the left end of the interval, we combine an optimized scheme of Nyström type with a set of Nyström formulas that are used at the first subinterval. The optimized technique is obtained after imposing the vanishing of some of the local truncation errors, which results in a set of symmetric hybrid points. By solving an algebraic system of equations, our proposed approach generates simultaneous approximations at all grid points, resulting in a highly effective technique that outperforms several existing numerical methods in the literature. To assess the efficiency and accuracy of our approach, we perform some numerical tests on diverse real-world problems, including singular boundary value problems (SBVPs) from chemical kinetics.
46

Ray, Archan, Nicholas Monath, Andrew McCallum and Cameron Musco. "Sublinear Time Approximation of Text Similarity Matrices". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8072–80. http://dx.doi.org/10.1609/aaai.v36i7.20779.

Abstract:
We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing a similarity matrix for n data points requires Omega(n^2) similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity, often by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. Significant work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nystrom method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix, producing a rank-s approximation with just O(ns) similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in tasks of document classification, sentence similarity, and cross-document coreference.
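The CUR decomposition mentioned alongside the indefinite Nyström method can be sketched as: sample columns C and rows R of the matrix and join them through the pseudoinverse of their intersection block, A ≈ C W⁺ R. A minimal NumPy sketch in which index selection is left to the caller; names are illustrative, and this is the basic variant rather than the paper's algorithm.

```python
import numpy as np

def cur_decomposition(A, row_idx, col_idx):
    """CUR approximation A ~ C U R from chosen row/column index subsets."""
    C = A[:, col_idx]                                 # sampled columns
    R = A[row_idx, :]                                 # sampled rows
    U = np.linalg.pinv(A[np.ix_(row_idx, col_idx)])   # pinv of the intersection block
    return C @ U @ R
```

When the intersection block has the same rank as A (e.g. any nonzero entry of a rank-1 matrix), the reconstruction is exact; unlike the Nyström method, CUR needs no symmetry or positive semidefiniteness.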
47

Patil, Shailaja, and Mukesh Zaveri. "Localization in Sensor Network With Nystrom Approximation". International Journal of Wireless & Mobile Networks 3, no. 5 (October 30, 2011): 37–48. http://dx.doi.org/10.5121/ijwmn.2011.3504.

48

Wang, Zhe, Wenbo Jie and Daqi Gao. "A novel multiple Nyström-approximating kernel discriminant analysis". Neurocomputing 119 (November 2013): 385–98. http://dx.doi.org/10.1016/j.neucom.2013.03.019.

49

Xiong, Yunyang, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li and Vikas Singh. "Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14138–48. http://dx.doi.org/10.1609/aaai.v35i16.17664.

Abstract:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences - a topic being actively studied in the community. To address this limitation, we propose Nyströmformer - a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at https://github.com/mlpen/Nystromformer.
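The core approximation can be sketched as follows: with m landmark queries and keys Q̃, K̃ (segment means of Q and K), full self-attention softmax(QKᵀ/√d)V is replaced by the product softmax(QK̃ᵀ/√d) · pinv(softmax(Q̃K̃ᵀ/√d)) · softmax(Q̃Kᵀ/√d) · V, which costs O(nm) rather than O(n²). A minimal NumPy sketch; the paper computes the pseudoinverse iteratively, so np.linalg.pinv here is a simplification, and n is assumed divisible by m.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def nystrom_attention(Q, K, V, m):
    """Nystrom-style approximation of softmax attention using m landmarks."""
    n, d = Q.shape
    # landmarks as means of contiguous segments of queries/keys
    Qm = Q.reshape(m, -1, d).mean(axis=1)
    Km = K.reshape(m, -1, d).mean(axis=1)
    s = np.sqrt(d)
    F = softmax(Q @ Km.T / s)                    # n x m
    A = np.linalg.pinv(softmax(Qm @ Km.T / s))   # m x m
    B = softmax(Qm @ K.T / s)                    # m x n
    return F @ A @ (B @ V)                       # O(n m d) instead of O(n^2 d)
```

With m = n the landmarks coincide with the full queries/keys and the sketch reduces to exact softmax attention, since F·pinv(F)·F = F.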
50

Xia, Jianlin. "Making the Nyström Method Highly Accurate for Low-Rank Approximations". SIAM Journal on Scientific Computing 46, no. 2 (March 29, 2024): A1076–A1101. http://dx.doi.org/10.1137/23m1585039.
