Academic literature on the topic 'Low-rank matrix approximation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Low-rank matrix approximation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Low-rank matrix approximation"

1

Liu, Ting, Mingjian Sun, Naizhang Feng, Minghua Wang, Deying Chen, and Yi Shen. "Sparse photoacoustic microscopy based on low-rank matrix approximation." Chinese Optics Letters 14, no. 9 (2016): 091701–091705. http://dx.doi.org/10.3788/col201614.091701.

2

Parekh, Ankit, and Ivan W. Selesnick. "Enhanced Low-Rank Matrix Approximation." IEEE Signal Processing Letters 23, no. 4 (April 2016): 493–97. http://dx.doi.org/10.1109/lsp.2016.2535227.

3

Fomin, Fedor V., Petr A. Golovach, and Fahad Panolan. "Parameterized low-rank binary matrix approximation." Data Mining and Knowledge Discovery 34, no. 2 (January 2, 2020): 478–532. http://dx.doi.org/10.1007/s10618-019-00669-5.

4

Fomin, Fedor V., Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, and Saket Saurabh. "Approximation Schemes for Low-rank Binary Matrix Approximation Problems." ACM Transactions on Algorithms 16, no. 1 (January 11, 2020): 1–39. http://dx.doi.org/10.1145/3365653.

5

Zhang, Zhenyue, and Keke Zhao. "Low-Rank Matrix Approximation with Manifold Regularization." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 7 (July 2013): 1717–29. http://dx.doi.org/10.1109/tpami.2012.274.

6

Xu, An-Bao, and Dongxiu Xie. "Low-rank approximation pursuit for matrix completion." Mechanical Systems and Signal Processing 95 (October 2017): 77–89. http://dx.doi.org/10.1016/j.ymssp.2017.03.024.

7

Barlow, Jesse L., and Hasan Erbay. "Modifiable low-rank approximation to a matrix." Numerical Linear Algebra with Applications 16, no. 10 (October 2009): 833–60. http://dx.doi.org/10.1002/nla.651.

8

Jia, Yuheng, Hui Liu, Junhui Hou, and Qingfu Zhang. "Clustering Ensemble Meets Low-rank Tensor Approximation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7970–78. http://dx.doi.org/10.1609/aaai.v35i9.16972.

Abstract:
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to produce better performance than that of the individual one. The existing clustering ensemble methods generally construct a co-association matrix, which indicates the pairwise similarity between samples, as the weighted linear combination of the connective matrices from different base clusterings, and the resulting co-association matrix is then adopted as the input of an off-the-shelf clustering algorithm, e.g., spectral clustering. However, the co-association matrix may be dominated by poor base clusterings, resulting in inferior performance. In this paper, we propose a novel low-rank tensor approximation based method to solve the problem from a global perspective. Specifically, by inspecting whether two samples are clustered to an identical cluster under different base clusterings, we derive a coherent-link matrix, which contains limited but highly reliable relationships between samples. We then stack the coherent-link matrix and the co-association matrix to form a three-dimensional tensor, the low-rankness property of which is further explored to propagate the information of the coherent-link matrix to the co-association matrix, producing a refined co-association matrix. We formulate the proposed method as a convex constrained optimization problem and solve it efficiently. Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods. To the best of our knowledge, this is the first work to explore the potential of low-rank tensor on clustering ensemble, which is fundamentally different from previous approaches. Last but not least, our method only contains one parameter, which can be easily tuned.
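As a rough illustration of the two ingredients the abstract describes (not the paper's convex tensor optimization itself), the co-association and coherent-link matrices can be sketched in Python; `labels_list` is a hypothetical list of label arrays, one per base clustering:

```python
import numpy as np

def coassociation_matrix(labels_list):
    """C[i, j] = fraction of base clusterings placing samples i and j
    in the same cluster (an equal-weight linear combination of the
    connective matrices)."""
    n = len(labels_list[0])
    C = np.zeros((n, n))
    for labels in labels_list:
        L = np.asarray(labels)
        C += (L[:, None] == L[None, :]).astype(float)
    return C / len(labels_list)

def coherent_link_matrix(labels_list):
    """M[i, j] = 1 only if i and j are clustered together under *every*
    base clustering: limited but highly reliable relationships."""
    n = len(labels_list[0])
    M = np.ones((n, n))
    for labels in labels_list:
        L = np.asarray(labels)
        M *= (L[:, None] == L[None, :]).astype(float)
    return M
```

The paper stacks these two n x n matrices into a three-dimensional tensor and exploits its low-rankness to propagate the coherent links into a refined co-association matrix before spectral clustering.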
9

Tropp, Joel A., Alp Yurtsever, Madeleine Udell, and Volkan Cevher. "Practical Sketching Algorithms for Low-Rank Matrix Approximation." SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1454–85. http://dx.doi.org/10.1137/17m1111590.

10

Liu, Huafeng, Liping Jing, Yuhua Qian, and Jian Yu. "Adaptive Local Low-rank Matrix Approximation for Recommendation." ACM Transactions on Information Systems 37, no. 4 (December 10, 2019): 1–34. http://dx.doi.org/10.1145/3360488.


Dissertations / Theses on the topic "Low-rank matrix approximation"

1

Blanchard, Pierre. "Fast hierarchical algorithms for the low-rank approximation of matrices, with applications to materials physics, geostatistics and data analysis." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0016/document.

Abstract:
Advanced techniques for the low-rank approximation of matrices are crucial dimension-reduction tools in many domains of modern scientific computing. Hierarchical approaches like H2-matrices, in particular the Fast Multipole Method (FMM), exploit the block low-rank structure of certain matrices to reduce the cost of computing n-body problems to O(n) operations instead of O(n²). In order to better deal with kernels of various kinds, kernel-independent FMM formulations have recently arisen, such as polynomial-interpolation-based FMM. However, these become intractable for high-dimensional tensorial kernels, so we designed a new, highly efficient interpolation-based FMM, called the Uniform FMM, and implemented it in the parallel library ScalFMM. The method relies on an equispaced interpolation grid and the Fast Fourier Transform (FFT). Performance and accuracy were compared with the Chebyshev-interpolation-based FMM. Numerical experiments on artificial benchmarks showed that the loss of accuracy induced by the interpolation scheme was largely compensated by the FFT optimization. First, we extended both interpolation-based FMMs to the computation of the isotropic elastic fields involved in Dislocation Dynamics (DD) simulations. Second, we used our new FMM algorithm to accelerate a rank-r randomized SVD and thus efficiently generate multivariate Gaussian random variables on large heterogeneous grids in O(n) operations. Finally, we designed a new, efficient dimensionality-reduction algorithm based on dense random projection in order to investigate new ways of characterizing biodiversity, namely from a geometric point of view.
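The rank-r randomized SVD the abstract accelerates follows a standard sketch-and-solve pattern; a minimal dense NumPy version (a generic sketch, not the thesis's FMM-accelerated implementation) is:

```python
import numpy as np

def randomized_svd(A, r, oversample=10, seed=0):
    """Rank-r truncated SVD by random projection: sample the range of A,
    orthonormalize, then solve a small dense SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                 # approximate range basis
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :r], s[:r], Vt[:r, :]
```

The dominant costs are the two products A @ Omega and Q.T @ A; the thesis obtains its O(n) complexity by applying those products with the FMM instead of dense arithmetic.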
2

Lee, Joonseok. "Local approaches for collaborative filtering." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53846.

Abstract:
Recommendation systems are emerging as an important business application as the demand for personalized services in E-commerce increases. Collaborative filtering (CF) techniques are widely used for predicting a user's preference or generating a list of items to be recommended. In this thesis, we develop several new approaches to collaborative filtering based on model combination and kernel smoothing. Specifically, we start with an experimental study that compares a wide variety of CF methods under different conditions. Based on this study, we formulate a combination model similar to boosting, but where the combination coefficients are functions rather than constants. In another contribution, we formulate and analyze a local variation of matrix factorization, which constructs multiple local matrix factorization models and then combines them into a global model. This formulation is based on the local low-rank assumption, a slightly different but more plausible assumption about the rating matrix. We apply this assumption to both rating prediction and ranking problems, with both empirical validation and theoretical analysis. This thesis contributes in four respects. First, the local approaches we present significantly improve the accuracy of recommendations in both rating prediction and ranking problems. Second, with the more realistic local low-rank assumption, we fundamentally change the underlying assumption for matrix-factorization-based recommendation systems. Third, we present highly efficient and scalable algorithms that take advantage of parallelism, suited to recent large-scale datasets. Lastly, we provide open-source software implementing the local approaches in this thesis as well as many other recent recommendation algorithms, which can be used both in research and in production.
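For orientation, the global model that the local low-rank assumption refines is ordinary low-rank matrix factorization; a minimal SGD baseline (a sketch only, not the thesis's local ensemble, which is available in its open-source software) might look like:

```python
import numpy as np

def matrix_factorization(R, mask, rank=10, lr=0.01, reg=0.1, epochs=50, seed=0):
    """Fit R[i, j] ~= U[i] @ V[j] over observed entries (mask == True)
    by stochastic gradient descent with L2 regularization."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], rank))
    V = 0.1 * rng.standard_normal((R.shape[1], rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            Ui = U[i].copy()                  # freeze for the paired update
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * Ui - reg * V[j])
    return U, V
```

Under the local low-rank assumption, several such factorizations are fit on smoothed neighborhoods of the rating matrix and combined, instead of fitting a single global pair (U, V).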
3

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only in the recent decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection from the NMF and NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and design a fast algorithm utilizing that structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
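As a baseline for the NMF problem the abstract formalizes, the classical multiplicative updates of Lee and Seung can be sketched as follows (the thesis's accelerated block principal pivoting solver is substantially different; this is only the textbook algorithm):

```python
import numpy as np

def nmf_multiplicative(X, r, iters=200, eps=1e-9, seed=0):
    """Approximate X >= 0 by W @ H with W, H >= 0; eps guards the
    divisions and keeps the factors strictly positive."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r)) + eps
    H = rng.random((r, X.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```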
4

Galvin, Timothy Matthew. "Faster streaming algorithms for low-rank matrix approximations." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91810.

Abstract:
Low-rank matrix approximations are used in a significant number of applications. We present new algorithms for generating such approximations in a streaming fashion that expand upon recently discovered matrix sketching techniques. We test our approaches on real and synthetic data to explore runtime and accuracy performance. We apply our algorithms to the technique of Latent Semantic Indexing on a widely studied data set. We find our algorithms provide strong empirical results.
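One widely used streaming sketch in the spirit this thesis builds on is Frequent Directions, which maintains a small matrix B such that B.T @ B approximates A.T @ A over a stream of rows; the version below is a generic sketch (assuming ell <= d), not the thesis's algorithm:

```python
import numpy as np

def frequent_directions(row_stream, ell, d):
    """Maintain an ell x d sketch of a stream of d-dimensional rows:
    when the buffer fills, shrink the singular values so that at least
    half of the rows become zero, freeing space for new rows."""
    B = np.zeros((ell, d))
    filled = 0
    for row in row_stream:
        if filled == ell:
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt               # zero rows sort to the bottom
            filled = int(np.count_nonzero(s))
        B[filled] = row
        filled += 1
    return B
```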
5

Abbas, Kinan. "Dématriçage et démélange conjoints d'images multispectrales." Electronic Thesis or Diss., Littoral, 2024. http://www.theses.fr/2024DUNK0710.

Abstract:
In this thesis, we consider images sensed by a miniaturized multispectral (MS) snapshot camera. Contrary to classical RGB cameras, MS imaging observes a scene at tens of different wavelengths, allowing a much more precise analysis of the observed content. While most MS cameras require a scan to generate an image, snapshot MS cameras can instantaneously provide images, or even videos. When the camera is miniaturized, instead of a 3D data cube it acquires a 2D image, each pixel being associated with a filtered version of the theoretical spectrum it should acquire. Post-processing, called "demosaicing", is then necessary to reconstruct the data cube. Furthermore, in each pixel of the image, the observed spectrum can be considered a mixture of the spectra of the pure materials present in the pixel. Estimating these spectra, named endmembers, as well as their spatial distribution (named abundances), is called "unmixing". While a classical pipeline for processing MS snapshot images is to first demosaice and then unmix the data, the work introduced in this thesis explores alternative strategies in which demosaicing and unmixing are performed jointly. Extending classical assumptions from sparse component analysis and remote-sensing MS unmixing, we propose two different frameworks to restore and unmix the acquired scene, based on low-rank matrix completion and on deconvolution, respectively, the latter being specifically designed for the Fabry-Perot filters used in the considered camera. The four proposed methods exhibit far better unmixing quality than the variants they extend when the latter are applied to demosaiced data, while achieving demosaicing performance similar to state-of-the-art methods. The last part of this thesis introduces a deconvolution approach to restore the spectra of such cameras. Our contribution lies in the weights of the penalization term, which are set automatically using the entropy of the Fabry-Perot harmonics. The proposed method exhibits better spectrum restoration than the strategy proposed by the camera manufacturer and the classical deconvolution technique it extends.
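The matrix-completion ingredient mentioned above can be illustrated with a generic SoftImpute-style iteration (an illustrative sketch, not the thesis's Fabry-Perot-specific framework): alternately fill the missing entries from the current estimate and soft-threshold the singular values.

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, iters=100):
    """Nuclear-norm-regularized completion of X where mask marks the
    observed entries; missing entries are imputed from the estimate Z."""
    Z = np.zeros_like(X, dtype=float)
    for _ in range(iters):
        filled = np.where(mask, X, Z)                      # impute step
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt            # shrink step
    return Z
```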
6

Castorena, Juan. "Remote-Sensed LIDAR Using Random Impulsive Scans." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581855.

Abstract:
Third-generation full-waveform (FW) LIDAR systems image an entire scene by emitting laser pulses in particular directions and measuring the echoes. Each of these echoes provides range measurements about the objects intercepted by the laser pulse along a specified direction. By scanning through a specified region using a series of emitted pulses and observing their echoes, connected 1D profiles of 3D scenes can be readily obtained. This extra information has proven helpful in providing additional insight into the scene structure, which can be used to construct effective characterizations and classifications. Unfortunately, massive amounts of data are typically collected, which imposes storage, processing, and transmission limitations. To address these problems, a number of compression approaches have been developed in the literature. These, however, generally require the initial acquisition of large amounts of data only to later discard most of it by exploiting redundancies, thus sampling inefficiently. Based on this, our main goal is to apply efficient and effective LIDAR sampling schemes that achieve acceptable reconstruction quality of the 3D scenes. To achieve this goal, we propose using compressive sampling: emitting pulses only into random locations within the scene and collecting only the corresponding returned FW signals. Under this framework, the number of emissions would typically be much smaller than what traditional LIDAR systems require. This requires, however, that scenes contain many degrees of freedom; fortunately, such a requirement is satisfied in most natural and man-made scenes. Here, we propose to use a measure of rank as the measure of degrees of freedom. To recover the connected 1D profiles of the 3D scene, matrix completion is applied to the tensor slices. In this paper, we test our approach by showing that recovery of compressively sampled 1D profiles of actual 3D scenes is possible using only a subset of measurements.
7

Vinyes, Marina. "Convex matrix sparsity for demixing with an application to graphical model structure estimation." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1130/document.

Abstract:
The goal of machine learning is to learn, from data, a model that makes accurate predictions on data it has not seen before. In order to obtain a model that generalizes to new data and avoids overfitting, we need to restrain the model. These restrictions usually encode a priori knowledge of the model's structure. Early approaches used regularization: first ridge regression, and later the Lasso to induce sparsity in the solution. Sparsity, also known as parsimony, has emerged as a fundamental concept in machine learning. Parsimonious models are appealing since they provide more interpretability and better generalization (avoiding overfitting) through a reduced number of parameters. Beyond general sparsity, models are in many cases structurally constrained so that they have a simple representation in terms of some fundamental elements, consisting for example of a collection of specific vectors, matrices, or tensors. These fundamental elements are called atoms. In this context, atomic norms provide a general framework for estimating such models. The goal of this thesis is to use the framework of convex sparsity provided by atomic norms to study a form of matrix sparsity. First, we develop an efficient algorithm based on Frank-Wolfe methods that is particularly adapted to solving problems regularized by an atomic norm. Then, we focus on structure estimation for Gaussian graphical models, where the structure of the graph is encoded in the precision matrix, and study the case with unobserved variables. We propose a convex formulation with an algorithmic approach and provide a theoretical result stating necessary conditions for recovering the desired structure. Finally, we consider the problem of demixing a signal into two or more components via the minimization of a sum of norms or gauges, each encoding a structural prior on the corresponding component to recover. In particular, we provide exact recovery guarantees in the noiseless setting based on incoherence measures.
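Frank-Wolfe methods suit atomic-norm regularization because each step only needs the linear minimization oracle, which returns a single atom; over the nuclear-norm ball that atom is a rank-one matrix built from the top singular pair of the negative gradient. A generic sketch for nuclear-norm-constrained completion (illustrative assumptions, not the thesis's algorithm):

```python
import numpy as np

def frank_wolfe_nuclear(X, mask, tau, iters=200):
    """min_Z 0.5 * ||mask * (Z - X)||_F^2  s.t.  ||Z||_* <= tau."""
    Z = np.zeros_like(X, dtype=float)
    for t in range(iters):
        G = np.where(mask, Z - X, 0.0)           # gradient of the loss
        U, _, Vt = np.linalg.svd(-G, full_matrices=False)
        S = tau * np.outer(U[:, 0], Vt[0])       # atom from the LMO
        gamma = 2.0 / (t + 2.0)                  # standard step size
        Z = (1.0 - gamma) * Z + gamma * S
    return Z
```

In practice only the leading singular pair is computed (for example by power iteration) rather than a full SVD, which is what keeps the per-iteration cost low.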
8

Sadek, El Mostafa. "Méthodes itératives pour la résolution d'équations matricielles." Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0434/document.

Abstract:
In this thesis, we study iterative methods for solving large matrix equations such as the Lyapunov, Sylvester, Riccati, and nonsymmetric algebraic Riccati equations. We look for the most efficient and fastest iterative methods for solving large matrix equations, and propose projection methods onto block Krylov subspaces K_m(A, V) = Range{V, AV, ..., A^(m-1)V}, or block extended Krylov subspaces K^e_m(A, V) = Range{V, A^(-1)V, AV, A^(-2)V, A^2V, ..., A^(m-1)V, A^(-m+1)V}. These methods are generally more efficient and faster for large problems. We first treat the numerical solution of the linear matrix equations of Lyapunov, Sylvester, and Stein. We propose a new iterative method based on minimal residual (MR) and projection onto block extended Krylov subspaces K^e_m(A, V). The extended block Arnoldi algorithm yields a projected minimization problem of small size, which is solved by direct or iterative methods. We also introduce a minimal residual method based on the global approach instead of the block approach, projecting onto the global extended Krylov subspace K^e_m(A, V) = Span{V, A^(-1)V, AV, A^(-2)V, A^2V, ..., A^(m-1)V, A^(-m+1)V}. Secondly, we focus on nonlinear matrix equations, especially the matrix Riccati equation in the continuous case and the nonsymmetric case applied to transport problems, using Newton's method and the MINRES algorithm to solve the projected minimization problem. Finally, we propose two new iterative methods for solving large nonsymmetric Riccati equations: the first is based on the extended block Arnoldi algorithm and the Galerkin orthogonality condition; the second is of Newton-Krylov type, based on Newton's method and the solution of a large Sylvester matrix equation by a block Krylov method. For all these methods, the approximations are given in low-rank factored form, which allows us to save memory. We give numerical examples that show the effectiveness of the proposed methods for large-scale problems.
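To make the projection idea concrete, here is a minimal Galerkin sketch for the Lyapunov equation A X + X A^T + B B^T = 0: build a block Krylov basis, solve the small projected equation, and return the solution in factored low-rank form. This is a simplified cousin of the methods above, not the thesis's extended-Krylov or minimal-residual algorithms:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def krylov_lyapunov(A, B, m=20):
    """Project A X + X A^T + B B^T = 0 onto K_m(A, B) and lift back;
    the block Gram-Schmidt loop below has no reorthogonalization."""
    blocks = [np.linalg.qr(B)[0]]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for Q in blocks:
            W = W - Q @ (Q.T @ W)
        blocks.append(np.linalg.qr(W)[0])
    V = np.hstack(blocks)
    Am, Bm = V.T @ A @ V, V.T @ B
    Y = solve_sylvester(Am, Am.T, -Bm @ Bm.T)    # small dense solve
    return V, Y                                   # X ~= V @ Y @ V.T
```

Returning the factors V and Y rather than the assembled product is exactly the memory saving the abstract refers to.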
9

Winkler, Anderson M. "Widening the applicability of permutation inference." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ce166876-0aa3-449e-8496-f28bf189960c.

Abstract:
This thesis is divided into three main parts. In the first, we discuss that, although permutation tests can provide exact control of false positives under the reasonable assumption of exchangeability, there are common examples in which global exchangeability does not hold, such as in experiments with repeated measurements or tests in which subjects are related to each other. To allow permutation inference in such cases, we propose an extension of the well known concept of exchangeability blocks, allowing these to be nested in a hierarchical, multi-level definition. This definition allows permutations that retain the original joint distribution unaltered, thus preserving exchangeability. The null hypothesis is tested using only a subset of all otherwise possible permutations. We do not need to explicitly model the degree of dependence between observations; rather the use of such permutation scheme leaves any dependence intact. The strategy is compatible with heteroscedasticity and can be used with permutations, sign flippings, or both combined. In the second part, we exploit properties of test statistics to obtain accelerations irrespective of generic software or hardware improvements. We compare six different approaches using synthetic and real data, assessing the methods in terms of their error rates, power, agreement with a reference result, and the risk of taking a different decision regarding the rejection of the null hypotheses (known as the resampling risk). In the third part, we investigate and compare the different methods for assessment of cortical volume and area from magnetic resonance images using surface-based methods. Using data from young adults born with very low birth weight and coetaneous controls, we show that instead of volume, the permutation-based non-parametric combination (NPC) of thickness and area is a more sensitive option for studying joint effects on these two quantities, giving equal weight to variation in both, and allowing a better characterisation of biological processes that can affect brain morphology.
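The machinery being extended is the ordinary permutation test; a minimal two-sample version under full exchangeability (the simple case, with none of the thesis's block structure or sign flipping) is:

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=10000, seed=0):
    """P-value for a difference in means by random relabelling, valid
    when all observations are exchangeable under the null."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = np.mean(x) - np.mean(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = pooled[:len(x)].mean() - pooled[len(x):].mean()
        hits += abs(stat) >= abs(observed)
    return (hits + 1) / (n_perm + 1)
```

The thesis's multi-level exchangeability blocks amount to restricting the shuffle to permutations that respect the block hierarchy, leaving the joint distribution unaltered.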
10

Plan, Yaniv. "Compressed Sensing, Sparse Approximation, and Low-Rank Matrix Estimation." Thesis, 2011. https://thesis.library.caltech.edu/6259/1/thesis.pdf.

Abstract:

The importance of sparse signal structures has been recognized in a plethora of applications ranging from medical imaging to group disease testing to radar technology. It has been shown in practice that various signals of interest may be (approximately) sparsely modeled, and that sparse modeling is often beneficial, or even indispensable to signal recovery. Alongside an increase in applications, a rich theory of sparse and compressible signal recovery has recently been developed under the names compressed sensing (CS) and sparse approximation (SA). This revolutionary research has demonstrated that many signals can be recovered from severely undersampled measurements by taking advantage of their inherent low-dimensional structure. More recently, an offshoot of CS and SA has been a focus of research on other low-dimensional signal structures such as matrices of low rank. Low-rank matrix recovery (LRMR) is demonstrating a rapidly growing array of important applications such as quantum state tomography, triangulation from incomplete distance measurements, recommender systems (e.g., the Netflix problem), and system identification and control.

In this dissertation, we examine CS, SA, and LRMR from a theoretical perspective. We consider a variety of different measurement and signal models, both random and deterministic, and mainly ask two questions.

How many measurements are necessary? How large is the recovery error?

We give theoretical lower bounds for both of these questions, including oracle and minimax lower bounds for the error. However, the main emphasis of the thesis is to demonstrate the efficacy of convex optimization (in particular, l1- and nuclear-norm-minimization-based programs) in CS, SA, and LRMR. We derive upper bounds for the number of measurements required and the error achieved by convex optimization, which in many cases match the lower bounds up to constant or logarithmic factors. The majority of these results do not require the restricted isometry property (RIP), a ubiquitous condition in the literature.
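As a small illustration of the l1-minimization programs the thesis analyzes, basis pursuit (min ||x||_1 subject to Ax = y) can be recast as a linear program via the splitting x = u - v with u, v >= 0; the sketch below uses SciPy's generic LP solver and is not taken from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. A x = y, as an LP: minimize sum(u) + sum(v)
    over u, v >= 0 with A(u - v) = y, then recover x = u - v."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```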


Book chapters on the topic "Low-rank matrix approximation"

1

Kannan, Ramakrishnan, Mariya Ishteva, Barry Drake, and Haesun Park. "Bounded Matrix Low Rank Approximation." In Signals and Communication Technology, 89–118. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48331-2_4.

2

Friedland, Shmuel, and Venu Tammali. "Low-Rank Approximation of Tensors." In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory, 377–411. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15260-8_14.

3

Dewilde, Patrick, and Alle-Jan van der Veen. "Low-Rank Matrix Approximation and Subspace Tracking." In Time-Varying Systems and Computations, 307–33. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4757-2817-0_11.

4

Zhang, Huaxiang, Zhichao Wang, and Linlin Cao. "Fast Nyström for Low Rank Matrix Approximation." In Advanced Data Mining and Applications, 456–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35527-1_38.

5

Deshpande, Amit, and Santosh Vempala. "Adaptive Sampling and Fast Low-Rank Matrix Approximation." In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 292–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11830924_28.

6

Evensen, Geir, Femke C. Vossepoel, and Peter Jan van Leeuwen. "Localization and Inflation." In Springer Textbooks in Earth Sciences, Geography and Environment, 111–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96709-3_10.

Abstract:
Localization and inflation have become essential means of mitigating the effects of the low-rank approximation in ensemble methods. Localization increases the effective rank of the ensemble covariance matrix and allows it to fit a large number of independent observations. Thus, we use localization to reduce sampling errors, in combination with inflation, to reduce the underestimation of the ensemble variance caused by the low-rank approximation. These methods are essential for high-dimensional applications, and this chapter will give a general introduction to various formulations of localization and inflation methods.
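A minimal sketch of the localization step for a one-dimensional state, using a Gaussian taper in place of the Gaspari-Cohn function usually employed (illustrative assumptions, not the chapter's code):

```python
import numpy as np

def localized_covariance(ensemble, coords, lengthscale):
    """Schur (elementwise) product of the rank-deficient ensemble
    covariance with a distance-based taper raises its effective rank.
    ensemble: n x N matrix of N members; coords: n grid positions."""
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
    P = X @ X.T / (ensemble.shape[1] - 1)                 # rank <= N - 1
    d = np.abs(coords[:, None] - coords[None, :])
    taper = np.exp(-0.5 * (d / lengthscale) ** 2)
    return taper * P
```

Multiplicative inflation is simpler still: scale the anomalies X by a factor slightly greater than one before forming P, offsetting the variance underestimation.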
7

Li, Chong-Ya, Wenzheng Bao, Zhipeng Li, Youhua Zhang, Yong-Li Jiang, and Chang-An Yuan. "Local Sensitive Low Rank Matrix Approximation via Nonconvex Optimization." In Intelligent Computing Methodologies, 771–81. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63315-2_67.

8

Wacira, Joseph Muthui, Dinna Ranirina, and Bubacarr Bah. "Low Rank Matrix Approximation for Imputing Missing Categorical Data." In Artificial Intelligence Research, 242–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95070-5_16.

9

Wu, Jiangang, and Shizhong Liao. "Accuracy-Preserving and Scalable Column-Based Low-Rank Matrix Approximation." In Knowledge Science, Engineering and Management, 236–47. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25159-2_22.

10

Mantzaflaris, Angelos, Bert Jüttler, B. N. Khoromskij, and Ulrich Langer. "Matrix Generation in Isogeometric Analysis by Low Rank Tensor Approximation." In Curves and Surfaces, 321–40. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22804-4_24.


Conference papers on the topic "Low-rank matrix approximation"

1

Kannan, Ramakrishnan, Mariya Ishteva, and Haesun Park. "Bounded Matrix Low Rank Approximation." In 2012 IEEE 12th International Conference on Data Mining (ICDM). IEEE, 2012. http://dx.doi.org/10.1109/icdm.2012.131.

2

Li, Chong-Ya, Lin Zhu, Wen-Zheng Bao, Yong-Li Jiang, Chang-An Yuan, and De-Shuang Huang. "Convex local sensitive low rank matrix approximation." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965863.

3

van der Veen, Alle-Jan. "Schur method for low-rank matrix approximation." In SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation, edited by Franklin T. Luk. SPIE, 1994. http://dx.doi.org/10.1117/12.190848.

4

Nadakuditi, Raj Rao. "Exploiting random matrix theory to improve noisy low-rank matrix approximation." In 2011 45th Asilomar Conference on Signals, Systems and Computers. IEEE, 2011. http://dx.doi.org/10.1109/acssc.2011.6190110.

5

Tatsukawa, Manami, and Mirai Tanaka. "Box Constrained Low-rank Matrix Approximation with Missing Values." In 7th International Conference on Operations Research and Enterprise Systems. SCITEPRESS - Science and Technology Publications, 2018. http://dx.doi.org/10.5220/0006612100780084.

6

Zheng, Yinqiang, Guangcan Liu, S. Sugimoto, Shuicheng Yan, and M. Okutomi. "Practical low-rank matrix approximation under robust L1-norm." In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012. http://dx.doi.org/10.1109/cvpr.2012.6247828.

7

Alelyani, Salem, and Huan Liu. "Supervised Low Rank Matrix Approximation for Stable Feature Selection." In 2012 Eleventh International Conference on Machine Learning and Applications (ICMLA). IEEE, 2012. http://dx.doi.org/10.1109/icmla.2012.61.

8

Liu, Yang, Wenji Chen, and Yong Guan. "Monitoring Traffic Activity Graphs with low-rank matrix approximation." In 2012 IEEE 37th Conference on Local Computer Networks (LCN 2012). IEEE, 2012. http://dx.doi.org/10.1109/lcn.2012.6423680.

9

Wang, Hengyou, Ruizhen Zhao, Yigang Cen, and Fengzhen Zhang. "Low-rank matrix recovery based on smooth function approximation." In 2016 IEEE 13th International Conference on Signal Processing (ICSP). IEEE, 2016. http://dx.doi.org/10.1109/icsp.2016.7877928.

10

Kaloorazi, Maboud F., and Jie Chen. "Low-rank Matrix Approximation Based on Intermingled Randomized Decomposition." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683284.
