Academic literature on the topic 'Low-Rank matrix approximation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Low-Rank matrix approximation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Low-Rank matrix approximation"
Liu, Ting, Mingjian Sun, Naizhang Feng, Minghua Wang, Deying Chen, and Yi Shen. "Sparse photoacoustic microscopy based on low-rank matrix approximation." Chinese Optics Letters 14, no. 9 (2016): 091701–91705. http://dx.doi.org/10.3788/col201614.091701.
Parekh, Ankit, and Ivan W. Selesnick. "Enhanced Low-Rank Matrix Approximation." IEEE Signal Processing Letters 23, no. 4 (April 2016): 493–97. http://dx.doi.org/10.1109/lsp.2016.2535227.
Fomin, Fedor V., Petr A. Golovach, and Fahad Panolan. "Parameterized low-rank binary matrix approximation." Data Mining and Knowledge Discovery 34, no. 2 (January 2, 2020): 478–532. http://dx.doi.org/10.1007/s10618-019-00669-5.
Fomin, Fedor V., Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, and Saket Saurabh. "Approximation Schemes for Low-rank Binary Matrix Approximation Problems." ACM Transactions on Algorithms 16, no. 1 (January 11, 2020): 1–39. http://dx.doi.org/10.1145/3365653.
Zhang, Zhenyue, and Keke Zhao. "Low-Rank Matrix Approximation with Manifold Regularization." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 7 (July 2013): 1717–29. http://dx.doi.org/10.1109/tpami.2012.274.
Xu, An-Bao, and Dongxiu Xie. "Low-rank approximation pursuit for matrix completion." Mechanical Systems and Signal Processing 95 (October 2017): 77–89. http://dx.doi.org/10.1016/j.ymssp.2017.03.024.
Barlow, Jesse L., and Hasan Erbay. "Modifiable low-rank approximation to a matrix." Numerical Linear Algebra with Applications 16, no. 10 (October 2009): 833–60. http://dx.doi.org/10.1002/nla.651.
Jia, Yuheng, Hui Liu, Junhui Hou, and Qingfu Zhang. "Clustering Ensemble Meets Low-rank Tensor Approximation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7970–78. http://dx.doi.org/10.1609/aaai.v35i9.16972.
Tropp, Joel A., Alp Yurtsever, Madeleine Udell, and Volkan Cevher. "Practical Sketching Algorithms for Low-Rank Matrix Approximation." SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1454–85. http://dx.doi.org/10.1137/17m1111590.
Liu, Huafeng, Liping Jing, Yuhua Qian, and Jian Yu. "Adaptive Local Low-rank Matrix Approximation for Recommendation." ACM Transactions on Information Systems 37, no. 4 (December 10, 2019): 1–34. http://dx.doi.org/10.1145/3360488.
Full textDissertations / Theses on the topic "Low-Rank matrix approximation"
Blanchard, Pierre. "Fast hierarchical algorithms for the low-rank approximation of matrices, with applications to materials physics, geostatistics and data analysis." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0016/document.
Advanced techniques for the low-rank approximation of matrices are crucial dimension-reduction tools in many domains of modern scientific computing. Hierarchical approaches such as H2-matrices, in particular the Fast Multipole Method (FMM), exploit the block low-rank structure of certain matrices to reduce the cost of n-body problems from O(n^2) to O(n) operations. In order to better handle kernels of various kinds, kernel-independent FMM formulations have recently arisen, such as polynomial-interpolation-based FMM. However, these are hardly tractable for high-dimensional tensorial kernels, so we designed a new, highly efficient interpolation-based FMM, called the Uniform FMM, and implemented it in the parallel library ScalFMM. The method relies on an equispaced interpolation grid and the Fast Fourier Transform (FFT). Performance and accuracy were compared with the Chebyshev-interpolation-based FMM. Numerical experiments on artificial benchmarks showed that the loss of accuracy induced by the interpolation scheme was largely compensated by the FFT optimization. First, we extended both interpolation-based FMMs to the computation of the isotropic elastic fields involved in Dislocation Dynamics (DD) simulations. Second, we used our new FMM algorithm to accelerate a rank-r randomized SVD and thus efficiently generate multivariate Gaussian random variables on large heterogeneous grids in O(n) operations. Finally, we designed a new efficient dimensionality-reduction algorithm based on dense random projection in order to investigate new ways of characterizing biodiversity, namely from a geometric point of view.
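The rank-r randomized SVD mentioned in the abstract follows a standard sketch-then-project pattern (in the style of Halko, Martinsson, and Tropp). The following is a minimal NumPy sketch of that generic pattern, not the thesis's FMM-accelerated implementation; the function name and the oversampling and power-iteration parameters are chosen here for illustration:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, seed=0):
    """Sketch-based rank-r SVD: sample the range of A, then SVD a small matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversample, min(m, n))
    Omega = rng.standard_normal((n, k))   # Gaussian test matrix
    Y = A @ Omega                         # sketch of the range of A
    for _ in range(n_power_iter):         # power iterations sharpen accuracy
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the sketch
    B = Q.T @ A                           # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# On an exactly rank-5 matrix the approximation is accurate to round-off.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, rank=5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The only large-matrix operations are the products with `A`, which is exactly where a fast matrix-vector routine such as an FMM can be plugged in.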
Lee, Joonseok. "Local approaches for collaborative filtering." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53846.
Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.
Galvin, Timothy Matthew. "Faster streaming algorithms for low-rank matrix approximations." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91810.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 53-55).
Low-rank matrix approximations are used in a significant number of applications. We present new algorithms for generating such approximations in a streaming fashion that expand upon recently discovered matrix sketching techniques. We test our approaches on real and synthetic data to explore runtime and accuracy performance. We apply our algorithms to the technique of Latent Semantic Indexing on a widely studied data set. We find our algorithms provide strong empirical results.
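A well-known deterministic algorithm in this family of streaming matrix sketching techniques is Frequent Directions (Liberty), which processes rows one at a time in bounded memory. The sketch below is illustrative rather than a reproduction of the thesis's algorithms; it assumes the number of columns is at least twice the sketch size `ell`:

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A through a 2*ell-row buffer; whenever the buffer
    fills, shrink the singular values so only ell directions survive.
    Assumes A.shape[1] >= 2*ell."""
    n, d = A.shape
    B = np.zeros((2 * ell, d))
    filled = 0
    for i in range(n):
        if filled == 2 * ell:
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            shrunk = np.sqrt(np.maximum(s**2 - s[ell - 1] ** 2, 0.0))
            B = shrunk[:, None] * Vt          # rows ell..2*ell-1 are now zero
            filled = ell
        B[filled] = A[i]
        filled += 1
    # Final shrink so the sketch has at most ell nonzero rows.
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    shrunk = np.sqrt(np.maximum(s**2 - s[ell - 1] ** 2, 0.0))
    return (shrunk[:, None] * Vt)[:ell]

# When rank(A) < ell, the shrinkage steps are no-ops and B^T B = A^T A exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 30))
B = frequent_directions(A, ell=8)
```

The sketch `B` guarantees that `A.T @ A - B.T @ B` is positive semidefinite with small norm, which is what makes it usable in place of `A` for downstream low-rank tasks such as Latent Semantic Indexing.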
Abbas, Kinan. "Dématriçage et démélange conjoints d'images multispectrales." Electronic Thesis or Diss., Littoral, 2024. http://www.theses.fr/2024DUNK0710.
In this thesis, we consider images sensed by a miniaturized multispectral (MS) snapshot camera. Contrary to classical RGB cameras, MS imaging makes it possible to observe a scene at tens of different wavelengths, allowing a much more precise analysis of the observed content. While most MS cameras require a scan to generate an image, snapshot MS cameras can instantaneously provide images, or even videos. When the camera is miniaturized, instead of a 3D data cube it acquires a 2D image, each pixel being associated with a filtered version of the theoretical spectrum it should acquire. Post-processing, called "demosaicing," is then necessary to reconstruct a data cube. Furthermore, in each pixel of the image, the observed spectrum can be considered a mixture of the spectra of the pure materials present in the pixel. Estimating these spectra, named endmembers, as well as their spatial distribution (named abundances) is called "unmixing." While the classical pipeline for processing snapshot MS images is to first demosaice and then unmix the data, the work introduced in this thesis explores alternative strategies in which demosaicing and unmixing are performed jointly. Extending classical assumptions from sparse component analysis and remote-sensing MS unmixing, we propose two different frameworks to restore and unmix the acquired scene, based on low-rank matrix completion and on deconvolution, respectively, the latter being specifically designed for the Fabry-Perot filters used in the considered camera. The four proposed methods exhibit far better unmixing than the variants they extend when the latter are applied to demosaiced data, while achieving demosaicing performance similar to state-of-the-art methods. The last part of this thesis introduces a deconvolution approach to restore the spectra of such cameras. Our contribution lies in the weights of the penalization term, which are set automatically using the entropy of the Fabry-Perot harmonics. The proposed method exhibits better spectrum restoration than the strategy proposed by the camera manufacturer and than the classical deconvolution technique it extends.
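Low-rank matrix completion, one of the two frameworks mentioned above, can be illustrated in its generic form (unrelated to the specific camera model) by a Hard-Impute-style alternation between a rank-r SVD projection and re-imposing the observed entries; the function name and parameters below are illustrative:

```python
import numpy as np

def complete_low_rank(M, mask, rank, n_iter=500):
    """Fill missing entries by alternating a rank-r truncated SVD with
    re-imposing the observed entries (Hard-Impute-style iteration)."""
    X = np.where(mask, M, 0.0)           # initialize missing entries at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project onto rank-r matrices
        X = np.where(mask, M, X)                   # keep the observed entries
    return X

# Recover a rank-2 matrix from roughly 70% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.7
X = complete_low_rank(M, mask, rank=2)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Under the usual incoherence conditions met by such random low-rank matrices, the missing entries are recovered accurately; in a demosaicing setting the mask would instead encode which spectral band each physical pixel observed.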
Castorena, Juan. "Remote-Sensed LIDAR Using Random Impulsive Scans." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581855.
Vinyes, Marina. "Convex matrix sparsity for demixing with an application to graphical model structure estimation." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1130/document.
The goal of machine learning is to learn, from data, a model that makes accurate predictions on data it has not seen before. In order to obtain a model that generalizes to new data and avoids overfitting, we need to constrain the model. These constraints usually encode some a priori knowledge of the model's structure. Early approaches relied on regularization, first ridge regression and later the Lasso, the latter inducing sparsity in the solution. Sparsity, also known as parsimony, has emerged as a fundamental concept in machine learning. Parsimonious models are appealing since they provide more interpretability and better generalization (avoiding overfitting) through their reduced number of parameters. Beyond general sparsity, models are in many cases constrained structurally so that they have a simple representation in terms of some fundamental elements, consisting for example of a collection of specific vectors, matrices, or tensors. These fundamental elements are called atoms, and atomic norms provide a general framework for estimating such models. The goal of this thesis is to use the framework of convex sparsity provided by atomic norms to study a form of matrix sparsity. First, we develop an efficient algorithm based on Frank-Wolfe methods that is particularly well adapted to problems with an atomic-norm regularization. Then, we focus on structure estimation in Gaussian graphical models, where the structure of the graph is encoded in the precision matrix, and we study the case with unobserved variables. We propose a convex formulation with an algorithmic approach and provide a theoretical result stating necessary conditions for recovering the desired structure. Finally, we consider the problem of demixing a signal into two or more components via the minimization of a sum of norms or gauges, each encoding a structural prior on the corresponding component to recover. In particular, we provide general exact recovery guarantees in the noiseless setting based on incoherence measures.
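The atomic norm whose atoms are unit-norm rank-one matrices is the nuclear norm (the sum of singular values), and its proximal operator, singular-value soft-thresholding, is the basic building block of most convex low-rank programs. A small sketch of that operator (not the thesis's Frank-Wolfe algorithm):

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Build a matrix with singular values (3, 2, 1); svt with tau = 1.5
# shrinks them to (1.5, 0.5, 0), reducing the rank from 3 to 2.
rng = np.random.default_rng(0)
U, _, Vt = np.linalg.svd(rng.standard_normal((5, 3)), full_matrices=False)
X = (U * np.array([3.0, 2.0, 1.0])) @ Vt
s_out = np.linalg.svd(svt(X, 1.5), compute_uv=False)
```

Soft-thresholding the spectrum is exactly what makes nuclear-norm regularization promote low rank, in the same way the Lasso's soft-thresholding of coefficients promotes entrywise sparsity.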
Sadek, El Mostafa. "Méthodes itératives pour la résolution d'équations matricielles." Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0434/document.
In this thesis, we study iterative methods for solving large matrix equations such as the Lyapunov, Sylvester, Stein, Riccati, and nonsymmetric algebraic Riccati equations, looking for the most efficient and fastest methods for large problems. We propose iterative methods based on projection onto block Krylov subspaces K_m(A, V) = Range{V, AV, ..., A^(m-1)V}, or onto block extended Krylov subspaces K^e_m(A, V) = Range{V, A^(-1)V, AV, A^(-2)V, A^2V, ..., A^(m-1)V, A^(-m+1)V}. These methods are generally more efficient and faster for large problems. We first treat the numerical solution of the linear matrix equations of Lyapunov, Sylvester, and Stein type. We propose a new iterative method based on a minimal residual (MR) condition and projection onto the block extended Krylov subspace K^e_m(A, V). The extended block Arnoldi algorithm yields a projected minimization problem of small size, which is then solved by direct or iterative methods. We also introduce a minimal residual method based on the global approach instead of the block approach, projecting onto the global extended Krylov subspace K^e_m(A, V) = Span{V, A^(-1)V, AV, A^(-2)V, A^2V, ..., A^(m-1)V, A^(-m+1)V}. Secondly, we focus on nonlinear matrix equations, especially the matrix Riccati equation in the continuous case and the nonsymmetric case applied to transportation problems, using Newton's method and the MINRES algorithm to solve the projected minimization problem. Finally, we propose two new iterative methods for solving large nonsymmetric Riccati equations: the first is based on the extended block Arnoldi algorithm and a Galerkin condition; the second is of Newton-Krylov type, based on Newton's method and the solution of a large Sylvester matrix equation by a block Krylov method. For all these methods, approximations are given in low-rank form, which allows us to save memory. We give numerical examples that show the effectiveness of the proposed methods for large sizes.
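The block-Krylov-plus-Galerkin idea described above can be sketched generically: build an orthonormal basis of K_m(A, V), solve the small projected Lyapunov equation, and lift the solution back in low-rank factored form. This is an illustrative sketch using plain block Arnoldi with reorthogonalized Gram-Schmidt, not the extended-Krylov or minimal-residual variants developed in the thesis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def krylov_lyapunov(A, B, m):
    """Galerkin projection of A X + X A^T + B B^T = 0 onto the block
    Krylov subspace K_m(A, B); returns Z with X ~= Z Z^T (low-rank factor)."""
    V, _ = np.linalg.qr(B)
    blocks = [V]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for _pass in range(2):               # block Gram-Schmidt, two passes
            for Vi in blocks:
                W = W - Vi @ (Vi.T @ W)
        W, _ = np.linalg.qr(W)
        blocks.append(W)
    Vm = np.hstack(blocks)
    Am = Vm.T @ A @ Vm
    Bm = Vm.T @ B
    Y = solve_continuous_lyapunov(Am, -Bm @ Bm.T)  # small projected equation
    w, Q = np.linalg.eigh((Y + Y.T) / 2)           # Y is symmetric PSD here
    return Vm @ (Q * np.sqrt(np.maximum(w, 0.0)))

# Symmetric stable A: once the Krylov space fills R^n, the residual vanishes.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 20))
A = -(G @ G.T + np.eye(20))                  # symmetric negative definite
B = rng.standard_normal((20, 2))
Z = krylov_lyapunov(A, B, m=10)
X = Z @ Z.T
res = np.linalg.norm(A @ X + X @ A.T + B @ B.T) / np.linalg.norm(B @ B.T)
```

Only the factor `Z` (n rows, m*p columns) needs to be stored, which is the memory saving the abstract refers to.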
Winkler, Anderson M. "Widening the applicability of permutation inference." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ce166876-0aa3-449e-8496-f28bf189960c.
Full textPlan, Yaniv. "Compressed Sensing, Sparse Approximation, and Low-Rank Matrix Estimation." Thesis, 2011. https://thesis.library.caltech.edu/6259/1/thesis.pdf.
The importance of sparse signal structures has been recognized in a plethora of applications ranging from medical imaging to group disease testing to radar technology. It has been shown in practice that various signals of interest may be (approximately) sparsely modeled, and that sparse modeling is often beneficial, or even indispensable to signal recovery. Alongside an increase in applications, a rich theory of sparse and compressible signal recovery has recently been developed under the names compressed sensing (CS) and sparse approximation (SA). This revolutionary research has demonstrated that many signals can be recovered from severely undersampled measurements by taking advantage of their inherent low-dimensional structure. More recently, an offshoot of CS and SA has been a focus of research on other low-dimensional signal structures such as matrices of low rank. Low-rank matrix recovery (LRMR) is demonstrating a rapidly growing array of important applications such as quantum state tomography, triangulation from incomplete distance measurements, recommender systems (e.g., the Netflix problem), and system identification and control.
In this dissertation, we examine CS, SA, and LRMR from a theoretical perspective. We consider a variety of different measurement and signal models, both random and deterministic, and mainly ask two questions.
How many measurements are necessary? How large is the recovery error?
We give theoretical lower bounds for both of these questions, including oracle and minimax lower bounds for the error. However, the main emphasis of the thesis is to demonstrate the efficacy of convex optimization---in particular l1- and nuclear-norm-minimization-based programs---in CS, SA, and LRMR. We derive upper bounds on the number of measurements required and on the error achieved by convex optimization, which in many cases match the lower bounds up to constant or logarithmic factors. The majority of these results do not require the restricted isometry property (RIP), a ubiquitous condition in the literature.
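The baseline against which such recovery errors are measured is the classical Eckart-Young-Mirsky theorem: the truncated SVD gives the best rank-k approximation, and its spectral-norm error is exactly the (k+1)-th singular value. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Ak = (U[:, :k] * s[:k]) @ Vt[:k]        # best rank-k approximation
err = np.linalg.norm(A - Ak, ord=2)     # spectral-norm approximation error
match = np.isclose(err, s[k])           # equals sigma_{k+1}; here True
```

Nuclear-norm-based recovery guarantees are typically stated relative to this best-rank-k error, which is why it appears in both the upper and lower bounds.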
Book chapters on the topic "Low-Rank matrix approximation"
Kannan, Ramakrishnan, Mariya Ishteva, Barry Drake, and Haesun Park. "Bounded Matrix Low Rank Approximation." In Signals and Communication Technology, 89–118. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48331-2_4.
Friedland, Shmuel, and Venu Tammali. "Low-Rank Approximation of Tensors." In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory, 377–411. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15260-8_14.
Dewilde, Patrick, and Alle-Jan van der Veen. "Low-Rank Matrix Approximation and Subspace Tracking." In Time-Varying Systems and Computations, 307–33. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4757-2817-0_11.
Zhang, Huaxiang, Zhichao Wang, and Linlin Cao. "Fast Nyström for Low Rank Matrix Approximation." In Advanced Data Mining and Applications, 456–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35527-1_38.
Deshpande, Amit, and Santosh Vempala. "Adaptive Sampling and Fast Low-Rank Matrix Approximation." In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 292–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11830924_28.
Evensen, Geir, Femke C. Vossepoel, and Peter Jan van Leeuwen. "Localization and Inflation." In Springer Textbooks in Earth Sciences, Geography and Environment, 111–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96709-3_10.
Li, Chong-Ya, Wenzheng Bao, Zhipeng Li, Youhua Zhang, Yong-Li Jiang, and Chang-An Yuan. "Local Sensitive Low Rank Matrix Approximation via Nonconvex Optimization." In Intelligent Computing Methodologies, 771–81. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63315-2_67.
Wacira, Joseph Muthui, Dinna Ranirina, and Bubacarr Bah. "Low Rank Matrix Approximation for Imputing Missing Categorical Data." In Artificial Intelligence Research, 242–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95070-5_16.
Wu, Jiangang, and Shizhong Liao. "Accuracy-Preserving and Scalable Column-Based Low-Rank Matrix Approximation." In Knowledge Science, Engineering and Management, 236–47. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25159-2_22.
Mantzaflaris, Angelos, Bert Jüttler, B. N. Khoromskij, and Ulrich Langer. "Matrix Generation in Isogeometric Analysis by Low Rank Tensor Approximation." In Curves and Surfaces, 321–40. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22804-4_24.
Full textConference papers on the topic "Low-Rank matrix approximation"
Kannan, Ramakrishnan, Mariya Ishteva, and Haesun Park. "Bounded Matrix Low Rank Approximation." In 2012 IEEE 12th International Conference on Data Mining (ICDM). IEEE, 2012. http://dx.doi.org/10.1109/icdm.2012.131.
Li, Chong-Ya, Lin Zhu, Wen-Zheng Bao, Yong-Li Jiang, Chang-An Yuan, and De-Shuang Huang. "Convex local sensitive low rank matrix approximation." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965863.
van der Veen, Alle-Jan. "Schur method for low-rank matrix approximation." In SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation, edited by Franklin T. Luk. SPIE, 1994. http://dx.doi.org/10.1117/12.190848.
Nadakuditi, Raj Rao. "Exploiting random matrix theory to improve noisy low-rank matrix approximation." In 2011 45th Asilomar Conference on Signals, Systems and Computers. IEEE, 2011. http://dx.doi.org/10.1109/acssc.2011.6190110.
Tatsukawa, Manami, and Mirai Tanaka. "Box Constrained Low-rank Matrix Approximation with Missing Values." In 7th International Conference on Operations Research and Enterprise Systems. SCITEPRESS - Science and Technology Publications, 2018. http://dx.doi.org/10.5220/0006612100780084.
Zheng, Yinqiang, Guangcan Liu, S. Sugimoto, Shuicheng Yan, and M. Okutomi. "Practical low-rank matrix approximation under robust L1-norm." In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012. http://dx.doi.org/10.1109/cvpr.2012.6247828.
Alelyani, Salem, and Huan Liu. "Supervised Low Rank Matrix Approximation for Stable Feature Selection." In 2012 Eleventh International Conference on Machine Learning and Applications (ICMLA). IEEE, 2012. http://dx.doi.org/10.1109/icmla.2012.61.
Liu, Yang, Wenji Chen, and Yong Guan. "Monitoring Traffic Activity Graphs with low-rank matrix approximation." In 2012 IEEE 37th Conference on Local Computer Networks (LCN 2012). IEEE, 2012. http://dx.doi.org/10.1109/lcn.2012.6423680.
Wang, Hengyou, Ruizhen Zhao, Yigang Cen, and Fengzhen Zhang. "Low-rank matrix recovery based on smooth function approximation." In 2016 IEEE 13th International Conference on Signal Processing (ICSP). IEEE, 2016. http://dx.doi.org/10.1109/icsp.2016.7877928.
Kaloorazi, Maboud F., and Jie Chen. "Low-rank Matrix Approximation Based on Intermingled Randomized Decomposition." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683284.