Journal articles on the topic 'Sparse Low-Rank Representation'


Consult the top 50 journal articles for your research on the topic 'Sparse Low-Rank Representation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhu, Hengdong, Ting Yang, Yingcang Ma, and Xiaofei Yang. "Multi-view Re-weighted Sparse Subspace Clustering with Intact Low-rank Space Learning." 電腦學刊 33, no. 4 (August 2022): 121–31. http://dx.doi.org/10.53106/199115992022083304010.

Abstract:
In this paper, we propose a new Multi-view Re-weighted Sparse Subspace Clustering with Intact Low-rank Space Learning (ILrS-MRSSC) method, which seeks a sparse representation of the complete space of information. Specifically, this method integrates the complementary information inherent in multiple views of the data, learns a complete latent low-rank representation space, and constructs a sparse information matrix to reconstruct the data. The correlation between multi-view learning and subspace clustering is strengthened to the greatest extent, so that the subspace representation is more intuitive and accurate. The model is solved by the augmented Lagrange multiplier (ALM) method with alternating direction minimization. Experiments on multiple benchmark data sets verify the effectiveness of this method.
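ALM schemes of the kind this abstract describes alternate closed-form proximal updates, and the sparse update reduces to entrywise soft-thresholding. A minimal NumPy sketch of that generic building block (illustrative only, not the authors' code):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1: shrink each entry toward zero
    by tau, zeroing entries whose magnitude is below tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Entries within tau of zero vanish; the rest move toward zero by tau.
shrunk = soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0)  # -> [2.0, 0.0, 0.0]
```

This one-line operator is the workhorse of the sparse subproblems in most ALM/ADMM subspace clustering solvers.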
2

Zhao, Jianxi, and Lina Zhao. "Low-rank and sparse matrices fitting algorithm for low-rank representation." Computers & Mathematics with Applications 79, no. 2 (January 2020): 407–25. http://dx.doi.org/10.1016/j.camwa.2019.07.012.

3

Kim, Hyuncheol, and Joonki Paik. "Video Summarization using Low-Rank Sparse Representation." IEIE Transactions on Smart Processing & Computing 7, no. 3 (June 30, 2018): 236–44. http://dx.doi.org/10.5573/ieiespc.2018.7.3.236.

4

Cheng, Shilei, Song Gu, Maoquan Ye, and Mei Xie. "Action Recognition Using Low-Rank Sparse Representation." IEICE Transactions on Information and Systems E101.D, no. 3 (2018): 830–34. http://dx.doi.org/10.1587/transinf.2017edl8176.

5

Wang, Jun, Daming Shi, Dansong Cheng, Yongqiang Zhang, and Junbin Gao. "LRSR: Low-Rank-Sparse representation for subspace clustering." Neurocomputing 214 (November 2016): 1026–37. http://dx.doi.org/10.1016/j.neucom.2016.07.015.

6

Du, Haishun, Xudong Zhang, Qingpu Hu, and Yandong Hou. "Sparse representation-based robust face recognition by graph regularized low-rank sparse representation recovery." Neurocomputing 164 (September 2015): 220–29. http://dx.doi.org/10.1016/j.neucom.2015.02.067.

7

Zhang, Xiujun, Chen Xu, Min Li, and Xiaoli Sun. "Sparse and Low-Rank Coupling Image Segmentation Model Via Nonconvex Regularization." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 02 (February 27, 2015): 1555004. http://dx.doi.org/10.1142/s0218001415550046.

Abstract:
This paper investigates how to boost region-based image segmentation by inheriting the advantages of sparse representation and low-rank representation. A novel image segmentation model, called the nonconvex regularization based sparse and low-rank coupling model, is presented for this purpose. We aim at finding the optimal solution that is simultaneously sparse and low-rank. This is achieved by relaxing the sparse representation problem to L1/2 norm minimization rather than L1 norm minimization, and the low-rank representation problem to S1/2 norm minimization rather than nuclear norm minimization. The coupled model can be solved efficiently through the Augmented Lagrange Multiplier (ALM) method and the half-threshold operator. Compared with other state-of-the-art methods, the new method better captures the global structure of the whole data, is more robust, and achieves competitive segmentation accuracy. Experiments on two public image segmentation databases validate the superiority of our method.
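Both relaxations above are solved by proximal steps on singular values or entries. A NumPy sketch of the convex variant, singular value thresholding (the prox of the nuclear norm); note that the paper's S1/2 penalty replaces this soft-shrinkage with a half-threshold on the singular values, which is not shown here:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm.
    Soft-shrink the singular values and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking diag(3, 1) by tau = 2 kills the second singular value,
# leaving a rank-1 matrix diag(1, 0).
L = svt(np.diag([3.0, 1.0]), 2.0)
```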
8

Zheng, Chun-Hou, Yi-Fu Hou, and Jun Zhang. "Improved sparse representation with low-rank representation for robust face recognition." Neurocomputing 198 (July 2016): 114–24. http://dx.doi.org/10.1016/j.neucom.2015.07.146.

9

Du, Shiqiang, Yuqing Shi, Guangrong Shan, Weilan Wang, and Yide Ma. "Tensor low-rank sparse representation for tensor subspace learning." Neurocomputing 440 (June 2021): 351–64. http://dx.doi.org/10.1016/j.neucom.2021.02.002.

10

Zou, Dongqing, Xiaowu Chen, Guangying Cao, and Xiaogang Wang. "Unsupervised Video Matting via Sparse and Low-Rank Representation." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 6 (June 1, 2020): 1501–14. http://dx.doi.org/10.1109/tpami.2019.2895331.

11

He, Zhi-Fen, and Ming Yang. "Sparse and low-rank representation for multi-label classification." Applied Intelligence 49, no. 5 (November 26, 2018): 1708–23. http://dx.doi.org/10.1007/s10489-018-1345-5.

12

Zhang, Jian, Hongsong Dong, Wenlian Gao, Li Zhang, Zhiwen Xue, and Xiangfei Shen. "Structured low-rank representation learning for hyperspectral sparse unmixing." International Journal of Remote Sensing 45, no. 2 (January 15, 2024): 351–75. http://dx.doi.org/10.1080/01431161.2023.2295836.

13

You, Cong-Zhe, Zhen-Qiu Shu, and Hong-Hui Fan. "Low-rank sparse subspace clustering with a clean dictionary." Journal of Algorithms & Computational Technology 15 (January 2021): 174830262098369. http://dx.doi.org/10.1177/1748302620983690.

Abstract:
Low-Rank Representation (LRR) and Sparse Subspace Clustering (SSC) are among the most studied subspace clustering algorithms. SSC induces sparsity by minimizing the ℓ1-norm of the representation matrix, while LRR promotes a low-rank structure by minimizing the nuclear norm. In this paper, considering the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise, we pose this as a non-convex optimization problem whose goal is to decompose the corrupted data matrix into the sum of a clean, self-expressive dictionary and a matrix of noise. We propose a new algorithm, named Low-Rank and Sparse Subspace Clustering with a Clean dictionary (LRS2C2), by combining SSC and LRR, as the representation is often both sparse and low-rank. The effectiveness of the proposed algorithm is demonstrated through experiments on motion segmentation and image clustering.
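In the noiseless case, the LRR subproblem min ||C||_* s.t. X = XC has a textbook closed form: C = V Vᵀ, the shape interaction matrix built from the right singular vectors of X. A NumPy sketch of that classical result (our illustration, not the LRS2C2 implementation):

```python
import numpy as np

def lrr_noiseless(X, tol=1e-10):
    """Closed-form noiseless LRR: C = V_r V_r^T, where V_r holds the right
    singular vectors of X for nonzero singular values. Then X @ C == X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Vr = Vt[s > tol * s.max()].T      # basis of the row space of X
    return Vr @ Vr.T

rng = np.random.default_rng(0)
# 8 samples (columns) of dimension 6, lying in a rank-2 subspace
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))
C = lrr_noiseless(X)                  # self-expressive coefficient matrix
```

C is symmetric, idempotent, and has rank equal to the rank of the data, which is why LRR captures the global subspace structure jointly rather than patch by patch.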
14

Dong, Le, and Yuan Yuan. "Sparse Constrained Low Tensor Rank Representation Framework for Hyperspectral Unmixing." Remote Sensing 13, no. 8 (April 11, 2021): 1473. http://dx.doi.org/10.3390/rs13081473.

Abstract:
Recently, non-negative tensor factorization (NTF) has attracted the attention of researchers as a very powerful tool. It is used in the unmixing of hyperspectral images (HSI) due to its excellent expressive ability, describing data without information loss. However, most existing unmixing methods based on NTF fail to fully explore the unique properties of the data, for example the low rank that exists in both the spectral and spatial domains. To explore this low-rank structure, in this paper we learn the different low-rank representations of HSI in the spectral, spatial, and non-local similarity modes. Firstly, the HSI is divided into many patches, and these patches are clustered into multiple groups according to similarity. Each similarity group constitutes a 4-D tensor, including two spatial modes, a spectral mode, and a non-local similarity mode, which has strong low-rank properties. Secondly, a low-rank regularization with a logarithmic function is designed and embedded in the NTF framework to model the spatial, spectral, and non-local similarity modes of these 4-D tensors. In addition, the sparsity of the abundance tensor is integrated into the unmixing framework through the L2,1 norm to improve the unmixing performance. Experiments on three real data sets illustrate the stability and effectiveness of our algorithm compared with five state-of-the-art methods.
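The L2,1 sparsity penalty mentioned above has a simple row-wise proximal operator: rows whose norm falls below the threshold are zeroed, which promotes row sparsity in the abundance matrix. A generic NumPy sketch (not the paper's solver):

```python
import numpy as np

def prox_l21(A, tau):
    """Proximal operator of tau * ||A||_{2,1}: shrink each row of A
    toward zero by tau in Euclidean norm, zeroing rows with norm < tau."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * A

A = np.array([[3.0, 4.0],    # row norm 5  -> shrinks to norm 4
              [0.1, 0.1]])   # row norm ~0.14 < tau -> zeroed
B = prox_l21(A, 1.0)
```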
15

Li, Zhao, Le Wang, Tao Yu, and Bing Liang Hu. "Image Super-Resolution via Low-Rank Representation." Applied Mechanics and Materials 568-570 (June 2014): 652–55. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.652.

Abstract:
This paper presents a novel method for solving single-image super-resolution problems based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that represent the patches as linear combinations of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of LRRs between the low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly, and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
16

Gan, Bin, Chun-Hou Zheng, Jun Zhang, and Hong-Qiang Wang. "Sparse Representation for Tumor Classification Based on Feature Extraction Using Latent Low-Rank Representation." BioMed Research International 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/420856.

Abstract:
Accurate tumor classification is crucial to the proper treatment of cancer. To date, sparse representation (SR) has shown great performance for tumor classification. This paper proposes a new SR-based method for tumor classification using gene expression data. In the proposed method, we first use latent low-rank representation to extract salient features and remove noise from the original sample data. Then we use a sparse representation classifier (SRC) to build the tumor classification model. The experimental results on several real-world data sets show that our method is more efficient and more effective than previous classification methods including SVM, SRC, and LASSO.
17

Xu, Yong, Xiaozhao Fang, Jian Wu, Xuelong Li, and David Zhang. "Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation." IEEE Transactions on Image Processing 25, no. 2 (February 2016): 850–63. http://dx.doi.org/10.1109/tip.2015.2510498.

18

Audouze, Christophe, and Prasanth B. Nair. "Sparse low-rank separated representation models for learning from data." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 475, no. 2221 (January 2019): 20180490. http://dx.doi.org/10.1098/rspa.2018.0490.

Abstract:
We consider the problem of learning a multivariate function from a set of scattered observations using a sparse low-rank separated representation (SSR) model. The model structure considered here is promising for high-dimensional learning problems; however, existing training algorithms based on alternating least-squares (ALS) are known to have convergence difficulties, particularly when the rank of the model is greater than 1. In the present work, we supplement the model structure with sparsity constraints to ensure the well-posedness of the approximation problem. We propose two fast training algorithms to estimate the model parameters: (i) a cyclic coordinate descent algorithm and (ii) a block coordinate descent (BCD) algorithm. While the first algorithm is not provably convergent owing to the non-convexity of the optimization problem, the BCD algorithm guarantees convergence to a Nash equilibrium point. The computational cost of the proposed algorithms is shown to scale linearly with respect to all of the parameters, in contrast to methods based on ALS. Numerical studies on synthetic and real-world regression datasets indicate that the proposed SSR model structure holds significant potential for machine learning problems.
19

Liu, Zhonghua, Weihua Ou, Wenpeng Lu, and Lin Wang. "Discriminative feature extraction based on sparse and low-rank representation." Neurocomputing 362 (October 2019): 129–38. http://dx.doi.org/10.1016/j.neucom.2019.06.073.

20

Du, Hai-Shun, Qing-Pu Hu, Dian-Feng Qiao, and Ioannis Pitas. "Robust face recognition via low-rank sparse representation-based classification." International Journal of Automation and Computing 12, no. 6 (November 6, 2015): 579–87. http://dx.doi.org/10.1007/s11633-015-0901-2.

21

Huang, Jie, Ting-Zhu Huang, Liang-Jian Deng, and Xi-Le Zhao. "Joint-Sparse-Blocks and Low-Rank Representation for Hyperspectral Unmixing." IEEE Transactions on Geoscience and Remote Sensing 57, no. 4 (April 2019): 2419–38. http://dx.doi.org/10.1109/tgrs.2018.2873326.

22

Zhao, Yong-Qiang, and Jingxiang Yang. "Hyperspectral Image Denoising via Sparse Representation and Low-Rank Constraint." IEEE Transactions on Geoscience and Remote Sensing 53, no. 1 (January 2015): 296–308. http://dx.doi.org/10.1109/tgrs.2014.2321557.

23

Ma, Guanqun, Ting-Zhu Huang, Jie Huang, and Chao-Chao Zheng. "Local Low-Rank and Sparse Representation for Hyperspectral Image Denoising." IEEE Access 7 (2019): 79850–65. http://dx.doi.org/10.1109/access.2019.2923255.

24

Hou, Yi-Fu, Zhan-Li Sun, Yan-Wen Chong, and Chun-Hou Zheng. "Low-Rank and Eigenface Based Sparse Representation for Face Recognition." PLoS ONE 9, no. 10 (October 21, 2014): e110318. http://dx.doi.org/10.1371/journal.pone.0110318.

25

Zhao, Feng, Weijian Si, and Zheng Dou. "Sparse media image restoration based on collaborative low rank representation." Multimedia Tools and Applications 77, no. 8 (June 29, 2017): 10051–62. http://dx.doi.org/10.1007/s11042-017-4958-5.

26

Tang, Kewei, Jie Zhang, Zhixun Su, and Jiangxin Dong. "Bayesian Low-Rank and Sparse Nonlinear Representation for Manifold Clustering." Neural Processing Letters 44, no. 3 (December 29, 2015): 719–33. http://dx.doi.org/10.1007/s11063-015-9490-x.

27

Nazari Siahsar, Mohammad Amir, Saman Gholtashi, Amin Roshandel Kahoo, Hosein Marvi, and Alireza Ahmadifard. "Sparse time-frequency representation for seismic noise reduction using low-rank and sparse decomposition." GEOPHYSICS 81, no. 2 (March 1, 2016): V117–V124. http://dx.doi.org/10.1190/geo2015-0341.1.

Abstract:
Attenuation of random noise is a major concern in seismic data processing. This kind of noise is usually characterized by random oscillation in seismic data over the entire time and frequency range. We introduced and evaluated a low-rank and sparse decomposition-based method for seismic random noise attenuation. The proposed method, a trace-by-trace algorithm, starts by transforming the seismic signal into a new sparse subspace using the synchrosqueezing transform. Then, the sparse time-frequency representation (TFR) matrix is decomposed into two parts: (a) a low-rank component and (b) a sparse component, using bilateral random projection. Although seismic data are not exactly low-rank in the sparse TFR domain, they can be assumed to be semi-low-rank or approximately low-rank. Hence, we can recover the denoised seismic signal by minimizing the mixed [Formula: see text] norms' objective function, exploiting the intrinsically semi-low-rank property of the seismic data and the sparsity of random noise in the sparse TFR domain. The proposed method was tested on synthetic and real data. In the synthetic case, the data were contaminated by random noise. Denoising was carried out by means of the classical [Formula: see text] singular spectrum analysis (SSA) and the [Formula: see text] deconvolution method for comparison. The [Formula: see text] deconvolution and the classical [Formula: see text] SSA method failed to properly reduce the noise and to recover the desired signal. We have also tested the proposed method on a prestack real data set from an oil field in the southwest of Iran. Through synthetic and real tests, the proposed method is determined to be an effective, amplitude-preserving, and robust tool that gives superior results over classical [Formula: see text] SSA as a conventional algorithm for denoising seismic data.
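The low-rank-plus-sparse split described above can be sketched with a generic inexact-ALM robust-PCA loop, a common stand-in for the bilateral random projection used in the paper (NumPy; all names are our own illustration, not the authors' code):

```python
import numpy as np

def low_rank_sparse_split(D, lam=None, n_iter=200):
    """Decompose D ~= L + S with L low-rank and S sparse via inexact ALM
    (robust PCA). Alternates singular value thresholding for L and
    entrywise soft-thresholding for S while enforcing D = L + S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    Y = np.zeros_like(D)   # Lagrange multiplier
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # low-rank update: shrink singular values by 1/mu
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft-thresholding by lam/mu
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (D - L - S)
        mu *= rho
        if np.linalg.norm(D - L - S) < 1e-7 * norm_D:
            break
    return L, S
```

On a synthetic rank-1 matrix corrupted by a few large sparse spikes, this loop recovers both components to high accuracy.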
28

Liao, Jiayu, Xiaolan Liu, and Mengying Xie. "Inductive Latent Space Sparse and Low-rank Subspace Clustering Algorithm." Journal of Physics: Conference Series 2224, no. 1 (April 1, 2022): 012124. http://dx.doi.org/10.1088/1742-6596/2224/1/012124.

Abstract:
Sparse subspace clustering (SSC) and low-rank representation (LRR) are the most popular algorithms for subspace clustering. However, SSC and LRR are transductive methods and cannot handle new data not involved in the training set: when a new sample arrives, SSC and LRR must recompute over all the data, which is time-consuming. On the other hand, for high-dimensional data, dimensionality reduction is performed before running SSC and LRR, which isolates the dimensionality reduction from the subsequent subspace clustering. To overcome these shortcomings, this paper proposes two sparse and low-rank subspace clustering algorithms that perform dimensionality reduction and subspace clustering simultaneously and can deal with out-of-sample data. The proposed algorithms divide the whole data set into in-sample data and out-of-sample data. The in-sample data are used to learn the projection matrix and the sparse or low-rank representation matrix in the low-dimensional space. The membership of in-sample data is obtained by spectral clustering. In the low-dimensional embedding space, the membership of out-of-sample data is obtained by collaborative representation classification (CRC). Experimental results on a variety of data sets verify that our proposed algorithms can handle new data in an efficient way.
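Collaborative representation classification, used here for the out-of-sample assignment, amounts to ridge-regularized coding over all in-sample data followed by a class-wise residual comparison. A toy NumPy sketch (our own illustration, not the paper's code):

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-2):
    """CRC: code the new sample y over the dictionary X (columns = samples)
    with a ridge penalty, then assign the class whose samples best
    reconstruct y."""
    n = X.shape[1]
    # ridge-regularized coding: alpha = (X^T X + lam I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    classes = np.unique(labels)
    resid = [np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
             for c in classes]
    return classes[int(np.argmin(resid))]

# Two well-separated classes in 2-D: class 0 near [1,0], class 1 near [0,1].
X = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = np.array([0, 0, 1, 1])
```

A query lying on a class's subspace is reconstructed almost entirely by that class's columns, so its residual for that class is the smallest.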
29

Xie, Shicheng, Shun Wang, Chuanming Song, and Xianghai Wang. "Hyperspectral Image Reconstruction Based on Spatial-Spectral Domains Low-Rank Sparse Representation." Remote Sensing 14, no. 17 (August 25, 2022): 4184. http://dx.doi.org/10.3390/rs14174184.

Abstract:
The enormous amount of data generated by hyperspectral remote sensing images (HSI), combined with the spatial channel's limited and fragile bandwidth, creates serious transmission, storage, and application challenges. HSI reconstruction based on compressed sensing has become a frontier area, and its effectiveness depends heavily on the exploitation and sparse representation of HSI information correlation. In this paper, we propose a low-rank sparse constrained HSI reconstruction model (LRCoSM) that is based on joint spatial-spectral HSI sparseness. In the spectral dimension, a spectral-domain sparsity measure and the representation of the joint spectral dimensional plane are proposed for the first time. A Gaussian mixture model (GMM), based on unsupervised adaptive parameter learning from external datasets, is used to cluster similar patches of joint spectral plane features, capturing the correlation of non-local structural image patches in the spectral dimension, while low-rank decomposition of the clustered similar patches extracts feature information, effectively improving the low-rank approximate sparse representation of similar spectral patches. In the spatial dimension, local-nonlocal HSI similarity is explored to refine sparse prior constraints. Together, the spectral and spatial sparse constraints improve HSI reconstruction quality. Experimental results at various sampling rates on four publicly available datasets show that, compared with six currently popular reconstruction algorithms, the proposed algorithm obtains high PSNR and FSIM values, effectively maintains the spectral curves for few-band datasets, and exhibits strong robustness and generalization ability at different sampling rates and on other datasets.
30

Sun, Yubao, Zhi Li, and Min Wu. "A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering." Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/572753.

Abstract:
This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.
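The spectral step described above operates on a hypergraph Laplacian. Assuming the common Zhou-style normalization L = I − Dv^{-1/2} H W De^{-1} Hᵀ Dv^{-1/2}, where H is the vertex-by-edge incidence matrix, a NumPy sketch:

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian from a |V| x |E| incidence matrix H
    and optional hyperedge weights w:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    nv, ne = H.shape
    w = np.ones(ne) if w is None else w
    dv = H @ w                 # weighted vertex degrees
    de = H.sum(axis=0)         # hyperedge degrees (vertices per edge)
    Dv_isqrt = np.diag(1.0 / np.sqrt(dv))
    return np.eye(nv) - Dv_isqrt @ H @ np.diag(w / de) @ H.T @ Dv_isqrt

# 4 vertices, 2 hyperedges: e1 = {0,1,2}, e2 = {2,3}
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
L = hypergraph_laplacian(H)
```

As with an ordinary graph Laplacian, L is symmetric and the degree-scaled constant vector Dv^{1/2}·1 lies in its null space, which is what spectral decomposition exploits for clustering.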
31

Cai, T. Tony, and Anru Zhang. "Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-Rank Matrices." IEEE Transactions on Information Theory 60, no. 1 (January 2014): 122–32. http://dx.doi.org/10.1109/tit.2013.2288639.

32

Zhang, Xiaohui (张晓慧), Runfang Hao (郝润芳), and Tingyu Li (李廷鱼). "Hyperspectral Abnormal Target Detection Based on Low Rank and Sparse Matrix Decomposition-Sparse Representation." Laser & Optoelectronics Progress 56, no. 4 (2019): 042801. http://dx.doi.org/10.3788/lop56.042801.

33

Yang, Jie, Jun Ma, Khin Than Win, Junbin Gao, and Zhenyu Yang. "Low-rank and sparse representation based learning for cancer survivability prediction." Information Sciences 582 (January 2022): 573–92. http://dx.doi.org/10.1016/j.ins.2021.10.013.

34

Yang, Lu, Gongping Yang, Kuikui Wang, Fanchang Hao, and Yilong Yin. "Finger Vein Recognition via Sparse Reconstruction Error Constrained Low-Rank Representation." IEEE Transactions on Information Forensics and Security 16 (2021): 4869–81. http://dx.doi.org/10.1109/tifs.2021.3118894.

35

Li, Long, and Zheng Liu. "Noise-robust HRRP target recognition method via sparse-low-rank representation." Electronics Letters 53, no. 24 (November 2017): 1602–4. http://dx.doi.org/10.1049/el.2017.2960.

36

Tao, JianWen, Shiting Wen, and Wenjun Hu. "Robust domain adaptation image classification via sparse and low rank representation." Journal of Visual Communication and Image Representation 33 (November 2015): 134–48. http://dx.doi.org/10.1016/j.jvcir.2015.09.005.

37

Chen, Jie, and Zhang Yi. "Sparse representation for face recognition by discriminative low-rank matrix recovery." Journal of Visual Communication and Image Representation 25, no. 5 (July 2014): 763–73. http://dx.doi.org/10.1016/j.jvcir.2014.01.015.

38

Yang, Wanqi, Yinghuan Shi, Yang Gao, Lei Wang, and Ming Yang. "Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation." IEEE Transactions on Neural Networks and Learning Systems 29, no. 12 (December 2018): 6276–91. http://dx.doi.org/10.1109/tnnls.2018.2828699.

39

Gu, Song, Lihui Wang, Wei Hao, Yingjie Du, Jian Wang, and Weirui Zhang. "Online Video Object Segmentation via Boundary-Constrained Low-Rank Sparse Representation." IEEE Access 7 (2019): 53520–33. http://dx.doi.org/10.1109/access.2019.2912760.

40

He, YuJie, Min Li, JinLi Zhang, and Qi An. "Small infrared target detection based on low-rank and sparse representation." Infrared Physics & Technology 68 (January 2015): 98–109. http://dx.doi.org/10.1016/j.infrared.2014.10.022.

41

Ge, Ting, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, and Shanxiang Mu. "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation." Computational Intelligence and Neuroscience 2019 (July 1, 2019): 1–11. http://dx.doi.org/10.1155/2019/9378014.

Abstract:
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation that incorporates a sparsity-inducing regularization term is adopted to model this part. Then, the linearized alternating direction method with adaptive penalty (LADMAP) is selected to solve the model, and the brain lesions are obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. The experimental results on data from brain tumor patients and multiple sclerosis patients reveal that the proposed method is superior to several existing methods in terms of segmentation accuracy while performing the segmentation automatically.
42

Yang, Shicheng, Le Zhang, Lianghua He, and Ying Wen. "Sparse Low-Rank Component-Based Representation for Face Recognition With Low-Quality Images." IEEE Transactions on Information Forensics and Security 14, no. 1 (January 2019): 251–61. http://dx.doi.org/10.1109/tifs.2018.2849883.

43

Kim, Hyuncheol, and Joonki Paik. "Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity." Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/147353.

Abstract:
We address the object tracking problem as a multitask feature learning process based on low-rank representation of features with joint sparsity. We first select features with low-rank representation within a number of initial frames to obtain a subspace basis. Next, the features represented by the low-rank and sparse property are learned using a modified joint sparsity-based multitask feature learning framework. Both the features and the sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be solved by a few sequences of efficient closed-form updates. Since the proposed method performs the feature learning problem in both a multitask and a low-rank manner, it can not only reduce the dimension but also improve the tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods on challenging image sequences.
44

Zhan, Shanhua, Weijun Sun, and Peipei Kang. "Robust Latent Common Subspace Learning for Transferable Feature Representation." Electronics 11, no. 5 (March 4, 2022): 810. http://dx.doi.org/10.3390/electronics11050810.

Abstract:
This paper proposes a novel robust latent common subspace learning (RLCSL) method by integrating low-rank and sparse constraints into a joint learning framework. Specifically, we transform the data from the source and target domains into a latent common subspace to perform data reconstruction, i.e., the transformed source data is used to reconstruct the transformed target data. We impose joint low-rank and sparse constraints on the reconstruction coefficient matrix, which achieves the following objectives: (1) the data from different domains can be interlaced by using the low-rank constraint; (2) the data from different domains but with the same label can be aligned together by using the sparse constraint. In this way, the new feature representation in the latent common subspace is discriminative and transferable. To learn a suitable classifier, we also integrate classifier learning and feature representation learning into a unified objective, so that the high-level semantic label (data label) fully guides the learning of these two tasks. Experiments are conducted on diverse data sets for image, object, and document classification, and encouraging experimental results show that the proposed method outperforms some state-of-the-art methods.
45

Yang, Bo, Kunkun Tong, Xueqing Zhao, Shanmin Pang, and Jinguang Chen. "Multilabel Classification Using Low-Rank Decomposition." Discrete Dynamics in Nature and Society 2020 (April 7, 2020): 1–8. http://dx.doi.org/10.1155/2020/1279253.

Abstract:
In the multilabel learning framework, each instance is no longer associated with a single semantic concept but instead carries concept ambiguity. Specifically, the ambiguity of an instance in the input space means that there are multiple corresponding labels in the output space. Most existing multilabel classification methods use a binary annotation vector to denote the multiple semantic concepts: +1 denotes that the instance has the corresponding label, while −1 means the opposite. However, this label representation contains too little semantic information to truly express the differences among multiple labels. Therefore, we propose a new approach that transforms binary labels into real-valued labels. We adopt a low-rank decomposition to extract latent label information and then combine this information with the original features to generate new features. We then use sparse representation to reconstruct each new instance, so that the reconstruction error can also be applied in the label space. In this way, we finally achieve the label conversion. Extensive experiments validate that the proposed method achieves results comparable to or even better than other state-of-the-art algorithms.
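The core label-conversion idea, recovering real-valued latent labels from a ±1 annotation matrix via low-rank decomposition, can be sketched with a plain truncated SVD. This is an illustration only; the rank `k` and all names are assumptions, not details from the paper:

```python
import numpy as np

def latent_labels(Y, k):
    """Rank-k reconstruction of a +1/-1 label matrix (rows: instances,
    columns: labels). The result is real-valued, so correlated labels
    receive graded scores instead of hard binary annotations."""
    U, s, Vt = np.linalg.svd(Y.astype(float), full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Toy example: labels 0 and 1 tend to co-occur across instances.
Y = np.array([[ 1,  1, -1],
              [ 1,  1, -1],
              [-1, -1,  1],
              [ 1, -1, -1]])
Y_real = latent_labels(Y, k=2)
```

The rank-`k` reconstruction smooths each entry toward the dominant label correlations, which is the latent information the paper's method combines with the original features.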
46

Zhang, Chao, Huaxiong Li, Wei Lv, Zizheng Huang, Yang Gao, and Chunlin Chen. "Enhanced Tensor Low-Rank and Sparse Representation Recovery for Incomplete Multi-View Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11174–82. http://dx.doi.org/10.1609/aaai.v37i9.26323.

Abstract:
Incomplete multi-view clustering (IMVC) has attracted remarkable attention due to the emergence of multi-view data with missing views in real applications. Recent methods attempt to recover the missing information to address the IMVC problem. However, they generally cannot fully explore the underlying properties and correlations of data similarities across views. This paper proposes a novel Enhanced Tensor Low-rank and Sparse Representation Recovery (ETLSRR) method, which reformulates the IMVC problem as a joint problem of incomplete similarity graph learning and complete tensor representation recovery. Specifically, ETLSRR learns the intra-view similarity graphs and constructs a 3-way tensor by stacking the graphs to explore the inter-view correlations. To alleviate the negative influence of missing views and data noise, ETLSRR decomposes the tensor into two parts: a sparse tensor and an intrinsic tensor, which model the noise and the underlying true data similarities, respectively. Both the global low-rank and the local structured sparse characteristics of the intrinsic tensor are considered, which enhances the discrimination of the similarity matrix. Moreover, instead of using the convex tensor nuclear norm, ETLSRR introduces a generalized non-convex tensor low-rank regularization to alleviate the biased approximation. Experiments on several datasets demonstrate the effectiveness of our method compared with the state-of-the-art methods.
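The tensor low-rank step at the heart of such recovery models can be illustrated with a minimal t-SVD thresholding routine on a stack of per-view similarity graphs. This is a hedged sketch of the convex tensor-nuclear-norm baseline that ETLSRR generalizes with a non-convex penalty; the dimensions and names are assumptions:

```python
import numpy as np

def tensor_svt(T, tau):
    """t-SVD singular-value thresholding: the proximal step of the convex
    tensor nuclear norm, applied to an n x n x V stack of similarity graphs."""
    Tf = np.fft.fft(T, axis=2)          # DFT along the view mode
    for v in range(Tf.shape[2]):        # threshold each frontal slice
        U, s, Vh = np.linalg.svd(Tf[:, :, v], full_matrices=False)
        Tf[:, :, v] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(Tf, axis=2))
```

With `tau = 0` the routine reproduces the input exactly; increasing `tau` shrinks the tensor's spectrum, which is how the low-rank prior suppresses noise while keeping the shared cross-view structure.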
47

Xue, Jize, Yong-Qiang Zhao, Yuanyang Bu, Wenzhi Liao, Jonathan Cheung-Wai Chan, and Wilfried Philips. "Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution." IEEE Transactions on Image Processing 30 (2021): 3084–97. http://dx.doi.org/10.1109/tip.2021.3058590.

48

Liu, Xiaolan, Miao Yi, Le Han, and Xue Deng. "A subspace clustering algorithm based on simultaneously sparse and low-rank representation." Journal of Intelligent & Fuzzy Systems 33, no. 1 (June 22, 2017): 621–33. http://dx.doi.org/10.3233/jifs-16771.

49

Ding, Yun, Yanwen Chong, and Shaoming Pan. "Sparse and Low-Rank Representation With Key Connectivity for Hyperspectral Image Classification." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 5609–22. http://dx.doi.org/10.1109/jstars.2020.3023483.

50

Xie, Wenbin, Hong Yin, Meini Wang, Yan Shao, and Bosi Yu. "Low‐rank structured sparse representation and reduced dictionary learning‐based abnormity detection." IET Computer Vision 13, no. 1 (December 4, 2018): 8–14. http://dx.doi.org/10.1049/iet-cvi.2018.5256.
