Journal articles on the topic "Sparse Low-Rank Representation"

Follow this link to see other types of publications on the topic: Sparse Low-Rank Representation.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Sparse Low-Rank Representation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Zhu, Hengdong, Ting Yang, Yingcang Ma, and Xiaofei Yang. "Multi-view Re-weighted Sparse Subspace Clustering with Intact Low-rank Space Learning". 電腦學刊 33, no. 4 (August 2022): 121–31. http://dx.doi.org/10.53106/199115992022083304010.

Abstract
In this paper, we propose a new Multi-view Re-weighted Sparse Subspace Clustering with Intact Low-rank Space Learning (ILrS-MRSSC) method, which seeks a sparse representation of the intact space of information. Specifically, the method integrates the complementary information inherent in multiple views of the data, learns an intact space of latent low-rank representation, and constructs a sparse information matrix to reconstruct the data. The correlation between multi-view learning and subspace clustering is strengthened to the greatest extent, so that the subspace representation is more intuitive and accurate. The optimal solution of the model is obtained with the augmented Lagrangian multiplier (ALM) method with alternating direction minimization. Experiments on multiple benchmark data sets verify the effectiveness of this method.
2

Zhao, Jianxi, and Lina Zhao. "Low-rank and sparse matrices fitting algorithm for low-rank representation". Computers & Mathematics with Applications 79, no. 2 (January 2020): 407–25. http://dx.doi.org/10.1016/j.camwa.2019.07.012.

3

Kim, Hyuncheol, and Joonki Paik. "Video Summarization using Low-Rank Sparse Representation". IEIE Transactions on Smart Processing & Computing 7, no. 3 (June 30, 2018): 236–44. http://dx.doi.org/10.5573/ieiespc.2018.7.3.236.

4

CHENG, Shilei, Song GU, Maoquan YE, and Mei XIE. "Action Recognition Using Low-Rank Sparse Representation". IEICE Transactions on Information and Systems E101.D, no. 3 (2018): 830–34. http://dx.doi.org/10.1587/transinf.2017edl8176.

5

Wang, Jun, Daming Shi, Dansong Cheng, Yongqiang Zhang, and Junbin Gao. "LRSR: Low-Rank-Sparse representation for subspace clustering". Neurocomputing 214 (November 2016): 1026–37. http://dx.doi.org/10.1016/j.neucom.2016.07.015.

6

Du, Haishun, Xudong Zhang, Qingpu Hu, and Yandong Hou. "Sparse representation-based robust face recognition by graph regularized low-rank sparse representation recovery". Neurocomputing 164 (September 2015): 220–29. http://dx.doi.org/10.1016/j.neucom.2015.02.067.

7

Zhang, Xiujun, Chen Xu, Min Li, and Xiaoli Sun. "Sparse and Low-Rank Coupling Image Segmentation Model Via Nonconvex Regularization". International Journal of Pattern Recognition and Artificial Intelligence 29, no. 02 (February 27, 2015): 1555004. http://dx.doi.org/10.1142/s0218001415550046.

Abstract
This paper investigates how to boost region-based image segmentation by inheriting the advantages of sparse representation and low-rank representation. A novel image segmentation model, called the nonconvex-regularization-based sparse and low-rank coupling model, is presented for this purpose. We aim at finding the optimal solution that is simultaneously sparse and low-rank. This is achieved by relaxing the sparse representation problem as L1/2 norm minimization rather than L1 norm minimization, and the low-rank representation problem as S1/2 norm minimization rather than nuclear norm minimization. The coupled model can be solved efficiently through the Augmented Lagrange Multiplier (ALM) method and the half-threshold operator. Compared with other state-of-the-art methods, the new method better captures the global structure of the whole data, is more robust, and achieves competitive segmentation accuracy. Experiments on two public image segmentation databases validate the superiority of our method.
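For reference, the convex counterparts of these relaxations have simple closed-form proximal operators; the following minimal numpy sketch shows them (the paper's nonconvex L1/2 and S1/2 half-thresholding variants would replace the soft-shrinkage step, which is omitted here):

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (the L1 relaxation): entrywise shrinkage.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def singular_value_threshold(X, tau):
    # Proximal operator of tau * ||X||_* (the nuclear-norm relaxation):
    # soft-threshold the singular values of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

# Thresholding a noisy rank-3 matrix suppresses the small noise directions.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
X_noisy = X + 0.01 * rng.standard_normal((20, 20))
X_denoised = singular_value_threshold(X_noisy, 1.0)
```

Both operators are the basic building blocks inside the ALM iterations that several of the papers in this list employ.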
8

Zheng, Chun-Hou, Yi-Fu Hou, and Jun Zhang. "Improved sparse representation with low-rank representation for robust face recognition". Neurocomputing 198 (July 2016): 114–24. http://dx.doi.org/10.1016/j.neucom.2015.07.146.

9

Du, Shiqiang, Yuqing Shi, Guangrong Shan, Weilan Wang, and Yide Ma. "Tensor low-rank sparse representation for tensor subspace learning". Neurocomputing 440 (June 2021): 351–64. http://dx.doi.org/10.1016/j.neucom.2021.02.002.

10

Zou, Dongqing, Xiaowu Chen, Guangying Cao, and Xiaogang Wang. "Unsupervised Video Matting via Sparse and Low-Rank Representation". IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 6 (June 1, 2020): 1501–14. http://dx.doi.org/10.1109/tpami.2019.2895331.

11

He, Zhi-Fen, and Ming Yang. "Sparse and low-rank representation for multi-label classification". Applied Intelligence 49, no. 5 (November 26, 2018): 1708–23. http://dx.doi.org/10.1007/s10489-018-1345-5.

12

Zhang, Jian, Hongsong Dong, Wenlian Gao, Li Zhang, Zhiwen Xue, and Xiangfei Shen. "Structured low-rank representation learning for hyperspectral sparse unmixing". International Journal of Remote Sensing 45, no. 2 (January 15, 2024): 351–75. http://dx.doi.org/10.1080/01431161.2023.2295836.

13

You, Cong-Zhe, Zhen-Qiu Shu, and Hong-Hui Fan. "Low-rank sparse subspace clustering with a clean dictionary". Journal of Algorithms & Computational Technology 15 (January 2021): 174830262098369. http://dx.doi.org/10.1177/1748302620983690.

Abstract
Low-Rank Representation (LRR) and Sparse Subspace Clustering (SSC) are hot topics among subspace clustering algorithms. SSC induces sparsity by minimizing the ℓ1-norm of the representation matrix, while LRR promotes a low-rank structure by minimizing the nuclear norm. In this paper, considering the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise, we pose this problem as a non-convex optimization problem in which the goal is to decompose the corrupted data matrix as the sum of a clean, self-expressive dictionary plus a matrix of noise. We propose a new algorithm, named Low-Rank and Sparse Subspace Clustering with a Clean dictionary (LRS2C2), which combines SSC and LRR, as the representation is often both sparse and low-rank. The effectiveness of the proposed algorithm is demonstrated through experiments on motion segmentation and image clustering.
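The low-rank-plus-sparse split that this family of methods builds on can be illustrated with a generic robust-PCA sketch using the inexact augmented Lagrange multiplier scheme. This is not the LRS2C2 algorithm itself; the parameter choices below follow common robust-PCA defaults and are assumptions:

```python
import numpy as np

def rpca_inexact_alm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into a low-rank part L and a sparse part S (robust PCA)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(D, 2)
    Y = D / max(norm_two, np.abs(D).max() / lam)   # scaled dual-variable init
    mu = 1.25 / norm_two                           # penalty parameter
    mu_bar, rho = mu * 1e7, 1.5
    d_norm = np.linalg.norm(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # L-step: singular value thresholding of (D - S + Y/mu).
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft thresholding of (D - L + Y/mu).
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - L - S                              # primal residual
        Y = Y + mu * Z
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(Z) / d_norm < tol:
            break
    return L, S

# A rank-3 matrix with 5% gross sparse corruption is recovered almost exactly.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
S0 = np.zeros((40, 40))
idx = rng.choice(1600, size=80, replace=False)
S0.flat[idx] = 10.0 * rng.choice([-1.0, 1.0], size=80)
L_hat, S_hat = rpca_inexact_alm(L0 + S0)
```

The same two proximal steps, with different coupling terms, appear inside most of the ALM-based solvers cited on this page.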
14

Dong, Le, and Yuan Yuan. "Sparse Constrained Low Tensor Rank Representation Framework for Hyperspectral Unmixing". Remote Sensing 13, no. 8 (April 11, 2021): 1473. http://dx.doi.org/10.3390/rs13081473.

Abstract
Recently, non-negative tensor factorization (NTF) has attracted the attention of researchers as a very powerful tool. It is used in the unmixing of hyperspectral images (HSI) because of its excellent expressive ability, describing data without any information loss. However, most existing NTF-based unmixing methods fail to fully explore the unique properties of the data, for example, the low rank that exists in both the spectral and spatial domains. To explore this low-rank structure, in this paper we learn different low-rank representations of HSI in the spectral, spatial and non-local similarity modes. First, the HSI is divided into many patches, and these patches are clustered into multiple groups according to similarity. Each similarity group constitutes a 4-D tensor, including two spatial modes, a spectral mode and a non-local similarity mode, which has strong low-rank properties. Second, a low-rank regularization with a logarithmic function is designed and embedded in the NTF framework, modeling the spatial, spectral and non-local similarity modes of these 4-D tensors. In addition, the sparsity of the abundance tensor is integrated into the unmixing framework through the L2,1 norm to improve the unmixing performance. Experiments on three real data sets illustrate the stability and effectiveness of our algorithm compared with five state-of-the-art methods.
15

Li, Zhao, Le Wang, Tao Yu, and Bing Liang Hu. "Image Super-Resolution via Low-Rank Representation". Applied Mechanics and Materials 568-570 (June 2014): 652–55. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.652.

Abstract
This paper presents a novel method for solving single-image super-resolution problems based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that represent the patches as linear combinations of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of LRRs between a low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly, and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
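In the noise-free self-expressive case the lowest-rank representation has a known closed form: for X with skinny SVD X = UΣVᵀ, the minimizer of the nuclear norm of Z subject to X = XZ is the shape-interaction matrix Z = VVᵀ. A small numpy sketch of that closed form (illustrative only; the paper's dictionary-based variant replaces X with learned dictionaries):

```python
import numpy as np

def lrr_closed_form(X, tol=1e-10):
    # Lowest-rank Z minimizing ||Z||_* subject to X = X @ Z.
    # Closed form: Z = V V^T, with V the right singular vectors of X
    # truncated at the numerical rank.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))     # numerical rank of X
    V = Vt[:r].T
    return V @ V.T

# Columns of X drawn from a 2-dimensional subspace of R^10.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 30))
Z = lrr_closed_form(X)
```

The resulting Z is a symmetric projector whose rank equals the subspace dimension, which is why LRR captures the global structure of the whole patch collection at once.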
16

Gan, Bin, Chun-Hou Zheng, Jun Zhang, and Hong-Qiang Wang. "Sparse Representation for Tumor Classification Based on Feature Extraction Using Latent Low-Rank Representation". BioMed Research International 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/420856.

Abstract
Accurate tumor classification is crucial to the proper treatment of cancer. To date, sparse representation (SR) has shown great performance for tumor classification. This paper conceives a new SR-based method for tumor classification using gene expression data. In the proposed method, we first use latent low-rank representation to extract salient features and remove noise from the original sample data. Then we use a sparse representation classifier (SRC) to build the tumor classification model. The experimental results on several real-world data sets show that our method is more efficient and more effective than previous classification methods, including SVM, SRC, and LASSO.
17

Xu, Yong, Xiaozhao Fang, Jian Wu, Xuelong Li, and David Zhang. "Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation". IEEE Transactions on Image Processing 25, no. 2 (February 2016): 850–63. http://dx.doi.org/10.1109/tip.2015.2510498.

18

Audouze, Christophe, and Prasanth B. Nair. "Sparse low-rank separated representation models for learning from data". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 475, no. 2221 (January 2019): 20180490. http://dx.doi.org/10.1098/rspa.2018.0490.

Abstract
We consider the problem of learning a multivariate function from a set of scattered observations using a sparse low-rank separated representation (SSR) model. The model structure considered here is promising for high-dimensional learning problems; however, existing training algorithms based on alternating least-squares (ALS) are known to have convergence difficulties, particularly when the rank of the model is greater than 1. In the present work, we supplement the model structure with sparsity constraints to ensure the well posedness of the approximation problem. We propose two fast training algorithms to estimate the model parameters: (i) a cyclic coordinate descent algorithm and (ii) a block coordinate descent (BCD) algorithm. While the first algorithm is not provably convergent owing to the non-convexity of the optimization problem, the BCD algorithm guarantees convergence to a Nash equilibrium point. The computational cost of the proposed algorithms is shown to scale linearly with respect to all of the parameters in contrast to methods based on ALS. Numerical studies on synthetic and real-world regression datasets indicate that the proposed SSR model structure holds significant potential for machine learning problems.
19

Liu, Zhonghua, Weihua Ou, Wenpeng Lu, and Lin Wang. "Discriminative feature extraction based on sparse and low-rank representation". Neurocomputing 362 (October 2019): 129–38. http://dx.doi.org/10.1016/j.neucom.2019.06.073.

20

Du, Hai-Shun, Qing-Pu Hu, Dian-Feng Qiao, and Ioannis Pitas. "Robust face recognition via low-rank sparse representation-based classification". International Journal of Automation and Computing 12, no. 6 (November 6, 2015): 579–87. http://dx.doi.org/10.1007/s11633-015-0901-2.

21

Huang, Jie, Ting-Zhu Huang, Liang-Jian Deng, and Xi-Le Zhao. "Joint-Sparse-Blocks and Low-Rank Representation for Hyperspectral Unmixing". IEEE Transactions on Geoscience and Remote Sensing 57, no. 4 (April 2019): 2419–38. http://dx.doi.org/10.1109/tgrs.2018.2873326.

22

Zhao, Yong-Qiang, and Jingxiang Yang. "Hyperspectral Image Denoising via Sparse Representation and Low-Rank Constraint". IEEE Transactions on Geoscience and Remote Sensing 53, no. 1 (January 2015): 296–308. http://dx.doi.org/10.1109/tgrs.2014.2321557.

23

Ma, Guanqun, Ting-Zhu Huang, Jie Huang, and Chao-Chao Zheng. "Local Low-Rank and Sparse Representation for Hyperspectral Image Denoising". IEEE Access 7 (2019): 79850–65. http://dx.doi.org/10.1109/access.2019.2923255.

24

Hou, Yi-Fu, Zhan-Li Sun, Yan-Wen Chong, and Chun-Hou Zheng. "Low-Rank and Eigenface Based Sparse Representation for Face Recognition". PLoS ONE 9, no. 10 (October 21, 2014): e110318. http://dx.doi.org/10.1371/journal.pone.0110318.

25

Zhao, Feng, Weijian Si, and Zheng Dou. "Sparse media image restoration based on collaborative low rank representation". Multimedia Tools and Applications 77, no. 8 (June 29, 2017): 10051–62. http://dx.doi.org/10.1007/s11042-017-4958-5.

26

Tang, Kewei, Jie Zhang, Zhixun Su, and Jiangxin Dong. "Bayesian Low-Rank and Sparse Nonlinear Representation for Manifold Clustering". Neural Processing Letters 44, no. 3 (December 29, 2015): 719–33. http://dx.doi.org/10.1007/s11063-015-9490-x.

27

Nazari Siahsar, Mohammad Amir, Saman Gholtashi, Amin Roshandel Kahoo, Hosein Marvi, and Alireza Ahmadifard. "Sparse time-frequency representation for seismic noise reduction using low-rank and sparse decomposition". GEOPHYSICS 81, no. 2 (March 1, 2016): V117–V124. http://dx.doi.org/10.1190/geo2015-0341.1.

Abstract
Attenuation of random noise is a major concern in seismic data processing. This kind of noise is usually characterized by random oscillation in seismic data over the entire time and frequency. We introduced and evaluated a low-rank and sparse decomposition-based method for seismic random noise attenuation. The proposed method, which is a trace-by-trace algorithm, starts by transforming the seismic signal into a new sparse subspace using the synchrosqueezing transform. Then, the sparse time-frequency representation (TFR) matrix is decomposed into two parts: (a) a low-rank component and (b) a sparse component, using bilateral random projection. Although seismic data are not exactly low-rank in the sparse TFR domain, they can be assumed to be semi-low-rank or approximately low-rank. Hence, we can recover the denoised seismic signal by minimizing the mixed [Formula: see text] norms' objective function, considering the intrinsically semi-low-rank property of the seismic data and the sparsity of random noise in the sparse TFR domain. The proposed method was tested on synthetic and real data. In the synthetic case, the data were contaminated by random noise. Denoising was carried out by means of the [Formula: see text] classical singular spectrum analysis (SSA) and the [Formula: see text] deconvolution method for comparison. The [Formula: see text] deconvolution and the classical [Formula: see text] SSA method failed to properly reduce the noise and to recover the desired signal. We have also tested the proposed method on a prestack real data set from an oil field in the southwest of Iran. Through synthetic and real tests, the proposed method is determined to be an effective, amplitude-preserving, and robust tool that gives superior results over classical [Formula: see text] SSA as a conventional algorithm for denoising seismic data.
28

Liao, Jiayu, Xiaolan Liu, and Mengying Xie. "Inductive Latent Space Sparse and Low-rank Subspace Clustering Algorithm". Journal of Physics: Conference Series 2224, no. 1 (April 1, 2022): 012124. http://dx.doi.org/10.1088/1742-6596/2224/1/012124.

Abstract
Sparse subspace clustering (SSC) and low-rank representation (LRR) are the most popular algorithms for subspace clustering. However, SSC and LRR are transductive methods and cannot deal with new data not involved in the training data: when a new datum arrives, SSC and LRR need to recompute over all the data, which is time-consuming. On the other hand, for high-dimensional data, dimensionality reduction is first performed before running the SSC and LRR algorithms, which isolates the dimensionality reduction from the subsequent subspace clustering. To overcome these shortcomings, this paper proposes two sparse and low-rank subspace clustering algorithms based on simultaneous dimensionality reduction and subspace clustering that can deal with out-of-sample data. The proposed algorithms divide the whole data set into in-sample data and out-of-sample data. The in-sample data are used to learn the projection matrix and the sparse or low-rank representation matrix in the low-dimensional space. The membership of the in-sample data is obtained by spectral clustering. In the low-dimensional embedding space, the membership of the out-of-sample data is obtained by collaborative representation classification (CRC). Experimental results on a variety of data sets verify that our proposed algorithms can handle new data in an efficient way.
29

Xie, Shicheng, Shun Wang, Chuanming Song, and Xianghai Wang. "Hyperspectral Image Reconstruction Based on Spatial-Spectral Domains Low-Rank Sparse Representation". Remote Sensing 14, no. 17 (August 25, 2022): 4184. http://dx.doi.org/10.3390/rs14174184.

Abstract
The enormous amount of data generated by hyperspectral remote sensing images (HSI), combined with the spatial channel's limited and fragile bandwidth, creates serious transmission, storage, and application challenges. HSI reconstruction based on compressed sensing has become a frontier area, and its effectiveness depends heavily on exploiting and sparsely representing the correlation in HSI information. In this paper, we propose a low-rank sparse constrained HSI reconstruction model (LRCoSM) based on joint spatial-spectral HSI sparseness. In the spectral dimension, a spectral-domain sparsity measure and a representation of the joint spectral-dimensional plane are proposed for the first time. A Gaussian mixture model (GMM), based on unsupervised adaptive parameter learning from external datasets, is used to cluster similar patches of joint spectral-plane features, capturing the correlation of non-local image patches in the spectral dimension. Low-rank decomposition of the clustered similar patches then extracts feature information, effectively improving the low-rank approximate sparse representation of similar spectral-dimension patches. In the spatial dimension, local and non-local HSI similarity is explored to refine the sparse prior constraints. Together, the spectral and spatial sparse constraints improve HSI reconstruction quality. Experimental results at various sampling rates on four publicly available datasets show that, compared with six currently popular reconstruction algorithms, the proposed algorithm obtains high-quality PSNR and FSIM values and effectively maintains the spectral curves on few-band datasets, with strong robustness and generalization ability across sampling rates and datasets.
30

Sun, Yubao, Zhi Li, and Min Wu. "A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering". Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/572753.

Abstract
This paper presents a novel rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. To obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternating projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationships among data distributed in multiple subspaces, we use a hypergraph to represent the data, encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.
31

Cai, T. Tony, and Anru Zhang. "Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-Rank Matrices". IEEE Transactions on Information Theory 60, no. 1 (January 2014): 122–32. http://dx.doi.org/10.1109/tit.2013.2288639.

32

Zhang, Xiaohui (张晓慧), Runfang Hao (郝润芳), and Tingyu Li (李廷鱼). "Hyperspectral Abnormal Target Detection Based on Low Rank and Sparse Matrix Decomposition-Sparse Representation". Laser & Optoelectronics Progress 56, no. 4 (2019): 042801. http://dx.doi.org/10.3788/lop56.042801.

33

Yang, Jie, Jun Ma, Khin Than Win, Junbin Gao, and Zhenyu Yang. "Low-rank and sparse representation based learning for cancer survivability prediction". Information Sciences 582 (January 2022): 573–92. http://dx.doi.org/10.1016/j.ins.2021.10.013.

34

Yang, Lu, Gongping Yang, Kuikui Wang, Fanchang Hao, and Yilong Yin. "Finger Vein Recognition via Sparse Reconstruction Error Constrained Low-Rank Representation". IEEE Transactions on Information Forensics and Security 16 (2021): 4869–81. http://dx.doi.org/10.1109/tifs.2021.3118894.

35

Li, Long, and Zheng Liu. "Noise-robust HRRP target recognition method via sparse-low-rank representation". Electronics Letters 53, no. 24 (November 2017): 1602–4. http://dx.doi.org/10.1049/el.2017.2960.

36

Tao, JianWen, Shiting Wen, and Wenjun Hu. "Robust domain adaptation image classification via sparse and low rank representation". Journal of Visual Communication and Image Representation 33 (November 2015): 134–48. http://dx.doi.org/10.1016/j.jvcir.2015.09.005.

37

Chen, Jie, and Zhang Yi. "Sparse representation for face recognition by discriminative low-rank matrix recovery". Journal of Visual Communication and Image Representation 25, no. 5 (July 2014): 763–73. http://dx.doi.org/10.1016/j.jvcir.2014.01.015.

38

Yang, Wanqi, Yinghuan Shi, Yang Gao, Lei Wang, and Ming Yang. "Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation". IEEE Transactions on Neural Networks and Learning Systems 29, no. 12 (December 2018): 6276–91. http://dx.doi.org/10.1109/tnnls.2018.2828699.

39

Gu, Song, Lihui Wang, Wei Hao, Yingjie Du, Jian Wang, and Weirui Zhang. "Online Video Object Segmentation via Boundary-Constrained Low-Rank Sparse Representation". IEEE Access 7 (2019): 53520–33. http://dx.doi.org/10.1109/access.2019.2912760.

40

He, YuJie, Min Li, JinLi Zhang, and Qi An. "Small infrared target detection based on low-rank and sparse representation". Infrared Physics & Technology 68 (January 2015): 98–109. http://dx.doi.org/10.1016/j.infrared.2014.10.022.

41

Ge, Ting, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, and Shanxiang Mu. "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation". Computational Intelligence and Neuroscience 2019 (July 1, 2019): 1–11. http://dx.doi.org/10.1155/2019/9378014.

Abstract
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation incorporating a sparsity-inducing regularization term is adopted to model this part. Then, the linearized alternating direction method with adaptive penalty (LADMAP) is selected to solve the model, and the brain lesions can be obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. Experimental results on data from brain tumor patients and multiple sclerosis patients revealed that the proposed method is superior to several existing methods in terms of segmentation accuracy while performing the segmentation automatically.
42

Yang, Shicheng, Le Zhang, Lianghua He, and Ying Wen. "Sparse Low-Rank Component-Based Representation for Face Recognition With Low-Quality Images". IEEE Transactions on Information Forensics and Security 14, no. 1 (January 2019): 251–61. http://dx.doi.org/10.1109/tifs.2018.2849883.

43

Kim, Hyuncheol, and Joonki Paik. "Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity". Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/147353.

Abstract
We address the object tracking problem as a multitask feature learning process based on a low-rank representation of features with joint sparsity. We first select features with low-rank representation within a number of initial frames to obtain the subspace basis. Next, the features represented by the low-rank and sparse property are learned using a modified joint-sparsity-based multitask feature learning framework. Both the features and the sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be solved by a few sequences of efficient closed-form updates. Since the proposed method performs feature learning in both a multitask and a low-rank manner, it can not only reduce the dimension but also improve the tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods on challenging image sequences.
44

Zhan, Shanhua, Weijun Sun, and Peipei Kang. "Robust Latent Common Subspace Learning for Transferable Feature Representation". Electronics 11, no. 5 (March 4, 2022): 810. http://dx.doi.org/10.3390/electronics11050810.

Abstract
This paper proposes a novel robust latent common subspace learning (RLCSL) method that integrates low-rank and sparse constraints into a joint learning framework. Specifically, we transform the data from the source and target domains into a latent common subspace to perform data reconstruction, i.e., the transformed source data is used to reconstruct the transformed target data. We impose joint low-rank and sparse constraints on the reconstruction coefficient matrix, which achieves the following objectives: (1) the data from different domains can be interlaced by using the low-rank constraint; (2) the data from different domains but with the same label can be aligned together by using the sparse constraint. In this way, the new feature representation in the latent common subspace is discriminative and transferable. To learn a suitable classifier, we also integrate classifier learning and feature representation learning into a unified objective, and thus the high-level semantic label (data label) is fully used to guide the learning of both tasks. Experiments are conducted on diverse data sets for image, object, and document classification, and encouraging experimental results show that the proposed method outperforms several state-of-the-art methods.
45

Yang, Bo, Kunkun Tong, Xueqing Zhao, Shanmin Pang, and Jinguang Chen. "Multilabel Classification Using Low-Rank Decomposition". Discrete Dynamics in Nature and Society 2020 (7 April 2020): 1–8. http://dx.doi.org/10.1155/2020/1279253.

Full text
Abstract
In the multilabel learning framework, each instance is no longer associated with a single semantic concept but rather with concept ambiguity. Specifically, the ambiguity of an instance in the input space means that there are multiple corresponding labels in the output space. In most existing multilabel classification methods, a binary annotation vector is used to denote the multiple semantic concepts: +1 denotes that the instance has the relevant label, while −1 means the opposite. However, this label representation contains too little semantic information to truly express the differences among multiple labels. We therefore propose a new approach that transforms binary labels into real-valued labels. We adopt a low-rank decomposition to obtain latent label information and then combine this information with the original features to generate new features. Then, using sparse representation to reconstruct each new instance, the reconstruction error can also be applied in the label space. In this way, we finally achieve the label conversion. Extensive experiments validate that the proposed method achieves results comparable to or even better than those of other state-of-the-art algorithms.
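The low-rank decomposition step — extracting latent, real-valued label information from the binary label matrix — can be sketched with a plain truncated SVD. This is an assumption for illustration; the paper's exact factorization may differ.

```python
import numpy as np

def latent_labels(Y, k):
    """Best rank-k approximation of a {-1,+1} label matrix: a real-valued
    label matrix whose entries encode correlations between labels."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Toy example: 6 instances, 4 labels, with two correlated label groups.
Y = np.array([[ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [ 1,  1, -1,  1],
              [-1, -1,  1,  1],
              [-1, -1,  1,  1],
              [-1,  1,  1,  1]], dtype=float)
Z = latent_labels(Y, k=2)   # real-valued labels reflecting the latent structure
```

The magnitudes in `Z` now grade how strongly each instance belongs to each label group, rather than forcing a hard ±1 decision.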
46

Zhang, Chao, Huaxiong Li, Wei Lv, Zizheng Huang, Yang Gao, and Chunlin Chen. "Enhanced Tensor Low-Rank and Sparse Representation Recovery for Incomplete Multi-View Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (26 June 2023): 11174–82. http://dx.doi.org/10.1609/aaai.v37i9.26323.

Full text
Abstract
Incomplete multi-view clustering (IMVC) has attracted remarkable attention due to the emergence of multi-view data with missing views in real applications. Recent methods attempt to recover the missing information to address the IMVC problem. However, they generally cannot fully explore the underlying properties and correlations of data similarities across views. This paper proposes a novel Enhanced Tensor Low-rank and Sparse Representation Recovery (ETLSRR) method, which reformulates the IMVC problem as a joint incomplete similarity-graph learning and complete tensor representation recovery problem. Specifically, ETLSRR learns the intra-view similarity graphs and constructs a 3-way tensor by stacking the graphs to explore the inter-view correlations. To alleviate the negative influence of missing views and data noise, ETLSRR decomposes the tensor into two parts: a sparse tensor and an intrinsic tensor, which model the noise and the underlying true data similarities, respectively. Both the global low-rank and the local structured sparse characteristics of the intrinsic tensor are considered, which enhances the discrimination of the similarity matrix. Moreover, instead of using the convex tensor nuclear norm, ETLSRR introduces a generalized non-convex tensor low-rank regularization to alleviate the biased approximation. Experiments on several datasets demonstrate the effectiveness of our method compared with state-of-the-art methods.
47

Xue, Jize, Yong-Qiang Zhao, Yuanyang Bu, Wenzhi Liao, Jonathan Cheung-Wai Chan, and Wilfried Philips. "Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution". IEEE Transactions on Image Processing 30 (2021): 3084–97. http://dx.doi.org/10.1109/tip.2021.3058590.

Full text
48

Liu, Xiaolan, Miao Yi, Le Han, and Xue Deng. "A subspace clustering algorithm based on simultaneously sparse and low-rank representation". Journal of Intelligent & Fuzzy Systems 33, no. 1 (22 June 2017): 621–33. http://dx.doi.org/10.3233/jifs-16771.

Full text
49

Ding, Yun, Yanwen Chong, and Shaoming Pan. "Sparse and Low-Rank Representation With Key Connectivity for Hyperspectral Image Classification". IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 5609–22. http://dx.doi.org/10.1109/jstars.2020.3023483.

Full text
50

Xie, Wenbin, Hong Yin, Meini Wang, Yan Shao, and Bosi Yu. "Low-rank structured sparse representation and reduced dictionary learning-based abnormity detection". IET Computer Vision 13, no. 1 (4 December 2018): 8–14. http://dx.doi.org/10.1049/iet-cvi.2018.5256.

Full text