Journal articles on the topic "Sparse Low-Rank Representation"

To see the other types of publications on this topic, follow the link: Sparse Low-Rank Representation.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles.

Consult the top 50 journal articles for your research on the topic "Sparse Low-Rank Representation".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Zhu, Hengdong, Ting Yang, Yingcang Ma, and Xiaofei Yang. "Multi-view Re-weighted Sparse Subspace Clustering with Intact Low-rank Space Learning." 電腦學刊 33, no. 4 (August 2022): 121–31. http://dx.doi.org/10.53106/199115992022083304010.

Full text
Abstract:
In this paper, we propose a new Multi-view Re-weighted Sparse Subspace Clustering with Intact Low-rank Space Learning (ILrS-MRSSC) method, which seeks a sparse representation of the intact space of information. Specifically, the method integrates the complementary information inherent in multiple views of the data, learns an intact latent low-rank representation space, and constructs a sparse information matrix to reconstruct the data. The correlation between multi-view learning and subspace clustering is strengthened to the greatest extent possible, making the subspace representation more intuitive and accurate. The model is solved with the alternating-direction augmented Lagrangian multiplier (ALM) method. Experiments on multiple benchmark data sets verify the effectiveness of this method.
2

Zhao, Jianxi, and Lina Zhao. "Low-rank and sparse matrices fitting algorithm for low-rank representation." Computers & Mathematics with Applications 79, no. 2 (January 2020): 407–25. http://dx.doi.org/10.1016/j.camwa.2019.07.012.

Full text
3

Kim, Hyuncheol, and Joonki Paik. "Video Summarization using Low-Rank Sparse Representation." IEIE Transactions on Smart Processing & Computing 7, no. 3 (June 30, 2018): 236–44. http://dx.doi.org/10.5573/ieiespc.2018.7.3.236.

Full text
4

Cheng, Shilei, Song Gu, Maoquan Ye, and Mei Xie. "Action Recognition Using Low-Rank Sparse Representation." IEICE Transactions on Information and Systems E101.D, no. 3 (2018): 830–34. http://dx.doi.org/10.1587/transinf.2017edl8176.

Full text
5

Wang, Jun, Daming Shi, Dansong Cheng, Yongqiang Zhang, and Junbin Gao. "LRSR: Low-Rank-Sparse representation for subspace clustering." Neurocomputing 214 (November 2016): 1026–37. http://dx.doi.org/10.1016/j.neucom.2016.07.015.

Full text
6

Du, Haishun, Xudong Zhang, Qingpu Hu, and Yandong Hou. "Sparse representation-based robust face recognition by graph regularized low-rank sparse representation recovery." Neurocomputing 164 (September 2015): 220–29. http://dx.doi.org/10.1016/j.neucom.2015.02.067.

Full text
7

Zhang, Xiujun, Chen Xu, Min Li, and Xiaoli Sun. "Sparse and Low-Rank Coupling Image Segmentation Model Via Nonconvex Regularization." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 02 (February 27, 2015): 1555004. http://dx.doi.org/10.1142/s0218001415550046.

Full text
Abstract:
This paper investigates how to boost region-based image segmentation by inheriting the advantages of sparse representation and low-rank representation. A novel image segmentation model, called the nonconvex regularization based sparse and low-rank coupling model, is presented for this purpose. We aim at finding the optimal solution, which is simultaneously sparse and low-rank. This is achieved by relaxing the sparse representation problem as L1/2 norm minimization rather than L1 norm minimization, and the low-rank representation problem as S1/2 norm minimization rather than nuclear norm minimization. The coupled model can be solved efficiently through the Augmented Lagrange Multiplier (ALM) method and the half-threshold operator. Compared with other state-of-the-art methods, the new method better captures the global structure of the whole data, is more robust, and achieves competitive segmentation accuracy. Experiments on two public image segmentation databases validate the superiority of our method.
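The L1/2 relaxation in this abstract replaces soft-thresholding with a "half-thresholding" proximal step. A minimal numpy sketch (illustrative only, not the authors' code): instead of the closed-form half-thresholding formula, it computes the proximal operator by direct grid search, which makes the characteristic behavior easy to see: small inputs are set exactly to zero, large inputs are only mildly shrunk.

```python
import numpy as np

def prox_l_half(y, lam):
    """Proximal operator of lam*|x|^(1/2): argmin_x 0.5*(x - y)^2 + lam*sqrt(|x|).
    Computed by brute-force grid search for clarity (a closed-form
    'half-thresholding' expression exists, but the grid makes the
    thresholding behaviour easy to verify)."""
    grid = np.linspace(-abs(y) - 1.0, abs(y) + 1.0, 400001)
    obj = 0.5 * (grid - y) ** 2 + lam * np.sqrt(np.abs(grid))
    return grid[np.argmin(obj)]
```

For lam = 1, an input of 0.5 maps to exactly 0 (sparsification), while an input of 5 is shrunk only slightly, to about 4.77; applying the same scalar rule to singular values gives the S1/2 analogue for low-rank recovery.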
8

Zheng, Chun-Hou, Yi-Fu Hou, and Jun Zhang. "Improved sparse representation with low-rank representation for robust face recognition." Neurocomputing 198 (July 2016): 114–24. http://dx.doi.org/10.1016/j.neucom.2015.07.146.

Full text
9

Du, Shiqiang, Yuqing Shi, Guangrong Shan, Weilan Wang, and Yide Ma. "Tensor low-rank sparse representation for tensor subspace learning." Neurocomputing 440 (June 2021): 351–64. http://dx.doi.org/10.1016/j.neucom.2021.02.002.

Full text
10

Zou, Dongqing, Xiaowu Chen, Guangying Cao, and Xiaogang Wang. "Unsupervised Video Matting via Sparse and Low-Rank Representation." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 6 (June 1, 2020): 1501–14. http://dx.doi.org/10.1109/tpami.2019.2895331.

Full text
11

He, Zhi-Fen, and Ming Yang. "Sparse and low-rank representation for multi-label classification." Applied Intelligence 49, no. 5 (November 26, 2018): 1708–23. http://dx.doi.org/10.1007/s10489-018-1345-5.

Full text
12

Zhang, Jian, Hongsong Dong, Wenlian Gao, Li Zhang, Zhiwen Xue, and Xiangfei Shen. "Structured low-rank representation learning for hyperspectral sparse unmixing." International Journal of Remote Sensing 45, no. 2 (January 15, 2024): 351–75. http://dx.doi.org/10.1080/01431161.2023.2295836.

Full text
13

You, Cong-Zhe, Zhen-Qiu Shu, and Hong-Hui Fan. "Low-rank sparse subspace clustering with a clean dictionary." Journal of Algorithms & Computational Technology 15 (January 2021): 174830262098369. http://dx.doi.org/10.1177/1748302620983690.

Full text
Abstract:
Low-Rank Representation (LRR) and Sparse Subspace Clustering (SSC) are hot topics among subspace clustering algorithms. SSC induces sparsity by minimizing the ℓ1-norm of the representation matrix, while LRR promotes a low-rank structure by minimizing the nuclear norm. In this paper, considering the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise, we pose this problem as a non-convex optimization problem in which the corrupted data matrix is decomposed as the sum of a clean, self-expressive dictionary and a matrix of noise. We propose a new algorithm, named Low-Rank and Sparse Subspace Clustering with a Clean dictionary (LRS2C2), which combines SSC and LRR, since the representation is often both sparse and low-rank. The effectiveness of the proposed algorithm is demonstrated through experiments on motion segmentation and image clustering.
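The two penalties contrasted in this abstract each have a simple proximal operator: entrywise soft-thresholding for the ℓ1 norm (the SSC side) and singular value thresholding for the nuclear norm (the LRR side). A hedged numpy sketch of both building blocks (generic operators, not the LRS2C2 implementation):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau*||X||_1: shrinks every entry toward zero,
    zeroing entries with magnitude below tau (the source of sparsity in SSC)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of tau*||X||_*: soft-thresholds the singular values,
    which lowers the rank of X whenever tau exceeds its smallest singular
    values (the source of low rank in LRR)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
```

ALM-style solvers for models like LRS2C2 typically alternate these two steps on the coefficient and error matrices.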
14

Dong, Le, and Yuan Yuan. "Sparse Constrained Low Tensor Rank Representation Framework for Hyperspectral Unmixing." Remote Sensing 13, no. 8 (April 11, 2021): 1473. http://dx.doi.org/10.3390/rs13081473.

Full text
Abstract:
Recently, non-negative tensor factorization (NTF) has attracted the attention of researchers as a very powerful tool. It is used in the unmixing of hyperspectral images (HSI) due to its excellent expressive ability, describing data without information loss. However, most existing unmixing methods based on NTF fail to fully explore the distinctive properties of the data, such as the low rank that exists in both the spectral and spatial domains. To explore this low-rank structure, in this paper we learn different low-rank representations of the HSI in the spectral, spatial, and non-local similarity modes. First, the HSI is divided into many patches, and these patches are clustered into multiple groups according to their similarity. Each similarity group constitutes a 4-D tensor with two spatial modes, a spectral mode, and a non-local similarity mode, and has strong low-rank properties. Second, a low-rank regularization with a logarithmic function is designed and embedded in the NTF framework, modeling the spatial, spectral, and non-local similarity modes of these 4-D tensors. In addition, the sparsity of the abundance tensor is integrated into the unmixing framework through the L2,1 norm to improve the unmixing performance. Experiments on three real data sets illustrate the stability and effectiveness of our algorithm compared with five state-of-the-art methods.
15

Li, Zhao, Le Wang, Tao Yu, and Bing Liang Hu. "Image Super-Resolution via Low-Rank Representation." Applied Mechanics and Materials 568-570 (June 2014): 652–55. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.652.

Full text
Abstract:
This paper presents a novel method for solving single-image super-resolution problems based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that represent the patches as linear combinations of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of the LRRs of a low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly, and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
16

Gan, Bin, Chun-Hou Zheng, Jun Zhang, and Hong-Qiang Wang. "Sparse Representation for Tumor Classification Based on Feature Extraction Using Latent Low-Rank Representation." BioMed Research International 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/420856.

Full text
Abstract:
Accurate tumor classification is crucial to the proper treatment of cancer. To date, sparse representation (SR) has shown great performance for tumor classification. This paper conceives a new SR-based method for tumor classification using gene expression data. In the proposed method, we first use latent low-rank representation to extract salient features and remove noise from the original sample data. Then we use a sparse representation classifier (SRC) to build the tumor classification model. The experimental results on several real-world data sets show that our method is more efficient and more effective than previous classification methods, including SVM, SRC, and LASSO.
17

Xu, Yong, Xiaozhao Fang, Jian Wu, Xuelong Li, and David Zhang. "Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation." IEEE Transactions on Image Processing 25, no. 2 (February 2016): 850–63. http://dx.doi.org/10.1109/tip.2015.2510498.

Full text
18

Audouze, Christophe, and Prasanth B. Nair. "Sparse low-rank separated representation models for learning from data." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 475, no. 2221 (January 2019): 20180490. http://dx.doi.org/10.1098/rspa.2018.0490.

Full text
Abstract:
We consider the problem of learning a multivariate function from a set of scattered observations using a sparse low-rank separated representation (SSR) model. The model structure considered here is promising for high-dimensional learning problems; however, existing training algorithms based on alternating least-squares (ALS) are known to have convergence difficulties, particularly when the rank of the model is greater than 1. In the present work, we supplement the model structure with sparsity constraints to ensure the well-posedness of the approximation problem. We propose two fast training algorithms to estimate the model parameters: (i) a cyclic coordinate descent algorithm and (ii) a block coordinate descent (BCD) algorithm. While the first algorithm is not provably convergent owing to the non-convexity of the optimization problem, the BCD algorithm guarantees convergence to a Nash equilibrium point. The computational cost of the proposed algorithms is shown to scale linearly with respect to all of the parameters, in contrast to methods based on ALS. Numerical studies on synthetic and real-world regression datasets indicate that the proposed SSR model structure holds significant potential for machine learning problems.
19

Liu, Zhonghua, Weihua Ou, Wenpeng Lu, and Lin Wang. "Discriminative feature extraction based on sparse and low-rank representation." Neurocomputing 362 (October 2019): 129–38. http://dx.doi.org/10.1016/j.neucom.2019.06.073.

Full text
20

Du, Hai-Shun, Qing-Pu Hu, Dian-Feng Qiao, and Ioannis Pitas. "Robust face recognition via low-rank sparse representation-based classification." International Journal of Automation and Computing 12, no. 6 (November 6, 2015): 579–87. http://dx.doi.org/10.1007/s11633-015-0901-2.

Full text
21

Huang, Jie, Ting-Zhu Huang, Liang-Jian Deng, and Xi-Le Zhao. "Joint-Sparse-Blocks and Low-Rank Representation for Hyperspectral Unmixing." IEEE Transactions on Geoscience and Remote Sensing 57, no. 4 (April 2019): 2419–38. http://dx.doi.org/10.1109/tgrs.2018.2873326.

Full text
22

Zhao, Yong-Qiang, and Jingxiang Yang. "Hyperspectral Image Denoising via Sparse Representation and Low-Rank Constraint." IEEE Transactions on Geoscience and Remote Sensing 53, no. 1 (January 2015): 296–308. http://dx.doi.org/10.1109/tgrs.2014.2321557.

Full text
23

Ma, Guanqun, Ting-Zhu Huang, Jie Huang, and Chao-Chao Zheng. "Local Low-Rank and Sparse Representation for Hyperspectral Image Denoising." IEEE Access 7 (2019): 79850–65. http://dx.doi.org/10.1109/access.2019.2923255.

Full text
24

Hou, Yi-Fu, Zhan-Li Sun, Yan-Wen Chong, and Chun-Hou Zheng. "Low-Rank and Eigenface Based Sparse Representation for Face Recognition." PLoS ONE 9, no. 10 (October 21, 2014): e110318. http://dx.doi.org/10.1371/journal.pone.0110318.

Full text
25

Zhao, Feng, Weijian Si, and Zheng Dou. "Sparse media image restoration based on collaborative low rank representation." Multimedia Tools and Applications 77, no. 8 (June 29, 2017): 10051–62. http://dx.doi.org/10.1007/s11042-017-4958-5.

Full text
26

Tang, Kewei, Jie Zhang, Zhixun Su, and Jiangxin Dong. "Bayesian Low-Rank and Sparse Nonlinear Representation for Manifold Clustering." Neural Processing Letters 44, no. 3 (December 29, 2015): 719–33. http://dx.doi.org/10.1007/s11063-015-9490-x.

Full text
27

Nazari Siahsar, Mohammad Amir, Saman Gholtashi, Amin Roshandel Kahoo, Hosein Marvi, and Alireza Ahmadifard. "Sparse time-frequency representation for seismic noise reduction using low-rank and sparse decomposition." GEOPHYSICS 81, no. 2 (March 1, 2016): V117–V124. http://dx.doi.org/10.1190/geo2015-0341.1.

Full text
Abstract:
Attenuation of random noise is a major concern in seismic data processing. This kind of noise is usually characterized by random oscillation in seismic data over the entire time and frequency range. We introduce and evaluate a low-rank and sparse decomposition-based method for seismic random noise attenuation. The proposed method, a trace-by-trace algorithm, starts by transforming the seismic signal into a new sparse subspace using the synchrosqueezing transform. Then, the sparse time-frequency representation (TFR) matrix is decomposed into two parts: (a) a low-rank component and (b) a sparse component, using bilateral random projection. Although seismic data are not exactly low-rank in the sparse TFR domain, they can be assumed to be semi-low-rank or approximately low-rank. Hence, we can recover the denoised seismic signal by minimizing a mixed-norm objective function, exploiting the intrinsically semi-low-rank property of the seismic data and the sparsity of random noise in the sparse TFR domain. The proposed method was tested on synthetic and real data. In the synthetic case, the data were contaminated by random noise, and denoising was carried out with classical f-x singular spectrum analysis (SSA) and f-x deconvolution for comparison; both failed to properly reduce the noise and recover the desired signal. We also tested the proposed method on a prestack real data set from an oil field in the southwest of Iran. Through synthetic and real tests, the proposed method proves to be an effective, amplitude-preserving, and robust tool that gives superior results over classical f-x SSA as a conventional algorithm for denoising seismic data.
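The low-rank-plus-sparse split used in this abstract is the classic robust-PCA decomposition D = L + S, minimizing ||L||_* + λ||S||_1. A compact inexact-ALM sketch in numpy (a generic illustration with standard default parameters, not the paper's seismic pipeline, which additionally works in a synchrosqueezed TFR domain and uses bilateral random projection):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau*||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def shrink(X, tau):
    """Entrywise soft-thresholding: proximal operator of tau*||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Split D into low-rank L and sparse S by solving
    min ||L||_* + lam*||S||_1  s.t.  D = L + S
    with an inexact augmented Lagrangian method
    (standard default lam = 1/sqrt(max(m, n)))."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    mu = 1.25 / np.linalg.norm(D, 2)  # 1.25 / spectral norm
    mu_max = mu * 1e7
    Y = np.zeros_like(D)              # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y = Y + mu * residual
        mu = min(mu * 1.5, mu_max)
        if np.linalg.norm(residual) / norm_D < tol:
            break
    return L, S
```

On synthetic data (a random rank-2 matrix plus a few large spikes), this recovers the low-rank part to within a few percent relative error.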
28

Liao, Jiayu, Xiaolan Liu, and Mengying Xie. "Inductive Latent Space Sparse and Low-rank Subspace Clustering Algorithm." Journal of Physics: Conference Series 2224, no. 1 (April 1, 2022): 012124. http://dx.doi.org/10.1088/1742-6596/2224/1/012124.

Full text
Abstract:
Sparse subspace clustering (SSC) and low-rank representation (LRR) are the most popular algorithms for subspace clustering. However, SSC and LRR are transductive methods and cannot deal with new data not involved in the training set: when a new sample arrives, they must be recomputed over all the data, which is time-consuming. On the other hand, for high-dimensional data, dimensionality reduction is performed before running SSC or LRR, which isolates the dimensionality reduction from the subsequent subspace clustering. To overcome these shortcomings, this paper proposes two sparse and low-rank subspace clustering algorithms that perform dimensionality reduction and subspace clustering simultaneously and can deal with out-of-sample data. The proposed algorithms divide the whole data set into in-sample and out-of-sample data. The in-sample data are used to learn the projection matrix and the sparse or low-rank representation matrix in the low-dimensional space, and their memberships are obtained by spectral clustering. In the low-dimensional embedding space, the membership of out-of-sample data is obtained by collaborative representation classification (CRC). Experimental results on a variety of data sets verify that our proposed algorithms can handle new data in an efficient way.
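The out-of-sample step mentioned here, collaborative representation classification (CRC), reduces to ridge-regression coding over all training samples followed by a class-wise residual comparison. A small hedged sketch of generic CRC (not the paper's exact formulation; the regularization weight `lam` is an illustrative choice):

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-2):
    """Collaborative representation classification.
    X: (d, n) dictionary whose columns are training samples; labels: length-n
    class labels; y: (d,) query. Codes y over all training samples jointly
    via ridge regression, then assigns the class whose samples best
    reconstruct y."""
    n = X.shape[1]
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)  # ridge coding
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])  # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best
```

Unlike SRC, the coding step is a single linear solve, which is why CRC is a natural fit for cheap out-of-sample assignment.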
29

Xie, Shicheng, Shun Wang, Chuanming Song, and Xianghai Wang. "Hyperspectral Image Reconstruction Based on Spatial-Spectral Domains Low-Rank Sparse Representation." Remote Sensing 14, no. 17 (August 25, 2022): 4184. http://dx.doi.org/10.3390/rs14174184.

Full text
Abstract:
The enormous amount of data generated by hyperspectral remote sensing images (HSI), combined with the limited and fragile bandwidth of the spatial channel, creates serious transmission, storage, and application challenges. HSI reconstruction based on compressed sensing has become a frontier area, and its effectiveness depends heavily on how HSI information correlation is exploited and sparsely represented. In this paper, we propose a low-rank sparse constrained HSI reconstruction model (LRCoSM) based on joint spatial-spectral HSI sparseness. In the spectral dimension, a spectral-domain sparsity measure and a representation of the joint spectral-dimensional plane are proposed for the first time. A Gaussian mixture model (GMM), whose parameters are learned adaptively and without supervision from external datasets, clusters similar patches of joint spectral plane features, capturing the correlation of non-local structured image patches in the spectral dimension; low-rank decomposition of the clustered similar patches then extracts feature information, effectively improving the low-rank approximate sparse representation of similar spectral patches. In the spatial dimension, local-nonlocal HSI similarity is explored to refine the sparse prior constraints. Together, the spectral and spatial sparsity constraints improve HSI reconstruction quality. Experimental results at various sampling rates on four publicly available datasets show that, compared with six currently popular reconstruction algorithms, the proposed algorithm obtains high-quality reconstructed PSNR and FSIM values and effectively maintains the spectral curves of few-band datasets, with strong robustness and generalization at different sampling rates and on other datasets.
30

Sun, Yubao, Zhi Li, and Min Wu. "A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering." Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/572753.

Full text
Abstract:
This paper presents a novel rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. To obtain the desired rank representation of the data within a dictionary, our model directly imposes rank constraints by restricting the upper bound of the rank range. An alternating projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationships among data distributed in multiple subspaces, we use a hypergraph to represent the data, encapsulating multiple related samples in one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.
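The final step described in this abstract, spectral decomposition of a Laplacian followed by clustering, can be illustrated with an ordinary-graph version (the paper's hypergraph Laplacian is a generalization of this). A minimal sketch that bipartitions an affinity matrix by the sign pattern of the Fiedler vector:

```python
import numpy as np

def spectral_bipartition(W):
    """Split the nodes of a symmetric affinity matrix W into two groups
    using the sign pattern of the Fiedler vector, i.e. the eigenvector for
    the second-smallest eigenvalue of the normalized Laplacian
    L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)
```

For k clusters one would instead embed the nodes with the first k eigenvectors and run k-means on the rows, which is the usual closing step of subspace clustering pipelines.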
31

Cai, T. Tony, and Anru Zhang. "Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-Rank Matrices." IEEE Transactions on Information Theory 60, no. 1 (January 2014): 122–32. http://dx.doi.org/10.1109/tit.2013.2288639.

Full text
32

Zhang, Xiaohui (张晓慧), Runfang Hao (郝润芳), and Tingyu Li (李廷鱼). "Hyperspectral Abnormal Target Detection Based on Low Rank and Sparse Matrix Decomposition-Sparse Representation." Laser & Optoelectronics Progress 56, no. 4 (2019): 042801. http://dx.doi.org/10.3788/lop56.042801.

Full text
33

Yang, Jie, Jun Ma, Khin Than Win, Junbin Gao, and Zhenyu Yang. "Low-rank and sparse representation based learning for cancer survivability prediction." Information Sciences 582 (January 2022): 573–92. http://dx.doi.org/10.1016/j.ins.2021.10.013.

Full text
34

Yang, Lu, Gongping Yang, Kuikui Wang, Fanchang Hao, and Yilong Yin. "Finger Vein Recognition via Sparse Reconstruction Error Constrained Low-Rank Representation." IEEE Transactions on Information Forensics and Security 16 (2021): 4869–81. http://dx.doi.org/10.1109/tifs.2021.3118894.

Full text
35

Li, Long, and Zheng Liu. "Noise-robust HRRP target recognition method via sparse-low-rank representation." Electronics Letters 53, no. 24 (November 2017): 1602–4. http://dx.doi.org/10.1049/el.2017.2960.

Full text
36

Tao, JianWen, Shiting Wen, and Wenjun Hu. "Robust domain adaptation image classification via sparse and low rank representation." Journal of Visual Communication and Image Representation 33 (November 2015): 134–48. http://dx.doi.org/10.1016/j.jvcir.2015.09.005.

Full text
37

Chen, Jie, and Zhang Yi. "Sparse representation for face recognition by discriminative low-rank matrix recovery." Journal of Visual Communication and Image Representation 25, no. 5 (July 2014): 763–73. http://dx.doi.org/10.1016/j.jvcir.2014.01.015.

Full text
38

Yang, Wanqi, Yinghuan Shi, Yang Gao, Lei Wang, and Ming Yang. "Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation." IEEE Transactions on Neural Networks and Learning Systems 29, no. 12 (December 2018): 6276–91. http://dx.doi.org/10.1109/tnnls.2018.2828699.

Full text
39

Gu, Song, Lihui Wang, Wei Hao, Yingjie Du, Jian Wang, and Weirui Zhang. "Online Video Object Segmentation via Boundary-Constrained Low-Rank Sparse Representation." IEEE Access 7 (2019): 53520–33. http://dx.doi.org/10.1109/access.2019.2912760.

Full text
40

He, YuJie, Min Li, JinLi Zhang, and Qi An. "Small infrared target detection based on low-rank and sparse representation." Infrared Physics & Technology 68 (January 2015): 98–109. http://dx.doi.org/10.1016/j.infrared.2014.10.022.

Full text
41

Ge, Ting, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, and Shanxiang Mu. "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation." Computational Intelligence and Neuroscience 2019 (July 1, 2019): 1–11. http://dx.doi.org/10.1155/2019/9378014.

Full text
Abstract:
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation incorporating a sparsity-inducing regularization term is adopted to model this part. The linearized alternating direction method with adaptive penalty (LADMAP) is selected to solve the model, and the brain lesions are obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. Experimental results on data from brain tumor patients and multiple sclerosis patients reveal that the proposed method is superior to several existing methods in terms of segmentation accuracy while performing the segmentation automatically.
42

Yang, Shicheng, Le Zhang, Lianghua He, and Ying Wen. "Sparse Low-Rank Component-Based Representation for Face Recognition With Low-Quality Images." IEEE Transactions on Information Forensics and Security 14, no. 1 (January 2019): 251–61. http://dx.doi.org/10.1109/tifs.2018.2849883.

Full text
43

Kim, Hyuncheol, and Joonki Paik. "Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity." Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/147353.

Full text
Abstract:
We address the object tracking problem as a multitask feature learning process based on a low-rank representation of features with joint sparsity. We first select features with a low-rank representation within a number of initial frames to obtain subspace bases. Next, the features represented by the low-rank and sparse property are learned using a modified joint sparsity-based multitask feature learning framework. Both the features and the sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be solved by a few sequences of efficient closed-form updates. Since the proposed method performs feature learning in both a multitask and a low-rank manner, it not only reduces the dimension but also improves tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods on challenging image sequences.
44

Zhan, Shanhua, Weijun Sun, and Peipei Kang. "Robust Latent Common Subspace Learning for Transferable Feature Representation." Electronics 11, no. 5 (March 4, 2022): 810. http://dx.doi.org/10.3390/electronics11050810.

Full text
Abstract:
This paper proposes a novel robust latent common subspace learning (RLCSL) method by integrating low-rank and sparse constraints into a joint learning framework. Specifically, we transform the data from the source and target domains into a latent common subspace to perform data reconstruction, i.e., the transformed source data are used to reconstruct the transformed target data. We impose joint low-rank and sparse constraints on the reconstruction coefficient matrix, which achieves the following objectives: (1) the data from different domains can be interlaced by the low-rank constraint; (2) the data from different domains but with the same label can be aligned by the sparse constraint. In this way, the new feature representation in the latent common subspace is both discriminative and transferable. To learn a suitable classifier, we also integrate classifier learning and feature representation learning into a unified objective, so that the high-level semantic label (the data label) fully guides the learning of both tasks. Experiments conducted on diverse data sets for image, object, and document classification show encouraging results: the proposed method outperforms several state-of-the-art methods.
45

Yang, Bo, Kunkun Tong, Xueqing Zhao, Shanmin Pang et Jinguang Chen. « Multilabel Classification Using Low-Rank Decomposition ». Discrete Dynamics in Nature and Society 2020 (7 avril 2020) : 1–8. http://dx.doi.org/10.1155/2020/1279253.

Abstract:
In the multilabel learning framework, each instance is no longer associated with a single semantic but rather with concept ambiguity. Specifically, the ambiguity of an instance in the input space means that there are multiple corresponding labels in the output space. Most existing multilabel classification methods use a binary annotation vector to denote the multiple semantic concepts: +1 denotes that the instance has a relevant label, while −1 means the opposite. However, this label representation contains too little semantic information to truly express the differences among multiple labels. Therefore, we propose a new approach to transform binary labels into real-valued labels. We adopt a low-rank decomposition to obtain latent label information, and then incorporate this information with the original features to generate new features. Then, using sparse representation to reconstruct each new instance, the reconstruction error can also be applied in the label space. In this way, we finally achieve the purpose of label conversion. Extensive experiments validate that the proposed method achieves results comparable to, or even better than, other state-of-the-art algorithms.
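The binary-to-real label conversion via low-rank decomposition can be sketched with a truncated SVD on a toy label matrix (the matrix below is an illustrative assumption, not the paper's data):

```python
import numpy as np

# Toy multilabel annotation matrix: 6 instances x 4 labels,
# +1 = relevant label, -1 = irrelevant.
Y = np.array([[ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [-1,  1,  1, -1],
              [-1,  1,  1, -1],
              [-1, -1,  1,  1],
              [-1, -1,  1,  1]], dtype=float)

# Truncated SVD keeps only the top-k latent label directions; the
# low-rank reconstruction replaces the hard +/-1 annotations with
# real-valued scores that reflect correlations between labels.
k = 2
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Y_latent = (U[:, :k] * s[:k]) @ Vt[:k]
```

Here the toy matrix happens to be exactly rank 2, so the reconstruction preserves every annotation sign while compressing the four labels into two latent directions.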
46

Zhang, Chao, Huaxiong Li, Wei Lv, Zizheng Huang, Yang Gao, and Chunlin Chen. "Enhanced Tensor Low-Rank and Sparse Representation Recovery for Incomplete Multi-View Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11174–82. http://dx.doi.org/10.1609/aaai.v37i9.26323.

Abstract:
Incomplete multi-view clustering (IMVC) has attracted remarkable attention due to the emergence of multi-view data with missing views in real applications. Recent methods attempt to recover the missing information to address the IMVC problem, but they generally cannot fully explore the underlying properties and correlations of data similarities across views. This paper proposes a novel Enhanced Tensor Low-rank and Sparse Representation Recovery (ETLSRR) method, which reformulates the IMVC problem as a joint problem of incomplete similarity-graph learning and complete tensor representation recovery. Specifically, ETLSRR learns the intra-view similarity graphs and constructs a 3-way tensor by stacking the graphs to explore the inter-view correlations. To alleviate the negative influence of missing views and data noise, ETLSRR decomposes the tensor into two parts: a sparse tensor and an intrinsic tensor, which model the noise and the underlying true data similarities, respectively. Both the global low-rank and the local structured sparse characteristics of the intrinsic tensor are considered, which enhances the discrimination of the similarity matrix. Moreover, instead of using the convex tensor nuclear norm, ETLSRR introduces a generalized non-convex tensor low-rank regularization to alleviate the biased approximation. Experiments on several datasets demonstrate the effectiveness of our method compared with state-of-the-art methods.
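The split of the stacked-graph tensor into a sparse part and an intrinsic part follows the robust-PCA template. A simplified convex sketch operating on the tensor's matrix unfolding (the paper's actual model uses a non-convex tensor low-rank penalty and structured sparsity instead):

```python
import numpy as np

def soft(M, tau):
    # Elementwise soft-thresholding: prox of tau * ||.||_1.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    # Singular value thresholding: prox of tau * ||.||_*.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def decompose_graph_tensor(G, lam=0.125, mu=1.0, n_iter=300):
    """ADMM for min ||L||_* + lam * ||S||_1  s.t.  X = L + S, where X is
    the (views x n*n) unfolding of the stacked similarity-graph tensor:
    L models the shared intrinsic similarities, S the view-specific noise."""
    X = G.reshape(G.shape[0], -1)
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = soft(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)   # dual ascent on the constraint X = L + S
    return L.reshape(G.shape), S.reshape(G.shape)

# Three views sharing one similarity structure, with a gross
# corruption planted in view 0: RPCA routes it to the sparse part.
w = np.linspace(0.5, 1.0, 8)
G = np.tile(np.outer(w, w), (3, 1, 1))
G[0, 2, 3] += 5.0
L, S = decompose_graph_tensor(G)
```

Unfolding along the view mode makes the shared structure low-rank across views, so the nuclear-norm term absorbs it while the l1 term isolates the corruption.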
47

Xue, Jize, Yong-Qiang Zhao, Yuanyang Bu, Wenzhi Liao, Jonathan Cheung-Wai Chan, and Wilfried Philips. "Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution." IEEE Transactions on Image Processing 30 (2021): 3084–97. http://dx.doi.org/10.1109/tip.2021.3058590.

48

Liu, Xiaolan, Miao Yi, Le Han, and Xue Deng. "A subspace clustering algorithm based on simultaneously sparse and low-rank representation." Journal of Intelligent & Fuzzy Systems 33, no. 1 (June 22, 2017): 621–33. http://dx.doi.org/10.3233/jifs-16771.

49

Ding, Yun, Yanwen Chong, and Shaoming Pan. "Sparse and Low-Rank Representation With Key Connectivity for Hyperspectral Image Classification." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 5609–22. http://dx.doi.org/10.1109/jstars.2020.3023483.

50

Xie, Wenbin, Hong Yin, Meini Wang, Yan Shao, and Bosi Yu. "Low-rank structured sparse representation and reduced dictionary learning-based abnormity detection." IET Computer Vision 13, no. 1 (December 4, 2018): 8–14. http://dx.doi.org/10.1049/iet-cvi.2018.5256.

