Journal articles on the topic "Weighted sparse reconstruction"


Consult the top 50 journal articles for your research on the topic "Weighted sparse reconstruction".


You can also download the full text of each publication in PDF format and consult its abstract online when this information is included in the metadata.

Browse journal articles from a wide variety of disciplines and organize your bibliography correctly.

1. Deng, Jun, Guanghui Ren, Yansheng Jin, and Wenjing Ning. "Iterative Weighted Gradient Projection for Sparse Reconstruction." Information Technology Journal 10, no. 7 (June 15, 2011): 1409–14. http://dx.doi.org/10.3923/itj.2011.1409.1414.

2. Pei, Liye, Hua Jiang, and Ming Li. "Weighted double-backtracking matching pursuit for block-sparse reconstruction." IET Signal Processing 10, no. 8 (October 2016): 930–35. http://dx.doi.org/10.1049/iet-spr.2016.0036.

3. Pinchera, Daniele, and Marco Donald Migliore. "Accurate Reconstruction of the Radiation of Sparse Sources from a Small Set of Near-Field Measurements by Means of a Smooth-Weighted Norm for Cluster-Sparsity Problems." Electronics 10, no. 22 (November 19, 2021): 2854. http://dx.doi.org/10.3390/electronics10222854.
Abstract:
The aim of this contribution is to present an approach that improves the quality of the far-field reconstruction from a small number of measured samples by means of sparse recovery, using a relatively coarse grid of source positions (with sample spacing on the order of λ/8) compared to the grid usually required. In particular, the proposed iterative method employs a smooth-weighted constrained minimization that guarantees a better probability of correctly estimating the sparse sources and an improved reconstruction quality, with a computational effort similar to that of the standard re-weighted ℓ1 minimization approach.

4. Xu, Congcong, Bo Yang, Fupei Guo, Wenfeng Zheng, and Philippe Poignet. "Sparse-view CBCT reconstruction via weighted Schatten p-norm minimization." Optics Express 28, no. 24 (November 9, 2020): 35469. http://dx.doi.org/10.1364/oe.404471.

5. Wu, Yapeng, Min Yang, Linfeng He, Qiang Lin, Meimei Wu, Zhengyao Li, Yuqing Li, and Xiaoguang Liu. "Sparse-View Neutron CT Reconstruction Using a Modified Weighted Total Difference Minimization Method." Applied Sciences 11, no. 22 (November 19, 2021): 10942. http://dx.doi.org/10.3390/app112210942.
Abstract:
Indirect neutron imaging is an effective method for nondestructive testing of spent nuclear fuel elements. Considering the difficulty of obtaining experimental data in a high-radiation environment and the characteristic of high noise of neutron images, it is difficult to use the traditional FBP algorithm to recover the complete information of the sample based on the limited projection data. Therefore, it is necessary to develop the sparse-view CT reconstruction algorithm for indirect neutron imaging. In order to improve the quality of the reconstruction image, an iterative reconstruction method combining SIRT, MRP, and WTDM regularization is proposed. The reconstruction results obtained by using the proposed method on simulated data and actual neutron projection data are compared with the results of four other algorithms (FBP, SIRT, SIRT-TV, and SIRT-WTDM). The experimental results show that the SIRT-MWTDM algorithm has great advantages in both objective evaluation index and subjective observation in the reconstruction image of simulated data and neutron projection data.
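
For readers unfamiliar with the SIRT baseline that the SIRT-MWTDM method above builds on, the following minimal sketch shows a plain, unregularized SIRT update in Python. The system matrix `A`, projection data `b`, and iteration count are placeholder assumptions, and the WTDM/MRP regularization described in the abstract is not included here.

```python
import numpy as np

def sirt(A, b, n_iters=100):
    """Plain SIRT update: x <- x + C A^T R (b - A x), where R and C hold the
    inverse row sums and inverse column sums of the (nonnegative) system
    matrix A. Regularization steps such as WTDM would be interleaved here."""
    m, n = A.shape
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = np.divide(1.0, row_sums, out=np.zeros(m), where=row_sums > 0)
    C = np.divide(1.0, col_sums, out=np.zeros(n), where=col_sums > 0)
    x = np.zeros(n)
    for _ in range(n_iters):
        x = x + C * (A.T @ (R * (b - A @ x)))
        x = np.maximum(x, 0.0)  # attenuation coefficients are nonnegative
    return x
```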

6. Zhang, Yi, Kang Yang, Yining Zhu, Wenjun Xia, Peng Bao, and Jiliu Zhou. "NOWNUNM: Nonlocal Weighted Nuclear Norm Minimization for Sparse-Sampling CT Reconstruction." IEEE Access 6 (2018): 73370–79. http://dx.doi.org/10.1109/access.2018.2881966.

7. Abbasi, Ashkan, and Amirhassan Monadjemi. "Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation." Journal of Biomedical Optics 23, no. 03 (March 24, 2018): 1. http://dx.doi.org/10.1117/1.jbo.23.3.036011.

8. Zonoobi, Dornoosh, and Ashraf A. Kassim. "On the reconstruction of sequences of sparse signals – The Weighted-CS." Journal of Visual Communication and Image Representation 24, no. 2 (February 2013): 196–202. http://dx.doi.org/10.1016/j.jvcir.2012.05.002.

9. Zheng, Penggen, Huimin Zhao, Jin Zhan, Yijun Yan, Jinchang Ren, Jujian Lv, and Zhihui Huang. "Incremental learning-based visual tracking with weighted discriminative dictionaries." International Journal of Advanced Robotic Systems 16, no. 6 (November 1, 2019): 172988141989015. http://dx.doi.org/10.1177/1729881419890155.
Abstract:
Existing sparse representation-based visual tracking methods detect the target positions by minimizing the reconstruction error. However, due to complex backgrounds, illumination changes, and occlusion, these methods have difficulty locating the target properly. In this article, we propose a novel visual tracking method based on weighted discriminative dictionaries and a pyramidal feature selection strategy. First, we utilize color features and texture features of the training samples to obtain multiple discriminative dictionaries. Then, we use the position information of those samples to assign weights to the base vectors in the dictionaries. For robust visual tracking, we propose a pyramidal sparse feature selection strategy in which the weights of base vectors and the reconstruction errors of different features are integrated to obtain the best target regions. At the same time, we measure feature reliability to dynamically adjust the weights of different features. In addition, we introduce a scenario-aware mechanism and an incremental dictionary update method based on noise energy analysis. Comparison experiments show that the proposed algorithm outperforms several state-of-the-art methods, and useful quantitative and qualitative analyses are also carried out.

10. Benuwa, Ben-Bright. "Virtual Kernel Discriminative Dictionary Learning With Weighted KNN for Video Analysis." International Journal of Data Analytics 3, no. 1 (January 2022): 1–19. http://dx.doi.org/10.4018/ijda.297521.
Abstract:
Recently, Kernel-Based Discriminative Dictionary Learning (KDDL) for Video Semantic Content Analysis (VSCA) has become a very popular research area, particularly in human-computer interaction and computer vision. Nonetheless, despite their numerous successes, the existing KDDL approaches based on reconstruction error classification, coupled with sparse coefficients, do not fully consider discrimination, which is essential for classification performance between video samples. In addition, the size of the video samples, an important parameter in kernel-based approaches, is mostly ignored. To further improve the accuracy of video semantic classification, a VSC classification approach based on a Sparse Coefficient Vector and a Virtual Kernel-based Weighted KNN is proposed in this paper. In the proposed approach, a loss function that integrates reconstruction error and discrimination is put forward. The experimental results show that this method effectively improves recognition and classification accuracy for VSCA compared with some state-of-the-art baseline approaches.

11. Yu, Jingjing, Qiyue Li, and Haiyu Wang. "Source reconstruction for bioluminescence tomography via L1/2 regularization." Journal of Innovative Optical Health Sciences 11, no. 02 (February 19, 2018): 1750014. http://dx.doi.org/10.1142/s1793545817500146.
Abstract:
Bioluminescence tomography (BLT) is an important noninvasive optical molecular imaging modality in preclinical research. To improve the image quality, reconstruction algorithms have to deal with the inherent ill-posedness of the BLT inverse problem. The sparse characteristic of bioluminescent sources in spatial distribution has been widely explored in BLT, and many L1-regularized methods have been investigated due to the sparsity-inducing properties of the L1 norm. In this paper, we present a reconstruction method based on L1/2 regularization to enhance the sparsity of the BLT solution and solve the nonconvex L1/2-norm problem by converting it to a series of weighted L1 homotopy minimization problems with iteratively updated weights. To assess the performance of the proposed reconstruction algorithm, simulations on a heterogeneous mouse model are designed to compare it with three representative sparse reconstruction algorithms: the weighted interior-point, L1 homotopy, and Stagewise Orthogonal Matching Pursuit algorithms. Simulation results show that the proposed method yields stable reconstruction results under different noise levels. Quantitative comparison results demonstrate that the proposed algorithm outperforms the competitor algorithms in location accuracy, multiple-source resolving, and image quality.
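
The reweighting scheme summarized above, replacing the nonconvex L1/2 penalty by a sequence of weighted L1 problems whose weights come from the previous iterate, can be sketched as follows. This is a generic illustration rather than the authors' BLT solver: the measurement matrix `A`, the ISTA inner solver, and all parameter values are placeholder assumptions.

```python
import numpy as np

def weighted_l1_ista(A, y, w, lam=0.01, n_iters=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i| by ISTA."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft threshold
    return x

def reweighted_l1_for_l_half(A, y, n_outer=5, lam=0.01, eps=1e-3):
    """Approximate an L^{1/2}-type penalty by iteratively reweighted L1 problems.

    Weights w_i = 1 / (|x_i| + eps)^{1/2} are refreshed from the current
    solution, so small coefficients are penalized more heavily in the next pass."""
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = weighted_l1_ista(A, y, w, lam=lam)
        w = 1.0 / np.sqrt(np.abs(x) + eps)
    return x
```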

12. Liu, Li, Siqi Chen, Xiuxiu Chen, Tianshi Wang, and Long Zhang. "Fuzzy weighted sparse reconstruction error-steered semi-supervised learning for face recognition." Visual Computer 36, no. 8 (September 21, 2019): 1521–34. http://dx.doi.org/10.1007/s00371-019-01746-y.

13. Zhao, Yingxin, Zhiyang Liu, Yuanyuan Wang, Hong Wu, and Shuxue Ding. "Sparse Coding Algorithm with Negentropy and Weighted ℓ1-Norm for Signal Reconstruction." Entropy 19, no. 11 (November 8, 2017): 599. http://dx.doi.org/10.3390/e19110599.

14. Liu, Shujun, Ningjie Pu, Jianxin Cao, and Kui Zhang. "Synthetic Aperture Radar Image Despeckling Based on Multi-Weighted Sparse Coding." Entropy 24, no. 1 (January 7, 2022): 96. http://dx.doi.org/10.3390/e24010096.
Abstract:
Synthetic aperture radar (SAR) images are inherently degraded by speckle noise caused by coherent imaging, which may affect the performance of the subsequent image analysis task. To resolve this problem, this article proposes an integrated SAR image despeckling model based on dictionary learning and multi-weighted sparse coding. First, the dictionary is trained by groups composed of similar image patches, which have the same structural features. An effective orthogonal dictionary with high sparse representation ability is realized by introducing a properly tight frame. Furthermore, the data-fidelity term and regularization terms are constrained by weighting factors. The weighted sparse representation model not only fully utilizes the interblock relevance but also reflects the importance of various structural groups in despeckling processing. The proposed model is implemented with fast and effective solving steps that simultaneously perform orthogonal dictionary learning, weight parameter updating, sparse coding, and image reconstruction. The solving steps are designed using the alternative minimization method. Finally, the speckles are further suppressed by iterative regularization methods. In a comparison study with existing methods, our method demonstrated state-of-the-art performance in suppressing speckle noise and protecting the image texture details.

15. Xu, Caibin, Zhibo Yang, and Mingxi Deng. "Weighted Structured Sparse Reconstruction-Based Lamb Wave Imaging Exploiting Multipath Edge Reflections in an Isotropic Plate." Sensors 20, no. 12 (June 21, 2020): 3502. http://dx.doi.org/10.3390/s20123502.
Abstract:
Lamb wave-based structural health monitoring techniques have the ability to scan a large area with relatively few sensors. Lamb wave imaging is a signal processing strategy that generates an image for locating scatterers according to the received Lamb waves. This paper presents a Lamb wave imaging method formulated as a weighted structured sparse reconstruction problem. A dictionary is constructed from an analytical Lamb wave scattering model and an edge reflection prediction technique, and it is used to decompose the experimental scattering signals under the constraint of weighted structured sparsity. The weights are generated from the correlation coefficients between the scattering signals and the predicted ones. Simulation and experimental results from an aluminum plate verify the effectiveness of the present method, which can generate images with sparse pixel values even with a very limited number of sensors.
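
The core operation behind a weighted structured (group) sparsity constraint such as the one described above is a per-group shrinkage whose thresholds are scaled by the weights. The sketch below shows a generic weighted group soft-thresholding step; the grouping of coefficients and the correlation-based weights used in the paper are application-specific and only assumed here.

```python
import numpy as np

def weighted_group_soft_threshold(x, groups, weights, tau):
    """Proximal operator of tau * sum_g weights[g] * ||x_g||_2.

    x       : coefficient vector
    groups  : list of index arrays, one per group (e.g. one per candidate scatterer)
    weights : per-group weights (e.g. derived from correlation coefficients)
    tau     : global step size / threshold
    Groups whose l2 norm falls below their weighted threshold are set to zero,
    which is what produces group-sparse (structured) solutions."""
    out = x.copy()
    for idx, w in zip(groups, weights):
        g_norm = np.linalg.norm(x[idx])
        scale = 0.0 if g_norm == 0 else max(0.0, 1.0 - tau * w / g_norm)
        out[idx] = scale * x[idx]
    return out
```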

16. Zhang, Minghui, Kai Xiao, Hongyang Lu, and Xiaoling Xu. "Weighted two-level Bregman method with graph regularized sparse coding for MRI reconstruction." Journal of Shenzhen University Science and Engineering 33, no. 2 (2016): 119. http://dx.doi.org/10.3724/sp.j.1249.2016.02119.

17. Nandi, Debashis, Jayashree Karmakar, Amish Kumar, and Mrinal Kanti Mandal. "Sparse representation based multi-frame image super-resolution reconstruction using adaptive weighted features." IET Image Processing 13, no. 4 (March 28, 2019): 663–72. http://dx.doi.org/10.1049/iet-ipr.2018.5139.

18. Christiansen, Bo, T. Schmith, and P. Thejll. "A Surrogate Ensemble Study of Sea Level Reconstructions." Journal of Climate 23, no. 16 (August 15, 2010): 4306–26. http://dx.doi.org/10.1175/2010jcli3014.1.
Abstract This study investigates the possibility of reconstructing past global mean sea levels. Reconstruction methods rely on historical measurements from tide gauges combined with knowledge about the spatial covariance structure of the sea level field obtained from a shorter period with spatially well-resolved satellite measurements. A surrogate ensemble method is applied based on sea levels from a 500-yr climate model simulation. Tide gauges are simulated by selecting time series from grid points along continental coastlines and on ocean islands. Reconstructions of global mean sea levels can then be compared to the known target, and the ensemble method allows an estimation of the statistical properties originating from the stochastic nature of the reconstructions. Different reconstruction methods previously used in the literature are studied, including projection and optimal interpolation methods based on EOF analysis of the calibration period. This study also includes methods where these EOFs are augmented with a homogeneous pattern, with the purpose of better capturing a possible geographically homogeneous trend. These covariance-based methods are compared to a simple weighted mean method. It is concluded that the projection and optimal interpolation methods are very sensitive to the length of the calibration period. For realistic lengths of 10 and 20 yr, very large biases and spread in the reconstructed 1900–49 trends are found. Including a homogeneous pattern in the basis drastically improves the reconstructions of the trend and reduces the sensitivity to the length of the calibration period. The projection and optimal interpolation methods are now comparable to the weighted mean with biases less than 10% in the trend. However, the spread is still considerable. The amplitude of the year-to-year variability is in general strongly overestimated by all reconstruction methods. With regards to year-to-year variability, several methods outperform the simple mean. Finally, for the projection method, reconstruction errors are decomposed into contributions from the sparse coverage of tide gauges and the incomplete knowledge of the covariance structure of the sea level field. The study finds that the contributions of the different sources depend on the diagnostics of the reconstruction. It is noted that sea level is constrained by the approximate conservation of the total mass of the ocean. This poses challenges for the sea level reconstructions that are not present for other fields such as temperature.

19. Ghansah, Benjamin, Ben-Bright Benuwa, and Augustine Monney. "A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis." International Journal of Intelligent Information Technologies 17, no. 1 (January 2021): 68–91. http://dx.doi.org/10.4018/ijiit.2021010105.
Abstract:
Video semantic concept analysis has received a lot of research attention in the area of human-computer interaction in recent times. Reconstruction error classification methods based on sparse coefficients do not consider discrimination, which is essential for classification performance between video samples. To further improve the accuracy of video semantic classification, a video semantic concept classification approach based on a sparse coefficient vector (SCV) and a kernel-based weighted KNN (KWKNN) is proposed in this paper. In the proposed approach, a loss function that integrates reconstruction error and discrimination is put forward. The authors calculate the loss function value between the test sample and the training samples of each class according to the loss function criterion and then vote on the statistical results. Finally, this paper modifies the vote results by combining them with the kernel weight coefficient of each class and determines the video semantic concept. The experimental results show that this method effectively improves classification accuracy for video semantic analysis and shortens the time used in semantic classification compared with some baseline approaches.

20. Zhang, Shanwen, Chuanlei Zhang, Yihai Zhu, and Zhuhong You. "Discriminant WSRC for Large-Scale Plant Species Recognition." Computational Intelligence and Neuroscience 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/9581292.
Abstract:
In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity to solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
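
To make the weighted-SRC decision rule above concrete, here is a minimal sketch of classification by minimum class-wise reconstruction error with distance-based weights. It is a generic WSRC illustration under assumed inputs (a column-normalized dictionary `D`, a label array, and a Gaussian weighting kernel), not the DWSRC subdictionary scheme of the paper.

```python
import numpy as np

def wsrc_classify(D, labels, y, sigma=1.0, lam=0.05, n_iters=300):
    """Weighted sparse-representation classification (generic sketch).

    D      : (d, N) matrix whose columns are (unit-norm) training samples
    labels : (N,) NumPy array with the class label of each column
    y      : (d,) test sample
    Samples far from y receive large weights, i.e. heavier l1 penalties, and
    the class with the smallest reconstruction error wins."""
    dist = np.linalg.norm(D - y[:, None], axis=0)
    w = np.exp(dist ** 2 / (2.0 * sigma ** 2))
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):  # ISTA for min 0.5*||Dx - y||^2 + lam * sum_i w_i |x_i|
        z = x - step * (D.T @ (D @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    classes = np.unique(labels)
    errors = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c]) for c in classes]
    return classes[int(np.argmin(errors))]
```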

21. Lin, Qiang, Min Yang, Bin Tang, Bin Liu, Heyong Huo, and Jiawei Liu. "Neutron Computed Tomography Reconstruction Method Using Sparse Projections Based on Weighted Total Difference Minimization." Acta Optica Sinica 39, no. 7 (2019): 0711003. http://dx.doi.org/10.3788/aos201939.0711003.

22. Gorodnitsky, I. F., and B. D. Rao. "Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm." IEEE Transactions on Signal Processing 45, no. 3 (March 1997): 600–616. http://dx.doi.org/10.1109/78.558475.

23. Kopsinis, Yannis, Konstantinos Slavakis, and Sergios Theodoridis. "Online Sparse System Identification and Signal Reconstruction Using Projections Onto Weighted ℓ1 Balls." IEEE Transactions on Signal Processing 59, no. 3 (March 2011): 936–52. http://dx.doi.org/10.1109/tsp.2010.2090874.

24. Xu, Cai-bin, Zhi-bo Yang, Zhi Zhai, Bai-jie Qiao, Shao-hua Tian, and Xue-feng Chen. "A weighted sparse reconstruction-based ultrasonic guided wave anomaly imaging method for composite laminates." Composite Structures 209 (February 2019): 233–41. http://dx.doi.org/10.1016/j.compstruct.2018.10.097.

25. Zhu, Xiaoxiu, Limin Liu, Baofeng Guo, Wenhua Hu, Lin Shi, and Juntao Ma. "Two-Dimensional ISAR Fusion Imaging of Block Structure Targets." International Journal of Antennas and Propagation 2021 (August 2, 2021): 1–18. http://dx.doi.org/10.1155/2021/6613975.
Abstract:
The range resolution and azimuth resolution are restricted by the limited transmitting bandwidth and observation angle in a monostatic radar system. To improve the two-dimensional resolution of inverse synthetic aperture radar (ISAR) imaging, a fast linearized Bregman iteration for unconstrained block sparsity (FLBIUB) algorithm is proposed to achieve multiradar ISAR fusion imaging of block structure targets. First, the ISAR imaging echo data of block structure targets is established based on the geometrical theory of the diffraction model. The multiradar ISAR fusion imaging is transformed into a signal sparse representation problem by vectorization operation. Then, considering the block sparsity of the echo data of block structure targets, the FLBIUB algorithm is utilized to achieve the block sparse signal reconstruction and obtain the fusion image. The algorithm further accelerates the iterative convergence speed and improves the imaging efficiency by combining the weighted back-adding residual and condition number optimization of the basis matrix. Finally, simulation experiments show that the proposed method can effectively achieve block sparse signal reconstruction and two-dimensional multiradar ISAR fusion imaging of block structure targets.

26. Han, Jie, Weihua Ou, Jiahao Xiong, and Shihua Feng. "Remote Heart Rate Estimation by Pulse Signal Reconstruction Based on Structural Sparse Representation." Electronics 11, no. 22 (November 15, 2022): 3738. http://dx.doi.org/10.3390/electronics11223738.
Abstract:
In recent years, the physiological measurement based on remote photoplethysmography has attracted wide attention, especially since the epidemic of COVID-19. Many researchers paid great efforts to improve the robustness of illumination and motion variation. Most of the existing methods divided the ROIs into many sub-regions and extracted the heart rate separately, while ignoring the fact that the heart rates from different sub-regions are consistent. To address this problem, in this work, we propose a structural sparse representation method to reconstruct the pulse signals (SSR2RPS) from different sub-regions and estimate the heart rate. The structural sparse representation (SSR) method considers that the chrominance signals from different sub-regions should have a similar sparse representation on the combined dictionary. Specifically, we firstly eliminate the signal deviation trend using the adaptive iteratively re-weighted penalized least squares (Airpls) for each sub-region. Then, we conduct the sparse representation on the combined dictionary, which is constructed considering the pulsatility and periodicity of the heart rate. Finally, we obtain the reconstructed pulse signals from different sub-regions and estimate the heart rate with a power spectrum analysis. The experimental results on the public UBFC and COHFACE datasets demonstrate the significant improvement for the accuracy of the heart rate estimation under realistic conditions.

27. Qu, Shuang, Shengqi Liu, and Qiang Fu. "Open-set HRRP recognition method based on joint sparse representation." Journal of Physics: Conference Series 2384, no. 1 (December 1, 2022): 012012. http://dx.doi.org/10.1088/1742-6596/2384/1/012012.
Abstract Aiming at the problem of multi-view high-resolution range profile (HRRP) target recognition under open set conditions, we proposed an open set recognition method based on joint sparse representation (JSR), which solves the problem of low recognition rate of traditional methods under open set conditions. This method is applied to the background of radar single-station observation. JSR is used to solve the reconstruction error of multi-view HRRP by the over-complete dictionary, while extreme value theory (EVT) is used to model the reconstruction error tailing of matching and non-matching categories and transform the open set identification problem into a hypothesis testing problem. During recognition, we use the reconstruction error to determine the candidate class, the scores of the matching class and non-matching class are obtained according to the confidence of tail distribution, and the weighted sum of the two is used as the category criterion to finally determine the target or candidate class outside the library. This method can effectively use the relevant information between multi-view HRRPs to improve the performance of HRRP recognition under open set conditions. The algorithm is tested with HRRP data generated from MSTAR inversion, and the results show that the performance of the proposed method is better than the mainstream open set recognition method.

28. Wang, Xin, Can Tang, Ji Li, Peng Zhang, and Wei Wang. "Image Target Recognition via Mixed Feature-Based Joint Sparse Representation." Computational Intelligence and Neuroscience 2020 (August 10, 2020): 1–8. http://dx.doi.org/10.1155/2020/8887453.
Abstract:
An image target recognition approach based on mixed features and adaptive weighted joint sparse representation is proposed in this paper. This method is robust to the illumination variation, deformation, and rotation of the target image. It is a data-lightweight classification framework, which can recognize targets well with few training samples. First, Gabor wavelet transform and convolutional neural network (CNN) are used to extract the Gabor wavelet features and deep features of training samples and test samples, respectively. Then, the contribution weights of the Gabor wavelet feature vector and the deep feature vector are calculated. After adaptive weighted reconstruction, we can form the mixed features and obtain the training sample feature set and test sample feature set. Aiming at the high-dimensional problem of mixed features, we use principal component analysis (PCA) to reduce the dimensions. Lastly, the public features and private features of images are extracted from the training sample feature set so as to construct the joint feature dictionary. Based on joint feature dictionary, the sparse representation based classifier (SRC) is used to recognize the targets. The experiments on different datasets show that this approach is superior to some other advanced methods.

29. Li, Pei Heng, Taeho Lee, and Hee Yong Youn. "Dimensionality Reduction with Sparse Locality for Principal Component Analysis." Mathematical Problems in Engineering 2020 (May 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/9723279.
Abstract:
Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into low-dimensional representation. The existing schemes usually preserve either only the global structure or local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to effectively consider the trade-off between the complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weighted matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph more robust against the noise. In addition, simultaneous learning of projection matrix and sparse similarity matrix is possible. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.

30. He, Jingfei, Yunpei Li, Xiaoyue Zhang, and Jianwei Li. "Missing and Corrupted Data Recovery in Wireless Sensor Networks Based on Weighted Robust Principal Component Analysis." Sensors 22, no. 5 (March 3, 2022): 1992. http://dx.doi.org/10.3390/s22051992.
Abstract:
Although wireless sensor networks (WSNs) have been widely used, the existence of data loss and corruption caused by poor network conditions, sensor bandwidth, and node failure during transmission greatly affects the credibility of monitoring data. To solve this problem, this paper proposes a weighted robust principal component analysis method to recover the corrupted and missing data in WSNs. By decomposing the original data into a low-rank normal data matrix and a sparse abnormal matrix, the proposed method can identify the abnormal data and avoid the influence of corruption on the reconstruction of normal data. In addition, the low-rankness is constrained by weighted nuclear norm minimization instead of the nuclear norm minimization to preserve the major data components and ensure credible reconstruction data. An alternating direction method of multipliers algorithm is further developed to solve the resultant optimization problem. Experimental results demonstrate that the proposed method outperforms many state-of-the-art methods in terms of recovery accuracy in real WSNs.
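
The weighted nuclear norm minimization mentioned above reduces, inside each iteration of an ADMM-type solver, to a weighted singular value thresholding step in which large singular values receive small thresholds so the dominant data components are preserved. The sketch below shows only that proximal step under assumed weights, not the paper's full recovery pipeline.

```python
import numpy as np

def weighted_svt(X, thresholds):
    """Weighted singular value thresholding.

    Each singular value sigma_i is shrunk by its own threshold thresholds[i].
    Choosing thresholds roughly proportional to 1 / (sigma_i + eps) keeps the
    leading (information-carrying) components nearly intact while suppressing
    the small, noise-dominated ones."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - thresholds, 0.0)
    return (U * s_shrunk) @ Vt

# toy usage with assumed data: denoise a noisy low-rank matrix
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
X = L + 0.1 * rng.standard_normal((50, 40))
s = np.linalg.svd(X, compute_uv=False)
thr = 2.0 / (s + 1e-2)  # small thresholds for large singular values
print(np.linalg.matrix_rank(weighted_svt(X, thr), tol=1e-6))
```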

31. Wang, Wantian, Ziyue Tang, Yichang Chen, Yuanpeng Zhang, and Yongjian Sun. "Aircraft Target Classification for Conventional Narrow-Band Radar with Multi-Wave Gates Sparse Echo Data." Remote Sensing 11, no. 22 (November 18, 2019): 2700. http://dx.doi.org/10.3390/rs11222700.
Abstract:
For a conventional narrow-band radar system, the detectable information of the target is limited, and it is difficult for the radar to accurately identify the target type. In particular, the classification probability will further decrease when part of the echo data is missed. By extracting the target features in time and frequency domains from multi-wave gates sparse echo data, this paper presents a classification algorithm in conventional narrow-band radar to identify three different types of aircraft target, i.e., helicopter, propeller and jet. Firstly, the classical sparse reconstruction algorithm is utilized to reconstruct the target frequency spectrum with single-wave gate sparse echo data. Then, the micro-Doppler effect caused by rotating parts of different targets is analyzed, and the micro-Doppler based features, such as amplitude deviation coefficient, time domain waveform entropy and frequency domain waveform entropy, are extracted from reconstructed echo data to identify targets. Thirdly, the target features extracted from multi-wave gates reconstructed echo data are weighted and fused to improve the accuracy of classification. Finally, the fused feature vectors are fed into a support vector machine (SVM) model for classification. By contrast with the conventional algorithm of aircraft target classification, the proposed algorithm can effectively process sparse echo data and achieve higher classification probability via weighted features fusion of multi-wave gates echo data. The experiments on synthetic data are carried out to validate the effectiveness of the proposed algorithm.

32. Zang, Miao, Huimin Xu, and Yongmei Zhang. "Kernel-Based Multiview Joint Sparse Coding for Image Annotation." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6727105.
Abstract:
It remains a challenging task for automatic image annotation problem due to the semantic gap between visual features and semantic concepts. To reduce the gap, this paper puts forward a kernel-based multiview joint sparse coding (KMVJSC) framework for image annotation. In KMVJSC, different visual features as well as label information are considered as distinct views and are mapped to an implicit kernel space, in which the original nonlinear separable data become linearly separable. Then, all the views are integrated into a multiview joint sparse coding framework aiming to find a set of optimal sparse representations and discriminative dictionaries adaptively, which can effectively employ the complementary information of different views. An optimization algorithm is presented by extending K-singular value decomposition (KSVD) and accelerated proximal gradient (APG) algorithms to the kernel multiview framework. In addition, a label propagation scheme using the sparse reconstruction and weighted greedy label transfer algorithm is also proposed. Comparative experiments on three datasets have demonstrated the competitiveness of proposed approach compared with other related methods.

33. Deng, Luzhen, Deling Mi, Peng He, Peng Feng, Pengwei Yu, Mianyi Chen, Zhichao Li, Jian Wang, and Biao Wei. "A CT reconstruction approach from sparse projection with adaptive-weighted diagonal total-variation in biomedical application." Bio-Medical Materials and Engineering 26, s1 (August 17, 2015): S1685–S1693. http://dx.doi.org/10.3233/bme-151468.

34. Li, Shuang, Wei Liu, Daqing Zheng, Shunren Hu, and Wei He. "Localization of Near-Field Sources Based on Sparse Signal Reconstruction with Regularization Parameter Selection." International Journal of Antennas and Propagation 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/1260601.
Abstract:
Source localization using sensor array in the near-field is a two-dimensional nonlinear parameter estimation problem which requires jointly estimating the two parameters: direction-of-arrival and range. In this paper, a new source localization method based on sparse signal reconstruction is proposed in the near-field. We first utilize l1-regularized weighted least-squares to find the bearings of sources. Here, the weight is designed by making use of the probability distribution of spatial correlations among symmetric sensors of the array. Meanwhile, a theoretical guidance for choosing a proper regularization parameter is also presented. Then one well-known l1-norm optimization solver is employed to estimate the ranges. The proposed method has a lower variance and higher resolution compared with other methods. Simulation results are given to demonstrate the superior performance of the proposed method.

35. Pan, Yunwen, Junqiang Xia, and Kejun Yang. "A Method for Digital Terrain Reconstruction Using Longitudinal Control Lines and Sparse Measured Cross Sections." Remote Sensing 14, no. 8 (April 11, 2022): 1841. http://dx.doi.org/10.3390/rs14081841.
Abstract:
Using longitudinal control lines and sparse measured cross sections with large spaces, a new method for quickly reconstructing digital terrains in natural riverways is presented. The longitudinal control lines in a natural riverway, mainly including the river boundaries, the thalweg, the dividing lines of floodplains and main channel, and the water edges, can be obtained by interpreting satellite images, remote sensing images or site surveys. Then, the longitudinal control lines are introduced into quadrilateral grid generation as auxiliary lines that can control longitudinal riverway trends and reflect transverse terrain changes. Then, by the equal cross-sectional area principle at the same water level, all measured cross sections are reasonably fitted. On the above basis, by virtue of the fitted cross-sectional data and the weighted distance method, the terrain interpolations along the longitudinal grid lines are conducted to obtain the elevation data of all grid nodes. Finally, according to the readable text formats of MIKE21 and SMS, the gridded digital terrain and connection information are output by computer programming to achieve good construction of the data exchange channels and fully exploit the special advantages of various software programs for digital terrain visualization and further utilization.

36. Feng, Jun-Jie, Gong Zhang, and Fang-Qing Wen. "MIMO Radar Imaging Based on Smoothed l0 Norm." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/841986.
Abstract:
For radar imaging, a target usually has only a few strong scatterers which are sparsely distributed. In this paper, we propose a compressive sensing MIMO radar imaging algorithm based on the smoothed l0 norm. An approximate hyperbolic tangent function is proposed as the smoothed function to measure the sparsity. A revised Newton method is used to solve the optimization problem by deriving the new revised Newton directions for the sequence of approximate hyperbolic tangent functions. In order to improve the robustness of the imaging algorithm, a main value weighted method is proposed. Simulation results show that the proposed algorithm is superior to Orthogonal Matching Pursuit (OMP), the smoothed l0 method (SL0), and the Bayesian method with Laplace prior in performance of sparse signal reconstruction. The two-dimensional image quality of MIMO radar using the new method shows great improvement compared with the aforementioned reconstruction algorithms.

37. Chen, Yun, Yao Lu, Xiangyuan Ma, and Yuesheng Xu. "A content-adaptive unstructured grid based regularized CT reconstruction method with a SART-type preconditioned fixed-point proximity algorithm." Inverse Problems 38, no. 3 (January 28, 2022): 035005. http://dx.doi.org/10.1088/1361-6420/ac490f.
Abstract The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming at improving the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of a large size. We propose to develop a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG for the resulting ill-posed linear system, which may lead to a sparse linear representation for the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem, whose objective function consists of a weighted least square norm based fidelity term, a regularization term and a constraint term. Here, the corresponding weighted matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include the SART without regularization and with the quadratic regularization, the traditional total variation (TV) regularized reconstruction method and the TV superiorized conjugate gradient method on the pixel grid.

38. Li, Nanxi, Hongbo Shi, Bing Song, and Yang Tao. "Temporal-Spatial Neighborhood Enhanced Sparse Autoencoder for Nonlinear Dynamic Process Monitoring." Processes 8, no. 9 (September 1, 2020): 1079. http://dx.doi.org/10.3390/pr8091079.
Abstract:
Data-based process monitoring methods have received tremendous attention in recent years, and modern industrial process data often exhibit dynamic and nonlinear characteristics. Traditional autoencoders, such as stacked denoising autoencoders (SDAEs), have excellent nonlinear feature extraction capabilities, but they ignore the dynamic correlation between sample data. Feature extraction based on manifold learning using spatial or temporal neighbors has been widely used in dynamic process monitoring in recent years, but most of them use linear features and do not take into account the complex nonlinearities of industrial processes. Therefore, a fault detection scheme based on temporal-spatial neighborhood enhanced sparse autoencoder is proposed in this paper. Firstly, it selects the temporal neighborhood and spatial neighborhood of the sample at the current time within the time window with a certain length, the spatial similarity and time serial correlation are used for weighted reconstruction, and the reconstruction combines the current sample as the input of the sparse stack autoencoder (SSAE) to extract the correlation features between the current sample and the neighborhood information. Two statistics are constructed for fault detection. Considering that both types of neighborhood information contain spatial-temporal structural features, Bayesian fusion strategy is used to integrate the two parts of the detection results. Finally, the superiority of the method in this paper is illustrated by a numerical example and the Tennessee Eastman process.

39. Wang, Yuanjun, and Zeyao Qi. "A new adaptive-weighted total variation sparse-view computed tomography image reconstruction with local improved gradient information." Journal of X-Ray Science and Technology 26, no. 6 (December 27, 2018): 957–75. http://dx.doi.org/10.3233/xst-180412.

40. Stefan, W., K. Hwang, J. Hazle, and R. Stafford. "SU-E-I-32: Improving Vessel Delineation in Brain Using Susceptibility Weighted MRI and Group Sparse Reconstruction." Medical Physics 41, no. 6Part5 (May 29, 2014): 137. http://dx.doi.org/10.1118/1.4887980.

41. Chen, Lihua, Xianchun Zeng, Bing Ji, Daihong Liu, Jian Wang, Jiuquan Zhang, and Li Feng. "Improving dynamic contrast-enhanced MRI of the lung using motion-weighted sparse reconstruction: Initial experiences in patients." Magnetic Resonance Imaging 68 (May 2020): 36–44. http://dx.doi.org/10.1016/j.mri.2020.01.013.

42. Liu, Yan, Jianhua Ma, Yi Fan, and Zhengrong Liang. "Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction." Physics in Medicine and Biology 57, no. 23 (November 15, 2012): 7923–56. http://dx.doi.org/10.1088/0031-9155/57/23/7923.

43. Drees, L., and R. Roscher. "Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-1/W1 (May 30, 2017): 133–39. http://dx.doi.org/10.5194/isprs-annals-iv-1-w1-133-2017.
Abstract:
This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained with a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions, and the number of elementary spectra used. We show that a collection of archetypes can be an adequate and efficient alternative to the spectral library with respect to the mentioned criteria.

44. Ye, Shengxi. "A Face Recognition Method Based on Multifeature Fusion." Journal of Sensors 2022 (September 9, 2022): 1–5. http://dx.doi.org/10.1155/2022/2985484.
Abstract:
Face recognition is widely used in daily life and has an important supporting role for social management. Face recognition is mainly based on historical accumulation data to confirm people’s identities in unknown samples and obtain valuable intelligence information. For the problem of face recognition, this paper proposes a multifeature joint adaptive weighting algorithm framework. In this method, a number of different types of features are first used to describe the face characteristics. The selected features should be as complementary as possible, and the overlap redundant information should be reduced to the greatest extent, so as to ensure the performance and efficiency of multifeature fusion. In the classification stage, based on the joint sparse representation model, the multiple types of features are characterized, and their reconstruction error vectors for the corresponding features of the test sample are calculated. The joint sparse representation model can examine the correlation between different types of features, thereby improving the accuracy of representation and fully integrating the advantages of multiple types of features. At the same time, in view of the simple superposition of reconstruction errors in the traditional sparse representation model, this paper uses a random weight matrix to comprehensively consider the weighted reconstruction errors under different weight conditions, so as to obtain statistical decision quantities for the final decision. The framework proposed in this paper can adapt to different multifeature combinations and has good practicability. In the experiment, training and test sets are constructed based on public face image data sets to test the proposed method. The experimental results show that the method in this paper is more effective and robust compared with some present methods for face recognition.

45. Ji, Jingyu, Yuhua Zhang, Zhilong Lin, Yongke Li, Changlong Wang, Yongjiang Hu, Fuyu Huang, and Jiangyi Yao. "Fusion of Infrared and Visible Images Based on Optimized Low-Rank Matrix Factorization with Guided Filtering." Electronics 11, no. 13 (June 26, 2022): 2003. http://dx.doi.org/10.3390/electronics11132003.
Abstract:
In recent years, image fusion has been a research hotspot. However, it is still a big challenge to balance the problems of noiseless image fusion and noisy image fusion. In order to improve the weak performance and low robustness of existing image fusion algorithms in noisy images, an infrared and visible image fusion algorithm based on optimized low-rank matrix factorization with guided filtering is proposed. First, the minimized error reconstruction factorization is introduced into the low-rank matrix, which effectively enhances the optimization performance, and obtains the base image with good filtering performance. Then using the base image as the guide image, the source image is decomposed into the high-frequency layer containing detail information and noise, and the low-frequency layer containing energy information through guided filtering. According to the noise intensity, the sparse reconstruction error is adaptively obtained to fuse the high-frequency layers, and the weighted average strategy is utilized to fuse the low-frequency layers. Finally, the fusion image is obtained by reconstructing the pre-fused high-frequency layer and the pre-fused low-frequency layer. The comparative experiments show that the proposed algorithm not only has good performance for noise-free images, but more importantly, it can effectively deal with the fusion of noisy images.

46. Wang, Weidong, Qunfei Zhang, Wentao Shi, Juan Shi, Weijie Tan, and Xuhu Wang. "Iterative Sparse Covariance Matrix Fitting Direction of Arrival Estimation Method Based on Vector Hydrophone Array." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 1 (February 2020): 14–23. http://dx.doi.org/10.1051/jnwpu/20203810014.
Abstract:
Aiming at the direction of arrival (DOA) estimation of coherent signals in vector hydrophone array, an iterative sparse covariance matrix fitting algorithm is proposed. Based on the fitting criterion of weighted covariance matrix, the objective function of sparse signal power is constructed, and the recursive formula of sparse signal power iteration updating is deduced by using the properties of Frobenius norm. The present algorithm uses the idea of iterative reconstruction to calculate the power of signals on discrete grids, so that the estimated power is more accurate, and thus more accurate DOA estimation can be obtained. The theoretical analysis shows that the power of the signal at the grid point solved by the present algorithm is preprocessed by a filter, which allows signals in specified directions to pass through and attenuate signals in other directions, and has low sensitivity to the correlation of signals. The simulation results show that the average error estimated by the present method is 39.4% of the multi-signal classification high resolution method and 73.7% of the iterative adaptive sparse signal representation method when the signal-to-noise ratio is 15 dB and the non-coherent signal. Moreover, the average error estimated by the present method is 12.9% of the iterative adaptive sparse signal representation method in the case of coherent signal. Therefore, the present algorithm effectively improves the accuracy of target DOA estimation when applying to DOA estimation with highly correlated targets.

47. Wang, Linyu, Xiangjun Yin, Huihui Yue, and Jianhong Xiang. "A Regularized Weighted Smoothed L0 Norm Minimization Method for Underdetermined Blind Source Separation." Sensors 18, no. 12 (December 4, 2018): 4260. http://dx.doi.org/10.3390/s18124260.
Abstract:
Compressed sensing (CS) theory has attracted widespread attention in recent years and has been widely used in signal and image processing, such as underdetermined blind source separation (UBSS), magnetic resonance imaging (MRI), etc. As the main link of CS, the goal of sparse signal reconstruction is to recover the original signal accurately and effectively from an underdetermined linear system of equations (ULSE). For this problem, we propose a new algorithm called the weighted regularized smoothed L0-norm minimization algorithm (WReSL0). Under the framework of this algorithm, we have done three things: (1) proposed a new smoothed function called the compound inverse proportional function (CIPF); (2) proposed a new weighted function; and (3) derived and constructed a new regularization form. In this algorithm, the weighted function and the new smoothed function are combined as the sparsity-promoting objective, and a new regularization form is derived and constructed to enhance de-noising performance. Performance simulation experiments on both real signals and real images show that the proposed WReSL0 algorithm outperforms other popular approaches, such as SL0, BPDN, NSL0, and Lp-RLS, and achieves better performance when used for UBSS.
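
For reference, the baseline smoothed-l0 (SL0) idea that WReSL0 and its weighting build on replaces the l0 count with a smooth surrogate and alternates gradient steps on that surrogate with projections back onto the constraint set. The sketch below uses the classic Gaussian surrogate rather than the paper's compound inverse proportional function; all parameter values are placeholder assumptions.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, n_inner=3, mu=2.0):
    """Classic smoothed-l0 recovery for A x = y (baseline sketch, not WReSL0).

    The l0 "norm" is approximated by n - sum_i exp(-x_i^2 / (2 sigma^2)); for
    each sigma a few gradient steps on the surrogate are taken, and the
    iterate is projected back onto the affine feasible set {x : A x = y}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                      # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(n_inner):
            x = x - mu * x * np.exp(-x ** 2 / (2.0 * sigma ** 2))  # descend surrogate
            x = x - A_pinv @ (A @ x - y)                           # re-project onto A x = y
        sigma *= sigma_decay
    return x
```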

48. Duan, Mingshan, Jiangjiang Xia, Zhongwei Yan, Lei Han, Lejian Zhang, Hanmeng Xia, and Shuang Yu. "Reconstruction of the Radar Reflectivity of Convective Storms Based on Deep Learning and Himawari-8 Observations." Remote Sensing 13, no. 16 (August 23, 2021): 3330. http://dx.doi.org/10.3390/rs13163330.
Abstract:
Radar reflectivity (RR) greater than 35 dBZ usually indicates the presence of severe convective weather, which affects a variety of human activities, including aviation. However, RR data are scarce, especially in regions with poor radar coverage or substantial terrain obstructions. Fortunately, the radiance data of space-based satellites with universal coverage can be converted into a proxy field of RR. In this study, a convolutional neural network-based data-driven model is developed to convert the radiance data (infrared bands 07, 09, 13, 16, and 16–13) of Himawari-8 into the radar combined reflectivity factor (CREF). A weighted loss function is designed to solve the data imbalance problem due to the sparse convective pixels in the sample. The developed model demonstrates an overall reconstruction capability and performs well in terms of classification scores with 35 dBZ as the threshold. A five-channel input is more efficient in reconstructing the CREF than the commonly used one-channel input. In a case study of a convective event over North China in the summer using the test dataset, U-Net reproduces the location, shape and strength of the convective storm well. The present RR reconstruction technology based on deep learning and Himawari-8 radiance data is shown to be an efficient tool for producing high-resolution RR products, which are especially needed for regions without or with poor radar coverage.

49. Chakraborty, Madhuparna, Alaka Barik, Ravinder Nath, and Victor Dutta. "NonConvex Iteratively Reweighted Least Square Optimization in Compressive Sensing." Advanced Materials Research 341-342 (September 2011): 629–33. http://dx.doi.org/10.4028/www.scientific.net/amr.341-342.629.
Abstract:
In this paper, we study a method for sparse signal recovery based on an iteratively reweighted least squares approach, which in many situations outperforms other reconstruction methods reported in the literature in that comparatively fewer measurements are needed for exact recovery. The algorithm involves solving a sequence of weighted minimization problems for nonconvex penalties, where the weights for the next iteration are determined from the value of the current solution. We present a number of experiments demonstrating the performance of the algorithm, studied via computer simulation for different numbers of measurements and degrees of sparsity. The simulation results also show that an improvement is achieved by incorporating a regularization strategy.
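
The iteratively reweighted least squares scheme described above admits a compact closed-form inner step: each iteration solves a weighted minimum-norm problem and refreshes the weights from the current solution, with a damping parameter that is shrunk gradually to handle the nonconvex (p < 1) case. The following is a minimal sketch under assumed parameters, not the authors' exact implementation.

```python
import numpy as np

def irls_sparse_recovery(A, y, p=0.5, n_iters=30, eps0=1.0):
    """IRLS for min ||x||_p^p subject to A x = y (noiseless sketch).

    Each pass solves min sum_i w_i x_i^2 s.t. A x = y in closed form, with
    weights w_i = (x_i^2 + eps)^(p/2 - 1) taken from the previous iterate;
    eps is the regularization/damping term shrunk over the iterations."""
    x = np.linalg.pinv(A) @ y           # least-norm initialization
    eps = eps0
    for _ in range(n_iters):
        w_inv = (x ** 2 + eps) ** (1.0 - p / 2.0)          # 1 / w_i
        AW = A * w_inv                                      # A diag(1/w)
        x = w_inv * (A.T @ np.linalg.solve(AW @ A.T, y))    # weighted minimum-norm solution
        eps = max(eps / 10.0, 1e-12)
    return x

# toy usage with assumed dimensions: recover a 5-sparse vector from 40 measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = irls_sparse_recovery(A, A @ x_true)
print(np.round(np.linalg.norm(x_hat - x_true), 4))
```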

50. Nawaz Jadoon, Rab, Waqas Jadoon, Ahmad Khan, Zia ur Rehman, Sajid Shah, Iftikhar Ahmed Khan, and WuYang Zhou. "Linear Discriminative Learning for Image Classification." Mathematical Problems in Engineering 2019 (October 20, 2019): 1–12. http://dx.doi.org/10.1155/2019/4760614.
Abstract:
In this paper, we propose a linear discriminative learning model called adaptive locality-based weighted collaborative representation (ALWCR) that formulates the image classification task as an optimization problem to reduce the reconstruction error between the query sample and its computed linear representation. The optimal linear representation for a query image is obtained by using the weighted regularized linear regression approach which incorporates intrinsic locality structure and feature variance between data into representation. The resultant representation increases the discrimination ability for correct classification. The proposed ALWCR method can be considered an extension of the collaborative representation- (CR-) based classification approach which is an alternative to the sparse representation- (SR-) based classification method. ALWCR improved the discriminant ability for classification as compared with CR original formulation and overcomes the limitations that arose due to a small training sample size and low feature dimension. Experimental results obtained using various feature dimensions on well-known publicly available face and digit datasets have verified the competitiveness of the proposed method against competing image classification methods.