To see the other types of publications on this topic, follow the link: L0 regularization.

Journal articles on the topic 'L0 regularization'

Consult the top 50 journal articles for your research on the topic 'L0 regularization.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhu, Jiehua, and Xiezhang Li. "A Smoothed l0-Norm and l1-Norm Regularization Algorithm for Computed Tomography." Journal of Applied Mathematics 2019 (June 2, 2019): 1–8. http://dx.doi.org/10.1155/2019/8398035.

Full text
Abstract:
The nonmonotone alternating direction algorithm (NADA) was recently proposed for effectively solving a class of equality-constrained nonsmooth optimization problems and has been applied to total variation minimization in image reconstruction, but the reconstructed images suffer from artifacts. Although l0-norm regularization retains edges effectively, the resulting problem is NP-hard. The smoothed l0-norm approximates the l0-norm as a limit of smooth convex functions and provides a smooth measure of sparsity in applications. Smoothed l0-norm regularization has been an attractive research topic in sparse image and signal recovery. In this paper, we present a combined smoothed l0-norm and l1-norm regularization algorithm using the NADA for image reconstruction in computed tomography. We resolve the computational challenge arising from the smoothed l0-norm minimization. The numerical experiments demonstrate that, compared with l1-norm regularization in the absence of the smoothed l0-norm, the proposed algorithm improves the quality of the reconstructed images at the same CPU cost, and reduces the computation time significantly while maintaining the same image quality.
APA, Harvard, Vancouver, ISO, and other styles
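The smoothed l0-norm that the entries above build on replaces the discontinuous count of nonzeros with a limit of smooth functions. A minimal sketch of the commonly used Gaussian surrogate (a generic illustration, not either paper's algorithm; function and variable names are mine):

```python
import numpy as np

def smoothed_l0(x, sigma):
    """Smooth surrogate for ||x||_0 using the Gaussian family
    f_sigma(t) = exp(-t^2 / (2*sigma^2)): as sigma -> 0,
    n - sum_i f_sigma(x_i) tends to the number of nonzeros."""
    x = np.asarray(x, dtype=float)
    return x.size - np.exp(-x**2 / (2.0 * sigma**2)).sum()

x = np.array([0.0, 0.0, 3.0, -2.0, 0.0])
print(smoothed_l0(x, 1.0))    # a loose, smooth estimate of sparsity
print(smoothed_l0(x, 0.01))   # -> 2.0, the true ||x||_0
```

Large sigma gives a smooth, easily optimized measure; shrinking sigma gradually recovers the exact nonzero count, which is why these algorithms anneal sigma downward.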
2

Li, Xiezhang, Guocan Feng, and Jiehua Zhu. "An Algorithm of l1-Norm and l0-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection." International Journal of Biomedical Imaging 2020 (August 28, 2020): 1–6. http://dx.doi.org/10.1155/2020/8873865.

Full text
Abstract:
The l1-norm regularization has attracted attention for image reconstruction in computed tomography. The l0-norm of the gradients of an image provides a measure of the sparsity of the image's gradients. In this paper, we present a new combined l1-norm and l0-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework that solves the optimization effectively using the nonmonotone alternating direction algorithm with a hard thresholding method. Numerical experiments indicate that the new algorithm achieves substantial improvement by incorporating the l0-norm regularization.
APA, Harvard, Vancouver, ISO, and other styles
3

Fan, Qinwei, and Ting Liu. "Smoothing L0 Regularization for Extreme Learning Machine." Mathematical Problems in Engineering 2020 (July 6, 2020): 1–10. http://dx.doi.org/10.1155/2020/9175106.

Full text
Abstract:
Extreme learning machine (ELM) was put forward for single-hidden-layer feedforward networks. Because of its powerful modeling ability and its limited need for human intervention, the ELM algorithm has been used widely in both regression and classification experiments. However, to achieve the required accuracy, it needs many more hidden nodes than conventional neural networks typically do. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization. The novel algorithm updates weights in the direction along which the overall square error is reduced the most, and can therefore sparsify the network structure very efficiently. The numerical experiments show that the ELM algorithm with smoothing L0 regularization has fewer hidden nodes but better generalization performance than the original ELM and the ELM with L1 regularization.
APA, Harvard, Vancouver, ISO, and other styles
4

Lee, Kyung-Sik. "Signomial Classification Method with L0-regularization." IE interfaces 24, no. 2 (June 1, 2011): 151–55. http://dx.doi.org/10.7232/ieif.2011.24.2.151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Xiaoqing, Rongrong Hou, and Yuhan Wu. "Structural damage detection based on iteratively reweighted l1 regularization algorithm." Advances in Structural Engineering 22, no. 6 (December 7, 2018): 1479–87. http://dx.doi.org/10.1177/1369433218817138.

Full text
Abstract:
Structural damage usually appears in only a few sections or members, which is sparse compared with the total number of elements in the entire structure. According to sparse recovery theory, recently developed damage detection methods employ the l1 regularization technique to exploit the sparsity of structural damage. However, in practice, the solution obtained by l1 regularization is typically suboptimal. The l0 regularization technique outperforms l1 regularization in various aspects of sparse recovery, whereas the associated nonconvex optimization problem is NP-hard and computationally infeasible. In this study, a damage detection method based on the iteratively reweighted l1 regularization algorithm is proposed. An iterative procedure is employed such that the nonconvex optimization problem of l0 regularization can be efficiently solved by transforming it into a series of weighted l1 regularization problems. An experimental example demonstrates that the proposed method can accurately locate sparse damage over a large number of elements. The advantage of the iteratively reweighted l1 regularization algorithm over l1 regularization in damage detection is also demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
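The core idea of the entry above, approximating l0 regularization by a sequence of weighted l1 problems with weights w_i = 1/(|x_i| + ε), can be sketched on a toy denoising problem where each weighted l1 step has a closed-form soft-thresholding solution (a generic illustration, not the paper's damage-detection code; all names are mine):

```python
import numpy as np

def soft(v, t):
    # Elementwise soft-thresholding: prox of the weighted l1 term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1_denoise(y, lam=0.5, eps=1e-2, iters=5):
    """Approximate the l0-regularized denoiser
        min_x 0.5*||x - y||^2 + lam*||x||_0
    by a series of weighted l1 problems with w_i = 1/(|x_i| + eps)."""
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)   # large weight -> coordinate pushed to 0
        x = soft(y, lam * w)          # closed-form weighted l1 update
    return x

y = np.array([2.0, 0.05, -1.5, 0.01])
x = reweighted_l1_denoise(y)
print(x)  # small entries are driven exactly to zero
```

Coordinates near zero accumulate ever-larger weights and are thresholded away, while large coordinates keep small weights and survive almost unshrunk, which mimics the l0 penalty.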
6

Li, Kun, Na Qi, and Qing Zhu. "Fluid Simulation with an L0 Based Optical Flow Deformation." Applied Sciences 10, no. 18 (September 12, 2020): 6351. http://dx.doi.org/10.3390/app10186351.

Full text
Abstract:
Fluid simulations can be automatically interpolated using data-driven fluid simulation based on a space-time deformation. In this paper, we propose a novel data-driven fluid simulation scheme with an L0-based optical flow deformation method that matches two fluid surfaces, rather than using L2 regularization. The L0 gradient smoothing regularization brings out the prominent structure of the fluid in a sparsity-controlled manner, so the misalignment of the deformation can be suppressed. We minimize the objective function using alternating minimization with half-quadratic splitting to solve the L0-based optical flow deformation model. Experimental results demonstrate that our proposed method generates more realistic fluid surfaces with the optimal space-time deformation under the L0 gradient smoothness constraint than under the L2 one, and outperforms the state-of-the-art methods in terms of both objective and subjective quality.
APA, Harvard, Vancouver, ISO, and other styles
7

Frommlet, Florian, and Grégory Nuel. "An Adaptive Ridge Procedure for L0 Regularization." PLOS ONE 11, no. 2 (February 5, 2016): e0148620. http://dx.doi.org/10.1371/journal.pone.0148620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Lingli, and An Luo. "l1/2 regularization for wavelet frames based few-view CT reconstruction." E3S Web of Conferences 269 (2021): 01020. http://dx.doi.org/10.1051/e3sconf/202126901020.

Full text
Abstract:
Reducing radiation exposure in computed tomography (CT) has always been a significant research topic in radiology. Image reconstruction from few-view projections is a reasonable and effective way to decrease the number of rays and thereby lower the radiation exposure. But how to maintain high image reconstruction quality while reducing radiation exposure is a major challenge. To solve this problem, several researchers have focused on l0- or l1-regularization-based optimization models. However, the solution of an l1-regularization-based optimization model is not as sparse as that of l1/2 or l0 regularization, and solving the l0 regularization is more difficult than solving the l1/2 regularization. In this paper, we develop an l1/2 regularization model for wavelet-frame-based image reconstruction to study the few-view problem. First, the existence of a solution of the corresponding model is demonstrated. Second, an alternating direction method (ADM) is utilized to separate the original problem into two subproblems: the former, for the image, is solved using the idea of the proximal mapping, the simultaneous algebraic reconstruction technique (SART), and the projection and contraction (PC) algorithm; the latter, for the wavelet coefficients, is solved using the half thresholding (HT) algorithm. Furthermore, the convergence of our method is analyzed through simulated implementations. Simulated and real experiments confirm the effectiveness of our method.
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Guodong. "Image Decomposition Model OSV with L0 Sparse Regularization." Journal of Information and Computational Science 12, no. 2 (January 20, 2015): 743–50. http://dx.doi.org/10.12733/jics20105230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Christou, Antonis, and Andreas Artemiou. "Adaptive L0 Regularization for Sparse Support Vector Regression." Mathematics 11, no. 13 (June 22, 2023): 2808. http://dx.doi.org/10.3390/math11132808.

Full text
Abstract:
In this work, we propose a sparse version of the Support Vector Regression (SVR) algorithm that uses regularization to achieve sparsity in function estimation. To achieve this, we use an adaptive L0 penalty that has a ridge structure and therefore does not introduce additional computational complexity to the algorithm. In addition, we use an alternative approach based on a similar proposal in the Support Vector Machine (SVM) literature. Through numerical studies, we demonstrate the effectiveness of our proposals. We believe this is the first time a sparse version of Support Vector Regression has been discussed in terms of variable selection rather than support vector selection.
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Yangyang, Zhiming He, Xu Zhan, Yuanhua Fu, and Liming Zhou. "Three-Dimensional Sparse SAR Imaging with Generalized Lq Regularization." Remote Sensing 14, no. 2 (January 9, 2022): 288. http://dx.doi.org/10.3390/rs14020288.

Full text
Abstract:
Three-dimensional (3D) synthetic aperture radar (SAR) imaging provides complete 3D spatial information and has been used in environmental monitoring in recent years. Compared with matched filtering (MF) algorithms, the regularization technique can improve image quality. However, due to the substantial computational cost, the existing observation-matrix-based sparse imaging algorithm is difficult to apply to large-scene and 3D reconstructions. Therefore, in this paper, novel 3D sparse reconstruction algorithms with generalized Lq regularization are proposed. First, we combine majorization–minimization (MM) and L1 regularization (MM-L1) to improve SAR image quality. Next, we combine MM and L1/2 regularization (MM-L1/2) to achieve high-quality 3D images. Then, we present an algorithm combining MM and L0 regularization (MM-L0) to obtain 3D images. Finally, we present a generalized MM-Lq algorithm (GMM-Lq) for sparse SAR imaging problems with arbitrary q (0 ≤ q ≤ 1). The proposed algorithm can improve the performance of 3D SAR images compared with existing regularization techniques and effectively reduce the amount of calculation needed. Additionally, the reconstructed complex image retains the phase information, which keeps the reconstructed SAR image suitable for interferometry applications. Simulation and experimental results verify the effectiveness of the algorithms.
APA, Harvard, Vancouver, ISO, and other styles
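The MM idea in the entry above, majorizing the concave |x|^q penalty by a quadratic at the current iterate so that each step reduces to a weighted ridge regression, might be sketched as follows (a generic reweighted-ridge illustration with parameter names of my own, not the authors' SAR pipeline):

```python
import numpy as np

def mm_lq(A, b, q=0.5, lam=0.1, eps=1e-6, iters=30):
    """Majorization-minimization for  min_x ||Ax - b||^2 + lam * sum_i |x_i|^q :
    at each step |x|^q is majorized by a quadratic at the current iterate,
    so the update is a closed-form weighted ridge regression."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares start
    for _ in range(iters):
        w = q * (np.abs(x) + eps) ** (q - 2.0)        # curvature of majorizer
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
x_hat = mm_lq(A, A @ x_true)   # near-zero coordinates get huge weights and stay pinned
```

Near-zero coordinates receive enormous ridge weights and are suppressed, while large coordinates are barely penalized; varying q between 0 and 1 interpolates the strength of that effect.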
12

Xiang, Jianhong, Huihui Yue, Xiangjun Yin, and Guoqing Ruan. "A Reweighted Symmetric Smoothed Function Approximating L0-Norm Regularized Sparse Reconstruction Method." Symmetry 10, no. 11 (November 2, 2018): 583. http://dx.doi.org/10.3390/sym10110583.

Full text
Abstract:
Sparse-signal recovery in noisy conditions is a problem that can be solved with current compressive-sensing (CS) technology. Although current algorithms based on L1 regularization can solve this problem, the L1 regularization mechanism cannot promote signal sparsity under noisy conditions, resulting in low recovery accuracy. Based on this, we propose a regularized reweighted composite trigonometric smoothed L0-norm minimization (RRCTSL0) algorithm in this paper. The main contributions of this paper are as follows: (1) a new smoothed symmetric composite trigonometric (CT) function is proposed to fit the L0-norm; (2) a new reweighted function is proposed; and (3) a new L0 regularization objective function framework is constructed based on the idea of Tikhonov regularization. In the new objective function framework, Contributions (1) and (2) are combined as sparsity regularization terms, and errors as deviation terms. Furthermore, the conjugate-gradient (CG) method is used to optimize the objective function, so as to achieve accurate recovery of sparse signals and images under noisy conditions. The numerical experiments on both simulated and real data verify that the proposed algorithm is superior to other state-of-the-art algorithms and achieves advanced performance under noisy conditions.
APA, Harvard, Vancouver, ISO, and other styles
13

Li, Haoxiang, and Jianmin Zheng. "L0-Regularization based Material Design for Hexahedral Mesh Models." Computer-Aided Design and Applications 19, no. 6 (March 9, 2022): 1171–83. http://dx.doi.org/10.14733/cadaps.2022.1171-1183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Yan, Jingwen, Tingting Xie, Hong Peng, and Panhua Liu. "Motion Image Deblurring Based on L0 Norms Regularization Term." Laser & Optoelectronics Progress 54, no. 2 (2017): 021005. http://dx.doi.org/10.3788/lop54.021005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Zhao, Yong, Hong Qin, Xueying Zeng, Junli Xu, and Junyu Dong. "Robust and effective mesh denoising using L0 sparse regularization." Computer-Aided Design 101 (August 2018): 82–97. http://dx.doi.org/10.1016/j.cad.2018.04.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Dai, Ronghuo, and Jun Yang. "Amplitude-Versus-Angle (AVA) Inversion for Pre-Stack Seismic Data with L0-Norm-Gradient Regularization." Mathematics 11, no. 4 (February 9, 2023): 880. http://dx.doi.org/10.3390/math11040880.

Full text
Abstract:
Amplitude-versus-angle (AVA) inversion for pre-stack seismic data is a key technology in oil and gas reservoir prediction. Conventional AVA inversion contains two main stages. Stage one estimates the relative change rates of P-wave velocity, S-wave velocity, and density, and stage two obtains the P-wave velocity, S-wave velocity, and density from their relative change rates through trace integration. An alternative way merges these two stages to estimate P-wave velocity, S-wave velocity, and density directly. This approach is less sensitive to noise in seismic data than conventional two-stage AVA inversion. However, the regularization for the direct AVA inversion is more complex. To regularize this merged inverse problem, the L0-norm-gradient of P-wave velocity, S-wave velocity, and density was used. L0-norm-gradient regularization can provide inversion results with blocky features that make formation interfaces and geological edges precise. L0-norm-gradient regularized AVA inversion was then performed on synthetic seismic traces. Next, a real seismic data line containing three partial-angle stack profiles was used to test its practical application. The inversion results from synthetic and real seismic data showed that L0-norm-gradient regularized AVA inversion is an effective way to estimate P-wave velocity, S-wave velocity, and density.
APA, Harvard, Vancouver, ISO, and other styles
17

Lee, Han-Sol, Changgyun Jin, Chanwoo Shin, and Seong-Eun Kim. "Sparse Diffusion Least Mean-Square Algorithm with Hard Thresholding over Networks." Mathematics 11, no. 22 (November 14, 2023): 4638. http://dx.doi.org/10.3390/math11224638.

Full text
Abstract:
This paper proposes a distributed estimation technique utilizing the diffusion least mean-square (LMS) algorithm, specifically designed for sparse systems in which many coefficients of the system are zeros. To efficiently utilize the sparse representation of the system and achieve a promising performance, we have incorporated L0-norm regularization into the diffusion LMS algorithm. This integration is accomplished by employing hard thresholding through a variable splitting method into the update equation. The efficacy of our approach is validated by comprehensive theoretical analysis, rigorously examining the mean stability as well as the transient and steady-state behaviors of the proposed algorithm. The proposed algorithm preserves the behavior of large coefficients and strongly enforces smaller coefficients toward zero through the relaxation of L0-norm regularization. Experimental results show that the proposed algorithm achieves superior convergence performance compared with conventional sparse algorithms.
APA, Harvard, Vancouver, ISO, and other styles
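Hard thresholding, which the entry above uses to relax the L0-norm regularization, is the proximal operator of the l0 penalty. A minimal sketch (a generic illustration, not the paper's diffusion LMS update):

```python
import numpy as np

def hard_threshold(x, t):
    """Hard-thresholding operator: the proximal map of the l0 penalty.
    Entries with magnitude <= t are zeroed; the rest are kept unchanged
    (unlike soft thresholding, which also shrinks the survivors)."""
    x = np.asarray(x, dtype=float).copy()
    x[np.abs(x) <= t] = 0.0
    return x

print(hard_threshold([0.9, -0.05, 0.0, -2.3], 0.1))
```

This is exactly the behavior the abstract describes: large coefficients are preserved untouched while small ones are forced to zero.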
18

Yang, Shuifeng, Yong Zhao, Xingyu Tuo, Deqing Mao, Yin Zhang, and Jianyu Yang. "Real Aperture Radar Angular Super-Resolution Imaging Using Modified Smoothed L0 Norm with a Regularization Strategy." Remote Sensing 16, no. 1 (December 19, 2023): 12. http://dx.doi.org/10.3390/rs16010012.

Full text
Abstract:
Restricted by the ill-posed antenna measurement matrix, the conventional smoothed L0 norm algorithm (SL0) fails to enable direct real aperture radar angular super-resolution imaging. This paper proposes a modified smoothed L0 norm (MSL0) algorithm to address this issue. First, as the pseudo-inverse of the ill-posed antenna measurement matrix is required to set the initial values and calculate the gradient projection, a regularization strategy is employed to relax the ill-posedness. Based on the regularization strategy, the proposed MSL0 algorithm can avoid noise amplification when faced with the ill-posed antenna measurement matrix of real aperture radar. Additionally, to prevent local minima problems, we introduce a hard thresholding operator, based on which the proposed MSL0 algorithm can accurately reconstruct sparse targets. Simulations and experimental results verify the performance of the proposed MSL0 algorithm.
APA, Harvard, Vancouver, ISO, and other styles
19

Cao, Chen, Matthew Greenberg, and Quan Long. "WgLink: reconstructing whole-genome viral haplotypes using L0+L1-regularization." Bioinformatics 37, no. 17 (February 3, 2021): 2744–46. http://dx.doi.org/10.1093/bioinformatics/btab076.

Full text
Abstract:
Summary: Many tools can reconstruct viral sequences based on next-generation sequencing reads. Although existing tools effectively recover local regions, their accuracy suffers when reconstructing whole viral genomes (strains). Moreover, they consume significant memory when the sequencing coverage is high or when the genome size is large. We present WgLink to meet this challenge. WgLink takes local reconstructions produced by other tools as input and patches the resulting segments together into coherent whole-genome strains. We accomplish this using an L0+L1-regularized regression, synthesizing variant allele frequency data with physical linkage between multiple variants spanning multiple regions simultaneously. WgLink achieves higher accuracy than existing tools on both simulated and real datasets while using significantly less memory (RAM) and fewer CPU hours. Availability and implementation: Source code and binaries are freely available at https://github.com/theLongLab/wglink. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
20

Xiang, Jianhong, Huihui Yue, Xiangjun Yin, and Linyu Wang. "A New Smoothed L0 Regularization Approach for Sparse Signal Recovery." Mathematical Problems in Engineering 2019 (July 17, 2019): 1–12. http://dx.doi.org/10.1155/2019/1978154.

Full text
Abstract:
Sparse signal reconstruction, as the main link of compressive sensing (CS) theory, has attracted extensive attention in recent years. The essence of sparse signal reconstruction is how to recover the original signal accurately and effectively from an underdetermined linear system equation (ULSE). For this problem, we propose a new algorithm called the regularization reweighted smoothed L0 norm minimization algorithm, which is simply called the RRSL0 algorithm. Three innovations are made under the framework of this method: (1) a new smoothed function called the compound inverse proportional function (CIPF) is proposed; (2) a new reweighted function is proposed; and (3) a mixed conjugate gradient (MCG) method is proposed. In this algorithm, the reweighted function and the new smoothed function are combined as the sparsity-promoting objective, and the constraint term ‖y − Φx‖₂² is taken as a deviation term. Both of them constitute an unconstrained optimization problem under the Tikhonov regularization criterion, and the MCG method constructed is used to optimize the problem and realize high-precision reconstruction of sparse signals under noise conditions. Sparse signal recovery experiments on both simulated and real data show the proposed RRSL0 algorithm performs better than other popular approaches and achieves state-of-the-art performance in signal and image processing.
APA, Harvard, Vancouver, ISO, and other styles
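Several entries in this list modify the classical smoothed-L0 (SL0) iteration: gradient steps on a Gaussian surrogate with a decreasing sigma schedule, each followed by a projection back onto the measurement constraint. A bare-bones sketch of that baseline (the step size and sigma schedule are illustrative choices of mine, not any paper's settings):

```python
import numpy as np

def sl0(A, b, sigmas=(1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01), inner=3, mu=2.0):
    """Bare-bones smoothed-L0 solver for:  min ||x||_0  s.t.  Ax = b.
    For each sigma, take gradient steps on the Gaussian surrogate and
    project back onto the affine constraint set."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                                    # minimum-l2-norm start
    for sigma in sigmas:                              # graduated smoothing
        for _ in range(inner):
            d = x * np.exp(-x**2 / (2.0 * sigma**2))  # surrogate gradient step
            x = x - mu * d                            # push small entries to 0
            x = x - A_pinv @ (A @ x - b)              # restore Ax = b
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))
x_true = np.zeros(16)
x_true[[2, 9]] = [1.0, -2.0]
x_hat = sl0(A, A @ x_true)   # the final projection keeps Ax = b satisfied
```

The regularized and reweighted variants in the papers above change the surrogate function, add weights, or replace the pseudo-inverse projection when the measurement matrix is ill-posed, but this annealed gradient-projection loop is the common skeleton.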
21

Zhu, Jun, Changwei Chen, Shoubao Su, and Zinan Chang. "Compressive Sensing of Multichannel EEG Signals via lq Norm and Schatten-p Norm Regularization." Mathematical Problems in Engineering 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/2189563.

Full text
Abstract:
In Wireless Body Area Networks (WBANs), energy consumption is dominated by sensing and communication. Recently, a simultaneous cosparsity and low-rank (SCLR) optimization model has shown state-of-the-art performance in compressive sensing (CS) recovery of multichannel EEG signals. How to solve the resulting regularization problem, involving the l0 norm and the rank function, which is known to be NP-hard, is critical to the recovery results. SCLR makes use of the l1 norm and the nuclear norm as convex surrogates for the l0 norm and the rank function. However, the l1 norm and the nuclear norm cannot approximate the l0 norm and the rank well because there exist irreparable gaps between them. In this paper, an optimization model with the lq norm and the Schatten-p norm is proposed to enforce cosparsity and the low-rank property in the reconstructed multichannel EEG signals. An efficient iterative scheme is used to solve the resulting nonconvex optimization problem. Experimental results have demonstrated that the proposed algorithm can significantly outperform existing state-of-the-art CS methods for compressive sensing of multichannel EEG signals.
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Chuncheng, and Zhiying Long. "Euler’s Elastica Regularization for Voxel Selection of fMRI Data." International Journal of Signal Processing Systems 8, no. 2 (June 2020): 32–41. http://dx.doi.org/10.18178/ijsps.8.2.32-41.

Full text
Abstract:
Multivariate analysis methods have been widely applied to functional Magnetic Resonance Imaging (fMRI) data to reveal brain activity patterns and decode brain states. Among the various multivariate analysis methods, multivariate regression models that take high-dimensional fMRI data as inputs while using relevant regularization have been proposed for voxel selection or decoding. Although some previous studies added sparse regularization to the multivariate regression model to select relevant voxels, the selected sparse voxels cannot be used to map the brain activity of each task. Compared with sparse regularization, Euler's Elastica (EE) regularization, which considers the spatial information of the data, can identify the clustered voxels of fMRI data. Our previous study added EE regularization to Logistic Regression (EELR) and demonstrated its advantages over other regularizations in fMRI-based decoding. In this study, we further developed a multivariate regression model using EE in 3D space as a constraint for voxel selection. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of the EE regression model. The performance of EE regression was compared with the Generalized Linear Model (GLM) and Total Variation (TV) regression in brain activity detection, and with GLM, Laplacian Smoothed L0 norm (LSL0), and TV regression methods in feature selection for brain state decoding. The results indicated that EE regression possessed better sensitivity for detecting brain regions specific to a task than GLM and better spatial detection power than TV regression. Moreover, EE regression outperformed GLM, LSL0, and TV in feature selection.
APA, Harvard, Vancouver, ISO, and other styles
23

Wei, Zhe, Qingfa Li, Jiazhen Wei, and Wei Bian. "Neural network for a class of sparse optimization with L0-regularization." Neural Networks 151 (July 2022): 211–21. http://dx.doi.org/10.1016/j.neunet.2022.03.033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Fengjun, Wei Lu, Hongmei Liu, and Fei Xue. "Natural image deblurring based on L0-regularization and kernel shape optimization." Multimedia Tools and Applications 77, no. 20 (April 18, 2018): 26239–57. http://dx.doi.org/10.1007/s11042-018-5847-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kim, Kyuseok, and Ji-Youn Kim. "Blind Deconvolution Based on Compressed Sensing with bi-l0-l2-norm Regularization in Light Microscopy Image." International Journal of Environmental Research and Public Health 18, no. 4 (February 12, 2021): 1789. http://dx.doi.org/10.3390/ijerph18041789.

Full text
Abstract:
Blind deconvolution of light microscopy images could improve the ability to distinguish cell-level substances. In this study, we investigated a blind deconvolution framework for light microscope images that combines the benefits of bi-l0-l2-norm regularization with compressed sensing and conjugate gradient algorithms. Several existing regularization approaches are limited by staircase artifacts (or cartoon artifacts) and noise amplification. Thus, we implemented our strategy to overcome these problems using the proposed bi-l0-l2-norm regularization. It was investigated through simulations and experiments using optical microscopy images including background noise. The sharpness was improved through successful image restoration while minimizing noise amplification. In addition, quantitative factors of the restored images, including the intensity profile, root-mean-square error (RMSE), edge preservation index (EPI), structural similarity index measure (SSIM), and normalized noise power spectrum, were improved compared with those of existing or comparative images. In particular, the results of the proposed method showed RMSE, EPI, and SSIM values of approximately 0.12, 0.81, and 0.88 when compared with the reference. In addition, the RMSE, EPI, and SSIM values in the restored image were improved by about 5.97, 1.26, and 1.61 times compared with the degraded image. Consequently, the proposed method is expected to be effective for image restoration and to reduce the cost of a high-performance light microscope.
APA, Harvard, Vancouver, ISO, and other styles
26

Guo, Di, Zhangren Tu, Jiechao Wang, Min Xiao, Xiaofeng Du, and Xiaobo Qu. "Salt and Pepper Noise Removal with Multi-Class Dictionary Learning and L0 Norm Regularizations." Algorithms 12, no. 1 (December 25, 2018): 7. http://dx.doi.org/10.3390/a12010007.

Full text
Abstract:
Images may be corrupted by salt-and-pepper impulse noise during image acquisition or transmission. Although promising denoising performance has recently been obtained with sparse representations, how to restore high-quality images remains challenging and open. In this work, image sparsity is enhanced with a fast multiclass dictionary learning method, and then both the sparsity regularization and the robust data fidelity are formulated as minimizations of L0-L0 norms for salt-and-pepper impulse noise removal. Additionally, a numerical algorithm of modified alternating direction minimization is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art ones in preserving image details and achieving higher objective evaluation criteria.
APA, Harvard, Vancouver, ISO, and other styles
27

Xu, Jian, Lanlan Rao, Franz Schreier, Dmitry S. Efremenko, Adrian Doicu, and Thomas Trautmann. "Insight into Construction of Tikhonov-Type Regularization for Atmospheric Retrievals." Atmosphere 11, no. 10 (October 1, 2020): 1052. http://dx.doi.org/10.3390/atmos11101052.

Full text
Abstract:
In atmospheric science we are confronted with inverse problems arising in applications associated with retrievals of geophysical parameters. A nonlinear mapping from geophysical quantities (e.g., atmospheric properties) to spectral measurements can be represented by a forward model. The inversion often suffers from a lack of stability, but its stabilization by proper approaches can be treated with sufficient generality. In principle, regularization can enforce uniqueness of the solution when additional information is incorporated into the inversion process. In this paper, we analyze different forms of the regularization matrix L in the framework of Tikhonov regularization: the identity matrix L0, discrete approximations of the first- and second-order derivative operators L1 and L2, respectively, and the Cholesky factor of the a priori profile covariance matrix LC. Each form of L has its intrinsic pros and cons and thus may lead to different performance of inverse algorithms. An extensive comparison of the different matrices is conducted in two applications using synthetic data from airborne and satellite sensors: retrieving atmospheric temperature profiles from microwave spectral measurements, and deriving aerosol properties from near-infrared spectral measurements. The regularized solution obtained with L0 possesses a reasonable magnitude, but its smoothness is not always assured. The retrieval using L1 and L2 produces a solution in favor of smoothness, and the impact of the a priori knowledge is less critical on the retrieval using L1. The retrieval performance of LC is affected by the accuracy of the a priori knowledge.
APA, Harvard, Vancouver, ISO, and other styles
28

Quasdane, Mohamed, Hassan Ramchoun, and Tawfik Masrour. "Sparse smooth group L0∘L1/2 regularization method for convolutional neural networks." Knowledge-Based Systems 284 (January 2024): 111327. http://dx.doi.org/10.1016/j.knosys.2023.111327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Linyu, Xiangjun Yin, Huihui Yue, and Jianhong Xiang. "A Regularized Weighted Smoothed L0 Norm Minimization Method for Underdetermined Blind Source Separation." Sensors 18, no. 12 (December 4, 2018): 4260. http://dx.doi.org/10.3390/s18124260.

Full text
Abstract:
Compressed sensing (CS) theory has attracted widespread attention in recent years and has been widely used in signal and image processing, such as underdetermined blind source separation (UBSS), magnetic resonance imaging (MRI), etc. As the main link of CS, the goal of sparse signal reconstruction is to accurately and effectively recover the original signal from an underdetermined linear system of equations (ULSE). For this problem, we propose a new algorithm called the weighted regularized smoothed L0-norm minimization algorithm (WReSL0). Under the framework of this algorithm, we have done three things: (1) proposed a new smoothed function called the compound inverse proportional function (CIPF); (2) proposed a new weighted function; and (3) derived and constructed a new regularization form. In this algorithm, the weighted function and the new smoothed function are combined as the sparsity-promoting objective, and the new regularization form is derived and constructed to enhance de-noising performance. Performance simulation experiments on both real signals and real images show that the proposed WReSL0 algorithm outperforms other popular approaches, such as SL0, BPDN, NSL0, and Lp-RLS, and achieves better performance when used for UBSS.
APA, Harvard, Vancouver, ISO, and other styles
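WReSL0 builds on the classic SL0 idea of replacing the L0 norm with a sequence of smooth surrogates whose width shrinks over iterations, alternating a gradient step on the smoothed sparsity measure with a projection onto the feasible set. A hedged sketch of the standard SL0 iteration with the usual Gaussian surrogate (not the paper's weighted/regularized variant; parameters are illustrative):

```python
import numpy as np

def sl0(A, b, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner_iters=3):
    """Basic smoothed-L0 recovery of a sparse x satisfying Ax = b (noise-free)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                              # minimum-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # ascend the smoothed sparsity measure sum(exp(-x^2 / (2 sigma^2)))
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # project back onto the affine feasible set {x : Ax = b}
            x = x - A_pinv @ (A @ x - b)
        sigma *= sigma_decay
    return x

# Demo: recover a 4-sparse vector from 32 random measurements of a length-64 signal
rng = np.random.default_rng(1)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[rng.choice(64, size=4, replace=False)] = rng.choice([-1.0, 1.0], size=4)
b = A @ x_true
x_hat = sl0(A, b)
```

The final projection keeps every iterate feasible, so the returned x_hat always satisfies Ax = b; the decreasing sigma schedule is what steers it toward a sparse feasible point.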
30

Guo, Kaiwen, Feng Xu, Yangang Wang, Yebin Liu, and Qionghai Dai. "Errata to “Robust Non-Rigid Motion Tracking and Surface Reconstruction Using L0 Regularization”." IEEE Transactions on Visualization and Computer Graphics 24, no. 7 (July 1, 2018): 2268. http://dx.doi.org/10.1109/tvcg.2018.2826859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Liu, Jingjing, Guoxi Ni, and Shaowen Yan. "Alternating method based on framelet l0-norm and TV regularization for image restoration." Inverse Problems in Science and Engineering 27, no. 6 (July 30, 2018): 790–807. http://dx.doi.org/10.1080/17415977.2018.1500569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Yong, Bowei Shen, Shaofan Wang, Dehui Kong, and Baocai Yin. "L0-regularization-based skeleton optimization from consecutive point sets of kinetic human body." ISPRS Journal of Photogrammetry and Remote Sensing 143 (September 2018): 124–33. http://dx.doi.org/10.1016/j.isprsjprs.2018.04.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bouaziz, Olivier, and Grégory Nuel. "L0 Regularization for the Estimation of Piecewise Constant Hazard Rates in Survival Analysis." Applied Mathematics 08, no. 03 (2017): 377–94. http://dx.doi.org/10.4236/am.2017.83031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

WANG, YANFEI, CHANGCHUN YANG, and JINGJIE CAO. "ON TIKHONOV REGULARIZATION AND COMPRESSIVE SENSING FOR SEISMIC SIGNAL PROCESSING." Mathematical Models and Methods in Applied Sciences 22, no. 02 (February 2012): 1150008. http://dx.doi.org/10.1142/s0218202511500084.

Full text
Abstract:
Using compressive sensing and sparse regularization, one can nearly completely reconstruct the input (sparse) signal using a limited number of observations. At the same time, reconstruction methods based on compressive sensing and optimization techniques overcome the sampling-rate requirement of the Shannon/Nyquist sampling theorem. It is well known that seismic reflection signals may be sparse, and sometimes the number of samples is insufficient for seismic surveys. So, the seismic signal reconstruction problem is ill-posed. Considering the ill-posed nature and the sparsity of seismic inverse problems, we study reconstruction of the wavefield and the reflection seismic signal by Tikhonov regularization and compressive sensing. The l0, l1 and l2 regularization models are studied. A relationship between Tikhonov regularization and compressive sensing is established. In particular, we introduce a general lp-lq (p, q ≥ 0) regularization model, which overcomes the limitation of assuming convexity of the objective function. Interior point methods and projected gradient methods are studied. To show the potential for application of the regularized compressive sensing method, we perform both synthetic seismic signal and field data compression and restoration simulations using a proposed piecewise random sub-sampling. Numerical performance indicates that regularized compressive sensing is applicable for practical seismic imaging.
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Bin, Li Wang, Hao Yu, and Fengming Xin. "A New Regularized Reconstruction Algorithm Based on Compressed Sensing for the Sparse Underdetermined Problem and Applications of One-Dimensional and Two-Dimensional Signal Recovery." Algorithms 12, no. 7 (June 26, 2019): 126. http://dx.doi.org/10.3390/a12070126.

Full text
Abstract:
The compressed sensing theory has been widely used in solving underdetermined equations in various fields and has made remarkable achievements. The regularized smooth L0 (ReSL0) reconstruction algorithm adds an error regularization term to the smooth L0 (SL0) algorithm, reconstructing the signal well in the presence of noise. However, the ReSL0 reconstruction algorithm still has some flaws. It retains the original optimization method of SL0 and the Gauss approximation function, but this method suffers from a sawtooth effect in the later optimization stage, and the convergence is not ideal. Therefore, we make two adjustments on the basis of the ReSL0 reconstruction algorithm: firstly, we introduce the CIPF function, which has a better approximation effect than the Gauss function; secondly, we combine the steepest descent method and Newton's method for the algorithm optimization. A novel regularized recovery algorithm named combined regularized smooth L0 (CReSL0) is then proposed. Under the same experimental conditions, the CReSL0 algorithm is compared with other popular reconstruction algorithms. Overall, the CReSL0 algorithm achieves excellent reconstruction performance in terms of peak signal-to-noise ratio (PSNR) and run-time for both one-dimensional Gauss signals and two-dimensional image reconstruction tasks.
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, Kaizhu, Danian Zheng, Irwin King, and Michael R. Lyu. "Arbitrary Norm Support Vector Machines." Neural Computation 21, no. 2 (February 2009): 560–82. http://dx.doi.org/10.1162/neco.2008.12-07-667.

Full text
Abstract:
Support vector machines (SVM) are state-of-the-art classifiers. Typically the L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing the explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% fewer on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties with a training speed over seven times faster.
APA, Harvard, Vancouver, ISO, and other styles
37

Feng, Yayuan, Yu Shi, and Dianjun Sun. "Blind Poissonian Image Deblurring Regularized by a Denoiser Constraint and Deep Image Prior." Mathematical Problems in Engineering 2020 (August 24, 2020): 1–15. http://dx.doi.org/10.1155/2020/9483521.

Full text
Abstract:
The denoising and deblurring of Poisson images are opposite inverse problems. Single image deblurring methods are sensitive to image noise. A single noise filter can effectively remove noise in advance, but it also damages blurred information. To simultaneously solve the denoising and deblurring of Poissonian images better, we learn the implicit deep image prior from a single degraded image and use the denoiser as a regularization term to constrain the latent clear image. Combined with the explicit L0 regularization prior of the image, the denoising and deblurring model of the Poisson image is established. Then, the split Bregman iteration strategy is used to optimize the point spread function estimation and latent clear image estimation. The experimental results demonstrate that the proposed method achieves good restoration results on a series of simulated and real blurred images with Poisson noise.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Chengxiang, Xiaoyan Wang, Kequan Zhao, Min Huang, Xianyun Li, and Wei Yu. "A cascading l0 regularization reconstruction method in nonsubsampled contourlet domain for limited-angle CT." Applied Mathematics and Computation 451 (August 2023): 128013. http://dx.doi.org/10.1016/j.amc.2023.128013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Heydari, Esmail, Mohammad Shams Esfand Abadi, and Seyed Mahmoud Khademiyan. "Improved multiband structured subband adaptive filter algorithm with L0-norm regularization for sparse system identification." Digital Signal Processing 122 (April 2022): 103348. http://dx.doi.org/10.1016/j.dsp.2021.103348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Xia Chengquan, Liang Jianjuan, Liu Hong, and Liu Benyong. "Blind Restoration of Blurred Images Combining Dual-Channel Contrast and L0-Regularized Intensity and Gradient Priors." Laser & Optoelectronics Progress 59, no. 8 (2022): 0811010. http://dx.doi.org/10.3788/lop202259.0811010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Tao, and Guibin Zhang. "Mineral Exploration Potential Estimation Using 3D Inversion: A Comparison of Three Different Norms." Remote Sensing 14, no. 11 (May 25, 2022): 2537. http://dx.doi.org/10.3390/rs14112537.

Full text
Abstract:
Gravity data have been frequently used in researching the subsurface to map the 3D geometry of the density structure, which is considered the basis for further interpretations, such as the estimation of exploration potential in mineral exploration. The gravity inversion, practically employed to map the density structure, can be achieved by different methods. The method based on Tikhonov regularization is the most commonly used among them. Usually, the subsurface is discretized into a set of cells or voxels. To recover a stable and reliable solution, constraints are introduced into the Tikhonov regularization. One constrained inversion introduces a quadratic penalty (L2 norm) into the inversion, which imposes smooth features on the recovered model. Another gravity inversion, known as sparse inversion, imposes compactness and sharp boundaries on the recovered density structure. Specifically, the L1 norm and L0 norm are favored for such a purpose. This work evaluates the merits of the gravity data inversion in cooperation with different model norms and their applicability in exploration potential estimation. Because these norms promote different features in the recovered models, the reconstructed 3D density structure reveals different geometric features of the ore deposit. We use two types of synthetic data for evaluating the performances of the inversion with different norms. Numerical results demonstrate that L0 norm-based inversion provides high-resolution recovered models and offers reliable estimates of exploration potential with minimal deviation from theoretical mass compared to inversions equipped with the other two norms. Finally, we use the gravity data collected over the iron ore deposit at the Dida mining area in Jilin province (Northeast China) for the application. It is estimated that the exploration potential of the iron ore deposits is about 3.2 million tons.
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Yan, and Yulu Zhao. "Efficient sparse estimation on interval-censored data with approximated L0 norm: Application to child mortality." PLOS ONE 16, no. 4 (April 9, 2021): e0249359. http://dx.doi.org/10.1371/journal.pone.0249359.

Full text
Abstract:
A novel penalty for the proportional hazards model under the interval-censored failure time data structure is discussed, with which the subject of variable selection is rarely studied. The penalty comes from an idea to approximate some information criterion, e.g., the BIC or AIC, and the core process is to smooth the ℓ0 norm. Compared with usual regularization methods, the proposed approach is free of heavily time-consuming hyperparameter tuning. The efficiency is further improved by fitting the model and selecting variables in one step. To achieve this, sieve likelihood is introduced, which simultaneously estimates the coefficients and baseline cumulative hazards function. Furthermore, it is shown that the three desired properties for penalties, i.e., continuity, sparsity, and unbiasedness, are all guaranteed. Numerical results show that the proposed sparse estimation method is of great accuracy and efficiency. Finally, the method is used on data of Nigerian children and the key factors that have effects on child mortality are found.
APA, Harvard, Vancouver, ISO, and other styles
43

Tao, Xiangxing, Mingxin Wang, and Yanting Ji. "The Application of Graph-Structured Cox Model in Financial Risk Early Warning of Companies." Sustainability 15, no. 14 (July 10, 2023): 10802. http://dx.doi.org/10.3390/su151410802.

Full text
Abstract:
An effective financial risk forecast depends on the selection of important indicators from a broad set of financial indicators that are often correlated with one another. In this paper, we address this challenge by proposing a Cox model with a graph structure that allows us to identify and filter out the crucial indicators for financial risk forecasting. The Cox model can be converted to a weighted least squares form for the purpose of solution, where the l0 regularization compresses the variable coefficients and reduces the error caused by the compression of the coefficients. The graph structure reflects the correlations among different financial indicators and is incorporated into the model by introducing a Laplace penalty term to construct the Graph Regularization–Cox (GR-Cox) model. Monte Carlo simulation results show that the GR-Cox model outperforms the model without a graph structure with respect to the choice of parameters. Here, we apply the GR-Cox model to the forecast of the financial risk of listed companies and find that it shows good classification accuracy in practical applications. The GR-Cox model provides a new approach for improving the accuracy of financial risk early warning.
APA, Harvard, Vancouver, ISO, and other styles
44

Gebre, Mesay Geletu, and Elias Lewi. "Gravity inversion method using L0-norm constraint with auto-adaptive regularization and combined stopping criteria." Solid Earth 14, no. 2 (February 3, 2023): 101–17. http://dx.doi.org/10.5194/se-14-101-2023.

Full text
Abstract:
We present a gravity inversion method that can produce compact and sharp images to assist the modeling of non-smooth geologic features. The proposed iterative inversion approach makes use of L0-norm-stabilizing functional, hard and physical parameter inequality constraints and a depth-weighting function. The method incorporates an auto-adaptive regularization technique, which automatically determines a suitable regularization parameter and error-weighting function that helps to improve both the stability and convergence of the method. The auto-adaptive regularization and error-weighting matrix are not dependent on the known noise level. Because of that, the method yields reasonable results even if the noise level of the data is not known properly. The utilization of an effectively combined stopping rule to terminate the inversion process is another improvement that is introduced in this work. The capacity and the efficiency of the new inversion method were tested by inverting randomly chosen synthetic and measured data. The synthetic test models consist of multiple causative blocky bodies, with different geometries and density distributions that are vertically and horizontally distributed adjacent to each other. Inversion results of the synthetic data show that the developed method can recover models that adequately match the real geometry, location and densities of the synthetic causative bodies. Furthermore, the testing of the improved approach using published real gravity data confirmed the potential and practicality of the method in producing compact and sharp inverse images of the subsurface.
APA, Harvard, Vancouver, ISO, and other styles
45

Shao, Wenze, Haisong Deng, and Zhuihui Wei. "Nonconvex Compressed Sampling of Natural Images and Applications to Compressed MR Imaging." ISRN Computational Mathematics 2012 (November 16, 2012): 1–12. http://dx.doi.org/10.5402/2012/982792.

Full text
Abstract:
Several compressed imaging reconstruction algorithms have been proposed for natural and MR images. In essence, however, most of them aim at the good reconstruction of edges in the images. In this paper, a nonconvex compressed sampling approach is proposed for structure-preserving image reconstruction, through imposing sparseness regularization on strong edges and also oscillating textures in images. The proposed approach can yield high-quality reconstruction as images are sampled at sampling ratios far below the Nyquist rate, due to the exploitation of a class of approximate l0 seminorms. Numerous experiments are performed on the natural images and MR images. Compared with several existing algorithms, the proposed approach is more efficient and robust, not only yielding higher signal-to-noise ratios but also reconstructing images of better visual effects.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Lingli, Li Zeng, and Yumeng Guo. "l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction." Journal of X-Ray Science and Technology 26, no. 3 (May 25, 2018): 481–98. http://dx.doi.org/10.3233/xst-17334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Su, Yixin, Rui Zhang, Sarah Erfani, and Zhenghua Xu. "Detecting Beneficial Feature Interactions for Recommender Systems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4357–65. http://dx.doi.org/10.1609/aaai.v35i5.16561.

Full text
Abstract:
Feature interactions are essential for achieving high accuracy in recommender systems. Many studies take into account the interaction between every pair of features. However, this is suboptimal because some feature interactions may not be that relevant to the recommendation result and taking them into account may introduce noise and decrease recommendation accuracy. To make the best out of feature interactions, we propose a graph neural network approach to effectively model them, together with a novel technique to automatically detect those feature interactions that are beneficial in terms of recommendation accuracy. The automatic feature interaction detection is achieved via edge prediction with an L0 activation regularization. Our proposed model is proved to be effective through the information bottleneck principle and statistical interaction theory. Experimental results show that our model (i) outperforms existing baselines in terms of accuracy, and (ii) automatically identifies beneficial feature interactions.
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Xiaohe, and Wankou Yang. "Illumination Removal via Gaussian Difference L0 Norm Model for Facial Expression Recognition." Mathematics 11, no. 12 (June 12, 2023): 2667. http://dx.doi.org/10.3390/math11122667.

Full text
Abstract:
Face images in the logarithmic space can be considered as a sum of a texture component and a lighting map component according to Lambert reflection. However, it is still not easy to separate these two parts, because face contour boundaries and lighting change boundaries are difficult to distinguish. In order to enhance the separation quality of these two parts, this paper proposes an illumination standardization algorithm based on extreme L0 Gaussian difference regularization constraints, assuming that illumination is massively spread all over the image but illumination change boundaries are simple, regular, and sparse enough. The proposed algorithm uses an iterative L0 Gaussian difference smoothing method, which achieves a more accurate lighting map estimation by reserving the fewest boundaries. Thus, the texture component of the original image can be restored better by simply subtracting the estimated lighting map. The experiments in this paper are organized in two steps: the first step is to observe the quality of the original texture restoration, and the second step is to test the effectiveness of our algorithm for complex face classification tasks; we choose facial expression classification in this step. The first-step experimental results show that our proposed algorithm can effectively recover face image details from extremely dark or light regions. In the second-step experiment, we use a CNN classifier to test the emotion classification accuracy, comparing the proposed illumination removal algorithm with state-of-the-art illumination removal algorithms as face image preprocessing methods. The experimental results show that our algorithm works best for facial expression classification, with accuracy about 5 to 7 percentage points higher than other algorithms.
Therefore, our algorithm is proven to provide effective lighting processing technical support for complex face classification problems that require a high degree of preservation of facial texture. The contributions of this paper are threefold: first, it proposes an enhanced TV model with an L0 boundary constraint for illumination estimation; second, the boundary response is formulated with the Gaussian difference, which responds strongly to illumination boundaries; third, it emphasizes the necessity of reserving details when preprocessing face images.
APA, Harvard, Vancouver, ISO, and other styles
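The Gaussian-difference boundary response used in the entry above can be illustrated with a small numpy sketch (my own minimal 1D example, not the paper's model): subtracting a coarsely smoothed copy of a signal from a finely smoothed copy responds strongly at a step boundary and is near zero in flat regions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Normalized 1D Gaussian kernel of the given standard deviation
    if radius is None:
        radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def dog_response(signal, sigma_fine=1.0, sigma_coarse=3.0):
    # Difference of Gaussians: fine smoothing minus coarse smoothing
    fine = np.convolve(signal, gaussian_kernel(sigma_fine), mode="same")
    coarse = np.convolve(signal, gaussian_kernel(sigma_coarse), mode="same")
    return fine - coarse

step = np.zeros(101)
step[50:] = 1.0                  # a single sharp boundary at index 50
resp = dog_response(step)        # |resp| peaks near the boundary
```

A sparsity penalty on such a response, as the paper's L0 constraint does, keeps only a few strong boundaries in the estimated lighting map.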
49

Li, Xuru, Xueqin Sun, Yanbo Zhang, Jinxiao Pan, and Ping Chen. "Tensor Dictionary Learning with an Enhanced Sparsity Constraint for Sparse-View Spectral CT Reconstruction." Photonics 9, no. 1 (January 8, 2022): 35. http://dx.doi.org/10.3390/photonics9010035.

Full text
Abstract:
Spectral computed tomography (CT) can divide collected photons into multi-energy channels and gain multi-channel projections synchronously by using photon-counting detectors. However, reconstructed images usually contain severe noise due to the limited number of photons in the corresponding energy channel. Tensor dictionary learning (TDL)-based methods have achieved better performance, but usually lose image edge information and details, especially from an under-sampled dataset. To address this problem, this paper proposes a method termed TDL with an enhanced sparsity constraint for spectral CT reconstruction. The proposed algorithm inherits the superiority of TDL by exploring the correlation of spectral CT images. Moreover, the method designs a regularization using the L0-norm of the image gradient to constrain images and the difference between images and a prior image in each energy channel simultaneously, further improving the ability to preserve edge information and subtle image details. The split-Bregman algorithm has been applied to address the proposed objective minimization model. Several numerical simulations and realistic preclinical mouse datasets are studied to assess the effectiveness of the proposed algorithm. The results demonstrate that the proposed method improves the quality of spectral CT images in terms of noise elimination, edge preservation, and image detail recovery compared to several existing better-performing methods.
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Xuru, Xueqin Sun, and Fuzhong Li. "Sparse-View Computed Tomography Reconstruction Based on a Novel Improved Prior Image Constrained Compressed Sensing Algorithm." Applied Sciences 13, no. 18 (September 14, 2023): 10320. http://dx.doi.org/10.3390/app131810320.

Full text
Abstract:
The problem of sparse-view computed tomography (SVCT) reconstruction has become a popular research issue because of its significant capacity for radiation dose reduction. However, the reconstructed images often contain serious artifacts and noise from under-sampled projection data. Despite the good results achieved by the prior image constrained compressed sensing (PICCS) method, there may be some unsatisfactory results in the reconstructed images because of the image gradient L1-norm used in the original PICCS model, which leads to step artifacts and over-smoothing of edges. To address this problem, this paper proposes a novel improved PICCS algorithm (NPICCS) for SVCT reconstruction. The proposed algorithm utilizes the advantages of PICCS, which can recover more details. Moreover, the algorithm introduces the L0-norm of the image gradient as regularization into the framework, which overcomes the disadvantage of conventional PICCS and enhances the capability to retain edges and fine image detail. The split Bregman method has been used to solve the proposed mathematical model. To verify the effectiveness of the proposed method, a large number of experiments with different angles are conducted. Final experimental results show that the proposed algorithm has advantages in edge preservation, noise suppression, and image detail recovery.
APA, Harvard, Vancouver, ISO, and other styles