
Journal articles on the topic 'Denoising Image'



Consult the top 50 journal articles for your research on the topic 'Denoising Image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Rubel, Andrii, Oleksii Rubel, Vladimir Lukin, and Karen Egiazarian. "Decision-making on image denoising expedience." Electronic Imaging 2021, no. 10 (January 18, 2021): 237–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.10.ipas-237.

Full text
Abstract:
Image denoising is a classical preprocessing stage used to enhance images. However, it is well known that there are many practical cases where different image denoising methods produce images with inappropriate visual quality, which makes the application of image denoising useless. Because of this, it is desirable to detect such cases in advance and decide how expedient image denoising (filtering) is. This problem is analyzed in this paper for the case of the well-known BM3D denoiser. We propose an algorithm for decision-making on image denoising expedience for images corrupted by additive white Gaussian noise (AWGN). An algorithm for predicting subjective visual quality scores of denoised images using a trained artificial neural network is proposed as well. It is shown that this prediction is fast and accurate.
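The expedience decision this abstract describes can be illustrated with a toy full-reference check: denoise, then compare PSNR before and after. This is only a sketch — the paper predicts subjective scores with a trained neural network and has no clean reference at test time, so the `min_gain_db` rule and the 3×3 mean filter here are stand-in assumptions.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def denoising_expedient(noisy, denoised, reference, min_gain_db=1.0):
    """Toy decision rule: denoising is deemed expedient only if it improves
    PSNR by at least min_gain_db (the paper instead predicts subjective scores)."""
    return psnr(reference, denoised) - psnr(reference, noisy) >= min_gain_db

# Demo: a flat image corrupted by AWGN, denoised with a 3x3 mean filter
# built from circular shifts (avoids any SciPy dependency).
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = clean + rng.normal(0.0, 15.0, clean.shape)
denoised = sum(np.roll(np.roll(noisy, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
print(denoising_expedient(noisy, denoised, clean))  # True on this flat image
```

On this flat test image, mean filtering raises PSNR by roughly 10 dB, so the rule reports that denoising is expedient; on a highly textured image the same filter could fail the check.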
APA, Harvard, Vancouver, ISO, and other styles
2

Tripathi, Vijay R. "Image Denoising." IOSR Journal of Engineering 1, no. 1 (November 2011): 84–87. http://dx.doi.org/10.9790/3021-0118487.

3

Xu, Shaoping, Xiaojun Chen, Yiling Tang, Shunliang Jiang, Xiaohui Cheng, and Nan Xiao. "Learning from Multiple Instances: A Two-Stage Unsupervised Image Denoising Framework Based on Deep Image Prior." Applied Sciences 12, no. 21 (October 24, 2022): 10767. http://dx.doi.org/10.3390/app122110767.

Abstract:
Supervised image denoising methods based on deep neural networks require a large amount of noisy-clean or noisy image pairs for network training. Thus, their performance drops drastically when the given noisy image is significantly different from the training data. Recently, several unsupervised learning models have been proposed to reduce the dependence on training data. Although unsupervised methods only require noisy images for learning, their denoising effect is relatively weak compared with supervised methods. This paper proposes a two-stage unsupervised deep learning framework based on deep image prior (DIP) to enhance the image denoising performance. First, a two-target DIP learning strategy is proposed to impose a learning restriction on the DIP optimization process. A cleaner preliminary image, together with the given noisy image, is used as the learning target of the two-target DIP learning process. We then demonstrate that adding an extra learning target with better image quality to the DIP learning process is capable of constraining the search space of the optimization process and improving the denoising performance. Furthermore, we observe that, given the same network input and the same learning target, the DIP optimization process cannot generate the same denoised images. This indicates that the denoised results are uncertain: although they are similar in image quality, they complement each other in local details. To utilize this uncertainty of the DIP, we employ a supervised denoising method to preprocess the given noisy image and propose an up- and down-sampling strategy to produce multiple sampled instances of the preprocessed image. These sampled instances are then fed into multiple two-target DIP learning processes to generate multiple denoised instances with different image details. Finally, we propose an unsupervised fusion network that fuses multiple denoised instances into one denoised image to further improve the denoising effect.
We evaluated the proposed method through extensive experiments, including grayscale image denoising, color image denoising, and real-world image denoising. The experimental results demonstrate that the proposed framework outperforms unsupervised methods in all cases; its denoising performance is close to or better than that of supervised denoising methods on synthetic noisy images and significantly better on real-world images. In summary, the proposed method is essentially a hybrid method that combines supervised and unsupervised learning to improve denoising performance. Adopting a supervised method to generate preprocessed denoised images exploits the external prior and helps constrain the search space of the DIP, whereas using an unsupervised method to produce intermediate denoised instances exploits the internal prior and provides adaptability to various noisy images of a real scene.
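The up- and down-sampling strategy for producing multiple instances can be read, in its simplest form, as sampling a preprocessed image at different phase offsets. The sketch below is one hypothetical reading under that assumption; the paper's actual sampling scheme and the DIP networks themselves are not reproduced here.

```python
import numpy as np

def sampled_instances(img):
    """Return four instances of img: 2x down-sample at each of the four
    phase offsets, then duplicate pixels back to the original size.
    A hypothetical reading of the paper's up-/down-sampling strategy."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
    x = img[:h, :w]
    outs = []
    for dy in (0, 1):
        for dx in (0, 1):
            small = x[dy::2, dx::2]                      # one phase offset
            outs.append(np.repeat(np.repeat(small, 2, axis=0), 2, axis=1))
    return outs
```

Each instance keeps the overall image content but differs in which pixels survive the down-sampling, giving the multiple DIP runs slightly different inputs whose denoised outputs can later be fused.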
4

Huang, Tingsheng, Chunyang Wang, and Xuelian Liu. "Depth Image Denoising Algorithm Based on Fractional Calculus." Electronics 11, no. 12 (June 19, 2022): 1910. http://dx.doi.org/10.3390/electronics11121910.

Abstract:
Depth images are often accompanied by unavoidable and unpredictable noise. Depth image denoising algorithms mainly attempt to fill hole data and optimise edges. In this paper, we study in detail the problem of effectively filtering the data of depth images under noise interference. The classical filtering algorithm tends to blur edge and texture information, whereas the fractional integral operator can retain more edge and texture information. In this paper, the Grünwald–Letnikov-type fractional integral denoising operator is introduced into the depth image denoising process, and the convolution template of this operator is studied and improved upon to build a fractional integral denoising model and algorithm for depth image denoising. Depth images from the Redwood dataset were used to add noise, and the mask constructed by the fractional integral denoising operator was used to denoise the images by convolution. The experimental results show that the fractional integration order with the best denoising effect was −0.4 ≤ ν ≤ −0.3 and that the peak signal-to-noise ratio was improved by +3 to +6 dB. Under the same environment, median filter denoising had −15 to −30 dB distortion. The filtered depth image was converted to a point cloud image, from which the denoising effect was subjectively evaluated. Overall, the results prove that the fractional integral denoising operator can effectively handle noise in depth images while preserving their edge and texture information and thus has an excellent denoising effect.
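The Grünwald–Letnikov construction behind the paper's template can be sketched from its standard coefficient recurrence; negative orders ν (as in the reported −0.4 ≤ ν ≤ −0.3 range) give fractional integration. The separable, sum-normalized 5×5 mask below is a simplifying assumption, not the paper's exact template.

```python
import numpy as np

def gl_coeffs(v, n):
    """First n Grünwald–Letnikov coefficients c_k = (-1)^k * C(v, k),
    computed with the recurrence c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k.
    A negative order v corresponds to fractional integration."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - v) / k
    return c

def gl_mask(v, size=5):
    """Hypothetical separable 2-D denoising template built from the 1-D
    coefficients and normalized to unit sum (a simplification)."""
    c = gl_coeffs(v, size)
    m = np.outer(c, c)
    return m / m.sum()

print(gl_coeffs(-0.35, 4))  # c_0..c_3 = 1.0, 0.35, 0.23625, 0.1850625
```

For negative orders all coefficients stay positive and decay smoothly, which is why such masks average (integrate) rather than sharpen, preserving edges better than a uniform box filter of the same size.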
5

Bertalmío, Marcelo, and Stacey Levine. "Denoising an Image by Denoising Its Curvature Image." SIAM Journal on Imaging Sciences 7, no. 1 (January 2014): 187–211. http://dx.doi.org/10.1137/120901246.

6

Khan, Aamir, Weidong Jin, Amir Haider, MuhibUr Rahman, and Desheng Wang. "Adversarial Gaussian Denoiser for Multiple-Level Image Denoising." Sensors 21, no. 9 (April 24, 2021): 2998. http://dx.doi.org/10.3390/s21092998.

Abstract:
Image denoising is a challenging task that is essential in numerous computer vision and image processing problems. This study proposes and applies a generative adversarial network-based image denoising training architecture to multiple-level Gaussian image denoising tasks. Convolutional neural network-based denoising approaches suffer from a blurriness issue that leaves denoised images blurry in their texture details. To resolve the blurriness issue, we first performed a theoretical study of the cause of the problem. Subsequently, we proposed an adversarial Gaussian denoiser network, which uses the generative adversarial network-based adversarial learning process for image denoising tasks. This framework resolves the blurriness problem by encouraging the denoiser network to find the distribution of sharp noise-free images instead of blurry images. Experimental results demonstrate that the proposed framework can effectively resolve the blurriness problem and achieves significantly better denoising efficiency than state-of-the-art denoising methods.
7

Gavini, Venkateswarlu, and Gurusamy Ramasamy Jothi Lakshmi. "CT Image Denoising Model Using Image Segmentation for Image Quality Enhancement for Liver Tumor Detection Using CNN." Traitement du Signal 39, no. 5 (November 30, 2022): 1807–14. http://dx.doi.org/10.18280/ts.390540.

Abstract:
Image denoising is an important concept in image processing for improving image quality. It is difficult to remove noise from images because of its various causes. Imaging noise is made up of many different types, including Gaussian, impulse, salt-and-pepper, and speckle noise. Increasing emphasis has been placed on Convolutional Neural Networks (CNNs) in image denoising, and image denoising has been researched using a variety of CNN approaches evaluated on various datasets. Liver tumor is a leading cause of cancer-related death worldwide; by using Computed Tomography (CT) to detect liver tumors early, millions of patients could be spared from death each year. Denoising an image means cleaning up an image that has been corrupted by unwanted noise. Because noise, edges, and texture are all high-frequency components, denoising can be tricky, and the resulting images may be missing some finer features. Applications where recovering the original image content is vital benefit greatly from image denoising, including image reconstruction, activity recognition, image restoration, segmentation, and image classification. Tumors of this type are difficult to detect and are almost always discovered at an advanced stage, posing a serious threat to the patient's life. As a result, finding a tumor at an early stage is critical, and tumors can be detected non-invasively using medical image processing. There is a pressing need for software that can automatically read, detect, and evaluate CT scans by removing noise from the images; any such system must therefore deal with the bottleneck of liver segmentation and extraction from CT scans. To segment and classify liver CT images after denoising, a deep CNN technique is proposed in this research.
An Image Quality Enhancement model with Image Denoising and Edge-based Segmentation (IQE-ID-EbS) is proposed in this research that effectively reduces noise levels in the image and then performs edge-based segmentation for feature extraction from the CT images. The proposed model is compared with traditional models, and the results show that it performs better.
8

Zhang, Xiangning, Yan Yang, and Lening Lin. "Edge-aware image denoising algorithm." Journal of Algorithms & Computational Technology 13 (October 30, 2018): 174830181880477. http://dx.doi.org/10.1177/1748301818804774.

Abstract:
The key to image denoising algorithms is to preserve the details of the original image while removing the noise in the image. Existing algorithms use external information to better preserve image details, but the use of external information requires the support of similar images or image patches. In this paper, an edge-aware image denoising algorithm is proposed to preserve the details of the original image while denoising, using only the characteristics of the noisy image itself. In general, image denoising algorithms use the noise prior to set the parameters used to denoise the noisy image. In this paper, it is found that the details of the original image can be better preserved by combining the prior information of the noise with the image edge features when setting the denoising parameters. The experimental results show that the proposed edge-aware image denoising algorithm can effectively improve the performance of block-matching and 3D filtering and patch-group-prior-based denoising algorithms and obtain higher peak signal-to-noise ratio and structural similarity values.
9

Manjón, José V., Neil A. Thacker, Juan J. Lull, Gracian Garcia-Martí, Luís Martí-Bonmatí, and Montserrat Robles. "Multicomponent MR Image Denoising." International Journal of Biomedical Imaging 2009 (2009): 1–10. http://dx.doi.org/10.1155/2009/756897.

Abstract:
Magnetic resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local Principal Component Analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed.
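The local PCA postprocessing idea — project multicomponent pixel vectors onto principal components and discard those at the noise level — can be sketched generically as follows; the threshold rule and data layout are assumptions, not the authors' exact filter.

```python
import numpy as np

def local_pca_denoise(vectors, noise_var):
    """Denoise a set of multicomponent pixel vectors (one per row) by a
    local PCA decomposition: keep only principal components whose variance
    exceeds the noise variance, then reconstruct. A generic sketch of the
    postprocessing idea, not the authors' exact filter."""
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    cov = centered.T @ centered / (len(vectors) - 1)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    keep = vecs[:, vals > noise_var]        # signal-bearing components only
    return centered @ keep @ keep.T + mean
```

Components whose variance is explained by noise alone are dropped, so noise energy orthogonal to the signal subspace is removed while the intercomponent structure survives.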
10

Badgainya, Shruti, Pankaj Sahu, and Vipul Awasthi. "Image Denoising by OWT for Gaussian Noise Corrupted Images." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 2477–84. http://dx.doi.org/10.31142/ijtsrd18337.

11

Rubel, Andrii, and Vladimir Lukin. "Analysis and Prediction of Filtering Efficiency Using No-Reference Measures of Image Visual Quality" [in Russian]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 1 (February 23, 2018): 4–14. http://dx.doi.org/10.32620/reks.2018.1.01.

Abstract:
Images are subject to noise during acquisition, transmission and processing. Image denoising is highly desirable, not only to provide better visual quality, but also to improve performance of subsequent operations such as compression, segmentation, classification, object detection and recognition. In the past decades, a large number of image denoising algorithms have been developed, ranging from simple linear methods to complex methods based on similar-block search and deep convolutional neural networks. However, most existing denoising techniques have a tendency to oversmooth image edges, fine details and textures. Thus, there are cases when noise reduction leads to loss of image features and filtering does not produce better visual quality. Accordingly, it is very important to evaluate the denoising result and hence to decide whether denoising is expedient. Despite the fact that image denoising has been one of the most active research areas, only a little work has been dedicated to visual quality evaluation for denoised images. There are many approaches and metrics to characterize image quality, but the adequacy of these metrics is questionable. Existing image quality metrics, especially no-reference ones, have not been thoroughly studied for image denoising. In terms of using visual quality metrics, it is usually supposed that the higher the improvement for a given metric, the better the visual quality of the denoised image. However, there are situations when denoising does not result in visual quality enhancement, especially for texture images. Thus, it would be desirable to predict human subjective evaluation for a denoised image; this information would clarify when denoising can be expedient. The purpose of this paper is to analyze denoising expedience using no-reference (NR) image quality metrics.
In addition, this work considers possible ways to predict human subjective evaluation of denoised images based on several input parameters. More specifically, two denoising techniques have been considered: the standard sliding-window DCT filter and the BM3D filter. Using a specialized database of test images, SubjectiveIQA, a performance evaluation of existing state-of-the-art objective no-reference quality metrics for denoised images is carried out.
12

Chen, Juan, Zhencai Zhu, Haiying Hu, Lin Qiu, Zhenzhen Zheng, and Lei Dong. "A Novel Adaptive Group Sparse Representation Model Based on Infrared Image Denoising for Remote Sensing Application." Applied Sciences 13, no. 9 (May 6, 2023): 5749. http://dx.doi.org/10.3390/app13095749.

Abstract:
Infrared (IR) image preprocessing is aimed at image denoising and enhancement to help with small target detection. According to sparse representation theory, the original IR image is low rank and its coefficients show a sparse character, so a low-rank and sparse model can distinguish between the original image and noise. IR images lack texture and details, and small targets in them are hard to recognize. Traditional denoising methods based on nuclear norm minimization (NNM) treat all eigenvalues equally, which blurs concrete details, so they are unable to achieve a good denoising performance. Deep learning methods require a large number of training images, which are difficult to obtain for IR image denoising, and they struggle to perform well under high noise. Tracking and detection would not be possible without a proper denoising method. This article fuses weighted nuclear norm minimization (WNNM) with an adaptive similar-patch search based on group sparse representation for infrared images. We adaptively selected similar structural blocks based on certain computational criteria and used K-nearest neighbor (KNN) clustering to constitute more similar groups, which helps recover complex backgrounds under high Gaussian noise. Then, we shrank all eigenvalues with different weights in the WNNM model to solve the optimization problem. Our method can recover more detailed information in the images. The algorithm not only obtains good denoising results on common images but also performs well on infrared images, so that the target attains a high signal relative to the clutter in IR detection systems for remote sensing. On common data sets and real infrared images, the method achieves good noise suppression with a high peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), even under higher noise and much more complex backgrounds.
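The core WNNM step the abstract refers to is a singular-value shrinkage in which larger values (structure) are penalized less than smaller ones (noise). The sketch below uses a simplified weight rule, reduced from the original WNNM estimator, and is not the authors' code.

```python
import numpy as np

def wnnm_shrink(Y, noise_sigma, C=2.0, eps=1e-8):
    """Weighted singular-value shrinkage of a matrix of grouped similar
    patches (one patch per column). The weight w_i = C*sqrt(n)*sigma^2/(s_i+eps)
    is a simplification of the WNNM rule: large singular values (structure)
    receive small weights and are shrunk less than small ones (noise)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n = Y.shape[1]                              # number of grouped patches
    w = C * np.sqrt(n) * noise_sigma ** 2 / (s + eps)
    s_hat = np.maximum(s - w, 0.0)              # weighted soft threshold
    return (U * s_hat) @ Vt
```

Because the weights are inversely proportional to the singular values, near-zero (noise) components are annihilated while the dominant low-rank structure of the patch group passes through almost unchanged.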
13

Kaur, Roopdeep, Gour Karmakar, and Muhammad Imran. "Impact of Traditional and Embedded Image Denoising on CNN-Based Deep Learning." Applied Sciences 13, no. 20 (October 22, 2023): 11560. http://dx.doi.org/10.3390/app132011560.

Abstract:
In digital image processing, filtering noise is an important step in reconstructing a high-quality image for further processing such as object segmentation, object detection, and object recognition. Various image-denoising approaches, including median, Gaussian, and bilateral filters, are available in the literature. Since convolutional neural networks (CNNs) are able to directly learn complex patterns and features from data, they have become a popular choice for image-denoising tasks; their ability to learn and adapt to various denoising scenarios makes them powerful tools for image denoising. Some deep learning techniques incorporate denoising strategies directly into the CNN model layers. A primary limitation of these methods is their need to resize images to a consistent size. This resizing can result in a loss of vital image details, which might compromise the CNN's effectiveness. Because of this issue, we utilize a traditional denoising method as a preliminary noise reduction step before applying the CNN. To our knowledge, a comparative performance study of CNNs using traditional and embedded denoising against a baseline approach (without denoising) is yet to be performed. To analyze the impact of denoising on CNN performance, in this paper, we first filter the noise from the images using a traditional denoising method before their use in the CNN model. Second, we embed a denoising layer in the CNN model. To validate the performance of image denoising, we performed extensive experiments on both traffic sign and object recognition datasets. To decide whether denoising should be adopted, and to decide on the type of filter to be used, we also present an approach exploiting the peak signal-to-noise ratio (PSNR) distribution of images. Both CNN accuracy and the PSNR distribution are used to evaluate the effectiveness of the denoising approaches.
As expected, the results vary with the type of filter, impact, and dataset used in both traditional and embedded denoising approaches. However, traditional denoising shows better accuracy, while embedded denoising shows lower computational time for most of the cases. Overall, this comparative study gives insights into whether denoising will be adopted in various CNN-based image analyses, including autonomous driving, animal detection, and facial recognition.
14

Tan, Hanlin, Huaxin Xiao, Yu Liu, and Maojun Zhang. "Two-Stage CNN Model for Joint Demosaicing and Denoising of Burst Bayer Images." Computational Intelligence and Neuroscience 2022 (April 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/6200931.

Abstract:
In the classical image processing pipeline, demosaicing and denoising are separate steps that may interfere with each other. Joint demosaicing and denoising utilizes shared image prior information to guide the image recovery process and is expected to perform better through the joint optimization of the two problems. In addition, learning to recover images from bursts (continuous-exposure images) can further improve image details. This article proposes a two-stage convolutional neural network model for joint demosaicing and denoising of burst Bayer images. The proposed CNN model consists of a single-frame joint demosaicing and denoising module, a multiframe denoising module, and an optional noise estimation module. It requires a two-stage training scheme to ensure that the model converges to a good solution. Experiments on multiframe Bayer images with simulated Gaussian noise show that the proposed method has obvious performance and speed advantages compared with similar approaches. Experiments on actual multiframe Bayer images verify the denoising effect and detail-retention ability of the proposed method.
15

Goyal, Bhawna, Ayush Dogra, Sunil Agrawal, and B. S. Sohi. "A Survey on the Image Denoising to Enhance Medical Images." Biosciences, Biotechnology Research Asia 15, no. 3 (September 3, 2018): 501–7. http://dx.doi.org/10.13005/bbra/2655.

Abstract:
During the acquisition and transmission of images, all recording devices have physical limitations and traits which make them prone to noise. Noise manifests itself in the form of signal perturbation, leading to degraded image observation, analysis and assessment. Image denoising is fundamental to the world of image processing; thus any progress made in image denoising forms a stepping stone in our understanding of image processing and statistics. The basic principle of denoising an image is to suppress the noisy pixels while preserving as many information pixels as possible. The manuscript provides readers a foundation for image denoising.
16

Zhang, Wen-Li, Jing-Yue Zheng, Kun Liang, Ke-Fan Chen, Jian-Hai Zhao, Jian-Qiang Liu, Yi Wang, and Yu-Xin Qin. "Research on Block Matching Three-Dimensional Cooperative Filtering Optical Image Denoising Algorithm Based on Noise Estimation." Journal of Nanoelectronics and Optoelectronics 16, no. 11 (November 1, 2021): 1711–19. http://dx.doi.org/10.1166/jno.2021.3132.

Abstract:
Optical image denoising is one of the important means to improve the overall clarity of an image or highlight the texture details of a target. However, studies have found that existing optical image denoising algorithms work well on images with known noise levels but have only average denoising effects on random noise. Therefore, a novel BM3D optical image denoising algorithm based on noise estimation is proposed. First, the wavelet packet transform is used for noise estimation in the optical image. Then, according to the noise estimation result, BM3D is used to denoise the optical image. Finally, PSNR is used to evaluate the effectiveness of the algorithm. Experiments show that the proposed algorithm not only achieves effective denoising of optical images but also has strong robustness.
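A common wavelet-domain noise estimator of the kind the abstract describes is Donoho's median-absolute-deviation rule on the finest diagonal subband. The one-level Haar version below is a stand-in assumption for the paper's wavelet-packet estimator.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Robust AWGN level estimate: sigma ~ MAD(HH)/0.6745, where HH is the
    finest diagonal subband of a one-level orthonormal Haar transform.
    A common stand-in for the paper's wavelet-packet estimator."""
    x = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(np.float64)
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    hh = (a - b - c + d) / 2.0                 # Haar HH coefficients
    return float(np.median(np.abs(hh)) / 0.6745)

rng = np.random.default_rng(1)
noisy = 100.0 + rng.normal(0.0, 10.0, (256, 256))
print(round(estimate_noise_sigma(noisy)))  # 10, the true sigma
```

The HH subband cancels the slowly varying image content and keeps mostly noise, and the median makes the estimate robust to the few large coefficients produced by genuine edges; the estimated sigma can then parameterize BM3D.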
17

Qi, Guanqiu, Gang Hu, Neal Mazur, Huahua Liang, and Matthew Haner. "A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation." Computers 10, no. 10 (October 13, 2021): 129. http://dx.doi.org/10.3390/computers10100129.

Abstract:
Multi-modality image fusion applied to improve image quality has drawn great attention from researchers in recent years. However, noise is inevitably generated in images captured by different types of imaging sensors, which can seriously affect the performance of multi-modality image fusion. In the fundamental approach to noisy image fusion, source images are denoised first, and then the denoised images are fused. However, image denoising can decrease the sharpness of source images and thus affect the fusion performance. Additionally, denoising and fusion are processed in separate modes, which increases computation cost. To fuse noisy multi-modality image pairs accurately and efficiently, a multi-modality image simultaneous fusion and denoising method is proposed. In the proposed method, noisy source images are decomposed into cartoon and texture components. Cartoon-texture decomposition not only decomposes source images into detail and structure components for different image fusion schemes, but also isolates image noise in the texture components. A Gaussian scale mixture (GSM) based sparse representation model is presented for the denoising and fusion of texture components. A spatial-domain fusion rule is applied to cartoon components. The comparative experimental results confirm the proposed simultaneous image denoising and fusion method is superior to the state-of-the-art methods in terms of visual and quantitative evaluations.
18

Ismael, Ahmed Abdulmaged, and Muhammet Baykara. "Digital Image Denoising Techniques Based on Multi-Resolution Wavelet Domain with Spatial Filters: A Review." Traitement du Signal 38, no. 3 (June 30, 2021): 639–51. http://dx.doi.org/10.18280/ts.380311.

Abstract:
Recently, with the explosion in the number of digital images captured every day in all aspects of life, there is a growing demand for more detailed and visually attractive images. However, the images taken by current sensors are inevitably degraded by noise in various fields, such as medicine, astrophysics, weather forecasting, etc., which impairs visual image quality. Therefore, work is needed to reduce noise while preserving the textural, informational, and structural features of the image. So far, various researchers have developed different techniques for reducing noise, each with its advantages and disadvantages. In this paper, a review of some significant work in the field of image denoising, organized by denoising method, is presented. These methods can be classified as wavelet-domain methods, spatial-domain methods, or combinations of the two that obtain the advantages of both. After a brief discussion, the classification of image denoising techniques is explained. A comparative analysis of various image denoising methods is also performed to help researchers in the image denoising area. In addition, standard measurement parameters have been used to compute the results and to evaluate the performance of the denoising techniques reviewed. This review paper aims to provide functional knowledge of image denoising methods in a nutshell for applications using images, making it easy to select the ideal strategy according to necessity.
19

He, Shuang Shuang, Yuan Yuan Jiang, and Jin Yan Zheng. "A Novel Image Denoising Method in 2-D Fractional Time-Frequency Domain." Applied Mechanics and Materials 734 (February 2015): 586–89. http://dx.doi.org/10.4028/www.scientific.net/amm.734.586.

Abstract:
To improve image quality and support the higher-level follow-up image processing that is needed, it is of great importance to perform image denoising first. A new image denoising method in the two-dimensional (2-D) fractional time-frequency domain is proposed in this paper. Through the realization of a 2-D fractional wavelet transform algorithm, 2-D fractional wavelet transform theory is applied to image denoising and compared with an image denoising method based on the 2-D wavelet transform. Extensive image denoising simulations show that the proposed method can effectively improve the peak signal-to-noise ratio of output images while preserving detail information and reducing noise at the same time. This proves that the 2-D fractional wavelet transform provides a new and effective time-frequency-domain image denoising method.
20

Zou, XiuFang, Dingju Zhu, Jun Huang, Wei Lu, Xinchu Yao, and Zhaotong Lian. "WGAN-Based Image Denoising Algorithm." Journal of Global Information Management 30, no. 9 (January 2022): 1–20. http://dx.doi.org/10.4018/jgim.300821.

Abstract:
Traditional image denoising algorithms are generally based on spatial domains or transform domains to denoise and smooth the image, but the resulting denoising is often incomplete, whereas deep learning algorithms have a better denoising effect and perform well at retaining original image texture details such as edge characteristics. To enhance the denoising capability for images through the restoration of texture details and noise reduction, this article proposes a network model based on the Wasserstein GAN. In the generator, small convolution kernels are used to extract features from the noisy image; the extracted image features are denoised, fused, and reconstructed into denoised images. A new residual network is proposed to improve the noise removal effect. For the adversarial training, different loss functions are proposed in this paper.
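The Wasserstein objective underlying such a model can be written down compactly. The scalar loss functions below sketch the standard WGAN critic and generator objectives on precomputed critic scores; they are not the paper's specific networks or its proposed losses.

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """Wasserstein critic objective to minimize: E[D(fake)] - E[D(real)]."""
    return float(np.mean(d_fake) - np.mean(d_real))

def generator_loss(d_fake):
    """Generator objective: raise the critic's scores on denoised images."""
    return float(-np.mean(d_fake))

# The critic is driven to score real (clean) images above generated
# (denoised) ones; its loss estimates the negative Wasserstein distance.
print(critic_loss(np.array([1.0, 1.0]), np.array([0.0, 0.0])))  # -1.0
```

In a full WGAN the critic must additionally be kept 1-Lipschitz (by weight clipping or a gradient penalty) for these expectations to approximate the Wasserstein distance.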
21

A. S. C. S. Sastry, P. V. V. Kishore, Ch Raghava Prasad, and M. V. D. Prasad. "Denoising Ultrasound Medical Images." International Journal of Measurement Technologies and Instrumentation Engineering 5, no. 1 (January 2015): 1–14. http://dx.doi.org/10.4018/ijmtie.2015010101.

Full text
Abstract:
Medical ultrasound imaging has revolutionized the diagnostics of the human body over the last few decades. The major drawback of medical ultrasound images is speckle noise, which is caused by multiple reflections of ultrasound waves from hard tissues and degrades the images, lessening their visual quality. The aim of this paper is to improve the quality of medical ultrasound images by applying block-based hard and soft thresholding to wavelet coefficients. The transformation of the ultrasound image to the wavelet domain uses the Daubechies mother wavelet. The approximation and detail coefficients are divided into uniform blocks of size 8×8, 16×16, 32×32, and 64×64, and hard and soft thresholding on these blocks of coefficients reduces speckle noise. Inverse transformation back to the original spatial domain produces a noise-reduced ultrasound image. Experiments on medical ultrasound images obtained from diagnostic centers in Vijayawada, India show good visual improvements. The quality of the improved images is measured using the peak signal-to-noise ratio (PSNR), image quality index (IQI), and structural similarity index (SSIM).
APA, Harvard, Vancouver, ISO, and other styles
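Hard and soft thresholding of wavelet coefficients, applied block by block as the abstract above describes, reduce to a few NumPy operations. The per-block universal threshold used below (noise level from the median absolute deviation) is one common choice and an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def hard_threshold(c, t):
    # Keep coefficients whose magnitude exceeds t; zero out the rest.
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    # Shrink every coefficient toward zero by t; small ones become zero.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def block_threshold(detail, block=8, mode="soft"):
    """Threshold a detail subband in uniform blocks, estimating the noise
    level per block via the median absolute deviation (universal threshold)."""
    fn = soft_threshold if mode == "soft" else hard_threshold
    out = detail.astype(float).copy()
    for i in range(0, detail.shape[0], block):
        for j in range(0, detail.shape[1], block):
            b = out[i:i+block, j:j+block]
            sigma = np.median(np.abs(b)) / 0.6745
            t = sigma * np.sqrt(2 * np.log(b.size))
            out[i:i+block, j:j+block] = fn(b, t)
    return out
```

Neither rule ever increases a coefficient's magnitude, which is why speckle energy in the detail subbands can only shrink.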
22

Liu, Yang, Saeed Anwar, Zhenyue Qin, Pan Ji, Sabrina Caldwell, and Tom Gedeon. "Disentangling Noise from Images: A Flow-Based Image Denoising Neural Network." Sensors 22, no. 24 (December 14, 2022): 9844. http://dx.doi.org/10.3390/s22249844.

Full text
Abstract:
The prevalent convolutional neural network (CNN)-based image denoising methods extract features of images to restore the clean ground truth, achieving high denoising accuracy. However, these methods may ignore the underlying distribution of clean images, inducing distortions or artifacts in denoising results. This paper proposes a new perspective to treat image denoising as a distribution learning and disentangling task. Since the noisy image distribution can be viewed as a joint distribution of clean images and noise, the denoised images can be obtained via manipulating the latent representations to the clean counterpart. This paper also provides a distribution-learning-based denoising framework. Following this framework, we present an invertible denoising network, FDN, without any assumptions on either clean or noise distributions, as well as a distribution disentanglement method. FDN learns the distribution of noisy images, which is different from the previous CNN-based discriminative mapping. Experimental results demonstrate FDN’s capacity to remove synthetic additive white Gaussian noise (AWGN) on both category-specific and remote sensing images. Furthermore, the performance of FDN surpasses that of previously published methods in real image denoising with fewer parameters and faster speed.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhou, Ying, Chao Ren, Shengguo Zhang, Xiaoqin Xue, Yuanyuan Liu, Jiakai Lu, and Cong Ding. "A Second-Order Method for Removing Mixed Noise from Remote Sensing Images." Sensors 23, no. 17 (August 30, 2023): 7543. http://dx.doi.org/10.3390/s23177543.

Full text
Abstract:
Remote sensing image denoising is of great significance for the subsequent use and study of images. Gaussian noise and salt-and-pepper noise are prevalent in images, and contemporary denoising algorithms often exhibit limitations when addressing such mixed noise, producing suboptimal denoising outcomes and potentially blurring image edges. To address these problems, a second-order method for removing mixed noise from remote sensing images was proposed. In the first stage of the method, dilated convolution was introduced into the DnCNN (denoising convolutional neural network) framework to increase the receptive field of the network, so that more feature information could be extracted from remote sensing images. Meanwhile, a dropout layer was introduced after the deep convolution layers to prevent the network from overfitting and to ease training, and the resulting model was used to perform preliminary noise reduction on the images. To further improve the quality of the preliminary denoising results, effectively remove the salt-and-pepper component of the mixed noise, and preserve more edge details and texture features, the proposed method employed a second stage built on adaptive median filtering: the median value of the original filter window was replaced by a nearest-neighbor pixel-weighted median, so that the preliminary result was subjected to secondary processing, yielding the final denoising result for the mixed noise of the remote sensing image. To verify the feasibility and effectiveness of the algorithm, remote sensing image denoising experiments and denoised-image edge detection experiments were carried out in this paper.
When the experimental results are analyzed through subjective visual assessment, images denoised using the proposed method exhibit clearer and more natural details, and they effectively retain edge and texture features. In terms of objective evaluation, the performance of different denoising algorithms is compared using metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and mean structural similarity index (MSSIM). The experimental outcomes indicate that the proposed method for denoising mixed noise in remote sensing images outperforms traditional denoising techniques, achieving a clearer image restoration effect.
APA, Harvard, Vancouver, ISO, and other styles
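The second stage described above builds on the classic adaptive median filter. A plain version of that baseline, without the paper's nearest-neighbor weighted-median refinement, looks roughly like this:

```python
import numpy as np

def adaptive_median(img, smax=7):
    """Adaptive median filter: grow the window until its median is not an
    impulse, then replace the pixel only if the pixel itself is an impulse."""
    pad = smax // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            s = 3
            while s <= smax:
                r = s // 2
                win = p[y+pad-r:y+pad+r+1, x+pad-r:x+pad+r+1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:               # median is noise-free
                    if not (zmin < out[y, x] < zmax):
                        out[y, x] = zmed             # pixel is an impulse
                    break
                s += 2                               # median suspect: grow window
            else:
                out[y, x] = zmed                     # fall back to the last median
    return out
```

On isolated salt-and-pepper impulses this removes the outliers while leaving uncorrupted pixels untouched, which is why it pairs well with a CNN stage that handles the Gaussian component.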
24

Jebur, Rusul Sabah, Mohd Hazli Bin Mohamed Zabil, Dalal Abdulmohsin Hammood, Lim Kok Cheng, and Ali Al-Naji. "Image Denoising Using Hybrid Deep Learning Approach and Self-Improved Orca Predation Algorithm." Technologies 11, no. 4 (August 12, 2023): 111. http://dx.doi.org/10.3390/technologies11040111.

Full text
Abstract:
Image denoising is a critical task in computer vision aimed at removing unwanted noise from images, which can degrade image quality and affect visual details. This study proposes a novel approach that combines deep hybrid learning with the Self-Improved Orca Predation Algorithm (SI-OPA) for image denoising. Leveraging Bidirectional Long Short-Term Memory (Bi-LSTM) and optimized Convolutional Neural Networks (CNN), the hybrid model aims to enhance denoising performance. The CNN’s weights are optimized using SI-OPA, resulting in improved denoising accuracy. Extensive comparisons against state-of-the-art denoising methods, including traditional algorithms and deep learning-based techniques, are conducted, focusing on denoising effectiveness, computational efficiency, and preservation of image details. The proposed approach demonstrates superior performance in all aspects, highlighting its potential as a promising solution for image-denoising tasks. Implemented in Python, the hybrid model showcases the benefits of combining Bi-LSTM, optimized CNN, and SI-OPA for advanced image-denoising applications.
APA, Harvard, Vancouver, ISO, and other styles
25

Hua, Gang, and Daihong Jiang. "A New Method of Image Denoising for Underground Coal Mine Based on the Visual Characteristics." Journal of Applied Mathematics 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/362716.

Full text
Abstract:
Affected by the special underground circumstances of coal mines, most images captured in a mine have low clarity, and a large amount of noise is mingled with them, which brings many difficulties to further processing of downhole images. Traditional image denoising methods easily lead to blurred images, and their denoising effect is not very satisfactory. Aimed at images characterized by low illumination and a large amount of noise, and based on the color detail blindness and simultaneous contrast characteristics of human visual perception, this paper proposes a new method for image denoising based on visual characteristics. The method uses the CIELab uniform color space to dynamically and adaptively decide the filter weights, thereby reducing the damage to image contour edges and other details, so that the denoised image has higher clarity. Experimental results show that this method has an excellent denoising effect and can significantly improve the subjective and objective picture quality of downhole images.
APA, Harvard, Vancouver, ISO, and other styles
26

Tan, Yi, Jin Fan, Dong Sun, Qingwei Gao, and Yixiang Lu. "Multi-scale Image Denoising via a Regularization Method." Journal of Physics: Conference Series 2253, no. 1 (April 1, 2022): 012030. http://dx.doi.org/10.1088/1742-6596/2253/1/012030.

Full text
Abstract:
Image restoration is a widely studied problem in the field of image processing. Although existing image restoration methods based on denoising regularization have shown relatively good performance, restoration methods targeting the different features of unknown images have not been proposed. Since images have different features, it seems necessary to adopt different prior regularization terms for different features. In this paper, we propose a multiscale image regularization denoising framework that can simultaneously apply two or more denoising prior regularization terms to better obtain the overall image restoration result. We use the alternating direction method of multipliers (ADMM) to optimize the model and combine multiple denoising algorithms in extensive image deblurring and image super-resolution experiments; our algorithm shows better performance than existing state-of-the-art image restoration methods.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhou, Shiwei, Yu-Hen Hu, and Hongrui Jiang. "Multi-View Image Denoising Using Convolutional Neural Network." Sensors 19, no. 11 (June 7, 2019): 2597. http://dx.doi.org/10.3390/s19112597.

Full text
Abstract:
In this paper, we propose a novel multi-view image denoising algorithm based on convolutional neural network (MVCNN). Multi-view images are arranged into 3D focus image stacks (3DFIS) according to different disparities. The MVCNN is trained to process each 3DFIS and generate a denoised image stack that contains the recovered image information for regions of particular disparities. The denoised image stacks are then fused together to produce a denoised target view image using the estimated disparity map. Different from conventional multi-view denoising approaches that group similar patches first and then perform denoising on those patches, our CNN-based algorithm saves the effort of exhaustive patch searching and greatly reduces the computational time. In the proposed MVCNN, residual learning and batch normalization strategies are also used to enhance the denoising performance and accelerate the training process. Compared with the state-of-the-art single image and multi-view denoising algorithms, experiments show that the proposed CNN-based algorithm is a highly effective and efficient method in Gaussian denoising of multi-view images.
APA, Harvard, Vancouver, ISO, and other styles
28

Oyebode, Kazeem Oyeyemi. "Investigating a Denoising Approach to an Improved Otsu Segmentation on Cell Images." Journal of Biomimetics, Biomaterials and Biomedical Engineering 33 (July 2017): 59–64. http://dx.doi.org/10.4028/www.scientific.net/jbbbe.33.59.

Full text
Abstract:
Image denoising provides an opportunity to minimize unwanted signal in an image in order to improve its interpretation by either human or machine. In the medical context, image denoising is a critical element of image processing, as it helps to improve the quality of data presented for manual or automatic diagnosis. While there exist a number of image denoising methods, such as median, diffusion, and Gaussian filtering, selecting a suitable one for cell segmentation can be challenging: the adopted denoising method must preserve critical object structures, such as boundaries, while at the same time minimizing noise. In this paper, we discuss two popular denoising methods (diffusion filtering and Gaussian filtering) and investigate their significance in improving the accuracy of segmented cell images, both individually and in combination. Experiments carried out on public and private datasets of cell images indicate improved segmentation accuracy when cell images are first denoised with the combination of diffusion and Gaussian filtering compared with the individual denoising methods.
APA, Harvard, Vancouver, ISO, and other styles
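The two filters being combined above can be sketched in a few lines: a separable Gaussian blur followed by Perona-Malik anisotropic diffusion in its standard formulation. All parameter values here are illustrative assumptions, not the paper's settings, and the diffusion uses periodic boundaries for brevity.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian convolution with edge-replicated padding."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(img.astype(float), r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, p)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, tmp)

def perona_malik(img, niter=10, kappa=20.0, lam=0.2):
    """Perona-Malik diffusion: smooth within regions, stall at strong edges."""
    u = img.astype(float).copy()
    for _ in range(niter):
        n = np.roll(u, -1, 0) - u   # differences to the four neighbours
        s = np.roll(u, 1, 0) - u    # (np.roll wraps: periodic boundary)
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u += lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u
```

Applying `perona_malik(gaussian_blur(img))` mirrors the combined scheme: the Gaussian pass suppresses pixel-level noise, and the diffusion pass smooths further without crossing strong gradients.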
29

Zhang, Haoming, Yue Qi, Xiaoting Xue, and Yahui Nan. "Ancient Stone Inscription Image Denoising and Inpainting Methods Based on Deep Neural Networks." Discrete Dynamics in Nature and Society 2021 (December 20, 2021): 1–11. http://dx.doi.org/10.1155/2021/7675611.

Full text
Abstract:
Chinese ancient stone inscriptions contain traditional Chinese calligraphy culture and art information. However, due to the long history of the ancient stone inscriptions, natural erosion, and poor early protection measures, there is a lot of noise in the existing stone inscriptions, which has adverse effects on reading them and on their aesthetic appreciation. At present, digital technologies play important roles in the protection of cultural relics. For ancient stone inscriptions, we should obtain more complete digital results free of multiple types of noise, yet there are few deep learning methods designed for processing stone inscription images. Therefore, we propose a basic framework for image denoising and inpainting of stone inscriptions based on deep learning methods. Firstly, we collect as many images of stone inscriptions as possible and preprocess them to establish an inscription image dataset for denoising and inpainting. In addition, an improved GAN with a denoiser is used to generate more virtual stone inscription images to expand the dataset. On the basis of these collected and generated images, we designed a stone inscription image denoising model based on multiscale feature fusion and introduced the Charbonnier loss function to improve this model. To further improve the denoising results, an image inpainting model with a coherent semantic attention mechanism is introduced to recover, as much as possible, effective information removed by the denoising model. The experimental results show that our image denoising model achieves better results on PSNR, SSIM, and CEI, and the final results show obvious visual improvement compared with the original stone inscription images.
APA, Harvard, Vancouver, ISO, and other styles
30

Hartbauer, Manfred. "A Simple Denoising Algorithm for Real-World Noisy Camera Images." Journal of Imaging 9, no. 9 (September 18, 2023): 185. http://dx.doi.org/10.3390/jimaging9090185.

Full text
Abstract:
The noise statistics of real-world camera images are challenging for any denoising algorithm. Here, I describe a modified version of a bionic algorithm that improves the quality of real-world noisy camera images from a publicly available image dataset. In the first step, an adaptive local averaging filter was executed for each pixel to remove moderate sensor noise while preserving fine image details and object contours. In the second step, image sharpness was enhanced by means of an unsharp mask filter to generate output images that are close to ground-truth images (multiple averages of static camera images). The performance of this denoising algorithm was compared with five popular denoising methods: bm3d, wavelet, non-local means (NL-means), total variation (TV) denoising and bilateral filter. Results show that the two-step filter had a performance that was similar to NL-means and TV filtering. Bm3d had the best denoising performance but sometimes led to blurry images. This novel two-step filter only depends on a single parameter that can be obtained from global image statistics. To reduce computation time, denoising was restricted to the Y channel of YUV-transformed images and four image segments were simultaneously processed in parallel on a multi-core processor.
APA, Harvard, Vancouver, ISO, and other styles
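The second step of the two-step filter above is a standard unsharp mask. A minimal version using a box-filter blur is sketched below; the choice of blur and the `amount` and radius values are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def box_blur(img, r=1):
    """Local mean over a (2r+1) x (2r+1) window, edge-replicated padding."""
    p = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r+dy:r+dy+img.shape[0], r+dx:r+dx+img.shape[1]]
    return out / (2 * r + 1) ** 2

def unsharp_mask(img, amount=0.7, r=1):
    """Re-sharpen a smoothed image by adding back a scaled
    high-pass residual (image minus its local mean)."""
    img = img.astype(float)
    return img + amount * (img - box_blur(img, r))
```

Flat regions pass through unchanged (the residual is zero there), while edges are steepened, which is exactly the property that lets the filter recover sharpness lost to the first-step averaging.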
31

Priyanka, Steffi, and Yuan-Kai Wang. "Fully Symmetric Convolutional Network for Effective Image Denoising." Applied Sciences 9, no. 4 (February 22, 2019): 778. http://dx.doi.org/10.3390/app9040778.

Full text
Abstract:
Neural-network-based image denoising is one of the promising approaches to deal with problems in image processing. In this work, a deep fully symmetric convolutional–deconvolutional neural network (FSCN) is proposed for image denoising. The proposed model comprises a novel architecture with a chain of successive symmetric convolutional–deconvolutional layers. This framework learns convolutional–deconvolutional mappings from corrupted images to the clean ones in an end-to-end fashion without using image priors. The convolutional layers act as feature extractor to encode primary components of the image contents while eliminating corruptions, and the deconvolutional layers then decode the image abstractions to recover the image content details. An adaptive moment optimizer is used to minimize the reconstruction loss as it is appropriate for large data and noisy images. Extensive experiments were conducted for image denoising to evaluate the FSCN model against the existing state-of-the-art denoising algorithms. The results show that the proposed model achieves superior denoising, both qualitatively and quantitatively. This work also presents the efficient implementation of the FSCN model by using GPU computing which makes it easy and attractive for practical denoising applications.
APA, Harvard, Vancouver, ISO, and other styles
32

Liang, Dong Tai. "Color Image Denoising Using Gaussian Multiscale Multivariate Image Analysis." Applied Mechanics and Materials 37-38 (November 2010): 248–52. http://dx.doi.org/10.4028/www.scientific.net/amm.37-38.248.

Full text
Abstract:
Inspired by the human vision system, a new image representation and analysis model based on Gaussian multiscale multivariate image analysis (MIA) is proposed. The multiscale color texture representations for the original image are used to constitute the multivariate image, each channel of which represents a perceptual observation from different scales. Then the MIA decomposes this multivariate image into multiscale color texture perceptual features (the principal component score images). These score images could be interpreted as 1) the output of three color opponent channels: black versus white, red versus green and blue versus yellow, and 2) the edge information, and 3) higher-order Gaussian derivatives. Finally the color image denoising approach based on the models is presented. Experiments show that this denoising method against Gaussian filters significantly improves the denoising effect by preserving more edge information.
APA, Harvard, Vancouver, ISO, and other styles
33

Ali, Mohammed Nabih. "A wavelet-based method for MRI liver image denoising." Biomedical Engineering / Biomedizinische Technik 64, no. 6 (December 18, 2019): 699–709. http://dx.doi.org/10.1515/bmt-2018-0033.

Full text
Abstract:
Image denoising remains one of the primary issues in the field of image processing, and several image denoising algorithms utilizing wavelet transforms have been presented. This paper deals with the use of the wavelet transform for magnetic resonance imaging (MRI) liver image denoising using selected wavelet families and thresholding methods with appropriate decomposition levels. Denoised MRI liver images are compared with the original images to determine the most suitable parameters (wavelet family, level of decomposition, and thresholding type) for the denoising process. The performance of our algorithm is evaluated using the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and mean square error (MSE). The results show that the tenth-order Daubechies wavelet with the first and second levels of decomposition provides the most suitable parameters for MRI liver image denoising.
APA, Harvard, Vancouver, ISO, and other styles
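The evaluation metrics used above are standard and easy to state precisely. For 8-bit images (peak value 255):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

SNR is defined analogously, with the reference signal's power in place of `peak ** 2`.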
34

Gopatoti, Anandbabu, Merajothu Chandra Naik, and Kiran Kumar Gopathoti. "Convolutional Neural Network Based Image Denoising for Better Quality of Images." International Journal of Engineering & Technology 7, no. 3.27 (August 15, 2018): 356. http://dx.doi.org/10.14419/ijet.v7i3.27.17972.

Full text
Abstract:
This work surveys and compares different image denoising methods based on wavelet transforms and convolutional neural networks, combining several techniques to arrive at a better denoising method. Digital images play a vital role in communicating visual information, but the images received at the receiver side are often corrupted; in practical settings, a powerful image denoising approach therefore remains a legitimate undertaking. Wavelet transforms are highly beneficial for signal processing tasks such as image compression and image denoising. To obtain a better-quality output image, denoising methods manipulate the image data, with the primary aim of modifying the wavelet coefficients in the new basis so that the noise within the image data can be eliminated. In this paper, we propose methods for denoising images corrupted by different noises, such as Gaussian and speckle noise, implemented using adaptive wavelet thresholding (SureShrink, BlockShrink, NeighShrink, and BivariateShrink) and a convolutional neural network (CNN) model; the experimental results report the comparative accuracy of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
35

Choubey, Shruti Bhargava, and S. P. V. Subba Rao. "Implementation of hybrid filter technique for noise removal from medical images." International Journal of Engineering & Technology 7, no. 1.1 (December 21, 2017): 25. http://dx.doi.org/10.14419/ijet.v7i1.1.8917.

Full text
Abstract:
Image denoising is used to eliminate noise while retaining as much as possible of the important signal features; its function is to approximate the original image from the noisy data. Image denoising remains a challenge for researchers because noise removal introduces artifacts and causes blurring of the images. It has become an essential exercise in medical imaging, especially magnetic resonance imaging (MRI). MR images are typically corrupted with noise, which hinders medical diagnosis based on these images. The presence of noise not only causes undesirable visual quality but also lowers the visibility of low-contrast objects. In this paper, a noise removal approach is proposed using the hybridization of three filters with the DWT method. Results are calculated in terms of PSNR, MSE, and processing time.
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Fangjia, Shaoping Xu, and Chongxi Li. "Boosting of Denoising Effect with Fusion Strategy." Applied Sciences 10, no. 11 (June 1, 2020): 3857. http://dx.doi.org/10.3390/app10113857.

Full text
Abstract:
Image denoising, a fundamental step in image processing, has been widely studied for several decades. Denoising methods can be classified as internal or external depending on whether they exploit the internal prior or the external noisy-clean image priors to reconstruct a latent image. Typically, these two kinds of methods have their respective merits and demerits. Using a single denoising model to improve existing methods remains a challenge. In this paper, we propose a method for boosting the denoising effect via the image fusion strategy. This study aims to boost the performance of two typical denoising methods, the nonlocally centralized sparse representation (NCSR) and residual learning of deep CNN (DnCNN). These two methods have complementary strengths and can be chosen to represent internal and external denoising methods, respectively. The boosting process is formulated as an adaptive weight-based image fusion problem by preserving the details for the initial denoised images output by the NCSR and the DnCNN. Specifically, we design two kinds of weights to adaptively reflect the influence of the pixel intensity changes and the global gradient of the initial denoised images. A linear combination of these two kinds of weights determines the final weight. The initial denoised images are integrated into the fusion framework to achieve our denoising results. Extensive experiments show that the proposed method significantly outperforms the NCSR and the DnCNN both quantitatively and visually when they are considered as individual methods; similarly, it outperforms several other state-of-the-art denoising methods.
APA, Harvard, Vancouver, ISO, and other styles
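The fusion idea above can be illustrated with a single gradient-based weight per pixel. The paper combines a gradient weight with an intensity-change weight; this simplified sketch keeps only the gradient part, so it is an assumption-laden reduction of the actual scheme.

```python
import numpy as np

def grad_mag(img):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse(den_a, den_b, eps=1e-8):
    """Pixel-wise convex combination of two denoised candidates, weighted
    so the result leans toward whichever input kept more local structure."""
    wa, wb = grad_mag(den_a), grad_mag(den_b)
    w = (wa + eps) / (wa + wb + 2 * eps)
    return w * den_a + (1 - w) * den_b
```

Because the weights sum to one at every pixel, the fused value always lies between the two candidates, so fusion can never produce intensities outside what either denoiser proposed.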
37

Yuan, Quan, Zhenyun Peng, Zhencheng Chen, Yanke Guo, Bin Yang, and Xiangyan Zeng. "Medical Image Denoising Algorithm Based on Sparse Nonlocal Regularized Weighted Coding and Low Rank Constraint." Scientific Programming 2021 (June 7, 2021): 1–6. http://dx.doi.org/10.1155/2021/7008406.

Full text
Abstract:
Medical image information may be polluted by noise in the process of generation and transmission, which will seriously hinder the follow-up image processing and medical diagnosis. In medical images, there is a typical mixed noise composed of additive white Gaussian noise (AWGN) and impulse noise. In the conventional denoising methods, impulse noise is first removed, followed by the elimination of white Gaussian noise (WGN). However, it is difficult to separate the two kinds of noises completely in practical application. The existing denoising algorithm of weight coding based on sparse nonlocal regularization, which can simultaneously remove AWGN and impulse noise, is plagued by the problems of incomplete noise removal and serious loss of details. The denoising algorithm based on sparse representation and low rank constraint can preserve image details better. Thus, a medical image denoising algorithm based on sparse nonlocal regularization weighted coding and low rank constraint is proposed. The denoising effect of the proposed method and the original algorithm on computed tomography (CT) image and magnetic resonance (MR) image are compared. It is revealed that, under different σ and ρ values, the PSNR and FSIM values of CT and MRI images are evidently superior to those of traditional algorithms, suggesting that the algorithm proposed in this work has better denoising effects on medical images than traditional denoising algorithms.
APA, Harvard, Vancouver, ISO, and other styles
38

Feng, Yayuan, Yu Shi, and Dianjun Sun. "Blind Poissonian Image Deblurring Regularized by a Denoiser Constraint and Deep Image Prior." Mathematical Problems in Engineering 2020 (August 24, 2020): 1–15. http://dx.doi.org/10.1155/2020/9483521.

Full text
Abstract:
The denoising and deblurring of Poisson images are opposite inverse problems. Single image deblurring methods are sensitive to image noise. A single noise filter can effectively remove noise in advance, but it also damages blurred information. To simultaneously solve the denoising and deblurring of Poissonian images better, we learn the implicit deep image prior from a single degraded image and use the denoiser as a regularization term to constrain the latent clear image. Combined with the explicit L0 regularization prior of the image, the denoising and deblurring model of the Poisson image is established. Then, the split Bregman iteration strategy is used to optimize the point spread function estimation and latent clear image estimation. The experimental results demonstrate that the proposed method achieves good restoration results on a series of simulated and real blurred images with Poisson noise.
APA, Harvard, Vancouver, ISO, and other styles
39

Qi, Min, Zuo Feng Zhou, Jing Liu, Jian Zhong Cao, Hao Wang, A. Qi Yan, Deng Shan Wu, Hui Zhang, and Li Nao Tang. "Image Denoising Algorithm via Spatially Adaptive Bilateral Filtering." Advanced Materials Research 760-762 (September 2013): 1515–18. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1515.

Full text
Abstract:
The classical bilateral filtering algorithm is a non-linear, non-iterative image denoising method in the spatial domain which utilizes the spatial information and the intensity information between a point and its neighbors to smooth noisy images while preserving edges well. To further improve denoising performance, a spatially adaptive bilateral filtering image denoising algorithm with low computational complexity is proposed. The proposed algorithm takes advantage of the local statistical characteristics of the image signal to better preserve edges and textures while suppressing noise. Experimental results show that the proposed algorithm achieves better performance than the classical bilateral filtering image denoising method.
APA, Harvard, Vancouver, ISO, and other styles
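The classical (non-adaptive) bilateral filter that the paper builds on can be written directly in NumPy. The radius and sigma values below are illustrative assumptions:

```python
import numpy as np

def bilateral(img, r=2, sigma_s=2.0, sigma_r=20.0):
    """Bilateral filter: each output pixel is a normalized average of its
    neighbours, weighted by both spatial and intensity closeness."""
    img = img.astype(float)
    p = np.pad(img, r, mode="edge")
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = p[r+dy:r+dy+img.shape[0], r+dx:r+dx+img.shape[1]]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            num += w * shifted
            den += w
    return num / den
```

The intensity term is what preserves edges: neighbors on the other side of a strong step receive near-zero weight, so the average stays on the pixel's own side. The spatially adaptive variant in the paper would, roughly, vary `sigma_r` with local statistics instead of keeping it fixed.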
40

Jin, Yan, Wenyu Jiang, Jianlong Shao, and Jin Lu. "An Improved Image Denoising Model Based on Nonlocal Means Filter." Mathematical Problems in Engineering 2018 (July 4, 2018): 1–12. http://dx.doi.org/10.1155/2018/8593934.

Full text
Abstract:
The nonlocal means filter plays an important role in image denoising. We propose in this paper an image denoising model which is a suitable improvement of the nonlocal means filter, and we compare this model with the nonlocal means filter both theoretically and experimentally. Experimental results show that the new model provides good results for image denoising; in particular, it is better than the nonlocal means filter when denoising natural images with rich textures.
APA, Harvard, Vancouver, ISO, and other styles
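The baseline being improved here is textbook non-local means: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. A plain, unoptimized sketch with small patch and search windows (sizes and `h` are illustrative assumptions):

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=10.0):
    """Non-local means: weights decay with the mean squared difference
    between the reference patch and each candidate patch."""
    img = img.astype(float)
    pr, sr = patch // 2, search // 2
    p = np.pad(img, pr + sr, mode="edge")
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + pr + sr, x + pr + sr
            ref = p[cy-pr:cy+pr+1, cx-pr:cx+pr+1]
            wsum = vsum = 0.0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    cand = p[cy+dy-pr:cy+dy+pr+1, cx+dx-pr:cx+dx+pr+1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    vsum += w * p[cy + dy, cx + dx]
            out[y, x] = vsum / wsum
    return out
```

On a constant image every candidate patch matches the reference, all weights are equal, and the filter is the identity; on noisy textures the patch comparison is what lets averaging follow repeating structure instead of blurring across it.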
41

Malar, E., A. Kandaswamy, S. S. Kirthana, D. Nivedhitha, and M. Gauthaam. "Curvelet image denoising of mammogram images." International Journal of Medical Engineering and Informatics 5, no. 1 (2013): 60. http://dx.doi.org/10.1504/ijmei.2013.051665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Feng, Weichuan Ni, Shaojiang Liu, Zhiming Xu, Zemin Qiu, and Zhiping Wan. "A 2D image 3D reconstruction function adaptive denoising algorithm." PeerJ Computer Science 9 (October 3, 2023): e1604. http://dx.doi.org/10.7717/peerj-cs.1604.

Full text
Abstract:
To address the issue of image denoising algorithms blurring image details during the denoising process, we propose an adaptive denoising algorithm for the 3D reconstruction of 2D images. This algorithm takes into account the inherent visual characteristics of human eyes and divides the image into regions based on the entropy value of each region. The background region is subject to threshold denoising, while the target region undergoes processing using an adversarial generative network. This network effectively handles 2D target images with noise and generates a 3D model of the target. The proposed algorithm aims to enhance the noise immunity of 2D images during the 3D reconstruction process and ensure that the constructed 3D target model better preserves the original image’s detailed information. Through experimental testing on 2D images and real pedestrian videos contaminated with noise, our algorithm demonstrates stable preservation of image details. The reconstruction effect is evaluated in terms of noise reduction and the fidelity of the 3D model to the original target. The results show an average noise reduction exceeding 95% while effectively retaining most of the target’s feature information in the original image. In summary, our proposed adaptive denoising algorithm improves the 3D reconstruction process by preserving image details that are often compromised by conventional denoising techniques. This has significant implications for enhancing image quality and maintaining target information fidelity in 3D models, providing a promising approach for addressing the challenges associated with noise reduction in 2D images during 3D reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
43

Ghafar, Abdul, and Usman Sattar. "Convolutional Autoencoder for Image Denoising." UMT Artificial Intelligence Review 1, no. 2 (December 31, 2021): 1–11. http://dx.doi.org/10.32350/air.0102.01.

Full text
Abstract:
Image denoising is the process of removing noise from an image to produce a sharp and clear result. It is mainly used in medical imaging, where machine malfunctions or the precautions taken to protect patients from radiation introduce substantial noise into the final image. Several techniques can be applied to remove such distortions before an image is finalized, and autoencoders are among the most notable tools for this purpose. Traditional autoencoders, however, are not adaptive, so the resulting image quality is limited. In this paper, we introduce a modified autoencoder built on a deep convolutional neural network, which produces better-quality images than traditional autoencoders. After training (monitored with TensorBoard), the modified autoencoder was tested on a different dataset containing various shapes. The results were satisfactory, though not ideal, for several reasons; nevertheless, our proposed system still outperformed traditional autoencoders.
Keywords: image denoising, deep learning, convolutional neural network, image autoencoder, image convolutional autoencoder
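A convolutional denoising autoencoder of the general kind this paper describes can be sketched in PyTorch as follows. The architecture and layer sizes here are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Minimal convolutional autoencoder: the encoder downsamples the noisy
    input to a compact representation; the decoder upsamples it back to a
    clean image estimate in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Training would minimize a reconstruction loss such as `nn.MSELoss()` between `model(noisy)` and the corresponding clean image.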
APA, Harvard, Vancouver, ISO, and other styles
44

Hu, Yong, Shaoping Xu, Xiaohui Cheng, Changfei Zhou, and Yufeng Hu. "A Triple Deep Image Prior Model for Image Denoising Based on Mixed Priors and Noise Learning." Applied Sciences 13, no. 9 (April 23, 2023): 5265. http://dx.doi.org/10.3390/app13095265.

Full text
Abstract:
Image denoising poses a significant challenge in computer vision because high-level visual tasks depend on image quality. Several advanced denoising models have been proposed in recent decades. Recently, deep image prior (DIP), which uses a particular network structure and a noisy image to achieve denoising, has provided a novel image denoising method. However, the denoising performance of the DIP model still lags behind that of mainstream denoising models. To improve its performance, we propose a TripleDIP model with mixed internal and external image priors for image denoising. The TripleDIP comprises three branches: one for content learning and two for independent noise learning. We first use a Transformer-based supervised model (i.e., Restormer) to obtain a pre-denoised image (used as an external prior) from a given noisy image, and then take the noisy image and the pre-denoised image as the first and second target images, respectively, to perform the denoising process under the designed loss function. We add constraints between the two noise-learning branches and the content-learning branch, allowing the TripleDIP to employ the external prior while enhancing the stability of independent noise learning. Moreover, the automatic stopping criterion we propose prevents the model from overfitting the noisy image and improves execution efficiency. The experimental results demonstrate that TripleDIP outperforms the original DIP by an average of 2.79 dB, classical unsupervised methods such as N2V by an average of 2.68 dB, and the latest supervised models such as SwinIR and Restormer by averages of 0.63 dB and 0.59 dB, respectively, on the Set12 dataset. This can mainly be attributed to the fact that two-branch noise learning obtains more stable noise estimates while constraining the content-learning branch’s optimization process. Our proposed TripleDIP significantly enhances DIP denoising performance and has broad application potential in scenarios with insufficient training data.
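The core deep-image-prior idea this paper builds on can be sketched as follows: a randomly initialized CNN is fitted to the single noisy image, and stopping the optimization early yields the denoised estimate. This is a simplified single-branch sketch, not the TripleDIP model; the network, input depth, and step count are illustrative assumptions.

```python
import torch
import torch.nn as nn

def dip_denoise(noisy, steps=200, lr=0.01):
    """Deep-image-prior style fit: a small CNN maps a fixed random input z
    toward the noisy image; early stopping gives the denoised estimate,
    since the network fits structure before it fits noise."""
    net = nn.Sequential(
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    z = torch.randn(1, 8, *noisy.shape[-2:])  # fixed random code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()  # fit the single noisy image
        loss.backward()
        opt.step()
    return net(z).detach()
```

TripleDIP extends this single reconstruction branch with two additional noise-learning branches and an external Restormer-produced prior as a second target.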
APA, Harvard, Vancouver, ISO, and other styles
45

Phan, Tran Dang Khoa. "A multi-stage algorithm for image denoising based on PCA and adaptive TV-regularization." Cybernetics and Physics, Volume 10, 2021, Number 3 (November 30, 2021): 162–70. http://dx.doi.org/10.35470/2226-4116-2021-10-3-162-170.

Full text
Abstract:
In this paper, we present an image denoising algorithm comprising three stages. In the first stage, Principal Component Analysis (PCA) is used to suppress the noise. PCA is applied to image blocks to characterize localized features and rare image patches. In the second stage, we use the Gaussian curvature to develop an adaptive total-variation-based (TV) denoising model that effectively removes visual artifacts and residual noise left by the first stage. Finally, the denoised image is sharpened to enhance the contrast of the denoising result. Experimental results on natural images and computed tomography (CT) images demonstrate that the proposed algorithm yields better denoising results than competing algorithms, both qualitatively and quantitatively.
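The PCA stage of such block-based denoisers can be sketched as follows: vectorized patches are projected onto their leading principal components, discarding low-variance directions that mostly carry noise. This is a generic sketch under an assumed patch vectorization, not the authors' exact first stage.

```python
import numpy as np

def pca_denoise_patches(patches, keep=4):
    """Project N x d patch vectors onto their top `keep` principal
    components and reconstruct, suppressing low-variance (noise) directions."""
    mean = patches.mean(axis=0)
    X = patches - mean
    # Eigen-decomposition of the d x d patch covariance matrix
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = vecs[:, -keep:]            # top-`keep` components
    return (X @ basis) @ basis.T + mean
```

In a full pipeline the denoised patch vectors would be reshaped back into blocks and aggregated into the output image, which the TV stage then cleans up.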
APA, Harvard, Vancouver, ISO, and other styles
46

Park, Yunjin, Sukho Lee, Byeongseon Jeong, and Jungho Yoon. "Joint Demosaicing and Denoising Based on a Variational Deep Image Prior Neural Network." Sensors 20, no. 10 (May 24, 2020): 2970. http://dx.doi.org/10.3390/s20102970.

Full text
Abstract:
A joint demosaicing and denoising task refers to simultaneously reconstructing and denoising a color image from a patterned image obtained by a monochrome image sensor with a color filter array. Recently, inspired by the success of deep learning in many image processing tasks, research has applied convolutional neural networks (CNNs) to joint demosaicing and denoising. However, such CNNs require large amounts of training data and work well only on patterned images with the same noise level they were trained on. In this paper, we propose a variational deep image prior network for joint demosaicing and denoising that can be trained on a single patterned image and works for patterned images with different levels of noise. We also propose a new RGB color filter array (CFA) that works better with the proposed network than the conventional Bayer CFA. Mathematical justifications of why the variational deep image prior network suits the task of joint demosaicing and denoising are also given, and experimental results verify the performance of the proposed method.
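For context, the patterned single-channel input such a network consumes can be simulated by sampling a color filter array from an RGB image. The sketch below uses the conventional RGGB Bayer layout; note that the paper's proposed CFA differs from this standard pattern.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer pattern from an H x W x 3 image, producing the
    single-channel raw image a joint demosaicing-denoising model takes as input."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return raw
```

Adding noise to `raw` then yields the kind of degraded patterned input the joint reconstruction task starts from.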
APA, Harvard, Vancouver, ISO, and other styles
47

Hossain, Sadat, and Bumshik Lee. "NG-GAN: A Robust Noise-Generation Generative Adversarial Network for Generating Old-Image Noise." Sensors 23, no. 1 (December 26, 2022): 251. http://dx.doi.org/10.3390/s23010251.

Full text
Abstract:
Numerous old images and videos were captured and stored under unfavorable conditions. Hence, old images and videos have uncertain noise patterns that differ from those of modern ones. Denoising old images is an effective technique for reconstructing a clean image containing crucial information. However, obtaining noisy-clean image pairs for denoising old images is difficult and challenging for supervised learning. Preparing such pairs is expensive and burdensome, as existing denoising approaches require a considerable number of noisy-clean image pairs. To address this issue, we propose a robust noise-generation generative adversarial network (NG-GAN) that utilizes unpaired datasets to replicate the noise distribution of degraded old images, inspired by the CycleGAN model. In our proposed method, the perception-based image quality evaluator metric is used to control noise generation effectively. An unpaired dataset is generated by selecting clean images with features that match the old images to train the proposed model. Experimental results demonstrate that the dataset generated by our proposed NG-GAN can better train state-of-the-art denoising models, enabling them to denoise old videos effectively. On the dataset generated by our proposed NG-GAN, the denoising models achieve average improvements of 0.37 dB in peak signal-to-noise ratio and 0.06 in structural similarity index measure.
APA, Harvard, Vancouver, ISO, and other styles
48

Talebi, Hossein, and Peyman Milanfar. "Global Image Denoising." IEEE Transactions on Image Processing 23, no. 2 (February 2014): 755–68. http://dx.doi.org/10.1109/tip.2013.2293425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Knaus, Claude, and Matthias Zwicker. "Progressive Image Denoising." IEEE Transactions on Image Processing 23, no. 7 (July 2014): 3114–25. http://dx.doi.org/10.1109/tip.2014.2326771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Yan, and K. J. Ray Liu. "Image Denoising Games." IEEE Transactions on Circuits and Systems for Video Technology 23, no. 10 (October 2013): 1704–16. http://dx.doi.org/10.1109/tcsvt.2013.2255433.

Full text
APA, Harvard, Vancouver, ISO, and other styles