Academic literature on the topic 'Denoisers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Denoisers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Denoisers"

1

Gu, Jeongmin, Jose A. Iglesias-Guitian, and Bochang Moon. "Neural James-Stein Combiner for Unbiased and Biased Renderings." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–14. http://dx.doi.org/10.1145/3550454.3555496.

Abstract:
Unbiased rendering algorithms such as path tracing produce accurate images given a huge number of samples, but in practice, the techniques often leave visually distracting artifacts (i.e., noise) in their rendered images due to a limited time budget. A favored approach for mitigating the noise problem is applying learning-based denoisers to unbiased but noisy rendered images and suppressing the noise while preserving image details. However, such denoising techniques typically introduce a systematic error, i.e., the denoising bias, which does not decline as rapidly when increasing the sample size, unlike the other type of error, i.e., variance. It can technically lead to slow numerical convergence of the denoising techniques. We propose a new combination framework built upon the James-Stein (JS) estimator, which merges a pair of unbiased and biased rendering images, e.g., a path-traced image and its denoised result. Unlike existing post-correction techniques for image denoising, our framework helps an input denoiser have lower errors than its unbiased input without relying on accurate estimation of per-pixel denoising errors. We demonstrate that our framework based on the well-established JS theories allows us to improve the error reduction rates of state-of-the-art learning-based denoisers more robustly than recent post-denoisers.
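The combination described in this abstract rests on the classical positive-part James-Stein estimator, which shrinks an unbiased estimate toward a biased anchor by an amount driven by the estimated variance. A minimal sketch only, not the paper's neural combiner; using the mean of the per-pixel variances as the scalar noise level is a simplifying assumption:

```python
import numpy as np

def james_stein_combine(unbiased, biased, variance):
    """Shrink an unbiased estimate toward a biased one (positive-part JS).

    unbiased : noisy but unbiased pixel array (e.g. a path-traced image)
    biased   : low-variance biased estimate (e.g. its denoised result)
    variance : per-component variance of the unbiased estimate
    """
    z = unbiased.ravel().astype(float)
    b = biased.ravel().astype(float)
    d = z.size
    assert d >= 3, "JS dominance requires dimension >= 3"
    diff = z - b
    # positive-part shrinkage factor; larger variance pulls harder toward the
    # biased (denoised) image, smaller variance reverts to the unbiased input
    shrink = max(0.0, 1.0 - (d - 2) * np.mean(variance) / max(np.sum(diff ** 2), 1e-12))
    return (b + shrink * diff).reshape(unbiased.shape)
```

When the variance of the unbiased input is large relative to the gap between the two images, the output leans on the denoised image; as the sample count grows and the variance shrinks, it reverts to the unbiased input, which is what restores the convergence rate.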
2

Zheng, Shaokun, Fengshi Zheng, Kun Xu, and Ling-Qi Yan. "Ensemble denoising for Monte Carlo renderings." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–17. http://dx.doi.org/10.1145/3478513.3480510.

Abstract:
Various denoising methods have been proposed to clean up the noise in Monte Carlo (MC) renderings, each having different advantages, disadvantages, and applicable scenarios. In this paper, we present Ensemble Denoising, an optimization-based technique that combines multiple individual MC denoisers. The combined image is modeled as a per-pixel weighted sum of output images from the individual denoisers. Computation of the optimal weights is formulated as a constrained quadratic programming problem, where we apply a dual-buffer strategy to estimate the overall MSE. We further propose an iterative solver to overcome practical issues involved in the optimization. Besides nice theoretical properties, our ensemble denoiser is demonstrated to be effective and robust, and outperforms any individual denoiser across dozens of scenes and different levels of sample rates. We also perform a comprehensive analysis on the selection of individual denoisers to be combined, providing important and practical guides for users.
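For two denoisers the per-pixel quadratic described in the abstract has a closed-form minimizer, which makes the idea easy to sketch. Below is a hypothetical toy version: the unseen ground truth is stood in for by an independent noisy reference (the role the dual buffer plays when estimating MSE), and the weight is clipped to [0, 1] in place of the full constrained-QP machinery:

```python
import numpy as np

def ensemble_two_denoisers(denoised_a, denoised_b, noisy_ref):
    """Per-pixel convex combination of two denoised images.

    Toy stand-in for the constrained-QP formulation: per pixel, pick the
    weight w minimizing || w*d_a + (1-w)*d_b - ref ||^2, clipped to [0, 1]
    to respect the convexity constraint.
    """
    d = denoised_a - denoised_b
    w = np.where(d != 0,
                 (noisy_ref - denoised_b) * d / np.maximum(d * d, 1e-12),
                 0.5)  # weight is irrelevant where both denoisers agree
    w = np.clip(w, 0.0, 1.0)
    return w * denoised_a + (1.0 - w) * denoised_b
```

The actual paper solves a constrained quadratic program jointly over many denoisers with an iterative solver; this sketch only illustrates why a per-pixel convex combination can beat each individual input.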
3

Hofmann, Nikolai, Jon Hasselgren, and Jacob Munkberg. "Joint Neural Denoising of Surfaces and Volumes." Proceedings of the ACM on Computer Graphics and Interactive Techniques 6, no. 1 (May 12, 2023): 1–16. http://dx.doi.org/10.1145/3585497.

Abstract:
Denoisers designed for surface geometry rely on noise-free feature guides for high quality results. However, these guides are not readily available for volumes. Our method enables combined volume and surface denoising in real time from low sample count (4 spp) renderings. The rendered image is decomposed into volume and surface layers, leveraging spatio-temporal neural denoisers for both components. The individual signals are composited using learned weights and denoised transmittance. Our architecture outperforms current denoisers in scenes containing both surfaces and volumes, and produces temporally stable results at interactive rates.
4

Han, Kyu Beom, Olivia G. Odenthal, Woo Jae Kim, and Sung-Eui Yoon. "Pixel-wise Guidance for Utilizing Auxiliary Features in Monte Carlo Denoising." Proceedings of the ACM on Computer Graphics and Interactive Techniques 6, no. 1 (May 12, 2023): 1–19. http://dx.doi.org/10.1145/3585505.

Abstract:
Auxiliary features such as geometric buffers (G-buffers) and path descriptors (P-buffers) have been shown to significantly improve Monte Carlo (MC) denoising. However, recent approaches implicitly learn to exploit auxiliary features for denoising, which can lead to insufficient utilization of each type of auxiliary feature. To overcome this issue, we propose a denoising framework that relies on explicit pixel-wise guidance for utilizing auxiliary features. First, we train two denoisers, each with a different auxiliary feature (i.e., G-buffers or P-buffers). Then we design an ensembling network that produces per-pixel ensembling weight maps, which represent pixel-wise guidance for which auxiliary feature should be dominant in reconstructing each individual pixel, and use them to ensemble the two denoised results of our denoisers. We also propagate this pixel-wise guidance to the denoisers by jointly training the denoisers and the ensembling network, further guiding the denoisers to focus on regions where G-buffers or P-buffers are relatively important for denoising. Our results show considerable improvement in denoising performance compared to the baseline denoising model using both G-buffers and P-buffers. The source code is available at https://github.com/qbhan/GuidanceMCDenoising.
5

Liu, Shuaiqi, Tong Liu, Lele Gao, Hailiang Li, Qi Hu, Jie Zhao, and Chong Wang. "Convolutional Neural Network and Guided Filtering for SAR Image Denoising." Remote Sensing 11, no. 6 (March 23, 2019): 702. http://dx.doi.org/10.3390/rs11060702.

Abstract:
Coherent noise often interferes with synthetic aperture radar (SAR), which has a huge impact on subsequent processing and analysis. This paper puts forward a novel algorithm involving a convolutional neural network (CNN) and guided filtering for SAR image denoising, which combines the advantages of model-based optimization and discriminative learning and considers how to obtain the best image information and improve the resolution of the images. The proposed method first filters an SAR image with five denoisers at different noise levels to obtain five denoised images, employing an efficient and effective CNN denoiser prior; a guided filtering-based fusion algorithm then integrates the five denoised images into a final denoised image. The experimental results indicate that the algorithm not only eliminates noise but also significantly improves the visual quality of the image, allowing it to outperform some recent denoising methods in this field.
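The fusion step relies on the guided filter, an edge-preserving smoother whose output is a locally linear function of a guidance image. A compact NumPy implementation of the standard guided filter (He et al.) as a sketch; the radius and regularization `eps` are illustrative defaults, and the surrounding five-denoiser fusion pipeline is not reproduced:

```python
import numpy as np

def _box(x, r):
    """Mean filter over a (2r+1)x(2r+1) window with edge padding."""
    xp = np.pad(x, r, mode='edge')
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + h, dx:dx + w]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Edge-preserving smoothing of `src`, steered by edges in `guide`."""
    mean_i, mean_p = _box(guide, r), _box(src, r)
    cov_ip = _box(guide * src, r) - mean_i * mean_p   # local covariance
    var_i = _box(guide * guide, r) - mean_i * mean_i  # local variance of guide
    a = cov_ip / (var_i + eps)      # locally linear model: q = a*guide + b
    b = mean_p - a * mean_i
    return _box(a, r) * guide + _box(b, r)
```

In a fusion setting, filtering per-image weight maps with the denoised images as guides keeps the fused weights aligned with image edges, which is the property the paper exploits.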
6

Choi, Joon Hee, Omar A. Elgendy, and Stanley H. Chan. "Optimal Combination of Image Denoisers." IEEE Transactions on Image Processing 28, no. 8 (August 2019): 4016–31. http://dx.doi.org/10.1109/tip.2019.2903321.

7

Meng, Xiyan, and Fang Zhuang. "A New Boosting Algorithm for Shrinkage Curve Learning." Mathematical Problems in Engineering 2022 (April 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6339758.

Abstract:
To a large extent, classical boosting denoising algorithms can improve denoising performance. However, these algorithms only work well when the denoisers are linear. In this paper, we propose a boosting algorithm that can be used with a nonlinear denoiser, and we implement it within a shrinkage curve learning denoising algorithm, which is a nonlinear denoiser. The convergence of the proposed algorithm is also proved. Experimental results indicate that the proposed algorithm is effective and that the dependence of the shrinkage curve learning denoising algorithm on training samples is reduced. In addition, the proposed algorithm achieves better performance in terms of visual quality and peak signal-to-noise ratio (PSNR).
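The classical boosting scheme that this paper generalizes to nonlinear denoisers can be written in a few lines: denoise, then repeatedly denoise the residual and add the recovered signal content back. A generic sketch (the specific shrinkage-curve denoiser and the paper's modified update are not reproduced):

```python
import numpy as np

def boost_denoiser(denoise, noisy, iters=3):
    """Classic residual boosting ('twicing') around an arbitrary denoiser.

    denoise : any denoiser, a function ndarray -> ndarray
    noisy   : the observed noisy signal
    """
    x = denoise(noisy)
    for _ in range(iters):
        # signal content leaked into the residual is denoised and added back
        x = x + denoise(noisy - x)
    return x
```

With a linear denoiser this matches the classical analysis; the paper's contribution is a variant whose convergence is proved when `denoise` is nonlinear, such as a learned shrinkage curve.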
8

Liu, Yukun, Bowen Wan, Daming Shi, and Xiaochun Cheng. "Generative Recorrupted-to-Recorrupted: An Unsupervised Image Denoising Network for Arbitrary Noise Distribution." Remote Sensing 15, no. 2 (January 6, 2023): 364. http://dx.doi.org/10.3390/rs15020364.

Abstract:
With the great breakthrough of supervised learning in the field of denoising, more and more works focus on end-to-end learning to train denoisers. In practice, however, it is particularly difficult to obtain the labels this approach requires in the training data. Several unsupervised denoisers have emerged in recent years, but to ensure their effectiveness the noise model must be determined in advance, which limits their practical use; in addition, an inaccurate noise prior obtained from noise estimation algorithms leads to low denoising accuracy. We therefore design a more practical denoiser that requires neither clean images as training labels nor noise model assumptions. Our method also needs the support of a noise model; the difference is that the model is generated from a residual image and a random mask during the network training process, and the input and target of the network are generated from a single noisy image and the noise model. At the same time, an unsupervised module and a pseudo-supervised module are trained. Extensive experiments demonstrate the effectiveness of our framework, which even surpasses the accuracy of supervised denoising.
9

Galande, Ashwini S., Vikas Thapa, Hanu Phani Ram Gurram, and Renu John. "Untrained deep network powered with explicit denoiser for phase recovery in inline holography." Applied Physics Letters 122, no. 13 (March 27, 2023): 133701. http://dx.doi.org/10.1063/5.0144795.

Abstract:
Single-shot reconstruction of the inline hologram is highly desirable as a cost-effective and portable imaging modality in resource-constrained environments. However, twin image artifacts, caused by the propagation of the conjugated wavefront with missing phase information, contaminate the reconstruction. Existing end-to-end deep learning-based methods require massive training data pairs along with environmental and system stability, which is very difficult to achieve. The recently proposed deep image prior (DIP) integrates the physical model of hologram formation into deep neural networks without any prior training requirement. However, fitting the model output to a single measured hologram also fits interference-related noise. To overcome this problem, we have implemented an untrained deep neural network powered with explicit regularization by denoising (RED), which removes twin images and noise in the reconstruction. Our work demonstrates the use of the alternating direction method of multipliers (ADMM) to combine DIP and RED into a robust single-shot phase recovery process. ADMM, which is based on the variable-splitting approach, makes it possible to plug and play different denoisers without the need for explicit differentiation. Experimental results show that sparsity-promoting denoisers give better results than DIP alone in terms of phase signal-to-noise ratio (SNR). Considering the computational complexities, we conclude that the total variation denoiser is the most appropriate for hologram reconstruction.
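Regularization by denoising (RED), used here to suppress the twin image, has the convenient property that, under its assumptions on the denoiser, the gradient of the prior is just the denoising residual x - D(x). A bare-bones steepest-descent sketch for a generic linear measurement model; the DIP network, the ADMM splitting, and the holographic forward model are omitted, and `lam` and `step` are illustrative placeholders:

```python
import numpy as np

def red_gradient_descent(A, y, denoise, lam=0.1, step=0.2, iters=100):
    """Regularization by Denoising (RED) via steepest descent.

    Minimizes ||Ax - y||^2 / 2 + lam * rho(x), where under RED's
    conditions the gradient of rho(x) = x'(x - D(x)) / 2 is x - D(x).
    """
    x = A.T @ y  # simple back-projection as the starting point
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * (x - denoise(x))
        x = x - step * grad
    return x
```

The thesis work plugs this kind of denoiser-driven regularizer into ADMM alongside DIP, which is what allows swapping denoisers without differentiating through them.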
10

Kim, Bong-Hyun, and S. Madhavi. "Method for Quantum Denoisers Using Convolutional Neural Network." Computational Intelligence and Neuroscience 2022 (October 6, 2022): 1–7. http://dx.doi.org/10.1155/2022/4885897.

Abstract:
In many applications of quantum information science, high-dimensional entanglement is needed. Quantum teleportation transfers information from one place to another using Einstein-Podolsky-Rosen (EPR) pairs and two classical bits of communication over a channel. Since we cannot produce multiple copies of an unknown state for amplification, we generate multiple EPR pairs. However, after distribution, the EPR pairs have decreased fidelity with respect to the ideal EPR state. So, to maintain the quantum states and maximize the entanglement without losing the strength of the states, we propose to denoise the channel for a few types of noise. We created a random noise source and filtered out the irrelevant information without affecting the relevant information encoded in the quantum states. The proposed model successfully denoises GHZ states corrupted by spin-flip and bit-flip errors. Little research has been carried out using machine-learning-based neural networks for noise reduction in quantum channels. In this paper, we propose a quantum denoiser, CNQD, which uses a feedforward convolutional neural network model. We tuned our model with highly entangled GHZ states with zero phase and with phases in [0, π], mixed with different kinds of noise. Finally, the proposed model can be used for optimal quantum communication over noisy quantum channels using GHZ states.

Dissertations / Theses on the topic "Denoisers"

1

Bal, Shamit. "Image compression with denoised reduced-search fractal block coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq23210.pdf.

2

Kharboutly, Anas Mustapha. "Identification du système d'acquisition d'images médicales à partir d'analyse du bruit." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT341/document.

Abstract:
Medical image processing aims to help doctors improve the diagnostic process. A computed tomography (CT) scanner is a medical imaging device used to create cross-sectional 3D images of any part of the human body. Today, it is very important to secure medical images during their transmission, storage, visualization, and sharing between specialists. In image forensics, for example, a current challenge is to identify the acquisition system from digital images alone. In this thesis, we present one of the first analyses of the CT-scanner identification problem. Building on camera identification methods, we propose a solution based on extracting a sensor noise fingerprint of the CT-scanner device; the objective is then to detect its presence in any new tested image. To extract the noise, we use a wavelet-based Wiener denoising filter. We then exploit the properties of medical images to propose advanced solutions for CT-scanner identification, based on new conceptions of the device fingerprint: a three-dimensional fingerprint and a three-layer one defined over bone, tissue, and air. To validate our work, we ran experiments on real images acquired with multiple CT-scanner devices. Our methods are robust and give high identification accuracy: we are able to identify both the CT scanner that acquired a 3D image and the axis along which the acquisition was performed.
3

Tsai, Shu-Jen Steven. "Study of Global Power System Frequency Behavior Based on Simulations and FNET Measurements." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28303.

Abstract:
A global view of power system frequency opens a new window into the "world" of large-system dynamics. With the aid of the global positioning system (GPS), measurements from different locations can be time-synchronized, making system-wide observation and analysis possible. As part of the U.S. nation-wide power frequency monitoring network project (FNET), the first part of this study uses system simulation to assess the frequency measurement accuracy needed to observe frequency oscillations from events such as remote generation drops in three U.S. power systems. Electromechanical wave propagation phenomena during system disturbances, such as generation trips, load rejection, and line opening, are observed and discussed. Uniform system models are then developed to investigate the detailed behavior of wave propagation, and a visualization tool is developed to help view frequency-behavior simulations. Frequency replay from simulation data provides insight into how these electromechanical frequency waves propagate when major events occur. The speeds of electromechanical wave propagation in different areas of the U.S. systems, as well as in the uniform models, are estimated and their characteristics discussed. A theoretical derivation relating generator mechanical power to bus frequencies is provided, and the delayed frequency response is illustrated. Field-measured frequency data from FNET are also examined: outlier removal and wavelet-based denoising are applied to filter spikes and noise from the measured frequency data. Frequency statistics of the three major U.S. power grids are investigated. A comparison between data from a phasor measurement unit (PMU) at a high-voltage substation and FNET data taken from 110 V outlets at the distribution level shows close tracking between the two.
Several generator trip events in the Eastern Interconnection and the Western Electricity Coordinating Council system are recorded and their frequency patterns analyzed. Our trigger program can detect noticeable frequency drops or rises; sample results over a 13-month period are shown. In addition to transient states, quasi-steady-state behavior, such as oscillations, can also be observed by FNET. Several potential applications of FNET in monitoring and analysis, system control, model validation, and other areas are discussed. Some applications of FNET are still beyond our imagination.
Ph. D.
4

Risi, Stefano. "Un metodo automatico per la ricostruzione di immagini astronomiche." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14128/.

Abstract:
The reconstruction of a digital signal is a process with applications in many scientific and technological fields, such as astronomy, biomedicine, geophysics, and even music. Its task is to clean the signal of the interference that degraded it and to recover it in the form emitted by the source. This work deals with the reconstruction of digital astronomical images, using a variational regularization model based on the total variation function. The goal is to specialize an existing algorithm, the Constrained Least Squares Total Variation, to obtain a version that retains some characteristics of the original, such as the automatic search for the regularization parameter, but gives better results in the reconstruction of astronomical images. Chapter 1 defines the concept of an image and how it is acquired by a digital device; it explains how blurring and noise arise and presents the boundary conditions most commonly used in image reconstruction. Chapter 2 describes the ill-posed problems to which image reconstruction belongs and reviews the main methods for regularizing a digital signal, with particular attention to the total variation function as a penalty function. Chapter 3 is devoted to the two methods used, Constrained Least Squares Total Variation and Constrained Kullback-Leibler Total Variation, automatic regularization methods that use the total variation function as a penalty. Chapter 4 presents the numerical results obtained by applying the CKLTV method to the reconstruction of test images and compares them with those obtained with the CLSTV method.
5

Kim, Jong-Hoon. "Compressed sensing and finite rate of innovation for efficient data acquisition of quantitative acoustic microscopy images." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30225.

Abstract:
Quantitative acoustic microscopy (QAM) is a well-established modality for forming 2D parameter maps of the mechanical properties of soft tissues at microscopic scales. In leading-edge QAM studies, the sample is raster-scanned (spatial step size of 2 µm) using a 250 MHz transducer, resulting in a 3D RF data cube; the RF signal at each spatial location is processed to obtain acoustic parameters, e.g., speed of sound or acoustic impedance. The scanning time depends directly on the sample size and can range from a few minutes to tens of minutes. Since stable experimental conditions must be maintained for the sensitive thin-sectioned samples, reducing the scanning time is an important practical issue. To address this challenge, we propose a novel approach inspired by compressed sensing (CS) and finite rate of innovation (FRI). The success of CS relies on the sparsity of the data under consideration, incoherent measurement, and numerical optimization, while FRI rests on a signal model fully characterized by a limited number of parameters. Given the physics of QAM data acquisition, QAM data are well suited to these frameworks. However, the mechanical structure of the QAM system does not support canonical CS measurement schemes, and the composition of the RF signal model does not fit existing FRI schemes, so these frameworks cannot be applied off the shelf. In this thesis, to overcome these limitations, a novel CS sensing framework is presented in the spatial domain: a recently proposed approximate message passing (AMP) algorithm is adapted to account for the statistics of samples sparsely collected by the proposed scanning patterns. In the time domain, to achieve accurate recovery from a small set of samples of QAM RF signals, we employ a sum-of-sincs (SoS) sampling kernel and an autoregressive (AR) model estimator. A spiral scanning scheme, introduced as a sensing technique applicable to the QAM system, significantly reduces the number of spatial samples needed to reconstruct speed-of-sound images of a human lymph node. [...]
6

Contato, Welinton Andrey. "Análise e restauração de vídeos de Microscopia Eletrônica de Baixa Energia." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-04012017-143212/.

Abstract:
A Microscopia Eletrônica de Baixa Energia (LEEM) é uma recente e poderosa modalidade para o estudo de superfície passível de uma grande quantidade de degradações, como ruídos e borramento. Ainda incipiente na literatura, este trabalho visou a análise e identificação das fontes de degradações presentes em vídeos, além da utilização de um conjunto de técnicas de remoção de ruído e borramento para a restauração de dados LEEM. Além disso, foram desenvolvidas duas novas técnicas de filtragem de vídeo como intuito de preservar detalhes pequenos e texturas presentes. Na etapa de análise foi constatado que as imagens LEEM possuem uma grande quantidade e variedade de ruídos, sendo o Gaussiano o mais preponderante. Foi também estimada a Função de Espalhamento de Ponto (PSF) do microscópio utilizado, visando o emprego de técnicas de redução de borramento. Este trabalho também analisou a combinação de técnicas de redução de borramento com as técnicas de filtragem do ruído Gaussiano existente. Foi constatado que as técnicas não locais, como Non-Local Means (NLM) eBlock-Matching 3-D (BM3D), proveem uma maior capacidade de filtragem das imagens LEEM, preservando descontinuidades. Ainda nesta análise, identificou-se que algumas técnicas de redução de borramento não são efetivas em imagens LEEM, exceto a técnica Richardson-Lucy (RL) que suprimiu grande parte do borramento sem adicionar mais degradação. A indesejável remoção de pequenas estruturas e texturas pelas técnicas de filtragem existentes motivou o desenvolvimento de duas novas técnicas de filtragem de ruído Gaussiano (NLM3D-LBP-MSB eNLM3D-LBP-Adaptive) que mostraram resultados superiores para filtragem de imagens com grande quantidade de textura. Porém, em imagens com muitas regiões homogêneas o BM3D foi superior. Avaliações quantitativas foram realizadas sobre imagens artificiais. 
Em imagens LEEM reais, realizou-se um experimento qualitativo em que observadores avaliaram visualmente o resultado de restaurações por diversas técnicas existentes e as propostas neste trabalho. O experimento comprovou que os métodos de filtragem não locais foram superiores, principalmente quando combinados com o método RL. Os métodos propostos produziram bons resultados, entretanto, inferiores aos exibidos pelas técnicas NLM eBM3D. Este trabalho demonstrou que as técnicas de filtragem não locais são as mais adequadas para dados LEEM. Além disso, a técnica RL mostrou-se eficaz na redução de borramento.
Low Energy Electronic Microscopy (LEEM) is a recent and powerful surface science image modality prone to considerable amounts of degradations, such as noise and blurring. Still not fully addressed in the literature, this worked aimed at analysing and identifying the sources of degradation in LEEM videos, as well as the adequacy of existing noise reduction and deblurring techniques for LEEM data. This work also presented two new noise reduction techniques aimed at preserving texture and small details. Our analysis has revealed that LEEM images exhibit a large amount and variety of noises, with Gaussian noise being the most frequent. To handle the deblurring issue, the Point Spread Function (PSF) for the microscopeused in the experiments has also been estimated. This work has also studied the combination of deblurring and denoising techniques for Gaussian noise. Results have shown that non-local techniques such as Non-Local Means (NLM) and Block-Matching 3-D (BM3D) are more adequate for filtering LEEM images, while preserving discontinuities. We have also shown that some deblurring techniques are not suitable for LEEM images, except the RichardsonLucy (RL) approach which coped with most of the blur without the addition of extra degradation. The undesirable removal of small structures and texture by the existing denoising techniques encouraged the development of two novel Gaussian denoising techniques (NLM3D-LBP-MSB and NLM3D-LBP-Adaptive) which exhibited good results for images with a large amount of texture. However, BM3D was superior for images with large homogeneous regions. Quantitative experiments have been carried out for synthetic images. For real LEEM images, a qualitative analysis has been conducted in which observers visually assessed restoration results for existing techniques and also the two proposed ones. This experiment has shown that non-local denoising methodswere superior, especially when combined with theRL method. 
The proposed methods produced good results, but were outperformed by NLM and BM3D. This work has shown that non-local denoising techniques are more adequate for LEEM data. Also, the RL technique is very efficient for deblurring purposes.
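The Richardson-Lucy deconvolution that the thesis found most effective for LEEM blur admits a very compact implementation. Below is a minimal NumPy sketch (ours, not the thesis's code), assuming a known full-size PSF that is centred at the origin, sums to 1, and circular boundary handling:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50, eps=1e-12):
    """Richardson-Lucy deconvolution with circular boundaries.
    `psf` is a full-size kernel (same shape as the image), centred at
    the origin and summing to 1; `blurred` must be non-negative."""
    otf = np.fft.fft2(psf)
    conv = lambda img, f: np.real(np.fft.ifft2(np.fft.fft2(img) * f))
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = conv(estimate, otf)
        ratio = blurred / np.maximum(reblurred, eps)
        # Correlating with the PSF (conjugate OTF) is the adjoint
        # back-projection step of the RL update.
        estimate = estimate * conv(ratio, np.conj(otf))
    return estimate
```

Each iteration multiplies the current estimate by the back-projected ratio of observed to re-blurred data, so non-negativity of the estimate is preserved automatically.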
APA, Harvard, Vancouver, ISO, and other styles
7

Gavaskar, Ruturaj G. "On Plug-and-Play Regularization using Linear Denoisers." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5973.

Full text
Abstract:
The problem of inverting a given measurement model comes up in several computational imaging applications. For example, in CT and MRI, we are required to reconstruct a high-resolution image from incomplete noisy measurements, whereas in superresolution and deblurring, we try to infer the ground-truth from low-resolution or blurred images. Traditionally, this is done by minimizing $f + \phi$, where $f$ is a data-fidelity (or loss) function that is determined by the acquisition process, and $\phi$ is a regularization (or penalty) function that is based on a subjective prior on the target image. The solution is obtained numerically using iterative algorithms such as ISTA or ADMM. While several forms of regularization and associated optimization methods have been proposed in the imaging literature of the last few decades, the use of denoisers (aka denoising priors) for image regularization is a relatively recent phenomenon. This has partly been triggered by advances in image denoising in the last 20 years, leading to the development of powerful image denoisers such as BM3D and DnCNN. In this thesis, we look at a recent protocol called Plug-and-Play (PnP) regularization, where image denoisers are deployed within iterative algorithms for image regularization. PnP consists of replacing the proximal map --- an analytical operator at the core of ISTA and ADMM --- associated with the regularizer $\phi$ with an image denoiser. This is motivated by the intuition that off-the-shelf denoisers such as BM3D and DnCNN offer better image priors than traditional hand-crafted regularizers such as total variation. While PnP does not use an explicit regularizer, it still makes use of the data-fidelity function $f$. However, since the replacement of the proximal map with a denoiser is ad-hoc, the optimization perspective is lost --- it is not clear if the PnP iterations can be interpreted as optimizing some objective function $f + \phi$. 
Remarkably, PnP reconstructions are of high quality and competitive with state-of-the-art methods. Following this, researchers have tried explaining why plugging a denoiser within an inversion algorithm should work in the first place, why it produces high-quality images, and whether the final reconstruction is optimal in some sense. In this thesis, we try answering such questions, some of which have been the topic of active research in the imaging community in recent years. Specifically, we consider the following questions.
--> Fixed-point convergence: Under what conditions does the sequence of iterates generated by a PnP algorithm converge? Moreover, are these conditions met by existing real-world denoisers?
--> Optimality and objective convergence: Can we interpret PnP as an algorithm that minimizes $f + \phi$ for some appropriate $\phi$? Moreover, does the algorithm converge to a solution of this objective function?
--> Exact and robust recovery: Under what conditions can we recover the ground-truth exactly via PnP? And is the reconstruction robust to noise in the measurements?
While early work on PnP has attempted to answer some of these questions, many of the underlying assumptions are either strong or unverifiable. This is essentially because denoisers such as BM3D and DnCNN are mathematically complex, nonlinear and difficult to characterize. A first step in understanding complex nonlinear phenomena is often to develop an understanding of some linear approximation. In this spirit, we focus our attention on denoisers that are linear. In fact, there exists a broad class of real-world denoisers that are linear and whose performance is quite decent; examples include kernel filters (e.g. NLM, bilateral filter) and their symmetrized counterparts. This class has a simple characterization that helps to keep the analysis tractable and the assumptions verifiable.
Our main contributions lie in resolving the aforementioned questions for PnP algorithms where the plugged denoiser belongs to this class. We summarize them below.
--> We prove fixed-point convergence of the PnP version of ISTA under mild assumptions on the measurement model.
--> Based on the theory of proximal maps, we prove that a PnP algorithm in fact minimizes a convex objective function $f + \phi$, subject to some algorithmic modifications that arise from the algebraic properties of the denoiser. Notably, unlike previous results, our analysis applies to non-symmetric linear filters.
--> Under certain verifiable assumptions, we prove that a signal can be recovered exactly (resp. robustly) from clean (resp. noisy) measurements using PnP regularization. As a more profound application, in the spirit of classical compressed sensing, we are able to derive probabilistic guarantees on exact and robust recovery for the compressed sensing problem where the sensing matrix is random. An implication of our analysis is that the range of the linear denoiser plays the role of a signal prior and its dimension essentially controls the size of the set of recoverable signals. In particular, we are able to derive the sample complexity of compressed sensing as a function of distortion error and success rate.
We validate our theoretical findings numerically, discuss their implications and mention possible future research directions.
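The PnP protocol described in this abstract is essentially a one-line change to ISTA: the proximal step is replaced by a denoiser call. The sketch below illustrates the idea with a toy linear moving-average denoiser standing in for kernel filters such as NLM; the function names and the denoiser choice are ours, not the thesis's:

```python
import numpy as np

def moving_average_denoiser(x):
    """Toy linear denoiser: 3-tap moving average with replicated ends
    (an illustrative stand-in for kernel filters such as NLM)."""
    padded = np.concatenate([x[:1], x, x[-1:]])
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def pnp_ista(A, y, denoiser, step, iterations=200):
    """PnP-ISTA: a gradient step on f(x) = 0.5*||Ax - y||^2,
    followed by the plugged-in denoiser instead of a proximal map."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = denoiser(x - step * (A.T @ (A @ x - y)))
    return x
```

For a nonexpansive linear denoiser and a small enough step size, the iteration above is a contraction and converges to a fixed point; characterizing exactly when this happens for real-world denoisers is what the thesis studies.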
APA, Harvard, Vancouver, ISO, and other styles
8

Nair, Pravin. "Provably Convergent Algorithms for Denoiser-Driven Image Regularization." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5887.

Full text
Abstract:
Some fundamental reconstruction tasks in image processing can be posed as an inverse problem where we are required to invert a given forward model. For example, in deblurring and superresolution, the ground-truth image needs to be estimated from blurred and low-resolution images, whereas in CT and MR imaging, a high-resolution image must be reconstructed from a few linear measurements. Such inverse problems are invariably ill-posed: they exhibit non-unique solutions and the process of direct inversion is unstable. Some form of image model (or prior) on the ground truth is required to regularize the inversion process. For example, a classical solution involves minimizing f + g, where the loss term f is derived from the forward model and the regularizer g is used to constrain the search space. The challenge is to come up with a formula for g that can yield good image reconstructions. This has been the center of research activity in image reconstruction for the last few decades. "Regularization using denoising" is a recent breakthrough in which a powerful denoiser is used for regularization purposes, instead of having to specify some hand-crafted g (but the loss f is still used). This has been empirically shown to yield significantly better results than staple f + g minimization. In fact, the results are generally comparable and often superior to state-of-the-art deep learning methods. In this thesis, we consider two such popular models for image regularization: Plug-and-Play (PnP) and Regularization by Denoising (RED). In particular, we focus on the convergence aspect of these iterative algorithms, which is not well understood even for simple denoisers. This is important since the lack of a convergence guarantee can result in spurious reconstructions in imaging applications. The contributions of the thesis in this regard are as follows.
PnP with linear denoisers: We show that for a class of non-symmetric linear denoisers that includes kernel denoisers such as nonlocal means, one can associate a convex regularizer g with the denoiser. More precisely, we show that any such linear denoiser can be expressed as the proximal operator of a convex function, provided we work with a non-standard inner product (instead of the Euclidean inner product). In particular, the regularizer is quadratic, but unlike classical quadratic regularizers, the quadratic form is derived from the observed data. A direct implication of this observation is that (a simple variant of) the PnP algorithm based on this linear denoiser amounts to solving an optimization problem of the form f + g, though it was not originally conceived this way. Consequently, if f is convex, both objective and iterate convergence are guaranteed for the PnP algorithm. Apart from the convergence guarantee, we go on to show that this observation has algorithmic value as well. For example, in the case of linear inverse problems such as superresolution, deblurring and inpainting (where f is quadratic), we can reduce the problem of minimizing f + g to a linear system. In particular, we show how, using Krylov solvers, we can solve this system efficiently in just a few iterations. Surprisingly, the reconstructions are found to be comparable with state-of-the-art deep learning methods. To the best of our knowledge, the possibility of achieving near state-of-the-art image reconstructions using a linear solver has not been demonstrated before. PnP and RED with learning-based denoisers: In general, state-of-the-art PnP and RED algorithms rely on trained CNN denoisers such as DnCNN. Unlike linear denoisers, it is difficult to place PnP and RED algorithms within an optimization framework in the case of CNN denoisers. Nonetheless, we can still try to understand the convergence of the sequence of iterates generated by these algorithms.
For a convex loss f, we show that this question can be resolved using the theory of monotone operators: the denoiser being averaged (a subclass of nonexpansive operators) is sufficient for iterate convergence of PnP and RED. Using numerical examples, we show that existing CNN denoisers are not nonexpansive and can cause PnP and RED algorithms to diverge. Can we train denoisers that are provably nonexpansive? Unfortunately, this is computationally challenging: simply checking nonexpansivity of a CNN is known to be intractable. As a result, existing algorithms for training nonexpansive CNNs either cannot guarantee nonexpansivity or are computationally intensive. We show that this problem can be solved by moving away from CNN denoisers to unfolded deep denoisers. In particular, we are able to construct unfolded networks that are efficiently trainable, come with convergence guarantees for PnP and RED algorithms, and whose regularization capacity can be matched with CNN denoisers. To our knowledge, we are the first to propose a simple framework for training provably averaged (contractive) denoisers using unfolding networks. We provide numerical results to validate our theoretical results and compare our algorithms with state-of-the-art regularization techniques. We also point out some future research directions stemming from the thesis.
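The linear-denoiser result in this abstract can be checked numerically in the symmetric special case: a symmetric positive definite linear denoiser W is the proximal map of the quadratic regularizer g(x) = 0.5 x^T (W^{-1} - I) x (the thesis extends this to non-symmetric kernel filters via a non-standard inner product). A small sketch with a diffusion-type smoother; the construction is illustrative, not taken from the thesis:

```python
import numpy as np

def diffusion_denoiser_matrix(n, t=0.25):
    """Symmetric linear smoother W = I - t*L, where L is the path-graph
    Laplacian. For t in (0, 0.25], W is symmetric positive definite with
    eigenvalues in (0, 1], hence a nonexpansive denoiser."""
    L = 2.0 * np.eye(n)
    L[0, 0] = L[-1, -1] = 1.0
    i = np.arange(n - 1)
    L[i, i + 1] = L[i + 1, i] = -1.0
    return np.eye(n) - t * L

n = 32
W = diffusion_denoiser_matrix(n)
G = np.linalg.inv(W) - np.eye(n)    # regularizer g(x) = 0.5 x^T G x
y = np.random.default_rng(1).standard_normal(n)
# prox_g(y) = argmin_x 0.5*||x - y||^2 + g(x) = (I + G)^{-1} y
prox_y = np.linalg.solve(np.eye(n) + G, y)
assert np.allclose(prox_y, W @ y)   # the denoiser IS a proximal map
```

Here I + G = W^{-1}, so applying the proximal map is exactly applying the denoiser, which is why PnP with such a filter minimizes a genuine objective f + g.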
APA, Harvard, Vancouver, ISO, and other styles
9

Yuan, Ming-pin, and 袁鳴彬. "Adaptive DeNoise Filter." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/61007015843867316761.

Full text
Abstract:
Master's thesis
Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 98 (2009)
In the world of multimedia, digital images are often degraded by noise introduced by transmission errors and image acquisition and storage devices. Human visual perception and recognition are therefore severely affected. In order to stabilize image preprocessing systems, the removal of such noise is an important issue in this application area. In this thesis, we propose a method that adapts to a wide range of noise densities, called the Adaptive DeNoise Filter (AND). The main focus of this work is to remove salt-and-pepper noise to obtain better image quality. The denoising filter operates in two steps: the first step uses the Lifting-based Discrete Wavelet Transform (LDWT) and an Adaptive Median Filter (AMF) to estimate the noise density for the next step. In the second stage, we use adaptive-search-window median filtering to remove noise using the noise information obtained in the first step. Based on this algorithm, we can use the noise density to adjust window sizes and achieve better image restoration performance. Experimental results show that the proposed method can recover the test image for noise densities from 5% to 90%. The average PSNR of 25.05 dB satisfies the sensitivity of human visual perception.
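The window-growing idea behind the AMF stage can be sketched as follows. This is a generic adaptive median filter, not the thesis's exact AND/LDWT pipeline:

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Generic adaptive median filter for salt-and-pepper noise: grow the
    window at each pixel until its median is not an extreme value, then
    replace the pixel only if it looks like an impulse."""
    h, w = img.shape
    out = img.copy()
    for i in range(h):
        for j in range(w):
            size = 3
            while size <= max_window:
                r = size // 2
                win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                med = np.median(win)
                if win.min() < med < win.max():
                    if not (win.min() < img[i, j] < win.max()):
                        out[i, j] = med    # centre pixel is an impulse
                    break
                size += 2                  # median itself is an impulse: grow
            else:
                out[i, j] = med            # window limit hit: use last median
    return out
```

Growing the window keeps small structures intact at low noise densities while still finding a valid median at high densities, which is the property the thesis exploits to cover the 5%-90% range.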
APA, Harvard, Vancouver, ISO, and other styles
10

"Image cosegmentation and denoise." 2012. http://library.cuhk.edu.hk/record=b5549125.

Full text
Abstract:
We present two novel methods to tackle low-level computer vision tasks, i.e., image cosegmentation and image denoising.
In our cosegmentation model, we find that object correspondence can provide useful information for foreground statistical estimation. Our method can handle extremely challenging scenarios such as deformation, perspective changes and dramatically different viewpoints/scales. In addition, we develop a novel energy minimization model that can handle multiple images. Experiments on real and benchmark data qualitatively and quantitatively demonstrate the effectiveness of the approach.
On the other hand, noise is always tightly coupled with high-frequency image structure, making noise reduction generally very difficult. In our denoising model, we propose slightly optically defocusing the image in order to loosen this noise-image structure coupling. This allows us to more effectively reduce noise and subsequently restore the small defocus. We analytically show how this is possible, and demonstrate our technique on a number of examples that include low-light images.
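The "restore the small defocus" step in a pipeline like this is a mild deconvolution. Below is a minimal frequency-domain Wiener filter sketch, assuming the defocus PSF is known; it is illustrative only and not the thesis's estimation procedure:

```python
import numpy as np

def wiener_deconvolve(observed, psf, nsr=1e-2):
    """Wiener deconvolution with a known PSF (full image size, centred at
    the origin). `nsr` is the assumed noise-to-signal power ratio; larger
    values suppress noise harder at the cost of residual blur."""
    H = np.fft.fft2(psf)
    restoration_filter = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(observed) * restoration_filter))
```

The `nsr` term is what makes the inversion stable: frequencies where the defocus transfer function is weak are attenuated instead of being blown up along with the noise.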
Detailed summary in vernacular field only.
Qin, Zenglu.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 64-71).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
Abstract --- p.i
Acknowledgement --- p.ii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation and Objectives --- p.1
Chapter 1.1.1 --- Cosegmentation --- p.1
Chapter 1.1.2 --- Image Denoise --- p.4
Chapter 1.2 --- Thesis Outline --- p.7
Chapter 2 --- Background --- p.8
Chapter 2.1 --- Cosegmentation --- p.8
Chapter 2.2 --- Image Denoise --- p.10
Chapter 3 --- Cosegmentation of Multiple Deformable Objects --- p.12
Chapter 3.1 --- Related Work --- p.12
Chapter 3.2 --- Object Corresponding Cosegmentation --- p.13
Chapter 3.3 --- Importance Map with Object Correspondence --- p.15
Chapter 3.3.1 --- Feature Importance Map --- p.16
Chapter 3.3.2 --- Importance Energy E_i(x_p) --- p.20
Chapter 3.4 --- Experimental Result --- p.20
Chapter 3.4.1 --- Two-Image Cosegmentation --- p.21
Chapter 3.4.2 --- ETHZ Toys Dataset --- p.22
Chapter 3.4.3 --- More Results --- p.24
Chapter 3.5 --- Summary --- p.27
Chapter 4 --- Using Optical Defocus to Denoise --- p.28
Chapter 4.1 --- Related Work --- p.29
Chapter 4.2 --- Noise Analysis --- p.30
Chapter 4.3 --- Noise Estimation with Focal Blur --- p.33
Chapter 4.3.1 --- Noise Estimation with a Convolution Model --- p.33
Chapter 4.3.2 --- Determining λ --- p.41
Chapter 4.4 --- Final Deconvolution and Error Analysis --- p.43
Chapter 4.5 --- Implementation --- p.45
Chapter 4.6 --- Quantitative Evaluation --- p.47
Chapter 4.7 --- More Experimental Results --- p.53
Chapter 4.8 --- Summary --- p.56
Chapter 5 --- Conclusion --- p.62
Bibliography --- p.64
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Denoisers"

1

Behringer, Uli. Denoiser: The audio interactive noise reduction system, Model SNR 2000. 2nd ed. Willich-Münchheide: Behringer Spezielle Studiotechnik, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Peter Junteng. Using Gaussian process regression to denoise images and remove artefacts from microarray data. 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Denoisers"

1

Stergiopoulou, Vasiliki, Subhadip Mukherjee, Luca Calatroni, and Laure Blanc-Féraud. "Fluctuation-Based Deconvolution in Fluorescence Microscopy Using Plug-and-Play Denoisers." In Lecture Notes in Computer Science, 498–510. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-31975-4_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhdan, Dmitry. "ReBLUR: A Hierarchical Recurrent Denoiser." In Ray Tracing Gems II, 823–44. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7185-8_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gimel’farb, Georgy. "Adaptive Context for a Discrete Universal Denoiser." In Lecture Notes in Computer Science, 477–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27868-9_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cha, Sungmin, and Taesup Moon. "UDLR Convolutional Network for Adaptive Image Denoiser." In Robot Intelligence Technology and Applications, 55–61. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7780-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Repala, Hari Kishan, Aneeta Christopher, and P. V. Sudeep. "Blind Image Restoration with CNN Denoiser Prior." In Proceedings of International Conference on Data Science and Applications, 737–48. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5348-3_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Buchholz, Tim-Oliver, Mangal Prakash, Deborah Schmidt, Alexander Krull, and Florian Jug. "DenoiSeg: Joint Denoising and Segmentation." In Computer Vision – ECCV 2020 Workshops, 324–37. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dogra, Manmohan, Saumya Borwankar, and Jayashree Domala. "Noise Removal from Audio Using CNN and Denoiser." In Advances in Speech and Music Technology, 37–48. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6881-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gao, Yushu, Lin Zhu, Hao-Dong Zhu, Yong Gan, and Li Shang. "Extract Features Using Stacked Denoised Autoencoder." In Intelligent Computing in Bioinformatics, 10–14. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09330-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Mengke, Dongqing Li, Shaohua Wu, and Qinyu Zhang. "Feature-Aware Adaptive Denoiser-Selection for Compressed Image Reconstruction." In Lecture Notes in Electrical Engineering, 537–45. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0187-6_64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Gurprem, Ajay Mittal, and Naveen Aggarwal. "Deep Convolution Neural Network Based Denoiser for Mammographic Images." In Communications in Computer and Information Science, 177–87. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9939-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Denoisers"

1

Bled, Clement, and Francois Pitie. "Assessing Advances in Real Noise Image Denoisers." In CVMP '22: European Conference on Visual Media Production. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3565516.3565524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Shabili, Abdullah H., Hassan Mansour, and Petros T. Boufounos. "Learning Plug-And-Play Proximal Quasi-Newton Denoisers." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Yanghao, Bichuan Guo, Jiangtao Wen, Zhen Xia, Shan Liu, and Yuxing Han. "Learning Model-Blind Temporal Denoisers without Ground Truths." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Yuxiang, Bo Zhang, and Raoul Florent. "Understanding neural-network denoisers through an activation function perspective." In 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. http://dx.doi.org/10.1109/icip.2017.8296827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Reddy K., Pavan Kumar, and Kunal N. Chaudhury. "Learning Iteration-Dependent Denoisers for Model-Consistent Compressive Sensing." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bigdeli, Siavash, David Honzátko, Sabine Süsstrunk, and L. Dunbar. "Image Restoration using Plug-and-Play CNN MAP Denoisers." In 15th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2020. http://dx.doi.org/10.5220/0008990700850092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rio, Jules, Olivier Alata, Fabien Momey, and Christophe Ducottet. "Leveraging end-to-end denoisers for denoising periodic signals." In 2021 29th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco54536.2021.9615932.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ordentlich, Erik. "Denoising as well as the best of any two denoisers." In 2013 IEEE International Symposium on Information Theory (ISIT). IEEE, 2013. http://dx.doi.org/10.1109/isit.2013.6620452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Talegaonkar, Chinmay, and Ajit Rajwade. "PERFORMANCE BOUNDS FOR TRACTABLE POISSON DENOISERS WITH PRINCIPLED PARAMETER TUNING." In 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2018. http://dx.doi.org/10.1109/globalsip.2018.8646382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yan, Hanshu, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, and Vincent Y. F. Tan. "Towards Adversarially Robust Deep Image Denoising." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/211.

Full text
Abstract:
This work systematically investigates the adversarial robustness of deep image denoisers (DIDs), i.e., how well DIDs can recover the ground truth from noisy observations degraded by adversarial perturbations. Firstly, to evaluate DIDs' robustness, we propose a novel adversarial attack, namely the Observation-based Zero-mean Attack (OBSATK), to craft adversarial zero-mean perturbations on given noisy images. We find that existing DIDs are vulnerable to the adversarial noise generated by OBSATK. Secondly, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training (HAT), that jointly trains DIDs with adversarial and non-adversarial noisy data to ensure that the reconstruction quality is high and the denoisers are locally smooth around non-adversarial data. The resultant DIDs can effectively remove various types of synthetic and adversarial noise. We also uncover that the robustness of DIDs benefits their generalization capability on unseen real-world noise. Indeed, HAT-trained DIDs can recover high-quality clean images from real-world noise even without training on real noisy data. Extensive experiments on benchmark datasets, including Set68, PolyU, and SIDD, corroborate the effectiveness of OBSATK and HAT.
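The zero-mean constraint at the heart of OBSATK can be emulated with a simple projection. The random-search "attack" below is only a crude, illustrative stand-in for the paper's gradient-based method; the function names and parameters are ours:

```python
import numpy as np

def zero_mean_random_attack(denoiser, y, x_gt, eps=0.05, trials=200, seed=0):
    """Search over zero-mean, eps-bounded perturbations of the noisy
    observation y for the one that most degrades the denoiser's output
    (an illustrative stand-in for a gradient-based zero-mean attack)."""
    rng = np.random.default_rng(seed)
    best_delta, best_err = np.zeros_like(y), -np.inf
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=y.shape)
        delta -= delta.mean()              # enforce the zero-mean constraint
        delta = np.clip(delta, -eps, eps)  # re-impose the bound (mean stays ~0)
        err = np.mean((denoiser(y + delta) - x_gt) ** 2)
        if err > best_err:
            best_delta, best_err = delta, err
    return best_delta, best_err
```

Keeping the perturbation zero-mean matters because it leaves the overall noise level of the observation unchanged, so the attack probes the denoiser's structure rather than simply adding more noise.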
APA, Harvard, Vancouver, ISO, and other styles