Dissertations / Theses on the topic 'Denoising'

Consult the top 50 dissertations / theses for your research on the topic 'Denoising.'

1

Kan, Hasan E. "Bootstrap based signal denoising." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://bosun.nps.edu/uhtbin/hyperion.exe/02Sep%5FKan.pdf.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2002.
Thesis Advisor(s): Monique P. Fargues, Ralph D. Hippenstiel. "September 2002." Includes bibliographical references (p. 89-90). Also available in print.
2

NIBHANUPUDI, SWATHI. "SIGNAL DENOISING USING WAVELETS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1070577417.

3

Ehret, Thibaud. "Video denoising and applications." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN018.

Abstract:
This thesis studies the problem of video denoising. In the first part we focus on patch-based video denoising methods. We study in detail VBM3D, a popular video denoising method, to understand the mechanisms behind its success. We also present a real-time GPU implementation of this method. We then study the impact of patch search in video denoising, and in particular how searching for similar patches in the entire video, a global patch search, improves the denoising quality. Finally, we propose a novel causal and recursive method called NL-Kalman that produces very good temporal consistency. In the second part, we look at the newer trend of deep learning for image and video denoising. We present one of the first neural network architectures, using temporal self-similarity, competitive with state-of-the-art patch-based video denoising methods. We also show that deep learning offers new opportunities. In particular, it allows for denoising without knowing the noise model. We propose a framework that allows denoising of videos that have been through an unknown processing pipeline. We then look at the case of mosaicked data. In particular, we show that deep learning is undeniably superior to previous approaches for demosaicking. We also propose a novel training process for demosaicking without ground truth, based on multiple raw acquisitions. This allows training for real-case applications. In the third part we present different applications taking advantage of mechanisms similar to those studied for denoising. The first problem studied is anomaly detection. We show that this problem can be reduced to detecting anomalies in noise. We also look at forgery detection, and in particular copy-paste forgeries. Just like patch-based denoising, solving this problem requires searching for similar patches. For that, we do an in-depth study of PatchMatch and see how it can be used for detecting forgeries. We also present an efficient method based on sparse patch matching.
4

Kan, Hasan Ertam. "Bootstrap based signal denoising." Thesis, Monterey, California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/2883.

Abstract:
Approved for public release, distribution is unlimited
"This work accomplishes signal denoising using the Bootstrap method when the additive noise is Gaussian. The noisy signal is separated into frequency bands using the Fourier or Wavelet transform. Each frequency band is tested for Gaussianity by evaluating the kurtosis. The Bootstrap method is used to increase the reliability of the kurtosis estimate. Noise effects are minimized using a hard or soft thresholding scheme on the frequency bands that were estimated to be Gaussian. The recovered signal is obtained by applying the appropriate inverse transform to the modified frequency bands. The denoising scheme is tested using three test signals. Results show that FFT-based denoising schemes perform better than WT-based denoising schemes on the stationary sinusoidal signals, whereas WT-based schemes outperform FFT-based schemes on chirp type signals. Results also show that hard thresholding never outperforms soft thresholding, at best its performance is similar to soft thresholding."--p.i.
First Lieutenant, Turkish Army
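As a rough illustration of the pipeline this abstract describes (not the author's code), the sketch below splits a signal into FFT bands, flags bands whose bootstrapped kurtosis looks Gaussian, and soft-thresholds only those. The band width, bootstrap count and threshold rule are arbitrary choices made for the example.

    import numpy as np

    def bootstrap_kurtosis(x, n_boot=200, rng=None):
        """Average excess kurtosis of x over bootstrap resamples."""
        rng = np.random.default_rng() if rng is None else rng
        ks = []
        for _ in range(n_boot):
            s = rng.choice(x, size=x.size, replace=True)
            ks.append(((s - s.mean()) ** 4).mean() / s.std() ** 4 - 3.0)
        return float(np.mean(ks))

    def denoise_fft_bands(y, band=32, kurt_tol=0.5):
        """Soft-threshold FFT bands whose kurtosis is near Gaussian (zero)."""
        Y = np.fft.rfft(y)
        for i in range(0, Y.size, band):
            seg = Y[i:i + band]                               # view into Y
            if abs(bootstrap_kurtosis(seg.real)) < kurt_tol:  # noise-like band
                lam = np.median(np.abs(seg))                  # ad hoc threshold
                seg *= np.maximum(1 - lam / np.maximum(np.abs(seg), 1e-12), 0.0)
        return np.fft.irfft(Y, n=y.size)

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1024)
    noisy = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
    clean = denoise_fft_bands(noisy)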
5

Gaspar, John M. "Denoising amplicon-based metagenomic data." Thesis, University of New Hampshire, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3581214.

Abstract:

Reducing the effects of sequencing errors and PCR artifacts has emerged as an essential component in amplicon-based metagenomic studies. Denoising algorithms have been written that can reduce error rates in mock community data, in which the true sequences are known, but they were designed to be used in studies of real communities. To evaluate the outcome of the denoising process, we developed methods that do not rely on a priori knowledge of the correct sequences, and we applied these methods to a real-world dataset. We found that the denoising algorithms had substantial negative side-effects on the sequence data. For example, in the most widely used denoising pipeline, AmpliconNoise, the algorithm that was designed to remove pyrosequencing errors changed the reads in a manner inconsistent with the known spectrum of these errors, until one of the parameters was increased substantially from its default value.

With these shortcomings in mind, we developed a novel denoising program, FlowClus. FlowClus uses a systematic approach to filter and denoise reads efficiently. When denoising real datasets, FlowClus provides feedback about the process that can be used as the basis to adjust the parameters of the algorithm to suit the particular dataset. FlowClus produced a lower error rate compared to other denoising algorithms when analyzing a mock community dataset, while retaining significantly more sequence information. Among its other attributes, FlowClus can analyze longer reads being generated from current protocols and irregular flow orders. It has processed a full plate (1.5 million reads) in less than four hours; using its more efficient (but less precise) trie analysis option, this time was further reduced, to less than seven minutes.

6

Bayreuther, Moritz, Jamin Cristall, and Felix J. Herrmann. "Curvelet denoising of 4d seismic." European Association of Geoscientists and Engineers, 2004. http://hdl.handle.net/2429/453.

Abstract:
With burgeoning world demand and a limited rate of discovery of new reserves, there is increasing impetus upon the industry to optimize recovery from already existing fields. 4D, or time-lapse, seismic imaging is an emerging technology that holds great promise to better monitor and optimize reservoir production. The basic idea behind 4D seismic is that when multiple 3D surveys are acquired at separate calendar times over a producing field, the reservoir geology will not change from survey to survey but the state of the reservoir fluids will change. Thus, taking the difference between two 3D surveys should remove the static geologic contribution to the data and isolate the time-varying fluid-flow component. However, a major challenge in 4D seismic is that acquisition and processing differences between 3D surveys often overshadow the changes caused by fluid flow. This problem is compounded when 4D effects are sought to be derived from vintage 3D data sets that were not originally acquired with 4D in mind. The goal of this study is to remove the acquisition and imaging artefacts from a 4D seismic difference cube using curvelet processing techniques.
7

Offei, Felix. "Denoising Tandem Mass Spectrometry Data." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3218.

Abstract:
Protein identification using tandem mass spectrometry (MS/MS) has proven to be an effective way to identify proteins in a biological sample. An observed spectrum is constructed from the data produced by the tandem mass spectrometer. A protein can be identified if the observed spectrum aligns with the theoretical spectrum. However, data generated by the tandem mass spectrometer are affected by errors, making protein identification challenging in the field of proteomics. Some of these errors include miscalibration of the instrument, instrument distortion and noise. In this thesis, we present a pre-processing method which focuses on the removal of noisy data with the hope of aiding better identification of proteins. We employ the method of binning to reduce the number of noise peaks in the data without sacrificing the alignment of the observed spectrum with the theoretical spectrum. In some cases, the alignment of the two spectra improved.
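To make the binning idea concrete, here is a minimal sketch (not the thesis code): peaks are grouped into fixed-width m/z bins and only the most intense peak per bin is kept. The 1.0 Da bin width is an assumption for illustration.

    import numpy as np

    def bin_spectrum(mz, intensity, width=1.0):
        """Keep only the most intense peak within each m/z bin of given width."""
        bins = np.floor(np.asarray(mz) / width).astype(int)
        best = {}
        for b, m, i in zip(bins, mz, intensity):
            if b not in best or i > best[b][1]:
                best[b] = (m, i)
        kept = sorted(best.values())
        return [m for m, _ in kept], [i for _, i in kept]

    mz = [100.1, 100.4, 150.2, 150.25, 220.9]
    inten = [5.0, 80.0, 12.0, 3.0, 40.0]
    print(bin_spectrum(mz, inten))   # one surviving peak per 1.0 Da bin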
8

Ghazel, Mohsen. "Adaptive Fractal and Wavelet Image Denoising." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/882.

Abstract:
The need for image enhancement and restoration is encountered in many practical applications. For instance, distortion due to additive white Gaussian noise (AWGN) can be caused by poor quality image acquisition, images observed in a noisy environment or noise inherent in communication channels. In this thesis, image denoising is investigated. After reviewing standard image denoising methods as applied in the spatial, frequency and wavelet domains of the noisy image, the thesis embarks on the endeavor of developing and experimenting with new image denoising methods based on fractal and wavelet transforms. In particular, three new image denoising methods are proposed: context-based wavelet thresholding, predictive fractal image denoising and fractal-wavelet image denoising. The proposed context-based thresholding strategy adopts localized hard and soft thresholding operators which take into consideration the content of an immediate neighborhood of a wavelet coefficient before thresholding it. The two fractal-based predictive schemes are based on a simple yet effective algorithm for estimating the fractal code of the original noise-free image from the noisy one. From this predicted code, one can then reconstruct a fractally denoised estimate of the original image. This fractal-based denoising algorithm can be applied in the pixel and the wavelet domains of the noisy image using standard fractal and fractal-wavelet schemes, respectively. Furthermore, the cycle spinning idea was implemented in order to enhance the quality of the fractally denoised estimates. Experimental results show that the proposed image denoising methods are competitive, or sometimes even compare favorably, with the existing image denoising techniques reviewed in the thesis. This work broadens the application scope of fractal transforms, which have been used mainly for image coding and compression purposes.
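For reference, the hard and soft thresholding operators mentioned above take the standard form (w a wavelet coefficient, lambda the threshold); the context-based scheme of the thesis chooses lambda locally from the coefficient's neighborhood rather than globally:

    \hat{w}_{\mathrm{hard}} = w \cdot \mathbf{1}_{\{|w|>\lambda\}},
    \qquad
    \hat{w}_{\mathrm{soft}} = \operatorname{sign}(w)\,\max(|w|-\lambda,\,0)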
9

Rafi, Nazari Mina. "Denoising and Demosaicking of Color Images." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35802.

Abstract:
Most digital cameras capture images through a Color Filter Array (CFA) and reconstruct the full color image from the CFA image. Each CFA pixel only captures one primary color component; the other primary components are estimated using information from neighboring pixels. The demosaicking algorithm estimates the unknown color components at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. Some other CFAs contain four color filters; the additional filter is a panchromatic/white filter, which usually receives the full light spectrum. In this research, we studied different four-channel CFAs with a panchromatic/white filter and compared them with three-channel CFAs. An appropriate demosaicking algorithm has been developed for each CFA. The most well-known three-channel CFA is Bayer; the Fujifilm X-Trans pattern has been studied in this work as another three-channel CFA with a different structure. Three different four-channel CFAs are discussed in this research: RGBW-Kodak, RGBW-Bayer and RGBW-5×5. The structure and the number of filters for each color differ among these CFAs. Since the Least-Square Luma-Chroma Demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA, we designed the Least-Square method for RGBW CFAs. The effect of noise on the different CFA patterns is discussed for four-channel CFAs. The Kodak database has been used to evaluate our non-adaptive and adaptive demosaicking methods as well as the optimized algorithms with the least-square method. The captured values of the white (panchromatic/clear) filters in RGBW CFAs have been estimated using the red, green and blue filter values, and sets of optimized coefficients have been proposed to estimate the white filter values accurately. The results have been validated using the actual white values of a hyperspectral image dataset. A new denoising-demosaicking method for the RGBW-Bayer CFA is also presented. The algorithm has been tested on the Kodak dataset using the estimated values of the white filters and on a hyperspectral image dataset using the actual values of the white filters, and the results have been compared. The results in both cases have been compared with previous work on the RGB-Bayer CFA, and they show that the proposed algorithm using the RGBW-Bayer CFA performs better than the RGB-Bayer CFA in the presence of noise.
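As a minimal illustration of what a CFA does (an assumed RGGB Bayer layout, not the thesis's RGBW pipelines), the sketch below samples one color per pixel from an RGB image and naively fills the missing channels from each channel's own samples:

    import numpy as np
    from scipy.interpolate import griddata

    def bayer_mosaic(rgb):
        """Sample an RGGB Bayer CFA: one color channel survives per pixel."""
        h, w, _ = rgb.shape
        cfa = np.zeros((h, w))
        cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
        cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
        cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
        cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
        return cfa

    def demosaic_nearest(cfa):
        """Naive demosaicking: fill each channel from its own CFA samples."""
        h, w = cfa.shape
        masks = np.zeros((3, h, w), dtype=bool)
        masks[0, 0::2, 0::2] = True   # R
        masks[1, 0::2, 1::2] = True   # G
        masks[1, 1::2, 0::2] = True   # G
        masks[2, 1::2, 1::2] = True   # B
        yy, xx = np.mgrid[0:h, 0:w]
        out = np.zeros((h, w, 3))
        for c in range(3):
            m = masks[c]
            out[..., c] = griddata((yy[m], xx[m]), cfa[m], (yy, xx),
                                   method='nearest')
        return out

    rgb = np.random.default_rng(8).random((16, 16, 3))
    reconstructed = demosaic_nearest(bayer_mosaic(rgb))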
10

Gärdenäs, Anders Derk. "Denoising and renoising of videofor compression." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340425.

Abstract:
Videos contain increasingly more data due to increased resolutions. Codecs are continually developed and improved to reduce the amount of data in videos. One difficulty with video encoding is noise handling: it is expensive to store noise, and the final result is not always aesthetically pleasing. In this thesis project an algorithm is developed and presented which improves the visual quality while reducing the bit-rate of the video through improved management of noise. The aim of the algorithm is to store noise information in a specific noise parameter instead of mixing the noise with the visual information. The algorithm was developed to be part of the modern codec JEM, a successor of the H.264 and H.265 codecs. The algorithm can be summarized in the following steps. The first step is to identify how much noise there is in the video, which is done with a temporal noise identification algorithm at the start of the encoding process. The second step is to remove noise from the video with a denoising algorithm during the encoding process. The third and final step is the reapplication of the noise during the decoding phase, using the noise parameters computed in step one. The result was evaluated in a subjective survey consisting of five people evaluating 27 different versions of three videos. The survey shows a consistently improved visual quality resulting from the proposed technique, with the score improving from 3.35 to 3.6 on average on a subjective 1-5 scale where 5 is the best score. Furthermore, the bit-rate was significantly reduced by denoising. The bit-rate reduction is particularly high for high-quality videos, where an average reduction of as much as 49% is achieved. Another finding of this thesis is that the same video quality can be achieved using 2.7% less data by using a denoising tool as part of the video encoder. In conclusion, it is possible to improve video quality while reducing the bit-rate using the proposed method.
11

Hella, Vegard. "Digital Audio Restoration : Denoising phonograph recordings." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-23521.

Abstract:
This master's thesis realizes an audio noise reduction tool using digital signal processing. The tool is used to restore phonograph recordings. The recordings are restored on behalf of Ringve Museum, Norway's national museum of music and musical instruments. Sometimes the noise can be louder than the actual audio; from the viewpoint of a museum or library institution, this makes recordings less valuable, as they are not presentable to the general public. A complete restoration environment will include multiple tools; we specialize in only one of them: reducing broadband, stationary, additive noise, often perceived as static hiss or buzz. To realize the tool we use the numerical computing environment MATLAB, in which the calculations are written in a high-level programming language with many built-in functions. There are several established algorithms specializing in noise reduction of audio and speech. We look at some basic and some complex algorithms that are based on the Short-Time Fourier Transform (STFT). This technique slices the audio into short time frames to analyze the local complex frequency spectrum. The noise reduction procedure compares the audio spectrum with its estimated noise spectrum to calculate an attenuation at each frequency, and the attenuated signal is transformed back into the time domain. Some of the algorithms are based on the Wiener filter or AR modeling. The program includes a user interface with selectable algorithms and parameters. In old recordings a certain level of noise may be wanted to preserve authenticity, so a noise floor generator is also implemented. Some necessary theory of digital signal processing is given, but some general knowledge is assumed. The noise reduction theory is presented before the realization and program flow are explained. A listening test was conducted. Audio examples are used to illustrate the general results, and the development process, results and further work are discussed. The program gave better results than one of the commercially available software packages. Another important result is that the stationarity assumption is a poor approximation: the phonograph noise exhibits periodic properties with longer time periods than those used in the short-time transform. A model incorporating this feature should be implemented.
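A compact sketch of the kind of STFT-based attenuation described here (a basic spectral-subtraction rule; the 1024-sample window, the noise-only lead-in and the spectral floor are assumptions for the example, not the thesis program):

    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtract(x, fs, noise_seconds=0.5, floor=0.1, nperseg=1024):
        """Attenuate STFT bins using a noise spectrum from a noise-only lead-in."""
        f, t, X = stft(x, fs=fs, nperseg=nperseg)
        hop = nperseg // 2                                    # default 50% overlap
        n_frames = max(1, int(noise_seconds * fs / hop))
        noise_mag = np.abs(X[:, :n_frames]).mean(axis=1, keepdims=True)
        gain = np.maximum(1.0 - noise_mag / np.maximum(np.abs(X), 1e-12), floor)
        _, y = istft(X * gain, fs=fs, nperseg=nperseg)
        return y[:x.size]

    fs = 16000
    rng = np.random.default_rng(1)
    tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
    x = np.concatenate([0.3 * rng.standard_normal(fs // 2),   # noise-only lead-in
                        tone + 0.3 * rng.standard_normal(fs)])
    restored = spectral_subtract(x, fs)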
12

Li, Zhi. "Variational image segmentation, inpainting and denoising." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/292.

Abstract:
Variational methods have attracted much attention in the past decade. With rigorous mathematical analysis and computational methods, variational minimization models can handle many practical problems arising in image processing, such as image segmentation and image restoration. We propose a two-stage image segmentation approach for color images: in the first stage, the primal-dual algorithm is applied to efficiently solve the proposed minimization problem for a smoothed image solution without irrelevant and trivial information; in the second stage, we adopt a hill-climbing procedure to segment the smoothed image. For multiplicative noise removal, we employ a difference-of-convex algorithm to solve the non-convex AA model. We also improve the non-local total variation model: more precisely, we add an extra term to impose regularity on the graph formed by the weights between pixels. Thin structures can benefit from this regularization term, because it allows the weight values to adapt from a global point of view, so thin features are not overlooked as in conventional non-local models. Since the non-local total variation term now has two variables, the image u and the weights v, and it is concave with respect to v, the proximal alternating linearized minimization algorithm is naturally applied with variable metrics to solve the non-convex model efficiently. The efficiency of the proposed approaches is demonstrated on problems including image segmentation, image inpainting and image denoising.
13

Khadivi, Pejman. "Online Denoising Solutions for Forecasting Applications." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/72907.

Abstract:
Dealing with noisy time series is a crucial task in many data-driven real-time applications. Due to inaccuracies in data acquisition, time series suffer from noise and instability, which lead to inaccurate forecasting results. Therefore, in order to improve the performance of time series forecasting, an important pre-processing step is the denoising of data before performing any action. In this research, we propose various approaches to tackle noisy time series in forecasting applications. For this purpose, we use different machine learning methods and information-theoretic approaches to develop online denoising algorithms. In this dissertation, we propose four categories of time series denoising methods that can be used in different situations, depending on the noise and time series properties. In the first category, a seasonal regression technique is proposed for the denoising of time series with seasonal behavior. In the second category, multiple discrete universal denoisers are developed that can be used for the online denoising of discrete-valued time series. In the third category, we develop a noisy channel reversal model based on the similarities between time series forecasting and data communication and use that model to deploy out-of-band noise filtering in forecasting applications. The last category of the proposed methods is deep-learning based denoisers. We use information-theoretic concepts to analyze a general feed-forward deep neural network and to prove theoretical bounds on deep neural network behavior. Furthermore, we propose a denoising deep neural network method for the online denoising of time series. Real-world and synthetic time series are used for numerical experiments and performance evaluations. Experimental results show that the proposed methods can efficiently denoise the time series and improve their quality.
Ph. D.
14

Zhang, Jiachao. "Image denoising for real image sensors." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1437954286.

15

Cheng, Wu. "Optimal Denoising for Photon-limited Imaging." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1446401290.

16

Danda, Swetha. "Generalized diffusion model for image denoising." Morgantown, W. Va. : [West Virginia University Libraries], 2007. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5481.

Abstract:
Thesis (M.S.)--West Virginia University, 2007.
Title from document title page. Document formatted into pages; contains viii, 62 p. : ill. Includes abstract. Includes bibliographical references (p. 59-62).
17

Deng, Hao. "Mathematical approaches to digital color image denoising." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31708.

Abstract:
Thesis (Ph.D)--Mathematics, Georgia Institute of Technology, 2010.
Committee Chair: Haomin Zhou; Committee Member: Luca Dieci; Committee Member: Ronghua Pan; Committee Member: Sung Ha Kang; Committee Member: Yang Wang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
18

Elahi, Pegah. "Application of Noise Invalidation Denoising in MRI." Thesis, Linköpings universitet, Medicinsk informatik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-85215.

Abstract:
Magnetic Resonance Imaging (MRI) is a common medical imaging tool that has been used in the clinical industry for diagnostic and research purposes. These images are subject to noise while the data are captured, which can affect image quality and diagnostics. Therefore, improving the quality of the generated images, from both the resolution and the signal-to-noise ratio (SNR) perspectives, is critical. Wavelet-based denoising is one of the common tools used to remove the noise in MRI images. The noise is eliminated from the detail coefficients of the signal in the wavelet domain. This can be done by applying thresholding methods: the main task here is to find an optimal threshold and keep all the coefficients larger than this threshold as the noiseless ones. Noise Invalidation Denoising (NIDe) is a technique in which the optimal threshold is found by comparing the noisy signal to a noise signature (a function of the noise statistics). The original NIDe approach was developed for one-dimensional signals with additive Gaussian noise. In this work, the existing NIDe approach has been generalized for applications in MRI images with different noise distributions. The developed algorithm was tested on simulated data from the BrainWeb database and compared with the well-known Non-Local Means (NLM) filtering method for MRI. The results indicated better preservation of structural detail for the NIDe approach on the magnitude data, while the signal-to-noise ratio is comparable. The algorithm shows an important advantage, namely lower computational complexity than the NLM method. On the other hand, when the Unbiased NLM technique is combined with the proposed technique, it can yield the same structural similarity while the signal-to-noise ratio is improved.
19

Hussain, Israr. "Non-gaussianity based image deblurring and denoising." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489022.

20

De, Santis Simone. "Quantum Median Filter for Total Variation denoising." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
In this work we present the Quantum Median Filter, an image processing algorithm for applying Total Variation denoising to quantum image representations. After a brief introduction to the TV model and quantum computing, we present the QMF algorithm and discuss its design and efficiency; we then implement and simulate the quantum circuit using the Qiskit library; finally, we apply it to a set of noisy images in order to compare and evaluate experimental results.
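For context, the classical operation that the quantum circuit emulates is a plain median filter; a minimal NumPy version (not the thesis's Qiskit implementation) looks like this:

    import numpy as np

    def median_filter(img, k=3):
        """Replace each pixel with the median of its k-by-k neighborhood."""
        pad = k // 2
        padded = np.pad(img, pad, mode='edge')
        out = np.empty_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.median(padded[i:i + k, j:j + k])
        return out

    noisy = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(float)
    smoothed = median_filter(noisy)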
21

Sarjanoja, S. (Sampsa). "BM3D image denoising using heterogeneous computing platforms." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201504141380.

Abstract:
Noise reduction is one of the most fundamental digital image processing problems, and is often designed to be solved at an early stage of the image processing path. Noise appears in images in many different ways, and it is inevitable. In general, various image processing algorithms perform better if their input is as error-free as possible. In order to keep processing delays small on different computing platforms, it is important that noise reduction is performed swiftly. Recent progress in the entertainment industry has led to major improvements in the computing capabilities of graphics cards. Today, graphics circuits consist of several hundreds or even thousands of computing units. Using these computing units for general-purpose computation is possible with the OpenCL and CUDA programming interfaces. In applications where the processed data are relatively independent, using parallel computing units may increase performance significantly. Graphics chips enabled with general-purpose computation capabilities are also becoming more common in mobile devices. In addition, photography has never been as popular as it is nowadays on mobile devices. This thesis implements the state-of-the-art noise reduction technique, block-matching and three-dimensional filtering (BM3D), for execution in heterogeneous computing environments. The study evaluates the performance of the presented implementations by making comparisons with existing implementations. The presented implementations achieve significant benefits from the use of parallel computing devices. At the same time, the comparisons illustrate general problems in utilizing massively parallel processing for the computation of complex imaging algorithms.
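To give a flavor of the block-matching step at the core of BM3D (a deliberately simplified single-threaded sketch, not the OpenCL/CUDA implementations of the thesis): for a reference patch, the most similar patches in a search window are collected for joint filtering.

    import numpy as np

    def match_blocks(img, ref_yx, patch=8, search=16, n_best=8):
        """Return the coordinates of the patches closest to the reference (L2)."""
        y0, x0 = ref_yx
        ref = img[y0:y0 + patch, x0:x0 + patch]
        cands = []
        for y in range(max(0, y0 - search),
                       min(img.shape[0] - patch, y0 + search) + 1):
            for x in range(max(0, x0 - search),
                           min(img.shape[1] - patch, x0 + search) + 1):
                d = np.sum((img[y:y + patch, x:x + patch] - ref) ** 2)
                cands.append((d, (y, x)))
        cands.sort(key=lambda c: c[0])
        return [yx for _, yx in cands[:n_best]]

    img = np.random.default_rng(2).random((64, 64))
    group = match_blocks(img, (20, 20))   # similar patches for joint filtering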
22

Ma, Xiandong. "Wavelets for partial discharge denoising and analysis." Thesis, Glasgow Caledonian University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404617.

23

Hughes, John B. "Signal enhancement using time-frequency based denoising." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion/03Mar%5FHughes.pdf.

24

Agresti, Gianluca. "Data Driven Approaches for Depth Data Denoising." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422722.

Abstract:
The scene depth is an important piece of information that can be used to retrieve the scene geometry, a missing element in standard color images. For this reason, depth information is employed in many applications such as 3D reconstruction, autonomous driving and robotics. The last decade has seen the spread of different commercial devices able to sense scene depth. Among these, Time-of-Flight (ToF) cameras are becoming popular because they are relatively cheap and can be miniaturized and implemented on portable devices. Stereo vision systems are the most widespread 3D sensors, and are simply composed of two standard color cameras. However, they are not free from flaws; in particular, they fail when the scene has no texture. Active stereo and structured light systems have been developed to overcome this issue by using external light projectors. This thesis collects the findings of my Ph.D. research, which are mainly devoted to the denoising of depth data. First, some of the most widespread commercial 3D sensors are introduced with their strengths and limitations. Then, some techniques for the quality enhancement of ToF depth acquisition are presented and compared with other state-of-the-art methods. A first proposed method is based on a hardware modification of the standard ToF projector. A second approach instead uses multi-frequency ToF recordings as input to a deep learning network to improve the depth estimation. A particular focus is given to how the denoising performance degrades when the network is trained on synthetic data and tested on real data, and a method to reduce this performance gap is proposed. Since ToF and stereo vision systems have complementary characteristics, the possibility of fusing the information coming from these sensors is analysed, and a method based on a locally consistent fusion, guided by a learning-based reliability measure for the two sensors, is proposed. A part of this thesis is dedicated to the description of the data acquisition procedures and the related labeling required to collect the datasets used for training and evaluating the proposed methods.
25

Houdard, Antoine. "Some advances in patch-based image denoising." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT005/document.

Abstract:
This thesis studies non-local methods for image processing and their application to various tasks such as denoising. Natural images contain redundant structures, and this property can be used for restoration purposes. A common way to exploit this self-similarity is to separate the image into "patches". These patches can then be grouped, compared and filtered together. In the first chapter, "global denoising" is reframed in the classical formalism of diagonal estimation and its asymptotic behaviour is studied in the oracle case. Precise conditions on both the image and the global filter are introduced to ensure and quantify convergence. The second chapter is dedicated to the study of Gaussian priors for patch-based image denoising. Such priors are widely used for image restoration. We propose some ideas to answer the following questions: Why are Gaussian priors so widely used? What information do they encode about the image? The third chapter proposes a probabilistic high-dimensional mixture model on the noisy patches. This model adopts a sparse modeling which assumes that the data lie on group-specific subspaces of low dimension. This yields a denoising algorithm that demonstrates state-of-the-art performance. The last chapter explores different ways of aggregating the patches together, and proposes a framework that expresses the patch aggregation in the form of a least-squares problem.
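For reference, under a Gaussian patch prior x ~ N(mu, Sigma) and additive noise y = x + n with n ~ N(0, sigma^2 I), the posterior mean that underlies such patch-based estimators is the Wiener-type filter:

    \hat{x} = \mu + \Sigma\,(\Sigma + \sigma^2 I)^{-1}(y - \mu)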
26

Houdard, Antoine. "Some advances in patch-based image denoising." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT005.

27

McGraw, Tim E. "Denoising, segmentation and visualization of diffusion weighted MRI." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011618.

28

Björling, Robin. "Denoising of Infrared Images Using Independent Component Analysis." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4954.

Abstract:

The purpose of this thesis is to evaluate the applicability of Independent Component Analysis (ICA) for noise reduction of infrared images. The focus lies on reducing the additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images, and the result is compared to an adaptive Wiener filter. A novel method, also based on ICA, for reducing FPN is developed. An independent component analysis is made on images from an infrared sensor and typical fixed pattern noise components are manually identified. The identified components are used to quickly and effectively reduce the FPN in images taken by the specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as real images and their performance is measured.
29

Tuncer, Guney. "A Java Toolbox For Wavelet Based Image Denoising." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12608037/index.pdf.

Abstract:
Wavelet methods for image denoising have become widespread over the last decade. The effectiveness of this denoising scheme is influenced by many factors, chiefly the choice of wavelet, the threshold determination and the transform level selected for thresholding. For threshold calculation, one of the classical solutions is the Wiener filter as a linear estimator; another is VisuShrink, which uses global thresholding in the nonlinear setting. The purpose of this work is to develop a Java toolbox used to find the best denoising schemes for distinct image types, particularly Synthetic Aperture Radar (SAR) images. This is accomplished by comparing these basic methods with well-known data-adaptive thresholding methods such as SureShrink, BayesShrink, Generalized Cross Validation and Hypothesis Testing. Some non-wavelet denoising processes are also introduced: along with simple mean and median filters, the more statistically adaptive median, Lee, Kuan and Frost filtering techniques are also tested to assist the wavelet-based denoising scheme. All of these wavelet-based and traditional methods are implemented in pure Java code using the plug-in concept of ImageJ, a popular image processing tool written in Java.
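As an illustration of the VisuShrink baseline named here (a sketch using the PyWavelets library rather than the thesis's Java/ImageJ toolbox), the universal threshold sigma * sqrt(2 ln N) is applied to the detail coefficients, with sigma estimated by the usual MAD rule:

    import numpy as np
    import pywt

    def visushrink(img, wavelet='db4', level=3):
        """Soft-threshold wavelet detail coefficients at the universal threshold."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Noise sigma from the finest diagonal subband via the MAD estimator.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        lam = sigma * np.sqrt(2 * np.log(img.size))
        out = [coeffs[0]]
        for details in coeffs[1:]:
            out.append(tuple(pywt.threshold(d, lam, mode='soft') for d in details))
        return pywt.waverec2(out, wavelet)

    img = np.random.default_rng(3).random((64, 64))
    denoised = visushrink(img)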
30

Michael, Simon. "A Comparison of Data Transformations in Image Denoising." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375715.

Abstract:
The study of signal processing has wide applications, such as hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising, each based on a data transformation: specifically, the Fourier Transform and the Singular Value Decomposition are utilized in the respective methods and compared on grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratio attainable by the respective methods and their computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
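A minimal sketch of the two transforms being compared (with illustrative cutoff parameters, not the thesis's tuned settings): denoising by zeroing high FFT frequencies versus truncating small singular values.

    import numpy as np

    def fft_denoise(img, keep=0.1):
        """Zero out all but a low-frequency disk of the 2-D spectrum."""
        F = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.ogrid[:h, :w]
        mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= (keep * min(h, w)) ** 2
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    def svd_denoise(img, rank=20):
        """Keep only the largest `rank` singular values."""
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]

    img = np.random.default_rng(4).random((128, 128))
    by_fft, by_svd = fft_denoise(img), svd_denoise(img)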
31

Almahdi, Redha A. "Recursive Non-Local Means Filter for Video Denoising." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1481033972368771.

32

Aparnnaa. "Image Denoising and Noise Estimation by Wavelet Transformation." Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1555929391906805.

33

Lind, Johan. "Evaluating CNN-based models for unsupervised image denoising." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176092.

Abstract:
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images. This thesis evaluated two different unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images. Four different CNNs were tested in order to investigate how the performance of these algorithms would be affected by different network architectures. The testing used two different datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them. Two of the networks, UNet and a CBAM-augmented UNet, achieved high performance competitive with the strong classical denoisers BM3D and NLM. The other two networks, GRDN and MultiResUNet, generally performed poorly.
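To illustrate the self-supervised principle behind these methods (a generic masked-loss sketch in the spirit of Noise2Self, not the evaluated implementations): the denoiser is scored on how well it predicts masked pixels from their unmasked surroundings, so no clean targets are needed.

    import numpy as np

    def masked_self_supervised_loss(denoiser, noisy, mask_frac=0.05, rng=None):
        """Noise2Self-style loss: score predictions only on masked-out pixels."""
        rng = np.random.default_rng() if rng is None else rng
        mask = rng.random(noisy.shape) < mask_frac
        masked_input = noisy.copy()
        masked_input[mask] = noisy.mean()   # hide masked pixels (global mean)
        pred = denoiser(masked_input)
        return float(np.mean((pred[mask] - noisy[mask]) ** 2))

    # A stand-in "denoiser": a 4-neighbor mean filter (periodic borders).
    blur = lambda im: (np.roll(im, 1, 0) + np.roll(im, -1, 0) +
                       np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 4.0
    noisy = np.random.default_rng(7).random((32, 32))
    print(masked_self_supervised_loss(blur, noisy))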
34

Liu, Xiaoyang. "Advanced numerical methods for image denoising and segmentation." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11954/.

Abstract:
Image denoising is one of the major steps in current image processing. It is a pre-processing step which aims to remove certain unknown, random noise from an image and obtain an image free of noise for further image processing, such as image segmentation. Image segmentation, as another branch of image processing, plays a significant role in connecting low-level and high-level image processing. Its goal is to segment an image into different parts and extract meaningful information for image analysis and understanding. In recent years, methods based on PDEs and variational functionals became very popular in both image denoising and image segmentation. These two branches of methods are presented and investigated in this thesis. Several typical PDE-based methods are reviewed and examined: the isotropic diffusion model, the anisotropic diffusion model (the P-M model), the fourth-order PDE model (the Y-K model), and the active contour model in image segmentation. Based on an analysis of the behaviour of each model, some improvements are proposed. First, a new coefficient is provided for the P-M model to obtain a well-posed model and reduce the "block effect". Second, a weighted sum operator is used to replace the Laplacian operator in the Y-K model; this replacement reduces the speckles introduced by the Y-K model and preserves more detail. Third, an adaptive relaxation method with a discontinuity treatment is proposed to improve the numerical solution of the Y-K model. Fourth, an active contour model coupled with the anisotropic diffusion model is proposed to build a noise-resistant segmentation method. Finally, three ways of deriving PDEs are developed and summarised, and the issue of PSNR is discussed at the end of the thesis.
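For readers unfamiliar with the P-M model discussed above, a bare-bones NumPy iteration of Perona-Malik diffusion (the standard textbook form with an exponential edge-stopping function, not the thesis's improved coefficient) is:

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
        """Anisotropic diffusion: smooth within regions, preserve strong edges."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        for _ in range(n_iter):
            # Differences toward the four neighbors (np.roll means periodic
            # borders, which is fine for a sketch).
            dn = np.roll(u, 1, axis=0) - u
            ds = np.roll(u, -1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    denoised = perona_malik(np.random.default_rng(5).random((64, 64)) * 255)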
35

Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.

36

Zoppo, Viviana. "Denoising di immagini mediante tecniche basate sulla Total Variation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9499/.

Abstract:
The work carried out in this thesis concerns the study and formulation of computational methods for removing the noise present in images, that is, the "denoising" process of reconstructing an image corrupted by noise given a priori knowledge of the degradation phenomenon. The denoising problem is formulated as the minimization of a functional given by the sum of a data-fidelity term and the Total Variation. Image denoising methods are addressed through techniques based on split Bregman and weighted Total Variation (TV); denoising is an ill-conditioned problem, i.e. one that is sensitive to small perturbations of the data. These techniques make it possible to optimize the visual quality of the images under examination.
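For reference, the functional described in this abstract is the classical TV (ROF-type) model, with f the noisy image and lambda > 0 balancing data fidelity against regularity:

    \min_{u}\ \frac{1}{2}\,\lVert u - f \rVert_2^2 + \lambda\,\mathrm{TV}(u),
    \qquad \mathrm{TV}(u) = \int_{\Omega} \lvert \nabla u \rvert\,dx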
37

Barsanti, Robert J. "Denoising of ocean acoustic signals using wavelet-based techniques." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA329379.

Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Engineering Acoustics) Naval Postgraduate School, December 1996.
Thesis advisor(s): Monique P. Fargues and Ralph Hippenstiel. "December 1996." Includes bibliographical references (p. 99-101). Also available online.
38

Cebeci, Coskun. "Denoising of acoustic signals using wavelet/wiener based techniques." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA349997.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, June 1998.
"June 1998." Thesis advisor(s): Monique P. Fargues, Ralph D. Hippenstiel. Includes bibliographical references (p. 63-64). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bhattacharya, Gautam. "Sparse denoising of audio by greedy time-frequency shrinkage." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123263.

Full text
Abstract:
Matching Pursuit (MP) is a greedy algorithm that iteratively builds a sparse signal representation. This work presents an analysis of MP in the context of audio denoising. By interpreting the algorithm as a simple shrinkage approach, we identify the factors critical to its success, and propose several approaches to improve its performance and robustness. We also develop several model enhancements and introduce an audio denoising approach called Greedy Time-Frequency Shrinkage (GTFS). Numerical experiments are performed on a wide range of audio signals, and we demonstrate that GTFS denoising is able to yield results that are competitive with state-of-the-art audio denoising approaches. Notably, GTFS retains a small percentage of a signal's transform coefficients for building a denoised representation, i.e., it produces very sparse denoised results.
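GTFS itself is not reproduced here; as a crude sketch of the “retain a small percentage of transform coefficients” idea, one could hard-shrink all but the largest STFT magnitudes. The sampling rate, window length, and `keep_frac` below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

def greedy_tf_shrink(x, fs=16000, nperseg=512, keep_frac=0.02):
    """Keep only the largest time-frequency coefficients (greedy hard shrinkage)."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mags = np.abs(Z).ravel()
    k = max(1, int(keep_frac * mags.size))
    thresh = np.partition(mags, -k)[-k]            # magnitude of the k-th largest atom
    Z_sparse = np.where(np.abs(Z) >= thresh, Z, 0.0)
    _, x_hat = istft(Z_sparse, fs=fs, nperseg=nperseg)
    return x_hat[:len(x)]
```

A full Matching Pursuit implementation would select atoms one at a time; keeping everything above the k-th largest magnitude is the one-shot shrinkage view of that greedy process.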
APA, Harvard, Vancouver, ISO, and other styles
41

Tucker, Dewey S. (Dewey Stanton). "Wavelet denoising techniques with applications to high-resolution radar." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Yang. "Image Denoising: Invertible and General Denoising Frameworks." Phd thesis, 2022. http://hdl.handle.net/1885/270008.

Full text
Abstract:
The widespread use of digital cameras has resulted in a massive number of images being taken every day. However, due to the limitations of sensors and of environments, such as light conditions, these images are usually contaminated by noise. Obtaining visually clean images is essential for the accuracy of downstream high-level vision tasks, so denoising is a crucial preprocessing step.

A fundamental challenge in image denoising is to restore recognizable frequencies in edges and fine-scaled texture regions. Traditional methods usually employ hand-crafted priors to enhance the restoration of these high-frequency regions, priors which seem to be omitted in current deep learning models. We explored whether clean gradients can be utilized in deep networks as such a prior, and how to incorporate this prior in the networks to boost the recovery of missing or obscured picture elements. We present results showing that fusing the pre-denoised image's gradient in the shallow layers contributes to recovering better edges and textures, and we propose a regularization loss term to ensure that the reconstructed image's gradients are close to the clean gradients. Both techniques are indispensable for enhancing the restored image frequencies.

We also studied how to make the network preserve input information for better restoration of high-frequency details. Starting from the definition of mutual information, we showed that invertibility is indispensable for information losslessness. We then proposed the Invertible Restoring Autoencoder (IRAE), a multiscale invertible encoder-decoder network. Its superiority was verified on three different low-level tasks: image denoising, JPEG image decompression, and image inpainting. IRAE points to a promising direction for exploring further invertible architectures for image restoration.

We then attempted to reduce the model size of invertible restoration networks. Our intuition was to use the same learned parameters to encode the noisy images in the forward pass and reconstruct the clean images in the backward pass. However, existing invertible networks use the same distribution for both the input and the output obtained in the reversed pass; for noise removal the input is noisy but the reversed output is clean, so the two follow different distributions, which makes it challenging to design lightweight invertible architectures for denoising. We presented InvDN, which converts the noisy input into a clean low-resolution image and a noisy latent representation; to address the challenge above, the noisy representation is replaced during the reverse pass with a clean one randomly sampled from a Gaussian. InvDN achieved state-of-the-art results on real image denoising with far fewer parameters and less run time than existing state-of-the-art models, and it can also generate new noisy images for data augmentation.

Finally, we rethought image denoising from a novel angle and introduced a more general denoising framework. It uses invertible networks to learn a noisy image distribution, which can be considered the joint distribution of clean content and noise. The noisy input is mapped to representations in the latent space, and a novel disentanglement strategy is applied to these latent representations to obtain the representations of the clean content, which are passed to the reversed network to obtain the clean image. Since this concept is a novel attempt, we also explored different data augmentation and training strategies for the framework. The proposed FDN was trained and tested on tasks of increasing complexity — distribution-clear class-specific synthetic noisy datasets, more general remote sensing datasets, and real noisy datasets — and achieved competitive results with fewer parameters and faster speed. This work contributes a novel perspective and a potential direction for designing low-level task models in the future.
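None of the architectures above are reproduced here; as a minimal sketch of the invertibility idea they build on, an affine coupling block — a standard construction from the normalizing-flow literature, not the thesis's InvDN or FDN design — can be written as:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal invertible block: split channels, transform one half conditioned
    on the other, so the exact inverse is available in closed form."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        # Maps channels//2 inputs to a (log-scale, translation) pair.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(torch.tanh(log_s)) + t   # tanh bounds the scale
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-torch.tanh(log_s))
        return torch.cat([y1, x2], dim=1)
```

Because the inverse is exact, no input information is lost in the forward pass — the losslessness property the thesis argues is indispensable.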
APA, Harvard, Vancouver, ISO, and other styles
43

Hua, Gang. "Noncoherent image denoising." Thesis, 2005. http://hdl.handle.net/1911/17859.

Full text
Abstract:
The techniques of Translation Invariant (TI) denoising and statistical modeling are widely used in image denoising. This thesis studies how these techniques exploit location information in images and identifies a class of noncoherent image denoising algorithms. We analyze the performance of TI denoising from the perspective of cyclic-basis reconstruction; this analysis shows that TI denoising achieves an average performance without direct estimation of location information. Motivated by this perspective, we propose a Redundant Quaternion Wavelet Transform (RQWT), which both avoids aliasing and separates local signal energy and location information into the quaternion magnitude and phases, respectively. The RQWT is a natural framework for studying the statistical models in noncoherent image denoisers, because they all ignore the quaternion phases. Straightforward signal estimation in the RQWT framework closely matches the state-of-the-art noncoherent image denoisers and provides a natural bound on their performance, thereby showing the importance of exploiting the location information in the quaternion phases.
APA, Harvard, Vancouver, ISO, and other styles
44

"Tomographic reconstruction and denoising." 2011. http://library.cuhk.edu.hk/record=b5894709.

Full text
Abstract:
Ma, Ka Lim.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves [110]-117).
Abstracts in English and Chinese.
Chapter 1 --- Radon Transform and Medical Tomography --- p.1
Chapter 1.1 --- Computed Tomography --- p.2
Chapter 1.2 --- Emission Computed Tomography --- p.4
Chapter 1.2.1 --- SPECT --- p.5
Chapter 1.2.2 --- PET --- p.6
Chapter 1.3 --- Radon Transform --- p.8
Chapter 1.3.1 --- Properties of Radon Transform --- p.10
Chapter 1.3.2 --- Fourier Slice Theorem --- p.11
Chapter 1.4 --- Research Objective --- p.12
Chapter 2 --- Popular Tomographic Reconstruction Algorithms --- p.14
Chapter 2.1 --- Analytic Method --- p.15
Chapter 2.1.1 --- Direct Fourier Method (DFM) --- p.15
Chapter 2.1.2 --- Backprojection (BP) --- p.17
Chapter 2.1.3 --- Backprojection Filtering (BPF) --- p.19
Chapter 2.1.4 --- Filtered Backprojection (FBP) --- p.21
Chapter 2.2 --- Iterative Method --- p.23
Chapter 2.2.1 --- Maximum Likelihood - Expectation Maximization (ML-EM) --- p.25
Chapter 2.2.2 --- Ordered Subsets Expectation Maximization (OSEM) --- p.27
Chapter 3 --- Consistent Reconstruction --- p.30
Chapter 3.1 --- Directional Filter Bank (DFB) --- p.30
Chapter 3.1.1 --- Interpolation in horizontal function space --- p.32
Chapter 3.1.2 --- Directional Multiresolution Analysis --- p.33
Chapter 3.1.3 --- Iterated Filter Bank Equivalence --- p.36
Chapter 3.1.4 --- Vertical Directional Function Space --- p.38
Chapter 3.1.5 --- Summary --- p.40
Chapter 3.2 --- Reconstruction Scheme --- p.42
Chapter 3.2.1 --- Choices for basis function 6m --- p.43
Chapter 3.2.2 --- Choices for coordinate mapping function wm --- p.46
Chapter 3.2.3 --- Summary --- p.49
Chapter 3.3 --- Experiment --- p.49
Chapter 3.3.1 --- Experiment for consistent reconstruction with different choices --- p.50
Chapter 3.3.2 --- Experiment for comparison with different reconstruction methods --- p.54
Chapter 3.4 --- Conclusion --- p.56
Chapter 4 --- Tomographic Denoising --- p.57
Chapter 4.1 --- SURE-LET and PURE-LET denoising --- p.59
Chapter 4.1.1 --- SURE-LET --- p.60
Chapter 4.1.2 --- PURE-LET --- p.62
Chapter 4.2 --- Experiment --- p.64
Chapter 4.2.1 --- Experiment on SURE-LET Denoising --- p.65
Chapter 4.2.2 --- Experiment on PURE-LET Denoising --- p.69
Chapter 4.2.3 --- Conclusion --- p.76
Chapter 5 --- Sinogram Retrieval --- p.77
Chapter 5.1 --- Sinogram Retrieval Method --- p.78
Chapter 5.1.1 --- MATLAB Radon Function --- p.79
Chapter 5.1.2 --- Subordinate Gradient (SG) Algorithm --- p.81
Chapter 5.1.3 --- Orthonormal Subordinate Gradient (OSG) Algorithm --- p.81
Chapter 5.2 --- Experiment --- p.84
Chapter 5.2.1 --- Limitation of Sinogram Retrieval --- p.84
Chapter 5.2.2 --- Comparison of Sinogram Retrieval Algorithms --- p.86
Chapter 5.2.3 --- Embedded in Tomographic Reconstruction --- p.88
Chapter 5.2.4 --- Embedded in Tomographic Denoising --- p.90
Chapter 5.3 --- Conclusion --- p.96
Chapter 6 --- Conclusion --- p.97
Chapter 6.1 --- Summary --- p.97
Chapter 6.1.1 --- Tomographic Reconstruction --- p.97
Chapter 6.1.2 --- Tomographic Denoising --- p.98
Chapter 6.1.3 --- Sinogram Retrieval --- p.98
Chapter 6.2 --- Future Research --- p.99
Chapter 6.2.1 --- Tomographic Reconstruction --- p.99
Chapter 6.2.2 --- Tomographic Denoising --- p.99
Chapter 6.2.3 --- Sinogram Retrieval --- p.99
Chapter A --- Examples of Radon Transform --- p.100
Chapter B --- Experimental Phantom Image --- p.104
Chapter C --- Results of sinogram retrieval experiments --- p.107
Bibliography --- p.110
APA, Harvard, Vancouver, ISO, and other styles
45

Bao, Yufang. "Nonlinear image denoising methodologies." 2002. http://www.lib.ncsu.edu/theses/available/etd-05172002-131134/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

McIlhagga, William H. "Denoising and contrast constancy." 2004. http://hdl.handle.net/10454/3909.

Full text
Abstract:
Contrast constancy is the ability to perceive object contrast independent of size or spatial frequency, even though these affect both retinal contrast and detectability. Like other perceptual constancies, it is evidence that the visual system infers the stable properties of objects from the changing properties of retinal images. Here it is shown that perceived contrast is based on an optimal thresholding estimator of object contrast that is identical to the VisuShrink estimator used in wavelet denoising.
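For reference, a minimal sketch of the VisuShrink estimator mentioned above — wavelet soft thresholding at the universal threshold σ√(2 log N), with the wavelet choice and decomposition level as illustrative assumptions (using PyWavelets):

```python
import numpy as np
import pywt

def visushrink(img, wavelet='db4', level=3):
    """Wavelet soft thresholding at the universal (VisuShrink) threshold."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise std estimated from the finest-scale diagonal band (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))   # universal threshold
    out = [coeffs[0]]                             # keep the approximation band
    for bands in coeffs[1:]:
        out.append(tuple(pywt.threshold(b, t, mode='soft') for b in bands))
    return pywt.waverec2(out, wavelet)
```

The universal threshold is known to oversmooth as a denoiser; the thesis's claim is that perceived contrast behaves like this estimator, not that it is the best restoration rule.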
APA, Harvard, Vancouver, ISO, and other styles
47

Poderico, Mariana. "Denoising of SAR images." Tesi di dottorato, 2011. http://www.fedoa.unina.it/8779/1/Poderico_Mariana_24.pdf.

Full text
Abstract:
This Ph.D. thesis deals with SAR (Synthetic Aperture Radar) image denoising. Its main elements of innovation are the introduction of SAR-BM3D, a denoising algorithm optimized for SAR data, and the introduction of a benchmark that enables objective performance comparison of SAR denoising algorithms on simulated canonical SAR images. In the first part of the thesis, basic concepts of SAR imagery are introduced, with special emphasis on its peculiar multiplicative noise, called speckle. A description of the key ideas and tools of denoising techniques known in the literature then follows. After introducing the basic elements of SAR data processing, the main statistical features of SAR images are described, and the context in which denoising techniques operate is clarified. Techniques are then classified into those that follow the homomorphic approach, where the multiplicative noise is turned into additive noise through a logarithmic transform, and those that take the multiplicative nature of the noise explicitly into account. Afterwards, it is described how the introduction of the wavelet transform has brought new ideas into SAR image denoising, and how the non-local filtering strategy, originally proposed in the AWGN field, has also provided significant results when applied to SAR. In this context, the novel SAR-BM3D algorithm is introduced which, starting from key elements of the wavelet-based and non-local filtering implemented in BM3D, optimizes the processing for SAR data, following a non-homomorphic approach. A very detailed experimental analysis on simulated SAR images, obtained as optical images corrupted by artificial speckle, has been performed: the results show that SAR-BM3D outperforms traditional approaches, both in terms of PSNR and by visual inspection. Because of the well-known difficulty of evaluating denoising techniques on real SAR images, a workaround has been proposed. Rather than resorting to images corrupted by artificial speckle, a physical SAR simulator, SARAS (developed by the remote-sensing group of the Federico II University of Naples), has been used to generate a set of canonical benchmark SAR scenes. The main advantage of SARAS images is the availability of both the noisy and clean versions of each image, the latter acting as a reference for objectively evaluating the performance of different algorithms. We have shown in detail the procedure that leads to the definition of an objective criterion for comparing the results provided by different algorithms on real SAR images. For this purpose, different test cases have been selected, and specific measures suitable for the various scenes have been proposed for their characterization. At the end of the thesis, open issues are pointed out and future research is outlined.
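SAR-BM3D itself is far more elaborate and non-homomorphic; as a sketch of the classical homomorphic baseline it is contrasted against (log transform, additive-noise filter, exponential mapping back), with a plain Gaussian smoother standing in for a real AWGN denoiser:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_despeckle(intensity, sigma=1.5, eps=1e-12):
    """Multiplicative speckle -> (roughly) additive noise via log, filter, map back."""
    log_img = np.log(intensity + eps)          # speckle becomes additive in log domain
    log_den = gaussian_filter(log_img, sigma)  # stand-in for any AWGN denoiser
    return np.exp(log_den)                     # back to the intensity domain
```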
APA, Harvard, Vancouver, ISO, and other styles
48

Parida, Satyabrata. "Denoising Of Satellite Images." Thesis, 2014. http://ethesis.nitrkl.ac.in/6612/1/Satyabrata_Parida_PROJECT_THESIS.pdf.

Full text
Abstract:
We use images in our day-to-day life to keep a record of information or merely to convey a message. A number of parameters determine the quality of an image or photograph, and most of them cannot be corrected manually without the help of a computer; whatever image is captured represents a deteriorated version of the original scene. An ideal, perfectly accurate image is hypothetical and can never be obtained; the aim of image processing is to get the best possible image with the minimum number of errors, so correcting this deterioration is of prime importance. Excessive lighting effects, noise, geometrical faults, unwanted colour variations, and blur are some of the important factors that must be addressed to obtain a good and useful image. In this thesis, the deterioration of images caused by noise is addressed. Noise is any undesired information that adversely affects the quality and content of an image. The primary factors responsible for noise in an image are the medium through which the photograph is taken (climatic and atmospheric factors such as pressure and temperature), the accuracy of the instrument used to take the photograph (for instance, the camera), and the quantization of the data used to store the image. Such noise can be removed by an image processing technique called image restoration, which is concerned with reconstructing the original image from a noisy one; that is, it performs an operation on the image that inverts the imperfections of the image formation system. These filtering techniques are very simple and can easily be implemented in software; some perform better than others, depending on the type of noise in the image, and they are used efficiently in the preprocessing modules of a variety of applications. In this thesis, the restoration performance of the arithmetic mean filter, the geometric mean filter, and the median filter is analysed by applying them to satellite images affected by impulse noise, speckle noise, and Gaussian noise; satellite images are considered because they are corrupted by a wide variety of noises. From the results obtained and the PSNR values for various satellite images under different noises, we record the following conclusions:
• the median filter gives better performance than the arithmetic mean filter and the geometric mean filter for satellite images affected by impulse noise;
• the arithmetic mean filter gives better performance than the median and geometric mean filters for Gaussian noise, for all satellite images;
• the arithmetic mean filter gives better performance than the median and geometric mean filters for speckle noise, for all satellite images.
The median filter is most effective in situations where white and black spots appear on the image: the middle value of the m×n window is used to replace the corrupted pixels. Once such spots appear, it becomes difficult to identify which pixels are affected, and replacing them with the AMF, GMF, or HMF is not enough, because they are replaced by a value far from the original one. It is observed that the median filter gives better performance than the AMF and GMF for such distorted images. The performance of a restoration filter can be increased further, to remove noise completely while preserving the edges of the image, by using linear and nonlinear filters together.
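For reference, minimal sketches of the three filters and the PSNR measure compared in the thesis; the window size and peak value are illustrative choices, and SciPy stands in for whatever implementation the thesis used:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def psnr(ref, x, peak=255.0):
    """Peak signal-to-noise ratio in dB against a clean reference image."""
    mse = np.mean((ref.astype(float) - x.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def arithmetic_mean(img, size=3):
    # Average of the size x size neighbourhood; good for Gaussian-like noise.
    return uniform_filter(img.astype(float), size=size)

def geometric_mean(img, size=3, eps=1e-9):
    # Mean in the log domain; tends to blur less than the arithmetic mean.
    return np.exp(uniform_filter(np.log(img.astype(float) + eps), size=size))

def median(img, size=3):
    # Middle value of the window; robust to salt & pepper (impulse) outliers.
    return median_filter(img, size=size)
```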
APA, Harvard, Vancouver, ISO, and other styles
49

Cho, Dongwook. "Image denoising using wavelet transforms." Thesis, 2004. http://spectrum.library.concordia.ca/8141/1/MQ94737.pdf.

Full text
Abstract:
Image denoising is a fundamental process in image processing, pattern recognition, and computer vision. Its main goal is to enhance or restore a noisy image and help other systems (or humans) understand it better. In this thesis, we discuss some efficient approaches to image denoising using wavelet transforms. Since Donoho proposed a simple thresholding method, many different approaches have been suggested over the past decade, and they have shown that denoising using wavelet transforms produces superb results. This is because the wavelet transform has the compaction property of representing a signal with only a small number of large coefficients and a large number of small coefficients. In the first part of the thesis, some important wavelet transforms for image denoising are described, together with a literature review of existing methods. In the latter part, we propose two different approaches to image denoising. The first takes advantage of the higher-order statistical coupling between neighbouring wavelet coefficients and their corresponding coefficients at the parent level, using effective translation-invariant wavelet transforms. The second is based on multivariate statistical modeling: the clean coefficients are estimated by a general rule using a Bayesian approach, and various estimation expressions can be obtained from the a priori probability distribution, called the multivariate generalized Gaussian distribution (MGGD). The method can take various related information into account. The experimental results show that both of our methods give comparatively higher PSNR and fewer visual artifacts than other methods.
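The thesis's MGGD-based estimator is not reproduced here; as one well-known instance of exploiting parent–child coefficient coupling, the bivariate shrinkage rule of Şendur and Selesnick gives a flavour of the approach. The sketch assumes the parent band has already been upsampled to the child band's size:

```python
import numpy as np

def bivariate_shrink(w, w_parent, sigma_n):
    """Shrink a wavelet subband w jointly with its (upsampled) parent subband,
    assuming a bivariate non-Gaussian prior and additive Gaussian noise std sigma_n."""
    # Signal std estimated from the subband (global here, local in practice).
    sigma = np.sqrt(max(np.mean(w ** 2) - sigma_n ** 2, 1e-12))
    r = np.sqrt(w ** 2 + w_parent ** 2)          # joint child-parent magnitude
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return w * gain / np.maximum(r, 1e-12)       # shrink toward zero jointly
```

The key point, shared with the thesis's approach, is that a small child coefficient with a large parent is likely signal rather than noise and is shrunk less than a marginal rule would shrink it.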
APA, Harvard, Vancouver, ISO, and other styles
50

Tseng, Yi-Man, and 曾怡滿. "Noise Models and Denoising Techniques." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/57266931192491025786.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Mathematics
98
We review four common types of image noise: Gaussian noise, uniform noise, Poisson noise, and salt & pepper noise. We construct basic one-dimensional and two-dimensional test images and add the four types of noise at different levels. We then denoise the corrupted images using total variation, soft thresholding, and the adaptive median filter, respectively. Finally, we compare the PSNR values to analyse denoising quality, edge preservation, and blurring.
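For reference, minimal sketches of the four noise models compared in the thesis; the noise levels and the fixed seed are illustrative choices, and images are assumed to be in the [0, 255] range:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, sigma=20.0):
    return img + rng.normal(0.0, sigma, img.shape)

def add_uniform(img, a=30.0):
    return img + rng.uniform(-a, a, img.shape)

def add_poisson(img, scale=1.0):
    # Signal-dependent noise: photon counts drawn around each pixel intensity.
    return rng.poisson(np.clip(img, 0, None) * scale) / scale

def add_salt_pepper(img, p=0.05, lo=0.0, hi=255.0):
    out = img.astype(float).copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = lo                   # pepper
    out[(mask >= p / 2) & (mask < p)] = hi   # salt
    return out
```

PSNR against the clean image can then be compared across noise types and denoisers, which is the experiment the thesis reports.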
APA, Harvard, Vancouver, ISO, and other styles