
Doctoral dissertations on the topic "Denoisers"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Review the 18 best doctoral dissertations on the topic "Denoisers".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever the relevant details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and build your bibliography correctly.

1

Bal, Shamit. "Image compression with denoised reduced-search fractal block coding". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq23210.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Kharboutly, Anas Mustapha. "Identification du système d'acquisition d'images médicales à partir d'analyse du bruit". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT341/document.

Full text source
Abstract:
Medical image processing aims to help doctors improve the diagnostic process. A Computed Tomography (CT) scanner is a medical imaging device used to create cross-sectional 3D images of any part of the human body. Today it is very important to secure medical images during their transmission, storage, visualization, and sharing between doctors. In image forensics, for example, a current problem is being able to identify the acquisition system from digital images alone. In this thesis, we present one of the first analyses of the CT-scanner identification problem. We build on camera identification methods to propose a solution: extracting a sensor-noise fingerprint of the CT-scanner device and then detecting its presence in any new test image. To extract the noise, we use a wavelet-based Wiener denoising filter. We then rely on the properties of medical images to propose advanced solutions for CT-scanner identification, based on new conceptions of the device fingerprint: a three-dimensional fingerprint and a three-layer one (bone, tissue, and air). To validate our work, we ran experiments on real data acquired with multiple CT-scanner devices. Our methods are robust and give high identification accuracy: we were able to identify both the CT scanner used to acquire a 3D image and the acquisition axis.
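The pipeline this abstract describes (wavelet-domain denoising, residual averaging into a device fingerprint, correlation-based detection) can be sketched as follows. This is a minimal illustration, not the thesis's exact filter: a soft-threshold wavelet denoiser stands in for the wavelet-based Wiener filter, and the threshold `thr` and decision level `tau` are placeholder values.

```python
import numpy as np
import pywt  # wavelet library; any equivalent decomposition would do

def noise_residual(img, wavelet="db8", level=3, thr=5.0):
    """Residual = image minus a wavelet-denoised version of itself."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised, wavelet)[: img.shape[0], : img.shape[1]]
    return img - rec

def device_fingerprint(images):
    """Average the residuals of many images from one device."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def matches(img, fingerprint, tau=0.01):
    """Normalized correlation between a test residual and the fingerprint."""
    r, f = noise_residual(img).ravel(), fingerprint.ravel()
    r, f = r - r.mean(), f - f.mean()
    rho = np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f))
    return rho > tau, rho
```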
APA, Harvard, Vancouver, ISO, and other styles
3

Tsai, Shu-Jen Steven. "Study of Global Power System Frequency Behavior Based on Simulations and FNET Measurements". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28303.

Full text source
Abstract:
A global view of power system frequency opens a new window onto the dynamics of large systems. With the aid of the global positioning system (GPS), measurements from different locations can be time-synchronized, so system-wide observation and analysis become possible. As part of the U.S. nation-wide power frequency monitoring network project (FNET), the first part of the study uses system simulation to assess the frequency measurement accuracy needed to observe frequency oscillations from events such as remote generation drops in three U.S. power systems. Electromechanical wave propagation phenomena during system disturbances, such as generation trips, load rejection, and line opening, are observed and discussed. Uniform system models are then developed to investigate the detailed behavior of wave propagation. A visualization tool is developed to help view frequency behavior simulations; frequency replay from simulation data provides insight into how these electromechanical frequency waves propagate when major events occur. The speeds of electromechanical wave propagation in different areas of the U.S. systems, as well as in the uniform models, are estimated and their characteristics discussed. A theoretical derivation relating generator mechanical power to bus frequency is provided, and the delayed frequency response is illustrated. Field-measured frequency data from FNET are also examined. Outlier removal and wavelet-based denoising are applied to filter spikes and noise from the measured frequency data. Frequency statistics of the three major U.S. power grids are investigated. A comparison between data from a phasor measurement unit (PMU) at a high-voltage substation and FNET data taken from 110 V outlets at the distribution level shows close tracking between the two. Several generator trip events in the Eastern Interconnection and the Western Electricity Coordinating Council system are recorded and their frequency patterns analyzed. Our trigger program can detect noticeable frequency drops or rises; sample results over a 13-month period are shown. In addition to transient states, quasi-steady-state behavior such as oscillations can also be observed by FNET. Several potential applications of FNET in monitoring and analysis, system control, model validation, and other areas are discussed; some applications of FNET are still beyond our imagination.
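The preprocessing the abstract mentions (outlier removal followed by wavelet-based denoising of measured frequency records) might look like the sketch below; the spike threshold, wavelet choice, and window size are assumptions, not values from the dissertation.

```python
import numpy as np
from scipy.signal import medfilt
import pywt

def clean_frequency(freq, spike_thr=0.05, wavelet="db4", level=4):
    """Remove spikes, then wavelet-denoise a 1-D frequency record (Hz)."""
    med = medfilt(freq, kernel_size=9)                      # running median
    despiked = np.where(np.abs(freq - med) > spike_thr, med, freq)
    coeffs = pywt.wavedec(despiked, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(despiked)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(freq)]
```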
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
4

Risi, Stefano. "Un metodo automatico per la ricostruzione di immagini astronomiche". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14128/.

Full text source
Abstract:
The reconstruction of a digital signal is a process with applications in many scientific and technological fields, such as astronomy, biomedicine, geophysics, and even music. Its task is to clean the signal of the interference that degraded it and to read it in the form emitted by the source. This work deals with the reconstruction of digital astronomical images, using a variational regularization model based on the total variation function. The goal is to specialize an existing algorithm, the Constrained Least Squares Total Variation (CLSTV), into a version that keeps some features of the original, such as the automatic search for the regularization parameter, but gives better results in the reconstruction of astronomical images. Chapter 1 defines the concept of an image and how it is acquired by a digital device; it explains how blurring and noise arise, and presents the boundary conditions most commonly used in image reconstruction. Chapter 2 describes ill-posed problems, of which image reconstruction is one, and reviews the main methods for regularizing a digital signal, with particular attention to the total variation function as a penalty. Chapter 3 is devoted to the two methods used, Constrained Least Squares Total Variation and Constrained Kullback-Leibler Total Variation (CKLTV), both automatic regularization methods that use total variation as the penalty function. Chapter 4 presents the numerical results obtained by applying the CKLTV method to the reconstruction of test images, compared with those obtained with the CLSTV method.
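A worked sketch of the variational model discussed above: denoising with a smoothed total-variation penalty, minimized by plain gradient descent. The automatic regularization-parameter search that distinguishes CLSTV/CKLTV is not reproduced here; `lam`, `step`, and `eps` are fixed by hand.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.2, iters=200, eps=1e-6):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), where TV_eps
    is a smoothed total variation (eps avoids division by zero)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
        gy = np.diff(x, axis=0, append=x[-1:, :])
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / norm, gy / norm
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        x -= step * ((x - y) - lam * div)           # -div is the TV gradient
    return x
```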
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Jong-Hoon. "Compressed sensing and finite rate of innovation for efficient data acquisition of quantitative acoustic microscopy images". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30225.

Full text source
Abstract:
Quantitative acoustic microscopy (QAM) is a well-established modality for forming 2D parameter maps of the mechanical properties of soft tissues at microscopic scales. In leading-edge QAM studies, the sample is raster-scanned (spatial step size of 2 µm) using a 250 MHz transducer, resulting in a 3D RF data cube; the RF signal at each spatial location is processed to obtain acoustic parameters, e.g., speed of sound or acoustic impedance. The scanning time is directly proportional to the sample size and can range from a few minutes to tens of minutes. Since experimental conditions must be kept constant for the sensitive thin-sectioned samples, scanning time is an important practical issue. To address this challenge, we propose a novel approach inspired by compressed sensing (CS) and finite rate of innovation (FRI). The success of CS relies on the sparsity of the data under consideration, incoherent measurement, and optimization techniques, while FRI rests on a signal model fully characterized by a limited number of parameters. From this perspective, and taking into account the physics of QAM data acquisition, QAM is well suited to these frameworks. However, the mechanical structure of a QAM system does not support canonical CS measurement schemes, and the composition of the RF signal model does not fit existing FRI schemes. In this thesis, to overcome these limitations, a novel sensing framework for CS is presented in the spatial domain: a recently proposed approximate message passing (AMP) algorithm is adapted to account for the underlying statistics of samples sparsely collected by the proposed scanning patterns. In the time domain, to achieve accurate recovery from a small set of samples of QAM RF signals, we employ a sum-of-sincs (SoS) sampling kernel and an autoregressive (AR) model estimator. The spiral scanning pattern, introduced as a sensing technique applicable to QAM systems, significantly reduces the number of spatial samples needed to reconstruct speed-of-sound images of a human lymph node. [...]
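The sparse-recovery core of the CS framework described above can be illustrated with ISTA on a random Gaussian sensing matrix; the matrix and parameters are stand-ins for the QAM spiral-scanning operator, and this is not the thesis's AMP algorithm.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=300):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# usage sketch: recover a sparse vector from m < n random measurements
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=0.01)
```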
APA, Harvard, Vancouver, ISO, and other styles
6

Contato, Welinton Andrey. "Análise e restauração de vídeos de Microscopia Eletrônica de Baixa Energia". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-04012017-143212/.

Full text source
Abstract:
Low-energy electron microscopy (LEEM) is a recent and powerful surface-science imaging modality prone to considerable amounts of degradation, such as noise and blurring. Still not fully addressed in the literature, this work aimed at analysing and identifying the sources of degradation in LEEM videos, as well as the adequacy of existing noise reduction and deblurring techniques for LEEM data. It also presents two new noise reduction techniques aimed at preserving texture and small details. Our analysis revealed that LEEM images exhibit a large amount and variety of noise, with Gaussian noise being the most frequent. To handle the deblurring issue, the point spread function (PSF) of the microscope used in the experiments was also estimated. The combination of deblurring and denoising techniques for Gaussian noise was studied as well. Results showed that non-local techniques such as Non-Local Means (NLM) and Block-Matching 3-D (BM3D) are more adequate for filtering LEEM images while preserving discontinuities. We also showed that some deblurring techniques are not suitable for LEEM images, except the Richardson-Lucy (RL) approach, which coped with most of the blur without adding extra degradation. The undesirable removal of small structures and texture by existing denoising techniques motivated the development of two novel Gaussian denoising techniques (NLM3D-LBP-MSB and NLM3D-LBP-Adaptive), which exhibited good results for images with a large amount of texture; BM3D, however, was superior for images with large homogeneous regions. Quantitative experiments were carried out on synthetic images. For real LEEM images, a qualitative analysis was conducted in which observers visually assessed the restoration results of existing techniques and of the two proposed ones. This experiment showed that non-local denoising methods were superior, especially when combined with the RL method. The proposed methods produced good results but were outperformed by NLM and BM3D. This work showed that non-local denoising techniques are the most adequate for LEEM data, and that the RL technique is very efficient for deblurring.
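The Richardson-Lucy step the study found effective is compact enough to sketch. The PSF is assumed known here (the thesis estimates the microscope PSF), and the iteration count is illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, iters=30, eps=1e-12):
    """Multiplicative RL updates: x <- x * (K^T (y / (K x)))."""
    x = np.full_like(y, y.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                      # adjoint of the convolution
    for _ in range(iters):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = y / (blurred + eps)
        x *= fftconvolve(ratio, psf_flip, mode="same")
    return x
```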
APA, Harvard, Vancouver, ISO, and other styles
7

Gavaskar, Ruturaj G. "On Plug-and-Play Regularization using Linear Denoisers". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5973.

Full text source
Abstract:
The problem of inverting a given measurement model comes up in several computational imaging applications. For example, in CT and MRI, we are required to reconstruct a high-resolution image from incomplete noisy measurements, whereas in superresolution and deblurring, we try to infer the ground-truth from low-resolution or blurred images. Traditionally, this is done by minimizing $f + \phi$, where $f$ is a data-fidelity (or loss) function that is determined by the acquisition process, and $\phi$ is a regularization (or penalty) function that is based on a subjective prior on the target image. The solution is obtained numerically using iterative algorithms such as ISTA or ADMM. While several forms of regularization and associated optimization methods have been proposed in the imaging literature of the last few decades, the use of denoisers (aka denoising priors) for image regularization is a relatively recent phenomenon. This has partly been triggered by advances in image denoising in the last 20 years, leading to the development of powerful image denoisers such as BM3D and DnCNN. In this thesis, we look at a recent protocol called Plug-and-Play (PnP) regularization, where image denoisers are deployed within iterative algorithms for image regularization. PnP consists of replacing the proximal map --- an analytical operator at the core of ISTA and ADMM --- associated with the regularizer $\phi$ with an image denoiser. This is motivated by the intuition that off-the-shelf denoisers such as BM3D and DnCNN offer better image priors than traditional hand-crafted regularizers such as total variation. While PnP does not use an explicit regularizer, it still makes use of the data-fidelity function $f$. However, since the replacement of the proximal map with a denoiser is ad-hoc, the optimization perspective is lost --- it is not clear if the PnP iterations can be interpreted as optimizing some objective function $f + \phi$. Remarkably, PnP reconstructions are of high quality and competitive with state-of-the-art methods. Following this, researchers have tried explaining why plugging a denoiser within an inversion algorithm should work in the first place, why it produces high-quality images, and whether the final reconstruction is optimal in some sense. In this thesis, we try answering such questions, some of which have been the topic of active research in the imaging community in recent years. Specifically, we consider the following questions. --> Fixed-point convergence: Under what conditions does the sequence of iterates generated by a PnP algorithm converge? Moreover, are these conditions met by existing real-world denoisers? --> Optimality and objective convergence: Can we interpret PnP as an algorithm that minimizes $f + \phi$ for some appropriate $\phi$? Moreover, does the algorithm converge to a solution of this objective function? --> Exact and robust recovery: Under what conditions can we recover the ground-truth exactly via PnP? And is the reconstruction robust to noise in the measurements? While early work on PnP has attempted to answer some of these questions, many of the underlying assumptions are either strong or unverifiable. This is essentially because denoisers such as BM3D and DnCNN are mathematically complex, nonlinear and difficult to characterize. A first step in understanding complex nonlinear phenomena is often to develop an understanding of some linear approximation. In this spirit, we focus our attention on denoisers that are linear. 
In fact, there exists a broad class of real-world denoisers that are linear and whose performance is quite decent; examples include kernel filters (e.g. NLM, bilateral filter) and their symmetrized counterparts. This class has a simple characterization that helps to keep the analysis tractable and the assumptions verifiable. Our main contributions lie in resolving the aforementioned questions for PnP algorithms where the plugged denoiser belongs to this class. We summarize them below. --> We prove fixed-point convergence of the PnP version of ISTA under mild assumptions on the measurement model. --> Based on the theory of proximal maps, we prove that a PnP algorithm in fact minimizes a convex objective function $f + \phi$, subject to some algorithmic modifications that arise from the algebraic properties of the denoiser. Notably, unlike previous results, our analysis applies to non-symmetric linear filters. --> Under certain verifiable assumptions, we prove that a signal can be recovered exactly (resp. robustly) from clean (resp. noisy) measurements using PnP regularization. As a more profound application, in the spirit of classical compressed sensing, we are able to derive probabilistic guarantees on exact and robust recovery for the compressed sensing problem where the sensing matrix is random. An implication of our analysis is that the range of the linear denoiser plays the role of a signal prior and its dimension essentially controls the size of the set of recoverable signals. In particular, we are able to derive the sample complexity of compressed sensing as a function of distortion error and success rate. We validate our theoretical findings numerically, discuss their implications and mention possible future research directions.
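A minimal PnP-ISTA loop matching the protocol discussed above, with a normalized box filter as the plugged linear denoiser (standing in for kernel filters such as NLM); the forward model, step size, and filter size are assumptions, not the thesis's configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pnp_ista(y, forward, adjoint, step=1.0, iters=100, size=3):
    """PnP-ISTA: gradient step on the data term, then a linear denoiser D."""
    x = adjoint(y)
    for _ in range(iters):
        grad = adjoint(forward(x) - y)                  # grad of 0.5||Hx - y||^2
        x = uniform_filter(x - step * grad, size=size)  # plugged denoiser
    return x

# usage sketch for denoising (H = identity)
rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
rec = pnp_ista(noisy, lambda v: v, lambda v: v, step=0.5, iters=50)
```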
APA, Harvard, Vancouver, ISO, and other styles
8

Nair, Pravin. "Provably Convergent Algorithms for Denoiser-Driven Image Regularization". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5887.

Full text source
Abstract:
Some fundamental reconstruction tasks in image processing can be posed as an inverse problem where we are required to invert a given forward model. For example, in deblurring and superresolution, the ground-truth image needs to be estimated from blurred and low-resolution images, whereas in CT and MR imaging, a high-resolution image must be reconstructed from a few linear measurements. Such inverse problems are invariably ill-posed—they exhibit non-unique solutions and the process of direct inversion is unstable. Some form of image model (or prior) on the ground truth is required to regularize the inversion process. For example, a classical solution involves minimizing f + g, where the loss term f is derived from the forward model and the regularizer g is used to constrain the search space. The challenge is to come up with a formula for g that can yield good image reconstructions. This has been the center of research activity in image reconstruction for the last few decades. "Regularization using denoising" is a recent breakthrough in which a powerful denoiser is used for regularization purposes, instead of having to specify some hand-crafted g (but the loss f is still used). This has been empirically shown to yield significantly better results than staple f + g minimization; in fact, the results are generally comparable and often superior to state-of-the-art deep learning methods. In this thesis, we consider two such popular models for image regularization—Plug-and-Play (PnP) and Regularization by Denoising (RED). In particular, we focus on the convergence aspect of these iterative algorithms, which is not well understood even for simple denoisers. This is important since the lack of a convergence guarantee can result in spurious reconstructions in imaging applications. The contributions of the thesis in this regard are as follows. PnP with linear denoisers: We show that for a class of non-symmetric linear denoisers that includes kernel denoisers such as nonlocal means, one can associate a convex regularizer g with the denoiser. More precisely, we show that any such linear denoiser can be expressed as the proximal operator of a convex function, provided we work with a non-standard inner product (instead of the Euclidean inner product). In particular, the regularizer is quadratic, but unlike classical quadratic regularizers, the quadratic form is derived from the observed data. A direct implication of this observation is that (a simple variant of) the PnP algorithm based on this linear denoiser amounts to solving an optimization problem of the form f + g, though it was not originally conceived this way. Consequently, if f is convex, both objective and iterate convergence are guaranteed for the PnP algorithm. Apart from the convergence guarantee, we go on to show that this observation has algorithmic value as well. For example, in the case of linear inverse problems such as superresolution, deblurring and inpainting (where f is quadratic), we can reduce the problem of minimizing f + g to a linear system. In particular, we show how, using Krylov solvers, we can solve this system efficiently in just a few iterations. Surprisingly, the reconstructions are found to be comparable with state-of-the-art deep learning methods. To the best of our knowledge, the possibility of achieving near state-of-the-art image reconstructions using a linear solver has not been demonstrated before.
PnP and RED with learning-based denoisers: In general, state-of-the-art PnP and RED algorithms rely on trained CNN denoisers such as DnCNN. Unlike linear denoisers, it is difficult to place PnP and RED algorithms within an optimization framework in the case of CNN denoisers. Nonetheless, we can still try to understand the convergence of the sequence of iterates generated by these algorithms. For a convex loss f, we show that this question can be resolved using the theory of monotone operators: the denoiser being averaged (a subclass of nonexpansive operators) is sufficient for iterate convergence of PnP and RED. Using numerical examples, we show that existing CNN denoisers are not nonexpansive and can cause PnP and RED algorithms to diverge. Can we train denoisers that are provably nonexpansive? Unfortunately, this is computationally challenging—simply checking the nonexpansivity of a CNN is known to be intractable. As a result, existing algorithms for training nonexpansive CNNs either cannot guarantee nonexpansivity or are computationally intensive. We show that this problem can be solved by moving away from CNN denoisers to unfolded deep denoisers. In particular, we are able to construct unfolded networks that are efficiently trainable, come with convergence guarantees for PnP and RED algorithms, and whose regularization capacity can be matched with CNN denoisers. To our knowledge, we are the first to propose a simple framework for training provably averaged (contractive) denoisers using unfolding networks. We provide numerical results to validate our theoretical findings, compare our algorithms with state-of-the-art regularization techniques, and point out some future research directions stemming from the thesis.
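The convergence conditions above hinge on properties such as nonexpansivity of the plugged denoiser. For any black-box denoiser D, one can probe this numerically by power iteration on a finite-difference Jacobian, as in this sketch; it is a diagnostic under stated assumptions, not the thesis's analysis or training procedure.

```python
import numpy as np

def lipschitz_estimate(D, x, iters=30, h=1e-4, seed=0):
    """Power iteration on the finite-difference Jacobian of denoiser D at x.
    A value <= 1 is consistent with D being nonexpansive near x."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)
    Dx = D(x)
    s = 0.0
    for _ in range(iters):
        Jv = (D(x + h * v) - Dx) / h        # directional derivative J v
        # For symmetric J this converges to the spectral norm; otherwise it
        # still gives a usable lower bound on the true Lipschitz constant.
        s = np.linalg.norm(Jv)
        v = Jv / (s + 1e-12)
    return s
```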
APA, Harvard, Vancouver, ISO, and other styles
9

Yuan, Ming-pin, and 袁鳴彬. "Adaptive DeNoise Filter". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/61007015843867316761.

Full text source
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 98 (2009-2010)
In the world of multimedia, digital images are often degraded by noise introduced by transmission errors and by image acquisition and storage devices, which severely affects human visual perception and recognition. To stabilize image preprocessing systems, removing such noise is an important issue in this application area. In this thesis, we propose a method that adapts to a wide range of noise densities, called the Adaptive DeNoise Filter (AND). The main focus of this work is to remove salt-and-pepper noise to obtain better image quality. The filter performs two steps: the first uses the lifting-based discrete wavelet transform (LDWT) and an adaptive median filter (AMF) to estimate the noise density for the next step; the second uses adaptive-search-window median filtering to remove noise based on the information obtained in the first step. With this algorithm, the estimated noise density can be used to adjust window sizes and achieve better image restoration performance. Experimental results show that the proposed method can recover the test image for noise densities from 5% to 90%. The average PSNR of 25.05 dB satisfies the sensitivity of human visual perception.
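The adaptive-search-window median filtering of the second step might be sketched as below, with extreme values (0/255) used as salt-and-pepper candidates; the LDWT/AMF density estimation of the first step is not reproduced, and the maximum window size is illustrative.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Replace extreme-valued pixels with the median of noise-free
    neighbours, enlarging the window until enough of them are found."""
    out = img.copy()
    noisy = (img == 0) | (img == 255)          # salt-and-pepper candidates
    for i, j in zip(*np.nonzero(noisy)):
        for r in range(1, max_win // 2 + 1):   # grow the search window
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            good = patch[(patch != 0) & (patch != 255)]
            if good.size >= 3:
                out[i, j] = np.median(good)
                break
    return out
```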
APA, Harvard, Vancouver, ISO, and other styles
10

"Image cosegmentation and denoise". 2012. http://library.cuhk.edu.hk/record=b5549125.

Full text source
Abstract:
We present two novel methods to tackle low-level computer vision tasks, namely image cosegmentation and denoising.
In our cosegmentation model, we observe that object correspondence can provide useful information for foreground statistical estimation. Our method can handle extremely challenging scenarios such as deformation, perspective changes, and dramatically different viewpoints and scales. In addition, we develop a novel energy minimization model that can handle multiple images. Experiments on real and benchmark data qualitatively and quantitatively demonstrate the effectiveness of the approach.
On the other hand, noise is always tightly coupled with high-frequency image structure, making noise reduction generally very difficult. In our denoising model, we propose slightly optically defocusing the image in order to loosen this noise-structure coupling. This allows us to reduce noise more effectively and subsequently restore the small defocus. We show analytically how this is possible and demonstrate our technique on a number of examples, including low-light images.
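The final step of the defocus-then-denoise idea, inverting the known small blur, can be sketched with a frequency-domain Wiener deconvolution; the PSF is assumed known (e.g., a small Gaussian standing in for the true defocus kernel), and the noise-to-signal ratio `nsr` is a placeholder.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Invert a known small defocus blur: X = conj(K)*Y / (|K|^2 + nsr)."""
    kpad = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    kpad[:kh, :kw] = psf
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel
    K, Y = np.fft.fft2(kpad), np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))
```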
Qin, Zenglu.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2012.
Contents:
Abstract
Acknowledgement
Chapter 1: Introduction
1.1 Motivation and Objectives
1.1.1 Cosegmentation
1.1.2 Image Denoise
1.2 Thesis Outline
Chapter 2: Background
2.1 Cosegmentation
2.2 Image Denoise
Chapter 3: Cosegmentation of Multiple Deformable Objects
3.1 Related Work
3.2 Object Corresponding Cosegmentation
3.3 Importance Map with Object Correspondence
3.3.1 Feature Importance Map
3.3.2 Importance Energy E_i(x_p)
3.4 Experimental Result
3.4.1 Two-Image Cosegmentation
3.4.2 ETHZ Toys Dataset
3.4.3 More Results
3.5 Summary
Chapter 4: Using Optical Defocus to Denoise
4.1 Related Work
4.2 Noise Analysis
4.3 Noise Estimation with Focal Blur
4.3.1 Noise Estimation with a Convolution Model
4.3.2 Determining λ
4.4 Final Deconvolution and Error Analysis
4.5 Implementation
4.6 Quantitative Evaluation
4.7 More Experimental Results
4.8 Summary
Chapter 5: Conclusion
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
11

Lin, Hong-Dun, and 林宏墩. "Subband Filtering Technique for Medical Image Enhancement and Denoise". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/07949290446584444512.

Full text source
Abstract:
Master's thesis
Chung Yuan Christian University
Department of Electrical Engineering
Academic year 87 (1998-1999)
Medical images are now widely used in clinical diagnosis, and image quality is very important to diagnostic accuracy. However, the contrast of the original medical image is not always good, and the noise in the image is sometimes high, so improving image quality is an important research topic. This thesis addresses medical image processing in two parts, image enhancement and image denoising, both based on subband analysis. The enhancement method is demonstrated on digital mammograms and chest X-ray images and compared with several traditional enhancement methods. According to enhancement results on real images with a phantom simulating tumors and microcalcifications, image contrast can be improved about 5.5 times, and experiments on real images of patients also give quite good results. In the second part, a phantom study was used to validate the performance of the denoising technique. The cylindrical phantom contains 25 objects of two different diameters and was imaged by a PET system into 15 transaxial phantom images. Profiles of the original image and of the image processed by subband denoising were measured to compare noise levels. Comparing the standard deviations of the profiles in the horizontal and vertical directions clearly shows that the noise level in the denoised image improves by around 40% on average. These results indicate that the presented methods can efficiently improve the contrast of medical images and conspicuously suppress noise, and can be useful in medical image preprocessing.
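A wavelet decomposition can stand in for the thesis's subband analysis: amplifying the detail subbands raises local contrast, while a gain below one suppresses noise. This is a hedged sketch, not the thesis's actual filter bank; the wavelet, level, and gain are assumptions.

```python
import pywt

def subband_enhance(img, wavelet="db2", level=2, gain=2.0):
    """Scale detail subbands: gain > 1 enhances contrast, gain < 1 denoises."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    boosted = [coeffs[0]] + [tuple(gain * c for c in d) for d in coeffs[1:]]
    out = pywt.waverec2(boosted, wavelet)
    return out[: img.shape[0], : img.shape[1]]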
APA, Harvard, Vancouver, ISO, and other styles
12

Chen, Chih-Hsien, and 陳芝仙. "A Study of Denoise Algorithm for Extremely Corrupted Images". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/91446528400215832561.

Full text source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 96 (2007-2008)
Digital images can be contaminated by impulse noise during transmission, which can severely degrade image quality. This thesis proposes a switching median filter that effectively denoises extremely corrupted images while preserving image detail, by first determining whether each pixel is corrupted. The proposed method is based on the BDND (Boundary Discriminative Noise Detection) algorithm. We correct its shortcomings for unequal densities of low-intensity and high-intensity impulse noise and provide novel noise detection techniques for images corrupted by both. A standard median filter that considers only the uncorrupted pixels is then used to filter out the noise. Four noise models are considered for performance evaluation. The results clearly show that the proposed modified BDND attains good performance and improves image quality. Index terms: image denoising, impulse noise detection, nonlinear filter, switching median filter.
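The point the abstract makes about unequal low- and high-intensity noise densities is easy to reproduce when generating test data. This sketch injects a general impulse-noise model with separate densities and intensity ranges; all parameter values are illustrative, not the four models used in the thesis.

```python
import numpy as np

def add_impulse(img, p_low=0.15, p_high=0.05, low_max=10, high_min=245, seed=0):
    """Impulse noise with unequal low/high densities, the case the modified
    BDND targets; values drawn from [0, low_max] and [high_min, 255]."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    u = rng.random(img.shape)
    lo, hi = u < p_low, u > 1 - p_high          # disjoint corruption masks
    out[lo] = rng.integers(0, low_max + 1, lo.sum())
    out[hi] = rng.integers(high_min, 256, hi.sum())
    return out
```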
APA, Harvard, Vancouver, ISO, and other styles
13

CHUNG, HSIN-YI, and 鍾幸宜. "Improvement of Denoised Image Quality Using Global Similarity Post-Processing Approach". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5wdh26.

Full text source
Abstract:
Master's thesis
Asia University
Department of Information Communication
Academic year 104 (2015-2016)
The quality of a digital image deteriorates when it is corrupted by impulse noise during recording or transmission, so efficiently removing impulse noise from a corrupted image is an important research task. This thesis proposes a post-processing method that improves a denoised image corrupted by salt-and-pepper noise. In the first stage, a variable-size local window combined with a pixel probability adaptation method removes impulse noise from the noisy image. In the second stage, a codebook of clean pixels with their surrounding neighbors, built from the denoised image, is used for post-processing: each restored pixel and its neighbors in a local window are compared with every codeword to select the optimum one, and the restored pixel is replaced by the center pixel of the optimum codeword when the distance between that codeword and the pixels in the local window is smaller than a given threshold. A smoothed pixel is thus replaced by a clean version. Experimental results show that the proposed post-processing method improves the quality of a denoised image; the objective measure, peak SNR, is further improved.
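The second-stage codebook matching described above might look like this sketch, assuming 3x3 codewords and a hand-picked squared-distance threshold; codebook construction is not reproduced.

```python
import numpy as np

def codebook_postprocess(denoised, codebook, thr=100.0):
    """Replace each pixel by the centre of its nearest codeword when the
    window-to-codeword distance falls below thr (codewords: N x 3 x 3)."""
    out = denoised.copy()
    H, W = denoised.shape
    flat = codebook.reshape(len(codebook), -1).astype(float)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            w = denoised[i - 1:i + 2, j - 1:j + 2].ravel().astype(float)
            d = np.sum((flat - w) ** 2, axis=1)   # squared distances
            k = np.argmin(d)
            if d[k] < thr:                        # thr is illustrative
                out[i, j] = codebook[k, 1, 1]     # centre of best codeword
    return out
```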
APA, Harvard, Vancouver, ISO, and other styles
14

Wang, Xiao-Yu, and 王孝宇. "The Implementation of the ECG Denoise Filter for Multiple and Variant Noises". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/49787635587518262687.

Full text source
Abstract:
Master's thesis
Asia University
Department of Photonics and Communication Engineering
Academic year 103 (2014-2015)
In this thesis, we use a sparse normal-form transformation matrix to transfer any given realization to its sparse normal form. Such a realization simultaneously offers high computational efficiency and strong robustness under finite-precision implementations. In addition, we use an IIR filter bank and the developed algorithm to detect multiple noises in an ECG signal. The effectiveness of the algorithm is verified in MATLAB.
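Two branches of such an IIR filter bank, a zero-phase notch for powerline interference and a high-pass for baseline wander, can be sketched in Python as a stand-in for the MATLAB implementation the thesis verifies; the 60 Hz target, 360 Hz sampling rate, and cutoff are assumptions, and the sparse normal-form realization itself is not reproduced.

```python
from scipy.signal import iirnotch, butter, filtfilt

def remove_powerline(ecg, fs=360.0, f0=60.0, q=30.0):
    """Zero-phase IIR notch at f0 Hz (one branch of the filter bank)."""
    b, a = iirnotch(f0, q, fs=fs)
    return filtfilt(b, a, ecg)

def remove_baseline(ecg, fs=360.0, fc=0.5):
    """Second-order high-pass to suppress baseline wander below fc Hz."""
    b, a = butter(2, fc, btype="highpass", fs=fs)
    return filtfilt(b, a, ecg)
```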
APA, Harvard, Vancouver, ISO, and other styles
15

Cipli, Gorkem. "Underwater audio event detection, identification and classification framework (AQUA)". Thesis, 2016. http://hdl.handle.net/1828/7690.

Full text source
Abstract:
An audio event detection and classification framework (AQUA) is developed for the North Pacific underwater acoustic research community. AQUA has been developed, tested, and verified on hydrophone data from Ocean Networks Canada (ONC), a non-governmental organization that collects underwater passive acoustic data. AQUA enables the processing of a large acoustic database that grows at a rate of 5 GB per day. Novel algorithms that overcome challenges such as activity detection in broadband non-Gaussian noise achieve accurate, high classification rates. The main AQUA modules are a blind activity detector, a denoiser, and a classifier. The AQUA algorithms yield promising classification results with accurate time stamps.
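A blind activity detector of the kind named above can be reduced to frame energies compared against a robust noise floor. This sketch uses a median/MAD threshold with illustrative parameters; it says nothing about AQUA's handling of non-Gaussian noise or its classifier.

```python
import numpy as np

def detect_events(x, fs, frame=0.5, k=4.0):
    """Flag frames whose log-energy exceeds median + k * MAD; return the
    start times (seconds) of flagged frames."""
    n = int(frame * fs)
    frames = x[: len(x) // n * n].reshape(-1, n)
    e = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
    floor = np.median(e)
    mad = np.median(np.abs(e - floor))
    hits = np.nonzero(e > floor + k * (mad + 1e-12))[0]
    return hits * n / fs
```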
Graduate
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Peter Junteng. "Using Gaussian process regression to denoise images and remove artefacts from microarray data". 2007. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=452813&T=F.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Chang-Jui, and 陳昶叡. "An Effective Denoise Method Based on Two-Stage Strategy for Removing Impulse Noise". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/17109566467844206903.

Full text source
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 98 (2009-2010)
Noise removal plays an important role in digital image preprocessing, since noise directly affects the results of later processing such as image segmentation, image fusion, and edge detection. The goal of noise removal is to suppress the noise while preserving image detail. In this thesis, we propose an effective two-stage denoising method for removing impulse noise. In the first stage, we detect noisy pixels accurately in the corrupted image by using a hierarchy of windows instead of a fixed window. In the second stage, we restore the noise candidates detected in the first stage using a modified edge-preserving regularization method, which can accurately restore images corrupted at high noise ratios. We design two experiments to evaluate the method: one restores images with a single noise ratio, and the other focuses on restoring images with composite noise ratios. In both experiments, our method outperforms other methods in noise detection and image restoration.
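The second-stage restoration can be sketched as gradient descent on an edge-preserving (Huber-type) potential over neighbour differences, updating only the pixels flagged as noise in the first stage. The detection hierarchy is not reproduced, the mask is assumed given, and all parameters are illustrative.

```python
import numpy as np

def restore_noisy_pixels(img, noisy_mask, iters=100, step=0.15, delta=20.0):
    """Descend sum of phi(neighbour differences), updating flagged pixels
    only; phi is a Huber potential, so edges are penalized gently."""
    x = img.astype(float).copy()
    def dphi(t):                                    # Huber derivative
        return np.clip(t, -delta, delta)
    for _ in range(iters):
        g = np.zeros_like(x)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            g += dphi(x - np.roll(x, shift, axis=axis))
        x[noisy_mask] -= step * g[noisy_mask]       # clean pixels stay fixed
    return x
```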
APA, Harvard, Vancouver, ISO, and other styles
18

"Filtering Methods for Mass Spectrometry-based Peptide Identification Processes". Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-10-1271.

Full text source
Abstract:
Tandem mass spectrometry (MS/MS) is a powerful tool for identifying peptide sequences. In a typical experiment, incorrect peptide identifications may result from noise contained in the MS/MS spectra and from the low quality of the spectra. Filtering methods are widely used to remove the noise and improve the quality of the spectra before the subsequent identification process. However, existing filtering methods often use features with empirically assigned weights, which may not reflect the fact that the contribution (reflected by the weight) of each feature can vary from dataset to dataset. Filtering methods that can adapt to different datasets therefore have the potential to improve peptide identification results. This thesis proposes two adaptive filtering methods, denoising and quality assessment, both of which improve the efficiency and effectiveness of peptide identification. First, the denoising approach employs an adaptive method for picking signal peaks that is better suited to the datasets of interest. By applying the approach to two tandem mass spectra datasets, about 66% of peaks (likely noise peaks) can be removed; the number of peptides identified on those datasets increased by 14% and 23%, respectively, compared to previous work (Ding et al., 2009a). Second, the quality assessment method estimates the probability of a spectrum being high quality from quality assessments of the individual features; the probabilities are estimated by solving a constrained optimization problem. Experimental results on two datasets show that searching only the high-quality tandem spectra identified by this method saves about 56% and 62% of database search time while losing 9% of high-quality spectra. Finally, the thesis suggests future research directions, including feature selection and clustering of peptides.
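The signal-peak picking of the denoising stage might be approximated by a per-window intensity quantile, as in this sketch; the window width and quantile are assumptions, chosen so that roughly two-thirds of peaks are removed, matching the fraction reported above. The thesis's adaptive weighting is not reproduced.

```python
import numpy as np

def pick_signal_peaks(mz, intensity, window=100.0, q=0.66):
    """Keep peaks above the per-window intensity quantile q; windows
    partition the m/z axis into bins of the given width."""
    keep = np.zeros(len(mz), dtype=bool)
    edges = np.arange(mz.min(), mz.max() + window, window)
    for a, b in zip(edges[:-1], edges[1:]):
        idx = (mz >= a) & (mz < b)
        if idx.any():
            thr = np.quantile(intensity[idx], q)   # window-adaptive threshold
            keep |= idx & (intensity > thr)
    return mz[keep], intensity[keep]
```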
APA, Harvard, Vancouver, ISO, and other styles