Ready-made bibliography on the topic "Regularization by Denoising"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Regularization by Denoising".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, provided the relevant details are available in the metadata.

Journal articles on the topic "Regularization by Denoising"

1

Lin, Huangxing, Yihong Zhuang, Xinghao Ding, Delu Zeng, Yue Huang, Xiaotong Tu, and John Paisley. "Self-Supervised Image Denoising Using Implicit Deep Denoiser Prior". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (26.06.2023): 1586–94. http://dx.doi.org/10.1609/aaai.v37i2.25245.

Full text
Abstract:
We devise a new regularization for denoising with self-supervised learning. The regularization uses a deep image prior learned by the network, rather than a traditional predefined prior. Specifically, we treat the output of the network as a "prior" that we again denoise after "re-noising." The network is updated to minimize the discrepancy between the twice-denoised image and its prior. We demonstrate that this regularization enables the network to learn to denoise even if it has not seen any clean images. The effectiveness of our method is based on the fact that CNNs naturally tend to capture low-level image statistics. Since our method utilizes the image prior implicitly captured by the deep denoising CNN to guide denoising, we refer to this training strategy as an Implicit Deep Denoiser Prior (IDDP). IDDP can be seen as a mixture of learning-based methods and traditional model-based denoising methods, in which regularization is adaptively formulated using the output of the network. We apply IDDP to various denoising tasks using only observed corrupted data and show that it achieves better denoising results than other self-supervised denoising methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
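The twice-denoising objective described in this abstract can be sketched in a few lines of numpy. This is a toy illustration under loud assumptions: the box filter stands in for the authors' denoising CNN, and the re-noising level sigma is an arbitrary choice, so no training loop is shown.

```python
import numpy as np

def box_denoise(img, k=3):
    """Placeholder 'denoiser': a simple k x k box filter (the paper uses a CNN)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def iddp_loss(noisy, denoise, sigma=0.1, seed=None):
    """Discrepancy between the twice-denoised image and the denoiser's own
    output (its implicit prior): denoise, re-noise, denoise again, compare."""
    rng = np.random.default_rng(seed)
    prior = denoise(noisy)                       # first pass: treated as the prior
    renoised = prior + sigma * rng.standard_normal(prior.shape)
    twice = denoise(renoised)                    # second pass on the re-noised prior
    return np.mean((twice - prior) ** 2)

rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((32, 32))
loss = iddp_loss(noisy, box_denoise, sigma=0.1, seed=1)
```

In the actual method this loss would be backpropagated through the network; here it is only evaluated.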
2

Prasath, V. "A well-posed multiscale regularization scheme for digital image denoising". International Journal of Applied Mathematics and Computer Science 21, no. 4 (1.12.2011): 769–77. http://dx.doi.org/10.2478/v10006-011-0061-7.

Full text
Abstract:
We propose an edge-adaptive digital image denoising and restoration scheme based on space-dependent regularization. Traditional gradient-based schemes use an edge map computed from gradients alone to drive the regularization. This may lead to oversmoothing of the input image, and noise along edges can be amplified. To avoid these drawbacks, we make use of a multiscale descriptor given by a contextual edge detector obtained from local variances. Using a smooth transition from the computed edges, the proposed scheme removes noise in flat regions and preserves edges without oscillations. By incorporating a space-dependent adaptive regularization parameter, image smoothing is driven along probable edges and not across them. The well-posedness of the corresponding minimization problem is proved in the space of functions of bounded variation. The corresponding gradient descent scheme is implemented, and numerical results illustrate the advantages of using the adaptive parameter in the regularization scheme. Compared with similar edge-preserving regularization schemes, the proposed adaptive-weight scheme provides a better multiscale edge map, which in turn produces better restoration.
Styles: APA, Harvard, Vancouver, ISO, etc.
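The idea of a space-dependent regularization parameter driven by local variances can be illustrated with a short numpy sketch. The weight formula and all parameters below are illustrative guesses, not the paper's scheme:

```python
import numpy as np

def local_variance(u, k=3):
    """Local variance in a k x k window (reflected borders)."""
    pad = k // 2
    p = np.pad(u, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-1, -2))

def adaptive_smooth(noisy, n_iter=50, dt=0.2, s=0.01):
    """Gradient-descent smoothing where a variance-based weight w in (0, 1]
    suppresses diffusion near probable edges (high local variance)."""
    u = noisy.copy()
    w = 1.0 / (1.0 + local_variance(noisy) / s)   # ~1 in flat regions, ~0 at edges
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * w * lap                          # smooth mostly in flat regions
    return u
```

The weight acts as the space-dependent regularization parameter: large where the descriptor says "flat", small where it says "edge".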
3

Tan, Yi, Jin Fan, Dong Sun, Qingwei Gao, and Yixiang Lu. "Multi-scale Image Denoising via a Regularization Method". Journal of Physics: Conference Series 2253, no. 1 (1.04.2022): 012030. http://dx.doi.org/10.1088/1742-6596/2253/1/012030.

Full text
Abstract:
Image restoration is a widely studied problem in image processing. Although existing restoration methods based on denoising regularization perform relatively well, methods tailored to the different features of unknown images have not been proposed. Since images have different features, it seems necessary to adopt different prior regularization terms for different features. In this paper, we propose a multiscale image regularization denoising framework that can apply two or more denoising prior regularization terms simultaneously to obtain better overall restoration results. We use the alternating direction method of multipliers (ADMM) to optimize the model and combine multiple denoising algorithms in extensive image deblurring and image super-resolution experiments; our algorithm shows better performance than existing state-of-the-art image restoration methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Li, Ao, Deyun Chen, Kezheng Lin, and Guanglu Sun. "Hyperspectral Image Denoising with Composite Regularization Models". Journal of Sensors 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6586032.

Full text
Abstract:
Denoising is a fundamental task in hyperspectral image (HSI) processing that can improve the performance of classification, unmixing, and other subsequent applications. In an HSI, there is a large amount of local and global redundancy in the spatial domain that can be used to preserve details and texture. In addition, the correlation of the spectral domain is another valuable property that can be utilized to obtain good results. Therefore, in this paper, we propose a novel HSI denoising scheme that exploits composite spatial-spectral information using a nonlocal technique (NLT). First, a specific way of extracting patches is employed to mine the spatial-spectral knowledge effectively. Next, a framework with composite regularization models is used to implement the denoising. A number of HSI data sets are used in our evaluation experiments, and the results demonstrate that the proposed algorithm outperforms other state-of-the-art HSI denoising methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Li, Shu, Xi Yang, Haonan Liu, Yuwei Cai, and Zhenming Peng. "Seismic Data Denoising Based on Sparse and Low-Rank Regularization". Energies 13, no. 2 (13.01.2020): 372. http://dx.doi.org/10.3390/en13020372.

Full text
Abstract:
Seismic denoising is a core task of seismic data processing. The quality of a denoising result directly affects data analysis, inversion, imaging and other applications. Over the past ten years, there have mainly been two classes of methods for seismic denoising. One is based on the sparsity of seismic data and exploits that sparsity in a local area. The other is based on nonlocal self-similarity and can utilize the spatial information of seismic data. Sparsity and nonlocal self-similarity are important forms of prior information, yet no existing seismic denoising method uses both. To jointly use the sparsity and nonlocal self-similarity of seismic data, we propose a seismic denoising method using sparsity and low-rank regularization (called SD-SpaLR). Experimental results show that the SD-SpaLR method outperforms conventional wavelet denoising and total variation denoising, because both the sparsity and the nonlocal self-similarity of seismic data are utilized. This study is of significance for designing new seismic data analysis, processing and inversion methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
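The two building blocks named in this abstract, sparsity and low-rank regularization, have simple proximal maps that can be sketched directly; how SD-SpaLR combines them is not reproduced here:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal map of t * ||x||_1 (sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(M, t):
    """Singular-value thresholding: proximal map of t * nuclear norm
    (low-rank prior, as used for nonlocal self-similar patch groups)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * soft(s, t)) @ Vt
```

Applied to a noisy low-rank matrix, `svt` suppresses the small singular values contributed by noise while keeping the dominant ones.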
6

Baloch, Gulsher, Huseyin Ozkaramanli, and Runyi Yu. "Residual Correlation Regularization Based Image Denoising". IEEE Signal Processing Letters 25, no. 2 (February 2018): 298–302. http://dx.doi.org/10.1109/lsp.2017.2789018.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Chen, Guan Nan, Dan Er Xu, Rong Chen, Zu Fang Huang, and Zhong Jian Teng. "Iterative Regularization Model for Image Denoising Based on Dual Norms". Applied Mechanics and Materials 182-183 (June 2012): 1245–49. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1245.

Full text
Abstract:
Image denoising algorithms based on gradient-dependent energy functionals often compromise image features such as textures and fine details. This paper proposes an iterative regularization model based on dual norms for image denoising. Through iterative regularization, the oscillating patterns of texture and detail are added back to fit and recompute the original dual-norm model, and the iterative behavior avoids excessive smoothing of textures and details during denoising. In addition, an iterative procedure is given, and the convergence of the proposed algorithm is proved. Experimental results show that the proposed method better preserves both textures and fine details in image denoising.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Guo, Li, Weilong Chen, Yu Liao, Honghua Liao, and Jun Li. "An Edge-Preserved Image Denoising Algorithm Based on Local Adaptive Regularization". Journal of Sensors 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/2019569.

Full text
Abstract:
Image denoising methods are often based on the minimization of an appropriately defined energy function. Many gradient-dependent energy functions, such as the Potts model and total variation denoising, regard the image as a piecewise constant function. In these methods, some important information, such as edge sharpness and location, is well preserved, but detailed image features such as texture are often compromised in the process of denoising. For this reason, an image denoising method based on local adaptive regularization is proposed in this paper, which can adaptively adjust the denoising strength for a noisy image by adding a spatially variable fidelity term, so as to better preserve fine-scale image features. Experimental results show that the proposed denoising method achieves state-of-the-art subjective visual quality, and the signal-to-noise ratio (SNR) is objectively improved by 0.3–0.6 dB.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Liu, Kui, Jieqing Tan, and Benyue Su. "An Adaptive Image Denoising Model Based on Tikhonov and TV Regularizations". Advances in Multimedia 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/934834.

Full text
Abstract:
To avoid staircase artifacts, an adaptive image denoising model is proposed as a weighted combination of Tikhonov regularization and total variation regularization. In our model, Tikhonov regularization and total variation regularization are adaptively selected based on the gradient information of the image. Where pixels belong to smooth regions, Tikhonov regularization is adopted, which eliminates staircase artifacts. Where pixels lie on edges, total variation regularization is selected, which preserves the edges. We employ the split Bregman method to solve our model. Experimental results demonstrate that our model obtains better performance than other models.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Shen, Lixin, Bruce W. Suter, and Erin E. Tripp. "Algorithmic versatility of SPF-regularization methods". Analysis and Applications 19, no. 01 (3.07.2020): 43–69. http://dx.doi.org/10.1142/s0219530520400060.

Full text
Abstract:
Sparsity promoting functions (SPFs) are commonly used in optimization problems to find solutions which are sparse in some basis. For example, the [Formula: see text]-regularized wavelet model and the Rudin–Osher–Fatemi total variation (ROF-TV) model are some of the most well-known models for signal and image denoising, respectively. However, recent work demonstrates that convexity is not always desirable in SPFs. In this paper, we replace convex SPFs with their induced nonconvex SPFs and develop algorithms for the resulting model by exploring the intrinsic structures of the nonconvex SPFs. These functions are defined as the difference of the convex SPF and its Moreau envelope. We also present simulations illustrating the performance of a special SPF and the developed algorithms in image denoising.
Styles: APA, Harvard, Vancouver, ISO, etc.
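For phi(x) = |x|, the construction described in this abstract (convex SPF minus its Moreau envelope) can be written out exactly: the envelope is the Huber function, so the induced nonconvex SPF is bounded. A small sketch that verifies the closed form:

```python
import numpy as np

def moreau_env_abs(x, gamma=1.0):
    """Moreau envelope of |.|, i.e. the Huber function:
    x^2 / (2*gamma) for |x| <= gamma, |x| - gamma/2 otherwise."""
    ax = np.abs(x)
    return np.where(ax <= gamma, ax ** 2 / (2 * gamma), ax - gamma / 2)

def induced_spf(x, gamma=1.0):
    """Nonconvex SPF induced by |.|: phi - (Moreau envelope of phi).
    Equals |x| - x^2/(2*gamma) for |x| <= gamma, constant gamma/2 beyond."""
    return np.abs(x) - moreau_env_abs(x, gamma)
```

The induced penalty saturates at gamma/2, so large coefficients are not penalized further, which is exactly the nonconvex behavior the paper exploits.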

Doctoral dissertations on the topic "Regularization by Denoising"

1

Jalalzai, Khalid. "Regularization of inverse problems in image processing". PhD thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00787790.

Full text
Abstract:
Inverse problems consist in recovering data that has been transformed or corrupted. Being ill-posed, they require regularization. In image processing, total variation as a regularization tool has the advantage of preserving discontinuities while creating smooth regions, results established in this thesis in a continuous setting and for general energies. We also propose and study a variant of total variation, and we establish a dual formulation that allows us to prove that this variant coincides with total variation on sets of finite perimeter. In recent years, nonlocal methods exploiting self-similarities in images have been particularly successful. We adapt this approach to the spectrum-completion problem for general inverse problems. The last part is devoted to the algorithmic aspects inherent in the optimization of the convex energies considered. We study the convergence and complexity of a recent family of so-called primal-dual algorithms.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Laruelo, Fernandez Andrea. "Integration of magnetic resonance spectroscopic imaging into the radiotherapy treatment planning". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30126/document.

Full text
Abstract:
The aim of this thesis is to propose new algorithms to overcome the current limitations and to address the open challenges in the processing of magnetic resonance spectroscopic imaging (MRSI) data. MRSI is a non-invasive modality able to provide the spatial distribution of relevant biochemical compounds (metabolites) commonly used as biomarkers of disease. Information provided by MRSI can be used as a valuable insight for the diagnosis, treatment and follow-up of several diseases such as cancer or neurological disorders. Obtaining accurate and reliable information from in vivo MRSI signals is a crucial requirement for the clinical utility of this technique. Despite the numerous publications on the topic, the interpretation of MRSI data is still a challenging problem due to different factors such as the low signal-to-noise ratio (SNR) of the signals, the overlap of spectral lines or the presence of nuisance components. This thesis addresses the problem of interpreting MRSI data and characterizing recurrence in tumor brain patients. These objectives are addressed through a methodological approach based on novel processing methods that incorporate prior knowledge on the MRSI data using a spatio-spectral regularization. As an application, the thesis addresses the integration of MRSI into the radiotherapy treatment workflow within the context of the European project SUMMER (Software for the Use of Multi-Modality images in External Radiotherapy) funded by the European Commission (FP7-PEOPLE-ITN framework).
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration". Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.

Full text
Abstract:
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application in the field of machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual one coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of some sequences of proximal points closely related to the dual iterates. Furthermore, we show that the solution will indeed converge to the optimal solution of the primal for arbitrarily small accuracy. Finally, the support vector regression task is shown to arise as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Castellanos, Lopez Clara. "Accélération et régularisation de la méthode d'inversion des formes d'ondes complètes en exploration sismique". PhD thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-01064412.

Full text
Abstract:
Currently, the main obstacle to applying three-dimensional elastic FWI to realistic case studies is the computational cost of the seismic modeling tasks. To overcome this difficulty, I propose two contributions. First, I compute the gradient of the functional with the adjoint-state method from a symmetrized form of the elastodynamic equations written as a first-order velocity-stress system. This self-adjoint formulation of the elastodynamic equations makes it possible to compute the incident and adjoint fields that appear in the expression of the gradient with a single numerical modeling operator. The gradient computed in this way also simplifies interfacing several modeling tools with the inversion algorithm. Second, I explore to what extent source encoding combined with second-order quasi-Newton and truncated-Newton optimization algorithms can further reduce the cost of FWI. Finally, the optimization problem associated with FWI is ill-posed, which requires adding regularization constraints to the functional being minimized. I show how a regularization based on the total variation of the model provides an adequate representation of subsurface models by preserving the discontinuous character of lithological interfaces. To improve subsurface images, I propose a denoising algorithm based on local total variation, into which I incorporate the structural information provided by a migrated image in order to preserve small-scale structures.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Nair, Pravin. "Provably Convergent Algorithms for Denoiser-Driven Image Regularization". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5887.

Full text
Abstract:
Some fundamental reconstruction tasks in image processing can be posed as an inverse problem where we are required to invert a given forward model. For example, in deblurring and superresolution, the ground-truth image needs to be estimated from blurred and low-resolution images, whereas in CT and MR imaging, a high-resolution image must be reconstructed from a few linear measurements. Such inverse problems are invariably ill-posed—they exhibit non-unique solutions and the process of direct inversion is unstable. Some form of image model (or prior) on the ground truth is required to regularize the inversion process. For example, a classical solution involves minimizing f + g, where the loss term f is derived from the forward model and the regularizer g is used to constrain the search space. The challenge is to come up with a formula for g that can yield good image reconstructions. This has been the center of research activity in image reconstruction for the last few decades. "Regularization using denoising" is a recent breakthrough in which a powerful denoiser is used for regularization purposes, instead of having to specify some hand-crafted g (but the loss f is still used). This has been empirically shown to yield significantly better results than staple f + g minimization. In fact, the results are generally comparable and often superior to state-of-the-art deep learning methods. In this thesis, we consider two such popular models for image regularization—Plug-and-Play (PnP) and Regularization by Denoising (RED). In particular, we focus on the convergence aspect of these iterative algorithms, which is not well understood even for simple denoisers. This is important since the lack of convergence guarantees can result in spurious reconstructions in imaging applications. The contributions of the thesis in this regard are as follows.
PnP with linear denoisers: We show that for a class of non-symmetric linear denoisers that includes kernel denoisers such as nonlocal means, one can associate a convex regularizer g with the denoiser. More precisely, we show that any such linear denoiser can be expressed as the proximal operator of a convex function, provided we work with a non-standard inner product (instead of the Euclidean inner product). In particular, the regularizer is quadratic, but unlike classical quadratic regularizers, the quadratic form is derived from the observed data. A direct implication of this observation is that (a simple variant of) the PnP algorithm based on this linear denoiser amounts to solving an optimization problem of the form f + g, though it was not originally conceived this way. Consequently, if f is convex, both objective and iterate convergence are guaranteed for the PnP algorithm. Apart from the convergence guarantee, we go on to show that this observation has algorithmic value as well. For example, in the case of linear inverse problems such as superresolution, deblurring and inpainting (where f is quadratic), we can reduce the problem of minimizing f + g to a linear system. In particular, we show how using Krylov solvers we can solve this system efficiently in just a few iterations. Surprisingly, the reconstructions are found to be comparable with state-of-the-art deep learning methods. To the best of our knowledge, the possibility of achieving near state-of-the-art image reconstructions using a linear solver has not been demonstrated before. PnP and RED with learning-based denoisers: In general, state-of-the-art PnP and RED algorithms rely on trained CNN denoisers such as DnCNN. Unlike linear denoisers, it is difficult to place PnP and RED algorithms within an optimization framework in the case of CNN denoisers. Nonetheless, we can still try to understand the convergence of the sequence of iterates generated by these algorithms.
For a convex loss f, we show that this question can be resolved using the theory of monotone operators — the denoiser being averaged (a subclass of nonexpansive operators) is sufficient for iterate convergence of PnP and RED. Using numerical examples, we show that existing CNN denoisers are not nonexpansive and can cause PnP and RED algorithms to diverge. Can we train denoisers that are provably nonexpansive? Unfortunately, this is computationally challenging—simply checking nonexpansivity of a CNN is known to be intractable. As a result, existing algorithms for training nonexpansive CNNs either cannot guarantee nonexpansivity or are computationally intensive. We show that this problem can be solved by moving away from CNN denoisers to unfolded deep denoisers. In particular, we are able to construct unfolded networks that are efficiently trainable and come with convergence guarantees for PnP and RED algorithms, and whose regularization capacity can be matched with CNN denoisers. Presumably, we are the first to propose a simple framework for training provably averaged (contractive) denoisers using unfolding networks. We provide numerical results to validate our theoretical results and compare our algorithms with state-of-the-art regularization techniques. We also point out some future research directions stemming from the thesis.
Styles: APA, Harvard, Vancouver, ISO, etc.
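For a linear denoiser W and a quadratic loss, the reduction to a linear system mentioned in this abstract can be reproduced in a few lines. A 1-D toy sketch, with an illustrative row-stochastic Gaussian kernel as W and A = I (pure denoising); the fixed point of the PnP iteration x <- W(x - gamma * A^T(Ax - y)) solves (I - W + gamma * W A^T A) x = gamma * W A^T y:

```python
import numpy as np

def kernel_denoiser(n, h=2.0):
    """Row-stochastic Gaussian kernel filter: a simple linear denoiser W."""
    i = np.arange(n)
    K = np.exp(-(((i[:, None] - i[None, :]) / h) ** 2))
    return K / K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n = 40
A = np.eye(n)                                   # forward model (illustrative)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = x_true + 0.3 * rng.standard_normal(n)
W = kernel_denoiser(n)
gamma = 0.9

# PnP-ISTA: gradient step on the loss, then a pass through the linear denoiser
x = y.copy()
for _ in range(500):
    x = W @ (x - gamma * A.T @ (A @ x - y))

# Same fixed point obtained directly from the equivalent linear system
x_lin = np.linalg.solve(np.eye(n) - W + gamma * W @ A.T @ A,
                        gamma * W @ A.T @ y)
```

The agreement between the iteration and the single linear solve is the point: with a linear denoiser, PnP collapses to numerical linear algebra.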
6

Gavaskar, Ruturaj G. "On Plug-and-Play Regularization using Linear Denoisers". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5973.

Full text
Abstract:
The problem of inverting a given measurement model comes up in several computational imaging applications. For example, in CT and MRI, we are required to reconstruct a high-resolution image from incomplete noisy measurements, whereas in superresolution and deblurring, we try to infer the ground-truth from low-resolution or blurred images. Traditionally, this is done by minimizing $f + \phi$, where $f$ is a data-fidelity (or loss) function that is determined by the acquisition process, and $\phi$ is a regularization (or penalty) function that is based on a subjective prior on the target image. The solution is obtained numerically using iterative algorithms such as ISTA or ADMM. While several forms of regularization and associated optimization methods have been proposed in the imaging literature of the last few decades, the use of denoisers (aka denoising priors) for image regularization is a relatively recent phenomenon. This has partly been triggered by advances in image denoising in the last 20 years, leading to the development of powerful image denoisers such as BM3D and DnCNN. In this thesis, we look at a recent protocol called Plug-and-Play (PnP) regularization, where image denoisers are deployed within iterative algorithms for image regularization. PnP consists of replacing the proximal map --- an analytical operator at the core of ISTA and ADMM --- associated with the regularizer $\phi$ with an image denoiser. This is motivated by the intuition that off-the-shelf denoisers such as BM3D and DnCNN offer better image priors than traditional hand-crafted regularizers such as total variation. While PnP does not use an explicit regularizer, it still makes use of the data-fidelity function $f$. However, since the replacement of the proximal map with a denoiser is ad-hoc, the optimization perspective is lost --- it is not clear if the PnP iterations can be interpreted as optimizing some objective function $f + \phi$. 
Remarkably, PnP reconstructions are of high quality and competitive with state-of-the-art methods. Following this, researchers have tried explaining why plugging a denoiser within an inversion algorithm should work in the first place, why it produces high-quality images, and whether the final reconstruction is optimal in some sense. In this thesis, we try answering such questions, some of which have been the topic of active research in the imaging community in recent years. Specifically, we consider the following questions. --> Fixed-point convergence: Under what conditions does the sequence of iterates generated by a PnP algorithm converge? Moreover, are these conditions met by existing real-world denoisers? --> Optimality and objective convergence: Can we interpret PnP as an algorithm that minimizes $f + \phi$ for some appropriate $\phi$? Moreover, does the algorithm converge to a solution of this objective function? --> Exact and robust recovery: Under what conditions can we recover the ground-truth exactly via PnP? And is the reconstruction robust to noise in the measurements? While early work on PnP has attempted to answer some of these questions, many of the underlying assumptions are either strong or unverifiable. This is essentially because denoisers such as BM3D and DnCNN are mathematically complex, nonlinear and difficult to characterize. A first step in understanding complex nonlinear phenomena is often to develop an understanding of some linear approximation. In this spirit, we focus our attention on denoisers that are linear. In fact, there exists a broad class of real-world denoisers that are linear and whose performance is quite decent; examples include kernel filters (e.g. NLM, bilateral filter) and their symmetrized counterparts. This class has a simple characterization that helps to keep the analysis tractable and the assumptions verifiable. 
Our main contributions lie in resolving the aforementioned questions for PnP algorithms where the plugged denoiser belongs to this class. We summarize them below. --> We prove fixed-point convergence of the PnP version of ISTA under mild assumptions on the measurement model. --> Based on the theory of proximal maps, we prove that a PnP algorithm in fact minimizes a convex objective function $f + \phi$, subject to some algorithmic modifications that arise from the algebraic properties of the denoiser. Notably, unlike previous results, our analysis applies to non-symmetric linear filters. --> Under certain verifiable assumptions, we prove that a signal can be recovered exactly (resp. robustly) from clean (resp. noisy) measurements using PnP regularization. As a more profound application, in the spirit of classical compressed sensing, we are able to derive probabilistic guarantees on exact and robust recovery for the compressed sensing problem where the sensing matrix is random. An implication of our analysis is that the range of the linear denoiser plays the role of a signal prior and its dimension essentially controls the size of the set of recoverable signals. In particular, we are able to derive the sample complexity of compressed sensing as a function of distortion error and success rate. We validate our theoretical findings numerically, discuss their implications and mention possible future research directions.
7

Michenková, Marie. "Regularizační metody založené na metodách nejmenších čtverců". Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-330700.

Abstract:
Title: Regularization Techniques Based on the Least Squares Method Author: Marie Michenková Department: Department of Numerical Mathematics Supervisor: RNDr. Iveta Hnětynková, Ph.D. Abstract: In this thesis we consider a linear inverse problem Ax ≈ b, where A is a linear operator with a smoothing property and b represents an observation vector polluted by unknown noise. It was shown in [Hnětynková, Plešinger, Strakoš, 2009] that high-frequency noise is revealed in the left bidiagonalization vectors during the Golub-Kahan iterative bidiagonalization. We propose a method that identifies the iteration with maximal noise revealing and reduces a portion of the high-frequency noise in the data by subtracting the corresponding (properly scaled) left bidiagonalization vector from b. This method is tested for different types of noise. Further, Hnětynková, Plešinger, and Strakoš provided an estimator of the noise level in the data. We propose a modification of this estimator based on knowledge of the point of noise revealing. Keywords: ill-posed problems, regularization, Golub-Kahan iterative bidiagonalization, noise revealing, noise estimate, denoising
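The noise-revealing setup described in this abstract can be reproduced with a short script. Below is a minimal sketch, not the author's implementation: it runs the Golub-Kahan iterative bidiagonalization of a smoothing operator A started from a noisy right-hand side b (with full reorthogonalization for numerical stability), producing the left bidiagonalization vectors in which high-frequency noise is gradually revealed. The Gaussian blur operator and noise level are illustrative assumptions.

```python
import numpy as np

def golub_kahan(A, b, k):
    """Golub-Kahan iterative bidiagonalization of A started from b,
    with full reorthogonalization. Returns the left vectors u_1..u_k
    (columns of U) and right vectors v_1..v_k (columns of V)."""
    m, n = A.shape
    U, V = np.zeros((m, k)), np.zeros((n, k))
    u = b / np.linalg.norm(b)
    v = A.T @ u
    v /= np.linalg.norm(v)
    U[:, 0], V[:, 0] = u, v
    for j in range(1, k):
        u = A @ V[:, j - 1]
        u -= U[:, :j] @ (U[:, :j].T @ u)   # reorthogonalize against previous u's
        u /= np.linalg.norm(u)
        v = A.T @ u
        v -= V[:, :j] @ (V[:, :j].T @ v)   # reorthogonalize against previous v's
        v /= np.linalg.norm(v)
        U[:, j], V[:, j] = u, v
    return U, V

# Toy ill-posed problem: Gaussian blur operator, smooth signal, white noise.
n = 64
rng = np.random.default_rng(1)
i = np.arange(n)
A = np.exp(-((i[:, None] - i[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)          # row-normalized smoothing operator
x_true = np.sin(2 * np.pi * i / n)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

U, V = golub_kahan(A, b, 6)
# Plotting the columns of U shows later u_j increasingly dominated by
# high frequencies: the noise in b is "revealed" as the iteration proceeds.
```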
8

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration". Doctoral thesis, 2012. https://monarch.qucosa.de/id/qucosa%3A19869.

Abstract:
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks, assign a Fenchel dual problem to it, and prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates. Furthermore, we show that this solution converges to the optimal solution of the primal problem as the accuracy becomes arbitrarily small. Finally, the support vector regression task arises as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring for different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
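As a concrete illustration of the proximal points and Fenchel conjugates this abstract refers to (a generic textbook example, not taken from the thesis): for the l1-norm regularizer, the proximal map is soft-thresholding, the Fenchel conjugate is the indicator of an l-infinity ball whose proximal map is a projection, and Moreau's decomposition ties the two together.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal map of f(z) = lam * ||z||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l1_conjugate(x, lam):
    """Proximal map of the Fenchel conjugate f*(z), the indicator of the
    l_inf ball of radius lam: projection onto that ball."""
    return np.clip(x, -lam, lam)

x = np.array([-2.0, -0.3, 0.0, 0.7, 1.5])
lam = 0.5
# Moreau decomposition: prox_f(x) + prox_{f*}(x) = x for every x.
print(prox_l1(x, lam) + prox_l1_conjugate(x, lam))
```

The decomposition is exactly the mechanism that lets a primal solution be recovered from dual (conjugate) proximal points, as in the duality-based algorithms described above.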

Book chapters on the topic "Regularization by Denoising"

1

Lanza, Alessandro, Serena Morigi, and Fiorella Sgallari. "Convex Image Denoising via Non-Convex Regularization." In Lecture Notes in Computer Science, 666–77. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18461-6_53.

2

Shi, Hui, Yann Traonmilin, and Jean-François Aujol. "Compressive Learning of Deep Regularization for Denoising." In Lecture Notes in Computer Science, 162–74. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-31975-4_13.

3

Lucchese, Mirko, Iuri Frosio, and N. Alberto Borghese. "Optimal Choice of Regularization Parameter in Image Denoising." In Image Analysis and Processing – ICIAP 2011, 534–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24085-0_55.

4

Calderon, Felix, and Carlos A. Júnez-Ferreira. "Regularization with Adaptive Neighborhood Condition for Image Denoising." In Advances in Soft Computing, 398–406. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25330-0_35.

5

Peng, Yong, Shen Wang, and Bao-Liang Lu. "Marginalized Denoising Autoencoder via Graph Regularization for Domain Adaptation." In Neural Information Processing, 156–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42042-9_20.

6

Ghoniem, Mahmoud, Youssef Chahir, and Abderrahim Elmoataz. "Video Denoising and Simplification Via Discrete Regularization on Graphs." In Advanced Concepts for Intelligent Vision Systems, 380–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88458-3_34.

7

Zhang, J. W., J. Liu, Y. H. Zheng, and J. Wang. "Regularization Parameter Selection for Gaussian Mixture Model Based Image Denoising Method." In Advances in Computer Science and Ubiquitous Computing, 291–97. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3023-9_47.

8

Kim, Yunho, Paul M. Thompson, Arthur W. Toga, Luminita Vese, and Liang Zhan. "HARDI Denoising: Variational Regularization of the Spherical Apparent Diffusion Coefficient sADC." In Lecture Notes in Computer Science, 515–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02498-6_43.

9

Li, Li, Xiaohong Shen, and Shanshan Gao. "Image Denoising Using Expected Patch Log Likelihood and Hyper-laplacian Regularization." In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 761–70. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70665-4_82.

10

Lucchese, Mirko, and N. Alberto Borghese. "Denoising of Digital Radiographic Images with Automatic Regularization Based on Total Variation." In Image Analysis and Processing – ICIAP 2009, 711–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04146-4_76.


Conference abstracts on the topic "Regularization by Denoising"

1

Bahia, B., and M. Sacchi. "Deblending via Regularization by Denoising." In 82nd EAGE Annual Conference & Exhibition. European Association of Geoscientists & Engineers, 2020. http://dx.doi.org/10.3997/2214-4609.202011992.

2

Hu, Yuyang, Jiaming Liu, Xiaojian Xu, and Ulugbek S. Kamilov. "Monotonically Convergent Regularization by Denoising." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897639.

3

Bruni, V., and D. Vitulano. "Signal and image denoising without regularization." In 2013 20th IEEE International Conference on Image Processing (ICIP). IEEE, 2013. http://dx.doi.org/10.1109/icip.2013.6738111.

4

Hongyi Liu and Zhihui Wei. "Structure-preserved NLTV regularization for image denoising." In 2011 International Conference on Image Analysis and Signal Processing (IASP). IEEE, 2011. http://dx.doi.org/10.1109/iasp.2011.6109033.

5

Clinchant, Stephane, Gabriela Csurka, and Boris Chidlovskii. "A Domain Adaptation Regularization for Denoising Autoencoders." In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/p16-2005.

6

Carrera, Anthony, Adrian Basarab, and Roberto Lavarello. "Attenuation coefficient imaging using regularization by denoising." In 2022 IEEE International Ultrasonics Symposium (IUS). IEEE, 2022. http://dx.doi.org/10.1109/ius54386.2022.9957734.

7

Ghoniem, Mahmoud, Youssef Chahir, and Abderrahim Elmoataz. "Video denoising via discrete regularization on graphs." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761412.

8

Xie, Qi, Qian Zhao, Deyu Meng, Zongben Xu, Shuhang Gu, Wangmeng Zuo, and Lei Zhang. "Multispectral Images Denoising by Intrinsic Tensor Sparsity Regularization." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.187.

9

Rey, Samuel, and Antonio G. Marques. "Robust Graph-Filter Identification with Graph Denoising Regularization." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414909.

10

Charest, Michael, Michael Elad, and Peyman Milanfar. "A General Iterative Regularization Framework For Image Denoising." In 2006 40th Annual Conference on Information Sciences and Systems. IEEE, 2006. http://dx.doi.org/10.1109/ciss.2006.286510.

