Journal articles on the topic "Deep Image Prior"

Browse the top 50 journal articles for research on the topic "Deep Image Prior".

Next to every entry in the bibliography, the option "Add to bibliography" is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are provided in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Deep Image Prior". International Journal of Computer Vision 128, no. 7 (04.03.2020): 1867–88. http://dx.doi.org/10.1007/s11263-020-01303-4.

2

Shin, Chang Jong, Tae Bok Lee, and Yong Seok Heo. "Dual Image Deblurring Using Deep Image Prior". Electronics 10, no. 17 (24.08.2021): 2045. http://dx.doi.org/10.3390/electronics10172045.

Abstract:
Blind image deblurring, one of the main problems in image restoration, is a challenging, ill-posed problem. Hence, it is important to design a prior to solve it. Recently, deep image prior (DIP) has shown that convolutional neural networks (CNNs) can be a powerful prior for a single natural image. Previous DIP-based deblurring methods exploited CNNs as a prior when solving the blind deblurring problem and performed remarkably well. However, these methods do not fully utilize the given multiple blurry images and show limited performance on severely blurred images, because their architectures are strictly designed to utilize a single image. In this paper, we propose a method called DualDeblur, which uses dual blurry images to generate a single sharp image. DualDeblur jointly utilizes the complementary information of multiple blurry images to capture image statistics for a single sharp image. Additionally, we propose an adaptive L2_SSIM loss that enhances both pixel accuracy and structural properties. Extensive experiments show the superior performance of our method over previous methods in both qualitative and quantitative evaluations.
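
To make the "L2 + SSIM" idea concrete, here is a minimal PyTorch sketch of a combined pixel/structural loss. It uses a single global SSIM window and a fixed weighting `alpha`, both of which are simplifying assumptions; the adaptive weighting and the dual-image architecture of DualDeblur are not reproduced here.

```python
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window (global) SSIM over the whole image -- a simplification of
    # the usual sliding-window SSIM, used here only to illustrate the idea.
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def l2_ssim_loss(pred, target, alpha=0.5):
    # Fixed convex combination of pixel fidelity (L2) and structural
    # fidelity (1 - SSIM); the paper's loss adapts this weighting.
    return alpha * F.mse_loss(pred, target) + (1 - alpha) * (1 - global_ssim(pred, target))
```
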
3

Cannas, Edoardo Daniele, Sara Mandelli, Paolo Bestagini, Stefano Tubaro, and Edward J. Delp. "Deep Image Prior Amplitude SAR Image Anonymization". Remote Sensing 15, no. 15 (27.07.2023): 3750. http://dx.doi.org/10.3390/rs15153750.

Abstract:
This paper presents an extensive evaluation of the Deep Image Prior (DIP) technique for image inpainting on Synthetic Aperture Radar (SAR) images. SAR images are gaining popularity in various applications, but there may be a need to conceal certain regions of them. Image inpainting provides a solution for this. However, not all inpainting techniques are designed to work on SAR images. Some are intended for use on photographs, while others have to be specifically trained on top of a huge set of images. In this work, we evaluate the performance of the DIP technique that is capable of addressing these challenges: it can adapt to the image under analysis including SAR imagery; it does not require any training. Our results demonstrate that the DIP method achieves great performance in terms of objective and semantic metrics. This indicates that the DIP method is a promising approach for inpainting SAR images, and can provide high-quality results that meet the requirements of various applications.
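
Since several of the entries below build on the same optimization, a minimal PyTorch sketch of DIP-based inpainting may be useful: a randomly initialized generator is fitted to the observed pixels only, and its structural bias fills in the masked region. The tiny three-layer generator and the hyperparameters are placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(out_ch):
    # Tiny stand-in for the encoder-decoder generator used in DIP work.
    return nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),
    )

def dip_inpaint(y, mask, iters=3000, lr=0.01):
    """y: (1,C,H,W) image with missing regions; mask: (1,1,H,W), 1 = known pixel."""
    net = make_net(y.shape[1])
    z = torch.randn(1, 32, y.shape[2], y.shape[3])   # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x = net(z)
        # Fit the observation only where pixels are known; the untrained
        # network's inductive bias fills in the hidden region.
        loss = F.mse_loss(x * mask, y * mask)
        loss.backward()
        opt.step()
    return net(z).detach()
```
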
4

Shi, Yu, Cien Fan, Lian Zou, Caixia Sun, and Yifeng Liu. "Unsupervised Adversarial Defense through Tandem Deep Image Priors". Electronics 9, no. 11 (19.11.2020): 1957. http://dx.doi.org/10.3390/electronics9111957.

Abstract:
Deep neural networks are vulnerable to adversarial examples, which are synthesized by adding imperceptible perturbations to the original image yet fool the classifier into producing wrong predictions. This paper proposes an image restoration approach that provides a strong defense mechanism against adversarial attacks. We show that the unsupervised image restoration framework, deep image prior, can effectively eliminate the influence of adversarial perturbations. The proposed method uses multiple deep image prior networks, called tandem deep image priors, to recover the original image from the adversarial example. Tandem deep image priors contain two deep image prior networks. The first network captures the main information of the image, and the second network recovers the original image based on the prior information provided by the first network. The proposed method reduces the number of iterations originally required by the deep image prior network and does not require adjusting the classifier or pre-training. It can be combined with other defensive methods. Our experiments show that the proposed method achieves surprisingly higher classification accuracy on ImageNet against a wide variety of adversarial attacks than previous state-of-the-art defense methods.
5

Gong, Kuang, Ciprian Catana, Jinyi Qi, and Quanzheng Li. "PET Image Reconstruction Using Deep Image Prior". IEEE Transactions on Medical Imaging 38, no. 7 (July 2019): 1655–65. http://dx.doi.org/10.1109/tmi.2018.2888491.

6

Han, Sujy, Tae Bok Lee, and Yong Seok Heo. "Deep Image Prior for Super Resolution of Noisy Image". Electronics 10, no. 16 (20.08.2021): 2014. http://dx.doi.org/10.3390/electronics10162014.

Abstract:
Single image super-resolution task aims to reconstruct a high-resolution image from a low-resolution image. Recently, it has been shown that by using deep image prior (DIP), a single neural network is sufficient to capture low-level image statistics using only a single image without data-driven training such that it can be used for various image restoration problems. However, super-resolution tasks are difficult to perform with DIP when the target image is noisy. The super-resolved image becomes noisy because the reconstruction loss of DIP does not consider the noise in the target image. Furthermore, when the target image contains noise, the optimization process of DIP becomes unstable and sensitive to noise. In this paper, we propose a noise-robust and stable framework based on DIP. To this end, we propose a noise-estimation method using the generative adversarial network (GAN) and self-supervision loss (SSL). We show that a generator of DIP can learn the distribution of noise in the target image with the proposed framework. Moreover, we argue that the optimization process of DIP is stabilized when the proposed self-supervision loss is incorporated. The experiments show that the proposed method quantitatively and qualitatively outperforms existing single image super-resolution methods for noisy images.
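
With the generator and optimization loop as in the inpainting sketch above, super-resolution only changes the data term: the candidate high-resolution output is pushed through a downsampling operator before being compared with the low-resolution observation. Bilinear downsampling is an assumption here; the noise-estimation GAN and self-supervision loss proposed in this paper are not shown.

```python
import torch.nn.functional as F

def sr_data_term(hr_candidate, lr_obs):
    # Compare the *downsampled* candidate with the low-resolution input;
    # bilinear resampling stands in for the true degradation operator.
    down = F.interpolate(hr_candidate, size=lr_obs.shape[-2:],
                         mode="bilinear", align_corners=False)
    return F.mse_loss(down, lr_obs)
```
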
7

Xie, Zhonghua, Lingjun Liu, Zhongliang Luo, and Jianfeng Huang. "Image Denoising Using Nonlocal Regularized Deep Image Prior". Symmetry 13, no. 11 (07.11.2021): 2114. http://dx.doi.org/10.3390/sym13112114.

Abstract:
Deep neural networks have shown great potential in various low-level vision tasks, leading to several state-of-the-art image denoising techniques. Training a deep neural network in a supervised fashion usually requires the collection of a great number of examples and the consumption of a significant amount of time. However, the collection of training samples is very difficult for some application scenarios, such as the full-sampled data of magnetic resonance imaging and the data of satellite remote sensing imaging. In this paper, we overcome the problem of a lack of training data by using an unsupervised deep-learning-based method. Specifically, we propose a method based on the deep image prior (DIP), which only requires a noisy image as training data, without any clean data. It infers the natural image from a random input and the corrupted observation, performing the correction via a convolutional network. We improve the original DIP method as follows: First, the original optimization objective function is modified by adding nonlocal regularizers, consisting of a spatial filter and a frequency domain filter, to promote the gradient sparsity of the solution. Second, we solve the optimization problem with the alternating direction method of multipliers (ADMM) framework, resulting in two separate optimization problems, including a symmetric U-Net training step and a plug-and-play proximal denoising step. As such, the proposed method exploits the powerful denoising ability of both deep neural networks and nonlocal regularizations. Experiments validate the effectiveness of leveraging a combination of DIP and nonlocal regularizers, and demonstrate the superior performance of the proposed method both quantitatively and visually compared with the original DIP method.
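
The ADMM splitting described here alternates between fitting the DIP network and applying a proximal/denoising step. A generic plug-and-play skeleton is sketched below; the `denoiser` callable, penalty weight `rho`, and iteration counts are assumptions, and the paper's specific nonlocal spatial and frequency-domain filters are not reproduced.

```python
import torch
import torch.nn.functional as F

def dip_admm(net, z, y, denoiser, rho=0.5, outer=30, inner=100, lr=0.01):
    """Plug-and-play ADMM-style splitting around a DIP generator (sketch).
    net(z): DIP output, y: noisy observation, denoiser: any image denoiser
    acting as the proximal operator of the regularizer."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    with torch.no_grad():
        v = net(z).clone()              # auxiliary (regularized) variable
        u = torch.zeros_like(v)         # scaled dual variable
    for _ in range(outer):
        for _ in range(inner):          # x-step: update the network weights
            opt.zero_grad()
            x = net(z)
            loss = F.mse_loss(x, y) + rho * F.mse_loss(x, v - u)
            loss.backward()
            opt.step()
        with torch.no_grad():
            x = net(z)
            v = denoiser(x + u)         # v-step: plug-and-play denoising
            u = u + x - v               # dual update
    return net(z).detach()
```
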
8

Chen, Yingxia, Yuqi Li, Tingting Wang, Yan Chen, and Faming Fang. "DPDU-Net: Double Prior Deep Unrolling Network for Pansharpening". Remote Sensing 16, no. 12 (13.06.2024): 2141. http://dx.doi.org/10.3390/rs16122141.

Abstract:
The objective of the pansharpening task is to fuse multispectral (MS) images, which have low spatial resolution (LR), with panchromatic (PAN) images, which have high spatial resolution (HR), to generate HRMS images. Recently, deep learning-based pansharpening methods have been widely studied. However, traditional deep learning methods lack transparency, while deep unrolling methods have limited performance when using only one implicit prior for HRMS images. To address this issue, we combine an implicit prior with a semi-implicit prior and propose a double prior deep unrolling network (DPDU-Net) for pansharpening. Specifically, we first formulate the objective function based on observation models of PAN and LRMS images and two priors of an HRMS image. In addition to the implicit prior in the image domain, we enforce the sparsity of the HRMS image in a certain multi-scale implicit space; thereby, the feature map can obtain better sparse representation ability. We optimize the proposed objective function via alternating iteration. Then, the iterative process is unrolled into an elaborate network, with each iteration corresponding to a stage of the network. We conduct both reduced-resolution and full-resolution experiments on two satellite datasets. Both visual comparisons and metric-based evaluations consistently demonstrate the superiority of the proposed DPDU-Net.
9

You, Shaopei, Jianlou Xu, Yajing Fan, Yuying Guo, and Xiaodong Wang. "Combining Deep Image Prior and Second-Order Generalized Total Variance for Image Inpainting". Mathematics 11, no. 14 (21.07.2023): 3201. http://dx.doi.org/10.3390/math11143201.

Abstract:
Image inpainting is a crucial task in computer vision that aims to restore missing and occluded parts of damaged images. Deep-learning-based image inpainting methods have gained popularity in recent research. One such method is the deep image prior, which is unsupervised and does not require a large number of training samples. However, the deep image prior method often encounters overfitting problems, resulting in blurred image edges. In contrast, the second-order total generalized variation can effectively protect the image edge information. In this paper, we propose a novel image restoration model that combines the strengths of both the deep image prior and the second-order total generalized variation. Our model aims to better preserve the edges of the image structure. To effectively solve the optimization problem, we employ the augmented Lagrangian method and the alternating direction method of multipliers. Numerical experiments show that the proposed method can repair images more effectively, retain more image details, and achieve higher performance than some recent methods in terms of peak signal-to-noise ratio and structural similarity.
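
As an illustration of how an explicit edge-preserving regularizer can be added to the DIP objective, the sketch below uses a plain first-order total-variation penalty; the paper's second-order total generalized variation term and its augmented-Lagrangian/ADMM solver are more involved and are not reproduced here.

```python
def tv_penalty(x):
    # Anisotropic first-order total variation: mean absolute horizontal and
    # vertical finite differences of an image tensor of shape (..., H, W).
    dh = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    dv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    return dh + dv

# Example use inside a DIP inpainting loop (see the sketch further above):
# loss = F.mse_loss(net(z) * mask, y * mask) + lam * tv_penalty(net(z))
```
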
10

Fan, Wenshi, Hancheng Yu, Tianming Chen, and Sheng Ji. "OCT Image Restoration Using Non-Local Deep Image Prior". Electronics 9, no. 5 (11.05.2020): 784. http://dx.doi.org/10.3390/electronics9050784.

Abstract:
In recent years, convolutional neural networks (CNNs) have been widely used in image denoising for their high performance. One difficulty in applying a CNN to medical image denoising, such as speckle reduction in optical coherence tomography (OCT) images, is that a large amount of high-quality data is required for training, which is an inherent limitation for OCT despeckling. Recently, deep image prior (DIP) networks have been proposed for image restoration without pre-training, since CNN structures have the intrinsic ability to capture the low-level statistics of a single image. However, the DIP has difficulty finding a good balance between maintaining details and suppressing speckle noise. Inspired by DIP, in this paper, sorted non-local statistics, which measure the signal autocorrelation in the differences between the constructed image and the input image, are proposed for OCT image restoration. By adding the sorted non-local statistics as a regularization loss in DIP learning, more low-level image statistics are captured by the CNN in the process of OCT image restoration. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual quality.
11

李, 都. "Multiplicative Noise Image Denoising Based on Deep Image Prior". Advances in Applied Mathematics 12, no. 05 (2023): 2227–34. http://dx.doi.org/10.12677/aam.2023.125228.

12

Wu, Yumo, Jianing Sun, Wengu Chen, and Junping Yin. "Improved Image Compressive Sensing Recovery with Low-Rank Prior and Deep Image Prior". Signal Processing 205 (April 2023): 108896. http://dx.doi.org/10.1016/j.sigpro.2022.108896.

13

Hu, Yong, Shaoping Xu, Xiaohui Cheng, Changfei Zhou, and Yufeng Hu. "A Triple Deep Image Prior Model for Image Denoising Based on Mixed Priors and Noise Learning". Applied Sciences 13, no. 9 (23.04.2023): 5265. http://dx.doi.org/10.3390/app13095265.

Abstract:
Image denoising poses a significant challenge in computer vision due to the high-level visual task's dependency on image quality. Several advanced denoising models have been proposed in recent decades. Recently, deep image prior (DIP), using a particular network structure and a noisy image to achieve denoising, has provided a novel image denoising method. However, the denoising performance of the DIP model still lags behind that of mainstream denoising models. To improve the performance of the DIP denoising model, we propose a TripleDIP model with internal and external mixed image priors for image denoising. The TripleDIP comprises three branches: one for content learning and two for independent noise learning. We first use a Transformer-based supervised model (i.e., Restormer) to obtain a pre-denoised image (used as an external prior) from a given noisy image, and then take the noisy image and the pre-denoised image as the first and second target image, respectively, to perform the denoising process under the designed loss function. We add constraints between two-branch noise learning and content learning, allowing the TripleDIP to employ the external prior while enhancing the stability of independent noise learning. Moreover, the automatic stop criterion we propose prevents the model from overfitting the noisy image and improves execution efficiency. The experimental results demonstrate that TripleDIP outperforms the original DIP by an average of 2.79 dB and outperforms classical unsupervised methods such as N2V by an average of 2.68 dB and the latest supervised models such as SwinIR and Restormer by an average of 0.63 dB and 0.59 dB on the Set12 dataset. This can mainly be attributed to the fact that two-branch noise learning can obtain more stable noise while constraining the content learning branch's optimization process. Our proposed TripleDIP significantly enhances DIP denoising performance and has broad application potential in scenarios with insufficient training datasets.
14

Feng, Yayuan, Yu Shi, and Dianjun Sun. "Blind Poissonian Image Deblurring Regularized by a Denoiser Constraint and Deep Image Prior". Mathematical Problems in Engineering 2020 (24.08.2020): 1–15. http://dx.doi.org/10.1155/2020/9483521.

Abstract:
The denoising and deblurring of Poisson images are opposite inverse problems. Single image deblurring methods are sensitive to image noise. A single noise filter can effectively remove noise in advance, but it also damages blurred information. To simultaneously solve the denoising and deblurring of Poissonian images better, we learn the implicit deep image prior from a single degraded image and use the denoiser as a regularization term to constrain the latent clear image. Combined with the explicit L0 regularization prior of the image, the denoising and deblurring model of the Poisson image is established. Then, the split Bregman iteration strategy is used to optimize the point spread function estimation and latent clear image estimation. The experimental results demonstrate that the proposed method achieves good restoration results on a series of simulated and real blurred images with Poisson noise.
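
For readers unfamiliar with the Poisson data term used in this line of work, a minimal PyTorch version is sketched below for a known (or currently estimated) blur kernel; the blind kernel estimation, denoiser constraint, L0 prior, and split Bregman iterations of the paper are omitted.

```python
import torch
import torch.nn.functional as F

def poisson_deblur_loss(x, kernel, y, eps=1e-6):
    """Negative Poisson log-likelihood of counts y given a latent image x
    blurred by `kernel` (both assumed non-negative).
    x, y: (1,C,H,W); kernel: (C,1,k,k), applied depthwise."""
    blurred = F.conv2d(x, kernel, padding=kernel.shape[-1] // 2, groups=x.shape[1])
    # Up to constants, -log p(y | A x) = sum(A x - y * log(A x)).
    return (blurred - y * torch.log(blurred + eps)).mean()
```
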
15

Xu, Lu, and Ying Wei. "'Pyramid Deep dehazing': An unsupervised single image dehazing method using deep image prior". Optics & Laser Technology 148 (April 2022): 107788. http://dx.doi.org/10.1016/j.optlastec.2021.107788.

16

Ho, Kary, Andrew Gilbert, Hailin Jin, and John Collomosse. "Neural architecture search for deep image prior". Computers & Graphics 98 (August 2021): 188–96. http://dx.doi.org/10.1016/j.cag.2021.05.013.

17

Zhou, Kevin C., and Roarke Horstmeyer. "Diffraction tomography with a deep image prior". Optics Express 28, no. 9 (16.04.2020): 12872. http://dx.doi.org/10.1364/oe.379200.

18

Fang, Yingying, and Tieyong Zeng. "Learning deep edge prior for image denoising". Computer Vision and Image Understanding 200 (November 2020): 103044. http://dx.doi.org/10.1016/j.cviu.2020.103044.

19

Lin, Huangxing, Yihong Zhuang, Xinghao Ding, Delu Zeng, Yue Huang, Xiaotong Tu, and John Paisley. "Self-Supervised Image Denoising Using Implicit Deep Denoiser Prior". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (26.06.2023): 1586–94. http://dx.doi.org/10.1609/aaai.v37i2.25245.

Abstract:
We devise a new regularization for denoising with self-supervised learning. The regularization uses a deep image prior learned by the network, rather than a traditional predefined prior. Specifically, we treat the output of the network as a "prior" that we again denoise after "re-noising." The network is updated to minimize the discrepancy between the twice-denoised image and its prior. We demonstrate that this regularization enables the network to learn to denoise even if it has not seen any clean images. The effectiveness of our method is based on the fact that CNNs naturally tend to capture low-level image statistics. Since our method utilizes the image prior implicitly captured by the deep denoising CNN to guide denoising, we refer to this training strategy as an Implicit Deep Denoiser Prior (IDDP). IDDP can be seen as a mixture of learning-based methods and traditional model-based denoising methods, in which regularization is adaptively formulated using the output of the network. We apply IDDP to various denoising tasks using only observed corrupted data and show that it achieves better denoising results than other self-supervised denoising methods.
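
A rough sketch of the re-noise/re-denoise consistency idea described above is given below, assuming Gaussian re-noising at a known level `sigma` and treating the first-pass output as a fixed target; the authors' exact loss, regularization weighting, and noise model may differ.

```python
import torch
import torch.nn.functional as F

def iddp_style_step(net, y, sigma, opt):
    """One self-supervised update: denoise, re-noise, denoise again, and
    penalize the gap between the twice-denoised image and the first pass."""
    opt.zero_grad()
    x1 = net(y)                                    # first denoising pass ("prior")
    renoised = x1 + sigma * torch.randn_like(x1)   # synthetic re-noising
    x2 = net(renoised)                             # second denoising pass
    loss = F.mse_loss(x2, x1.detach())             # detaching the prior is an assumption
    loss.backward()
    opt.step()
    return float(loss)
```
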
20

Gao, Xianjie, Mingliang Zhang, and Jinming Luo. "Low-Light Image Enhancement via Retinex-Style Decomposition of Denoised Deep Image Prior". Sensors 22, no. 15 (26.07.2022): 5593. http://dx.doi.org/10.3390/s22155593.

Abstract:
Low-light images are a common phenomenon when taking photos in low-light environments with inappropriate camera equipment, leading to shortcomings such as low contrast, color distortion, uneven brightness, and high loss of detail. These shortcomings are not only subjectively annoying but also affect the performance of many computer vision systems. Enhanced low-light images can be better applied to image recognition, object detection and image segmentation. This paper proposes a novel RetinexDIP method to enhance images. Noise is considered as a factor in image decomposition using deep learning generative strategies. The involvement of noise makes the image more real, weakens the coupling relationship between the three components, avoids overfitting, and improves generalization. Extensive experiments demonstrate that our method outperforms existing methods qualitatively and quantitatively.
21

Yamawaki, Kazuhiro, and Xian-Hua Han. "Zero-Shot Blind Learning for Single-Image Super-Resolution". Information 14, no. 1 (05.01.2023): 33. http://dx.doi.org/10.3390/info14010033.

Abstract:
Deep convolutional neural networks (DCNNs) have manifested significant performance gains for single-image super-resolution (SISR) in the past few years. Most of the existing methods are generally implemented in a fully supervised way using large-scale training samples and only learn SR models restricted to specific data. Thus, the adaptation of these models to real low-resolution (LR) images captured under uncontrolled imaging conditions usually leads to poor SR results. This study proposes a zero-shot blind SR framework that leverages the power of deep learning but does not require prior training on predefined image samples. It is well known that there are two unknowns: the underlying target high-resolution (HR) image and the degradation operations of the imaging procedure hidden in the observed LR image. With these in mind, we specifically employed two deep networks for respectively modeling the priors of the target HR image and its corresponding degradation kernel, and designed a degradation block to realize the observation procedure of the LR image. By formulating the loss function as the approximation error of the observed LR image, we established a completely blind end-to-end zero-shot learning framework for simultaneously predicting the target HR image and the degradation kernel without any external data. In particular, we adopted a multi-scale encoder–decoder subnet to serve as the image prior learning network, a simple fully connected subnet to serve as the kernel prior learning network, and a specific depthwise convolutional block to implement the degradation procedure. We conducted extensive experiments on several benchmark datasets and demonstrated the great superiority and high generalization of our method over both SOTA supervised and unsupervised SR methods.
22

Park, Yunjin, Sukho Lee, Byeongseon Jeong, and Jungho Yoon. "Joint Demosaicing and Denoising Based on a Variational Deep Image Prior Neural Network". Sensors 20, no. 10 (24.05.2020): 2970. http://dx.doi.org/10.3390/s20102970.

Abstract:
A joint demosaicing and denoising task refers to the task of simultaneously reconstructing and denoising a color image from a patterned image obtained by a monochrome image sensor with a color filter array. Recently, inspired by the success of deep learning in many image processing tasks, there has been research to apply convolutional neural networks (CNNs) to the task of joint demosaicing and denoising. However, such CNNs need many training data to be trained, and work well only for patterned images which have the same amount of noise they have been trained on. In this paper, we propose a variational deep image prior network for joint demosaicing and denoising which can be trained on a single patterned image and works for patterned images with different levels of noise. We also propose a new RGB color filter array (CFA) which works better with the proposed network than the conventional Bayer CFA. Mathematical justifications of why the variational deep image prior network suits the task of joint demosaicing and denoising are also given, and experimental results verify the performance of the proposed method.
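
The data term for joint demosaicing and denoising compares the network output only at the pixels actually sampled by the color filter array. A sketch with a conventional RGGB Bayer mask is shown below; note that the paper proposes its own CFA and a variational DIP formulation, neither of which is reproduced here.

```python
import torch
import torch.nn.functional as F

def bayer_mask(h, w):
    """Binary RGGB Bayer sampling mask of shape (1,3,h,w) (assumed layout)."""
    m = torch.zeros(1, 3, h, w)
    m[:, 0, 0::2, 0::2] = 1   # R on even rows / even cols
    m[:, 1, 0::2, 1::2] = 1   # G on even rows / odd cols
    m[:, 1, 1::2, 0::2] = 1   # G on odd rows / even cols
    m[:, 2, 1::2, 1::2] = 1   # B on odd rows / odd cols
    return m

def cfa_data_term(rgb_candidate, raw, mask):
    # raw: (1,3,h,w) with each CFA sample placed in its colour channel and
    # zeros elsewhere; only the sampled positions contribute to the loss.
    return F.mse_loss(rgb_candidate * mask, raw)
```
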
23

Du, Wanlin. "An investigation of rain streak removal models based on expert experience and deep learning". Highlights in Science, Engineering and Technology 56 (14.07.2023): 14–28. http://dx.doi.org/10.54097/hset.v56i.9812.

Abstract:
Computer vision technology has a wide range of applications in today's society, and rain removal from images is of great importance for outdoor vision capture. De-raining techniques are divided into video de-raining and single-image de-raining, with the single-image task being more difficult due to the lack of a temporal dimension. Current image rain removal methods fall into three main types: filter-based methods, prior-knowledge-based methods, and deep learning methods. Although these methods can meet rain removal requirements to a certain extent, there is still no highly generalized method that solves the image rain removal problem well in all cases. This paper first considers a filter-based approach, which runs quickly but struggles to remove complex rain streaks cleanly. Second, this paper examines a prior-knowledge-based approach, which requires studying how rain images are constructed and then uses this prior knowledge to remove rain from the images; this approach relies heavily on prior knowledge and generalizes poorly. Finally, this paper investigates a deep learning-based method, which requires a large number of supervised samples for training and achieves a better rain removal effect, but ignores the prior knowledge of rain streaks and is prone to overfitting. For these three classes of methods, we collected data and experimental results from the field, summarized and analyzed them, explained the strengths and weaknesses of the three models, and presented new perspectives for future improvements in image de-raining methods.
24

Zhao, Di, Li-Zhi Zhao, Yong-Jin Gan, and Bin-Yi Qin. "Undersampled magnetic resonance image reconstruction based on support prior and deep image prior without pre-training". Acta Physica Sinica 71, no. 5 (2022): 058701. http://dx.doi.org/10.7498/aps.71.20211761.

Abstract:
Magnetic resonance imaging (MRI) methods based on deep learning need large, high-quality patient-based datasets for pre-training. However, this is a challenge for clinical applications because it is difficult to obtain a sufficient quantity of patient-based MR datasets due to equipment limitations and patient privacy concerns. In this paper, we propose a novel undersampled MRI reconstruction method based on deep learning. This method does not require any pre-training procedure and does not depend on training datasets. The proposed method is inspired by the traditional deep image prior (DIP) framework, and integrates the structure prior and support prior of the target MR image to improve the efficiency of learning. Based on the similarity between the reference image and the target image, a high-resolution reference image obtained in advance is used as the network input, thereby incorporating the structural prior information into the network. By taking the coefficient index set of the reference image with large amplitude in the wavelet domain as the known support of the target image, the regularization constraint term is constructed, and the network training is transformed into the optimization of the network parameters. Experimental results show that the proposed method can obtain more accurate reconstructions from undersampled k-space data, and has obvious advantages in preserving tissue features and detailed texture.
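
The k-space data-consistency term underlying this kind of DIP-based MRI reconstruction can be written in a few lines; the sketch below assumes a single-coil acquisition and a binary sampling mask, and leaves out the reference-image input and the wavelet-domain support constraint that this paper adds.

```python
import torch

def kspace_data_term(x, k_meas, sampling_mask):
    """x: (1,1,H,W) real-valued candidate image; k_meas: (1,1,H,W) complex
    measured k-space (zero where not sampled); sampling_mask: (1,1,H,W) in {0,1}."""
    k = torch.fft.fft2(x.to(torch.complex64))    # forward Fourier encoding
    diff = (k - k_meas) * sampling_mask          # compare only sampled locations
    return (diff.abs() ** 2).mean()
```
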
25

Yao, Chen, and Yan Xia. "Deep Colorization for Surveillance Images". MATEC Web of Conferences 228 (2018): 02009. http://dx.doi.org/10.1051/matecconf/201822802009.

Abstract:
In video surveillance applications, grayscale images often affect the image processing results. In order to solve the colorization problem for surveillance images, this paper proposes a fully end-to-end approach to obtain reasonable colorization results. A CNN learning structure and a gradient prior are used for chromatic space inference. Finally, our experimental results demonstrate the advantage of our approach.
26

Zhao, Di, Yanhu Huang, Feng Zhao, Binyi Qin, and Jincun Zheng. "Reference-Driven Undersampled MR Image Reconstruction Using Wavelet Sparsity-Constrained Deep Image Prior". Computational and Mathematical Methods in Medicine 2021 (20.01.2021): 1–12. http://dx.doi.org/10.1155/2021/8865582.

Abstract:
Deep learning has shown potential to significantly improve the performance of undersampled magnetic resonance (MR) image reconstruction. However, one challenge for the application of deep learning to clinical scenarios is the requirement of large, high-quality, patient-based datasets for network training. In this paper, we propose a novel deep learning-based method for undersampled MR image reconstruction that does not require a pre-training procedure or pre-training datasets. The proposed reference-driven method using a wavelet sparsity-constrained deep image prior (RWS-DIP) is based on the DIP framework and thereby reduces the dependence on datasets. Moreover, RWS-DIP explores and introduces structure and sparsity priors into network learning to improve the efficiency of learning. By employing a high-resolution reference image as the network input, RWS-DIP incorporates structural information into the network. RWS-DIP also uses wavelet sparsity to further enrich the implicit regularization of traditional DIP by formulating the training of network parameters as a constrained optimization problem, which is solved using the alternating direction method of multipliers (ADMM) algorithm. Experiments on in vivo MR scans have demonstrated that the RWS-DIP method can reconstruct MR images more accurately and preserve features and textures from undersampled k-space measurements.
27

Yang Aiping, 杨爱萍, 王金斌 Wang Jinbin, 杨炳旺 Yang Bingwang, and 何宇清 He Yuqing. "Joint Deep Denoising Prior for Image Blind Deblurring". Acta Optica Sinica 38, no. 10 (2018): 1010003. http://dx.doi.org/10.3788/aos201838.1010003.

28

Wang, Haijun, Wenli Zheng, Yaowei Wang, Tengfei Yang, Kaibing Zhang, and Youlin Shang. "Single hyperspectral image super-resolution using a progressive upsampling deep prior network". Electronic Research Archive 32, no. 7 (2024): 4517–42. http://dx.doi.org/10.3934/era.2024205.

Abstract:
Hyperspectral image super-resolution (SR) aims to enhance the spectral and spatial resolution of remote sensing images, enabling more accurate and detailed analysis of ground objects. However, hyperspectral images have high dimensional characteristics and complex spectral patterns. As a result, it is critical to effectively leverage the spatial non-local self-similarity and spectral correlation within hyperspectral images. To address this, we have proposed a novel single hyperspectral image SR method based on a progressive upsampling deep prior network. Specifically, we introduced the spatial-spectral attention fusion unit (S²AF) based on residual connections, in order to extract spatial and spectral information from hyperspectral images. Then we developed the group convolutional upsampling (GCU) to efficiently utilize the spatial and spectral prior information inherent in hyperspectral images. To address the challenges posed by the high dimensionality of hyperspectral images and limited training dataset, we implemented a parameter-sharing grouped convolutional upsampling framework within the GCU to ensure model stability and enhance performance. The experimental results on three benchmark datasets demonstrated that the proposed single hyperspectral image SR using a progressive upsampling deep prior network (PUDPN) method effectively improves the reconstruction quality of hyperspectral images and achieves promising performance.
29

Zhou, Hao, Huajun Feng, Wenbin Xu, Zhihai Xu, Qi Li, and Yueting Chen. "Deep denoiser prior based deep analytic network for lensless image restoration". Optics Express 29, no. 17 (09.08.2021): 27237. http://dx.doi.org/10.1364/oe.432544.

30

Sun, Yanglin, Jianjun Liu, Jinlong Yang, Zhiyong Xiao, and Zebin Wu. "A deep image prior-based interpretable network for hyperspectral image fusion". Remote Sensing Letters 12, no. 12 (06.10.2021): 1250–59. http://dx.doi.org/10.1080/2150704x.2021.1979270.

31

胡, 锦华. "Salt-and-Pepper Noise Image Denoising Based on Deep Image Prior". Advances in Applied Mathematics 13, no. 06 (2024): 2734–41. http://dx.doi.org/10.12677/aam.2024.136262.

32

Hartanto, Cahyo Adhi, and Laksmita Rahadianti. "Single Image Dehazing Using Deep Learning". JOIV : International Journal on Informatics Visualization 5, no. 1 (22.03.2021): 76. http://dx.doi.org/10.30630/joiv.5.1.431.

Abstract:
Many real-world situations, such as bad weather, may result in hazy environments. Images captured in these hazy conditions will have low image quality due to microparticles in the air. The microparticles cause light to scatter and be absorbed, resulting in hazy images with various effects. In recent years, image dehazing has been researched in depth to handle images captured in these conditions. Various methods were developed, from traditional methods to deep learning methods. Traditional methods focus more on the use of statistical priors, and these statistical priors have weaknesses in certain conditions. This paper proposes a novel architecture based on PDR-Net, using a pyramid dilated convolution together with pre-processing, processing, and post-processing modules and attention mechanisms. The proposed network is trained to minimize L1 loss and perceptual loss with the O-Haze dataset. To evaluate our architecture's results, we used the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and color difference as objective assessments, and a psychovisual experiment as a subjective assessment. Our architecture obtained better results than the previous method on the O-Haze dataset, with an SSIM of 0.798 and a PSNR of 25.39, but not a better color difference. The SSIM and PSNR results were supported by the subjective assessment with 65 respondents, most of whom preferred the restored images produced by our architecture.
33

Kim, Sunwoo, Soohyun Kim, and Seungryong Kim. "Deep Translation Prior: Test-Time Training for Photorealistic Style Transfer". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (28.06.2022): 1183–91. http://dx.doi.org/10.1609/aaai.v36i1.20004.

Abstract:
Recent techniques to solve photorealistic style transfer within deep convolutional neural networks (CNNs) generally require intensive training from large-scale datasets, thus having limited applicability and poor generalization ability to unseen images or styles. To overcome this, we propose a novel framework, dubbed Deep Translation Prior (DTP), to accomplish photorealistic style transfer through test-time training on given input image pair with untrained networks, which learns an image pair-specific translation prior and thus yields better performance and generalization. Tailored for such test-time training for style transfer, we present novel network architectures, with two sub-modules of correspondence and generation modules, and loss functions consisting of contrastive content, style, and cycle consistency losses. Our framework does not require offline training phase for style transfer, which has been one of the main challenges in existing methods, but the networks are to be solely learned during test time. Experimental results prove that our framework has a better generalization ability to unseen image pairs and even outperforms the state-of-the-art methods.
34

Luo, Zhijian, Siyu Chen, and Yuntao Qian. "Learning deep optimizer for blind image deconvolution". International Journal of Wavelets, Multiresolution and Information Processing 17, no. 06 (November 2019): 1950044. http://dx.doi.org/10.1142/s0219691319500449.

Abstract:
In blind image deconvolution, priors are often leveraged to constrain the solution space, so as to alleviate the under-determinacy. Priors which are trained separately from the task of deconvolution tend to be unstable. We propose the Golf Optimizer, a novel but simple form of network that learns deep priors from data with better propagation behavior. Like playing golf, our method first estimates an aggressive propagation towards optimum using one network, and recurrently applies a residual CNN to learn the gradient of prior for delicate correction on restoration. Experiments show that our network achieves competitive performance on GoPro dataset, and our model is extremely lightweight compared with the state-of-the-art works.
35

Qiu, Yuanhong, Shuanlong Niu, Tongzhi Niu, Weifeng Li, and Bin Li. "Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection". Symmetry 14, no. 7 (19.07.2022): 1473. http://dx.doi.org/10.3390/sym14071473.

Abstract:
Images in real surface defect detection scenes often suffer from uneven illumination. Retinex-based image enhancement methods can effectively eliminate the interference caused by uneven illumination and improve the visual quality of such images. However, these methods suffer from the loss of defect-discriminative information and a high computational burden. To address the above issues, we propose a joint-prior-based uneven illumination enhancement (JPUIE) method. Specifically, a semi-coupled retinex model is first constructed to accurately and effectively eliminate uneven illumination. Furthermore, a multiscale Gaussian-difference-based background prior is proposed to reweight the data consistency term, thereby avoiding the loss of defect information in the enhanced image. Last, by using the powerful nonlinear fitting ability of deep neural networks, a deep denoised prior is proposed to replace existing physics priors, effectively reducing the time consumption. Various experiments are carried out on public and private datasets, which are used to compare the defect images and enhanced results in a symmetric way. The experimental results demonstrate that our method is more conducive to downstream visual inspection tasks than other methods.
36

Jung, SeHee, SungMin Yang, Eunseok Lee, YongHak Lee, Jisun Ko, Sungjae Lee, JunSang Cho, Jaehwa Lee, and SungHwan Kim. "Estimation of Particulate Levels Using Deep Dehazing Network and Temporal Prior". Journal of Sensors 2020 (07.07.2020): 1–9. http://dx.doi.org/10.1155/2020/8841811.

Abstract:
Particulate matter (PM) has become one of the important pollutants that deteriorate public health. Since PM is ubiquitous in the atmosphere, it is closely related to life quality in many different ways. Thus, a system to accurately monitor PM in diverse environments is imperative. Previous studies using digital images have relied on individual atmospheric images, not benefiting from both the spatial and temporal effects of image sequences, a weakness that undermined their predictive power. To address this drawback, we propose a predictive model using a deep dehazing cascaded CNN and temporal priors. The temporal prior accommodates instantaneous visual moves and estimates PM concentration from residuals between the original and dehazed images. The present method also provides, as a by-product, high-quality dehazed image sequences superior to those of the non-temporal methods. The improvements are supported by various experiments under a range of simulation scenarios and assessments using standard metrics.
37

He, Yifan, Wei Cao, Xiaofeng Du, and Changlin Chen. "Internal Learning for Image Super-Resolution by Adaptive Feature Transform". Symmetry 12, no. 10 (14.10.2020): 1686. http://dx.doi.org/10.3390/sym12101686.

Abstract:
Recent years have witnessed the great success of image super-resolution based on deep learning. However, it is hard to adapt a well-trained deep model for a specific image for further improvement. Since the internal repetition of patterns is widely observed in visual entities, internal self-similarity is expected to help improve image super-resolution. In this paper, we focus on exploiting a complementary relation between external and internal example-based super-resolution methods. Specifically, we first develop a basic network learning external prior from large scale training data and then learn the internal prior from the given low-resolution image for task adaptation. By simply embedding a few additional layers into a pre-trained deep neural network, the image-adaptive super-resolution method exploits the internal prior for a specific image, and the external prior from a well-trained super-resolution model. We achieve 0.18 dB PSNR improvements over the basic network’s results on standard datasets. Extensive experiments under image super-resolution tasks demonstrate that the proposed method is flexible and can be integrated with lightweight networks. The proposed method boosts the performance for images with repetitive structures, and it improves the accuracy of the reconstructed image of the lightweight model.
38

Petrovskaia, Anna, Raghavendra Jana, and Ivan Oseledets. "A Single Image Deep Learning Approach to Restoration of Corrupted Landsat-7 Satellite Images". Sensors 22, no. 23 (28.11.2022): 9273. http://dx.doi.org/10.3390/s22239273.

Abstract:
Remote sensing is increasingly recognized as a convenient tool with a wide variety of uses in agriculture. Landsat-7 has supplied multi-spectral imagery of the Earth's surface for more than 4 years and has become an important data source for a large number of research and policy-making initiatives. Unfortunately, the scan line corrector (SLC) on Landsat-7 broke down in May 2003, which caused the loss of up to 22 percent of any given scene. We present a single-image approach that leverages the abilities of the deep image prior method to fill in gaps using only the corrupted image. We test the ability of deep image prior to reconstruct remote sensing scenes with different levels of corruption, and we compare the performance of our approach with that of classical single-image gap-filling methods, demonstrating a quantitative advantage of the proposed approach. The lowest-performing restoration made by the deep image prior approach reaches 0.812 in r2, while the best value for the classical approaches is 0.685. We also examine the robustness of deep image prior by comparing the influence of the number of corrupted pixels on the restoration results. The usage of this approach could expand the possibilities for a wide variety of agricultural studies and applications.
39

Amal Joseph, Binny S, Abhishek V A, Nithin Raj, and Vimel Manoj. "DEEP FACE - On the Reconstruction of Face Images from Deep Face Templates". International Journal of Engineering Technology and Management Sciences 7, no. 4 (2023): 606–11. http://dx.doi.org/10.46647/ijetms.2023.v07i04.083.

Abstract:
The paper on "Reconstruction of Face Images from Deep Face Templates" presents a novel approach for face image reconstruction using deep learning techniques. The proposed method utilizes a pre-trained deep face template, a convolutional neural network (CNN) trained on a large-scale face dataset, as a prior to guide the reconstruction process. Specifically, the method solves an optimization problem that balances fidelity to the input image and similarity to the deep face template. The method is then evaluated on two face image datasets, and the authors demonstrate that it outperforms several state-of-the-art methods in terms of reconstruction quality, especially for images with large occlusions or low resolutions. Moreover, they show that the deep face template can capture high-level face attributes, such as pose, identity, and expression, which can be used for various face-related tasks, such as face recognition, attribute manipulation, and generation. Overall, the paper presents a promising direction for face image reconstruction using deep learning techniques, and highlights the potential of deep face templates for capturing and utilizing high-level face attributes.
40

Xu, Shaoping, Xiaojun Chen, Yiling Tang, Shunliang Jiang, Xiaohui Cheng, and Nan Xiao. "Learning from Multiple Instances: A Two-Stage Unsupervised Image Denoising Framework Based on Deep Image Prior". Applied Sciences 12, no. 21 (24.10.2022): 10767. http://dx.doi.org/10.3390/app122110767.

Abstract:
Supervised image denoising methods based on deep neural networks require a large amount of noisy-clean or noisy image pairs for network training. Thus, their performance drops drastically when the given noisy image is significantly different from the training data. Recently, several unsupervised learning models have been proposed to reduce the dependence on training data. Although unsupervised methods only require noisy images for learning, their denoising effect is relatively weak compared with supervised methods. This paper proposes a two-stage unsupervised deep learning framework based on deep image prior (DIP) to enhance the image denoising performance. First, a two-target DIP learning strategy is proposed to impose a learning restriction on the DIP optimization process. A cleaner preliminary image, together with the given noisy image, was used as the learning target of the two-target DIP learning process. We then demonstrate that adding an extra learning target with better image quality in the DIP learning process is capable of constraining the search space of the optimization process and improving the denoising performance. Furthermore, we observe that given the same network input and the same learning target, the DIP optimization process cannot generate the same denoised images. This indicates that the denoised results are uncertain, although they are similar in image quality and are complemented by local details. To utilize the uncertainty of the DIP, we employ a supervised denoising method to preprocess the given noisy image and propose an up- and down-sampling strategy to produce multiple sampled instances of the preprocessed image. These sampled instances were then fed into multiple two-target DIP learning processes to generate multiple denoised instances with different image details. Finally, we propose an unsupervised fusion network that fuses multiple denoised instances into one denoised image to further improve the denoising effect. We evaluated the proposed method through extensive experiments, including grayscale image denoising, color image denoising, and real-world image denoising. The experimental results demonstrate that the proposed framework outperforms unsupervised methods in all cases, and the denoising performance of the framework is close to or superior to that of supervised denoising methods for synthetic noisy image denoising and significantly outperforms supervised denoising methods for real-world image denoising. In summary, the proposed method is essentially a hybrid method that combines both supervised and unsupervised learning to improve denoising performance. Adopting a supervised method to generate preprocessed denoised images can utilize the external prior and help constrict the search space of the DIP, whereas using an unsupervised method to produce intermediate denoised instances can utilize the internal prior and provide adaptability to various noisy images of a real scene.
41

Ji, Shuichen, Shaoping Xu, Qiangqiang Cheng, Nan Xiao, Changfei Zhou, and Minghai Xiong. "A Masked-Pre-Training-Based Fast Deep Image Prior Denoising Model". Applied Sciences 14, no. 12 (12.06.2024): 5125. http://dx.doi.org/10.3390/app14125125.

Abstract:
Compared to supervised denoising models based on deep learning, the unsupervised Deep Image Prior (DIP) denoising approach offers greater flexibility and practicality by operating solely with the given noisy image. However, the random initialization of network input and network parameters in the DIP leads to a slow convergence during iterative training, affecting the execution efficiency heavily. To address this issue, we propose the Masked-Pre-training-Based Fast DIP (MPFDIP) Denoising Model in this paper. We enhance the classical Restormer framework by improving its Transformer core module and incorporating sampling, residual learning, and refinement techniques. This results in a fast network called FRformer (Fast Restormer). The FRformer model undergoes offline supervised training using the masked processing technique for pre-training. For a specific noisy image, the pre-trained FRformer network, with its learned parameters, replaces the UNet network used in the original DIP model. The online iterative training of the replaced model follows the DIP unsupervised training approach, utilizing multi-target images and an adaptive loss function. This strategy further improves the denoising effectiveness of the pre-trained model. Extensive experiments demonstrate that the MPFDIP model outperforms existing mainstream deep-learning-based denoising models in reducing Gaussian noise, mixed Gaussian–Poisson noise, and low-dose CT noise. It also significantly enhances the execution efficiency compared to the original DIP model. This improvement is mainly attributed to the FRformer network’s initialization parameters obtained through masked pre-training, which exhibit strong generalization capabilities for various types and intensities of noise and already provide some denoising effect. Using them as initialization parameters greatly improves the convergence speed of unsupervised iterative training in the DIP. Additionally, the techniques of multi-target images and the adaptive loss function further enhance the denoising process.
42

Jiang, Hao, Qing Zhang, Yongwei Nie, Lei Zhu, and Wei‐Shi Zheng. "Learning Multi‐Scale Deep Image Prior for High‐Quality Unsupervised Image Denoising". Computer Graphics Forum 41, no. 7 (October 2022): 323–34. http://dx.doi.org/10.1111/cgf.14680.

43

Wang, Yifan, Shuang Xu, Xiangyong Cao, Qiao Ke, Teng-Yu Ji, and Xiangxiang Zhu. "Hyperspectral Denoising Using Asymmetric Noise Modeling Deep Image Prior". Remote Sensing 15, no. 8 (08.04.2023): 1970. http://dx.doi.org/10.3390/rs15081970.

Abstract:
Deep image prior (DIP) is a powerful technique for image restoration that leverages an untrained network as a handcrafted prior. DIP can also be used for hyperspectral image (HSI) denoising tasks and has achieved impressive performance. Recent works further incorporate different regularization terms to enhance the performance of DIP and successfully show notable improvements. However, most DIP-based methods for HSI denoising rarely consider the distribution of complicated HSI mixed noise. In this paper, we propose the asymmetric Laplace noise modeling deep image prior (ALDIP) for HSI mixed noise removal. Based on the observation that real-world HSI noise exhibits heavy-tailed and asymmetric properties, we model the HSI noise of each band using an asymmetric Laplace distribution. Furthermore, in order to fully exploit the spatial–spectral correlation, we propose ALDIP-SSTV, which combines ALDIP with a spatial–spectral total variation (SSTV) term to preserve more spatial–spectral information. Experiments on both synthetic data and real-world data demonstrate that ALDIP and ALDIP-SSTV outperform state-of-the-art HSI denoising methods.
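
To show what an asymmetric Laplace fidelity term looks like, a per-pixel negative log-likelihood is sketched below with a single asymmetry parameter `kappa` and scale `b`; ALDIP estimates such parameters per band and adds an SSTV term, neither of which is reproduced here.

```python
import torch

def asym_laplace_nll(residual, kappa=1.0, b=0.05):
    """Negative log-likelihood (up to constants) of residuals under an
    asymmetric Laplace distribution with location 0, asymmetry kappa and
    scale b: positive and negative residuals are weighted differently."""
    pos = (residual >= 0).float()
    w = kappa * pos + (1.0 / kappa) * (1.0 - pos)
    return (residual.abs() * w / b).mean()
```
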
44

Shao, Xiaofeng, Zhenping Qiang, Fei Dai, Libo He, and Hong Lin. "Face Image Completion Based on GAN Prior". Electronics 11, no. 13 (26.06.2022): 1997. http://dx.doi.org/10.3390/electronics11131997.

Abstract:
Face images are often used in social and entertainment activities to convey information. However, during the transmission of digital images, there are factors that may destroy or obscure the key elements of an image, which can hinder the understanding of its content. Therefore, the study of face image completion has become an important research branch in the field of computer image processing. Compared with traditional image inpainting methods, deep-learning-based inpainting methods have significantly improved the results on face images, but in the case of complex semantic information and large missing areas, the completion results are still blurred and the boundary colors are inconsistent, which does not match human visual perception. To solve this problem, this paper proposes a face completion method based on a GAN prior, which guides the network to complete face images by directly using the rich and diverse prior information in a pre-trained GAN. The network model is a coarse-to-fine structure, where the damaged face images and the corresponding masks are first input to the coarse network to obtain coarse results, and then the coarse results are input to the fine network with multi-resolution skip connections. The fine network uses the prior information from the pre-trained GAN to guide the generation of face images, and finally an SN-PatchGAN discriminator evaluates the completion results. The experiments are performed on the CelebA-HQ dataset. Compared with three recent completion methods, the qualitative and quantitative analysis shows that our method offers clear improvements in texture and fidelity.
45

Lin, Jian, Qiurong Yan, Shang Lu, Yongjian Zheng, Shida Sun, and Zhen Wei. "A Compressed Reconstruction Network Combining Deep Image Prior and Autoencoding Priors for Single-Pixel Imaging". Photonics 9, no. 5 (13.05.2022): 343. http://dx.doi.org/10.3390/photonics9050343.

Abstract:
Single-pixel imaging (SPI) is a promising imaging scheme based on compressive sensing. However, its application in high-resolution and real-time scenarios is a great challenge due to the long sampling and reconstruction times required. The Deep Learning Compressed Network (DLCNet) can avoid the long iterative operation required by traditional reconstruction algorithms and can achieve fast, high-quality reconstruction; hence, deep-learning-based SPI has attracted much attention. DLCNets learn prior distributions of real pictures from massive datasets, while the Deep Image Prior (DIP) uses a neural network's own structural prior to solve inverse problems without requiring a lot of training data. This paper proposes a compressed reconstruction network (DPAP) based on DIP for single-pixel imaging. DPAP is designed with two learning stages, which enables it to focus on statistical information of the image structure at different scales. In order to obtain prior information from the dataset, the measurement matrix is jointly optimized by a network, and multiple autoencoders are trained as regularization terms to be added to the loss function. Extensive simulations and practical experiments demonstrate that the proposed network outperforms existing algorithms.
46

Vu, Tri, Anthony DiSpirito, Daiwei Li, Zixuan Wang, Xiaoyi Zhu, Maomao Chen, Laiming Jiang, et al. "Deep image prior for undersampling high-speed photoacoustic microscopy". Photoacoustics 22 (June 2021): 100266. http://dx.doi.org/10.1016/j.pacs.2021.100266.

47

Li, Yuan, Shasha Wang, and Lei Chen. "Self-supervised image blind deblurring using deep generator prior". Optoelectronics Letters 18, no. 3 (March 2022): 187–92. http://dx.doi.org/10.1007/s11801-022-1111-0.

48

Lai, Zeqiang, Kaixuan Wei, and Ying Fu. "Deep plug-and-play prior for hyperspectral image restoration". Neurocomputing 481 (April 2022): 281–93. http://dx.doi.org/10.1016/j.neucom.2022.01.057.

49

Dong, Weisheng, Peiyao Wang, Wotao Yin, Guangming Shi, Fangfang Wu, and Xiaotong Lu. "Denoising Prior Driven Deep Neural Network for Image Restoration". IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 10 (01.10.2019): 2305–18. http://dx.doi.org/10.1109/tpami.2018.2873610.

50

Segawa, Ryo, Hitoshi Hayashi, and Shohei Fujii. "Proposal of New Activation Function in Deep Image Prior". IEEJ Transactions on Electrical and Electronic Engineering 15, no. 8 (17.06.2020): 1248–49. http://dx.doi.org/10.1002/tee.23191.
