Selected scientific literature on the topic "Deep Image Prior"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the list of current articles, books, theses, conference proceedings and other scholarly sources relevant to the topic "Deep Image Prior".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when available in the metadata.

Journal articles on the topic "Deep Image Prior"

1

Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Deep Image Prior". International Journal of Computer Vision 128, no. 7 (March 4, 2020): 1867–88. http://dx.doi.org/10.1007/s11263-020-01303-4.

2

Shin, Chang Jong, Tae Bok Lee, and Yong Seok Heo. "Dual Image Deblurring Using Deep Image Prior". Electronics 10, no. 17 (August 24, 2021): 2045. http://dx.doi.org/10.3390/electronics10172045.

Abstract:
Blind image deblurring, one of the main problems in image restoration, is a challenging, ill-posed problem. Hence, it is important to design a prior to solve it. Recently, deep image prior (DIP) has shown that convolutional neural networks (CNNs) can be a powerful prior for a single natural image. Previous DIP-based deblurring methods exploited CNNs as a prior when solving the blind deblurring problem and performed remarkably well. However, these methods do not completely utilize the given multiple blurry images, and have limited performance for severely blurred images. This is because their architectures are strictly designed to utilize a single image. In this paper, we propose a method called DualDeblur, which uses dual blurry images to generate a single sharp image. DualDeblur jointly utilizes the complementary information of multiple blurry images to capture image statistics for a single sharp image. Additionally, we propose an adaptive L2_SSIM loss that enhances both pixel accuracy and structural properties. Extensive experiments show the superior performance of our method over previous methods in both qualitative and quantitative evaluations.
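
Since nearly every entry in this list builds on the same core procedure, a minimal sketch of the basic Deep Image Prior loop may help orient readers: a randomly initialized CNN with a fixed random input is fitted to the single corrupted image, and early stopping acts as the regularizer. The tiny network and the hyperparameters below are illustrative assumptions, not taken from any of the cited papers.

```python
import torch
import torch.nn as nn

def make_net(channels=64):
    # Tiny stand-in for the hourglass/U-Net generators typically used in DIP work.
    return nn.Sequential(
        nn.Conv2d(32, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
    )

def dip_restore(noisy, iters=2000, lr=0.01):
    # noisy: (1, 3, H, W) tensor holding the single corrupted observation.
    net = make_net()
    z = torch.randn(1, 32, *noisy.shape[-2:])   # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(iters):                       # the iteration budget acts as early stopping
        opt.zero_grad()
        loss = mse(net(z), noisy)
        loss.backward()
        opt.step()
    return net(z).detach()
```
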
3

Cannas, Edoardo Daniele, Sara Mandelli, Paolo Bestagini, Stefano Tubaro, and Edward J. Delp. "Deep Image Prior Amplitude SAR Image Anonymization". Remote Sensing 15, no. 15 (July 27, 2023): 3750. http://dx.doi.org/10.3390/rs15153750.

Abstract:
This paper presents an extensive evaluation of the Deep Image Prior (DIP) technique for image inpainting on Synthetic Aperture Radar (SAR) images. SAR images are gaining popularity in various applications, but there may be a need to conceal certain regions of them. Image inpainting provides a solution for this. However, not all inpainting techniques are designed to work on SAR images. Some are intended for use on photographs, while others have to be specifically trained on top of a huge set of images. In this work, we evaluate the performance of the DIP technique, which is capable of addressing these challenges: it can adapt to the image under analysis, including SAR imagery, and it does not require any training. Our results demonstrate that the DIP method achieves great performance in terms of objective and semantic metrics. This indicates that the DIP method is a promising approach for inpainting SAR images, and can provide high-quality results that meet the requirements of various applications.
4

Shi, Yu, Cien Fan, Lian Zou, Caixia Sun, and Yifeng Liu. "Unsupervised Adversarial Defense through Tandem Deep Image Priors". Electronics 9, no. 11 (November 19, 2020): 1957. http://dx.doi.org/10.3390/electronics9111957.

Abstract:
Deep neural networks are vulnerable to adversarial examples, which are synthesized by adding imperceptible perturbations to the original image and can fool the classifier into producing wrong predictions. This paper proposes an image restoration approach that provides a strong defense mechanism and robustness against adversarial attacks. We show that the unsupervised image restoration framework, deep image prior, can effectively eliminate the influence of adversarial perturbations. The proposed method uses multiple deep image prior networks called tandem deep image priors to recover the original image from the adversarial example. Tandem deep image priors contain two deep image prior networks. The first network captures the main information of the image and the second network recovers the original image based on the prior information provided by the first network. The proposed method reduces the number of iterations originally required by the deep image prior network and does not require adjusting the classifier or pre-training. It can be combined with other defensive methods. Our experiments show that the proposed method achieves surprisingly higher classification accuracy on ImageNet against a wide variety of adversarial attacks than previous state-of-the-art defense methods.
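
As a rough, hedged illustration of the tandem idea described in this entry, the sketch below runs two DIP stages in sequence, with the second stage additionally pulled toward the first stage's coarse estimate. The factory functions `make_net` and `make_input`, the iteration counts, and the weight `alpha` are hypothetical placeholders rather than the paper's actual design.

```python
import torch
import torch.nn.functional as F

def tandem_dip(adv_image, make_net, make_input, iters1=500, iters2=500, alpha=0.5, lr=0.01):
    # Stage 1: a short DIP run captures the main (low-frequency) content of the image.
    net1, z1 = make_net(), make_input()
    opt1 = torch.optim.Adam(net1.parameters(), lr=lr)
    for _ in range(iters1):
        opt1.zero_grad()
        loss = F.mse_loss(net1(z1), adv_image)
        loss.backward()
        opt1.step()
    coarse = net1(z1).detach()

    # Stage 2: a second DIP run is guided by the adversarial input and the coarse prior.
    net2, z2 = make_net(), make_input()
    opt2 = torch.optim.Adam(net2.parameters(), lr=lr)
    for _ in range(iters2):
        opt2.zero_grad()
        x = net2(z2)
        loss = F.mse_loss(x, adv_image) + alpha * F.mse_loss(x, coarse)
        loss.backward()
        opt2.step()
    return net2(z2).detach()
```
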
5

Gong, Kuang, Ciprian Catana, Jinyi Qi, and Quanzheng Li. "PET Image Reconstruction Using Deep Image Prior". IEEE Transactions on Medical Imaging 38, no. 7 (July 2019): 1655–65. http://dx.doi.org/10.1109/tmi.2018.2888491.

6

Han, Sujy, Tae Bok Lee, and Yong Seok Heo. "Deep Image Prior for Super Resolution of Noisy Image". Electronics 10, no. 16 (August 20, 2021): 2014. http://dx.doi.org/10.3390/electronics10162014.

Abstract:
The single image super-resolution task aims to reconstruct a high-resolution image from a low-resolution image. Recently, it has been shown that by using deep image prior (DIP), a single neural network is sufficient to capture low-level image statistics using only a single image without data-driven training, such that it can be used for various image restoration problems. However, super-resolution tasks are difficult to perform with DIP when the target image is noisy. The super-resolved image becomes noisy because the reconstruction loss of DIP does not consider the noise in the target image. Furthermore, when the target image contains noise, the optimization process of DIP becomes unstable and sensitive to noise. In this paper, we propose a noise-robust and stable framework based on DIP. To this end, we propose a noise-estimation method using a generative adversarial network (GAN) and a self-supervision loss (SSL). We show that a generator of DIP can learn the distribution of noise in the target image with the proposed framework. Moreover, we argue that the optimization process of DIP is stabilized when the proposed self-supervision loss is incorporated. The experiments show that the proposed method quantitatively and qualitatively outperforms existing single image super-resolution methods for noisy images.
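
To make the role of the reconstruction loss concrete, here is a hedged sketch of how a plain DIP loop is usually adapted to super-resolution: the generator outputs a high-resolution estimate and the loss is computed only after a fixed downsampling operator maps it back to the low-resolution grid. The noise-estimation GAN and self-supervision loss of this paper are not reproduced; `net`, `z`, and the bilinear downsampler are assumptions.

```python
import torch
import torch.nn.functional as F

def dip_super_resolve(lr_image, net, z, iters=2000, lr=0.01):
    # lr_image: (1, 3, h, w) low-resolution observation; net maps z to a larger (1, 3, H, W) output.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        hr_estimate = net(z)
        # A simple bilinear resize stands in for the true degradation operator.
        downsampled = F.interpolate(hr_estimate, size=lr_image.shape[-2:],
                                    mode='bilinear', align_corners=False)
        loss = F.mse_loss(downsampled, lr_image)
        loss.backward()
        opt.step()
    return net(z).detach()
```
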
7

Xie, Zhonghua, Lingjun Liu, Zhongliang Luo, and Jianfeng Huang. "Image Denoising Using Nonlocal Regularized Deep Image Prior". Symmetry 13, no. 11 (November 7, 2021): 2114. http://dx.doi.org/10.3390/sym13112114.

Abstract:
Deep neural networks have shown great potential in various low-level vision tasks, leading to several state-of-the-art image denoising techniques. Training a deep neural network in a supervised fashion usually requires the collection of a great number of examples and the consumption of a significant amount of time. However, the collection of training samples is very difficult for some application scenarios, such as the full-sampled data of magnetic resonance imaging and the data of satellite remote sensing imaging. In this paper, we overcome the problem of a lack of training data by using an unsupervised deep-learning-based method. Specifically, we propose a method based on the deep image prior (DIP), which only requires a noisy image as training data, without any clean data. It infers the natural image from a random input and the corrupted observation, performing the correction via a convolutional network. We improve the original DIP method as follows: Firstly, the original optimization objective function is modified by adding nonlocal regularizers, consisting of a spatial filter and a frequency domain filter, to promote the gradient sparsity of the solution. Secondly, we solve the optimization problem with the alternating direction method of multipliers (ADMM) framework, resulting in two separate optimization problems, including a symmetric U-Net training step and a plug-and-play proximal denoising step. As such, the proposed method exploits the powerful denoising ability of both deep neural networks and nonlocal regularizations. Experiments validate the effectiveness of leveraging a combination of DIP and nonlocal regularizers, and demonstrate the superior performance of the proposed method both quantitatively and visually compared with the original DIP method.
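
The alternation described above (a network-fitting step followed by a plug-and-play proximal denoising step inside an ADMM-style splitting) can be sketched roughly as follows. The generic `denoiser` callable, the penalty weight `rho`, and the iteration counts are stand-ins for the paper's nonlocal spatial and frequency-domain regularizers, so this is only an assumed, simplified structure.

```python
import torch
import torch.nn.functional as F

def dip_admm(noisy, net, z, denoiser, outer_iters=20, inner_iters=100, rho=0.5, lr=0.01):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    v = noisy.clone()                # auxiliary (split) variable
    u = torch.zeros_like(noisy)      # scaled dual variable
    for _ in range(outer_iters):
        # (1) Network step: fit the DIP generator to the data and to the split variable.
        target = (v - u).detach()
        for _ in range(inner_iters):
            opt.zero_grad()
            x = net(z)
            loss = F.mse_loss(x, noisy) + rho * F.mse_loss(x, target)
            loss.backward()
            opt.step()
        x = net(z).detach()
        # (2) Proximal step: an off-the-shelf denoiser plays the role of the prior's proximal map.
        v = denoiser(x + u)
        # (3) Dual update.
        u = u + x - v
    return net(z).detach()
```
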
8

Chen, Yingxia, Yuqi Li, Tingting Wang, Yan Chen, and Faming Fang. "DPDU-Net: Double Prior Deep Unrolling Network for Pansharpening". Remote Sensing 16, no. 12 (June 13, 2024): 2141. http://dx.doi.org/10.3390/rs16122141.

Abstract:
The objective of the pansharpening task is to fuse multispectral (MS) images with low spatial resolution (LR) and panchromatic (PAN) images with high spatial resolution (HR) to generate HRMS images. Recently, deep learning-based pansharpening methods have been widely studied. However, traditional deep learning methods lack transparency, while deep unrolling methods have limited performance when using only one implicit prior for HRMS images. To address this issue, we combine an implicit prior with a semi-implicit prior and propose a double prior deep unrolling network (DPDU-Net) for pansharpening. Specifically, we first formulate the objective function based on observation models of PAN and LRMS images and two priors of an HRMS image. In addition to the implicit prior in the image domain, we enforce the sparsity of the HRMS image in a certain multi-scale implicit space; thereby, the feature map can obtain better sparse representation ability. We optimize the proposed objective function via alternating iteration. Then, the iterative process is unrolled into an elaborate network, with each iteration corresponding to a stage of the network. We conduct both reduced-resolution and full-resolution experiments on two satellite datasets. Both visual comparisons and metric-based evaluations consistently demonstrate the superiority of the proposed DPDU-Net.
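
For readers unfamiliar with the deep-unrolling pattern mentioned in this abstract, the sketch below shows the generic idea: each network stage imitates one iteration of an optimization algorithm, here a gradient step on a quadratic data-fidelity term followed by a small learned proximal module. The `degrade_op` object (a forward call plus an `adjoint` method) is a hypothetical placeholder for the PAN/LRMS observation models; the actual DPDU-Net priors are not reproduced.

```python
import torch
import torch.nn as nn

class UnrolledStage(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.step_size = nn.Parameter(torch.tensor(0.1))   # learned gradient step size
        self.prox = nn.Sequential(                          # small learned proximal operator
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, degraded, degrade_op):
        # Gradient step on ||degrade_op(x) - degraded||^2, followed by the learned prior step.
        grad = degrade_op.adjoint(degrade_op(x) - degraded)
        x = x - self.step_size * grad
        return x + self.prox(x)

class UnrolledNet(nn.Module):
    def __init__(self, channels, num_stages=5):
        super().__init__()
        self.stages = nn.ModuleList([UnrolledStage(channels) for _ in range(num_stages)])

    def forward(self, init, degraded, degrade_op):
        x = init
        for stage in self.stages:
            x = stage(x, degraded, degrade_op)
        return x
```
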
9

You, Shaopei, Jianlou Xu, Yajing Fan, Yuying Guo, and Xiaodong Wang. "Combining Deep Image Prior and Second-Order Generalized Total Variance for Image Inpainting". Mathematics 11, no. 14 (July 21, 2023): 3201. http://dx.doi.org/10.3390/math11143201.

Abstract:
Image inpainting is a crucial task in computer vision that aims to restore missing and occluded parts of damaged images. Deep-learning-based image inpainting methods have gained popularity in recent research. One such method is the deep image prior, which is unsupervised and does not require a large number of training samples. However, the deep image prior method often encounters overfitting problems, resulting in blurred image edges. In contrast, the second-order total generalized variation can effectively protect the image edge information. In this paper, we propose a novel image restoration model that combines the strengths of both the deep image prior and the second-order total generalized variation. Our model aims to better preserve the edges of the image structure. To effectively solve the optimization problem, we employ the augmented Lagrangian method and the alternating direction method of multipliers. Numerical experiments show that the proposed method can repair images more effectively, retain more image details, and achieve higher performance than some recent methods in terms of peak signal-to-noise ratio and structural similarity.
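
As a simplified illustration of coupling DIP with an edge-preserving variational term, the sketch below adds a plain first-order (anisotropic) total-variation penalty to a masked DIP inpainting loss. The paper itself uses second-order total generalized variation solved with an augmented Lagrangian / ADMM scheme, so this is only a loose approximation; `net`, `z`, and the weight `lam` are placeholders.

```python
import torch
import torch.nn.functional as F

def tv_penalty(x):
    # Anisotropic total variation: mean absolute horizontal and vertical differences.
    dh = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    dv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    return dh + dv

def dip_inpaint_tv(corrupted, mask, net, z, lam=0.05, iters=3000, lr=0.01):
    # mask: 1 where pixels are observed, 0 where they are missing.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x = net(z)
        data_term = F.mse_loss(x * mask, corrupted * mask)
        loss = data_term + lam * tv_penalty(x)
        loss.backward()
        opt.step()
    return net(z).detach()
```
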
10

Fan, Wenshi, Hancheng Yu, Tianming Chen, and Sheng Ji. "OCT Image Restoration Using Non-Local Deep Image Prior". Electronics 9, no. 5 (May 11, 2020): 784. http://dx.doi.org/10.3390/electronics9050784.

Abstract:
In recent years, convolutional neural networks (CNN) have been widely used in image denoising for their high performance. One difficulty in applying the CNN to medical image denoising such as speckle reduction in the optical coherence tomography (OCT) image is that a large amount of high-quality data is required for training, which is an inherent limitation for OCT despeckling. Recently, deep image prior (DIP) networks have been proposed for image restoration without pre-training since the CNN structures have the intrinsic ability to capture the low-level statistics of a single image. However, the DIP has difficulty finding a good balance between maintaining details and suppressing speckle noise. Inspired by DIP, in this paper, a sorted non-local statistic, which measures the signal autocorrelation in the differences between the constructed image and the input image, is proposed for OCT image restoration. By adding the sorted non-local statistic as a regularization loss in the DIP learning, more low-level image statistics are captured by CNN networks in the process of OCT image restoration. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual quality.

Theses / dissertations on the topic "Deep Image Prior"

1

Liu, Yang. "Application of prior information to discriminative feature learning". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/285558.

Abstract:
Learning discriminative feature representations has attracted a great deal of attention since it is a critical step to facilitate the subsequent classification, retrieval and recommendation tasks. In this dissertation, besides incorporating prior knowledge about image labels into the image classification as most prevalent feature learning methods currently do, we also explore some other general-purpose priors and verify their effectiveness in the discriminant feature learning. As a more powerful representation can be learned by implementing such general priors, our approaches achieve state-of-the-art results on challenging benchmarks. We elaborate on these general-purpose priors and highlight where we have made novel contributions. We apply sparsity and hierarchical priors to the explanatory factors that describe the data, in order to better discover the data structure. More specifically, in the first approach we propose to incorporate only sparse priors into the feature learning. To this end, we present a support discrimination dictionary learning method, which finds a dictionary under which the feature representation of images from the same class have a common sparse structure while the size of the overlapped signal support of different classes is minimised. Then we incorporate sparse priors and hierarchical priors into a unified framework, that is capable of controlling the sparsity of the neuron activation in deep neural networks. Our proposed approach automatically selects the most useful low-level features and effectively combines them into more powerful and discriminative features for our specific image classification problem. We also explore priors on the relationships between multiple factors. When multiple independent factors exist in the image generation process and only some of them are of interest to us, we propose a novel multi-task adversarial network to learn a disentangled feature which is optimized with respect to the factor of interest to us, while being agnostic to distraction factors. When common factors exist in multiple tasks, leveraging common factors can not only make the learned feature representation more robust, but also enable the model to generalise from very few labelled samples. More specifically, we address the domain adaptation problem and propose the re-weighted adversarial adaptation network to reduce the feature distribution divergence and adapt the classifier from source to target domains.
2

Merasli, Alexandre. "Reconstruction d’images TEP par des méthodes d’optimisation hybrides utilisant un réseau de neurones non supervisé et de l'information anatomique". Electronic Thesis or Diss., Nantes Université, 2024. http://www.theses.fr/2024NANU1003.

Abstract:
PET is a functional imaging modality used in oncology to obtain a quantitative image of the distribution of a radiotracer injected into a patient. The raw PET data are characterized by a high level of noise and modest spatial resolution, compared to anatomical imaging modalities such as MRI or CT. In addition, standard methods for image reconstruction from the PET raw data introduce a positive bias in low activity regions, especially when dealing with low statistics acquisitions (highly noisy data). In this work, a new reconstruction algorithm, called DNA, has been developed. Using the ADMM algorithm, DNA combines the recently proposed Deep Image Prior (DIP) method to limit noise propagation and improve spatial resolution by using anatomical information, and a bias reduction method developed for low statistics PET imaging. However, the use of DIP and ADMM algorithms requires the tuning of many hyperparameters, which are often selected manually. A study has been carried out to tune some of them automatically, using methods that could benefit other algorithms. Finally, the use of anatomical information, especially with DIP, allows an improvement of the PET image quality, but can generate artifacts when information from one modality does not spatially match with the other. This is particularly the case when tumors have different anatomical and functional contours. Two methods have been developed to remove these artifacts while trying to preserve the useful information provided by the anatomical modality.
3

Deng, Mo. "Deep learning with physical and power-spectral priors for robust image inversion". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127013.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 169-182).
Computational imaging is the class of imaging systems that utilizes inverse algorithms to recover unknown objects of interest from physical measurements. Deep learning has been used in computational imaging, typically in a supervised mode and in an end-to-end fashion. However, treating the machine learning algorithm as a mere black box is not the most efficient approach, as the measurement formation process (a.k.a. the forward operator), which depends on the optical apparatus, is known to us. Therefore, it is inefficient to make the neural network learn, at least in part, system physics that is already known. Also, some prior knowledge of the class of objects of interest can be leveraged to make the training more efficient. The main theme of this thesis is to design more efficient deep learning algorithms with the help of physical and power-spectral priors.
We first propose the learning-to-synthesize by DNN (LS-DNN) scheme, a dual-channel DNN architecture in which each channel is dedicated to the low- or high-frequency band, respectively; the channels split and process the two bands and subsequently learn to recombine them for better inversion. Results show that the LS-DNN scheme largely improves reconstruction quality in many applications, especially in the most severely ill-posed cases. In this application, we implicitly incorporate the system physics through data pre-processing, and the power-spectral prior through the design of the band-splitting configuration. We then propose to use the Phase Extraction Neural Network (PhENN), trained with a perceptual loss based on feature maps extracted from pre-trained classification networks, to tackle the problem of phase retrieval under low-light conditions.
This essentially transfers knowledge, i.e., features relevant to classification and thus to human perceptual quality, to the image-transformation network (such as PhENN). We find that the commonly defined perceptual loss needs to be refined for low-light applications, to avoid strengthened "grid-like" artifacts and achieve superior reconstruction quality. Moreover, we empirically investigate the interplay between the physical and content priors when using deep learning for computational imaging. More specifically, we investigate the effect of the training examples on the learning of the underlying physical map and find that training datasets with higher Shannon entropy better guide the training to correspond to the system physics, so the trained model generalizes better to test examples disjoint from the training set.
Conversely, if more restricted examples are used for training, the network can be undesirably guided to "remember" to produce outputs similar to those seen in training, making cross-domain generalization problematic. Next, we propose to use deep learning to greatly accelerate optical diffraction tomography. Unlike previous approaches that rely on iterative optimization, we present significant progress towards reconstructing 3D refractive index (RI) maps from a single-shot angle-multiplexed interferogram. Last but not least, we propose to use cascaded neural networks to incorporate the system physics directly into the machine learning algorithm, while leaving the trainable architectures to learn to act as the ideal proximal mapping associated with efficient regularization of the data. We show that this unrolled scheme significantly outperforms the end-to-end scheme in low-light imaging applications.
4

Ganaye, Pierre-Antoine. "A priori et apprentissage profond pour la segmentation en imagerie cérébrale". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI100.

Abstract:
Medical imaging is a vast field guided by advances in instrumentation, acquisition techniques and image processing. Advances in these major disciplines all contribute to the improvement of the understanding of both physiological and pathological phenomena. In parallel, access to broader imaging databases, combined with the development of computing power, has fostered the development of machine learning methodologies for automatic image processing, including approaches based on deep neural networks. Among the applications where deep neural networks provide solutions, we find image segmentation, which consists in locating and delimiting in an image regions with specific properties that will be associated with the same structure. Despite many recent studies in deep learning based segmentation, learning the parameters of a neural network is still guided by quantitative performance measures that do not include high-level knowledge of anatomy. The objective of this thesis is to develop methods to integrate prior knowledge into deep neural networks, targeting the segmentation of brain structures in MRI imaging. Our first contribution proposes a strategy for integrating the spatial position of the patch to be classified, to improve the discriminating power of the segmentation model. This first work considerably corrects segmentation errors that are far away from the anatomical reality, also improving the overall quality of the results. Our second contribution focuses on a methodology to constrain adjacency relationships between anatomical structures, directly while learning network parameters, in order to reinforce the realism of the produced segmentations. Our experiments conclude that the proposed constraint corrects non-admitted adjacencies, thus improving the anatomical consistency of the segmentations produced by the neural network.
5

Li, Zheng-Yi (李政毅). "Structural RPN: Integrating Prior Parametric Model to Deep CNN for Medical Image Applications". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/326476.

6

Pandey, Gaurav. "Deep Learning with Minimal Supervision". Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4315.

Abstract:
In recent years, deep neural networks have achieved extraordinary performance on supervised learning tasks. Convolutional neural networks (CNN) have vastly improved the state of the art for most computer vision tasks including object recognition and segmentation. However, their success relies on the presence of a large amount of labeled data. In contrast, relatively little work has been done in deep learning to handle scenarios where access to ground truth is limited, partial or completely absent. In this thesis, we propose models to handle challenging problems with limited labeled information. Our first contribution is a neural architecture that allows for the extraction of infinitely many features from an object while allowing for tractable inference. This is achieved by using the "kernel trick", that is, we express the inner product in the infinite dimensional feature space as a kernel. The kernel can either be computed exactly for single layer feedforward networks, or approximated by an iterative algorithm for deep convolutional networks. The corresponding models are referred to as stretched deep networks (SDN). We show that when the amount of training data is limited, SDNs with random weights drastically outperform fully supervised CNNs with similar architectures. While SDNs perform reasonably well for classification with limited labeled data, they cannot utilize unlabeled data, which is often much easier to obtain. A common approach to utilize unlabeled data is to couple the classifier with an autoencoder (or its variants), thereby minimizing reconstruction error in addition to the classification error. We discuss the limitations of decoder-based architectures and propose a model that allows for the utilization of unlabeled data without the need of a decoder. This is achieved by jointly modeling the distribution of data and latent features in a manner that explicitly assigns zero probability to unobserved data. The joint probability of the data and the latent features is maximized using a two-step EM-like procedure. Depending on the task, we allow the latent features to be one-hot or real-valued vectors and define a suitable prior on the features. For instance, one-hot features correspond to class labels and are directly used for the unsupervised and semi-supervised classification tasks. For real-valued features, we use hierarchical Bayesian models as priors over the latent features. Hence, the proposed model, which we refer to as the discriminative encoder (or DisCoder), is flexible in the type of latent features that it can capture. The proposed model achieves state-of-the-art performance on several challenging datasets. Having addressed the problem of utilizing unlabeled data for classification, we move to a domain where obtaining labels is a lot more expensive, namely semantic segmentation of images. Explicitly labeling each pixel of an image with the object that the pixel belongs to is an expensive operation, in terms of both time and effort. Currently, only a few classes of images have been densely (pixel-level) labeled. Even among these classes, only a few images per class have pixel-level supervision. Models that rely on densely-labeled images cannot utilize a much larger set of weakly annotated images available on the web. Moreover, these models cannot learn the segmentation masks for new classes, where there is no densely labeled data. Hence, we propose a model for utilizing weakly-labeled data for semantic segmentation of images. This is achieved by generating fake labels for each image, while simultaneously forcing the output of the CNN to satisfy the mean-field constraints imposed by a conditional random field. We show that one can enforce the CRF constraints by forcing the distribution at each pixel to be close to the distribution of its neighbors. The proposed model is very fast to train and achieves state-of-the-art performance on the popular VOC-2012 dataset for the task of weakly supervised semantic segmentation of images.

Book chapters on the topic "Deep Image Prior"

1

Wang, Hongyan, Xin Wang, and Zhixun Su. "Single Image Dehazing with Deep-Image-Prior Networks". In Lecture Notes in Computer Science, 78–90. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46311-2_7.

2

Dittmer, Sören, Tobias Kluth, Daniel Otero Baguer, and Peter Maass. "A Deep Prior Approach to Magnetic Particle Imaging". In Machine Learning for Medical Image Reconstruction, 113–22. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61598-7_11.

3

Laves, Max-Heinrich, Malte Tölle, and Tobias Ortmaier. "Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior". In Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis, 81–96. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60365-6_9.

4

Sudarshan, Viswanath P., K. Pavan Kumar Reddy, Mohana Singh, Jayavardhana Gubbi, and Arpan Pal. "Uncertainty-Informed Bayesian PET Image Reconstruction Using a Deep Image Prior". In Machine Learning for Medical Image Reconstruction, 145–55. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17247-2_15.

5

Ferreira, Leonardo A., Roberto G. Beraldo, Ricardo Suyama, Fernando S. Moura, and André K. Takahata. "2D Electrical Impedance Tomography Brain Image Reconstruction Using Deep Image Prior". In IFMBE Proceedings, 272–82. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49404-8_27.

6

Benfenati, Alessandro, Ambra Catozzi, Giorgia Franchini, and Federica Porta. "Piece-wise Constant Image Segmentation with a Deep Image Prior Approach". In Lecture Notes in Computer Science, 352–62. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-31975-4_27.

7

Agazzotti, Gaetano, Fabien Pierre, and Frédéric Sur. "Deep Image Prior Regularized by Coupled Total Variation for Image Colorization". In Lecture Notes in Computer Science, 301–13. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-31975-4_23.

8

Meyer, Lina, Lena-Marie Woelk, Christine E. Gee, Christian Lohr, Sukanya A. Kannabiran, Björn-Philipp Diercks, and René Werner. "Deep Image Prior for Spatio-temporal Fluorescence Microscopy Images DECO-DIP". In Bildverarbeitung für die Medizin 2024, 322–27. Wiesbaden: Springer Fachmedien Wiesbaden, 2024. http://dx.doi.org/10.1007/978-3-658-44037-4_82.

9

Chen, Yun-Chun, Chen Gao, Esther Robb, and Jia-Bin Huang. "NAS-DIP: Learning Deep Image Prior with Neural Architecture Search". In Computer Vision – ECCV 2020, 442–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58523-5_26.

10

Pan, Xingang, Xiaohang Zhan, Bo Dai, Dahua Lin, Chen Change Loy, and Ping Luo. "Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation". In Computer Vision – ECCV 2020, 262–77. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58536-5_16.


Conference papers on the topic "Deep Image Prior"

1

Shabtay, Nimrod, Eli Schwartz, and Raja Giryes. "Deep Phase Coded Image Prior". In 2024 IEEE International Conference on Computational Photography (ICCP), 1–12. IEEE, 2024. http://dx.doi.org/10.1109/iccp61108.2024.10645026.

2

Yuan, Weimin, Yinuo Wang, Ning Li, Cai Meng, and Xiangzhi Bai. "Mixed Degradation Image Restoration via Deep Image Prior Empowered by Deep Denoising Engine". In 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650215.

3

Zhang, Yifan, Chaoqun Dong, and Shaohui Mei. "Cycle-Consistent Sparse Unmixing Network Based on Deep Image Prior". In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, 9231–34. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10641125.

4

Sultan, Muhammad Ahmad, Chong Chen, Yingmin Liu, Xuan Lei, and Rizwan Ahmad. "Deep Image Prior with Structured Sparsity (Discus) for Dynamic MRI Reconstruction". In 2024 IEEE International Symposium on Biomedical Imaging (ISBI), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/isbi56570.2024.10635579.

5

Sfountouris, Loukas, and Athanasios A. Rontogiannis. "Hyperspectral Image Denoising by Jointly Using Variational Bayes Matrix Factorization and Deep Image Prior". In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, 7626–30. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10642320.

6

Lempitsky, Victor, Andrea Vedaldi, and Dmitry Ulyanov. "Deep Image Prior". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00984.

7

Balušík, Peter. "Image demosaicing using Deep Image Prior". In STUDENT EEICT 2023. Brno: Brno University of Technology, Faculty of Electrical Engineering and Communication, 2023. http://dx.doi.org/10.13164/eeict.2023.17.

8

Li, Taihui, Hengkang Wang, Zhong Zhuang, and Ju Sun. "Deep Random Projector: Accelerated Deep Image Prior". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01743.

9

Li, Jikai, Ruiki Kobayashi, Shogo Muramatsu, and Gwanggil Jeon. "Image Restoration with Structured Deep Image Prior". In 2021 36th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC). IEEE, 2021. http://dx.doi.org/10.1109/itc-cscc52171.2021.9524738.

10

Shi, Yinxia, Desheng Wen, and Tuochi Jiang. "Deep image prior for polarization image demosaicking". In 2023 4th International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE). IEEE, 2023. http://dx.doi.org/10.1109/icbase59196.2023.10303066.
