A selection of scholarly literature on the topic "Low-light enhancement and denoising"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Low-light enhancement and denoising".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the source metadata.

Journal articles on the topic "Low-light enhancement and denoising"

1

Carré, Maxime, and Michel Jourlin. "Extending Camera’s Capabilities in Low Light Conditions Based on LIP Enhancement Coupled with CNN Denoising." Sensors 21, no. 23 (November 27, 2021): 7906. http://dx.doi.org/10.3390/s21237906.

Full text of the source
Abstract:
Using a sensor in variable lighting conditions, especially very low-light conditions, requires the application of image enhancement followed by denoising to retrieve correct information. The limits of such a process are explored in the present paper, with the objective of preserving the quality of enhanced images. The LIP (Logarithmic Image Processing) framework was initially created to process images acquired in transmission. The compatibility of this framework with the human visual system makes possible its application to images acquired in reflection. Previous works have established the ability of the LIP laws to perform a precise simulation of exposure-time variation. Such a simulation permits the enhancement of low-light images, but a denoising step is required, realized by using a CNN (Convolutional Neural Network). A main contribution of the paper is the use of rigorous tools (metrics) to estimate the enhancement reliability in terms of noise reduction, visual image quality, and color preservation. Thanks to these tools, it has been established that the standard exposure time can be significantly reduced, which considerably enlarges the use of a given sensor. Moreover, the contributions of the LIP enhancement and the denoising step are evaluated separately.
Styles: APA, Harvard, Vancouver, ISO, etc.
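The LIP laws mentioned in the abstract above have a standard closed form. As a minimal sketch (assuming the usual convention of grey levels in [0, M) and the classical LIP addition and scalar-multiplication laws; this is not the authors' code), exposure-time scaling by a factor λ corresponds to LIP scalar multiplication:

```python
import numpy as np

M = 256.0  # upper bound of the grey-level scale in the LIP framework

def lip_add(f, g):
    # LIP addition: f ⊕ g = f + g - f*g/M (models stacking two absorbing media).
    f, g = np.asarray(f, float), np.asarray(g, float)
    return f + g - f * g / M

def lip_scalar_mul(lam, f):
    # LIP scalar multiplication: lam ⊗ f = M - M*(1 - f/M)**lam,
    # used to simulate multiplying the exposure time by lam.
    return M - M * (1.0 - np.asarray(f, float) / M) ** lam
```

By construction, 2 ⊗ f equals f ⊕ f, which gives a quick sanity check for any implementation of the two laws.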
2

Zhang, Jialiang, Ruiwen Ji, Jingwen Wang, Hongcheng Sun, and Mingye Ju. "DEGAN: Decompose-Enhance-GAN Network for Simultaneous Low-Light Image Lightening and Denoising." Electronics 12, no. 14 (July 11, 2023): 3038. http://dx.doi.org/10.3390/electronics12143038.

Full text of the source
Abstract:
Images taken in low-light situations frequently suffer a significant quality reduction. Addressing these degradation problems in low-light images is essential for raising their visual quality and enhancing high-level visual task performance. However, because of the inherent information loss in dark images, conventional Retinex-based approaches for low-light image enhancement frequently fail to accomplish real denoising. This research introduces DEGANet, a novel deep-learning framework created particularly for enhancing and denoising low-light images. To overcome these restrictions, DEGANet makes use of the strength of a Generative Adversarial Network (GAN). Our Retinex-based DEGANet architecture is made up of three linked subnets: a Decom-Net, an Enhance-Net, and a GAN. The Decom-Net is in charge of separating the reflectance and illumination components from the input low-light image. This decomposition enables Enhance-Net to effectively enhance the illumination component, thereby improving the overall image quality. Due to the complicated noise patterns, fluctuating intensities, and intrinsic information loss in low-light images, denoising them presents a significant challenge. By incorporating a GAN into our architecture, DEGANet is able to effectively denoise and smooth the enhanced image as well as retrieve the original data and fill in the gaps, producing an output that is visually pleasing while maintaining key features. Through a comprehensive set of studies, we demonstrate that DEGANet exceeds current state-of-the-art methods in terms of both image enhancement and denoising quality.
Styles: APA, Harvard, Vancouver, ISO, etc.
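DEGANet's Decom-Net and Enhance-Net are learned components, but the Retinex relation they rest on (image = reflectance × illumination, element-wise) is simple to illustrate. A hedged sketch, with a plain gamma curve standing in for the learned Enhance-Net (the function name and gamma value are illustrative, not from the paper):

```python
import numpy as np

def retinex_recompose(reflectance, illumination, gamma=0.6):
    # Retinex assumption: image = reflectance * illumination (element-wise).
    # Brightening the illumination map (gamma < 1) relights the scene while
    # the reflectance, which carries scene content, is left untouched.
    enhanced_illum = np.clip(illumination, 1e-6, 1.0) ** gamma
    return np.clip(reflectance * enhanced_illum, 0.0, 1.0)
```

With gamma = 1 the function simply reproduces the original product, so the brightening effect comes entirely from the illumination curve.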
3

Kim, Minjae, Dubok Park, David Han, and Hanseok Ko. "A novel approach for denoising and enhancement of extremely low-light video." IEEE Transactions on Consumer Electronics 61, no. 1 (February 2015): 72–80. http://dx.doi.org/10.1109/tce.2015.7064113.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Malik, Sameer, and Rajiv Soundararajan. "A low light natural image statistical model for joint contrast enhancement and denoising." Signal Processing: Image Communication 99 (November 2021): 116433. http://dx.doi.org/10.1016/j.image.2021.116433.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Das Mou, Trisha, Saadia Binte Alam, Md Hasibur Rahman, Gautam Srivastava, Mahady Hasan, and Mohammad Faisal Uddin. "Multi-Range Sequential Learning Based Dark Image Enhancement with Color Upgradation." Applied Sciences 13, no. 2 (January 12, 2023): 1034. http://dx.doi.org/10.3390/app13021034.

Full text of the source
Abstract:
Images under low-light conditions suffer from noise, blurring, and low contrast, thus limiting the precise detection of objects. For this purpose, a novel method is introduced based on a convolutional neural network (CNN) with dual attention units (DAU) and selective kernel feature synthesis (SKFS), merged with a Retinex theory-based model for the enhancement of dark images under low-light conditions. The model described in this paper is a multi-scale residual block made up of several essential components equivalent to a feed-forward convolutional neural network with a VGG16 architecture and various Gaussian convolution kernels. In addition, backpropagation optimizes most of the parameters in this model, whereas the values in conventional models depend on an artificial environment. The model was constructed using simultaneous multi-resolution convolution and dual attention processes. We performed our experiments on the Tesla T4 GPU of Google Colab using the Customized Raw Image Dataset, College Image Dataset (CID), Extreme Low-light Denoising dataset (ELD), and ExDark dataset. In this approach, an extended set of features is set up to learn from several scales to incorporate contextual data. An extensive performance evaluation on the four above-mentioned standard image datasets showed that MSR-MIRNet produced standard image enhancement and denoising results with a precision of 97.33%; additionally, the PSNR/SSIM result is 29.73/0.963, which is better than previously established models (MSR, MIRNet, etc.). Furthermore, the output of the proposed model (MSR-MIRNet) shows that it can be applied in medical image processing, such as detecting fine scars in pelvic bone segmentation imaging, enhancing contrast for tuberculosis analysis, and aiding robotic visualization in dark environments.
Styles: APA, Harvard, Vancouver, ISO, etc.
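The PSNR/SSIM figures quoted above (29.73/0.963) follow standard definitions. A compact reference implementation for both metrics (the SSIM here is the global, single-window form; published results normally use an 11×11 sliding Gaussian window and average the local SSIM map, so values will differ slightly):

```python
import numpy as np

def psnr(x, y, peak=1.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    mse = np.mean((x - y) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    # Global (single-window) SSIM with the standard stabilizing constants.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1 and infinite PSNR, which makes both easy to unit-test.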
6

Han, Guang, Yingfan Wang, Jixin Liu, and Fanyu Zeng. "Low-light images enhancement and denoising network based on unsupervised learning multi-stream feature modeling." Journal of Visual Communication and Image Representation 96 (October 2023): 103932. http://dx.doi.org/10.1016/j.jvcir.2023.103932.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Hu, Linshu, Mengjiao Qin, Feng Zhang, Zhenhong Du, and Renyi Liu. "RSCNN: A CNN-Based Method to Enhance Low-Light Remote-Sensing Images." Remote Sensing 13, no. 1 (December 26, 2020): 62. http://dx.doi.org/10.3390/rs13010062.

Full text of the source
Abstract:
Image enhancement (IE) technology can help enhance the brightness of remote-sensing images to obtain better interpretation and visualization effects. Convolutional neural networks (CNNs), such as the Low-light CNN (LLCNN) and Super-resolution CNN (SRCNN), have achieved great success in image enhancement, image super-resolution, and other image-processing applications. Therefore, we adopt CNNs to propose a new neural network architecture with an end-to-end strategy for low-light remote-sensing IE, named remote-sensing CNN (RSCNN). In RSCNN, an upsampling operator is adopted to help learn more multi-scaled features. To address the lack of labeled training data in remote-sensing image datasets for IE, we first train on real natural-image patches and then fine-tune with simulated remote-sensing image pairs. Reasonably designed experiments are carried out, and the results quantitatively show the superiority of RSCNN in terms of structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) over conventional techniques for low-light remote-sensing IE. Furthermore, the results of our method have obvious qualitative advantages in denoising and in maintaining the authenticity of colors and textures.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Zhao, Meng Ling, and Min Xia Jiang. "Research on Enhanced of Mine-Underground Picture." Advanced Materials Research 490-495 (March 2012): 548–52. http://dx.doi.org/10.4028/www.scientific.net/amr.490-495.548.

Full text of the source
Abstract:
Images captured and transferred by an S3C6410-based field information recorder in underground mines suffer from non-uniform illumination and heavy noise, so they are low in contrast, dim, and dark. Building on Donoho's theory of wavelet threshold denoising, several typical wavelet threshold denoising methods are compared, and the one yielding the best peak signal-to-noise ratio is selected. An image-enhancement method combining adaptive-threshold denoising with histogram equalization is then proposed. Experimental results show that the method has good denoising performance, removing the readout noise of the CCD camera while improving image quality. Wavelet-based enhancement can therefore improve the quality of underground mine images.
Styles: APA, Harvard, Vancouver, ISO, etc.
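The two ingredients this paper combines, Donoho-style wavelet threshold denoising and histogram equalization, can be sketched as follows (a simplified illustration; the paper's adaptive thresholding is more elaborate than the universal threshold shown here):

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Donoho's soft shrinkage: shrink coefficient magnitudes toward zero by t.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(coeffs):
    # Noise level estimated from the median absolute deviation of the detail
    # coefficients; t = sigma * sqrt(2 ln N) (Donoho & Johnstone).
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

def equalize_hist(img_u8):
    # Classic histogram equalization via the normalized cumulative histogram.
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum() / img_u8.size
    return (cdf[img_u8] * 255.0).astype(np.uint8)
```

In the paper's pipeline the soft threshold is applied to the wavelet detail subbands before reconstruction, and equalization then stretches the contrast of the denoised image.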
9

Wu, Zeju, Yang Ji, Lijun Song, and Jianyuan Sun. "Underwater Image Enhancement Based on Color Correction and Detail Enhancement." Journal of Marine Science and Engineering 10, no. 10 (October 17, 2022): 1513. http://dx.doi.org/10.3390/jmse10101513.

Full text of the source
Abstract:
To solve the problems of underwater image color deviation, low contrast, and blurred details, an algorithm based on color correction and detail enhancement is proposed. First, an improved nonlocal means denoising algorithm is used to denoise the underwater image. A combination of Gaussian-weighted spatial distance and Gaussian-weighted Euclidean distance serves as the similarity index of the nonlocal means algorithm for measuring the similarity of structural blocks. The improved algorithm retains more edge features and texture information while maintaining its noise-reduction ability. Then, an improved U-Net is used for color correction. Introducing a residual structure and an attention mechanism into U-Net effectively enhances feature extraction and prevents network degradation. Finally, a sharpening algorithm based on maximum a posteriori estimation is proposed to enhance the image after color correction, which increases the detail information of the image without amplifying the noise. The experimental results show that the proposed algorithm has a remarkable effect on underwater image enhancement.
Styles: APA, Harvard, Vancouver, ISO, etc.
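The paper's modification of nonlocal means concerns the similarity measure between structural blocks. A hedged sketch of such a combined weight, mixing a Gaussian-weighted intensity distance with a Gaussian spatial term (the parameter names h and sigma_s are illustrative, not the paper's notation):

```python
import numpy as np

def nlm_weight(patch_a, patch_b, pos_a, pos_b, h=0.1, sigma_s=5.0):
    # Similarity weight between two patches: an intensity term based on the
    # mean squared difference of the patches, multiplied by a spatial term
    # that decays with the distance between the patch centers.
    d_int = np.mean((patch_a - patch_b) ** 2)
    d_sp = np.sum((np.asarray(pos_a, float) - np.asarray(pos_b, float)) ** 2)
    return np.exp(-d_int / h ** 2) * np.exp(-d_sp / (2.0 * sigma_s ** 2))
```

In a full nonlocal means filter, each pixel is replaced by the weight-normalized average of the center pixels of all candidate patches in its search window.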
10

Xu, Xiaogang, Ruixing Wang, Chi-Wing Fu, and Jiaya Jia. "Deep Parametric 3D Filters for Joint Video Denoising and Illumination Enhancement in Video Super Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3054–62. http://dx.doi.org/10.1609/aaai.v37i3.25409.

Full text of the source
Abstract:
Despite the quality improvement brought by recent methods, video super-resolution (SR) remains very challenging, especially for videos that are low-light and noisy. The current best solution is to sequentially employ the best models for video SR, denoising, and illumination enhancement, but doing so often lowers the image quality, due to the inconsistency between the models. This paper presents a new parametric representation called Deep Parametric 3D Filters (DP3DF), which incorporates local spatiotemporal information to enable simultaneous denoising, illumination enhancement, and SR efficiently in a single encoder-decoder network. Also, a dynamic residual frame is jointly learned with the DP3DF via a shared backbone to further boost the SR quality. We performed extensive experiments, including a large-scale user study, to show our method's effectiveness. Our method consistently surpasses the state-of-the-art methods on all the challenging real datasets with top PSNR and user ratings, while having a very fast run time. The code is available at https://github.com/xiaogang00/DP3DF.
Styles: APA, Harvard, Vancouver, ISO, etc.

Dissertations and theses on the topic "Low-light enhancement and denoising"

1

Dalasari, Venkata Gopi Krishna, and Sri Krishna Jayanty. "Low Light Video Enhancement along with Objective and Subjective Quality Assessment." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13500.

Full text of the source
Abstract:
Enhancing low-light videos has been quite a challenge over the years. A video taken in low light always suffers from low dynamic range and high noise. This master thesis presents a contribution within the field of low-light video enhancement. Three models with different tone-mapping algorithms are proposed for enhancing extremely low-light, low-quality video. For temporal noise removal, a motion-compensated Kalman structure is presented. The dynamic range of the low-light video is stretched using three different methods. In Model 1, dynamic range is increased by adjusting the RGB histograms using gamma correction with a modified version of adaptive clipping thresholds. In Model 2, a shape-preserving dynamic range stretch of the RGB histogram is applied using SMQT. In Model 3, contrast enhancement is done using CLAHE. In the final stage, the residual noise is removed using an efficient NLM. The performance of the models is compared on various objective VQA metrics such as NIQE, GCF, and SSIM. To evaluate the actual performance of the models, subjective tests were conducted, given the large number of applications that target humans as the end users of the video. The performance of the three models is compared on a total of ten real-time input videos taken in extremely low-light environments. A total of 25 human observers subjectively evaluated the performance of the three models based on the parameters contrast, visibility, visual pleasantness, amount of noise, and overall quality. A detailed statistical evaluation of the relative performance of the three models is also provided.
Styles: APA, Harvard, Vancouver, ISO, etc.
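Model 1's dynamic-range adjustment, gamma correction applied after clipping the RGB histograms at adaptive thresholds, can be sketched as follows (fixed percentile thresholds stand in for the thesis's adaptive clipping, and the gamma value is illustrative):

```python
import numpy as np

def stretch_and_gamma(channel, low_pct=1.0, high_pct=99.0, gamma=0.5):
    # Clip the histogram at percentile thresholds to discard outlier bins,
    # stretch the remaining range to [0, 1], then apply gamma < 1 to lift
    # the shadows of the low-light channel.
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    stretched = np.clip((channel - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return stretched ** gamma
```

Applied per RGB channel, this brightens a dark frame while the clipping step keeps isolated hot pixels from dominating the stretch.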
2

Bagchi, Deblin. "Transfer learning approaches for feature denoising and low-resource speech recognition." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1577641434371497.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Li, Meng. "Low-Observable Object Detection and Tracking Using Advanced Image Processing Techniques." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1396465762.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Miller, Sarah Victoria. "Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Chikkamadal, Manjunatha Prathiksha. "Aitchison Geometry and Wavelet Based Joint Demosaicking and Denoising for Low Light Imaging." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1627842487059544.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

CASULA, ESTER ANNA RITA. "Low mass dimuon production with the ALICE muon spectrometer." Doctoral thesis, Università degli Studi di Cagliari, 2014. http://hdl.handle.net/11584/266451.

Full text of the source
Abstract:
Low-mass vector meson (ρ, ω, Φ) production provides key information on the hot and dense state of strongly interacting matter produced in high-energy heavy-ion collisions (called the Quark-Gluon Plasma). Strangeness enhancement is one of the possible signatures of Quark-Gluon Plasma formation and can be accessed through the measurement of Φ meson production with respect to the ρ and ω mesons, while the measurement of the Φ nuclear modification factor provides a powerful tool to probe the production dynamics and hadronization process in relativistic heavy-ion collisions. Vector mesons can be detected through their decays into muon pairs with the ALICE muon spectrometer. This thesis presents the results of the measurement of the Φ differential cross section, as a function of transverse momentum, in pp collisions at √s = 2.76 TeV; the measurement of the Φ yield and of the nuclear modification factor RpA at forward and backward rapidity, as a function of transverse momentum, in p-Pb collisions at √sNN = 5.02 TeV; and the measurement of the Φ/(ρ+ω) ratio, as well as of the Φ nuclear modification factors RAA and RCP, as a function of the number of participating nucleons, in Pb-Pb collisions at √sNN = 2.76 TeV.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Landin, Roman. "Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300055.

Full text of the source
Abstract:
Computer vision is a key component of any autonomous system. Real-world computer vision applications rely on proper and accurate detection and classification of objects. A detection algorithm that doesn't guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy are illumination conditions and image resolution. Both contribute to degradation of objects and lead to low classification and detection accuracy. Recent development of Convolutional Neural Network (CNN) based algorithms offers possibilities for low-light (LL) image enhancement and super-resolution (SR) image generation, which makes it possible to combine such models in order to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground-truth images. To quantify the impact of the respective model on detection accuracy, a detection procedure was evaluated on the generated images. Experimental results evaluated on images selected from the NightOwls and Caltech Pedestrian datasets proved that super-resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, it has been shown that a cascade of SR generation and LL enhancement further boosts detection accuracy. However, the main drawback of such cascades is an increased computational time, which limits possibilities for a range of real-time applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Chen, Hsueh-I., and 陳學儀. "Deep Burst Low Light Image Enhancement with Alignment, Denoising and Blending." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/sfk685.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
ROC academic year 106 (2017–2018)
Taking photos in a low-light environment is always a challenge for most cameras. In this thesis, we propose a neural network pipeline for processing burst short-exposure raw data. Our method comprises alignment, denoising, and blending. First, we use FlowNet2.0 to predict the optical flow between burst images and align them. We then feed the aligned burst raw data into a DenoiseUNet, which includes a denoise part and a color part, to generate an RGB image. Finally, we use a MaskUNet to generate a mask that can identify misalignment. We blend the outputs from the single raw image and from the burst raw images using the mask. Our method shows that using burst inputs yields a significant improvement over a single input.
Styles: APA, Harvard, Vancouver, ISO, etc.
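The final blending step described above reduces to a per-pixel convex combination driven by the misalignment mask. A minimal sketch (the convention that mask = 1 marks misaligned pixels, where the single-frame result is safer, is an assumption of this sketch):

```python
import numpy as np

def mask_blend(single_out, burst_out, mask):
    # Per-pixel blend: where the mask flags misalignment (mask -> 1), fall
    # back to the single-frame result; elsewhere keep the burst result,
    # which is less noisy because it averages information across frames.
    mask = np.clip(mask, 0.0, 1.0)
    return mask * single_out + (1.0 - mask) * burst_out
```

A soft (fractional) mask lets the transition between the two outputs stay seamless instead of introducing hard seams at misaligned regions.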
9

Malik, Sameer. "Low Light Image Restoration: Models, Algorithms and Learning with Limited Data." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6120.

Full text of the source
Abstract:
The ability to capture high quality images under low-light conditions is an important feature of most hand-held devices and surveillance cameras. Images captured under such conditions often suffer from multiple distortions such as poor contrast, low brightness, color-cast and severe noise. While adjusting camera hardware settings such as aperture width, ISO level and exposure time can improve the contrast and brightness levels in the captured image, they often introduce artifacts including shallow depth-of-field, noise and motion blur. Thus, it is important to study image processing approaches to improve the quality of low-light images. In this thesis, we study the problem of low-light image restoration. In particular, we study the design of low-light image restoration algorithms based on statistical models, deep learning architectures and learning approaches when only limited labelled training data is available. In our statistical model approach, the low-light natural image in the band pass domain is modelled by statistically relating a Gaussian scale mixture model for the pristine image, with the low-light image, through a detail loss coefficient and Gaussian noise. The detail loss coefficient in turn is statistically described using a posterior distribution with respect to its estimate based on a prior contrast enhancement algorithm. We then design our low-light enhancement and denoising (LLEAD) method by computing the minimum mean squared error estimate of the pristine image band pass coefficients. We create the Indian Institute of Science low-light image dataset of well-lit and low-light image pairs to learn the model parameters and evaluate our enhancement method. We show through extensive experiments on multiple datasets that our method helps better enhance the contrast while simultaneously controlling the noise when compared to other classical joint contrast enhancement and denoising methods. 
Deep convolutional neural networks (CNNs) based on residual learning and end-to-end multiscale learning have been successful in achieving state of the art performance in image restoration. However, their application to joint contrast enhancement and denoising under low-light conditions is challenging owing to the complex nature of the distortion process involving both loss of details and noise. We address this challenge through two lines of approaches, one which exploits the statistics of natural images and the other which exploits the structure of the distortion process. We first propose a multiscale learning approach by learning the subbands obtained in a Laplacian pyramid decomposition. We refer to our framework as low-light restoration network (LLRNet). Our approach consists of a bank of CNNs where each CNN is trained to learn to explicitly predict different subbands of the Laplacian pyramid of the well exposed image. We show through extensive experiments on multiple datasets that our approach produces better quality restored images when compared to other low-light restoration methods. In our second line of approach, we learn a distortion model that relates a noisy low- light and ground truth image pair. The low-light image is modeled to suffer from contrast distortion and additive noise. We model the loss of contrast through a parametric function, which enables the estimation of the underlying noise. We then use a pair of CNN models to learn the noise and the parameters of a function to achieve contrast enhancement. This contrast enhancement function is modeled as a linear combination of multiple gamma transforms. We show through extensive evaluations that our low-light Image Model for Enhancement Network (LLIMENet) achieves superior restoration performance when compared to other methods on several publicly available datasets. 
While CNN models are fairly successful in low-light image restoration, such approaches require a large number of paired low-light and ground truth image pairs for training. Thus, we study the problem of semi-supervised learning for low-light image restoration when limited low-light images have ground truth labels. Our main contributions in this work are twofold. We first deploy an ensemble of low-light restoration networks to restore the unlabeled images and generate a set of potential pseudo-labels. We model the contrast distortions in the labeled set to generate different sets of training data and create the ensemble of networks. We then design a contrastive self-supervised learning based image quality measure to obtain the pseudo-label among the images restored by the ensemble. We show that training the restoration network with the pseudo-labels allows us to achieve excellent restoration performance even with very few labeled pairs. Our extensive experiments on multiple datasets show the superior performance of our semi-supervised low-light image restoration compared to other approaches. Finally, we study an even more constrained problem setting when only very few labelled image pairs are available for training. To address this challenge, we augment the available labelled data with large number of low-light and ground-truth image pairs through a CNN based model that generates low-light images. In particular, we introduce a contrast distortion auto-encoder framework that learns to disentangle the contrast distortion and content features from a low-light image. The contrast distortion features from a low-light image are then fused with the content features from another pristine image to create a low-light version of the pristine image. We achieve the disentanglement of distortion from image content through the novel use of a contrastive loss to constrain the training. We then use the generated data to train low-light restoration models. 
We evaluate our data generation method in the 5-shot and 10-shot labelled data settings to show the effectiveness of our models.
Styles: APA, Harvard, Vancouver, ISO, etc.
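The thesis models its contrast-enhancement function as a linear combination of multiple gamma transforms. A hedged sketch of that parametric form (in the LLIMENet formulation the weights and exponents are predicted by CNNs; the fixed values used below are purely illustrative):

```python
import numpy as np

def gamma_mixture(x, weights, gammas):
    # Contrast enhancement as a convex combination of gamma transforms:
    # E(x) = sum_k w_k * x**gamma_k, with the weights normalized to sum
    # to one so that inputs in [0, 1] stay in [0, 1].
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * np.asarray(x, float) ** g for wi, g in zip(w, gammas))
```

Because every gamma curve with exponent below one brightens mid-tones, mixing several of them gives a flexible, monotone enhancement curve with only a handful of parameters.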
10

Wang, Szu-Chieh, and 王思傑. "Extreme Low Light Image Enhancement with Generative Adversarial Networks." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cz8pqb.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
ROC academic year 107 (2018–2019)
Taking photos under low-light environments is always a challenge for current imaging pipelines. Image noise and artifacts corrupt the image. Given the recent success of deep learning, it may seem straightforward to train a deep convolutional network to enhance such images and restore the underlying clean image. However, the large number of parameters in deep models may require a large amount of data to train. For the low-light image enhancement task, paired data requires a short-exposure image and a long-exposure image to be taken with perfect alignment, which may not be achievable in every scene, thus limiting the choice of possible scenes for capturing paired data and increasing the effort of collecting training data. Also, data-driven solutions tend to replace the entire camera pipeline and cannot be easily integrated into existing pipelines. Therefore, we propose to handle the task with our two-stage pipeline, consisting of an imperfect denoise network and a bias-correction net, BC-UNet. Our method only requires noisy bursts of short-exposure images and unpaired long-exposure images, reducing the effort of collecting training data. Also, our method works in the raw domain and can easily be integrated into the existing camera pipeline. Our method achieves improvements comparable to other methods under the same settings.
Styles: APA, Harvard, Vancouver, ISO, etc.

Books on the topic "Low-light enhancement and denoising"

1

Hong, M. H. Laser applications in nanotechnology. Edited by A. V. Narlikar and Y. Y. Fu. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199533060.013.24.

Full text of the source
Abstract:
This article discusses a variety of laser applications in nanotechnology. The laser has proven to be a mature and reliable manufacturing tool, with applications in modern industries ranging from surface cleaning to thin-film deposition. Laser nanoengineering has several advantages over electron-beam and focused ion beam processing. For example, it is a low-cost, high-speed process in air, vacuum, or chemical environments and also offers flexible integration control. This article considers laser nanotechnology in the following areas: pulsed laser ablation for nanomaterials synthesis; laser nanoprocessing to make nanobumps for disk-media nanotribology and to anneal ultrashort PN junctions; surface nanopatterning with near-field and light-enhancement effects; and large-area parallel laser nanopatterning by laser interference lithography and laser irradiation through a microlens array. Based on these applications, the article argues that the laser will continue to be one of the most promising nanoengineering tools in next-generation manufacturing.
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Low-light enhancement and denoising"

1

Li, Mading, Jiaying Liu, Wenhan Yang, and Zongming Guo. "Joint Denoising and Enhancement for Low-Light Images via Retinex Model." In Communications in Computer and Information Science, 91–99. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8108-8_9.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Vogt, Carson, Geng Lyu, and Kartic Subr. "Lightless Fields: Enhancement and Denoising of Light-Deficient Light Fields." In Advances in Visual Computing, 383–96. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64556-4_30.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Haodian, Yang Wang, Yang Cao, and Zheng-Jun Zha. "Fusion-Based Low-Light Image Enhancement." In MultiMedia Modeling, 121–33. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_10.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Fotiadou, Konstantina, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Low Light Image Enhancement via Sparse Representations." In Lecture Notes in Computer Science, 84–93. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11758-4_10.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Kavya, Avvaru Greeshma, Uruguti Aparna, and Pallikonda Sarah Suhasini. "Enhancement of Low-Light Images Using CNN." In Emerging Research in Computing, Information, Communication and Applications, 1–9. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_1.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Song, Juan, Liang Zhang, Peiyi Shen, Xilu Peng, and Guangming Zhu. "Single Low-Light Image Enhancement Using Luminance Map." In Communications in Computer and Information Science, 101–10. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3005-5_9.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Tu, Juanjuan, Zongliang Gan, and Feng Liu. "Retinex Based Flicker-Free Low-Light Video Enhancement." In Pattern Recognition and Computer Vision, 367–79. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31723-2_31.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Nie, Xixi, Zilong Song, Bing Zhou, and Yating Wei. "LESN: Low-Light Image Enhancement via Siamese Network." In Lecture Notes in Electrical Engineering, 183–94. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6963-7_17.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Panwar, Moomal, and Sanjay B. C. Gaur. "Inception-Based CNN for Low-Light Image Enhancement." In Computational Vision and Bio-Inspired Computing, 533–45. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9573-5_39.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Priyadarshini, R., Arvind Bharani, E. Rahimankhan, and N. Rajendran. "Low-Light Image Enhancement Using Deep Convolutional Network." In Innovative Data Communication Technologies and Application, 695–705. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9651-3_57.

Full text source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Low-light enhancement and denoising"

1

Feng, Hansen, Lizhi Wang, Yuzhi Wang, and Hua Huang. "Learnability Enhancement for Low-light Raw Denoising." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548186.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Guo, Yu, Yuxu Lu, Meifang Yang, and Ryan Wen Liu. "Low-light Image Enhancement with Deep Blind Denoising." In ICMLC 2020: 2020 12th International Conference on Machine Learning and Computing. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383972.3384022.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Gao, Yin, Chao Yan, Huixiong Zeng, Qiming Li, and Jun Li. "Adaptive Low-Light Image Enhancement with Decomposition Denoising." In 2022 7th International Conference on Robotics and Automation Engineering (ICRAE). IEEE, 2022. http://dx.doi.org/10.1109/icrae56463.2022.10056212.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Lim, Jaemoon, Jin-Hwan Kim, Jae-Young Sim, and Chang-Su Kim. "Robust contrast enhancement of noisy low-light images: Denoising-enhancement-completion." In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351583.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Song, Yuda, Yunfang Zhu, and Xin Du. "Automatical Enhancement and Denoising of Extremely Low-light Images." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412195.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Yin, Wenshuai, Xiangbo Lin, and Yi Sun. "A novel framework for low-light colour image enhancement and denoising." In 2011 3rd International Conference on Awareness Science and Technology (iCAST). IEEE, 2011. http://dx.doi.org/10.1109/icawst.2011.6163088.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Yu, Long, Haonan Su, and Cheolkon Jung. "Joint Enhancement And Denoising of Low Light Images Via JND Transform." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053027.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Jikun, Weixiang Liang, Xianbo Wang, and Zhixin Yang. "An Image Denoising and Enhancement Approach for Dynamic Low-light Environment." In 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). IEEE, 2022. http://dx.doi.org/10.1109/ipec54454.2022.9777630.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Dewei, Zongxin Yang, and Yi Yang. "Pyramid Diffusion Models for Low-light Image Enhancement." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/199.

Full text source
Abstract:
Recovering noise-covered details from low-light images is challenging, and the results given by previous methods leave room for improvement. Recent diffusion models show realistic and detailed image generation through a sequence of denoising refinements and motivate us to introduce them to low-light image enhancement for recovering realistic details. However, we found two problems when doing this, i.e., 1) diffusion models keep constant resolution in one reverse process, which limits the speed; 2) diffusion models sometimes result in global degradation (e.g., RGB shift). To address the above problems, this paper proposes a Pyramid Diffusion model (PyDiff) for low-light image enhancement. PyDiff uses a novel pyramid diffusion method to perform sampling in a pyramid resolution style (i.e., progressively increasing resolution in one reverse process). Pyramid diffusion makes PyDiff much faster than vanilla diffusion models and introduces no performance degradation. Furthermore, PyDiff uses a global corrector to alleviate the global degradation that may occur in the reverse process, significantly improving the performance and making the training of diffusion models easier with little additional computational consumption. Extensive experiments on popular benchmarks show that PyDiff achieves superior performance and efficiency. Moreover, PyDiff can generalize well to unseen noise and illumination distributions. Code and supplementary materials are available at https://github.com/limuloo/PyDIff.git.
APA, Harvard, Vancouver, ISO, and other styles
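The pyramid sampling idea summarized in the PyDiff abstract above, i.e. running a single reverse diffusion process while progressively increasing resolution instead of holding it constant, can be sketched as a toy loop. This is an illustrative sketch only, not the authors' implementation: the step count, the `upscale_at` schedule, and the `denoise_step` placeholder (standing in for a learned denoiser) are all assumptions.

```python
import numpy as np

def upsample(x, factor=2):
    # Nearest-neighbour upsampling to double the spatial resolution.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def denoise_step(x, t):
    # Placeholder for a learned denoiser epsilon_theta(x, t);
    # here it simply shrinks the sample toward zero.
    return x * 0.9

def pyramid_reverse_process(shape=(8, 8), steps=12, upscale_at=(8, 4)):
    """Run one reverse diffusion pass, doubling the resolution at the
    scheduled steps instead of keeping it constant throughout."""
    x = np.random.default_rng(0).standard_normal(shape)
    for t in range(steps, 0, -1):
        x = denoise_step(x, t)
        if t in upscale_at:   # pyramid stage boundary
            x = upsample(x)   # continue the same chain at 2x resolution
    return x

sample = pyramid_reverse_process()
print(sample.shape)  # starts at 8x8, doubled twice -> (32, 32)
```

Because early steps run at low resolution, most of the denoising work is cheap, which is the source of the speedup the abstract claims over constant-resolution sampling.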
10

Li, Lin, Ronggang Wang, Wenmin Wang, and Wen Gao. "A low-light image enhancement method for both denoising and contrast enlarging." In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351501.

Full text source
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic "Low-light enhancement and denoising"

1

Birkmire, Robert, Juejun Hu, and Kathleen Richardson. Beyond the Lambertian limit: Novel low-symmetry gratings for ultimate light trapping enhancement in next-generation photovoltaics. Office of Scientific and Technical Information (OSTI), May 2016. http://dx.doi.org/10.2172/1419008.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
