Journal articles on the topic "Low-light images"

To see the other types of publications on this topic, follow the link: Low-light images.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Low-light images".

Next to every article in the list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if the relevant details are provided in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Patil, Akshay, Tejas Chaudhari, Ketan Deo, Kalpesh Sonawane, and Rupali Bora. "Low Light Image Enhancement for Dark Images." International Journal of Data Science and Analysis 6, no. 4 (2020): 99. http://dx.doi.org/10.11648/j.ijdsa.20200604.11.

2

Hu, Zhe, Sunghyun Cho, Jue Wang, and Ming-Hsuan Yang. "Deblurring Low-Light Images with Light Streaks." IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 10 (October 1, 2018): 2329–41. http://dx.doi.org/10.1109/tpami.2017.2768365.

3

Yang, Yi, Zhengguo Li, and Shiqian Wu. "Low-Light Image Brightening via Fusing Additional Virtual Images." Sensors 20, no. 16 (August 17, 2020): 4614. http://dx.doi.org/10.3390/s20164614.

Abstract:
Capturing high-quality images via mobile devices in low-light or backlighting conditions is very challenging. In this paper, a new, single image brightening algorithm is proposed to enhance an image captured in low-light conditions. Two virtual images with larger exposure times are generated to increase brightness and enhance fine details of the underexposed regions. In order to reduce the brightness change, the virtual images are generated via intensity mapping functions (IMFs) which are computed using available camera response functions (CRFs). To avoid possible color distortion in the virtual image due to one-to-many mapping, a least square minimization problem is formulated to determine brightening factors for all pixels in the underexposed regions. In addition, an edge-preserving smoothing technique is adopted to avoid noise in the underexposed regions from being amplified in the virtual images. The final brightened image is obtained by fusing the original image and two virtual images via a gradient domain guided image filtering (GGIF) based multiscale exposure fusion (MEF) with properly defined weights for all the images. Experimental results show that the relative brightness and color are preserved better by the proposed algorithm. The details in bright regions are also preserved well in the final image. The proposed algorithm is expected to be useful for computational photography on smart phones.
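The brightening-by-fusion idea in the abstract above can be sketched in a few lines. This is a simplified stand-in, not the authors' method: the paper derives its virtual exposures from camera-response-based intensity mapping functions and fuses them with GGIF-based multiscale exposure fusion, whereas the gamma-style mapping and Gaussian well-exposedness weights below are illustrative assumptions.

```python
import numpy as np

def virtual_exposure(img, gain):
    """Simulate a longer-exposure "virtual image" with a gamma-style
    intensity mapping (a stand-in for the paper's CRF-derived IMFs)."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gain)

def fuse(images, sigma=0.2):
    """Per-pixel weighted fusion favouring well-exposed values (weights
    peak at 0.5), a crude proxy for GGIF-based multiscale exposure fusion."""
    stack = np.stack(images)                        # shape (n, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # normalise over the n images
    return (weights * stack).sum(axis=0)

low = np.full((4, 4), 0.1)            # an underexposed grayscale patch in [0, 1]
virtuals = [virtual_exposure(low, g) for g in (2.0, 4.0)]
brightened = fuse([low] + virtuals)   # brighter than the input, still in [0, 1]
```

Because the weights peak at mid-gray, the fused result leans on the brightened virtual images in underexposed regions while staying within the valid intensity range.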
4

Feng, Wei, Gui-ming Wu, Da-xing Zhao, and Hong-di Liu. "Multi images fusion Retinex for low light image enhancement." Optics and Precision Engineering 28, no. 3 (2020): 736–44. http://dx.doi.org/10.3788/ope.20202803.0736.

5

Lee, Hosang. "Successive Low-Light Image Enhancement Using an Image-Adaptive Mask." Symmetry 14, no. 6 (June 6, 2022): 1165. http://dx.doi.org/10.3390/sym14061165.

Abstract:
Low-light images are captured in dark environments or where illumination is insufficient. As a result, they have low intensity values and dimmed features, making it difficult to apply computer vision or image recognition software to them directly. Therefore, an image improvement procedure is needed before computer vision processing can be used on low-light images. There have been many studies on how to enhance low-light images, but some existing methods create artifacts and distortion in the resulting images. To improve low-light images, their contrast should be stretched naturally according to their features. This paper proposes a low-light image enhancement method that uses an image-adaptive mask composed of an image-adaptive ellipse. With this mask, the low-light regions of the image are stretched and the bright regions are enhanced in a way that appears natural. Moreover, images enhanced by the proposed method are color balanced, since the image-adaptive mask has a color compensation effect. As a result, the improved image better reflects the subject of the scene, such as a sunset, and appears natural. However, when low-light images are stretched, noise is also amplified, causing parts of the enhanced image to look dim and hazy. To tackle this issue, the paper also proposes guided image filtering based on triple terms for the image-adaptive value. Images enhanced by the proposed method look natural and are objectively superior to those produced by other state-of-the-art methods.
6

Xu, Xin, Shiqin Wang, Zheng Wang, Xiaolong Zhang, and Ruimin Hu. "Exploring Image Enhancement for Salient Object Detection in Low Light Images." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–19. http://dx.doi.org/10.1145/3414839.

Abstract:
Low-light images captured in a non-uniform illumination environment are usually degraded with scene depth and the corresponding environment lights. This degradation results in severe loss of object information in the degraded image, which makes salient object detection more challenging due to low contrast and the influence of artificial light. However, existing salient object detection models are developed under the assumption that images are captured in sufficiently bright environments, which is impractical in real-world scenarios. In this work, we propose an image enhancement approach to facilitate salient object detection in low-light images. The proposed model directly embeds the physical lighting model into a deep neural network to describe the degradation of low-light images, in which the environment light is treated as a point-wise variate that changes with local content. Moreover, a Non-Local-Block Layer is utilized to capture the difference of local content of an object against its neighboring regions. For quantitative evaluation, we construct a low-light image dataset with pixel-level human-labeled ground-truth annotations and report promising results on four public datasets and our benchmark dataset.
7

Huang, Haofeng, Wenhan Yang, Yueyu Hu, Jiaying Liu, and Ling-Yu Duan. "Towards Low Light Enhancement With RAW Images." IEEE Transactions on Image Processing 31 (2022): 1391–405. http://dx.doi.org/10.1109/tip.2022.3140610.

8

Wang, Yufei, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, and Alex Kot. "Low-Light Image Enhancement with Normalizing Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2604–12. http://dx.doi.org/10.1609/aaai.v36i3.20162.

Abstract:
Enhancing low-light images to normally exposed ones is highly ill-posed: the mapping between them is one-to-many. Previous works based on pixel-wise reconstruction losses and deterministic processes fail to capture the complex conditional distribution of normally exposed images, which results in improper brightness, residual noise, and artifacts. In this paper, we model this one-to-many relationship with a normalizing flow: an invertible network that takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution. In this way, the conditional distribution of normally exposed images can be well modeled, and the enhancement process, i.e., the other inference direction of the invertible network, is equivalent to being constrained during training by a loss function that better describes the manifold structure of natural images. Experimental results on existing benchmark datasets show that our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise, fewer artifacts, and richer colors.
9

Matsui, Sosuke, Takahiro Okabe, Mihoko Shimano, and Yoichi Sato. "Image Enhancement of Low-light Scenes with Near-infrared Flash Images." IPSJ Transactions on Computer Vision and Applications 2 (2010): 215–23. http://dx.doi.org/10.2197/ipsjtcva.2.215.

10

Cao, Shuning, Yi Chang, Shengqi Xu, Houzhang Fang, and Luxin Yan. "Nonlinear Deblurring for Low-Light Saturated Image." Sensors 23, no. 8 (April 7, 2023): 3784. http://dx.doi.org/10.3390/s23083784.

Abstract:
Single-image deblurring has achieved significant progress for natural daytime images. Saturation is a common phenomenon in blurry images taken under low-light conditions with long exposure times. Conventional linear deblurring methods usually handle natural blurry images well but produce severe ringing artifacts when recovering low-light saturated blurry images. To solve this problem, we formulate saturation deblurring as a nonlinear model in which all saturated and unsaturated pixels are modeled adaptively. Specifically, we introduce a nonlinear function into the convolution operator to accommodate saturation in the presence of blurring. The proposed method has two advantages over previous methods. On the one hand, it restores natural images with the same high quality as conventional deblurring methods, while also reducing estimation errors in saturated areas and suppressing ringing artifacts. On the other hand, compared with recent saturation-based deblurring methods, it captures the formation of unsaturated and saturated degradations directly rather than through cumbersome and error-prone detection steps. Notably, this nonlinear degradation model can be naturally formulated in a maximum a posteriori (MAP) framework and efficiently decoupled into several solvable sub-problems via the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real-world images demonstrate that the proposed deblurring algorithm outperforms state-of-the-art low-light saturation-based deblurring methods.
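The nonlinear observation model described in the abstract above, a clipping function composed with the blur convolution so that saturated pixels no longer obey the linear model, can be illustrated as follows. This is a generic sketch only: the ADMM solver and the adaptive modeling of saturated pixels are omitted, and the box kernel and clipping level are assumptions.

```python
import numpy as np

def blur(img, kernel):
    """'Same'-size 2-D convolution: the linear blur operator."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def saturated_observation(latent, kernel, clip_at=1.0):
    """Nonlinear forward model: the sensor clips the blurred irradiance,
    so saturated pixels stop obeying the linear convolution model."""
    return np.minimum(blur(latent, kernel), clip_at)

latent = np.zeros((5, 5))
latent[2, 2] = 18.0                  # a light streak far above the sensor range
kernel = np.ones((3, 3)) / 9.0       # illustrative box blur kernel
observed = saturated_observation(latent, kernel)
```

Deblurring by inverting only the linear `blur` operator would treat the clipped pixels as valid measurements, which is exactly what produces the ringing artifacts the paper targets.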
11

Li, Jinfeng. "Low-light image enhancement with contrast regularization." Frontiers in Computing and Intelligent Systems 1, no. 3 (October 19, 2022): 25–28. http://dx.doi.org/10.54097/fcis.v1i3.2022.

Abstract:
Low-light image enhancement remains a challenging and unsettled problem: existing pipelines apply multiple rounds of sampling, which causes serious information degradation, and they use only clear images as positive samples to guide network training. Therefore, a multi-scale contrastive-learning low-light image enhancement network is proposed. First, the input module generates rich features from the image; these features are then fed into a multi-scale enhancement network with dense residual blocks, with both positive and negative samples guiding the training; finally, a refinement module enriches the image details. Experimental results show that this method can reduce noise and artifacts in low-light images while improving contrast and brightness, demonstrating its advantages.
12

Vishnu, Choundur. "Low Light Image Enhancement using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3463–72. http://dx.doi.org/10.22214/ijraset.2021.35787.

Abstract:
High-quality images are essential for many applications, yet not every image is captured with acceptable quality, since photographs are taken under widely varying lighting conditions. When an image is captured in low light, its pixel values fall into a low range, which visibly degrades image quality. Because the entire image appears dark, it is difficult to recognize objects or surfaces clearly. It is therefore important to improve the quality of low-light images. Low-light image enhancement is required in many computer vision tasks, such as object detection and scene understanding. Images captured in low light often suffer from low contrast and low brightness, which greatly increases the difficulty of subsequent high-level tasks. The proposed framework uses a convolutional neural network that accepts dim or dark images as input and produces bright images as output without disturbing the content of the image, making it easier to understand the captured scene.
13

Zhou, Chu, Minggui Teng, Youwei Lyu, Si Li, Chao Xu, and Boxin Shi. "Polarization-Aware Low-Light Image Enhancement." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3742–50. http://dx.doi.org/10.1609/aaai.v37i3.25486.

Abstract:
Polarization-based vision algorithms have found uses in various applications, since polarization provides additional physical constraints. However, in low-light conditions their performance is severely degraded, because the captured polarized images can be noisy, leading to noticeable degradation in the degree of polarization (DoP) and the angle of polarization (AoP). Existing low-light image enhancement methods cannot handle polarized images well, since they operate in the intensity domain without effectively exploiting the information provided by polarization. In this paper, we propose a Stokes-domain enhancement pipeline along with a dual-branch neural network to handle the problem in a polarization-aware manner. Two application scenarios (reflection removal and shape from polarization) are presented to show how our enhancement improves their results.
14

Cagigal, Manuel P. "Object movement characterization from low-light-level images." Optical Engineering 33, no. 8 (August 1, 1994): 2810. http://dx.doi.org/10.1117/12.176520.

15

Bernroider, Gustav. "Processing biological images from very low light emissions." Journal of Bioluminescence and Chemiluminescence 9, no. 3 (May 1994): 127–33. http://dx.doi.org/10.1002/bio.1170090305.

16

Zhu, Jin, Weiqi Jin, Li Li, Zhenghao Han, and Xia Wang. "Fusion of the low-light-level visible and infrared images for night-vision context enhancement." Chinese Optics Letters 16, no. 1 (2018): 013501. http://dx.doi.org/10.3788/col201816.013501.

17

Al-Hashim, Mohammad Abid, and Zohair Al-Ameen. "Retinex-Based Multiphase Algorithm for Low-Light Image Enhancement." Traitement du Signal 37, no. 5 (November 25, 2020): 733–43. http://dx.doi.org/10.18280/ts.370505.

Abstract:
These days, digital images are one of the most widespread means of representing information. Still, many images are captured with a low-light effect for various unavoidable reasons. It can be difficult for humans and computer applications to perceive and extract valuable information from such images. Hence, the observed quality of low-light images should be improved for better analysis, understanding, and interpretation. Enhancing low-light images is challenging, since various factors, including brightness, contrast, and color, must be handled effectively to produce results of adequate quality. Therefore, a retinex-based multiphase algorithm is developed in this study: it computes the illumination image in a manner similar to the single-scale retinex algorithm, takes the logarithms of both the original and the illumination images, subtracts them using a modified approach, processes the result with a gamma-corrected sigmoid function, and finally applies a normalization function to produce the final result. The proposed algorithm is tested on natural low-light images, evaluated with specialized metrics, and compared against eight different sophisticated methods. The experimental outcomes reveal that the proposed algorithm delivers the best performance in terms of processing speed, perceived quality, and evaluation metrics.
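The multiphase pipeline listed in the abstract above (illumination estimate, log-difference, gamma-corrected sigmoid, normalization) follows a pattern that can be sketched generically. The box-filter illumination estimate and the gain/gamma values below are assumptions standing in for the paper's single-scale-retinex surround, modified subtraction, and tuned functions.

```python
import numpy as np

EPS = 1e-6

def box_blur(img, radius):
    """Box-filter illumination estimate (the paper uses a single-scale-
    retinex-style Gaussian surround instead)."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    size = 2 * radius + 1
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

def enhance(img, radius=2, gain=5.0, gamma=0.6):
    illumination = box_blur(img, radius)
    # log-difference of the original and illumination images
    reflectance = np.log(img + EPS) - np.log(illumination + EPS)
    # gamma-corrected sigmoid, then min-max normalisation
    sigmoid = 1.0 / (1.0 + np.exp(-gain * reflectance))
    squeezed = sigmoid ** gamma
    lo, hi = squeezed.min(), squeezed.max()
    return (squeezed - lo) / (hi - lo + EPS)

dark = np.linspace(0.02, 0.2, 16).reshape(4, 4)   # synthetic low-light patch
bright = enhance(dark)
```

Each phase maps directly onto one sentence of the abstract, which makes the pipeline easy to swap piece by piece when experimenting.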
18

Cho, Se Woon, Na Rae Baek, Ja Hyung Koo, Muhammad Arsalan, and Kang Ryoung Park. "Semantic Segmentation With Low Light Images by Modified CycleGAN-Based Image Enhancement." IEEE Access 8 (2020): 93561–85. http://dx.doi.org/10.1109/access.2020.2994969.

19

Zhai, Guangtao, Wei Sun, Xiongkuo Min, and Jiantao Zhou. "Perceptual Quality Assessment of Low-light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457905.

Abstract:
Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions, such as structural damage, color shift, and noise, into the enhanced images. Despite the various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and the stack-based high dynamic range (HDR) image as references and to evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that the distortions introduced by low-light enhancement differ significantly from the distortions considered in traditional, well-studied IQA databases, and that current state-of-the-art FR IQA models are not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index that evaluates image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first comprehensive low-light image enhancement quality assessment study of its kind.
20

Li, Enyu, and Wei Zhang. "Smoke Image Segmentation Algorithm Suitable for Low-Light Scenes." Fire 6, no. 6 (May 25, 2023): 217. http://dx.doi.org/10.3390/fire6060217.

Abstract:
The real-time monitoring and analysis system based on video images has been implemented to detect fire accidents on site. While most segmentation methods can accurately segment smoke areas in bright and clear images, it becomes challenging to obtain high performance due to the low brightness and contrast of low-light smoke images. An image enhancement model cascaded with a semantic segmentation model was proposed to enhance the segmentation effect of low-light smoke images. The modified Cycle-Consistent Generative Adversarial Network (CycleGAN) was used to enhance the low-light images, making smoke features apparent and improving the detection ability of the subsequent segmentation model. The smoke segmentation model was based on Transformers and HRNet, where semantic features at different scales were fused in a dense form. The addition of attention modules of spatial dimension and channel dimension to the feature extraction units established the relationship mappings between pixels and features in the two-dimensional spatial directions, which improved the segmentation ability. Through the Foreground Feature Localization Module (FFLM), the discrimination between foreground and background features was increased, and the ability of the model to distinguish the thinner positions of smoke edges was improved. The enhanced segmentation method achieved a segmentation accuracy of 91.68% on the self-built dataset with synthetic low-light images and an overall detection time of 120.1 ms. This method can successfully meet the fire detection demands in low-light environments at night and lay a foundation for expanding the all-weather application of initial fire detection technology based on image analysis.
21

Yao, Zhuo. "Low-Light Image Enhancement and Target Detection Based on Deep Learning." Traitement du Signal 39, no. 4 (August 31, 2022): 1213–20. http://dx.doi.org/10.18280/ts.390413.

Abstract:
Most computer vision applications require input images to meet specific requirements. To complete different vision tasks, e.g., object detection, object recognition, and object retrieval, low-light images must be enhanced by different methods to achieve different processing effects. Existing image enhancement methods based on non-physical imaging models, and image generation methods based on deep learning, are not ideal for low-light image processing. To solve this problem, this paper explores low-light image enhancement and target detection based on deep learning. Firstly, a simplified expression was constructed for the optical imaging model of low-light images, and a haze-line approach was proposed for their color correction, which can effectively enhance low-light images based on the global background light and the medium transmission rate of the optical imaging model. Next, the network framework adopted by the proposed low-light image enhancement model was introduced in detail: the framework includes two deep domain adaptation modules that realize domain transformation and image enhancement, respectively, and the loss functions of the model were presented. To detect targets in the enhanced output image, a joint enhancement and target detection method was proposed for low-light images. The effectiveness of the constructed model was demonstrated through experiments.
22

Hao, Shijie, Xu Han, Yanrong Guo, and Meng Wang. "Decoupled Low-Light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–19. http://dx.doi.org/10.1145/3498341.

Abstract:
The visual quality of photographs taken under imperfect lightness conditions can be degenerated by multiple factors, e.g., low lightness, imaging noise, color distortion, and so on. Current low-light image enhancement models focus on the improvement of low lightness only, or simply deal with all the degeneration factors as a whole, therefore leading to sub-optimal results. In this article, we propose to decouple the enhancement model into two sequential stages. The first stage focuses on improving the scene visibility based on a pixel-wise non-linear mapping. The second stage focuses on improving the appearance fidelity by suppressing the rest degeneration factors. The decoupled model facilitates the enhancement in two aspects. On the one hand, the whole low-light enhancement can be divided into two easier subtasks. The first one only aims to enhance the visibility. It also helps to bridge the large intensity gap between the low-light and normal-light images. In this way, the second subtask can be described as the local appearance adjustment. On the other hand, since the parameter matrix learned from the first stage is aware of the lightness distribution and the scene structure, it can be incorporated into the second stage as the complementary information. In the experiments, our model demonstrates the state-of-the-art performance in both qualitative and quantitative comparisons, compared with other low-light image enhancement models. In addition, the ablation studies also validate the effectiveness of our model in multiple aspects, such as model structure and loss function.
23

Filin, A., A. Kopylov, and I. Gracheva. "A SINGLE IMAGE DEHAZING DATASET WITH LOW-LIGHT REAL-WORLD INDOOR IMAGES, DEPTH MAPS AND INFRARED IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W3-2023 (May 12, 2023): 53–57. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w3-2023-53-2023.

Abstract:
Abstract. Benchmarking haze removal methods and training related models requires appropriate datasets. The most objective assessments of dehazing quality come from reference metrics, i.e., those in which the reconstructed image is compared with a reference (ground-truth) haze-free image. Dehazing datasets consisting of pairs in which haze is artificially synthesized on ground-truth images are not well suited for assessing the quality of dehazing methods. Preparing a real-world environment to capture truthful pairs of hazy and haze-free images is difficult, so there are few image dehazing datasets that contain real images of both kinds. Researchers' attention is currently shifting to dehazing "more complex" images, including those obtained under insufficient illumination and in the presence of localized light sources. There are almost no datasets with such pairs of images, which makes objective assessment of image dehazing methods difficult. In this paper, we present an extended version of our previously proposed dataset of this kind, with more haze density levels and scene depths. It consists of images of 2 scenes at 4 lighting and 8 haze density levels, 64 frames in total. In addition to images in the visible spectrum, a depth map and a thermal image were captured for each frame. An experimental evaluation of state-of-the-art haze removal methods was carried out on the resulting dataset. The dataset is available for free download at https://data.mendeley.com/datasets/jjpcj7fy6t.
24

Gerace, Ivan, Luigi Bedini, Anna Tonazzini, and Paolo Gualtieri. "Edge-preserving restoration of low-light-level microscope images." Micron 26, no. 3 (January 1995): 195–99. http://dx.doi.org/10.1016/0968-4328(94)00057-w.

25

Nam, Se Hyun, Yu Hwan Kim, Jiho Choi, Seung Baek Hong, Muhammad Owais, and Kang Ryoung Park. "LAE-GAN-Based Face Image Restoration for Low-Light Age Estimation." Mathematics 9, no. 18 (September 19, 2021): 2329. http://dx.doi.org/10.3390/math9182329.

Abstract:
Age estimation is applicable in various fields, and among them, research on age estimation using human facial images, which are the easiest to acquire, is being actively conducted. Since the emergence of deep learning, studies on age estimation using various types of convolutional neural networks (CNN) have been conducted, and they have resulted in good performances, as clear images with high illumination were typically used in these studies. However, human facial images are typically captured in low-light environments. Age information can be lost in facial images captured in low-illumination environments, where noise and blur generated by the camera in the captured image reduce the age estimation performance. No study has yet been conducted on age estimation using facial images captured under low light. In order to overcome this problem, this study proposes a new generative adversarial network for low-light age estimation (LAE-GAN), which compensates for the brightness of human facial images captured in low-light environments, and a CNN-based age estimation method in which compensated images are input. When the experiment was conducted using the MORPH, AFAD, and FG-NET databases—which are open databases—the proposed method exhibited more accurate age estimation performance and brightness compensation in low-light images compared to state-of-the-art methods.
26

Liang, Hong, Ankang Yu, Mingwen Shao, and Yuru Tian. "Multi-Feature Guided Low-Light Image Enhancement." Applied Sciences 11, no. 11 (May 29, 2021): 5055. http://dx.doi.org/10.3390/app11115055.

Abstract:
Due to their low signal-to-noise ratio and low contrast, low-light images suffer from problems such as color distortion, low visibility, and accompanying noise, which reduce the accuracy of target detection or even cause targets to be missed entirely. However, recalibrating a dataset for this type of image raises costs or reduces model robustness. To solve this problem, we propose a low-light image enhancement model based on deep learning. In this paper, feature extraction is guided by an illumination map and a noise map, and a neural network is then trained to predict the local affine model coefficients in bilateral space. Through these methods, our network can effectively denoise and enhance images. We have conducted extensive experiments on the LOL dataset, and the results show that the model outperforms traditional image enhancement algorithms in both image quality and speed.
27

Navinprashath, R. R., Radhesh Bhat, Narendra Kumar Chepuri, Tom Korah Manalody, and Dipanjan Ghosh. "Real time enhancement of low light images for low cost embedded platforms." Electronic Imaging 2019, no. 9 (January 13, 2019): 361–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.9.imse-361.

28

Teku, Sandhya Kumari, S. Koteswara Rao, and I. Santhi Prabha. "Contrast Enhanced Low-light Visible and Infrared Image Fusion." Defence Science Journal 66, no. 3 (April 25, 2016): 266. http://dx.doi.org/10.14429/dsj.66.9340.

Abstract:
<p>Multi-modal image fusion objective is to combine complementary information obtained from multiple modalities into a single representation with increased reliability and interpretation. The images obtained from low-light visible cameras containing fine details of the scene and infrared cameras with high contrast details are the two modalities considered for fusion. In this paper, the low-light images with low target contrast are enhanced by using the phenomenon of stochastic resonance prior to fusion. Entropy is used as a measure to tune iteratively the coefficients using bistable system parameters. The combined advantage of multi scale decomposition approach and principal component analysis is utilized for the fusion of enhanced low-light visible and infrared images. Experimental results were carried out on different image datasets and analysis of the proposed methods were discussed. </p>
29

Zhu, Minfeng, Pingbo Pan, Wei Chen, and Yi Yang. "EEMEFN: Low-Light Image Enhancement via Edge-Enhanced Multi-Exposure Fusion Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 13106–13. http://dx.doi.org/10.1609/aaai.v34i07.7013.

Abstract:
This work focuses on extremely low-light image enhancement, which aims to improve image brightness and reveal hidden information in darkened areas. Recently, image enhancement approaches have yielded impressive progress. However, existing methods still suffer from three main problems: (1) low-light images are usually high-contrast, and existing methods may fail to recover image details in extremely dark or bright areas; (2) current methods cannot precisely correct the color of low-light images; (3) when object edges are unclear, the pixel-wise loss may treat pixels of different objects equally and produce blurry images. In this paper, we propose a two-stage method called the Edge-Enhanced Multi-Exposure Fusion Network (EEMEFN) to enhance extremely low-light images. In the first stage, we employ a multi-exposure fusion module to address the high-contrast and color-bias issues. We synthesize a set of images with different exposure times from a single image and construct an accurate normal-light image by combining well-exposed areas under different illumination conditions. Thus, it can produce realistic initial images with correct color from extremely noisy and low-light images. Secondly, we introduce an edge enhancement module to refine the initial images with the help of edge information. Therefore, our method can reconstruct high-quality images with sharp edges while minimizing the pixel-wise loss. Experiments on the See-in-the-Dark dataset indicate that our EEMEFN approach achieves state-of-the-art performance.
30

Tang, Yang, Shuang Song, Shengxi Gui, Weilun Chao, Chinmin Cheng, and Rongjun Qin. "Active and Low-Cost Hyperspectral Imaging for the Spectral Analysis of a Low-Light Environment." Sensors 23, no. 3 (January 28, 2023): 1437. http://dx.doi.org/10.3390/s23031437.

Abstract:
Hyperspectral imaging is capable of capturing information beyond conventional RGB cameras; therefore, several applications have been found, such as material identification and spectral analysis. However, similar to many camera systems, most existing hyperspectral cameras are still passive imaging systems. Such systems require an external light source to illuminate the objects in order to capture the spectral intensity. As a result, the collected images depend heavily on the environment lighting, and the imaging system cannot function in a dark or low-light environment. This work develops a prototype system for active hyperspectral imaging, which actively emits diverse single-wavelength light rays at a specific frequency when imaging. This concept has several advantages: first, using the controlled lighting, the magnitude of the individual bands is more standardized for extracting reflectance information; second, the system is capable of focusing on the desired spectral range by adjusting the number and type of LEDs; third, an active system could be mechanically easier to manufacture, since it does not require the complex band filters used in passive systems. Three lab experiments show that such a design is feasible and could yield informative hyperspectral images in low-light or dark environments: (1) spectral analysis: this system's hyperspectral images improve the discernibility of food ripening and stone types over RGB images; (2) interpretability: this system's hyperspectral images improve machine learning accuracy. Therefore, it can potentially benefit the academic and industry segments, such as geochemistry, earth science, subsurface energy, and mining.
31

Han, Yongcheng, Wenwen Zhang, and Weiji He. "Low-light image enhancement based on simulated multi-exposure fusion." Journal of Physics: Conference Series 2478, no. 6 (June 1, 2023): 062022. http://dx.doi.org/10.1088/1742-6596/2478/6/062022.

Abstract:
We propose an efficient and novel framework for low-light image enhancement, which aims to reveal information hidden in the darkness and improve overall brightness and local contrast. Inspired by the exposure fusion technique, we employ simulated multi-exposure image fusion to derive bright, natural and satisfactory results, even when images are taken under poor conditions such as insufficient or uneven illumination, back-lighting and limited exposure time. Specifically, we first design a novel method to generate synthesized images with varying exposure times from a single image, so that each image in these artificial sequences contains necessary information for the final desired enhanced result. We then introduce a flexible multi-exposure fusion framework to produce the fused images, comprising a weight map prediction module and a multi-scale fusion module. Extensive experiments show that our approach can achieve similar or better performance compared to several state-of-the-art methods.
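Simulated multi-exposure fusion, as described above, synthesizes frames with different virtual exposures from one input and fuses them by weight maps. A hedged sketch of the idea, using the beta/gamma camera response model fitted by Ying et al. as the brightness transform function (an assumed illustrative choice; this paper's synthesis and weight-prediction modules are learned and may differ) and simple Gaussian well-exposedness weights:

```python
import math

# BTF parameters from Ying et al.'s camera response model (illustrative choice).
A, B = -0.3293, 1.1258

def btf(p, k):
    """Brightness transform g(P, k): appearance of pixel p (in [0, 1])
    under a simulated exposure ratio k."""
    gamma = k ** A
    beta = math.exp(B * (1.0 - gamma))
    return min(1.0, beta * p ** gamma)

def fuse(pixels, ratios):
    """Fuse the simulated exposures with well-exposedness weights
    (a Gaussian centred on mid-gray 0.5)."""
    fused = []
    for p in pixels:
        num = den = 0.0
        for k in ratios:
            q = btf(p, k)
            w = math.exp(-((q - 0.5) ** 2) / (2 * 0.2 ** 2))
            num += w * q
            den += w
        fused.append(num / den)
    return fused

dark_row = [0.05, 0.10, 0.20]
print(fuse(dark_row, ratios=[1.0, 4.0, 8.0]))  # every value brightened
```

With ratio 1.0 the BTF is the identity, so including the original frame anchors the fusion and prevents over-brightening of already well-exposed pixels.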
32

Oh, Geunwoo, Jonghee Back, Jae-Pil Heo, and Bochang Moon. "Robust Image Denoising of No-Flash Images Guided by Consistent Flash Images." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1993–2001. http://dx.doi.org/10.1609/aaai.v37i2.25291.

Abstract:
Images taken in low light conditions typically contain distracting noise, and eliminating such noise is a crucial computer vision problem. Additional photos captured with a camera flash can guide an image denoiser to preserve edges since the flash images often contain fine details with reduced noise. Nonetheless, a denoiser can be misled by inconsistent flash images, which have image structures (e.g., edges) that do not exist in no-flash images. Unfortunately, this disparity frequently occurs as the flash/no-flash pairs are taken in different light conditions. We propose a learning-based technique that robustly fuses the image pairs while considering their inconsistency. Our framework infers consistent flash image patches locally, which have similar image structures with the ground truth, and denoises no-flash images using the inferred ones via a combination model. We demonstrate that our technique can produce more robust results than state-of-the-art methods, given various flash/no-flash pairs with inconsistent image structures. The source code is available at https://github.com/CGLab-GIST/RIDFnF.
33

Unnikrishnan, Hari, and Ruhan Bevi Azad. "Non-Local Retinex Based Dehazing and Low Light Enhancement of Images." Traitement du Signal 39, no. 3 (June 30, 2022): 879–92. http://dx.doi.org/10.18280/ts.390313.

Abstract:
A vast amount of research has recently been done on dehazing single images, with more work on daytime images than night-time images. Enhancement of low-light images is another area of extensive ongoing research. In this paper, a simple yet effective unified variational model is proposed for dehazing day and night images and for low-light enhancement, based on non-local global variational regularization. Given the relation between image dehazing and retinex, the haze removal process can minimize a variational retinex model. Estimating the ambient light and transmission maps is a key step in modern dehazing methods. Atmospheric light is not uniform and constant in hazy night images, as night scenes often contain multiple light sources. Lit and non-illuminated regions often have different colour characteristics, causing total-variation colour distortion and halo artifacts. To overcome this problem, our work directly implements a non-local retinex model based on the L2 norm that simulates the average activity of inhibitory and excitatory neuronal populations in the cortex. Exploiting this potential biological feasibility, the L2 norm in our work is split into two parts using a filtered-gradient approach: a reflection sparsity prior and a reflection gradient fidelity with respect to the observed image gradient. This unified framework of NLTV-Retinex and DCP efficiently performs low-light enhancement and dehazing of day and night images. We show results obtained using our method on daytime and night-time images and on a low-light image dataset. We compare our results quantitatively and qualitatively with recently reported methods, which demonstrates the effectiveness of our method.
34

Dong, Beibei, Zhenyu Wang, Zhihao Gu, and Jingjing Yang. "Private Face Image Generation Method Based on Deidentification in Low Light." Computational Intelligence and Neuroscience 2022 (March 17, 2022): 1–11. http://dx.doi.org/10.1155/2022/5818180.

Abstract:
Existing face image recognition algorithms can accurately identify underexposed facial images, but the abuse of face recognition technology can associate facial features with personally identifiable information, resulting in disclosure of users' privacy. This paper puts forward a method for private face image generation based on deidentification under low light. First of all, the light enhancement and attenuation networks are pretrained using the training set, and low-light face images in the test set are input into the light enhancement network for photo enhancement. Then the facial area is captured by the face interception network, the corresponding latent code is created through the latent code generation network, and feature disentanglement is performed. Tiny noise is added to the latent code by the face generation network to create deidentified face images, which are input into the light attenuation network to generate private facial images in a low-lighting style. Finally, experiments show that, compared with other state-of-the-art algorithms, this method is more successful in generating low-light private face images whose structure is most similar to the original photos. It protects users' privacy effectively by reducing the accuracy of the face recognition network, while also ensuring the practicability of the images.
35

Bhadouria, Aashi Singh. "Non-Uniform Illumination and Low Light Image Enhancement Techniques: An Exhaustive Study." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 804–11. http://dx.doi.org/10.22214/ijraset.2022.40385.

Abstract:
Captured images of various kinds are an important medium for representing meaningful information. It can be problematic for artificial intelligence, computer vision techniques and detection algorithms to extract valuable information from images with poor lighting. In this paper, a study of low-illumination and low-light night image enhancement techniques is presented, addressing reflectance, degradation, unsatisfactory lighting, noise, limited-range visibility, low contrast, color variations, illumination, color distortion, and reduced quality. Improving images captured in low-light conditions is a prerequisite in many fields, such as surveillance systems, road safety and inland waterway transport, object tracking, scientific research, detection systems, counting systems and navigation systems. Low-illumination or night image enhancement algorithms can advance the visual quality of low-light images, which can then be used in many practical artificial intelligence and computer vision applications. Methods for enhancing low-illumination images must preserve details and perform contrast improvement, color correction, noise reduction, image enhancement, restoration, etc. Keywords: Image Enhancement, Low Illumination, Reflectance, Low Contrast, Low Light Images, Night Time Images, Low Visibility Images.
36

Pandurangan, Durai, R. Saravana Kumar, Lukas Gebremariam, L. Arulmurugan, and S. Tamilselvan. "Combined Gray Level Transformation Technique for Low Light Color Image Enhancement." Journal of Computational and Theoretical Nanoscience 18, no. 4 (April 1, 2021): 1221–26. http://dx.doi.org/10.1166/jctn.2021.9392.

Abstract:
Insufficient and poor lighting conditions affect the quality of videos and images captured by camcorders. Low-quality images decrease the performance of computer vision systems in smart traffic, video surveillance, and other imaging applications. In this paper, a combined gray-level transformation technique is proposed to enhance poorly illuminated images. The technique combines log transformation, power-law transformation and adaptive histogram equalization to improve the low-light illumination image estimated using the HSI color model. Finally, the enhanced illumination image is blended with the original reflectance image to obtain the enhanced color image. This paper shows that the proposed algorithm enhances various weakly illuminated images better, and with reduced computation time, compared with previous image processing techniques.
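The log and power-law (gamma) transforms that this technique combines are classical point operations; a minimal sketch on 8-bit intensities (the gamma value 0.5 is an illustrative choice, not necessarily the paper's):

```python
import math

def log_transform(r):
    """Log transform s = c * log(1 + r), with c chosen so 255 maps to 255."""
    c = 255.0 / math.log(1.0 + 255.0)
    return c * math.log(1.0 + r)

def power_law(r, gamma=0.5):
    """Power-law (gamma) transform s = 255 * (r / 255) ** gamma."""
    return 255.0 * (r / 255.0) ** gamma

# Both curves boost dark pixels far more than bright ones:
for v in (30, 128, 220):
    print(v, round(log_transform(v)), round(power_law(v)))
```

Adaptive histogram equalization would then be applied on top of these globally stretched intensities to recover local contrast.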
37

WANG, Dianwei, Wang LIU, Jie FANG, and Zhijie XU. "Enhancement algorithm of low illumination image for UAV images inspired by biological vision." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 41, no. 1 (February 2023): 144–52. http://dx.doi.org/10.1051/jnwpu/20234110144.

Abstract:
To address the issues of low brightness, high noise and obscured details in UAV aerial low-light images, this paper proposes a dual-path UAV aerial low-light image enhancement algorithm inspired by the dual-path model in the human visual system. Firstly, a U-Net network based on residual units is constructed to decompose a UAV aerial low-light image into a structural path and a detail path. Then, an improved generative adversarial network (GAN) is proposed to enhance the structural path, with an edge enhancement module added to enhance the edge information of the image. Secondly, a noise suppression strategy is adopted in the detail path to reduce the influence of noise on the image. Finally, the outputs of the two paths are fused to obtain the enhanced image. The experimental results show that the proposed algorithm visually improves the brightness and detail information of the image, and its objective evaluation indices are better than those of the comparison algorithms. In addition, this paper also verifies the influence of the proposed algorithm on target detection under low-illumination conditions; the experimental results show that the proposed algorithm can effectively improve the performance of the target detection algorithm.
38

Kun, Yue, Gong Chunqing, and Gao Yuehui. "An Optimized LIME Scheme for Medical Low Light Level Image Enhancement." Computational Intelligence and Neuroscience 2022 (September 22, 2022): 1–8. http://dx.doi.org/10.1155/2022/9613936.

Abstract:
The role of medical imaging technology in the medical field is becoming more and more evident. Doctors can use it to understand a patient's condition more accurately and make precise judgments, diagnoses and treatment decisions, provided that the large number of blurred medical images can be made clear and easy to interpret. Inspired by the human visual system (HVS), we propose a simple and effective method of low-light image enhancement. In the proposed method, a sampler is first used to obtain the optimal exposure ratio for the camera response model. Then, a generator is used to synthesize dual-exposure images that are well exposed in the regions where the original image is underexposed. Next, the enhanced image is processed using part of the low-light image enhancement via illumination map estimation (LIME) algorithm, and the weight matrix of the two images is determined for fusion. After that, a combiner is used to obtain a synthesized image with all pixels well exposed, and finally, a postprocessing step is added to make the output image perform better. In the postprocessing step, the best gray range of the image is adjusted, and the image is denoised and recomposed using the block-matching 3-dimensional (BM3D) model. Experimental results show that the proposed method can enhance low-light images with fewer visual information distortions than several recent effective methods. When applied in the field of medical images, it helps medical workers accurately grasp the details and characteristics of medical images and analyze and judge them more conveniently.
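The LIME algorithm referenced above estimates an illumination map T and recovers the enhanced image as reflectance I / T. A minimal sketch of that core idea, using the per-pixel channel maximum as the initial T with gamma correction before division (this is only LIME's initialization, not its refined optimization pipeline, and the gamma and eps values here are assumptions):

```python
def illumination_map(rgb):
    """LIME-style initial illumination: per-pixel maximum of R, G, B."""
    return [max(px) for px in rgb]

def lime_enhance(rgb, gamma=0.8, eps=1e-3):
    """Recover reflectance R = I / T**gamma (pixel values assumed in [0, 1]);
    eps guards against division by zero in totally black pixels."""
    t_map = illumination_map(rgb)
    return [tuple(min(1.0, c / max(t, eps) ** gamma) for c in px)
            for px, t in zip(rgb, t_map)]

img = [(0.10, 0.05, 0.08),   # dark pixel: strongly brightened
       (0.90, 0.80, 0.85)]   # bright pixel: barely changed
print(lime_enhance(img))
```

Because the division amplifies noise in dark regions, pipelines such as the one above follow it with a denoiser like BM3D.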
39

Yuan, Nianzeng, Xingyun Zhao, Bangyong Sun, Wenjia Han, Jiahai Tan, Tao Duan, and Xiaomei Gao. "Low-Light Image Enhancement by Combining Transformer and Convolutional Neural Network." Mathematics 11, no. 7 (March 30, 2023): 1657. http://dx.doi.org/10.3390/math11071657.

Abstract:
In a low-light imaging environment, insufficient reflected light from objects often results in unsatisfactory images with degradations such as low contrast, noise artifacts, or color distortion. The captured low-light images usually lead to poor visual perception quality for color-deficient or normal observers. To address the above problems, we propose an end-to-end low-light image enhancement network that combines a transformer and a CNN (convolutional neural network) to restore normal-light images. Specifically, the proposed enhancement network is designed as a U-shaped structure with several functional fusion blocks. Each fusion block includes a transformer stem and a CNN stem, and the two stems collaborate to accurately extract local and global features. The transformer stem is responsible for efficiently learning global semantic information and capturing long-term dependencies, while the CNN stem is good at learning local features and focusing on detailed features. Thus, the proposed enhancement network can accurately capture the comprehensive semantic information of low-light images, which contributes significantly to recovering normal-light images. The proposed method is compared with current popular algorithms quantitatively and qualitatively. Subjectively, our method significantly improves image brightness, suppresses image noise, and maintains texture details and color information. For objective metrics such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), DeltaE, and NIQE, our method improves on the best competing values by 1.73 dB, 0.05, 0.043, 0.7939, and 0.6906, respectively. The experimental results show that our proposed method can effectively solve the problems of underexposure, noise interference, and color inconsistency in micro-optical images, and has practical application value.
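PSNR, the first objective metric cited above, is defined as 10·log10(MAX² / MSE) between a reference and a distorted image. A minimal self-contained sketch (the sample values are illustrative):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized grayscale images."""
    se, n = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            se += (a - b) ** 2
            n += 1
    if se == 0.0:
        return float("inf")   # identical images
    mse = se / n
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [[50, 60], [70, 80]]
out = [[52, 58], [71, 79]]
print(round(psnr(ref, out), 2))  # 44.15
```

So the paper's reported gain of 1.73 dB means the mean squared error against the reference roughly dropped by a factor of 10^0.173 ≈ 1.49.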
40

Omar, Amitesh, T. S. Kumar, B. Krishna Reddy, Jayshreekar Pant, and Manoj Mahto. "First-Light Images from Low-Dispersion Spectrograph-Cum-Imager on 3.6m Devasthal Optical Telescope." Current Science 116, no. 9 (May 10, 2019): 1472. http://dx.doi.org/10.18520/cs/v116/i9/1472-1478.

41

Yang, Yong, Wenzhi Xu, Shuying Huang, and Weiguo Wan. "Low-Light Image Enhancement Network Based on Multi-Scale Feature Complementation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3214–21. http://dx.doi.org/10.1609/aaai.v37i3.25427.

Abstract:
Images captured in low-light environments have problems of insufficient brightness and low contrast, which will affect subsequent image processing tasks. Although most current enhancement methods can obtain high-contrast images, they still suffer from noise amplification and color distortion. To address these issues, this paper proposes a low-light image enhancement network based on multi-scale feature complementation (LIEN-MFC), which is a U-shaped encoder-decoder network supervised by multiple images of different scales. In the encoder, four feature extraction branches are constructed to extract features of low-light images at different scales. In the decoder, to ensure the integrity of the learned features at each scale, a feature supplementary fusion module (FSFM) is proposed to complement and integrate features from different branches of the encoder and decoder. In addition, a feature restoration module (FRM) and an image reconstruction module (IRM) are built in each branch to reconstruct the restored features and output enhanced images. To better train the network, a joint loss function is defined, in which a discriminative loss term is designed to ensure that the enhanced results better meet the visual properties of the human eye. Extensive experiments on benchmark datasets show that the proposed method outperforms some state-of-the-art methods subjectively and objectively.
42

Liu, Xiaoming, Yan Yang, Yuanhong Zhong, Dong Xiong, and Zhiyong Huang. "Super-Pixel Guided Low-Light Images Enhancement with Features Restoration." Sensors 22, no. 10 (May 11, 2022): 3667. http://dx.doi.org/10.3390/s22103667.

Abstract:
Dealing with low-light images is a challenging problem in the image processing field. A mature low-light enhancement technology will not only be conducive to human visual perception but will also lay a solid foundation for subsequent high-level tasks, such as target detection and image classification. To balance the visual effect of the image against its contribution to subsequent tasks, this paper proposes utilizing shallow Convolutional Neural Networks (CNNs) as a prior image-processing stage to restore the necessary image feature information, followed by super-pixel image segmentation to obtain image regions with similar colors and brightness, and finally an Attentive Neural Processes (ANPs) network to find a local enhancement function on each super-pixel to further restore features and details. Through extensive experiments on synthesized and real low-light images, our algorithm reaches 23.402, 0.920, and 2.2490 for Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Natural Image Quality Evaluator (NIQE), respectively. As demonstrated by experiments on image Scale-Invariant Feature Transform (SIFT) feature detection and subsequent target detection, our approach achieves excellent results in visual effect and image features.
43

Hsieh, Po-Wen, Pei-Chiang Shao, and Suh-Yuh Yang. "Adaptive Variational Model for Contrast Enhancement of Low-Light Images." SIAM Journal on Imaging Sciences 13, no. 1 (January 2020): 1–28. http://dx.doi.org/10.1137/19m1245499.

44

Zhao, L., Guojian Yang, and Wenhui Duan. "Manipulating stored images with phase imprinting at low light levels." Optics Letters 37, no. 14 (July 10, 2012): 2853. http://dx.doi.org/10.1364/ol.37.002853.

45

Maheswari, K., and Kadapa R. Charan. "Implementation of haze removal algorithm to enhance low light images." i-manager’s Journal on Image Processing 9, no. 2 (2022): 44. http://dx.doi.org/10.26634/jip.9.2.18796.

Abstract:
Images captured in foggy atmospheric conditions appear hazy, with visually degraded visibility and reduced quality. Pixel-based metrics do not guarantee visually clear output images. Such degraded images are then used as input for low-level computer vision tasks such as segmentation. To improve on this, we introduce a new end-to-end approach to image dehazing that preserves the visual quality of the generated images, and take one step further to explore using the network for semantic segmentation with U-Net. A U-Net model is built and used to further improve the quality of the output.
46

Yu, Xia, Lin Bo, and Chen Xin. "Low light combining multiscale deep learning networks and image enhancement algorithm." Современные инновации, системы и технологии - Modern Innovations, Systems and Technologies 2, no. 4 (November 28, 2022): 0215–32. http://dx.doi.org/10.47813/2782-2818-2022-2-4-0215-0232.

Abstract:
To address the lack of reference images for low-light enhancement tasks, and the problems of color distortion, texture loss, blurred details, and the difficulty of obtaining ground-truth images in existing algorithms, this paper proposes a multi-scale weighted-feature low-light image enhancement algorithm based on Retinex theory and an attention mechanism. The algorithm performs multi-scale feature extraction on low-light images through a feature extraction module based on the U-Net architecture, generating a high-dimensional multi-scale feature map, and establishes an attention mechanism module to highlight the feature information at different scales that benefits the enhanced image, yielding a weighted high-dimensional feature map. The final reflection estimation module uses Retinex theory to build a network model and generates the final enhanced image from the high-dimensional feature map. An end-to-end network architecture is designed, and a set of self-regularizing loss functions constrains the network model, removing the dependence on reference images and realizing unsupervised learning. The final experimental results show that the algorithm maintains rich image details and textures while enhancing the contrast and clarity of the image, has good visual effects, can effectively enhance low-light images, and greatly improves visual quality. Compared with other enhancement algorithms, the objective indicators PSNR and SSIM are improved.
47

Rasheed, Muhammad Tahir, Guiyu Guo, Daming Shi, Hufsa Khan, and Xiaochun Cheng. "An Empirical Study on Retinex Methods for Low-Light Image Enhancement." Remote Sensing 14, no. 18 (September 15, 2022): 4608. http://dx.doi.org/10.3390/rs14184608.

Abstract:
A key part of interpreting, visualizing, and monitoring the surface conditions of remote-sensing images is enhancing the quality of low-light images. It aims to produce higher contrast, noise-suppressed, and better quality images from the low-light version. Recently, Retinex theory-based enhancement methods have gained a lot of attention because of their robustness. In this study, Retinex-based low-light enhancement methods are compared to other state-of-the-art low-light enhancement methods to determine their generalization ability and computational costs. Different commonly used test datasets covering different content and lighting conditions are used to compare the robustness of Retinex-based methods and other low-light enhancement techniques. Different evaluation metrics are used to compare the results, and an average ranking system is suggested to rank the enhancement methods.
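The Retinex methods surveyed above share one decomposition: an image is the product of reflectance and illumination, so reflectance is recovered in the log domain as log(I) − log(blurred I). A 1-D sketch with a box-blur illumination estimate (the blur choice and radius are illustrative assumptions; practical single-scale Retinex uses a Gaussian surround):

```python
import math

def box_blur(signal, radius):
    """Crude illumination estimate: local mean with clamped window edges."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def retinex_1d(signal, radius=2):
    """Single-scale Retinex on a 1-D intensity profile:
    reflectance = log(intensity) - log(estimated illumination)."""
    illum = box_blur(signal, radius)
    return [math.log(s) - math.log(l) for s, l in zip(signal, illum)]

# A flat region yields zero reflectance regardless of absolute brightness,
# which is why Retinex output is largely illumination-invariant.
row = [10.0, 12.0, 11.0, 200.0, 210.0, 205.0]
refl = retinex_1d(row)
```

Multi-scale Retinex averages this output over several surround radii to trade off dynamic-range compression against halo artifacts.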
48

Shi, Chunhe, Chengdong Wu, and Yuan Gao. "Research on Image Adaptive Enhancement Algorithm under Low Light in License Plate Recognition System." Symmetry 12, no. 9 (September 20, 2020): 1552. http://dx.doi.org/10.3390/sym12091552.

Abstract:
Traffic checkpoints monitor and manage road traffic by photographing and recording motor vehicles. However, due to complex factors such as shooting angle, lighting conditions, and environmental background, the license plate recognition rate is not high enough. High light and low light under complex lighting conditions are symmetric problems; this paper analyzes and solves the low-light problem in detail and proposes an adaptive image enhancement algorithm for low-light conditions. The algorithm mainly includes four modules. The fast image classification module uses a depthwise separable convolutional neural network to classify low-light images into daytime and nighttime low-light images, greatly reducing computation while ensuring classification accuracy. The image enhancement module feeds the classified images into two different image enhancement algorithms, adopting a divide-and-conquer strategy; the image quality evaluation module adopts a weighted comprehensive evaluation index. The final experiment shows that the comprehensive evaluation indices are all greater than 0.83, which can improve subsequent recognition of vehicle faces and license plates.
49

Rahman, Ziaur, Muhammad Aamir, Yi-Fei Pu, Farhan Ullah, and Qiang Dai. "A Smart System for Low-Light Image Enhancement with Color Constancy and Detail Manipulation in Complex Light Environments." Symmetry 10, no. 12 (December 5, 2018): 718. http://dx.doi.org/10.3390/sym10120718.

Abstract:
Images are an important medium for representing meaningful information. It may be difficult for computer vision techniques and humans to extract valuable information from images with low illumination. Currently, the enhancement of low-quality images is a challenging task in the domain of image processing and computer graphics. Although there are many algorithms for image enhancement, existing techniques often produce defective results in the portions of the image with intense or normal illumination, and such techniques also inevitably degrade certain visual qualities of the image. A model used for image enhancement must perform the following tasks: preserving details, improving contrast, color correction, and noise suppression. In this paper, we propose a framework based on a camera response model and a weighted least squares strategy. First, the image exposure is adjusted using a brightness transformation to obtain the correct camera response model, and an illumination estimation approach is used to extract a ratio map. Then, the proposed model adjusts every pixel according to the calculated exposure map and Retinex theory. Additionally, a dehazing algorithm is used to remove haze and improve the contrast of the image. The color constancy parameters set the true colors for images of low to average quality. Finally, a detail enhancement step preserves naturalness and extracts more details to enhance the visual quality of the image. Experimental evidence and a comparison with several recent state-of-the-art algorithms demonstrate that our framework is effective and can efficiently enhance low-light images.
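The framework above includes a color constancy step. As one classic illustration of color constancy (shown here purely as an assumed example, not necessarily the authors' method), the gray-world assumption rescales each channel so that all channel means match the global mean intensity:

```python
def gray_world(rgb):
    """Gray-world color constancy: scale each channel so its mean matches
    the global mean intensity (pixel values assumed in [0, 1])."""
    n = len(rgb)
    means = [sum(px[c] for px in rgb) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(1.0, px[c] * gains[c]) for c in range(3)) for px in rgb]

cast = [(0.8, 0.4, 0.2), (0.6, 0.3, 0.1)]   # strong red cast
balanced = gray_world(cast)                  # channel means equalized
```

After balancing, the red gain falls below 1 and the blue gain rises above 1, which removes the warm cast that low-light indoor lighting typically introduces.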
50

Liang, Dong, Ling Li, Mingqiang Wei, Shuo Yang, Liyan Zhang, Wenhan Yang, Yun Du, and Huiyu Zhou. "Semantically Contrastive Learning for Low-Light Image Enhancement." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1555–63. http://dx.doi.org/10.1609/aaai.v36i2.20046.

Abstract:
Low-light image enhancement (LLE) remains challenging due to the prevailing low-contrast and weak-visibility problems of single RGB images. In this paper, we respond to an intriguing learning-related question: can leveraging both accessible unpaired over/underexposed images and high-level semantic guidance improve the performance of cutting-edge LLE models? Here, we propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE). Beyond existing LLE wisdom, it casts the image enhancement task as multi-task joint learning, where LLE is converted into three constraints of contrastive learning, semantic brightness consistency, and feature preservation to simultaneously ensure exposure, texture, and color consistency. SCL-LLE allows the LLE model to learn from unpaired positives (normal-light) and negatives (over/underexposed), and enables it to interact with the scene semantics to regularize the image enhancement network; the interaction of high-level semantic knowledge and the low-level signal prior has seldom been investigated in previous methods. Training on readily available open data, extensive experiments demonstrate that our method surpasses state-of-the-art LLE models over six independent cross-scene datasets. Moreover, SCL-LLE's potential to benefit downstream semantic segmentation under extremely dark conditions is discussed. Source Code: https://github.com/LingLIx/SCL-LLE.