Journal articles on the topic "IMAGE DEHAZING"

Consult the top 50 journal articles for research on the topic "IMAGE DEHAZING".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the metadata.

Browse journal articles from a wide range of disciplines and format your bibliography correctly.

1

Yeole, Aditya. "Satellite Image Dehazing." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 5184–92. http://dx.doi.org/10.22214/ijraset.2023.52728.

Abstract:
Images captured in haze, murkiness, and raw weather suffer from serious degradation. Dehazing a single image is a difficult problem. While already-deployed systems depend on high-quality images, some computer vision applications, such as self-driving cars and image restoration, typically have to work with input data of poor quality. This paper proposes a deep CNN dehazing model based on U-Net, dynamic U-Net, and Generative Adversarial Networks (CycleGANs). CycleGAN is a method that enables automatic training of image-to-image translation without paired examples. To train the network, we use the SIH dataset as the training set. Superior performance is accomplished with an appreciably small dataset, and the corresponding outcomes confirm the adaptability and strength of the model.
2

Xu, Jun, Zi-Xuan Chen, Hao Luo, and Zhe-Ming Lu. "An Efficient Dehazing Algorithm Based on the Fusion of Transformer and Convolutional Neural Network." Sensors 23, no. 1 (December 21, 2022): 43. http://dx.doi.org/10.3390/s23010043.

Abstract:
The purpose of image dehazing is to remove the interference from weather factors in degraded images and enhance the clarity and color saturation of images to maximize the restoration of useful features. Single image dehazing is one of the most important tasks in the field of image restoration. In recent years, due to the progress of deep learning, single image dehazing has made great progress. With the success of Transformer in advanced computer vision tasks, some research studies also began to apply Transformer to image dehazing tasks and obtained surprising results. However, both the deconvolution-neural-network-based dehazing algorithm and Transformer based dehazing algorithm magnify their advantages and disadvantages separately. Therefore, this paper proposes a novel Transformer–Convolution fusion dehazing network (TCFDN), which uses Transformer’s global modeling ability and convolutional neural network’s local modeling ability to improve the dehazing ability. In the Transformer–Convolution fusion dehazing network, the classic self-encoder structure is used. This paper proposes a Transformer–Convolution hybrid layer, which uses an adaptive fusion strategy to make full use of the Swin-Transformer and convolutional neural network to extract and reconstruct image features. On the basis of previous research, this layer further improves the ability of the network to remove haze. A series of contrast experiments and ablation experiments not only proved that the Transformer–Convolution fusion dehazing network proposed in this paper exceeded the more advanced dehazing algorithm, but also provided solid and powerful evidence for the basic theory on which it depends.
3

Ma, Shaojin, Weiguo Pan, Hongzhe Liu, Songyin Dai, Bingxin Xu, Cheng Xu, Xuewei Li, and Huaiguang Guan. "Image Dehazing Based on Improved Color Channel Transfer and Multiexposure Fusion." Advances in Multimedia 2023 (May 15, 2023): 1–10. http://dx.doi.org/10.1155/2023/8891239.

Abstract:
Image dehazing is one of the problems that urgently need to be solved in the field of computer vision. In recent years, more and more algorithms have been applied to image dehazing and have achieved good results. However, dehazed images still suffer from color distortion, disordered contrast and saturation, and other problems. To address these issues, this paper proposes an effective image dehazing method based on improved color channel transfer and multi-exposure image fusion. First, the image is preprocessed using a color channel transfer method based on k-means. Second, gamma correction is introduced on top of guided filtering to obtain a series of multi-exposure images, and these are fused into a dehazed image through a Laplacian pyramid fusion scheme based on local similarity with adaptive weights. Finally, contrast and saturation corrections are performed on the dehazed image. Experimental verification on synthetic and natural hazy images shows, from both subjective and objective aspects, that the proposed method is superior to existing dehazing algorithms.
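
As a side note, the multi-exposure step described in this abstract can be sketched in a few lines. The snippet below only illustrates gamma-generated exposures with a naive well-exposedness fusion; the gamma values and weights are assumptions, and the paper's guided filtering and Laplacian-pyramid fusion with adaptive local-similarity weights are not reproduced here.

import numpy as np

def gamma_exposures(img, gammas=(0.5, 0.8, 1.2, 2.0)):
    # img: float RGB array in [0, 1]; each gamma simulates one exposure level
    return [np.clip(img, 0.0, 1.0) ** g for g in gammas]

def fuse_exposures(stack, sigma=0.2):
    # Naive per-pixel fusion with well-exposedness weights (a stand-in for
    # the Laplacian-pyramid fusion used in the paper)
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * sigma ** 2)).mean(axis=2, keepdims=True)
               for e in stack]
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return np.sum(weights * np.stack(stack), axis=0)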
4

Wei, Jianchong, Yi Wu, Liang Chen, Kunping Yang, and Renbao Lian. "Zero-Shot Remote Sensing Image Dehazing Based on a Re-Degradation Haze Imaging Model." Remote Sensing 14, no. 22 (November 13, 2022): 5737. http://dx.doi.org/10.3390/rs14225737.

Abstract:
Image dehazing is crucial for improving the advanced applications on remote sensing (RS) images. However, collecting paired RS images to train the deep neural networks (DNNs) is scarcely available, and the synthetic datasets may suffer from domain-shift issues. In this paper, we propose a zero-shot RS image dehazing method based on a re-degradation haze imaging model, which directly restores the haze-free image from a single hazy image. Based on layer disentanglement, we design a dehazing framework consisting of three joint sub-modules to disentangle the hazy input image into three components: the atmospheric light, the transmission map, and the recovered haze-free image. We then generate a re-degraded hazy image by mixing up the hazy input image and the recovered haze-free image. By the proposed re-degradation haze imaging model, we theoretically demonstrate that the hazy input and the re-degraded hazy image follow a similar haze imaging model. This finding helps us to train the dehazing network in a zero-shot manner. The dehazing network is optimized to generate outputs that satisfy the relationship between the hazy input image and the re-degraded hazy image in the re-degradation haze imaging model. Therefore, given a hazy RS image, the dehazing network directly infers the haze-free image by minimizing a specific loss function. Using uniform hazy datasets, non-uniform hazy datasets, and real-world hazy images, we conducted comprehensive experiments to show that our method outperforms many state-of-the-art (SOTA) methods in processing uniform or slight/moderate non-uniform RS hazy images. In addition, evaluation on a high-level vision task (RS image road extraction) further demonstrates the effectiveness and promising performance of the proposed zero-shot dehazing method.
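
For orientation, the re-degradation idea can be written out under the standard atmospheric scattering model (the abstract does not give its exact equations, and the mixing weight m below is an assumed blending coefficient, not a value from the paper):

    I(x) = J(x) t(x) + A (1 - t(x))                       (hazy input)
    I_re(x) = m I(x) + (1 - m) J(x),  0 < m < 1           (re-degraded image)
            = J(x) t_re(x) + A (1 - t_re(x)),  with  t_re(x) = m t(x) + (1 - m)

so the re-degraded image obeys the same imaging model with a uniformly lifted transmission. This shared structure between the input and its re-degraded copy is the kind of constraint a zero-shot loss can exploit.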
5

Dong, Weida, Chunyan Wang, Hao Sun, Yunjie Teng, and Xiping Xu. "Multi-Scale Attention Feature Enhancement Network for Single Image Dehazing." Sensors 23, no. 19 (September 27, 2023): 8102. http://dx.doi.org/10.3390/s23198102.

Abstract:
Aiming to solve the problem of color distortion and loss of detail information in most dehazing algorithms, an end-to-end image dehazing network based on multi-scale feature enhancement is proposed. Firstly, the feature extraction enhancement module is used to capture the detailed information of hazy images and expand the receptive field. Secondly, the channel attention mechanism and pixel attention mechanism of the feature fusion enhancement module are used to dynamically adjust the weights of different channels and pixels. Thirdly, the context enhancement module is used to enhance the context semantic information, suppress redundant information, and obtain the haze density image with higher detail. Finally, our method removes haze, preserves image color, and ensures image details. The proposed method achieved a PSNR score of 33.74, SSIM scores of 0.9843 and LPIPS distance of 0.0040 on the SOTS-outdoor dataset. Compared with representative dehazing methods, it demonstrates better dehazing performance and proves the advantages of the proposed method on synthetic hazy images. Combined with dehazing experiments on real hazy images, the results show that our method can effectively improve dehazing performance while preserving more image details and achieving color fidelity.
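
PSNR, SSIM, and LPIPS figures such as those quoted above are full-reference metrics computed against the ground-truth image. A minimal sketch with scikit-image follows (LPIPS needs a learned network and is omitted); the channel_axis argument assumes a recent scikit-image release.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed, ground_truth):
    # Both images as float arrays in [0, 1], shape (H, W, 3)
    psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=1.0)
    ssim = structural_similarity(ground_truth, dehazed, data_range=1.0,
                                 channel_axis=-1)
    return psnr, ssim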
6

Sun, Wei, Jianli Wu, and Haroon Rashid. "Image Enhancement Algorithm of Foggy Sky with Sky based on Sky Segmentation." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012011. http://dx.doi.org/10.1088/1742-6596/2560/1/012011.

Abstract:
In recent years, image defogging has become a research hotspot in the field of digital image processing. Defogging enhancement can significantly improve the visual quality of foggy images and is also an important step for subsequent image processing. To overcome the limitations of traditional image dehazing, an image enhancement algorithm based on sky segmentation is proposed for foggy images containing sky. Firstly, based on K-means clustering and sky feature analysis, the sky region is recognized in the foggy image. Secondly, according to the pixels of the sky region, the rough transmittance is corrected, and the dehazed image is obtained by dark channel prior dehazing based on a guided filter. Finally, the dehazed image is equalized by bi-histogram equalization. The algorithm effectively avoids the color distortion and halo that the traditional dark channel prior dehazing algorithm causes in the sky area, and gives the restored image better global and local contrast.
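
For context, the dark channel prior step that this pipeline corrects in the sky region is sketched below in the standard form from He et al.; the patch size and omega value are conventional defaults, and the K-means sky segmentation, guided filtering, and bi-histogram equalization described in the abstract are not included.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel channel minimum followed by a local minimum filter
    return minimum_filter(img.min(axis=2), size=patch)

def rough_transmission(img, atmosphere, omega=0.95, patch=15):
    # t = 1 - omega * dark_channel(I / A); img in [0, 1], atmosphere is length-3
    normalized = img / np.maximum(np.asarray(atmosphere).reshape(1, 1, 3), 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)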
7

An, Shunmin, Xixia Huang, Linling Wang, Zhangjing Zheng, and Le Wang. "Unsupervised water scene dehazing network using multiple scattering model." PLOS ONE 16, no. 6 (June 28, 2021): e0253214. http://dx.doi.org/10.1371/journal.pone.0253214.

Abstract:
In water scenes, where hazy images are subject to multiple scattering and where ideal datasets are difficult to collect, many dehazing methods are not as effective as they could be. Therefore, an unsupervised water-scene dehazing network using an atmospheric multiple scattering model is proposed. Unlike previous image dehazing methods, our method combines an unsupervised neural network with the atmospheric multiple scattering model, addressing both the difficulty of acquiring ideal datasets and the effect of multiple scattering on the image. To embed the atmospheric multiple scattering model into the unsupervised dehazing network, the network uses four branches to estimate the scene radiance layer, the transmission map layer, the blur kernel layer, and the atmospheric light layer; a hazy image is then synthesized from the four output layers, the difference between the input hazy image and the synthesized hazy image is minimized, and the output scene radiance layer serves as the final dehazed image. In addition, we constructed unsupervised loss functions applicable to image dehazing from prior knowledge, i.e., a color attenuation energy loss and a dark channel loss. The method has a wide range of applications: since haze is thick and variable in marine, river, and lake scenes, it can be used to assist ship vision for target detection or forward road recognition in hazy conditions. Through extensive experiments on synthetic and real-world images, the proposed method is shown to recover the details, structure, and texture of water images better than five advanced dehazing methods.
8

Yang, Yuanbo, Qunbo Lv, Baoyu Zhu, Xuefu Sui, Yu Zhang, and Zheng Tan. "One-Sided Unsupervised Image Dehazing Network Based on Feature Fusion and Multi-Scale Skip Connection." Applied Sciences 12, no. 23 (December 2, 2022): 12366. http://dx.doi.org/10.3390/app122312366.

Abstract:
Haze and mist caused by air quality, weather, and other factors can reduce the clarity and contrast of images captured by cameras, which limits the applications of automatic driving, satellite remote sensing, traffic monitoring, etc. Therefore, the study of image dehazing is of great significance. Most existing unsupervised image-dehazing algorithms rely on a priori knowledge and simplified atmospheric scattering models, but the physical causes of haze in the real world are complex, resulting in inaccurate atmospheric scattering models that affect the dehazing effect. Unsupervised generative adversarial networks can be used for image-dehazing algorithm research; however, due to the information inequality between haze and haze-free images, the existing bi-directional mapping domain translation model often used in unsupervised generative adversarial networks is not suitable for image-dehazing tasks, and it also does not make good use of extracted features, which results in distortion, loss of image details, and poor retention of image features in the haze-free images. To address these problems, this paper proposes an end-to-end one-sided unsupervised image-dehazing network based on a generative adversarial network that directly learns the mapping between haze and haze-free images. The proposed feature-fusion module and multi-scale skip connection based on residual network consider the loss of feature information caused by convolution operation and the fusion of different scale features, and achieve adaptive fusion between low-level features and high-level features, to better preserve the features of the original image. Meanwhile, multiple loss functions are used to train the network, where the adversarial loss ensures that the network generates more realistic images and the contrastive loss ensures a meaningful one-sided mapping from the haze image to the haze-free image, resulting in haze-free images with good quantitative metrics and visual effects. The experiments demonstrate that, compared with existing dehazing algorithms, our method achieved better quantitative metrics and better visual effects on both synthetic haze image datasets and real-world haze image datasets.
9

Han, Wensheng, Hong Zhu, Chenghui Qi, Jingsi Li, and Dengyin Zhang. "High-Resolution Representations Network for Single Image Dehazing." Sensors 22, no. 6 (March 15, 2022): 2257. http://dx.doi.org/10.3390/s22062257.

Abstract:
Deep learning-based image dehazing methods have made great progress, but there are still many problems, such as inaccurate model parameter estimation and the difficulty of preserving spatial information in U-Net-based architectures. To address these problems, we propose an image dehazing network based on the high-resolution network, called DeHRNet. The high-resolution network was originally used for human pose estimation. In this paper, we make a simple yet effective modification to the network and apply it to image dehazing. We add a new stage to the original network to make it better suited for image dehazing. The newly added stage collects the feature map representations of all branches of the network by up-sampling to enhance the high-resolution representations, instead of only taking the feature maps of the high-resolution branches, which makes the restored clean images more natural. The final experimental results show that DeHRNet achieves superior performance over existing dehazing methods on synthesized and natural hazy images.
10

Tang, Yunqing, Yin Xiang, and Guangfeng Chen. "A Nighttime and Daytime Single-Image Dehazing Method." Applied Sciences 13, no. 1 (December 25, 2022): 255. http://dx.doi.org/10.3390/app13010255.

Abstract:
In this study, the requirements for image dehazing methods have been put forward, such as a wider range of scenarios in which the methods can be used, faster processing speeds and higher image quality. Recent dehazing methods can only unilaterally process daytime or nighttime hazy images. However, we propose an effective single-image technique, dubbed MF Dehazer, in order to solve the problems associated with nighttime and daytime dehazing. This technique was developed following an in-depth analysis of the properties of nighttime hazy images. We also propose a mixed-filter method in order to estimate ambient illumination. It is possible to obtain the color and light direction when estimating ambient illumination. Usually, after dehazing, nighttime images will cause light source diffusion problems. Thus, we propose a method to compensate for the high-light area transmission in order to improve the transmission of the light source areas. Then, through regularization, the images obtain better contrast. The experimental results show that MF Dehazer outperforms the recent dehazing methods. Additionally, it can obtain images with higher contrast and clarity while retaining the original color of the image.
11

Sun, Ziyi, Yunfeng Zhang, Fangxun Bao, Ping Wang, Xunxiang Yao, and Caiming Zhang. "SADnet: Semi-supervised Single Image Dehazing Method Based on an Attention Mechanism." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3478457.

Abstract:
Many real-life tasks such as military reconnaissance and traffic monitoring require high-quality images. However, images acquired in foggy or hazy weather pose obstacles to the implementation of these real-life tasks; consequently, image dehazing is an important research problem. To meet the requirements of practical applications, a single image dehazing algorithm has to be able to effectively process real-world hazy images with high computational efficiency. In this article, we present a fast and robust semi-supervised dehazing algorithm named SADnet for practical applications. SADnet utilizes both synthetic datasets and natural hazy images for training, so it has good generalizability for real-world hazy images. Furthermore, considering the uneven distribution of haze in the atmospheric environment, a Channel-Spatial Self-Attention (CSSA) mechanism is presented to enhance the representational power of the proposed SADnet. Extensive experimental results demonstrate that the presented approach achieves good dehazing performances and competitive running times compared with other state-of-the-art image dehazing algorithms.
12

Wei, Jianchong, Yan Cao, Kunping Yang, Liang Chen, and Yi Wu. "Self-Supervised Remote Sensing Image Dehazing Network Based on Zero-Shot Learning." Remote Sensing 15, no. 11 (May 24, 2023): 2732. http://dx.doi.org/10.3390/rs15112732.

Abstract:
Traditional dehazing approaches that rely on prior knowledge exhibit limited efficacy when confronted with the intricacies of real-world hazy environments. While learning-based dehazing techniques necessitate large-scale datasets for effective model training, the acquisition of these datasets is time-consuming and laborious, and the resulting models may encounter a domain shift when processing real-world hazy images. To overcome the limitations of prior-based and learning-based dehazing methods, we propose a self-supervised remote sensing (RS) image-dehazing network based on zero-shot learning, where the self-supervised process avoids dense dataset requirements and the learning-based structures refine the artifacts in extracted image priors caused by complex real-world environments. The proposed method has three stages. The first stage involves pre-processing the input hazy image by utilizing a prior-based dehazing module; in this study, we employed the widely recognized dark channel prior (DCP) to obtain atmospheric light, a transmission map, and the preliminary dehazed image. In the second stage, we devised two convolutional neural networks, known as RefineNets, dedicated to enhancing the transmission map and the initial dehazed image. In the final stage, we generated a hazy image using the atmospheric light, the refined transmission map, and the refined dehazed image by following the haze imaging model. The meticulously crafted loss function encourages cycle-consistency between the regenerated hazy image and the input hazy image, thereby facilitating a self-supervised dehazing model. During the inference phase, the model undergoes training in a zero-shot manner to yield the haze-free image. These thorough experiments validate the substantial improvement of our method over the prior-based dehazing module and the zero-shot training efficiency. Furthermore, assessments conducted on both uniform and non-uniform RS hazy images demonstrate the superiority of our proposed dehazing technique.
13

Stojanovic, Branka, Sasa Milicevic, and Srdjan Stankovic. "Improved dehazing techniques for maritime surveillance image enhancement." Serbian Journal of Electrical Engineering 15, no. 1 (2018): 53–70. http://dx.doi.org/10.2298/sjee1801053s.

Abstract:
The subjective quality of images (human interpretation) is very important in long-range imaging systems, where the presence of haze directly influences visibility of the scene, by reducing contrast and obscuring objects. Image enhancement techniques - dehazing techniques, are usually required in such systems. This paper compares the most significant single image dehazing approaches, proposes three additional enhancement steps in dehazing algorithms, compares performance of the algorithms and additional enhancement steps, and presents test results on maritime surveillance images, which represent one special case of long-range images.
14

Chow, Tsz-Yeung, King-Hung Lee, and Kwok-Leung Chan. "Detection of Targets in Road Scene Images Enhanced Using Conditional GAN-Based Dehazing Model." Applied Sciences 13, no. 9 (April 24, 2023): 5326. http://dx.doi.org/10.3390/app13095326.

Abstract:
Object detection is a classic image processing problem. For instance, in autonomous driving applications, targets such as cars and pedestrians are detected in the road scene video. Many image-based object detection methods utilizing hand-crafted features have been proposed. Recently, more research has adopted a deep learning approach. Object detectors rely on useful features, such as the object’s boundary, which are extracted via analyzing the image pixels. However, the images captured, for instance, in an outdoor environment, may be degraded due to bad weather such as haze and fog. One possible remedy is to recover the image radiance through the use of a pre-processing method such as image dehazing. We propose a dehazing model for image enhancement. The framework was based on the conditional generative adversarial network (cGAN). Our proposed model was improved with two modifications. Various image dehazing datasets were employed for comparative analysis. Our proposed model outperformed other hand-crafted and deep learning-based image dehazing methods by 2dB or more in PSNR. Moreover, we utilized the dehazed images for target detection using the object detector YOLO. In the experimentations, images were degraded by two weather conditions—rain and fog. We demonstrated that the objects detected in images enhanced by our proposed dehazing model were significantly improved over those detected in the degraded images.
15

Chung, Won Young, Sun Young Kim, and Chang Ho Kang. "Image Dehazing Using LiDAR Generated Grayscale Depth Prior." Sensors 22, no. 3 (February 5, 2022): 1199. http://dx.doi.org/10.3390/s22031199.

Abstract:
In this paper, a dehazing algorithm is proposed that uses a one-channel grayscale depth image generated from a 2D projection of a LiDAR point cloud. In depth-image-based dehazing, estimating the scattering coefficient is the most important step. Since scattering coefficients are used to estimate the transmission image for dehazing, the optimal coefficient for effective dehazing must be obtained depending on the level of haze. Thus, we estimated the optimal scattering coefficient for 100 synthetic haze images and examined the distribution between the optimal scattering coefficient and the dark channel. Through linear regression of this distribution, an equation relating scattering coefficients to dark channels was estimated, enabling an appropriate scattering coefficient to be chosen. The transmission image for dehazing is defined by a scattering coefficient and the grayscale depth image obtained from the LiDAR 2D projection. Finally, dehazing is performed with the atmospheric scattering model using the defined atmospheric light and transmission image. The proposed method was analyzed quantitatively and qualitatively through simulation and image quality metrics: qualitative analysis was conducted with YOLO v3, and quantitative analysis with MSE, PSNR, SSIM, etc. In the quantitative analysis, SSIM showed an average improvement of 24%.
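
The depth-to-transmission relationship used here follows the Beer-Lambert form of the atmospheric scattering model. A minimal sketch is given below; the regression coefficients tying the scattering coefficient to the dark channel are placeholders, not the values fitted in the paper.

import numpy as np

def scattering_coefficient(mean_dark_channel, a=1.0, b=0.1):
    # Assumed affine relation beta = a * dark_channel + b, standing in for the
    # paper's regression over 100 synthetic haze images
    return a * mean_dark_channel + b

def transmission_from_depth(depth_m, beta):
    # Beer-Lambert attenuation: t(x) = exp(-beta * d(x)), depth in meters
    return np.exp(-beta * depth_m)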
16

Xu, Xiaoxiao. "A Lightweight Dual-Branch Image Dehazing Network based on Associative Learning." Frontiers in Computing and Intelligent Systems 4, no. 2 (June 25, 2023): 27–30. http://dx.doi.org/10.54097/fcis.v4i2.9761.

Abstract:
Haze degrades the clarity, contrast, and details of images, resulting in a decrease in image quality. Image dehazing provides a means to obtain clearer and more accurate image information. Traditional methods for haze removal typically rely on manually designed features and models, limiting their performance in complex scenes. In recent years, the rapid advancement of deep learning has offered new insights into addressing the image dehazing problem. This paper proposes a lightweight dual-branch image dehazing network based on associative learning (LDANet). The network consists of a lightweight dehazing sub-network (LDSN) and a lightweight image enhancement subnetwork (LESN). To reduce computational and parameter complexity, the Tied Block Convolution (TBC) is employed, allowing for parameter sharing among components. Lastly, through associative learning, their distinctive features are mapped. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our approach in qualitative comparisons and quantitative evaluations compared to other state-of-the-art methods. Our method holds significant practical value for real-world image dehazing scenarios.
17

Zheng, Junbao, Chenke Xu, Wei Zhang, and Xu Yang. "Single Image Dehazing Using Global Illumination Compensation." Sensors 22, no. 11 (May 30, 2022): 4169. http://dx.doi.org/10.3390/s22114169.

Abstract:
The existing dehazing algorithms hardly consider background interference in the process of estimating the atmospheric illumination value and transmittance, resulting in an unsatisfactory dehazing effect. In order to solve the problem, this paper proposes a novel global illumination compensation-based image-dehazing algorithm (GIC). The GIC method compensates for the intensity of light scattered when light passes through atmospheric particles such as fog. Firstly, the illumination compensation was accomplished in the CIELab color space using the shading partition enhancement mechanism. Secondly, the atmospheric illumination values and transmittance parameters of these enhanced images were computed to improve the performance of atmospheric-scattering models, in order to reduce the interference of background signals. Eventually, the dehazing result maps with reduced background interference were obtained with the computed atmospheric-scattering model. The dehazing experiments were carried out on the public data set, and the dehazing results of the foggy image were compared with cutting-edge dehazing algorithms. The experimental results illustrate that the proposed GIC algorithm shows enhanced consistency with the real-imaging situation in estimating atmospheric illumination and transmittance. Compared with established image-dehazing methods, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) metrics of the proposed GIC method increased by 3.25 and 0.084, respectively.
18

Yao, Xin-Wei, Xinge Zhang, Yuchen Zhang, Weiwei Xing, and Xing Zhang. "Nighttime Image Dehazing Based on Point Light Sources." Applied Sciences 12, no. 20 (October 11, 2022): 10222. http://dx.doi.org/10.3390/app122010222.

Abstract:
Images routinely suffer from quality degradation in fog, mist, and other harsh weather conditions. Consequently, image dehazing is an essential and almost unavoidable pre-processing step in computer vision tasks. Image quality enhancement for special scenes, especially nighttime image dehazing, is in high demand for unmanned driving and nighttime surveillance, while the vast majority of past dehazing algorithms are applicable only to daytime conditions. Observation of a large number of nighttime images shows that artificial light sources take over the role played by the sun in daytime images and that the influence of a light source on pixels varies with distance. This paper proposes a novel nighttime dehazing method based on a light source influence matrix. A luminosity map expresses the photometric differences produced by the light sources in the picture. The light source influence matrix is then calculated to divide the image into near-light-source and non-near-light-source regions. Using these two regions, the two initial transmittances obtained from the dark channel prior are fused by edge-preserving filtering. For the atmospheric light term, the initial atmospheric light value is corrected by the light source influence matrix. Finally, the result is obtained by substituting these estimates into the atmospheric light model. Theoretical analysis and comparative experiments verify the performance of the proposed image dehazing method: in terms of PSNR, SSIM, and UQI, it improves by 9.4%, 11.2%, and 3.3% over the existing nighttime defogging method OSPF. In the future, we will extend this work from static picture dehazing to real-time video stream dehazing and explore its use in detection applications.
19

Filin, A., A. Kopylov, and I. Gracheva. "A SINGLE IMAGE DEHAZING DATASET WITH LOW-LIGHT REAL-WORLD INDOOR IMAGES, DEPTH MAPS AND INFRARED IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W3-2023 (May 12, 2023): 53–57. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w3-2023-53-2023.

Abstract:
Benchmarking haze removal methods and training related models require appropriate datasets. The most objective assessments of dehazing quality come from reference metrics, i.e., those in which the reconstructed image is compared with a haze-free reference (ground-truth) image. Dehazing datasets consisting of pairs where haze is artificially synthesized onto ground-truth images are not well suited for assessing the quality of dehazing methods. Preparing a real-world environment to capture truthful pairs of hazy and haze-free images is difficult, so there are few image dehazing datasets containing real images of both kinds. Researchers' attention is currently shifting to dehazing "more complex" images, including those obtained under insufficient illumination and in the presence of localized light sources. There are almost no datasets with such image pairs, which makes an objective assessment of image dehazing methods difficult. In this paper, we present an extended version of our previously proposed dataset of this kind, with more haze density levels and scene depths. It consists of images of 2 scenes at 4 lighting levels and 8 haze density levels, 64 frames in total. In addition to images in the visible spectrum, a depth map and a thermal image were captured for each frame. An experimental evaluation of state-of-the-art haze removal methods was carried out on the resulting dataset. The dataset is available for free download at https://data.mendeley.com/datasets/jjpcj7fy6t.
20

Ngo, Dat, Seungmin Lee, Gi-Dong Lee, and Bongsoon Kang. "Automating a Dehazing System by Self-Calibrating on Haze Conditions." Sensors 21, no. 19 (September 24, 2021): 6373. http://dx.doi.org/10.3390/s21196373.

Abstract:
Existing image dehazing algorithms typically rely on a two-stage procedure. The medium transmittance and lightness are estimated in the first stage, and the scene radiance is recovered in the second by applying the simplified Koschmieder model. However, this type of unconstrained dehazing is only applicable to hazy images, and leads to untoward artifacts in haze-free images. Moreover, no algorithm that can automatically detect the haze density and perform dehazing on an arbitrary image has been reported in the literature to date. Therefore, this paper presents an automated dehazing system capable of producing satisfactory results regardless of the presence of haze. In the proposed system, the input image simultaneously undergoes multiscale fusion-based dehazing and haze-density-estimating processes. A subsequent image blending step then judiciously combines the dehazed result with the original input based on the estimated haze density. Finally, tone remapping post-processes the blended result to satisfactorily restore the scene radiance quality. The self-calibration capability on haze conditions lies in using haze density estimate to jointly guide image blending and tone remapping processes. We performed extensive experiments to demonstrate the superiority of the proposed system over state-of-the-art benchmark methods.
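
The self-calibration idea, blending the dehazed output back toward the input according to the estimated haze density, can be summarised in a short sketch. The weighting and the gamma-style tone remap below are illustrative assumptions; the paper's multiscale fusion dehazer and haze-density estimator are assumed to exist elsewhere.

import numpy as np

def blend_by_haze_density(hazy, dehazed, density):
    # density in [0, 1]: 0 leaves a haze-free input untouched, 1 keeps the
    # fully dehazed result
    w = float(np.clip(density, 0.0, 1.0))
    blended = w * dehazed + (1.0 - w) * hazy
    gamma = 1.0 / (1.0 + 0.2 * w)     # mild tone remap, stronger when hazier
    return np.clip(blended, 0.0, 1.0) ** gamma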
21

Lee, Seungmin, Dat Ngo, and Bongsoon Kang. "Design of an FPGA-Based High-Quality Real-Time Autonomous Dehazing System." Remote Sensing 14, no. 8 (April 12, 2022): 1852. http://dx.doi.org/10.3390/rs14081852.

Abstract:
Image dehazing, as a common solution to weather-related degradation, holds great promise for photography, computer vision, and remote sensing applications. Diverse approaches have been proposed throughout decades of development, and deep-learning-based methods are currently predominant. Despite excellent performance, such computationally intensive methods as these recent advances amount to overkill, because image dehazing is solely a preprocessing step. In this paper, we utilize an autonomous image dehazing algorithm to analyze a non-deep dehazing approach. After that, we present a corresponding FPGA design for high-quality real-time vision systems. We also conduct extensive experiments to verify the efficacy of the proposed design across different facets. Finally, we introduce a method for synthesizing cloudy images (loosely referred to as hazy images) to facilitate future aerial surveillance research.
22

Zheng, Shunyuan, Jiamin Sun, Qinglin Liu, Yuankai Qi, and Jianen Yan. "Overwater Image Dehazing via Cycle-Consistent Generative Adversarial Network." Electronics 9, no. 11 (November 8, 2020): 1877. http://dx.doi.org/10.3390/electronics9111877.

Abstract:
In contrast to images taken of land scenes, images taken over water are more prone to degradation due to the influence of haze. However, existing image dehazing methods are mainly developed for land-scene images and perform poorly when applied to overwater images. To address this problem, we collect the first overwater image dehazing dataset and propose a Generative Adversarial Network (GAN)-based method called OverWater Image Dehazing GAN (OWI-DehazeGAN). Due to the difficulty of collecting paired hazy and clean images, the dataset contains unpaired hazy and clean images taken over water. The proposed OWI-DehazeGAN is composed of an encoder–decoder framework, supervised by a forward-backward translation consistency loss for self-supervision and a perceptual loss for content preservation. In addition to qualitative evaluation, we design an image quality assessment neural network to rank the dehazed images. Experimental results on both real and synthetic test data demonstrate that the proposed method performs superiorly against several state-of-the-art land dehazing methods. Compared with the state-of-the-art, our method gains a significant improvement of 1.94% for SSIM, 7.13% for PSNR and 4.00% for CIEDE2000 on the synthetic test dataset.
23

Dong, Yu, Yihao Liu, He Zhang, Shifeng Chen, and Yu Qiao. "FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10729–36. http://dx.doi.org/10.1609/aaai.v34i07.6701.

Abstract:
Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attracted much attention in research. Most existing learning-based dehazing methods are not fully end-to-end and still follow the traditional dehazing procedure: first estimate the medium transmission and the atmospheric light, then recover the haze-free image based on the atmospheric scattering model. However, in practice, due to the lack of priors and constraints, it is hard to estimate these intermediate parameters precisely. Inaccurate estimation further degrades dehazing performance, resulting in artifacts, color distortion and insufficient haze removal. To address this, we propose a fully end-to-end Generative Adversarial Network with Fusion-discriminator (FD-GAN) for image dehazing. With the proposed Fusion-discriminator, which takes frequency information as additional priors, our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts. Moreover, we synthesize a large-scale training dataset including various indoor and outdoor hazy images to boost the performance, and we show that for learning-based dehazing methods the performance is strongly influenced by the training data. Experiments show that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images, with more visually pleasing dehazed results.
24

Fahim, Masud An-Nur Islam, and Ho Yub Jung. "Single Image Dehazing Using End-to-End Deep-Dehaze Network." Electronics 10, no. 7 (March 30, 2021): 817. http://dx.doi.org/10.3390/electronics10070817.

Abstract:
Haze is a natural distortion to the real-life images due to the specific weather conditions. This distortion limits the perceptual fidelity, as well as information integrity, of a given image. Image dehazing for the observed images is a complicated task because of its ill-posed nature. This study offers the Deep-Dehaze network to retrieve haze-free images. Given an input, the proposed architecture uses four feature extraction modules to perform nonlinear feature extraction. We improvise the traditional U-Net architecture and the residual network to design our architecture. We also introduce the l1 spatial-edge loss function that enables our system to achieve better performance than that for the typical l1 and l2 loss function. Unlike other learning-based approaches, our network does not use any fusion connection for image dehazing. By training the image translation and dehazing network in an end-to-end manner, we can obtain better effects of both image translation and dehazing. Experimental results on synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms. We trained our network in an end-to-end manner and validated it on natural and synthetic hazy datasets. Our method shows favorable results on these datasets without any post-processing in contrast to the traditional approach.
25

Kumar, Vinaya, and Madhu Lata Nirmal. "Image Dehazing from Repeated Averaging Filters with Artificial Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 1240–49. http://dx.doi.org/10.22214/ijraset.2022.40484.

Abstract:
Image processing is the process of converting an image signal, either digital or analogue, into a physical image; the output can be an actual physical image or the attributes of an image. It also refers to the logical process of detecting, identifying, classifying, measuring, and evaluating the relevance of physical and cultural objects, their patterns, and their spatial relationships. This paper presents a method for estimating ambient light from a single foggy image using repeated averaging filters, which contributes to better radiance recovery. Existing dehazing algorithms are plagued by halo artefacts in the final output image. Here, an averaged channel is created from a single image by repeated averaging filters implemented with integral images and combined with an artificial neural network, which is a faster and more efficient way of reducing halo artefacts. In terms of quantitative and computational analysis, the suggested dehazing method achieves competitive results and outperforms many earlier state-of-the-art solutions. Index Terms: Image Dehazing; Averaging Filter; Integral Image; Gaussian Smoothing; Feed-Forward Neural Network.
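
The integral-image trick mentioned here makes the cost of an averaging filter independent of the window size, so the filter can be applied repeatedly to approximate Gaussian smoothing cheaply. A plain, unoptimised sketch for a single-channel image follows; the neural-network stage of the paper is not shown.

import numpy as np

def box_filter_integral(img, radius):
    # img: 2D float array; the integral image gives any window sum in O(1)
    h, w = img.shape
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        for x in range(w):
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            out[y, x] = window_sum / ((y1 - y0) * (x1 - x0))
    return out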
26

Liu, Shuai, Ying Li, Hang Li, Bin Wang, Yuanhao Wu, and Zhenduo Zhang. "Visual Image Dehazing Using Polarimetric Atmospheric Light Estimation." Applied Sciences 13, no. 19 (October 1, 2023): 10909. http://dx.doi.org/10.3390/app131910909.

Abstract:
The precision in evaluating global ambient light profoundly impacts the performance of image-dehazing technologies. Many approaches for quantifying atmospheric light intensity suffer from inaccuracies, leading to a decrease in dehazing effectiveness. To address this challenge, we introduce an approach for estimating atmospheric light based on the polarization contrast between the sky and the scene. By employing this method, we enhance the precision of atmospheric light estimation, enabling the more accurate identification of sky regions within the image. We adapt the original dark channel dehazing algorithm using this innovative technique, resulting in the development of a polarization-based dehazing imaging system employed in practical engineering applications. Experimental results reveal a significant enhancement in the accuracy of atmospheric light estimation within the dark channel dehazing algorithm. Consequently, this method enhances the overall perceptual quality of dehazed images. The proposed approach demonstrates a 28 percent improvement in SSIM and a contrast increase of over 20 percent when compared to the previous method. Additionally, the created dehazing system exhibits real-time processing capabilities.
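
The sky/scene polarization contrast that this method relies on is usually quantified by the degree of linear polarization computed from captures at several polarizer angles. The textbook Stokes-parameter formulation is sketched below; it is background for the abstract, not the authors' exact pipeline.

import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    # Intensity images taken through a polarizer at 0/45/90/135 degrees
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-6)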
27

Fattal, Raanan. "Single image dehazing." ACM Transactions on Graphics 27, no. 3 (August 2008): 1–9. http://dx.doi.org/10.1145/1360612.1360671.

28

Yoon, Sungan, and Jeongho Cho. "Deep Multimodal Detection in Reduced Visibility Using Thermal Depth Estimation for Autonomous Driving." Sensors 22, no. 14 (July 6, 2022): 5084. http://dx.doi.org/10.3390/s22145084.

Abstract:
Recently, the rapid development of convolutional neural networks (CNN) has consistently improved object detection performance using CNN and has naturally been implemented in autonomous driving due to its operational potential in real-time. Detecting moving targets to realize autonomous driving is an essential task for the safety of drivers and pedestrians, and CNN-based moving target detectors have shown stable performance in fair weather. However, there is a considerable drop in detection performance during poor weather conditions like hazy or foggy situations due to particles in the atmosphere. To ensure stable moving object detection, an image restoration process with haze removal must be accompanied. Therefore, this paper proposes an image dehazing network that estimates the current weather conditions and removes haze using the haze level to improve the detection performance under poor weather conditions due to haze and low visibility. Combined with the thermal image, the restored image is assigned to the two You Only Look Once (YOLO) object detectors, respectively, which detect moving targets independently and improve object detection performance using late fusion. The proposed model showed improved dehazing performance compared with the existing image dehazing models and has proved that images taken under foggy conditions, the poorest weather for autonomous driving, can be restored to normal images. Through the fusion of the RGB image restored by the proposed image dehazing network with thermal images, the proposed model improved the detection accuracy by up to 22% or above in a dense haze environment like fog compared with models using existing image dehazing techniques.
29

Hou, Xujia, Feihu Zhang, Zewen Wang, Guanglei Song, Zijun Huang, and Jinpeng Wang. "DFFA-Net: A Differential Convolutional Neural Network for Underwater Optical Image Dehazing." Electronics 12, no. 18 (September 14, 2023): 3876. http://dx.doi.org/10.3390/electronics12183876.

Abstract:
This paper proposes DFFA-Net, a novel differential convolutional neural network designed for underwater optical image dehazing. DFFA-Net is obtained by deeply analyzing the factors that affect the quality of underwater images and combining the underwater light propagation characteristics. DFFA-Net introduces a channel differential module that captures the mutual information between the green and blue channels with respect to the red channel. Additionally, a loss function sensitive to RGB color channels is introduced. Experimental results demonstrate that DFFA-Net achieves state-of-the-art performance in terms of quantitative metrics for single-image dehazing within convolutional neural network-based dehazing models. On the widely-used underwater Underwater Image Enhancement Benchmark (UIEB) image dehazing dataset, DFFA-Net achieves a peak signal-to-noise ratio (PSNR) of 24.2631 and a structural similarity index (SSIM) score of 0.9153. Further, we have deployed DFFA-Net on a self-developed Remotely Operated Vehicle (ROV). In a swimming pool environment, DFFA-Net can process hazy images in real time, providing better visual feedback to the operator. The source code has been open sourced.
30

Yang, Geonmo, Juhui Lee, Ayoung Kim, and Younggun Cho. "Sparse Depth-Guided Image Enhancement Using Incremental GP with Informative Point Selection." Sensors 23, no. 3 (January 20, 2023): 1212. http://dx.doi.org/10.3390/s23031212.

Abstract:
We propose an online dehazing method with sparse depth priors using an incremental Gaussian Process (iGP). Conventional approaches focus on achieving single image dehazing by using multiple channels. In many robotics platforms, range measurements are directly available, except in a sparse form. This paper exploits direct and possibly sparse depth data in order to achieve efficient and effective dehazing that works for both color and grayscale images. The proposed algorithm is not limited to the channel information and works equally well for both color and gray images. However, efficient depth map estimations (from sparse depth priors) are additionally required. This paper focuses on a highly sparse depth prior for online dehazing. For efficient dehazing, we adopted iGP for incremental depth map estimation and dehazing. Incremental selection of the depth prior was conducted in an information-theoretic way by evaluating mutual information (MI) and other information-based metrics. As per updates, only the most informative depth prior was added, and haze-free images were reconstructed from the atmospheric scattering model with incrementally estimated depth. The proposed method was validated using different scenarios, color images under synthetic fog, real color, and grayscale haze indoors, outdoors, and underwater scenes.
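
Once a depth map (here, regressed by the incremental GP from sparse priors) has been turned into a transmission estimate, the haze-free image follows by inverting the atmospheric scattering model. A minimal sketch of that last step is given below; the lower bound t_min is a common stabilising assumption rather than a value from the paper.

import numpy as np

def recover_radiance(hazy, atmosphere, transmission, t_min=0.1):
    # Invert I = J * t + A * (1 - t)  =>  J = (I - A) / max(t, t_min) + A
    t = np.clip(transmission, t_min, 1.0)
    if hazy.ndim == 3:                      # color image: broadcast over channels
        t = t[..., None]
        atmosphere = np.asarray(atmosphere).reshape(1, 1, -1)
    return np.clip((hazy - atmosphere) / t + atmosphere, 0.0, 1.0)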
31

Jin, Guo Dong, Li Bin Lu, and Xiao Fei Zhu. "An Dehazing Algorithm of Lossy Compression Video Image." Advanced Materials Research 850-851 (December 2013): 825–29. http://dx.doi.org/10.4028/www.scientific.net/amr.850-851.825.

Abstract:
Using the dark channel prior to estimate the thickness of the haze, recent research has made significant progress in single image dehazing. However, it is difficult to apply existing methods to high-resolution input images because of their heavy computation cost, and for some kinds of input images existing methods still cannot reach sufficient accuracy. We develop a powerful and practical single image dehazing method. The experimental results show that our gradient prior on transmission maps greatly reduces the computation cost of the previous method. Furthermore, the optimization methods and parameter adjustment for our novel image prior enhance the accuracy of the transmission map computation. Overall, compared with the state of the art, our new single image dehazing method achieves the same or even better image quality with only around 1/8 of the computation time and memory cost.
32

Liu, Jun, Rong Jia, Wei Li, Fuqi Ma, and Xiaoyang Wang. "Image Dehazing Method of Transmission Line for Unmanned Aerial Vehicle Inspection Based on Densely Connection Pyramid Network." Wireless Communications and Mobile Computing 2020 (October 8, 2020): 1–9. http://dx.doi.org/10.1155/2020/8857271.

Abstract:
The quality of the camera image directly determines the accuracy of defect identification for transmission line equipment. However, complex external factors such as haze can seriously degrade the quality of images captured from the aircraft. Traditional image dehazing methods struggle to meet the needs of enhanced image inspection in complex environments. In this paper, image enhancement in hazy environments is studied, and an image dehazing method for transmission lines based on a densely connected pyramid network is proposed. The method uses an improved pyramid network to calculate the transmittance map and an improved U-Net to calculate the atmospheric light value. The transmittance map, atmospheric light value, and dehazed image are then jointly optimized to obtain the image dehazing model. The proposed method improves image brightness and contrast, increases image detail, and generates more realistic restored images than traditional methods.
33

Yu, Jing, Deying Liang, Bo Hang, and Hongtao Gao. "Aerial Image Dehazing Using Reinforcement Learning." Remote Sensing 14, no. 23 (November 26, 2022): 5998. http://dx.doi.org/10.3390/rs14235998.

Abstract:
Aerial observation is usually affected by the Earth’s atmosphere, especially when haze exists. Deep reinforcement learning was used in this study for dehazing. We first developed a clear–hazy aerial image dataset addressing various types of ground; we then compared the dehazing results of some state-of-the-art methods, including the classic dark channel prior, color attenuation prior, non-local image dehazing, multi-scale convolutional neural networks, DehazeNet, and all-in-one dehazing network. We extended the most suitable method, DehazeNet, to a multi-scale form and added it into a multi-agent deep reinforcement learning network called DRL_Dehaze. DRL_Dehaze was tested on several ground types and in situations with multiple haze scales. The results show that each pixel agent can automatically select the most suitable method in multi-scale haze situations and can produce a good dehazing result. Different ground scenes may best be processed using different steps.
34

Nan, Dong, Du-yan Bi, Chang Liu, Shi-ping Ma, and Lin-yuan He. "A Bayesian Framework for Single Image Dehazing considering Noise." Scientific World Journal 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/651986.

Abstract:
The single image dehazing algorithms in existence can only satisfy the demand for dehazing efficiency, not for denoising. In order to solve the problem, a Bayesian framework for single image dehazing considering noise is proposed. Firstly, the Bayesian framework is transformed to meet the dehazing algorithm. Then, the probability density function of the improved atmospheric scattering model is estimated by using the statistical prior and objective assumption of degraded image. Finally, the reflectance image is achieved by an iterative approach with feedback to reach the balance between dehazing and denoising. Experimental results demonstrate that the proposed method can remove haze and noise simultaneously and effectively.
35

He, Renjie, Xintao Guo, and Zhongke Shi. "SIDE—A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images." Sensors 20, no. 18 (September 16, 2020): 5300. http://dx.doi.org/10.3390/s20185300.

Abstract:
Single image dehazing is a difficult problem because of its ill-posed nature. Increasing attention has been paid recently as its high potential applications in many visual tasks. Although single image dehazing has made remarkable progress in recent years, they are mainly designed for haze removal in daytime. In nighttime, dehazing is more challenging where most daytime dehazing methods become invalid due to multiple scattering phenomena, and non-uniformly distributed dim ambient illumination. While a few approaches have been proposed for nighttime image dehazing, low ambient light is actually ignored. In this paper, we propose a novel unified nighttime hazy image enhancement framework to address the problems of both haze removal and illumination enhancement simultaneously. Specifically, both halo artifacts caused by multiple scattering and non-uniformly distributed ambient illumination existing in low-light hazy conditions are considered for the first time in our approach. More importantly, most current daytime dehazing methods can be effectively incorporated into nighttime dehazing task based on our framework. Firstly, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on the Retinex theory. We then employ the classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and the state-of-the-art nighttime dehazing methods, both quantitative and qualitative experimental results on both real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement and color preservation.
APA, Harvard, Vancouver, ISO, and other styles
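A rough sketch of the Retinex-style illumination step described in the entry above: the spatially varying ambient illumination is approximated by a large-kernel Gaussian blur of the image intensity and then adjusted with a simple gamma curve. This illustrates the idea only; it is not the SIDE framework, the sigma and gamma values are assumptions, and the halo-layer decomposition and daytime dehazing stages are omitted.

import cv2
import numpy as np

def estimate_illumination(img_bgr, sigma=50):
    # Retinex-style assumption: illumination is the low-frequency part of the intensity.
    intensity = img_bgr.astype(np.float64).max(axis=2) / 255.0
    return np.clip(cv2.GaussianBlur(intensity, (0, 0), sigma), 1e-3, 1.0)

def enhance_low_light(img_bgr, gamma=0.6):
    img = img_bgr.astype(np.float64) / 255.0
    L = estimate_illumination(img_bgr)
    reflectance = img / L[..., None]                 # scene layer up to illumination
    out = reflectance * (L ** gamma)[..., None]      # recombine with brightened illumination
    return np.clip(out * 255, 0, 255).astype(np.uint8)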
36

Lee, Ho Sang. "Eximious Sandstorm Image Improvement Using Image Adaptive Ratio and Brightness-Adaptive Dark Channel Prior." Symmetry 14, no. 7 (June 28, 2022): 1334. http://dx.doi.org/10.3390/sym14071334.

Full text of the source
Abstract:
Sandstorm images have a color cast caused by sand particles. Hazy images share features with sandstorm images because both are produced by a similar imaging process. To improve hazy images, various dehazing methods are being studied. However, not all of them are appropriate for enhancing sandstorm images, which suffer color degradation from imbalanced color channels, with the degraded color distributed across the image. Therefore, this paper proposes two steps to improve sandstorm images. The first is a color-balancing step using the mean ratio between the red channel and the other color channels. Because the sandstorm image has degraded color channels, the attenuated channels have different average values: the red channel's average is the highest and the blue channel's is the lowest. Using this property, this paper balances the color of images via the ratio of the color channels. Even after enhancement, if the red channel is still the most abundant, the enhanced image may retain a reddish cast. Therefore, to enhance the image naturally, the red channel is adjusted by the average ratio of the color channels; this measure, based on the average ratio of the color channels, is called the image adaptive ratio (IAR). Because color-balanced sandstorm images have the same characteristics as hazy images, a dehazing method is applied to enhance them. Ordinary dehazing methods often use the dark channel prior (DCP). Although DCP estimates the dark regions of an image, when the brightness is too high the estimated dark channel is not sufficiently dark, and DCP can introduce artificial color shifts in the enhanced image. To compensate for this, this paper proposes a brightness-adaptive dark channel prior (BADCP) using a normalized color channel. The image improved using the proposed method has no color distortion or artificial color. The experimental results show the superior performance of the proposed method in comparison with state-of-the-art dehazing methods, both subjectively and objectively.
APA, Harvard, Vancouver, ISO, and other styles
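A minimal sketch of the channel-mean-ratio color balancing idea described in the entry above: each channel is rescaled so that all channels share a common mean. The exact image adaptive ratio (IAR) and the brightness-adaptive dark channel prior are not reproduced; the scaling rule below is an illustrative assumption.

import numpy as np

def balance_channels(img_rgb):
    # img_rgb: float array in [0, 1], shape (H, W, 3), channels ordered R, G, B.
    img = img_rgb.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)            # per-channel means (red highest in sandstorm images)
    gains = means.mean() / np.maximum(means, 1e-6)     # ratio-based gain per channel
    return np.clip(img * gains, 0.0, 1.0)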
37

Vishwakarma, Sandeep, Anuradha Pillai, and Deepika Punj. "An Enhancement in Single-Image Dehazing Employing Contrastive Attention over Variational Auto-Encoder (CA-VAE) Method." International Journal of Mathematical, Engineering and Management Sciences 8, no. 4 (August 1, 2023): 728–54. http://dx.doi.org/10.33889/ijmems.2023.8.4.042.

Full text of the source
Abstract:
Hazy images and videos have low contrast and poor visibility. Fog, ice fog, steam fog, smoke, volcanic ash, dust, and snow all degrade captured images, worsening color and contrast. Computer vision applications often fail because of this image degradation: hazy images and videos with skewed color contrast and low visibility affect photometric analysis, object identification, and target tracking. Image haze reduction algorithms help computer programs classify and comprehend images, and image dehazing now uses deep learning approaches. The proposed study is motivated by the observed negative correlation between depth and the difference between a hazy image's maximum and minimum color channels. Using a contrastive attention mechanism spanning sub-pixels and blocks, we offer a unique attention method to create high-quality, haze-free pictures. The L*a*b* color model is proposed as an effective color space for dehazing images. A variational auto-encoder-based dehazing network can also be utilized for training, since it compresses and attempts to reconstruct input images; otherwise, hundreds of image-impacting characteristics may need to be estimated. In a variational auto-encoder, hazy input images are directly assigned a Gaussian probability distribution, and the variational auto-encoder estimates the distribution parameters. A quantitative and qualitative study on the RESIDE dataset shows the suggested method's accuracy and resilience. RESIDE's subsets of synthetic and real-world single-image dehazing examples are utilized for training and assessment. The method improves the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) metrics.
APA, Harvard, Vancouver, ISO, and other styles
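The prior mentioned in the entry above, that scene depth correlates negatively with the difference between a hazy image's maximum and minimum color channels, can be computed directly. The sketch below extracts that haze cue only; it is not the CA-VAE network, and the patch size used for smoothing is an assumption.

import cv2
import numpy as np

def channel_difference_map(img_bgr, patch=15):
    # Per-pixel difference between the largest and smallest color channels,
    # averaged over a patch; small values suggest dense haze (larger depth).
    img = img_bgr.astype(np.float64) / 255.0
    diff = img.max(axis=2) - img.min(axis=2)
    return cv2.blur(diff, (patch, patch))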
38

Kim, Changwon. "Region Adaptive Single Image Dehazing." Entropy 23, no. 11 (October 30, 2021): 1438. http://dx.doi.org/10.3390/e23111438.

Full text of the source
Abstract:
Image haze removal is essential preprocessing for computer vision applications because outdoor images taken in adverse weather conditions such as fog or snow have poor visibility. This problem has been extensively studied in the literature, and the most popular technique is the dark channel prior (DCP). However, the dark channel prior tends to underestimate the transmission of bright areas or objects, which may cause color distortions during dehazing. This paper proposes a new single-image dehazing method that combines the dark channel prior with the bright channel prior (BCP) in order to overcome the limitations of the dark channel prior. A patch-based robust atmospheric light estimation is introduced in order to divide the image into regions to which the DCP assumption and the BCP assumption are applied. Moreover, region-adaptive haze control parameters are introduced in order to suppress distortions in flat, bright regions and to increase visibility in textured regions. The flat and textured regions are expressed as probabilities by using local image entropy. The performance of the proposed method is evaluated on synthetic and real data sets. Experimental results show that the proposed method outperforms state-of-the-art image dehazing methods both visually and numerically.
APA, Harvard, Vancouver, ISO, and other styles
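For illustration, a minimal sketch of the three ingredients named in the entry above: the dark channel, the bright channel, and a local-entropy map that separates flat from textured regions. Patch sizes and the histogram binning are assumptions, and the paper's region-adaptive fusion of these cues is not reproduced.

import cv2
import numpy as np

def dark_and_bright_channels(img, patch=15):
    # img: float array in [0, 1], shape (H, W, 3).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)      # dark channel prior cue
    bright = cv2.dilate(img.max(axis=2), kernel)   # bright channel prior cue
    return dark, bright

def local_entropy(gray_u8, patch=9, bins=32):
    # Shannon entropy of the intensity histogram in each patch (texture probability cue).
    h, w = gray_u8.shape
    q = (gray_u8.astype(np.int32) * bins) // 256
    ent = np.zeros((h, w))
    r = patch // 2
    for y in range(h):
        for x in range(w):
            win = q[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            p = np.bincount(win.ravel(), minlength=bins) / win.size
            p = p[p > 0]
            ent[y, x] = -(p * np.log2(p)).sum()
    return ent / np.log2(bins)   # normalized to [0, 1]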
39

Abdulkareem, Karrar Hameed, Nureize Arbaiy, A. A. Zaidan, B. B. Zaidan, O. S. Albahri, M. A. Alsalem, and Mahmood M. Salih. "A Novel Multi-Perspective Benchmarking Framework for Selecting Image Dehazing Intelligent Algorithms Based on BWM and Group VIKOR Techniques." International Journal of Information Technology & Decision Making 19, no. 03 (May 2020): 909–57. http://dx.doi.org/10.1142/s0219622020500169.

Full text of the source
Abstract:
The increasing demand for image dehazing-based applications has raised the value of efficient evaluation and benchmarking for image dehazing algorithms. Several perspectives, such as inhomogeneous foggy, homogenous foggy, and dark foggy scenes, have been considered in multi-criteria evaluation. The benchmarking for the selection of the best image dehazing intelligent algorithm based on multi-criteria perspectives is a challenging task owing to (a) multiple evaluation criteria, (b) criteria importance, (c) data variation, (d) criteria conflict, and (e) criteria tradeoff. A generally accepted framework for benchmarking image dehazing performance is unavailable in the existing literature. This study proposes a novel multi-perspective (i.e., an inhomogeneous foggy scene, a homogenous foggy scene, and a dark foggy scene) benchmarking framework for the selection of the best image dehazing intelligent algorithm based on multi-criteria analysis. Experiments were conducted in three stages. First was an evaluation experiment with five algorithms as part of matrix data. Second was a crossover between image dehazing intelligent algorithms and a set of target evaluation criteria to obtain matrix data. Third was the ranking of the image dehazing intelligent algorithms through integrated best–worst and VIseKriterijumska Optimizacija I Kompromisno Resenje methods. Individual and group decision-making contexts were applied to demonstrate the efficiency of the proposed framework. The mean was used to objectively validate the ranks given by group decision-making contexts. Checklist and benchmarking scenarios were provided to compare the proposed framework with an existing benchmark study. The proposed framework achieved a significant result in terms of selecting the best image dehazing algorithm.
APA, Harvard, Vancouver, ISO, and other styles
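For reference, a compact sketch of the VIKOR ranking step used in such benchmarking frameworks. The criteria weights (produced by the best–worst method in the paper) and the compromise parameter v are placeholders, every criterion is assumed benefit-type, and the framework's multi-perspective aggregation is not reproduced.

import numpy as np

def vikor_rank(decision_matrix, weights, v=0.5):
    # decision_matrix: (n_algorithms, n_criteria); larger is better for every criterion.
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    best, worst = X.max(axis=0), X.min(axis=0)
    d = w * (best - X) / np.maximum(best - worst, 1e-12)   # weighted distance to the ideal
    S, R = d.sum(axis=1), d.max(axis=1)                    # group utility and individual regret
    Q = v * (S - S.min()) / max(S.max() - S.min(), 1e-12) \
        + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12)
    return np.argsort(Q)   # algorithm indices, best (smallest Q) first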
40

Porto Marques, Tunai, Alexandra Branzan Albu, and Maia Hoeberechts. "A Contrast-Guided Approach for the Enhancement of Low-Lighting Underwater Images." Journal of Imaging 5, no. 10 (October 1, 2019): 79. http://dx.doi.org/10.3390/jimaging5100079.

Full text of the source
Abstract:
Underwater images are often acquired in sub-optimal lighting conditions, in particular at profound depths where the absence of natural light demands the use of artificial lighting. Low-lighting images impose a challenge for both manual and automated analysis, since regions of interest can have low visibility. A new framework capable of significantly enhancing these images is proposed in this article. The framework is based on a novel dehazing mechanism that considers local contrast information in the input images, and offers a solution to three common disadvantages of current single image dehazing methods: oversaturation of radiance, lack of scale-invariance and creation of halos. A novel low-lighting underwater image dataset, OceanDark, is introduced to assist in the development and evaluation of the proposed framework. Experimental results and a comparison with other underwater-specific image enhancement methods show that the proposed framework can be used for significantly improving the visibility in low-lighting underwater images of different scales, without creating undesired dehazing artifacts.
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Xiaohua, Zenglu Li, Xiaoyu Guo, Songyang Xiang, and Yao Zhang. "Multi-level perception fusion dehazing network." PLOS ONE 18, no. 10 (October 2, 2023): e0285137. http://dx.doi.org/10.1371/journal.pone.0285137.

Full text of the source
Abstract:
Image dehazing models are critical in improving the recognition and classification capabilities of image-related artificial intelligence systems. However, existing methods often ignore the limitations of receptive field size during feature extraction and the loss of important information during network sampling, resulting in incomplete or structurally flawed dehazing outcomes. To address these challenges, we propose a multi-level perception fusion dehazing network (MPFDN) that effectively integrates feature information across different scales, expands the perceptual field of the network, and fully extracts the spatial background information of the image. Moreover, we employ an error feedback mechanism and a feature compensator to address the loss of features during the image dehazing process. Finally, we subtract the original hazy image from the generated residual image to obtain a high-quality dehazed image. Based on extensive experimentation, our proposed method demonstrates outstanding performance not only on synthetic dehazing datasets but also on non-homogeneous haze datasets.
APA, Harvard, Vancouver, ISO, and other styles
42

Yousaf, Rehan Mehmood, Hafiz Adnan Habib, Zahid Mehmood, Ameen Banjar, Riad Alharbey, and Omar Aboulola. "Single Image Dehazing and Edge Preservation Based on the Dark Channel Probability-Weighted Moments." Mathematical Problems in Engineering 2019 (December 2, 2019): 1–11. http://dx.doi.org/10.1155/2019/9721503.

Full text of the source
Abstract:
Single-image dehazing has been studied extensively over the last two decades because of the widely varying conditions under which hazy images are captured. Several factors make the dehazing process cumbersome, such as unbalanced airlight, contrast, and darkness in hazy images. Many estimation- and learning-based techniques used to dehaze images suffer from halo artifacts and weak edges. The proposed technique preserves edges and illumination better and retains the original color of the image. The dark channel prior (DCP) and probability-weighted moments (PWMs) are applied to each channel of an image to suppress the hazy regions and enhance the true edges. PWM is very effective because it suppresses the low variations present in images affected by haze. The method proposed in this article performs well compared to state-of-the-art image dehazing techniques in various conditions, including illumination changes and contrast variation, while preserving edges without producing halo effects within the image. The qualitative and quantitative analysis carried out on standard image databases proves its robustness in terms of standard performance evaluation metrics.
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Zhibo, Jia Jia, Peng Lyu, and Jeongik Min. "Efficient Dehazing with Recursive Gated Convolution in U-Net: A Novel Approach for Image Dehazing." Journal of Imaging 9, no. 9 (September 11, 2023): 183. http://dx.doi.org/10.3390/jimaging9090183.

Full text of the source
Abstract:
Image dehazing, a fundamental problem in computer vision, involves the recovery of clear visual cues from images marred by haze. Over recent years, deploying deep learning paradigms has spurred significant strides in image dehazing tasks. However, many dehazing networks aim to enhance performance by adopting intricate network architectures, complicating training, inference, and deployment procedures. This study proposes an end-to-end U-Net dehazing network model with recursive gated convolution and attention mechanisms to improve performance while maintaining a lean network structure. In our approach, we leverage an improved recursive gated convolution mechanism to substitute the original U-Net’s convolution blocks with residual blocks and apply the SK fusion module to revamp the skip connection method. We designate this novel U-Net variant as the Dehaze Recursive Gated U-Net (DRGNet). Comprehensive testing across public datasets demonstrates the DRGNet’s superior performance in dehazing quality, detail retrieval, and objective evaluation metrics. Ablation studies further confirm the effectiveness of the key design elements.
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Qinghong, Yong Qin, Zhengyu Xie, Zhiwei Cao, and Limin Jia. "An Efficient Residual-Based Method for Railway Image Dehazing." Sensors 20, no. 21 (October 30, 2020): 6204. http://dx.doi.org/10.3390/s20216204.

Full text of the source
Abstract:
Trains shuttle in semi-open environments, and the surrounding environment plays an important role in the safety of train operation. Weather is one of the factors that affect the railway environment; under haze, railway monitoring and staff vision can be blurred, threatening railway safety. This paper tackles image dehazing for railways. Its contributions are as follows: (1) It proposes an end-to-end residual-block-based haze removal method, called RID-Net (Railway Image Dehazing Network), that consists of two subnetworks, a fine-grained and a coarse-grained network, and directly generates the clean image from the input hazy image. (2) A combined loss function (per-pixel loss and perceptual loss) is proposed to capture both low-level and high-level features and thus generate high-quality restored images. (3) We use the full-reference criteria (PSNR and SSIM), object detection, running time, and visual inspection to evaluate the proposed dehazing method. Experimental results on a railway synthesized dataset, a benchmark indoor dataset, and a real-world dataset demonstrate that our method has superior performance compared to the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
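A minimal PyTorch sketch of the combined loss described in the entry above: a per-pixel term plus a perceptual term computed on features of a frozen VGG-16 (assuming a recent torchvision). The chosen feature layer, the 0.01 weighting, and the use of L1 for the pixel term are assumptions, not the RID-Net settings.

import torch
import torch.nn as nn
from torchvision import models

class CombinedLoss(nn.Module):
    def __init__(self, perceptual_weight=0.01, feature_layer=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg.children())[:feature_layer]).eval()
        for p in self.features.parameters():
            p.requires_grad = False            # the perceptual backbone stays fixed
        self.pixel = nn.L1Loss()
        self.w = perceptual_weight

    def forward(self, dehazed, clean):
        # dehazed, clean: (N, 3, H, W) tensors, assumed normalized as the VGG backbone expects.
        pixel_loss = self.pixel(dehazed, clean)
        perceptual_loss = nn.functional.mse_loss(self.features(dehazed),
                                                 self.features(clean))
        return pixel_loss + self.w * perceptual_loss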
45

Zhang, Y., L. Y. Luo, H. J. Zhao, R. D. Qiu, and Y. S. Ying. "IMAGE DEHAZING BASED ON MULTISPECTRAL POLARIZATION IMAGING METHOD IN DIFFERENT DETECTION MODES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (August 6, 2020): 615–20. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-615-2020.

Full text of the source
Abstract:
Abstract. In haze, the quality of the images is degraded due to the scattering of atmospheric aerosol particles in near-earth remote sensing. Therefore, how to effectively remove the influence of haze and improve image quality has been a hot issue. A polarization spectral image dehazing method is proposed here. A multi-spectral full-polarization imager was used to detect the polarization spectral images of ground objects. Firstly, the maximum- and minimum-intensity polarization images were obtained from the relationship between the Stokes vector and the Mueller matrix. Secondly, the airlight polarization model was utilized to estimate the airlight radiance at an infinite distance and the degree of polarization of the airlight. At last, the atmospheric attenuation model was used to obtain dehazed images. The results proved that our proposed image dehazing method can achieve substantial improvements on the detail recovery, not only in vertical downward detection mode, but also in horizontal detection mode.
APA, Harvard, Vancouver, ISO, and other styles
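The classical polarization-difference dehazing computation that underlies such methods can be written in a few lines. The sketch below is a generic illustration, not the authors' Stokes/Mueller processing chain; the airlight radiance at infinity A_inf and its degree of polarization p_A are assumed to be estimated separately (for example, from a sky region).

import numpy as np

def polarization_dehaze(I_max, I_min, A_inf, p_A, t_min=0.1):
    # I_max, I_min: maximum- and minimum-intensity polarization images (float arrays).
    I_total = I_max + I_min                              # total intensity
    airlight = (I_max - I_min) / max(p_A, 1e-6)          # per-pixel airlight estimate
    t = np.clip(1.0 - airlight / A_inf, t_min, 1.0)      # transmission estimate
    return (I_total - airlight) / t                      # recovered scene radiance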
46

Jiang, Bo, Wanxu Zhang, Jian Zhao, Yi Ru, Min Liu, Xiaolei Ma, Xiaoxuan Chen, and Hongqi Meng. "Gray-Scale Image Dehazing Guided by Scene Depth Information." Mathematical Problems in Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/7809214.

Full text of the source
Abstract:
Combining two different types of image dehazing strategies, one based on image enhancement and one on the atmospheric physical model, a novel method for gray-scale image dehazing is proposed in this paper. From the image-enhancement-based strategy, the characteristics of simplicity, effectiveness, and absence of color distortion are preserved, and the common guided image filter is modified to suit the image enhancement setting. Through wavelet decomposition, the high-frequency boundaries of the original image are preserved in advance. Moreover, the dehazing process can be guided by an image of scene depth proportion estimated directly from the original gray-scale image. Our method has the advantages of brightness consistency and no distortion over state-of-the-art methods based on the atmospheric physical model. In particular, our method overcomes the essential shortcoming of the abovementioned methods, which mainly work on color images. Meanwhile, an image of scene depth proportion is acquired as a byproduct of image dehazing.
APA, Harvard, Vancouver, ISO, and other styles
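For reference, a compact implementation of the standard guided image filter that the entry above modifies (the He et al. formulation, written with box filters); the radius and eps values are typical defaults and are assumptions here, and the paper's modification for image enhancement is not reproduced.

import cv2
import numpy as np

def guided_filter(guide, src, radius=30, eps=1e-3):
    # Edge-preserving smoothing of src guided by guide; both are single-channel floats in [0, 1].
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.blur(x, ksize)
    mean_I, mean_p = mean(guide), mean(src)
    cov_Ip = mean(guide * src) - mean_I * mean_p
    var_I = mean(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)        # local linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)  # averaged coefficients applied to the guide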
47

Hu, Xianjun, Jing Wang, Chunlei Zhang, and Yishuo Tong. "Deep Learning-Enabled Variational Optimization Method for Image Dehazing in Maritime Intelligent Transportation Systems." Journal of Advanced Transportation 2021 (April 30, 2021): 1–18. http://dx.doi.org/10.1155/2021/6658763.

Full text of the source
Abstract:
Image dehazing has become a fundamental problem of common concern in computer vision-driven maritime intelligent transportation systems (ITS). The purpose of image dehazing is to reconstruct the latent haze-free image from its observed hazy version. It is well known that the accurate estimation of transmission map plays a vital role in image dehazing. In this work, the coarse transmission map is firstly estimated using a robust fusion-based strategy. A unified optimization framework is then proposed to estimate the refined transmission map and latent sharp image simultaneously. The resulting constrained minimization model is solved using a two-step optimization algorithm. To further enhance dehazing performance, the solutions of subproblems obtained in this optimization algorithm are equivalent to deep learning-based image denoising. Due to the powerful representation ability, the proposed method can accurately and robustly estimate the transmission map and latent sharp image. Numerous experiments on both synthetic and realistic datasets have been performed to compare our method with several state-of-the-art dehazing methods. Dehazing results have demonstrated the proposed method’s superior imaging performance in terms of both quantitative and qualitative evaluations. The enhanced imaging quality is beneficial for practical applications in maritime ITS, for example, vessel detection, recognition, and tracking.
APA, Harvard, Vancouver, ISO, and other styles
48

Song, Runze, Zhaohui Liu, and Chao Wang. "End-to-end dehazing of traffic sign images using reformulated atmospheric scattering model." Journal of Intelligent & Fuzzy Systems 41, no. 6 (December 16, 2021): 6815–30. http://dx.doi.org/10.3233/jifs-210733.

Full text of the source
Abstract:
As an advanced machine vision task, traffic sign recognition is of great significance for the safe driving of autonomous vehicles, and haze seriously degrades its performance. This paper proposes a dehazing network, built from multi-scale residual blocks, that significantly improves the recognition of traffic signs in hazy weather. First, we introduce the idea of residual learning and design an end-to-end multi-scale feature information fusion method. Secondly, using subjective visual effects and objective evaluation metrics such as the Visibility Index (VI) and Realness Index (RI), which reflect the characteristics of real-world environments, we compare various well-performing traditional and deep learning dehazing methods. Finally, this paper combines image dehazing and traffic sign recognition, using the proposed algorithm to dehaze traffic sign images captured in real-world hazy weather. The experiments show that the algorithm improves the performance of traffic sign recognition in hazy weather and fulfils the requirements of real-time image processing. They also prove the effectiveness of the reformulated atmospheric scattering model for dehazing traffic sign images.
APA, Harvard, Vancouver, ISO, and other styles
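Reformulated atmospheric scattering models of this kind typically fold transmission and atmospheric light into a single per-pixel factor, in the spirit of AOD-Net, so that J(x) = K(x)·I(x) − K(x) + b. The sketch below shows only that recovery step once K has been predicted by a network; it is not the proposed model, and the constant b = 1 is an assumption.

import numpy as np

def reformulated_recovery(hazy, K, b=1.0):
    # hazy: hazy image in [0, 1]; K: per-pixel factor combining transmission and airlight.
    return np.clip(K * hazy - K + b, 0.0, 1.0)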
49

Yang, Zhenjian, Jiamei Shang, Zhongwei Zhang, Yan Zhang, and Shudong Liu. "A new end-to-end image dehazing algorithm based on residual attention mechanism." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 4 (August 2021): 901–8. http://dx.doi.org/10.1051/jnwpu/20213940901.

Full text of the source
Abstract:
Traditional image dehazing algorithms based on prior knowledge and deep learning rely on the atmospheric scattering model and easily cause color distortion and incomplete dehazing. To solve these problems, an end-to-end image dehazing algorithm based on a residual attention mechanism is proposed in this paper. The network includes four modules: encoder, multi-scale feature extraction, feature fusion and decoder. The encoder module encodes the input hazy image into a feature map, which facilitates subsequent feature extraction and reduces memory consumption. The multi-scale feature extraction module includes a residual smoothed dilated convolution module, a residual block and efficient channel attention, which expand the receptive field and extract features at different scales by filtering and weighting. The feature fusion module with efficient channel attention adjusts the channel weights dynamically, acquires rich context information and suppresses redundant information, enhancing the network's ability to extract the haze density image. Finally, the decoder module maps the fused features nonlinearly to obtain the haze density image and then restores the haze-free image. Qualitative and quantitative tests on the SOTS test set and natural haze images show good objective and subjective evaluation results. This algorithm effectively alleviates color distortion and incomplete dehazing.
APA, Harvard, Vancouver, ISO, and other styles
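A minimal PyTorch sketch of an efficient channel attention (ECA) block of the kind referenced in the entry above: global average pooling, a 1-D convolution across channels, and a sigmoid gate that reweights the feature map. The kernel size of 3 is an assumption, and this is a generic block, not the paper's exact module.

import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                    # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))               # global average pooling -> (N, C)
        y = self.conv(y.unsqueeze(1))        # 1-D conv across channels -> (N, 1, C)
        w = self.sigmoid(y).squeeze(1)       # per-channel weights in (0, 1)
        return x * w[:, :, None, None]       # dynamically reweight the channels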
50

Ngo, Dat, Gi-Dong Lee, and Bongsoon Kang. "Haziness Degree Evaluator: A Knowledge-Driven Approach for Haze Density Estimation." Sensors 21, no. 11 (June 4, 2021): 3896. http://dx.doi.org/10.3390/s21113896.

Full text of the source
Abstract:
Haze is a term that is widely used in image processing to refer to natural and human-activity-emitted aerosols. It causes light scattering and absorption, which reduce the visibility of captured images. This reduction hinders the proper operation of many photographic and computer-vision applications, such as object recognition/localization. Accordingly, haze removal, which is also known as image dehazing or defogging, is an apposite solution. However, existing dehazing algorithms unconditionally remove haze, even when haze occurs occasionally. Therefore, an approach for haze density estimation is highly demanded. This paper then proposes a model that is known as the haziness degree evaluator to predict haze density from a single image without reference to a corresponding haze-free image, an existing georeferenced digital terrain model, or training on a significant amount of data. The proposed model quantifies haze density by optimizing an objective function comprising three haze-relevant features that result from correlation and computation analysis. This objective function is formulated to maximize the image’s saturation, brightness, and sharpness while minimizing the dark channel. Additionally, this study describes three applications of the proposed model in hazy/haze-free image classification, dehazing performance assessment, and single image dehazing. Extensive experiments on both real and synthetic datasets demonstrate its efficacy in these applications.
APA, Harvard, Vancouver, ISO, and other styles
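The haze-relevant features that the evaluator combines (saturation, brightness, sharpness, and the dark channel) can each be computed from a single image. The sketch below shows one plausible way to extract them; the gradient-magnitude sharpness measure and the patch size are assumptions, and the paper's optimization of the objective function over these features is not reproduced.

import cv2
import numpy as np

def haze_features(img_bgr, patch=15):
    img = img_bgr.astype(np.float64) / 255.0
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[..., 1].mean() / 255.0          # global mean saturation
    brightness = hsv[..., 2].mean() / 255.0          # global mean brightness
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    sharpness = np.hypot(gx, gy).mean()              # mean gradient magnitude
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark_channel = cv2.erode(img.min(axis=2), kernel).mean()
    return {"saturation": saturation, "brightness": brightness,
            "sharpness": sharpness, "dark_channel": dark_channel}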