A selection of scholarly literature on the topic "IMAGE DEHAZING"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "IMAGE DEHAZING".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "IMAGE DEHAZING"

1

Yeole, Aditya. "Satellite Image Dehazing." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 5184–92. http://dx.doi.org/10.22214/ijraset.2023.52728.

Full text
Abstract:
Abstract: Images captured in haze, mist, and other raw weather suffer serious degradation. Dehazing a single image is a difficult problem. While existing systems depend on high-quality images, some computer vision applications, such as self-driving cars and image restoration, typically receive input of poor quality. This paper proposes a deep CNN dehazing model based on U-NET, dynamic U-NET, and Generative Adversarial Networks (CycleGANs). CycleGAN is a method for automatic training of image-to-image translation without paired examples. To train the network, we use the SIH dataset as the training set. Superior performance is achieved using an appreciably small dataset, and the corresponding results confirm the adaptability and robustness of the model.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Xu, Jun, Zi-Xuan Chen, Hao Luo, and Zhe-Ming Lu. "An Efficient Dehazing Algorithm Based on the Fusion of Transformer and Convolutional Neural Network." Sensors 23, no. 1 (December 21, 2022): 43. http://dx.doi.org/10.3390/s23010043.

Full text
Abstract:
The purpose of image dehazing is to remove the interference of weather factors from degraded images and to enhance their clarity and color saturation so as to restore as many useful features as possible. Single image dehazing is one of the most important tasks in the field of image restoration. In recent years, thanks to the progress of deep learning, single image dehazing has advanced greatly. With the success of Transformers in advanced computer vision tasks, some studies have also begun to apply Transformers to image dehazing and obtained surprising results. However, convolutional-neural-network-based and Transformer-based dehazing algorithms each have distinct advantages and disadvantages. Therefore, this paper proposes a novel Transformer–Convolution fusion dehazing network (TCFDN), which combines the Transformer's global modeling ability with the convolutional neural network's local modeling ability to improve dehazing. The network uses the classic autoencoder structure. The paper proposes a Transformer–Convolution hybrid layer that uses an adaptive fusion strategy to make full use of the Swin Transformer and the convolutional neural network when extracting and reconstructing image features. Building on previous research, this layer further improves the network's ability to remove haze. A series of comparison and ablation experiments not only show that the proposed network outperforms more advanced dehazing algorithms, but also provide solid evidence for the theory on which it depends.
3

Ma, Shaojin, Weiguo Pan, Hongzhe Liu, Songyin Dai, Bingxin Xu, Cheng Xu, Xuewei Li, and Huaiguang Guan. "Image Dehazing Based on Improved Color Channel Transfer and Multiexposure Fusion." Advances in Multimedia 2023 (May 15, 2023): 1–10. http://dx.doi.org/10.1155/2023/8891239.

Full text
Abstract:
Image dehazing is one of the problems that urgently needs to be solved in the field of computer vision. In recent years, more and more algorithms have been applied to image dehazing and have achieved good results. However, dehazed images still suffer from color distortion, contrast and saturation disorders, and other challenges. To solve these problems, this paper proposes an effective image dehazing method based on improved color channel transfer and multi-exposure image fusion. First, the image is preprocessed using a color channel transfer method based on k-means. Second, gamma correction is introduced on the basis of guided filtering to obtain a series of multi-exposure images, and the obtained multi-exposure images are fused into a dehazed image through a Laplacian pyramid fusion scheme based on local similarity of adaptive weights. Finally, contrast and saturation corrections are performed on the dehazed image. Experimental verification on synthetic and natural hazy images shows that the proposed method is superior to existing dehazing algorithms from both subjective and objective perspectives.
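The gamma-based multi-exposure step described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: per-pixel well-exposedness weighting replaces the paper's Laplacian-pyramid fusion, and the gamma values are arbitrary.

```python
import math

def gamma_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Generate a pseudo multi-exposure stack by gamma correction
    (out = in ** gamma) of an intensity image with values in [0, 1]."""
    return [[[px ** g for px in row] for row in img] for g in gammas]

def fuse(stack):
    """Fuse the stack with per-pixel well-exposedness weights
    (a Gaussian centred at 0.5) -- a simplified stand-in for the
    Laplacian-pyramid fusion used in the paper."""
    h, w = len(stack[0]), len(stack[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ws = [math.exp(-((s[y][x] - 0.5) ** 2) / 0.08) for s in stack]
            out[y][x] = sum(wt * s[y][x] for wt, s in zip(ws, stack)) / sum(ws)
    return out
```

A dark pixel (0.25) is brightened by the 0.5 gamma, darkened by the 2.0 gamma, and the fused result sits between the exposures, pulled toward the best-exposed one.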
4

Wei, Jianchong, Yi Wu, Liang Chen, Kunping Yang, and Renbao Lian. "Zero-Shot Remote Sensing Image Dehazing Based on a Re-Degradation Haze Imaging Model." Remote Sensing 14, no. 22 (November 13, 2022): 5737. http://dx.doi.org/10.3390/rs14225737.

Full text
Abstract:
Image dehazing is crucial for improving advanced applications on remote sensing (RS) images. However, paired RS images for training deep neural networks (DNNs) are scarce, and synthetic datasets may suffer from domain-shift issues. In this paper, we propose a zero-shot RS image dehazing method based on a re-degradation haze imaging model, which directly restores the haze-free image from a single hazy image. Based on layer disentanglement, we design a dehazing framework consisting of three joint sub-modules to disentangle the hazy input image into three components: the atmospheric light, the transmission map, and the recovered haze-free image. We then generate a re-degraded hazy image by mixing up the hazy input image and the recovered haze-free image. Using the proposed re-degradation haze imaging model, we theoretically demonstrate that the hazy input and the re-degraded hazy image follow a similar haze imaging model. This finding allows us to train the dehazing network in a zero-shot manner: the network is optimized to generate outputs that satisfy the relationship between the hazy input image and the re-degraded hazy image in the re-degradation haze imaging model. Therefore, given a hazy RS image, the dehazing network directly infers the haze-free image by minimizing a specific loss function. Using uniform hazy datasets, non-uniform hazy datasets, and real-world hazy images, we conducted comprehensive experiments showing that our method outperforms many state-of-the-art (SOTA) methods in processing uniform or slight/moderate non-uniform RS hazy images. In addition, evaluation on a high-level vision task (RS image road extraction) further demonstrates the effectiveness and promising performance of the proposed zero-shot dehazing method.
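The re-degradation idea rests on the standard haze imaging model, I(x) = J(x)·t(x) + A·(1 − t(x)). A minimal numeric sketch (made-up scalar values, not the authors' code) shows the model, and that a convex mix of the hazy input with the recovered image again obeys it with an effective transmission t′ = α·t + (1 − α):

```python
def hazy(j, t, a):
    """Standard haze imaging model: I = J*t + A*(1 - t)."""
    return j * t + a * (1.0 - t)

def re_degrade(i_hazy, j_recovered, alpha=0.5):
    """Mix the hazy input with the recovered haze-free image.
    The mix follows the same imaging model with an effective
    transmission alpha*t + (1 - alpha) -- the property the
    zero-shot training loss exploits."""
    return alpha * i_hazy + (1.0 - alpha) * j_recovered

j, t, a = 0.8, 0.6, 1.0        # scene radiance, transmission, airlight
i = hazy(j, t, a)              # 0.88: synthesized hazy intensity
mixed = re_degrade(i, j)       # 0.84: re-degraded intensity
# Same value obtained directly from the haze model with t' = 0.5*t + 0.5:
assert abs(hazy(j, 0.8, a) - mixed) < 1e-9
```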
5

Dong, Weida, Chunyan Wang, Hao Sun, Yunjie Teng, and Xiping Xu. "Multi-Scale Attention Feature Enhancement Network for Single Image Dehazing." Sensors 23, no. 19 (September 27, 2023): 8102. http://dx.doi.org/10.3390/s23198102.

Full text
Abstract:
Aiming to solve the problem of color distortion and loss of detail information in most dehazing algorithms, an end-to-end image dehazing network based on multi-scale feature enhancement is proposed. Firstly, the feature extraction enhancement module is used to capture the detailed information of hazy images and expand the receptive field. Secondly, the channel attention mechanism and pixel attention mechanism of the feature fusion enhancement module are used to dynamically adjust the weights of different channels and pixels. Thirdly, the context enhancement module is used to enhance the context semantic information, suppress redundant information, and obtain a haze density image with higher detail. Finally, our method removes haze, preserves image color, and ensures image details. The proposed method achieved a PSNR score of 33.74, an SSIM score of 0.9843, and an LPIPS distance of 0.0040 on the SOTS-outdoor dataset. Compared with representative dehazing methods, it demonstrates better dehazing performance and proves the advantages of the proposed method on synthetic hazy images. Combined with dehazing experiments on real hazy images, the results show that our method can effectively improve dehazing performance while preserving more image details and achieving color fidelity.
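As an illustration of the channel-attention idea mentioned above (not the paper's actual module), per-channel weights can be derived by global average pooling followed by a normalizing gate; here a softmax stands in for the learned gating:

```python
import math

def channel_attention(feature_maps):
    """Toy channel attention: squeeze each channel by global average
    pooling, then turn the pooled values into per-channel weights via
    a softmax (a stand-in for a learned sigmoid gate), and rescale
    the channels by those weights."""
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in feature_maps]
    exps = [math.exp(p) for p in pooled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)]
```

A channel with a higher average activation receives a larger weight, so informative channels are emphasized over flat ones.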
6

Sun, Wei, Jianli Wu, and Haroon Rashid. "Image Enhancement Algorithm of Foggy Sky with Sky based on Sky Segmentation." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012011. http://dx.doi.org/10.1088/1742-6596/2560/1/012011.

Full text
Abstract:
Abstract: In recent years, image defogging has become a research hotspot in the field of digital image processing. Defogging enhancement can significantly improve the visual quality of foggy images and is also an important step for subsequent image processing. To overcome the limitations of traditional image dehazing, an image enhancement algorithm for foggy images containing sky regions, based on sky segmentation, is proposed. Firstly, sky-region recognition is performed using K-means clustering and sky feature analysis. Secondly, the rough transmittance is corrected according to the pixels of the sky region, and the dehazed image is obtained by dark channel prior dehazing with a guided filter. Finally, the dehazed image is equalized by bi-histogram equalization. This algorithm effectively avoids the color distortion and halo artifacts that the traditional dark channel prior algorithm produces in sky areas, and gives the restored image better global and local contrast.
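The dark channel prior step used above can be sketched in a few lines of pure Python. This is an illustrative version with the usual ω = 0.95 factor that keeps a trace of haze for realism; it is not the authors' implementation, and real code would add guided-filter refinement:

```python
def dark_channel(img, k=1):
    """Dark channel: per-pixel minimum over RGB, then a minimum over a
    (2k+1) x (2k+1) window. img is a 2-D list of (r, g, b) tuples in [0, 1]."""
    h, w = len(img), len(img[0])
    mins = [[min(px) for px in row] for row in img]
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dark[y][x] = min(mins[j][i]
                             for j in range(max(0, y - k), min(h, y + k + 1))
                             for i in range(max(0, x - k), min(w, x + k + 1)))
    return dark

def transmission(img, airlight, omega=0.95, k=1):
    """Standard DCP estimate: t(x) = 1 - omega * dark(I / A)."""
    norm = [[tuple(c / a for c, a in zip(px, airlight)) for px in row]
            for row in img]
    return [[1.0 - omega * v for v in row] for row in dark_channel(norm, k)]
```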
7

An, Shunmin, Xixia Huang, Linling Wang, Zhangjing Zheng, and Le Wang. "Unsupervised water scene dehazing network using multiple scattering model." PLOS ONE 16, no. 6 (June 28, 2021): e0253214. http://dx.doi.org/10.1371/journal.pone.0253214.

Full text
Abstract:
In water scenes, where hazy images are subject to multiple scattering and where ideal datasets are difficult to collect, many dehazing methods are not as effective as they could be. Therefore, an unsupervised water-scene dehazing network using an atmospheric multiple scattering model is proposed. Unlike previous image dehazing methods, our method uses an unsupervised neural network together with the atmospheric multiple scattering model, addressing both the difficulty of acquiring ideal datasets and the effect of multiple scattering on the image. To embed the atmospheric multiple scattering model into the unsupervised dehazing network, the network uses four branches to estimate the scene radiance layer, transmission map layer, blur kernel layer, and atmospheric light layer; a hazy image is then synthesized from the four output layers, the difference between the input and synthesized hazy images is minimized, and the output scene radiance layer is the final dehazed image. In addition, we constructed unsupervised loss functions applicable to image dehazing from prior knowledge, i.e., a color attenuation energy loss and a dark channel loss. The method has a wide range of applications: with haze being thick and variable in marine, river, and lake scenes, it can assist ship vision for target detection or forward road recognition in hazy conditions. Through extensive experiments on synthetic and real-world images, the proposed method recovers the details, structure, and texture of water images better than five advanced dehazing methods.
8

Yang, Yuanbo, Qunbo Lv, Baoyu Zhu, Xuefu Sui, Yu Zhang, and Zheng Tan. "One-Sided Unsupervised Image Dehazing Network Based on Feature Fusion and Multi-Scale Skip Connection." Applied Sciences 12, no. 23 (December 2, 2022): 12366. http://dx.doi.org/10.3390/app122312366.

Full text
Abstract:
Haze and mist caused by air quality, weather, and other factors can reduce the clarity and contrast of images captured by cameras, which limits the applications of automatic driving, satellite remote sensing, traffic monitoring, etc. Therefore, the study of image dehazing is of great significance. Most existing unsupervised image-dehazing algorithms rely on a priori knowledge and simplified atmospheric scattering models, but the physical causes of haze in the real world are complex, resulting in inaccurate atmospheric scattering models that affect the dehazing effect. Unsupervised generative adversarial networks can be used for image-dehazing algorithm research; however, due to the information inequality between haze and haze-free images, the existing bi-directional mapping domain translation model often used in unsupervised generative adversarial networks is not suitable for image-dehazing tasks, and it also does not make good use of extracted features, which results in distortion, loss of image details, and poor retention of image features in the haze-free images. To address these problems, this paper proposes an end-to-end one-sided unsupervised image-dehazing network based on a generative adversarial network that directly learns the mapping between haze and haze-free images. The proposed feature-fusion module and multi-scale skip connection based on residual network consider the loss of feature information caused by convolution operation and the fusion of different scale features, and achieve adaptive fusion between low-level features and high-level features, to better preserve the features of the original image. Meanwhile, multiple loss functions are used to train the network, where the adversarial loss ensures that the network generates more realistic images and the contrastive loss ensures a meaningful one-sided mapping from the haze image to the haze-free image, resulting in haze-free images with good quantitative metrics and visual effects. The experiments demonstrate that, compared with existing dehazing algorithms, our method achieved better quantitative metrics and better visual effects on both synthetic haze image datasets and real-world haze image datasets.
9

Han, Wensheng, Hong Zhu, Chenghui Qi, Jingsi Li, and Dengyin Zhang. "High-Resolution Representations Network for Single Image Dehazing." Sensors 22, no. 6 (March 15, 2022): 2257. http://dx.doi.org/10.3390/s22062257.

Full text
Abstract:
Deep-learning-based image dehazing methods have made great progress, but problems remain, such as inaccurate model parameter estimation and preserving spatial information in U-Net-based architectures. To address these problems, we propose an image dehazing network based on the high-resolution network, called DeHRNet. The high-resolution network was originally used for human pose estimation. In this paper, we make a simple yet effective modification to the network and apply it to image dehazing. We add a new stage to the original network that collects the feature-map representations of all branches by up-sampling to enhance the high-resolution representations, instead of taking only the feature maps of the high-resolution branches, which makes the restored clean images more natural. The final experimental results show that DeHRNet achieves superior performance over existing dehazing methods on synthesized and natural hazy images.
10

Tang, Yunqing, Yin Xiang, and Guangfeng Chen. "A Nighttime and Daytime Single-Image Dehazing Method." Applied Sciences 13, no. 1 (December 25, 2022): 255. http://dx.doi.org/10.3390/app13010255.

Full text
Abstract:
This study puts forward requirements for image dehazing methods: a wider range of usable scenarios, faster processing speeds, and higher image quality. Recent dehazing methods can only handle either daytime or nighttime hazy images. We propose an effective single-image technique, dubbed MF Dehazer, to solve both nighttime and daytime dehazing, developed following an in-depth analysis of the properties of nighttime hazy images. We also propose a mixed-filter method to estimate ambient illumination, from which both its color and light direction can be obtained. Dehazed nighttime images usually suffer from light-source diffusion, so we propose a method that compensates the transmission in high-light areas in order to improve the transmission of the light-source regions. Then, through regularization, the images obtain better contrast. The experimental results show that MF Dehazer outperforms recent dehazing methods and obtains images with higher contrast and clarity while retaining the original colors of the image.

Dissertations on the topic "IMAGE DEHAZING"

1

Pérez Soler, Javier. "Visibility in underwater robotics: Benchmarking and single image dehazing." Doctoral thesis, Universitat Jaume I, 2017. http://hdl.handle.net/10803/432778.

Full text
Abstract:
Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. The light transmission in the water medium degrades images making the interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing the impact of the underwater image degradation in commonly used vision algorithms through benchmarking. An online framework for underwater research that makes possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed capable of dehazing a degraded image in real time restoring the original colors of the image.
2

Karlsson, Jonas. "FPGA-Accelerated Dehazing by Visible and Near-infrared Image Fusion." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28322.

Full text
Abstract:
Fog and haze can have a dramatic impact on vision systems for land and sea vehicles. The impact of such conditions on infrared images is not as severe as for standard images. By fusing images from two cameras, one ordinary and one near-infrared, a complete dehazing system with colour preservation can be achieved. By applying several different algorithms to an image set and evaluating the results, the most suitable image fusion algorithm has been identified. Using an FPGA, a programmable integrated circuit, a crucial part of the algorithm has been implemented. It is capable of producing processed images 30 times faster than a laptop computer. This implementation lays the foundation of a real-time dehazing system and provides a significant part of the full solution. The results show that such a system can be accomplished with an FPGA.
3

Hultberg, Johanna. "Dehazing of Satellite Images." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148044.

Full text
Abstract:
The aim of this work is to find a method for removing haze from satellite imagery. This is done by taking two algorithms developed for images taken from the surface of the earth and adapting them for satellite images. The two algorithms are Single Image Haze Removal Using Dark Channel Prior by He et al. and Color Image Dehazing Using the Near-Infrared by Schaul et al. Both algorithms, altered to fit satellite images, as well as their combination, are applied to four sets of satellite images. The results are compared with each other and with the unaltered images. The evaluation is both qualitative, i.e. looking at the images, and quantitative, using three properties: colorfulness, contrast, and saturated pixels. Both the qualitative and the quantitative evaluation determined that using only the altered version of Dark Channel Prior gives the result with the least amount of haze and whose colors look most like reality.
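Two of the quantitative properties used in the evaluation, saturated pixels and contrast, are straightforward to compute. A sketch of plausible definitions (the thesis's exact metric formulas may differ):

```python
def saturated_fraction(gray, lo=0.02, hi=0.98):
    """Share of pixels clipped near black or white; overly aggressive
    dehazing tends to push this fraction up. gray is a 2-D list in [0, 1]."""
    px = [v for row in gray for v in row]
    return sum(1 for v in px if v <= lo or v >= hi) / len(px)

def rms_contrast(gray):
    """Root-mean-square contrast: the standard deviation of intensities."""
    px = [v for row in gray for v in row]
    mean = sum(px) / len(px)
    return (sum((v - mean) ** 2 for v in px) / len(px)) ** 0.5
```

For example, the 2x2 image [[0.0, 1.0], [0.5, 0.5]] has a saturated fraction of 0.5 and an RMS contrast of √0.125 ≈ 0.354.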
4

Han, Che, and 蘇哲漢. "Nighttime Image Dehazing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3d34wx.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
Academic year 102 (2013/14)
Image surveillance is the major means of security monitoring. Image sequences obtained through surveillance cameras are vital sources for tracking criminal incidents and causes of accidents, which happen mostly at night due to lack of light and obscured vision. The quality of the image plays a pivotal role in providing evidence and uncovering the truth. However, almost all image processing techniques focus on daylight environments, and seldom on compensating for artifacts rooted in artificial light sources or light diffusion at night. The low-light environment and color obscurity often invalidate further identification from the acquired surveillance video. The processing of images acquired at night cannot follow the paradigm of daylight image processing. Take image dehazing for example: the removal of haze depends on the derivation of scene depth. Dark Channel Prior (DCP), using the dark channel as a prior assumption, is often applied to derive scene depth from a single image. The farthest area in an image, having the highest light intensity, corresponds to the major source of lighting, daylight, while closer areas have lower light intensity; the depth within the scene is therefore linked with the amount of background light. This observation does not hold at night. The light does not come from the sun but from artificial sources, e.g., street lamps or automobile headlights, and the farthest area, often pitch-dark due to the lack of any light source, does not have the highest light intensity. To the best of our knowledge, no research has been reported regarding nighttime image dehazing and enhancement. In light of the demand for higher nighttime image quality, this paper proposes an image dehazing technique, incorporating a light diffusion model, artificial light sources, and segmentation of moving objects within the image sequence, to restore the nighttime scene back to the daytime one.
The paper, employing dehazing and image enhancement to remove light diffusion in a nighttime image, is composed of daytime background dehazing and nighttime image enhancement. The scene depth is derived by applying DCP to the daytime background image, producing the corresponding depth map. The haze within the scene is removed by the dehazing algorithm to restore the daytime background. The reflectance of objects in the background can be further derived by taking the daylight intensity into consideration. The position and overall intensity of the artificial light sources are first determined from the nighttime background image. The moving objects are then segmented from the image sequence. The reflectance of moving objects can be evaluated, given the depth map obtained from the daytime image and the position and overall intensity of the artificial light sources from the nighttime counterpart. Once the reflectance of the moving objects is determined, the background and moving objects can be fused together under proper daytime lighting.
5

Jyun-Guo Wang and 王峻國. "Image Dehazing Using Machine Learning Methods." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/9nq7c6.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Institute of Computer and Communication Engineering
Academic year 104 (2015/16)
In recent years, the image dehazing problem has been widely discussed. During photography in an outdoor environment, the medium in the air causes light attenuation and reduces image quality; these effects are especially obvious in a hazy environment. Reduced image quality results in a loss of information, which hinders image recognition systems from identifying objects in the image. Removing haze provides a reference for subsequent image processing for specific requirements. Notably, image dehazing technology is used to maintain image quality during preprocessing. This dissertation presents machine learning methods for image haze removal and consists of two major parts. In the first part, a fuzzy inference system (FIS) model is presented. Users of this model can customize designs to generate applicable fuzzy rules from expert knowledge or data; the number of fuzzy rules is fixed. In addition, the FIS model requires substantial amounts of data and expertise, and even when it is used to develop a fuzzy system, the image output of that system may suffer a loss of accuracy. Therefore, in the second part of this dissertation, a recurrent fuzzy cerebellar model articulation controller (RFCMAC) model with a self-evolving structure and online learning is presented to improve on the FIS model. The recurrent structure in an RFCMAC is formed with internal loops and internal feedback by feeding the rule firing strength of each rule to other rules and to itself. A Takagi-Sugeno-Kang (TSK) type is used in the consequent part of the RFCMAC. The online learning algorithm consists of structure and parameter learning: structure learning uses an entropy measure to determine the number of fuzzy rules, while parameter learning, based on back-propagation, adjusts the shape of the membership functions and the corresponding weights of the consequent part. This dissertation describes the proposed machine learning methods and their related algorithms, applies them to various image dehazing problems, and analyzes the results to demonstrate the effectiveness of the proposed methods.
6

Chung, Yun-Xin, and 鍾昀芯. "A Study in Image Dehazing Approaches." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/j2syw6.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Civil Engineering
Academic year 106 (2017/18)
Optical images are easily affected by poor weather such as rain, snow, fog, and haze, which deteriorates the quality of the input image. The process of enhancing an image by eliminating smog is called defogging. The purpose of this paper is to apply dark channel prior and histogram equalization methods to remove smog from images. The dark channel prior method is improved based on the distribution of fog to speed up the computation for huge remotely sensed imagery. Finally, we applied image quality assessment indices to evaluate and analyze the dehazing results. The experimental results show that the method removes smog from foggy images while maintaining contrast.
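The histogram equalization step mentioned above can be sketched on a flat list of 8-bit gray levels. This is the textbook CDF-remapping formulation, not the thesis's code:

```python
def equalize(gray_levels, n_levels=256):
    """Classic histogram equalization: map each gray level through the
    normalized cumulative distribution so the output levels spread over
    the full range. gray_levels is a flat list of ints in [0, n_levels)."""
    n = len(gray_levels)
    hist = [0] * n_levels
    for v in gray_levels:
        hist[v] += 1
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied bin
    span = max(1, n - cdf_min)                # guard against flat images
    return [round((cdf[v] - cdf_min) / span * (n_levels - 1))
            for v in gray_levels]
```

For instance, the low-contrast input [52, 52, 154, 154] is stretched to [0, 0, 255, 255].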
7

Chen, Ying-Ching, and 陳英璟. "Underwater image enhancement: Using Wavelength Compensation and Image Dehazing (WCID)." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/94271506864231404657.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
Academic year 99 (2010/11)
Light scattering and color shift are two major sources of distortion in underwater photography. Light scattering is caused by light incident on objects being reflected and deflected multiple times by particles present in the water before reaching the camera, which lowers the visibility and contrast of the captured image. Color shift corresponds to the varying degrees of attenuation encountered by light of different wavelengths traveling in water, rendering ambient underwater environments dominated by a bluish tone. This paper proposes a novel approach to enhance underwater images by a dehazing algorithm with wavelength compensation. Once the depth map, i.e., the distances between the objects and the camera, is estimated by the dark channel prior, the light intensities of foreground and background are compared to determine whether an artificial light source was employed during image capture. After compensating for the effect of artificial light, the haze caused by light scattering is removed by the dehazing algorithm. Next, the image scene depth is estimated according to the residual energy ratios of different wavelengths in the background. Based on the amount of attenuation corresponding to each light wavelength, color shift compensation is conducted to restore color balance. A super-resolution image can offer more detail, which is important and necessary for low-resolution underwater images. This paper combines Gradient-Based Super-Resolution and Iterative Back-Projection (IBP) into a proposed Cocktail Super-Resolution algorithm, with a bilateral filter to remove the chessboard and ringing effects along image edges and improve image quality. Underwater videos of various resolutions downloaded from YouTube were processed with WCID, histogram equalization, and a traditional dehazing algorithm, respectively.
Test results demonstrate that videos with significantly enhanced visibility and superior color fidelity are obtained by the proposed WCID.
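The wavelength-compensation idea, dividing each color channel by its residual energy ratio raised to the water-path depth, can be sketched as follows. The per-meter ratios here are illustrative placeholders, not measured values from the thesis:

```python
def compensate_color(rgb, depth_m, ratios=(0.83, 0.95, 0.97)):
    """Undo wavelength-dependent attenuation: light that keeps a residual
    energy ratio r per meter retains r**d of its energy after d meters,
    so each channel is divided by r**d (clamped to 1.0). Red has the
    lowest ratio, so it is boosted the most -- countering the bluish cast."""
    return tuple(min(1.0, c / (r ** depth_m)) for c, r in zip(rgb, ratios))
```

For a pixel (0.2, 0.5, 0.6) at an assumed 5 m water path, the red channel receives the largest boost, shifting the color balance back toward neutral.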
8

Liao, Jyun-Jia, and 廖俊嘉. "A New Transmission Map for Image Dehazing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/sz947w.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Institute of Computer and Communication Engineering
Academic year 102 (2013/14)
Visibility of outdoor images captured in inclement weather, such as haze, fog, and mist, is usually degraded by absorption and scattering caused by atmospheric particles. Such images may significantly degrade the performance of intelligent transportation systems relying on visual feature extraction, such as traffic status detection, traffic sign recognition, vehicular traffic tracking, and so on. Recently, haze removal techniques for these applications have attracted increasing attention as a way of improving the visibility of hazy images and making intelligent transportation systems more reliable and efficient. However, estimating haze from a single hazy image of an actual scene is difficult for visibility restoration methods. To solve this problem, we propose a haze removal method that combines two main modules: a haze thickness estimation module and a visibility restoration module. The haze thickness estimation module is based on bi-gamma modification to effectively estimate haze for the transmission map. Subsequently, the visibility restoration module utilizes the transmission map to achieve haze removal. The experimental results demonstrate that the proposed haze removal method restores visibility in single hazy images more effectively than other state-of-the-art methods.
9

Jui-Chiang Wen and 溫瑞強. "Single image dehazing based on vector quantization." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/69190335931091186410.

Abstract:
Master's thesis, National Cheng Kung University, Institute of Computer and Communication Engineering, 103
The proposed method is based on McCartney's optical haze model and uses a novel approach to estimate transmission. According to the literature, the major problem in model-based methods is estimating the transmission. This study trains codebooks from a large number of haze-free and hazy images with the LBG algorithm; the codebooks are then used to estimate transmission by matching. To speed up the process, the input image is down-sampled before being refined with a guided image filter, which not only reduces processing time but also preserves the quality of the restored images. RGB, dark channel, and contrast values are used as features when training the codebooks and estimating transmission. The transmission can be selected accurately because the dark channel and contrast features are complementary. The experimental results show that haze-free high-intensity objects avoid over-dehazing and that the foreground of the restored images remains more natural. The details of the recovered images are also clearer.
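The codebook lookup at the heart of this approach can be illustrated generically: each patch's feature vector is matched to its nearest trained codeword, and the transmission stored with that codeword is returned. The feature layout and codebook below are hypothetical stand-ins for illustration, not the thesis's actual LBG training output:

```python
import numpy as np

def match_transmission(features, codebook, codeword_t):
    """Vector-quantization lookup: for each feature row in (n, d), find
    the nearest codeword in (k, d) by Euclidean distance and return the
    transmission value from (k,) stored with it."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return codeword_t[np.argmin(dists, axis=1)]
```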
10

Huang, Ren-Jun, and 黃任駿. "Single Image Dehazing Algorithm with Two-objective Optimization." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/33776514616653182214.

Abstract:
Master's thesis, Chaoyang University of Technology, Department of Information Engineering, 104
When recording images with a camera under different climate conditions, image quality is affected by weather such as smoke, haze, rain, and snow. Among these, haze is a frequent atmospheric phenomenon in which dust, smoke, and other dry particles obscure the clarity of the sky. To improve the poor quality of images with low visibility due to haze, researchers have proposed various haze removal methods. One of them, the Proposed Dehazing Algorithm (PDA) developed by Hsieh, has good dehazing performance. However, the resulting image after dehazing can change the mood of the scene and give a poor visual impression under certain circumstances. Until now, most dehazing performance measures have relied on subjective visual assessment of the pros and cons. To overcome this drawback, we propose the Proposed Optimization Dehazing Algorithm (PODA) with two-objective evaluation, which achieves good dehazing performance while maintaining mood retention. In addition, we propose an evaluation method to analyze the performance of dehazed images. The developed PODA has been compared with other dehazing methods on various examples. Simulation results indicate that the PODA outperforms these competing methods.

Book chapters on the topic "IMAGE DEHAZING"

1

Tian, Jiandong. "Single-Image Dehazing." In All Weather Robot Vision, 229–70. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_8.

2

Zhang, Shengdong, Jian Yao, and Edel B. Garcia. "Single Image Dehazing via Image Generating." In Image and Video Technology, 123–36. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75786-5_11.

3

He, Jiaxi, Cishen Zhang, and Ifat-Al Baqee. "Image Dehazing Using Regularized Optimization." In Advances in Visual Computing, 87–96. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14249-4_9.

4

He, Renjie, Jiaqi Yang, Xintao Guo, and Zhongke Shi. "Variational Regularized Single Image Dehazing." In Pattern Recognition and Computer Vision, 746–57. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60633-6_62.

5

Wang, Nian, Aihua Li, Zhigao Cui, and Yanzhao Su. "Development of Image Dehazing Algorithm." In Application of Intelligent Systems in Multi-modal Information Analytics, 461–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74814-2_65.

6

Shang, Dehao, Tingting Wang, and Faming Fang. "Single Image Dehazing Using Hölder Coefficient." In Knowledge Science, Engineering and Management, 314–24. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47650-6_25.

7

Hu, Bin, Zhuangzhuang Yue, Yuehua Li, Lili Zhao, and Shi Cheng. "Single Image Dehazing Using Frequency Attention." In Neural Information Processing, 253–62. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-30111-7_22.

8

Galdran, Adrian, Javier Vazquez-Corral, David Pardo, and Marcelo Bertalmío. "A Variational Framework for Single Image Dehazing." In Computer Vision - ECCV 2014 Workshops, 259–70. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16199-0_18.

9

Gupta, Bhupendra, and Shivani A. Mehta. "Dehazing from a Single Remote Sensing Image." In ICT Infrastructure and Computing, 409–18. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-5331-6_42.

10

Ye, Tian, Yunchen Zhang, Mingchao Jiang, Liang Chen, Yun Liu, Sixiang Chen, and Erkang Chen. "Perceiving and Modeling Density for Image Dehazing." In Lecture Notes in Computer Science, 130–45. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19800-7_8.


Conference papers on the topic "IMAGE DEHAZING"

1

Zhu, Hongyuan, Xi Peng, Vijay Chandrasekhar, Liyuan Li, and Joo-Hwee Lim. "DehazeGAN: When Image Dehazing Meets Differential Programming." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/172.

Abstract:
Single image dehazing has been a classic topic in computer vision for years. Motivated by the atmospheric scattering model, satisfactory single image dehazing hinges on estimating two physical parameters: the global atmospheric light and the transmission coefficient. Most existing methods employ a two-step pipeline to estimate these two parameters with heuristics, which accumulates errors and compromises dehazing quality. Inspired by differentiable programming, we re-formulate the atmospheric scattering model into a novel generative adversarial network (DehazeGAN). This reformulation and adversarial learning allow the two parameters to be learned simultaneously and automatically from data by optimizing the final dehazing performance, so that clean images with faithful color and structures are directly produced. Moreover, our reformulation also greatly improves the GAN's interpretability and quality for single image dehazing. To the best of our knowledge, our method is one of the first works to explore the connection among generative adversarial models, image dehazing, and differentiable programming, advancing the theory and applications of these areas. Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality.
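The atmospheric scattering model referenced here is I(x) = J(x)·t(x) + A·(1 − t(x)); however the atmospheric light A and transmission t are obtained (two-step heuristics or end-to-end learning), the clean radiance J is recovered by inverting it. A minimal numpy sketch, with the customary lower bound on t to avoid division blow-up in heavily hazed regions:

```python
import numpy as np

def recover_scene(hazy, airlight, transmission, t_min=0.1):
    """Invert I = J*t + A*(1 - t)  =>  J = (I - A) / max(t, t_min) + A."""
    t = np.maximum(transmission, t_min)[..., np.newaxis]
    return (hazy - airlight) / t + airlight
```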
2

Yang, Aiping, Haixin Wang, Zhong Ji, Yanwei Pang, and Ling Shao. "Dual-Path in Dual-Path Network for Single Image Dehazing." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/643.

Abstract:
Recently, deep learning-based single image dehazing has become a popular approach. However, existing dehazing approaches operate directly on the original hazy image, which easily results in image blurring and noise amplification. To address this issue, the paper proposes a DPDP-Net (Dual-Path in Dual-Path network) framework employing a hierarchical dual-path network. Specifically, the first-level dual-path network consists of a Dehazing Network and a Denoising Network, where the Dehazing Network is responsible for haze removal in the structural layer and the Denoising Network deals with noise in the textural layer. The second-level dual-path network lies within the Dehazing Network, which contains an AL-Net (Atmospheric Light Network) and a TM-Net (Transmission Map Network). Concretely, the AL-Net learns the non-uniform atmospheric light, while the TM-Net learns the transmission map that reflects the visibility of the image. The final dehazed image is obtained by nonlinearly fusing the outputs of the Denoising Network and the Dehazing Network. Extensive experiments demonstrate that our proposed DPDP-Net achieves competitive performance against state-of-the-art methods on both synthetic and real-world images.
3

Cheng, De, Yan Li, Dingwen Zhang, Nannan Wang, Xinbo Gao, and Jiande Sun. "Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/119.

Abstract:
Single image dehazing, as a fundamental low-level vision task, is essential for the development of robust intelligent surveillance systems. In this paper, we make an early effort to consider dehazing robustness under varying haze density, a realistic yet under-studied problem in the research field of single image dehazing. To properly address this problem, we propose a novel density-variational learning framework to improve the robustness of the image dehazing model, assisted by a variety of negative hazy images, to better deal with various complex hazy scenarios. Specifically, the dehazing network is optimized under a consistency-regularized framework with the proposed Contrast-Assisted Reconstruction Loss (CARL). The CARL fully exploits the negative information to facilitate the traditional positive-oriented dehazing objective function by squeezing the dehazed image toward its clean target from different directions. Meanwhile, the consistency regularization enforces consistent outputs given multi-level hazy images, thus improving model robustness. Extensive experimental results on two synthetic and three real-world datasets demonstrate that our method significantly surpasses the state-of-the-art approaches.
4

Liang, Yudong, Bin Wang, Wangmeng Zuo, Jiaying Liu, and Wenqi Ren. "Self-supervised Learning and Adaptation for Single Image Dehazing." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/159.

Abstract:
Existing deep image dehazing methods usually depend on supervised learning with a large number of hazy-clean image pairs, which are expensive or difficult to collect. Moreover, the dehazing performance of the learned model may deteriorate significantly when the training hazy-clean image pairs are insufficient and differ from the real hazy images encountered in applications. In this paper, we show that exploiting a large-scale training set and adapting to real hazy images are two critical issues in learning effective deep dehazing models. Under the depth guidance estimated by a well-trained depth estimation network, we leverage the conventional atmospheric scattering model to generate massive hazy-clean image pairs for self-supervised pre-training of the dehazing network. Furthermore, self-supervised adaptation is presented to adapt the pre-trained network to real hazy images. A learning-without-forgetting strategy is also deployed in self-supervised adaptation by combining self-supervision and model adaptation via contrastive learning. Experiments show that our proposed method performs favorably against state-of-the-art methods and is quite efficient, handling a 4K image in 23 ms. The codes are available at https://github.com/DongLiangSXU/SLAdehazing.
5

Gui, Jie, Xiaofeng Cong, Yuan Cao, Wenqi Ren, Jun Zhang, Jing Zhang, and Dacheng Tao. "A Comprehensive Survey on Image Dehazing Based on Deep Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/604.

Abstract:
The presence of haze significantly reduces the quality of images. Researchers have designed a variety of algorithms for image dehazing (ID) to restore the quality of hazy images. However, few studies summarize deep learning (DL) based dehazing technologies. In this paper, we conduct a comprehensive survey of recently proposed dehazing methods. Firstly, we summarize the commonly used datasets, loss functions, and evaluation metrics. Secondly, we group existing ID research into two major categories: supervised ID and unsupervised ID. The core ideas of various influential dehazing models are introduced. Finally, open issues for future research on ID are pointed out.
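Among the evaluation metrics such surveys catalogue, PSNR is the most common full-reference measure; a straightforward implementation for images scaled to [0, 1] (set `peak=255` for 8-bit images):

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher means the dehazed output is closer to the haze-free reference."""
    mse = np.mean((np.asarray(reference, dtype=float) - restored) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```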
6

Fattal, Raanan. "Single image dehazing." In ACM SIGGRAPH 2008 papers. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1399504.1360671.

7

Voronin, Sergei, Vitaly Kober, Artyom Makovetskii, and Aleksei Voronin. "Image dehazing using spatially displaced images." In Applications of Digital Image Processing XLII, edited by Andrew G. Tescher and Touradj Ebrahimi. SPIE, 2019. http://dx.doi.org/10.1117/12.2529684.

8

Berman, Dana, Tali Treibitz, and Shai Avidan. "Non-local Image Dehazing." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.185.

9

Suárez, Patricia L., Dario Carpio, Angel D. Sappa, and Henry O. Velesaca. "Transformer based Image Dehazing." In 2022 16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE, 2022. http://dx.doi.org/10.1109/sitis57111.2022.00037.

10

Ali, Usman, and Waqas Tariq Toor. "Mutually Guided Image Dehazing." In 2022 International Conference on Emerging Technologies in Electronics, Computing and Communication (ICETECC). IEEE, 2022. http://dx.doi.org/10.1109/icetecc56662.2022.10069696.
