Journal articles on the topic 'VISIBLE IMAGE'

Consult the top 50 journal articles for your research on the topic 'VISIBLE IMAGE'.

1

Uddin, Mohammad Shahab, Chiman Kwan, and Jiang Li. "MWIRGAN: Unsupervised Visible-to-MWIR Image Translation with Generative Adversarial Network." Electronics 12, no. 4 (February 20, 2023): 1039. http://dx.doi.org/10.3390/electronics12041039.

Abstract:
Unsupervised image-to-image translation techniques have been used in many applications, including visible-to-Long-Wave Infrared (visible-to-LWIR) image translation, but very few papers have explored visible-to-Mid-Wave Infrared (visible-to-MWIR) image translation. In this paper, we investigated unsupervised visible-to-MWIR image translation using generative adversarial networks (GANs). We proposed a new model named MWIRGAN for visible-to-MWIR image translation in a fully unsupervised manner. We utilized a perceptual loss to preserve the shapes and locations of objects during translation. The experimental results showed that MWIRGAN was capable of visible-to-MWIR image translation while preserving object shapes with proper enhancement in the translated images, and it outperformed several competing state-of-the-art models. In addition, we customized the proposed model to convert images generated by a commercial game engine into MWIR images. The quantitative results showed that our proposed method could effectively generate MWIR images from game-engine-generated images, greatly benefiting MWIR data augmentation.
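
The perceptual loss is the key ingredient named in this abstract. A minimal PyTorch sketch of the general idea, assuming a frozen ImageNet-pretrained VGG16 as the feature extractor; the paper's exact network and layer choice are not specified here:

```python
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """L1 distance between deep VGG features of the translated image and
    the source image, encouraging shape and location preservation."""
    def __init__(self, cut=16):  # truncate VGG16.features at an illustrative depth
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:cut].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the extractor stays frozen
        # single-channel IR tensors can be repeated to three channels
        # before calling forward, since VGG expects RGB input

    def forward(self, translated, source):
        return nn.functional.l1_loss(self.features(translated),
                                     self.features(source))
```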
2

Dong, Yumin, Zhengquan Chen, Ziyi Li, and Feng Gao. "A Multi-Branch Multi-Scale Deep Learning Image Fusion Algorithm Based on DenseNet." Applied Sciences 12, no. 21 (October 30, 2022): 10989. http://dx.doi.org/10.3390/app122110989.

Abstract:
Infrared images have good resistance to environmental interference and capture hot-target information well, but they lack rich texture detail and have poor contrast. Visible images have clear, detailed texture information, but their imaging process depends heavily on the environment, and environmental conditions determine the quality of the visible image. This paper presents an infrared and visible image fusion algorithm based on deep learning. Two identical feature extractors extract features of visible and infrared images at different scales, these features are fused through specific fusion methods, and a feature restorer maps the fused features back to an image, compensating for the respective deficiencies of the infrared and visible sources. The method is tested on infrared-visible, multi-focus, and other datasets, and several traditional image fusion algorithms are compared with current advanced ones. The experimental results show that the proposed image fusion method keeps more feature information of the source images in the fused image and achieves excellent results on several image evaluation indexes.
3

Zhang, Yongxin, Deguang Li, and WenPeng Zhu. "Infrared and Visible Image Fusion with Hybrid Image Filtering." Mathematical Problems in Engineering 2020 (July 29, 2020): 1–17. http://dx.doi.org/10.1155/2020/1757214.

Abstract:
Image fusion is an important technique aiming to generate a composite image from multiple images of the same scene. Infrared and visible images can provide information about the same scene from different aspects, which is useful for target recognition. However, existing fusion methods cannot preserve thermal radiation and appearance information simultaneously. Thus, we propose an infrared and visible image fusion method based on hybrid image filtering. We formulate the fusion problem with a divide-and-conquer strategy. A Gaussian filter decomposes the source images into base layers and detail layers. An improved co-occurrence filter fuses the detail layers to preserve the thermal radiation of the source images. A guided filter fuses the base layers to retain the background appearance information of the source images. Superposition of the fused base layer and fused detail layers generates the final fused image. Subjective visual and objective quantitative comparisons with other fusion algorithms demonstrate the better performance of the proposed method.
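
A minimal two-scale sketch of the decomposition-and-fusion pipeline described above. The guided filter comes from opencv-contrib (cv2.ximgproc); the paper's improved co-occurrence filter has no off-the-shelf implementation, so a max-absolute rule stands in for detail-layer fusion:

```python
import cv2
import numpy as np

def hybrid_filter_fusion(ir, vis):
    """Gaussian filter splits each source into base + detail layers;
    the layers are fused separately and then superposed."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)
    base_ir = cv2.GaussianBlur(ir, (31, 31), 10)
    base_vis = cv2.GaussianBlur(vis, (31, 31), 10)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # Detail layers: max-absolute selection (stand-in for the paper's
    # improved co-occurrence filter).
    det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    # Base layers: average, then smooth with a guided filter so the
    # background appearance of the visible image is retained.
    base_f = cv2.ximgproc.guidedFilter(vis, (base_ir + base_vis) / 2, 8, 100.0)
    return np.clip(base_f + det_f, 0, 255).astype(np.uint8)
```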
4

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion." Chemosensors 10, no. 4 (March 25, 2022): 124. http://dx.doi.org/10.3390/chemosensors10040124.

Abstract:
This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images in the same space without time shifting, and to synthesize the captured images. The proposed method synthesizes visible and near-infrared images using the contourlet transform, principal component analysis, and iCAM06, while the blending method uses color information from the visible image and detailed information from the infrared image. The contourlet transform can decompose an image into directional subimages, making it better at capturing detail than other decomposition algorithms. The global tone information is enhanced by iCAM06, which is used for high-dynamic-range imaging. The blended results show a clear appearance, combining the compressed tone information of the visible image with the details of the infrared image.
5

Liu, Zheng, Su Mei Cui, He Yin, and Yu Chi Lin. "Comparative Analysis of Image Measurement Accuracy in High Temperature Based on Visible and Infrared Vision." Applied Mechanics and Materials 300-301 (February 2013): 1681–86. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1681.

Abstract:
Image measurement is a common, non-contact dimensional measurement method. However, because of light deflection, visible-light imaging is strongly affected, which greatly reduces measurement accuracy. Various factors affecting visual measurement at high temperature are analyzed by applying Planck's theory. Then, by means of light dispersion theory, the measurement errors of visible and infrared images at high temperature caused by light deviation are comparatively analyzed. Imaging errors of visible and infrared images are quantified experimentally. The results indicate that, at the same imaging resolution, the relative error of the visible-light image is 3.846 times larger than that of the infrared image at 900 °C. Therefore, infrared image measurement has higher accuracy than visible-light image measurement in high-temperature circumstances.
6

Huang, Hui, Linlu Dong, Zhishuang Xue, Xiaofang Liu, and Caijian Hua. "Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement." PLOS ONE 16, no. 2 (February 19, 2021): e0245563. http://dx.doi.org/10.1371/journal.pone.0245563.

Abstract:
To address the fact that existing visible and infrared image fusion algorithms focus only on highlighting infrared targets, neglect the rendering of image details, and fail to account for the characteristics of both modalities, this paper proposes an image enhancement fusion algorithm combining the Karhunen-Loeve transform and Laplacian pyramid fusion. The detail layer of the source image is obtained by anisotropic diffusion to capture richer texture information. The infrared images are enhanced with adaptive histogram partitioning and brightness correction to highlight thermal radiation targets. A novel power-function enhancement algorithm that simulates illumination is proposed for visible images to improve their contrast and facilitate human observation. To improve fusion quality, the source image and the enhanced images are transformed by the Karhunen-Loeve transform to form new visible and infrared images. Laplacian pyramid fusion is performed on the new visible and infrared images and superimposed with the detail-layer images to obtain the fusion result. Experimental results on public datasets show that the proposed method is superior to several representative image fusion algorithms in subjective visual quality and performs well on eight objective evaluation indicators.
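
The Laplacian pyramid fusion backbone mentioned above can be sketched compactly; the Karhunen-Loeve transform and enhancement stages are omitted, and the band-wise rules shown are common illustrative choices rather than the paper's:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Return [detail_0, ..., detail_{n-1}, low_freq_top]."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # band-pass detail at this scale
        cur = down
    pyr.append(cur)                   # low-frequency residual
    return pyr

def pyramid_fuse(ir, vis, levels=4):
    p_ir = laplacian_pyramid(ir, levels)
    p_vis = laplacian_pyramid(vis, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)      # max-abs details
             for a, b in zip(p_ir[:-1], p_vis[:-1])]
    fused.append((p_ir[-1] + p_vis[-1]) / 2)             # average the top
    out = fused[-1]
    for band in reversed(fused[:-1]):                    # collapse pyramid
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```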
7

Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (January 11, 2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can produce combined images with salient hidden targets and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by dedicated loss functions. The generator, with residual blocks and skip connections, extracts deep features of source image pairs and generates an elementary fused image carrying infrared thermal radiation information and visible texture information; more details from the visible images are added to the final images through the discriminator. It is unnecessary to design activity-level measurements and fusion rules manually, as these are implemented automatically. In addition, the method involves no complicated multi-scale transforms, so computational cost and complexity are reduced. Experimental results demonstrate that the proposed method produces desirable fused images, achieving better performance in objective assessment and visual quality compared with nine representative infrared and visible image fusion methods.
8

Niu, Yifeng, Shengtao Xu, Lizhen Wu, and Weidong Hu. "Airborne Infrared and Visible Image Fusion for Target Perception Based on Target Region Segmentation and Discrete Wavelet Transform." Mathematical Problems in Engineering 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/275138.

Abstract:
Infrared and visible image fusion is an important precondition for realizing target perception in unmanned aerial vehicles (UAVs), on the basis of which a UAV can perform various assigned missions. Texture and color information in visible images is abundant, while target information in infrared images is more prominent. Conventional fusion methods are mostly based on region segmentation, and as a result, a fused image suited to target recognition cannot actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target-region segmentation and the discrete wavelet transform (DWT) is proposed, which can gain more target information and preserve more background information. Fusion experiments cover the conditions that the target is unmoving and observable in both visible and infrared images, that targets are moving and observable in both, and that the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
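
The DWT fusion backbone is a few lines with PyWavelets. The paper's target-region segmentation, which would switch rules inside target regions, is omitted, and the band rules here are illustrative:

```python
import numpy as np
import pywt

def dwt_fuse(ir, vis, wavelet="db2", level=2):
    """Average the approximation band, take max-absolute detail coefficients."""
    c_ir = pywt.wavedec2(ir.astype(np.float32), wavelet, level=level)
    c_vis = pywt.wavedec2(vis.astype(np.float32), wavelet, level=level)
    fused = [(c_ir[0] + c_vis[0]) / 2]            # low-frequency approximation
    for bands_ir, bands_vis in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands_ir, bands_vis)))
    return pywt.waverec2(fused, wavelet)
```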
9

Yang, Shihao, Min Sun, Xiayin Lou, Hanjun Yang, and Dong Liu. "Nighttime Thermal Infrared Image Translation Integrating Visible Images." Remote Sensing 16, no. 4 (February 13, 2024): 666. http://dx.doi.org/10.3390/rs16040666.

Abstract:
Nighttime Thermal InfraRed (NTIR) image colorization, also known as the translation of NTIR images into Daytime Color Visible (DCV) images, can facilitate human and intelligent-system perception of nighttime scenes under weak lighting conditions. End-to-end neural networks have been used to learn the mapping relationship between the temperature and color domains and to translate NTIR images with one channel into DCV images with three channels. However, this mapping relationship is an ill-posed problem with multiple solutions in the absence of constraints, resulting in blurred edges, color disorder, and semantic errors. To solve this problem, an NTIR2DCV method comprising two steps is proposed: first, Nighttime Color Visible (NCV) images are fused with NTIR images based on an Illumination-Aware, Multilevel Decomposition Latent Low-Rank Representation (IA-MDLatLRR) method, which considers the differences in illumination conditions during image fusion and adjusts the fusion strategy of MDLatLRR accordingly to suppress the adverse effects of nighttime lights; second, the Nighttime Fused (NF) image is translated into a DCV image with a HyperDimensional Computing Generative Adversarial Network (HDC-GAN), which ensures feature-level semantic consistency between the source image (NF image) and the translated image (DCV image) without creating semantic label maps. Extensive comparative experiments and evaluation metrics show that the proposed algorithms perform better than other state-of-the-art (SOTA) image fusion and translation methods, with FID and KID decreasing by 14.1 and 18.9, respectively.
10

Batchuluun, Ganbayar, Se Hyun Nam, and Kang Ryoung Park. "Deep Learning-Based Plant Classification Using Nonaligned Thermal and Visible Light Images." Mathematics 10, no. 21 (November 1, 2022): 4053. http://dx.doi.org/10.3390/math10214053.

Abstract:
There have been various studies conducted on plant images. Machine learning algorithms are usually used in studies based on visible-light images, whereas in thermal-image-based studies, acquired thermal images tend to be analyzed by naked-eye visual examination. However, visible-light cameras are sensitive to light and cannot be used in environments with low illumination. Although thermal cameras are not susceptible to these drawbacks, they are sensitive to atmospheric temperature and humidity. Moreover, previous thermal-camera-based studies relied on time-consuming manual analyses. Therefore, in this study, we simultaneously used thermal images and corresponding visible-light images of plants to solve these problems. The proposed network extracts features from each thermal image and its corresponding visible-light image through residual-block-based branch networks and combines the features to increase the accuracy of multiclass classification. Additionally, a new database was built in this study by acquiring thermal images and corresponding visible-light images of various plants.
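
The described architecture, two residual branches whose features are combined for multiclass classification, can be sketched as follows; ResNet-18 stands in for the paper's residual-block branch networks, and the class count is a placeholder:

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchPlantNet(nn.Module):
    """Two residual branches (thermal, visible); their features are
    concatenated and classified by a shared linear head."""
    def __init__(self, num_classes):
        super().__init__()
        self.thermal = models.resnet18(weights=None)
        self.visible = models.resnet18(weights=None)
        feat = self.thermal.fc.in_features          # 512 for ResNet-18
        self.thermal.fc = nn.Identity()             # keep pooled features only
        self.visible.fc = nn.Identity()
        self.head = nn.Linear(2 * feat, num_classes)

    def forward(self, thermal_img, visible_img):    # both: (B, 3, H, W)
        f = torch.cat([self.thermal(thermal_img),
                       self.visible(visible_img)], dim=1)
        return self.head(f)
```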
11

Zhang, Yugui, Bo Zhai, Gang Wang, and Jianchu Lin. "Pedestrian Detection Method Based on Two-Stage Fusion of Visible Light Image and Thermal Infrared Image." Electronics 12, no. 14 (July 21, 2023): 3171. http://dx.doi.org/10.3390/electronics12143171.

Abstract:
Pedestrian detection has important research value and practical significance. It has been used in intelligent monitoring, intelligent transportation, intelligent therapy, and automatic driving. However, in the pixel-level and feature-level fusion of visible-light images and thermal infrared images under daytime shadows or low illumination at night in actual surveillance, missed and false pedestrian detections frequently occur. To solve this problem, a pedestrian detection algorithm based on two-stage fusion of visible-light images and thermal infrared images is proposed. In view of the difference and complementarity of visible-light and thermal infrared images, the two types of images undergo pixel-level and feature-level fusion according to the prevailing daytime conditions. In the pixel-level fusion stage, the thermal infrared image, after brightness enhancement, is fused with the visible image; the resulting pixel-level fusion image contains the information critical for accurate pedestrian detection. In the feature-level fusion stage, during the daytime the pixel-level fusion image is fused with the visible-light image, while under low illumination at night it is fused with the thermal infrared image. According to the experimental results, the proposed algorithm accurately detects pedestrians under daytime shadows and low illumination at night, improving detection accuracy and reducing the miss rate and false-detection rate.
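
A toy sketch of the two stages on registered grayscale images; histogram equalisation stands in for the brightness enhancement, whose exact form the abstract does not give, and the blend weight is illustrative:

```python
import cv2

def pixel_level_fuse(thermal, visible, alpha=0.5):
    """Stage 1: brightness-enhance the thermal image (histogram
    equalisation as a stand-in), then blend with the visible image.
    Inputs: registered single-channel uint8 images of equal size."""
    enhanced = cv2.equalizeHist(thermal)
    return cv2.addWeighted(enhanced, alpha, visible, 1 - alpha, 0)

def second_stage_pair(fused, visible, thermal, daytime):
    """Stage 2 chooses the partner image by illumination condition:
    visible by day, thermal under low illumination at night."""
    return (fused, visible) if daytime else (fused, thermal)
```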
12

Zhang, Zili, Yan Tian, Jianxiang Li, and Yiping Xu. "Unsupervised Remote Sensing Image Super-Resolution Guided by Visible Images." Remote Sensing 14, no. 6 (March 21, 2022): 1513. http://dx.doi.org/10.3390/rs14061513.

Abstract:
Remote sensing images are widely used in many applications. However, limited by the sensors, it is difficult to obtain high-resolution (HR) remote sensing images. In this paper, we propose a novel unsupervised cross-domain super-resolution method devoted to reconstructing a low-resolution (LR) remote sensing image guided by an unpaired HR visible natural image. To this end, an unsupervised visible-image-guided remote sensing image super-resolution network (UVRSR) is built. The network is divided into two learnable branches: a visible-image-guided branch (VIG) and a remote-sensing-image-guided branch (RIG). As HR visible images can provide rich textures and sufficient high-frequency information, VIG treats them as targets and makes full use of their advantages in reconstruction. Specifically, we first use a CycleGAN to map LR visible natural images to the remote sensing domain; then we apply an SR network to upscale these simulated remote-sensing-domain LR images. However, the domain gap between SR remote sensing images and HR visible targets is large. To enforce domain consistency, we propose a novel domain-ruled discriminator in the reconstruction. Furthermore, inspired by the zero-shot super-resolution network (ZSSR), which exploits the internal information of remote sensing images, we add a remote-sensing-domain inner study to train the SR network in RIG. Extensive experiments show that UVRSR achieves superior results against state-of-the-art unpaired and remote sensing SR methods on several challenging remote sensing image datasets.
13

Lee, Min-Han, Young-Ho Go, Seung-Hwan Lee, and Sung-Hak Lee. "Low-Light Image Enhancement Using CycleGAN-Based Near-Infrared Image Generation and Fusion." Mathematics 12, no. 24 (December 22, 2024): 4028. https://doi.org/10.3390/math12244028.

Abstract:
Image visibility is often degraded under challenging conditions such as low light, backlighting, and inadequate contrast. To mitigate these issues, techniques like histogram equalization, high-dynamic-range (HDR) tone mapping, and near-infrared (NIR)-visible image fusion are widely employed. However, these methods have inherent drawbacks: histogram equalization frequently causes oversaturation and detail loss, while visible-NIR fusion depends on acquiring well-aligned image pairs, which is complex and error-prone. The proposed algorithm, built on complementary cycle-consistent generative adversarial network (CycleGAN) training with visible and NIR images, leverages CycleGAN to generate fake NIR images that blend the characteristics of visible and NIR images. This approach provides tone compression and preserves fine details, effectively addressing the limitations of traditional methods. Experimental results demonstrate that the proposed method outperforms conventional algorithms, delivering superior quality and detail retention. This advancement holds substantial promise for applications where dependable image visibility is critical, such as autonomous driving and closed-circuit television (CCTV) surveillance systems.
14

Lee, Ji-Min, Young-Eun An, EunSang Bak, and Sungbum Pan. "Improvement of Negative Emotion Recognition in Visible Images Enhanced by Thermal Imaging." Sustainability 14, no. 22 (November 16, 2022): 15200. http://dx.doi.org/10.3390/su142215200.

Abstract:
Facial expressions help in understanding the intentions of others, as they are an essential means of communication that reveals human emotions. Recently, thermal imaging has been playing a complementary role in emotion recognition and is considered an alternative that overcomes the drawbacks of visible imaging. Notably, fear is relatively often misrecognized among negative emotions in visible imaging. This study aims to improve the recognition performance for fear by using visible and thermal images acquired simultaneously. When fear was not recognized in a visible image, we analyzed the causes of misrecognition and thus identified the conditions under which the visible image should be replaced with the thermal image. This replacement improved emotion recognition performance by 4.54% on average compared with using only visible images. Finally, we confirmed that the thermal image effectively compensates for the visible image's shortcomings.
15

Liu, Xiaomin, Jun-Bao Li, and Jeng-Shyang Pan. "Feature Point Matching Based on Distinct Wavelength Phase Congruency and Log-Gabor Filters in Infrared and Visible Images." Sensors 19, no. 19 (September 29, 2019): 4244. http://dx.doi.org/10.3390/s19194244.

Abstract:
Infrared and visible image matching methods have been rising in popularity with the emergence of more kinds of sensors, enabling more applications in visual navigation, precision guidance, image fusion, and medical image analysis. In such applications, image matching is utilized for localization, fusion, image analysis, and so on. In this paper, an infrared and visible image matching approach based on distinct wavelength phase congruency (DWPC) and log-Gabor filters is proposed. Furthermore, this method is modified for non-linear image matching across different physical wavelengths. Phase congruency (PC) theory is utilized to obtain PC images with intrinsic and abundant image features for images containing complex intensity changes or noise. The maximum and minimum moments of the PC images are then computed to obtain corners in the matched images. To obtain descriptors, log-Gabor filters are utilized and overlapping subregions are extracted in a neighborhood of certain pixels. To improve the accuracy of the algorithm, the moments of the PCs in the original image and a Gaussian-smoothed image are combined to detect the corners. Moreover, because the matched images have different physical wavelengths, it is inappropriate to use the same PC wavelength for both; thus, in the experiments, the PC wavelength is varied with the physical wavelength. For realistic application, the BiDimRegression method is applied to compute the similarity between two point sets in infrared and visible images. The proposed approach is evaluated on four datasets with 237 pairs of visible and infrared images, and its performance is compared with state-of-the-art approaches: the edge-oriented histogram descriptor (EHD), phase congruency edge-oriented histogram descriptor (PCEHD), and log-Gabor histogram descriptor (LGHD) algorithms. The experimental results indicate that the accuracy rate of the proposed approach is 50% higher than that of the traditional approaches on infrared and visible images.
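
Log-Gabor filters have a standard closed form, G(f) = exp(-log^2(f/f0) / (2 log^2(sigma/f0))). A NumPy construction of the radial transfer function, with the wavelength parameter playing the role that the experiments vary across physical wavelengths:

```python
import numpy as np

def log_gabor(shape, wavelength, sigma_ratio=0.65):
    """Radial log-Gabor transfer function with centre frequency
    f0 = 1 / wavelength; returned ready to multiply an FFT spectrum."""
    rows, cols = shape
    y, x = np.mgrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at DC
    f0 = 1.0 / wavelength
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[rows // 2, cols // 2] = 0.0               # no DC response
    return np.fft.ifftshift(g)

# Example: response = np.fft.ifft2(np.fft.fft2(img) * log_gabor(img.shape, 6.0))
```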
16

Wang, Qi, Xiang Gao, Fan Wang, Zhihang Ji, and Xiaopeng Hu. "Feature Point Matching Method Based on Consistent Edge Structures for Infrared and Visible Images." Applied Sciences 10, no. 7 (March 27, 2020): 2302. http://dx.doi.org/10.3390/app10072302.

Abstract:
Infrared and visible image matching is an important research topic in the field of multi-modality image processing. Due to differences in image content, such as pixel intensities and gradients caused by disparate spectra, infrared and visible image matching is a great challenge in terms of detection repeatability and matching accuracy. To improve matching performance, a feature detection and description method based on consistent edge structures of images (DDCE) is proposed in this paper. First, consistent edge structures are detected to obtain similar content between infrared and visible images. Second, common feature points of infrared and visible images are extracted based on the consistent edge structures. Third, feature descriptions are established according to edge-structure attributes, including edge length and edge orientation. Lastly, feature correspondences are calculated according to the distance between feature descriptions. Owing to the utilization of consistent edge structures of infrared and visible images, the proposed DDCE method can improve detection repeatability and matching accuracy. DDCE is evaluated on two public datasets and compared with several state-of-the-art methods. Experimental results demonstrate that DDCE achieves superior performance against other methods for infrared and visible image matching.
17

Wang, Zhi, and Ao Dong. "Dual Generative Adversarial Network for Infrared and Visible Image Fusion." Journal of Computing and Electronic Information Management 16, no. 1 (February 27, 2025): 55–59. https://doi.org/10.54097/jwm4va34.

Abstract:
The objective of infrared and visible image fusion is to integrate the prominent targets from the infrared image with the background information from the visible image into a single image. Many deep learning-based approaches have been employed in the field of image fusion. However, most methods have not been able to sufficiently extract the distinct features of images from different modalities, resulting in fusion outcomes that lean towards one modality while losing information from the other. To address this, we have developed a novel method based on generative adversarial networks for infrared and visible image fusion. We have designed two sets of generative adversarial networks. The first set is used for preliminary feature extraction, generating intermediate results and discriminating features against the infrared image. The second set is employed for deep feature extraction, generating the fused image and discriminating features against the visible image. Through the adversarial training of the two sets of generators and discriminators, we ensure the comprehensive extraction of diverse features from images of various modalities. Extensive qualitative and quantitative experimental results indicate that our approach retains more information from the source images. Compared to seven other prominent methods, our approach achieves superior quality.
18

Wu, Yan Hai, Hao Zhang, Fang Ni Zhang, and Yue Hua Han. "Fusion of Visible and Infrared Images Based on Non-Sampling Contourlet and Wavelet Transform." Applied Mechanics and Materials 599-601 (August 2014): 1523–26. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1523.

Abstract:
This paper presents a method for fusing visible and infrared images that combines the non-subsampled contourlet transform (NSCT) and the wavelet transform. The method first applies contrast enhancement to the infrared image. Next, it performs NSCT decomposition on the visible image and the enhanced infrared image, and then decomposes the resulting low-frequency components using the wavelet transform. Different fusion rules are used for the high-frequency subbands of the NSCT decomposition and the high- and low-frequency subbands of the wavelet decomposition. Finally, the fused image is obtained through wavelet and NSCT reconstruction. Experiments show that the method not only retains the texture details of the visible image but also highlights the targets in the infrared image, achieving a better fusion effect.
19

HAO, Shuai, Xizi SUN, Xu MA, Beiyi AN, Tian HE, Jiahao LI, and Siya SUN. "Infrared and visible image fusion method based on target enhancement and rat swarm optimization." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 42, no. 4 (August 2024): 735–43. http://dx.doi.org/10.1051/jnwpu/20244240735.

Abstract:
To address target ambiguity and information loss in the fusion results of traditional infrared and visible image fusion, a fusion method based on target enhancement and rat swarm optimization, abbreviated TERSFuse, is proposed. First, to reduce the loss of original image details in the fusion results, an infrared contrast enhancement module and a brightness-perception-based visible image enhancement module are constructed. Second, the enhanced infrared and visible images are decomposed using the Laplacian pyramid transform to obtain corresponding high- and low-frequency images. To make the fusion result fully retain the original information, the maximum-absolute-value rule is used to fuse the infrared and visible high-frequency images, and the low-frequency images are fused by computing weight coefficients. Finally, an image reconstruction module based on rat swarm optimization is designed to adaptively allocate the weight parameters for high- and low-frequency image reconstruction, improving the visual effect of the fused image. Experimental results show that the proposed algorithm not only achieves good visual effects but also retains the rich edge texture and contrast information of the original images.
20

Du, Qinglei, Han Xu, Yong Ma, Jun Huang, and Fan Fan. "Fusing Infrared and Visible Images of Different Resolutions via Total Variation Model." Sensors 18, no. 11 (November 8, 2018): 3827. http://dx.doi.org/10.3390/s18113827.

Abstract:
In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images often have markedly lower resolution than the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, the principle of existing fusion rules typically focuses on preserving texture details in source images, which may be inappropriate for fusing infrared thermal radiation information because it is characterized by pixel intensities, possibly neglecting the prominence of targets in fused images. Faced with such difficulties and challenges, we propose a novel method to fuse infrared and visible images of different resolutions and generate high-resolution, clear, and accurate fused images. Specifically, the fusion problem is formulated as a total variation (TV) minimization problem. The data fidelity term constrains the pixel intensity similarity of the downsampled fused image with respect to the infrared image, and the regularization term compels the gradient similarity of the fused image with respect to the visible image. The fast iterative shrinkage-thresholding algorithm (FISTA) framework is applied to improve the convergence rate. The resulting fused images resemble super-resolved infrared images sharpened by the texture information from visible images. Advantages and innovations of our method are demonstrated by qualitative and quantitative comparisons with six state-of-the-art methods on publicly available datasets.
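
A hedged sketch of the variational formulation: the data term ties the downsampled fused image to the IR image, and the regularizer ties the gradients to the visible image. Plain subgradient descent replaces the paper's FISTA solver, image dimensions are assumed divisible by the scale factor, and boundaries are handled crudely:

```python
import numpy as np

def tv_fuse(ir_lr, vis_hr, lam=0.1, step=0.2, iters=200, scale=4):
    """Minimise  E(F) = ||down(F) - IR||^2 + lam * ||grad(F) - grad(VIS)||_1
    by plain (sub)gradient descent."""
    V = vis_hr.astype(np.float64)
    F = V.copy()                                 # initialise from the visible image
    I = ir_lr.astype(np.float64)
    gx_v = np.diff(V, axis=1, append=V[:, -1:])  # target gradients from VIS
    gy_v = np.diff(V, axis=0, append=V[-1:, :])
    H, W = F.shape
    for _ in range(iters):
        # data term: average-pool F, compare with IR, upsample the residual
        pooled = F.reshape(H // scale, scale, W // scale, scale).mean(axis=(1, 3))
        resid = np.kron(pooled - I, np.ones((scale, scale))) / scale ** 2
        # regulariser: subgradient of the L1 gradient-mismatch term
        dx = np.sign(np.diff(F, axis=1, append=F[:, -1:]) - gx_v)
        dy = np.sign(np.diff(F, axis=0, append=F[-1:, :]) - gy_v)
        # adjoint of the forward difference: negative backward difference
        grad_reg = (np.roll(dx, 1, axis=1) - dx) + (np.roll(dy, 1, axis=0) - dy)
        F -= step * (2 * resid + lam * grad_reg)
    return F
```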
21

Gao, Peng, Tian Tian, Tianming Zhao, Linfeng Li, Nan Zhang, and Jinwen Tian. "GF-Detection: Fusion with GAN of Infrared and Visible Images for Vehicle Detection at Nighttime." Remote Sensing 14, no. 12 (June 9, 2022): 2771. http://dx.doi.org/10.3390/rs14122771.

Abstract:
Vehicles are important targets in remote sensing applications, and nighttime vehicle detection has been a hot research topic in recent years. Vehicles in visible images at nighttime have inadequate features for object detection, while infrared images retain the contours of vehicles but lose color information. Thus, it is valuable to fuse infrared and visible images to improve vehicle detection performance at nighttime. However, designing effective fusion models remains a challenge due to the complexity of visible and infrared images. To improve vehicle detection performance at nighttime, this paper proposes a fusion model for infrared and visible images with Generative Adversarial Networks (GAN) for vehicle detection, named GF-detection. GAN has been utilized in image reconstruction and recently introduced into image fusion. Specifically, to exploit more features for the fusion, GAN is utilized to fuse the infrared and visible images via image reconstruction. The generator fuses image features and detection features and then generates reconstructed images for the discriminator to classify. Two branches, a visible branch and an infrared branch, are designed in the GF-detection model, with different feature extraction strategies according to the characteristics of the visible and infrared images. Detection features and a self-attention mechanism are added to the fusion model, aiming to build a detection-task-driven fusion model of infrared and visible images. Extensive experiments on nighttime images demonstrate the effectiveness of the proposed fusion model for nighttime vehicle detection.
22

Chen, Xianglong, Haipeng Wang, Yaohui Liang, Ying Meng, and Shifeng Wang. "A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network." Sensors 22, no. 1 (December 31, 2021): 304. http://dx.doi.org/10.3390/s22010304.

Abstract:
The presence of fake pictures affects the reliability of visible face images under specific circumstances. This paper presents a novel adversarial neural network named FTSGAN for infrared and visible image fusion, and the FTSGAN model is used to fuse the facial features of infrared and visible images to improve face recognition. In the FTSGAN design, the Frobenius norm (F), total variation norm (TV), and structural similarity index measure (SSIM) are employed: F and TV limit the gray level and the gradient of the image, while SSIM constrains the image structure. The FTSGAN fuses infrared and visible face images containing bio-information for heterogeneous face recognition tasks. Experiments with hundreds of face images demonstrate its excellent performance. Principal component analysis (PCA) and linear discriminant analysis (LDA) are used for face recognition. Face recognition performance after fusion improved by 1.9% compared with that before fusion, and the final face recognition rate was 94.4%. The proposed method offers better quality, faster speed, and greater robustness than methods that use only visible images for face recognition.
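
The Frobenius and TV terms named above are straightforward to write down in PyTorch; the SSIM structure term would come from a library such as pytorch-msssim, and the combination weights below are illustrative, not the paper's:

```python
import torch

def frobenius_loss(fused, target):
    """Frobenius-norm term: constrains overall grey-level agreement."""
    return torch.norm(fused - target, p="fro")

def tv_loss(img):
    """Total-variation term: penalises spurious gradients in the fusion."""
    return (img[..., :, 1:] - img[..., :, :-1]).abs().mean() + \
           (img[..., 1:, :] - img[..., :-1, :]).abs().mean()

# Illustrative (not the paper's) generator objective, with ssim supplied
# by an external library:
#   loss = frobenius_loss(fused, ir) + 0.5 * tv_loss(fused) + (1 - ssim(fused, vis))
```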
23

Jia, Weibin, Zhihuan Song, and Zhengguo Li. "Multi-scale Fusion of Stretched Infrared and Visible Images." Sensors 22, no. 17 (September 2, 2022): 6660. http://dx.doi.org/10.3390/s22176660.

Abstract:
Infrared (IR) band sensors can capture digital images under challenging conditions, such as haze, smoke, and fog, while visible (VIS) band sensors capture abundant texture information. It is therefore desirable to fuse IR and VIS images to generate a more informative image. In this paper, a novel multi-scale IR and VIS image fusion algorithm is proposed to integrate information from both images into the fused image while preserving the color of the VIS image. A content-adaptive gamma correction is first introduced to stretch the IR images by using one of the simplest edge-preserving filters, which alleviates excessive luminance shifts and color distortions in the fused images. New contrast and exposedness measures are then introduced for the stretched IR and VIS images to obtain weight matrices that are more in line with their characteristics. The IR and luminance components of the VIS image in grayscale or RGB space are fused using Gaussian and Laplacian pyramids. The RGB components of the VIS image are finally expanded to generate the fused image if necessary. Comparisons with 10 different state-of-the-art fusion algorithms experimentally demonstrate the effectiveness of the proposed algorithm in terms of computational cost and quality of the fused images.
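
The content-adaptive gamma correction can be illustrated with a common mean-brightness heuristic; the paper's actual correction additionally involves an edge-preserving filter, omitted here:

```python
import numpy as np

def adaptive_gamma_stretch(ir, eps=1e-6):
    """Content-adaptive gamma: gamma = log(0.5) / log(mean) maps the
    mean intensity to mid-grey, so darker frames get a stronger stretch."""
    x = ir.astype(np.float64) / 255.0
    m = np.clip(x.mean(), eps, 1 - eps)
    gamma = np.log(0.5) / np.log(m)
    return (np.power(x, gamma) * 255).astype(np.uint8)
```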
24

Shen, Sen, Di Li, Liye Mei, Chuan Xu, Zhaoyi Ye, Qi Zhang, Bo Hong, Wei Yang, and Ying Wang. "DFA-Net: Multi-Scale Dense Feature-Aware Network via Integrated Attention for Unmanned Aerial Vehicle Infrared and Visible Image Fusion." Drones 7, no. 8 (August 6, 2023): 517. http://dx.doi.org/10.3390/drones7080517.

Abstract:
Fusing infrared and visible images taken by an unmanned aerial vehicle (UAV) is a challenging task: infrared images distinguish targets from the background by differences in infrared radiation, but their low resolution makes the effect less pronounced; conversely, visible-light images have high spatial resolution and rich texture but are easily affected by harsh conditions such as low light. Therefore, fusing infrared and visible light can provide complementary advantages. In this paper, we propose a multi-scale dense feature-aware network via integrated attention for infrared and visible image fusion, namely DFA-Net. First, we construct a dual-channel encoder to extract the deep features of infrared and visible images. Second, we adopt a nested decoder to adequately integrate the features of the encoder's various scales so as to realize a multi-scale feature representation of visible image detail texture and infrared image salient targets. We then present a feature-aware network via integrated attention to further fuse the feature information of different scales, which can focus on the specific advantageous features of infrared and visible images. Finally, we use unsupervised gradient estimation and intensity loss to learn significant fusion features of infrared and visible images. The proposed DFA-Net thus addresses the challenges of fusing infrared and visible images captured by a UAV. The results show that DFA-Net achieves excellent image fusion performance on nine quantitative evaluation indexes in a low-light environment.
25

Li, Shengshi, Yonghua Zou, Guanjun Wang, and Cong Lin. "Infrared and Visible Image Fusion Method Based on a Principal Component Analysis Network and Image Pyramid." Remote Sensing 15, no. 3 (January 24, 2023): 685. http://dx.doi.org/10.3390/rs15030685.

Abstract:
The aim of infrared (IR) and visible image fusion is to generate a more informative image for human observation or some other computer vision tasks. The activity-level measurement and weight assignment are two key parts in image fusion. In this paper, we propose a novel IR and visible fusion method based on the principal component analysis network (PCANet) and an image pyramid. Firstly, we use the lightweight deep learning network, a PCANet, to obtain the activity-level measurement and weight assignment of IR and visible images. The activity-level measurement obtained by the PCANet has a stronger representation ability for focusing on IR target perception and visible detail description. Secondly, the weights and the source images are decomposed into multiple scales by the image pyramid, and the weighted-average fusion rule is applied at each scale. Finally, the fused image is obtained by reconstruction. The effectiveness of the proposed algorithm was verified by two datasets with more than eighty pairs of test images in total. Compared with nineteen representative methods, the experimental results demonstrate that the proposed method can achieve the state-of-the-art results in both visual quality and objective evaluation metrics.
26

Zhang, Xiaoyu, Laixian Zhang, Huichao Guo, Haijing Zheng, Houpeng Sun, Yingchun Li, Rong Li, Chenglong Luan, and Xiaoyun Tong. "DCLTV: An Improved Dual-Condition Diffusion Model for Laser-Visible Image Translation." Sensors 25, no. 3 (January 24, 2025): 697. https://doi.org/10.3390/s25030697.

Abstract:
Laser active imaging systems can remedy the shortcomings of visible light imaging systems in difficult imaging circumstances, thereby attaining clear images. However, laser images exhibit significant modal discrepancy in contrast to the visible image, impeding human perception and computer processing. Consequently, it is necessary to translate laser images to visible images across modalities. Existing cross-modal image translation algorithms are plagued with issues, including difficult training and color bleeding. In recent studies, diffusion models have demonstrated superior image generation and translation abilities and been shown to be capable of generating high-quality images. To achieve more accurate laser-visible image translation, we designed an improved diffusion model, called DCLTV, which limits the randomness of diffusion models by means of dual-condition control. We incorporated the Brownian bridge strategy to serve as the first condition control and employed interpolation-based conditional injection to function as the second condition control. We also established a dataset comprising 665 pairs of laser-visible images to compensate for the data deficiency in the field of laser-visible image translation. Compared to five representative baseline models, namely Pix2pix, BigColor, CT2, ColorFormer, and DDColor, the proposed DCLTV achieved the best performance in terms of both qualitative and quantitative comparisons, realizing at least a 15.89% reduction in FID and at least a 22.02% reduction in LPIPS. We further validated the effectiveness of the dual conditions in DCLTV through ablation experiments, achieving the best results with an FID of 154.74 and an LPIPS of 0.379.
27

Li, Liangliang, Ming Lv, Zhenhong Jia, Qingxin Jin, Minqin Liu, Liangfu Chen, and Hongbing Ma. "An Effective Infrared and Visible Image Fusion Approach via Rolling Guidance Filtering and Gradient Saliency Map." Remote Sensing 15, no. 10 (May 9, 2023): 2486. http://dx.doi.org/10.3390/rs15102486.

Abstract:
To address the loss of brightness and detail information in infrared and visible image fusion, an effective fusion method using rolling guidance filtering and a gradient saliency map is proposed in this paper. Rolling guidance filtering is used to decompose the input images into approximate layers and residual layers; an energy-attribute fusion model is used to fuse the approximate layers; and a gradient saliency map with corresponding weight matrices is constructed to fuse the residual layers. The fused image is generated by reconstructing the fused approximate-layer and residual-layer sub-images. Experimental results demonstrate the superiority of the proposed infrared and visible image fusion method.
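
A minimal sketch of the layer decomposition and residual-layer weighting, assuming opencv-contrib's cv2.ximgproc.rollingGuidanceFilter; a plain average stands in for the paper's energy-attribute model on the approximate layers, and Sobel magnitude serves as the gradient saliency:

```python
import cv2
import numpy as np

def rgf_saliency_fuse(ir, vis):
    """Rolling-guidance decomposition; gradient saliency weights the
    residual layers, a plain average fuses the approximate layers."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)
    base_ir = cv2.ximgproc.rollingGuidanceFilter(ir, d=-1, sigmaColor=25, sigmaSpace=3)
    base_vis = cv2.ximgproc.rollingGuidanceFilter(vis, d=-1, sigmaColor=25, sigmaSpace=3)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    def sal(x):  # Sobel gradient magnitude as saliency
        return np.abs(cv2.Sobel(x, cv2.CV_32F, 1, 0)) + \
               np.abs(cv2.Sobel(x, cv2.CV_32F, 0, 1))

    w = sal(ir) / (sal(ir) + sal(vis) + 1e-6)
    fused = (base_ir + base_vis) / 2 + w * det_ir + (1 - w) * det_vis
    return np.clip(fused, 0, 255).astype(np.uint8)
```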
28

Zhang, Hui, Xu Ma, and Yanshan Tian. "An Image Fusion Method Based on Curvelet Transform and Guided Filter Enhancement." Mathematical Problems in Engineering 2020 (June 27, 2020): 1–8. http://dx.doi.org/10.1155/2020/9821715.

Abstract:
To improve the clarity of fused images and address the fact that visible-light fusion results are affected by illumination and weather, a fusion method for infrared and visible images aimed at night-vision context enhancement is proposed. First, a guided filter is used to enhance the details of the visible image. Then, the enhanced visible and infrared images are decomposed by the curvelet transform. An improved sparse representation is used to fuse the low-frequency part, while the high-frequency part is fused with parameter-adaptive pulse-coupled neural networks. Finally, the fusion result is obtained by the inverse curvelet transform. Experimental results show that the proposed method performs well in detail processing, edge protection, and preservation of source image information.
29

Akbulut, Harun. "Visible Digital Image Watermarking Using Single Candidate Optimizer." Düzce Üniversitesi Bilim ve Teknoloji Dergisi 13, no. 1 (January 30, 2025): 506–21. https://doi.org/10.29130/dubited.1532300.

Abstract:
With the advent of internet technologies, accessing information has become remarkably easy, while also creating copyright problems. These can be mitigated by embedding copyright information within digital images, a technique termed digital image watermarking. Artificial intelligence optimization algorithms are widely employed in many problem-solving scenarios and yield effective results. This study proposes a visible digital image watermarking method utilizing the Single Candidate Optimizer (SCO). Unlike many prevalent metaheuristic optimization algorithms, SCO, introduced in 2024, is not population-based. The fitness function of SCO is designed to maximize the resemblance between the watermarked image and both the host and watermark images. Experiments were conducted on images commonly used in image processing, and the results were evaluated using eight quality metrics. Additionally, the numerical results were compared with those of well-known and widely used genetic algorithms, differential evolution algorithms, and artificial bee colony optimization algorithms. The findings demonstrate that SCO outperforms the others in visible digital image watermarking. Furthermore, due to its non-population-based nature, SCO is significantly faster than its counterparts.
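
The optimisation target is easy to sketch: a blend weight controls the visible embedding, and the fitness rewards similarity to both host and watermark. The PSNR-based fitness below is an assumption for illustration, not the paper's exact function:

```python
import numpy as np

def embed_visible(host, mark, alpha):
    """Visible watermarking as a convex blend; alpha in (0, 1) trades
    host fidelity against watermark visibility."""
    return np.clip((1 - alpha) * host + alpha * mark, 0, 255)

def fitness(alpha, host, mark):
    """Score for an optimizer such as SCO to maximise: the watermarked
    image should resemble both the host and the watermark."""
    h = host.astype(np.float64)
    m = mark.astype(np.float64)
    wm = embed_visible(h, m, alpha)
    psnr = lambda a, b: 10 * np.log10(255.0 ** 2 / (np.mean((a - b) ** 2) + 1e-12))
    return psnr(wm, h) + psnr(wm, m)
```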
30

Li, Xiang, Yue Shun He, Xuan Zhan, and Feng Yu Liu. "A Rapid Fusion Algorithm of Infrared and the Visible Images Based on Directionlet Transform." Applied Mechanics and Materials 20-23 (January 2010): 45–51. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.45.

Abstract:
Based on an analysis of the features of infrared and visible images, this paper proposes an improved fusion algorithm using the directionlet transform. The color visible image is first separated into its component images, and anisotropic decomposition is then applied to the component images and the infrared image. The decomposed images are processed according to regional-energy fusion rules, and the color intensity is finally incorporated to obtain the fused image. Simulation results show that this algorithm can effectively fuse infrared and visible images: the fused images maintain environmental details while highlighting edge features, making the method suitable for fusion with strong edges, robust, and convenient.
31

Ma, Weihong, Kun Wang, Jiawei Li, Simon X. Yang, Junfei Li, Lepeng Song, and Qifeng Li. "Infrared and Visible Image Fusion Technology and Application: A Review." Sensors 23, no. 2 (January 4, 2023): 599. http://dx.doi.org/10.3390/s23020599.

Abstract:
The images acquired by a single visible-light sensor are very susceptible to light conditions, weather changes, and other factors, while the images acquired by a single infrared sensor generally have poor resolution, low contrast, low signal-to-noise ratio, and blurred visual effects. The fusion of visible and infrared light avoids the disadvantages of the two single sensors and, by combining the advantages of both, significantly improves image quality. The fusion of infrared and visible images is widely used in agriculture, industry, medicine, and other fields. In this study, firstly, the architecture of mainstream infrared and visible image fusion technology and applications was reviewed; secondly, the application status in robot vision, medical imaging, agricultural remote sensing, and industrial defect detection was discussed; thirdly, the evaluation indicators of the main image fusion methods were grouped into subjective and objective evaluation, the properties of current mainstream technologies were specifically analyzed and compared, and the outlook for image fusion was assessed; finally, infrared and visible image fusion was summarized. The results show that the definition and efficiency of fused infrared and visible images have improved significantly. However, some problems remain, such as poor accuracy of the fused image and irretrievably lost pixels. There is a need to improve the adaptive design of traditional algorithm parameters and to combine innovation in fusion algorithms with optimization of neural networks, so as to further improve image fusion accuracy, reduce noise interference, and improve the real-time performance of the algorithms.
32

Luo, Yongyu, and Zhongqiang Luo. "Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects." Applied Sciences 13, no. 19 (September 30, 2023): 10891. http://dx.doi.org/10.3390/app131910891.

Abstract:
Infrared and visible image fusion combines infrared and visible images by extracting the main information from each and fusing them to provide a more comprehensive image with features from both sources. Infrared and visible image fusion has gained popularity in recent years and is increasingly employed in sectors such as target recognition and tracking, night vision, scene segmentation, and others. To provide a concise overview of infrared and visible image fusion, this paper first explores its historical context before outlining current domestic and international research efforts. Then, conventional approaches to infrared and visible image fusion, such as multi-scale decomposition and sparse representation methods, are thoroughly introduced. The advancement of deep learning in recent years has greatly aided the field of image fusion, and fusion outcomes have a wide range of potential applications owing to neural networks' strong feature extraction and reconstruction abilities, so this paper also reviews deep learning techniques. After that, common objective evaluation indexes are provided, and performance evaluation of infrared and visible image fusion is introduced. The common datasets in the area of infrared and visible image fusion are also surveyed; datasets play a significant role in the advancement of the field and are an essential component of fusion testing. The application of infrared and visible image fusion in many domains, particularly developing fields, is then studied with practical examples. Finally, prospects for the field of infrared and visible image fusion are presented, and the full text is summarized.
33

Wang, Jingjing, Jinwen Ren, Hongzhen Li, Zengzhao Sun, Zhenye Luan, Zishu Yu, Chunhao Liang, Yashar E. Monfared, Huaqiang Xu, and Qing Hua. "DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion." Photonics 9, no. 3 (March 3, 2022): 150. http://dx.doi.org/10.3390/photonics9030150.

Abstract:
Infrared images can provide clear contrast information to distinguish between the target and the background under any lighting conditions. In contrast, visible images can provide rich texture details and are compatible with the human visual system. The fusion of a visible image and an infrared image will thus contain both comprehensive contrast information and texture details. In this study, a novel approach for the fusion of infrared and visible images is proposed based on a dual-discriminator generative adversarial network with a squeeze-and-excitation module (DDGANSE). Our approach establishes adversarial training between one generator and two discriminators. The goal of the generator is to generate images that are similar to the source images and contain information from both infrared and visible source images. The purpose of the two discriminators is to increase the similarity between the generated image and the infrared and visible images, respectively. We experimentally demonstrated that, with continuous adversarial training, the images output by DDGANSE retain the advantages of both infrared and visible images, with significant contrast information and rich texture details. Finally, we compared the performance of our proposed method with previously reported techniques for fusing infrared and visible images using both quantitative and qualitative assessments. Our experiments on the TNO dataset demonstrate that the proposed method performs better than other similar methods reported in the literature on various performance metrics.
34

Niu, Yi Feng, Sheng Tao Xu, and Wei Dong Hu. "Fusion of Infrared and Visible Image Based on Target Regions for Environment Perception." Applied Mechanics and Materials 128-129 (October 2011): 589–93. http://dx.doi.org/10.4028/www.scientific.net/amm.128-129.589.

Abstract:
Infrared and visible image fusion is an important precondition for realizing target perception in unmanned aerial vehicles (UAVs), on the basis of which a UAV can perform various missions. The details in visible images are abundant, while target information is more prominent in infrared images. However, conventional fusion methods are mostly based on region segmentation, so a fused image suited to target recognition cannot actually be acquired. In this paper, a novel fusion method for infrared and visible images based on target regions in the discrete wavelet transform (DWT) domain is proposed, which can gain more target information and preserve more details. Experimental results show that our method can generate better fused images for target recognition.
35

Zhou, Ze Hua, and Min Tan. "Infrared Image and Visible Image Fusion Based on Wavelet Transform." Advanced Materials Research 756-759 (September 2013): 2850–56. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2850.

Abstract:
For the same scene, fusing an infrared image and a visible image can exploit the information in both originals, overcoming the limitations and differences of a single-sensor image in geometric, spectral, and spatial resolution and improving image quality, which helps to locate, identify, and explain physical phenomena and events. This paper puts forward an image fusion method based on the wavelet transform. For the wavelet-decomposed frequency bands, the principles for selecting high-frequency and low-frequency coefficients are discussed, highlighting contours while attenuating fine detail. The fused image combines the characteristics of two or more source images, better matches human or machine visual characteristics, and supports further analysis and understanding of the image as well as detection, identification, or tracking of targets.
36

Li, Shaopeng, Decao Ma, Yao Ding, Yong Xian, and Tao Zhang. "DBSF-Net: Infrared Image Colorization Based on the Generative Adversarial Model with Dual-Branch Feature Extraction and Spatial-Frequency-Domain Discrimination." Remote Sensing 16, no. 20 (October 10, 2024): 3766. http://dx.doi.org/10.3390/rs16203766.

Full text
Abstract:
Thermal infrared cameras can image stably in complex scenes such as night, rain, snow, and dense fog. However, humans are more sensitive to color, so there is an urgent need to convert infrared images into color images in areas such as assisted driving. This paper studies a colorization method for infrared images based on a generative adversarial model. The proposed dual-branch feature extraction network ensures the stability of the content and structure of the generated visible-light image; the proposed discrimination strategy, which combines hybrid spatial- and frequency-domain constraints, effectively mitigates undersaturated coloring and the loss of texture details in edge areas of the generated image. Comparative experiments on a public paired infrared-visible dataset show that the proposed algorithm achieves the best performance in maintaining the content-structure consistency of the generated images, restoring the image color distribution, and recovering image texture details.
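The abstract does not spell out the frequency-domain constraint; one common building block for such discrimination is an L1 penalty between log-amplitude spectra, sketched below in PyTorch as an assumed stand-in rather than the authors' exact loss.

    import torch
    import torch.nn.functional as F

    def frequency_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
        """L1 distance between log-amplitude spectra; penalizes lost high-frequency texture."""
        amp_fake = torch.log1p(torch.abs(torch.fft.rfft2(fake)))
        amp_real = torch.log1p(torch.abs(torch.fft.rfft2(real)))
        return F.l1_loss(amp_fake, amp_real)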
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Yaochen, Lili Dong, Yuanyuan Ji, and Wenhai Xu. "Infrared and Visible Image Fusion through Details Preservation." Sensors 19, no. 20 (October 20, 2019): 4556. http://dx.doi.org/10.3390/s19204556.

Full text
Abstract:
In many practical applications, it is essential that the fused image contain high-quality details in order to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from detail loss because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich, fine details can be separated into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the final fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, ours not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better objective assessment scores.
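A minimal sketch of guided-filter base/detail decomposition follows; here the image guides itself, whereas the paper builds a strong-edge-only guidance image, and the radius and eps values are illustrative. cv2.ximgproc requires the opencv-contrib-python package.

    import cv2
    import numpy as np

    def base_detail_split(img: np.ndarray, radius: int = 8, eps: float = 1e-2):
        """Split an image into a smooth base layer and a residual detail layer."""
        img = img.astype(np.float32) / 255.0
        base = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=radius, eps=eps)
        detail = img - base       # fine textures end up in the detail layer
        return base, detail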
APA, Harvard, Vancouver, ISO, and other styles
38

Jang, Hyoseon, Sangkyun Kim, Suhong Yoo, Soohee Han, and Hong-Gyoo Sohn. "Feature Matching Combining Radiometric and Geometric Characteristics of Images, Applied to Oblique- and Nadir-Looking Visible and TIR Sensors of UAV Imagery." Sensors 21, no. 13 (July 4, 2021): 4587. http://dx.doi.org/10.3390/s21134587.

Full text
Abstract:
Large amounts of information must be identified and produced in the course of projects of interest. Thermal infrared (TIR) images are widely used because they provide information that cannot be extracted from visible images. In particular, TIR oblique images facilitate the acquisition of building-facade information that is difficult to obtain from a nadir image. When a TIR oblique image is combined with the 3D information acquired from conventional visible nadir imagery, great synergy for identifying surface information can be created. However, matching common points across such images is an onerous task. In this study, a robust method is proposed for matching image pairs that combine different wavelengths and geometries (i.e., visible nadir-looking vs. TIR oblique, and visible oblique vs. TIR nadir-looking). Three main processes, namely phase congruency, histogram matching, and Image Matching by Affine Simulation (IMAS), were adjusted to accommodate the radiometric and geometric differences of the matched image pairs. The method was applied to Unmanned Aerial Vehicle (UAV) images of building and non-building areas, and the results were compared with frequently used matching techniques such as scale-invariant feature transform (SIFT), speeded-up robust features (SURF), synthetic aperture radar SIFT (SAR-SIFT), and Affine SIFT (ASIFT). The method outperforms the others in root mean square error (RMSE) and matching performance (matched vs. not matched). The proposed method is believed to be a reliable solution for pinpointing surface information through the matching of images with different geometries obtained from TIR and visible sensors.
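Of the three processes, histogram matching is the simplest to demonstrate: mapping the TIR intensity distribution onto the visible image's histogram narrows the radiometric gap before feature matching. A sketch with scikit-image on synthetic stand-in frames follows.

    import numpy as np
    from skimage.exposure import match_histograms

    rng = np.random.default_rng(0)
    tir = rng.normal(90.0, 10.0, (256, 256))    # stand-in TIR frame
    vis = rng.normal(140.0, 35.0, (256, 256))   # stand-in visible frame

    # Remap TIR intensities so their distribution follows the visible frame's.
    tir_matched = match_histograms(tir, vis)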
APA, Harvard, Vancouver, ISO, and other styles
39

Zhao, Liangjun, Yun Zhang, Linlu Dong, and Fengling Zheng. "Infrared and visible image fusion algorithm based on spatial domain and image features." PLOS ONE 17, no. 12 (December 30, 2022): e0278055. http://dx.doi.org/10.1371/journal.pone.0278055.

Full text
Abstract:
Multi-scale image decomposition is crucial for image fusion, extracting prominent feature textures from infrared and visible-light images to obtain clear fused images with more texture. This paper proposes a fusion method for infrared and visible-light images based on the spatial domain and image features, to obtain high-resolution, texture-rich images. First, an efficient hierarchical image clustering algorithm based on fast superpixel clustering performs multi-scale decomposition of each source image directly in the spatial domain, obtaining high-frequency, medium-frequency, and low-frequency layers from which the extrema of the combined source images are extracted. Then, using the attribute parameters of each layer as fusion weights, high-definition fused images are obtained through adaptive feature fusion. Moreover, because the proposed algorithm performs multi-scale decomposition in the spatial domain, it avoids the information loss caused by converting between the spatial and frequency domains in traditional frequency-domain feature extraction. Eight image quality indicators are compared against other fusion algorithms. Experimental results show that this method outperforms the comparison methods in both subjective and objective measures, producing images with high definition and rich textures.
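The superpixel clustering step can be pictured with SLIC from scikit-image, as sketched below on a sample image; SLIC is a stand-in, since the paper uses its own fast hierarchical clustering, and the segment count and compactness are illustrative.

    from skimage import data
    from skimage.color import label2rgb
    from skimage.segmentation import slic

    img = data.astronaut()
    # Cluster pixels into roughly 400 superpixels; in the paper such regions
    # become the units of the spatial-domain multi-scale decomposition.
    labels = slic(img, n_segments=400, compactness=10, start_label=1)
    mean_color = label2rgb(labels, img, kind='avg')   # per-superpixel average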
APA, Harvard, Vancouver, ISO, and other styles
40

Yin, Ruyi, Bin Yang, Zuyan Huang, and Xiaozhi Zhang. "DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network." Sensors 23, no. 16 (August 11, 2023): 7097. http://dx.doi.org/10.3390/s23167097.

Full text
Abstract:
Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework for the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively. The transformer architecture is introduced in the infrared feature extraction branch, which can force the network to focus on the local features of infrared images while still obtaining their contextual information. The visible feature extraction branch uses residual dense blocks to fully extract the rich background and texture detail information of visible images. In this way, it can provide better infrared targets and visible details for the fused image. Experimental results on multiple datasets indicate that DSA-Net outperforms state-of-the-art methods in both qualitative and quantitative evaluations. In addition, we also apply the fusion results to the target detection task, which indirectly demonstrates the fusion performances of our method.
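The dual-stream asymmetric idea, a heavier branch for visible texture and a lighter one for infrared intensity, can be sketched in PyTorch as below. The paper's infrared branch is transformer-based and its visible branch uses residual dense blocks; plain convolutions stand in for both here, and all channel widths are guesses.

    import torch
    import torch.nn as nn

    class DualStreamEncoder(nn.Module):
        """Two asymmetric branches whose features are concatenated and merged."""
        def __init__(self):
            super().__init__()
            self.vis_branch = nn.Sequential(          # deeper: texture-rich input
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            self.ir_branch = nn.Sequential(           # shallower: intensity input
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.fuse = nn.Conv2d(128, 1, 1)          # 1x1 conv merges the streams

        def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
            feats = torch.cat([self.ir_branch(ir), self.vis_branch(vis)], dim=1)
            return self.fuse(feats)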
APA, Harvard, Vancouver, ISO, and other styles
41

Huo, Xing, Yinping Deng, and Kun Shao. "Infrared and Visible Image Fusion with Significant Target Enhancement." Entropy 24, no. 11 (November 10, 2022): 1633. http://dx.doi.org/10.3390/e24111633.

Full text
Abstract:
Existing fusion rules focus on retaining detailed information from the source images, but because the thermal radiation information in infrared images is mainly characterized by pixel intensity, such rules are likely to reduce the saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant-target enhancement, which injects thermal targets from infrared images into visible images to enhance target saliency while retaining the important details of the visible images. First, the source image is decomposed with multi-level Gaussian curvature filtering to obtain background information with high spatial resolution. Second, the large-scale layers are fused using ResNet50 together with a weight-maximization rule based on the average operator to improve detail retention. Finally, the base layers are fused by incorporating a new salient-target detection method. Subjective and objective experimental results on the TNO and MSRS datasets demonstrate that our method achieves better results than other traditional and deep learning-based methods.
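The multi-level decomposition can be sketched as repeated smoothing, keeping each level's residual as a detail layer. The paper uses Gaussian curvature filtering; ordinary Gaussian blur stands in for it here, and the level count and sigmas are illustrative.

    import cv2
    import numpy as np

    def multilevel_decompose(img: np.ndarray, levels: int = 3):
        """Peel off detail layers by repeated smoothing; return details and base."""
        current = img.astype(np.float32)
        details = []
        for i in range(levels):
            smoothed = cv2.GaussianBlur(current, (0, 0), sigmaX=2.0 * (i + 1))
            details.append(current - smoothed)   # residual at scale i
            current = smoothed
        return details, current                  # detail stack + base layer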
APA, Harvard, Vancouver, ISO, and other styles
42

Yongshi, Ye, Ma Haoyu, Nima Tashi, Liu Xinting, Yuan Yuchen, and Shang Zihang. "Object Detection Based on Fusion of Visible and Infrared Images." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012021. http://dx.doi.org/10.1088/1742-6596/2560/1/012021.

Full text
Abstract:
In consideration of the complementary characteristics of visible-light and infrared images, this paper proposes a novel object detection method based on the fusion of these two image types, enhancing detection accuracy even under harsh environmental conditions. Specifically, we employ an improved autoencoder (AE) network that encodes the visible and infrared images into a dual-scale decomposition and reconstructs them with the decoder, highlighting the details of the fused image. A YOLOv5 network is then trained on the fused images, with its parameters adjusted accordingly, to achieve accurate object detection. By exploiting the complementary information of the two image types, our method effectively enhances the precision of object detection.
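Running a pretrained YOLOv5 detector on a fused frame is straightforward via torch.hub, as sketched below; 'fused.png' is a placeholder path, the call downloads weights from the ultralytics hub on first use, and the paper's detector is a retrained variant rather than this off-the-shelf checkpoint.

    import torch

    # Load a pretrained YOLOv5 model (fetched from the ultralytics hub).
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    results = model('fused.png')     # placeholder path to an IR-visible fused frame
    results.print()                  # per-class detection counts and timing
    boxes = results.xyxy[0]          # (x1, y1, x2, y2, confidence, class) rows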
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Liangliang, Yan Shi, Ming Lv, Zhenhong Jia, Minqin Liu, Xiaobin Zhao, Xueyu Zhang, and Hongbing Ma. "Infrared and Visible Image Fusion via Sparse Representation and Guided Filtering in Laplacian Pyramid Domain." Remote Sensing 16, no. 20 (October 13, 2024): 3804. http://dx.doi.org/10.3390/rs16203804.

Full text
Abstract:
The fusion of infrared and visible images can fully leverage the respective advantages of each, providing a more comprehensive and richer set of information. This is applicable in various fields such as military surveillance, night navigation, and environmental monitoring. In this paper, a novel infrared and visible image fusion method based on sparse representation and guided filtering in the Laplacian pyramid (LP) domain is introduced. The source images are decomposed into low- and high-frequency bands by the LP. Sparse representation has proven highly effective in image fusion and is used to process the low-frequency band; guided filtering has excellent edge-preserving properties and can effectively maintain the spatial continuity of the high-frequency band, so it is combined with the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) to process the high-frequency bands. Finally, the inverse LP transform is used to reconstruct the fused image. We conducted simulation experiments on the publicly available TNO dataset to validate the superiority of the proposed algorithm; it preserves both the thermal radiation characteristics of the infrared image and the detailed features of the visible image.
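A minimal Laplacian pyramid build-and-reconstruct sketch in OpenCV is given below for orientation; the paper's actual contributions, sparse-representation fusion of the low-frequency band and guided filtering with WSEML on the high-frequency bands, take place between these two steps and are omitted here.

    import cv2
    import numpy as np

    def laplacian_pyramid(img: np.ndarray, levels: int = 4):
        """Band-pass layers plus a small low-pass residual at the end."""
        gp = [img.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
              for i in range(levels)]
        return lp + [gp[-1]]

    def reconstruct(pyr):
        img = pyr[-1]
        for band in reversed(pyr[:-1]):
            img = cv2.pyrUp(img, dstsize=band.shape[1::-1]) + band
        return img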
APA, Harvard, Vancouver, ISO, and other styles
44

Jin, Qi, Sanqing Tan, Gui Zhang, Zhigao Yang, Yijun Wen, Huashun Xiao, and Xin Wu. "Visible and Infrared Image Fusion of Forest Fire Scenes Based on Generative Adversarial Networks with Multi-Classification and Multi-Level Constraints." Forests 14, no. 10 (September 26, 2023): 1952. http://dx.doi.org/10.3390/f14101952.

Full text
Abstract:
Aiming to address deficiencies in existing image fusion methods, this paper proposes a multi-level, multi-classification generative adversarial network (GAN)-based method (MMGAN) for fusing visible and infrared images of forest fire scenes (the surroundings of firefighters), solving the problem that GANs tend to ignore contrast-ratio information in visible images and detailed texture information in infrared images. The study was based on real-time visible and infrared image data acquired by visible-infrared binocular cameras mounted on forest firefighters' helmets. We improved the GAN by, on the one hand, splitting the input channels of the generator into gradient and contrast-ratio paths, increasing the depth of the convolutional layers, and improving the extraction capability of the shallow network; on the other hand, we designed a discriminator with a multi-classification constraint structure and trained it against the generator in a continuous adversarial manner to supervise the generator into producing better-quality fused images. Our results indicated that, compared to mainstream infrared-visible fusion methods including anisotropic diffusion fusion (ADF), guided filtering fusion (GFF), convolutional neural networks (CNN), FusionGAN, and dual-discriminator conditional GAN (DDcGAN), the MMGAN model was overall optimal and had the best visual effect on image fusion of forest fire surroundings: five of the six objective metrics were optimal, and one ranked second best. The image fusion speed was more than five times that of the other methods. The MMGAN model significantly improves the quality of fused images of forest fire scenes, preserving the contrast-ratio information of the visible images and the detailed texture information of the infrared images, and can accurately reflect information about forest fire scene surroundings.
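The generator's split into gradient and contrast-ratio paths can be approximated with classical operators, a Sobel gradient magnitude and a local standard-deviation map, as sketched below; the window size is a guess, and in the paper these paths are learned rather than hand-crafted.

    import cv2
    import numpy as np

    def gradient_and_contrast(img: np.ndarray, win: int = 9):
        """Return a texture (gradient) map and a local-contrast map."""
        img = img.astype(np.float32)
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        gradient = cv2.magnitude(gx, gy)
        mean = cv2.blur(img, (win, win))
        sq_mean = cv2.blur(img * img, (win, win))
        contrast = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std
        return gradient, contrast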
APA, Harvard, Vancouver, ISO, and other styles
45

Zhao, Yuqing, Guangyuan Fu, Hongqiao Wang, and Shaolei Zhang. "The Fusion of Unmatched Infrared and Visible Images Based on Generative Adversarial Networks." Mathematical Problems in Engineering 2020 (March 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/3739040.

Full text
Abstract:
Visible images contain clear texture information and high spatial resolution but are unreliable at night or under ambient occlusion. Infrared images can display the thermal radiation of targets by day and night, in adverse weather, and under ambient occlusion, but they often lack good contour and texture information. Therefore, an increasing number of researchers fuse visible and infrared images to obtain more information from them, which normally requires two perfectly matched images; in practice, however, perfectly matched visible and infrared pairs are difficult to obtain. In view of these issues, we propose a new network model based on generative adversarial networks (GANs) to fuse unmatched infrared and visible images. Our method generates the corresponding infrared image from a visible image and fuses the two images to obtain more information. The effectiveness of the proposed method is verified qualitatively and quantitatively through experiments on public datasets. In addition, the fused images generated by the proposed method contain more abundant texture and thermal radiation information than those of other methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Xilai, Xiaosong Li, and Wuyang Liu. "CBFM: Contrast Balance Infrared and Visible Image Fusion Based on Contrast-Preserving Guided Filter." Remote Sensing 15, no. 12 (June 7, 2023): 2969. http://dx.doi.org/10.3390/rs15122969.

Full text
Abstract:
Infrared (IR) and visible image fusion is an important data fusion and image processing technique that can accurately and comprehensively integrate the thermal radiation and texture details of the source images. However, existing methods neglect the high-contrast fusion problem, leading to suboptimal fusion performance when thermal radiation target information in the IR image is replaced by high-contrast information in the visible image. To address this limitation, we propose a contrast-balanced framework for IR and visible image fusion. Specifically, a novel contrast balance strategy is proposed to process visible images, reducing energy while compensating for detail in overexposed areas. Moreover, a contrast-preserving guided filter is proposed to decompose the image into energy and detail layers to reduce high contrast and filter information. To effectively extract the active information in the detail layer and the brightness information in the energy layer, we propose a new weighted energy-of-Laplacian operator and a Gaussian-distribution-of-image-entropy scheme to fuse the detail and energy layers, respectively. The fused result is obtained by adding the fused detail and energy layers. Extensive experimental results demonstrate that the proposed method effectively reduces high-contrast, highlighted target information in an image while preserving details, and it exhibits superior performance compared to state-of-the-art methods in both qualitative and quantitative assessments.
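A plain energy-of-Laplacian activity measure, without the paper's weighting or its entropy-based scheme for the energy layers, can be sketched as follows; the window size is illustrative.

    import cv2
    import numpy as np

    def energy_of_laplacian(img: np.ndarray, win: int = 7) -> np.ndarray:
        """Local energy of the Laplacian, a simple activity measure for detail layers."""
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F, ksize=3)
        return cv2.boxFilter(lap * lap, -1, (win, win))

    def fuse_detail(d_ir: np.ndarray, d_vis: np.ndarray) -> np.ndarray:
        # Per pixel, keep the detail layer with the higher local activity.
        keep_ir = energy_of_laplacian(d_ir) >= energy_of_laplacian(d_vis)
        return np.where(keep_ir, d_ir, d_vis)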
APA, Harvard, Vancouver, ISO, and other styles
47

Santoyo-Garcia, Hector, Eduardo Fragoso-Navarro, Rogelio Reyes-Reyes, Clara Cruz-Ramos, and Mariko Nakano-Miyatake. "Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras." Security and Communication Networks 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/7903198.

Full text
Abstract:
In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored, which enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one ever exists. We also take the Human Visual System (HVS) into consideration, so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible yet not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, which support only binary watermark patterns, the proposed algorithm allows grey-scale and colour images as watermark patterns, making it suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
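A bare-bones version of blending a grey-scale watermark into a raw Bayer mosaic might look like the sketch below; the fixed global strength alpha and the top-left placement are simplifications of my own, since the paper modulates the embedding strength with a Human Visual System model.

    import numpy as np

    def embed_in_bayer(cfa: np.ndarray, mark: np.ndarray, alpha: float = 0.15):
        """Blend a grey-scale watermark into an 8-bit Bayer mosaic (illustrative)."""
        cfa = cfa.astype(np.float32)
        canvas = np.zeros_like(cfa)
        canvas[:mark.shape[0], :mark.shape[1]] = mark   # top-left placement
        region = canvas > 0                             # only touch marked pixels
        marked = np.where(region, (1.0 - alpha) * cfa + alpha * canvas, cfa)
        return np.clip(marked, 0, 255).astype(np.uint8)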
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (September 21, 2022): 1327. http://dx.doi.org/10.3390/e24101327.

Full text
Abstract:
Infrared-visible fusion has great potential for night-vision enhancement in intelligent vehicles. Fusion performance depends on fusion rules that balance target saliency and visual perception, yet most existing methods lack explicit, effective rules, leading to poor contrast and low target saliency. In this paper, we propose SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, consisting of an infrared-visible fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Specifically, the ASG module transfers the semantics of the target and background to the fusion process to highlight targets, while the AVP module analyzes visual features from the global structure and local details of the visible and fused images and guides the fusion network to adaptively generate a weight map of signal completion, so that the resulting fused images possess a natural and visible appearance. We construct a joint distribution function between the fused images and the corresponding semantics and use the discriminator to improve fusion performance in terms of natural appearance and target saliency. Experimental results demonstrate that the proposed ASG and AVP modules effectively guide the image fusion process by selectively preserving the details of the visible images and the salient target information of the infrared images; SGVPGAN exhibits significant improvements over other fusion methods.
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Jinfen, Bo Cheng, Xiaoping Zhang, Tengfei Long, Bo Chen, Guizhou Wang, and Degang Zhang. "A TIR-Visible Automatic Registration and Geometric Correction Method for SDGSAT-1 Thermal Infrared Image Based on Modified RIFT." Remote Sensing 14, no. 6 (March 14, 2022): 1393. http://dx.doi.org/10.3390/rs14061393.

Full text
Abstract:
High-resolution thermal infrared (TIR) remote sensing images can retrieve land surface temperature more accurately and better describe the spatial pattern of the urban thermal environment. The Thermal Infrared Spectrometer (TIS) aboard SDGSAT-1 offers one of the highest spatial resolutions among current spaceborne thermal infrared sensors, together with global data acquisition capability, and is an important complement to the existing international mainstream satellites. To produce standard data products rapidly and accurately, an automatic registration and geometric correction method needs to be developed. Unlike visible-to-visible registration, thermal infrared images have blurred edge details and obvious nonlinear radiometric differences from visible images, which make the TIR-visible registration task challenging. To address these problems, homomorphic filtering is employed to enhance TIR image details, and a modified RIFT algorithm is proposed to achieve TIR-visible registration. Unlike RIFT, which uses the MIM for feature description, the modified RIFT constructs descriptors from a novel binary pattern string. With sufficient, uniformly distributed ground control points, a two-step orthorectification framework, from the SDGSAT-1 TIS L1A image to the L4 orthoimage, is proposed in this study. A first experiment with six TIR-visible image pairs captured over different landforms verifies the registration performance and indicates that homomorphic filtering and the modified RIFT greatly increase the number of corresponding points. A second experiment with one scene of an SDGSAT-1 TIS image tests the proposed orthorectification framework; 52 GCPs selected manually are used to evaluate the orthorectification accuracy. The results indicate that the proposed framework improves geometric accuracy and supports subsequent thermal infrared applications.
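The homomorphic filtering step used to sharpen TIR detail can be sketched as a log transform, a high-emphasis Gaussian filter in the Fourier domain, and the inverse transform; the cutoff and gain parameters below are illustrative, not those used in the paper.

    import numpy as np

    def homomorphic_filter(img, cutoff=30.0, gamma_l=0.5, gamma_h=2.0):
        """Attenuate illumination (low freq.) and boost reflectance detail (high freq.)."""
        log_img = np.log1p(img.astype(np.float64))
        F = np.fft.fftshift(np.fft.fft2(log_img))
        h, w = log_img.shape
        y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        d2 = x * x + y * y
        H = (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2))) + gamma_l
        filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
        return np.expm1(filtered)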
APA, Harvard, Vancouver, ISO, and other styles
50

Pavez, Vicente, Gabriel Hermosilla, Francisco Pizarro, Sebastián Fingerhuth, and Daniel Yunge. "Thermal Image Generation for Robust Face Recognition." Applied Sciences 12, no. 1 (January 5, 2022): 497. http://dx.doi.org/10.3390/app12010497.

Full text
Abstract:
This article shows how to create a robust thermal face recognition system based on the FaceNet architecture. We propose a method for generating thermal images, creating a thermal face database with six attributes (frown, glasses, rotation, normal, vocal, and smile) based on several deep learning models. First, we use StyleCLIP, which manipulates the latent space of the input visible image to add the desired attributes to the visible face. Second, we use the GANs N' Roses (GNR) model, a multimodal image-to-image framework that uses style and content maps to generate thermal images from visible images via generative adversarial approaches. Using the proposed generator system, we create a database of synthetic thermal faces composed of more than 100k images corresponding to 3227 individuals. When trained and tested on the synthetic database, the Thermal-FaceNet model obtained 99.98% accuracy; when tested on a real database, the accuracy exceeded 98%, validating the proposed thermal image generator system.
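Verification with a FaceNet-style model reduces to thresholding the distance between face embeddings, as in the sketch below; the threshold value is illustrative and not the one used in the paper.

    import numpy as np

    def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 1.1) -> bool:
        """Compare the L2 distance between unit-normalized embeddings to a threshold."""
        a = emb_a / np.linalg.norm(emb_a)
        b = emb_b / np.linalg.norm(emb_b)
        return float(np.linalg.norm(a - b)) < threshold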
APA, Harvard, Vancouver, ISO, and other styles