Journal articles on the topic "VISIBLE IMAGE"

Consult the top 50 journal articles for your research on the topic "VISIBLE IMAGE".

You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Uddin, Mohammad Shahab, Chiman Kwan, and Jiang Li. "MWIRGAN: Unsupervised Visible-to-MWIR Image Translation with Generative Adversarial Network". Electronics 12, no. 4 (February 20, 2023): 1039. http://dx.doi.org/10.3390/electronics12041039.

Abstract
Unsupervised image-to-image translation techniques have been used in many applications, including visible-to-Long-Wave Infrared (visible-to-LWIR) image translation, but very few papers have explored visible-to-Mid-Wave Infrared (visible-to-MWIR) image translation. In this paper, we investigated unsupervised visible-to-MWIR image translation using generative adversarial networks (GANs). We proposed a new model named MWIRGAN for visible-to-MWIR image translation in a fully unsupervised manner. We utilized a perceptual loss to leverage shape identification and location changes of the objects in the translation. The experimental results showed that MWIRGAN was capable of visible-to-MWIR image translation while preserving the object’s shape with proper enhancement in the translated images, and it outperformed several competing state-of-the-art models. In addition, we customized the proposed model to convert images generated by a commercial game engine to MWIR images. The quantitative results showed that our proposed method could effectively generate MWIR images from game-engine-generated images, greatly benefiting MWIR data augmentation.
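For orientation, a perceptual loss of the kind mentioned above is commonly computed as a distance between frozen VGG feature maps. The sketch below is a generic, minimal version under assumed choices (VGG16, layer cut-off 16, MSE distance), not the MWIRGAN configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Generic VGG-feature loss; the network and layer choice are assumptions."""
    def __init__(self, cutoff: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:cutoff]
        for p in vgg.parameters():
            p.requires_grad = False  # frozen feature extractor
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Inputs: (N, 3, H, W) tensors, already normalized to ImageNet statistics.
        return self.mse(self.vgg(generated), self.vgg(target))
```

Matching feature maps rather than raw pixels is what lets such a loss penalize changes in object shape and location during translation.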
2

Zhang, Yongxin, Deguang Li, and WenPeng Zhu. "Infrared and Visible Image Fusion with Hybrid Image Filtering". Mathematical Problems in Engineering 2020 (July 29, 2020): 1–17. http://dx.doi.org/10.1155/2020/1757214.

Abstract
Image fusion is an important technique aiming to generate a composite image from multiple images of the same scene. Infrared and visible images can provide the same scene information from different aspects, which is useful for target recognition, but existing fusion methods cannot preserve the thermal radiation and appearance information simultaneously. Thus, we propose an infrared and visible image fusion method based on hybrid image filtering. We represent the fusion problem with a divide-and-conquer strategy. A Gaussian filter is used to decompose the source images into base layers and detail layers. An improved co-occurrence filter fuses the detail layers to preserve the thermal radiation of the source images, and a guided filter fuses the base layers to retain the background appearance information. Superposition of the fused base layer and fused detail layers generates the final fused image. Subjective visual and objective quantitative evaluations against other fusion algorithms demonstrate the better performance of the proposed method.
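The two-scale pipeline described here can be illustrated in a few lines of array code. The following is a minimal sketch, not the authors' implementation: the improved co-occurrence filter has no off-the-shelf equivalent, so a max-absolute rule stands in for the detail-layer fusion, and the base-layer weights are smoothed with the guided filter from opencv-contrib (cv2.ximgproc).

```python
import cv2
import numpy as np

def two_scale_fusion(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Fuse registered grayscale IR/VIS images via base/detail decomposition."""
    ir = ir.astype(np.float32) / 255.0
    vis = vis.astype(np.float32) / 255.0

    # 1. Gaussian filter splits each source into a base layer and a detail layer.
    base_ir = cv2.GaussianBlur(ir, (31, 31), 0)
    base_vis = cv2.GaussianBlur(vis, (31, 31), 0)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # 2. Detail layers: keep the larger-magnitude coefficient
    #    (a stand-in for the paper's improved co-occurrence filter).
    detail = np.where(np.abs(det_ir) > np.abs(det_vis), det_ir, det_vis)

    # 3. Base layers: Laplacian-magnitude saliency -> binary weight,
    #    smoothed with a guided filter so the weights follow image structure.
    sal_ir = np.abs(cv2.Laplacian(ir, cv2.CV_32F))
    sal_vis = np.abs(cv2.Laplacian(vis, cv2.CV_32F))
    w = (sal_ir >= sal_vis).astype(np.float32)
    w = cv2.ximgproc.guidedFilter(guide=ir, src=w, radius=45, eps=0.3)
    base = w * base_ir + (1.0 - w) * base_vis

    # 4. Superpose the fused base and detail layers.
    return np.clip(base + detail, 0.0, 1.0)
```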
3

Dong, Yumin, Zhengquan Chen, Ziyi Li, and Feng Gao. "A Multi-Branch Multi-Scale Deep Learning Image Fusion Algorithm Based on DenseNet". Applied Sciences 12, no. 21 (October 30, 2022): 10989. http://dx.doi.org/10.3390/app122110989.

Abstract
Infrared images have good resistance to environmental interference and capture hot-target information well, but they lack rich texture detail and have poor contrast. Visible images have clear, detailed texture information, but their imaging process depends heavily on the environment, whose quality determines the quality of the visible image. This paper presents an infrared and visible image fusion algorithm based on deep learning. Two identical feature extractors extract features of visible and infrared images at different scales; these features are fused through specific fusion methods; and a feature restorer maps the fused features back to an image, compensating for the deficiencies of each modality. The method is tested on infrared-visible images, multi-focus images, and other data sets, and several traditional image fusion algorithms are compared with current advanced image fusion algorithms. The experimental results show that the proposed image fusion method keeps more feature information of the source images in the fused image and achieves excellent results on several image evaluation indexes.
4

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion". Chemosensors 10, no. 4 (March 25, 2022): 124. http://dx.doi.org/10.3390/chemosensors10040124.

Abstract
This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images of the same space without time shifting, and to synthesize the captured images. The proposed method synthesizes visible and near-infrared images using the contourlet transform, principal component analysis, and iCAM06, while the blending method uses color information from the visible image and detailed information from the infrared image. The contourlet transform can decompose an image into directional subimages, making it better at capturing detailed information than other decomposition algorithms. The global tone information is enhanced by iCAM06, which is used for high-dynamic-range imaging. The blended images show a clear appearance, combining the compressed tone information of the visible image with the details of the infrared image.
5

Liu, Zheng, Su Mei Cui, He Yin, and Yu Chi Lin. "Comparative Analysis of Image Measurement Accuracy in High Temperature Based on Visible and Infrared Vision". Applied Mechanics and Materials 300-301 (February 2013): 1681–86. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1681.

Abstract
Image measurement is a common, non-contact dimensional measurement method. However, because of light deflection, visible-light imaging is strongly affected, which greatly reduces measurement accuracy. Various factors in visual measurement at high temperature are analyzed using Planck's theory. Then, by means of light dispersion theory, the image measurement errors of visible and infrared images at high temperature caused by light deviation are comparatively analyzed, and the imaging errors of visible and infrared images are quantified experimentally. Experimental results indicate that, at the same imaging resolution, the relative error of the visible-light image is 3.846 times larger than that of the infrared image at 900°C. Therefore, infrared image measurement is more accurate than visible-light image measurement in high-temperature environments.
6

Zhang, Yugui, Bo Zhai, Gang Wang, and Jianchu Lin. "Pedestrian Detection Method Based on Two-Stage Fusion of Visible Light Image and Thermal Infrared Image". Electronics 12, no. 14 (July 21, 2023): 3171. http://dx.doi.org/10.3390/electronics12143171.

Abstract
Pedestrian detection has important research value and practical significance. It has been used in intelligent monitoring, intelligent transportation, intelligent therapy, and automatic driving. However, in the pixel-level and feature-level fusion of visible light images and thermal infrared images under shadows during the daytime or under low illumination at night in actual surveillance, missed and false pedestrian detections frequently occur. To solve this problem, a pedestrian detection algorithm based on the two-stage fusion of visible light images and thermal infrared images is proposed. In this algorithm, in view of the difference and complementarity of visible light images and thermal infrared images, the two types of images are subjected to pixel-level and feature-level fusion according to the varying daytime conditions. In the pixel-level fusion stage, the thermal infrared image, after brightness enhancement, is fused with the visible image; the resulting pixel-level fusion image contains the information critical for accurate pedestrian detection. In the feature-level fusion stage, during the daytime, the pixel-level fusion image is fused with the visible light image, while under low illumination at night, it is fused with the thermal infrared image. According to the experimental results, the proposed algorithm accurately detects pedestrians under shadows during the daytime and under low illumination at night, improving detection accuracy and reducing the miss rate and false detection rate.
7

Huang, Hui, Linlu Dong, Zhishuang Xue, Xiaofang Liu, and Caijian Hua. "Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns)". PLOS ONE 16, no. 2 (February 19, 2021): e0245563. http://dx.doi.org/10.1371/journal.pone.0245563.

Abstract
Existing visible and infrared image fusion algorithms focus on highlighting infrared targets while neglecting image details, and cannot account for the characteristics of both infrared and visible images. To address this, this paper proposes an image enhancement fusion algorithm combining the Karhunen-Loeve transform and Laplacian pyramid fusion. The detail layer of the source image is obtained by anisotropic diffusion to capture more abundant texture information. The infrared images adopt an adaptive histogram partition and brightness-correction enhancement algorithm to highlight thermal radiation targets. A novel power-function enhancement algorithm that simulates illumination is proposed for visible images to improve their contrast and facilitate human observation. To improve fusion quality, the source images and the enhanced images are transformed by the Karhunen-Loeve transform to form new visible and infrared images. Laplacian pyramid fusion is performed on the new visible and infrared images and superimposed with the detail-layer images to obtain the fusion result. Experimental results on public data sets show that the method is superior to several representative image fusion algorithms in subjective visual effects. In terms of objective evaluation, the fusion result performs well on eight evaluation indicators.
8

Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network". Applied Sciences 10, no. 2 (January 11, 2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract
Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by unique loss functions. The generator, with residual blocks and skip connections, extracts deep features of the source image pairs and generates an elementary fused image with infrared thermal radiation information and visible texture information; more details from the visible images are added to the final images through the discriminator. Activity-level measurements and fusion rules need not be designed manually; they are implemented automatically. Also, there are no complicated multi-scale transforms in this method, so the computational cost and complexity are reduced. Experimental results demonstrate that the proposed method produces desirable fused images, achieving better performance in objective assessment and visual quality compared with nine representative infrared and visible image fusion methods.
9

Niu, Yifeng, Shengtao Xu, Lizhen Wu, and Weidong Hu. "Airborne Infrared and Visible Image Fusion for Target Perception Based on Target Region Segmentation and Discrete Wavelet Transform". Mathematical Problems in Engineering 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/275138.

Abstract
Infrared and visible image fusion is an important precondition for realizing target perception in unmanned aerial vehicles (UAVs), which can then perform various missions. Texture and color information in visible images is abundant, while target information in infrared images is more prominent. Conventional fusion methods are mostly based on region segmentation, so a fused image suitable for target recognition cannot actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which can gain more target information and preserve more background information. Fusion experiments cover three conditions: the target is stationary and observable in both visible and infrared images; the targets are moving and observable in both; and the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
10

Batchuluun, Ganbayar, Se Hyun Nam, and Kang Ryoung Park. "Deep Learning-Based Plant Classification Using Nonaligned Thermal and Visible Light Images". Mathematics 10, no. 21 (November 1, 2022): 4053. http://dx.doi.org/10.3390/math10214053.

Abstract
There have been various studies conducted on plant images. Machine learning algorithms are usually used in visible light image-based studies, whereas in thermal image-based studies, acquired thermal images tend to be analyzed by naked-eye visual examination. However, visible light cameras are sensitive to light and cannot be used in environments with low illumination. Although thermal cameras are not susceptible to these drawbacks, they are sensitive to atmospheric temperature and humidity. Moreover, previous thermal camera-based studies relied on time-consuming manual analyses. Therefore, in this study, we conducted a novel study that simultaneously uses thermal images and corresponding visible light images of plants to solve these problems. The proposed network extracts features from each thermal image and the corresponding visible light image through residual block-based branch networks and combines the features to increase the accuracy of multiclass classification. Additionally, a new database was built in this study by acquiring thermal images and corresponding visible light images of various plants.
11

Zhang, Zili, Yan Tian, Jianxiang Li, and Yiping Xu. "Unsupervised Remote Sensing Image Super-Resolution Guided by Visible Images". Remote Sensing 14, no. 6 (March 21, 2022): 1513. http://dx.doi.org/10.3390/rs14061513.

Abstract
Remote sensing images are widely used in many applications. However, limited by the sensors, it is difficult to obtain high-resolution (HR) remote sensing images. In this paper, we propose a novel unsupervised cross-domain super-resolution method devoted to reconstructing a low-resolution (LR) remote sensing image guided by an unpaired HR visible natural image. To this end, an unsupervised visible image-guided remote sensing image super-resolution network (UVRSR) is built. The network is divided into two learnable branches: a visible image-guided branch (VIG) and a remote sensing image-guided branch (RIG). As HR visible images can provide rich textures and sufficient high-frequency information, the purpose of VIG is to treat them as targets and make full use of their advantages in reconstruction. Specifically, we first use a CycleGAN to map the LR visible natural images to the remote sensing domain; then, we apply an SR network to upscale these simulated remote sensing domain LR images. However, the domain gap between SR remote sensing images and HR visible targets is large, so to enforce domain consistency, we propose a novel domain-ruled discriminator in the reconstruction. Furthermore, inspired by the zero-shot super-resolution network (ZSSR), which explores the internal information of remote sensing images, we add a remote sensing domain inner study to train the SR network in RIG. Extensive experiments show that UVRSR achieves superior results compared with state-of-the-art unpaired and remote sensing SR methods on several challenging remote sensing image datasets.
12

Lee, Ji-Min, Young-Eun An, EunSang Bak, and Sungbum Pan. "Improvement of Negative Emotion Recognition in Visible Images Enhanced by Thermal Imaging". Sustainability 14, no. 22 (November 16, 2022): 15200. http://dx.doi.org/10.3390/su142215200.

Abstract
Facial expressions help in understanding the intentions of others, as they are an essential means of communication, revealing human emotions. Recently, thermal imaging has been playing a complementary role in emotion recognition and is considered an alternative that overcomes the drawbacks of visible imaging. Notably, a relatively severe recognition error for fear, among the negative emotions, frequently occurs in visible imaging. This study aims to improve the recognition performance for fear by using visible and thermal images acquired simultaneously. When fear was not recognized in a visible image, we analyzed the causes of misrecognition and thus identified the conditions under which the visible image should be replaced with a thermal image. This replacement improved emotion recognition performance by 4.54% on average compared with using only visible images. Finally, we confirmed that the thermal image effectively compensates for the visible image's shortcomings.
13

Jia, Weibin, Zhihuan Song, and Zhengguo Li. "Multi-scale Fusion of Stretched Infrared and Visible Images". Sensors 22, no. 17 (September 2, 2022): 6660. http://dx.doi.org/10.3390/s22176660.

Abstract
Infrared (IR) band sensors can capture digital images under challenging conditions, such as haze, smoke, and fog, while visible (VIS) band sensors capture abundant texture information. It is therefore desirable to fuse IR and VIS images to generate a more informative image. In this paper, a novel multi-scale IR and VIS image fusion algorithm is proposed to integrate information from both images into the fused image while preserving the color of the VIS image. A content-adaptive gamma correction is first introduced to stretch the IR images by using one of the simplest edge-preserving filters, which alleviates excessive luminance shifts and color distortions in the fused images. New contrast and exposedness measures are then introduced for the stretched IR and VIS images to achieve weight matrices that are more in line with their characteristics. The IR image and the luminance component of the VIS image, in grayscale or RGB space, are fused by using Gaussian and Laplacian pyramids. The RGB components of the VIS image are finally expanded to generate the fused image if necessary. Comparisons with 10 different state-of-the-art fusion algorithms experimentally demonstrate the effectiveness of the proposed algorithm in terms of computational cost and quality of the fused images.
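As a rough sketch of the pyramid stage described above (only the blending; the content-adaptive gamma stretch and the contrast/exposedness weight construction are omitted, and the level count is an arbitrary assumption), Laplacian pyramids of the two inputs can be blended under a Gaussian pyramid of a per-pixel weight map:

```python
import cv2
import numpy as np

def gaussian_pyr(img, levels):
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyr(img, levels):
    g = gaussian_pyr(img, levels)
    pyr = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
           for i in range(levels)]
    pyr.append(g[-1])  # coarsest residual
    return pyr

def pyramid_blend(ir, vis, w_ir, levels=4):
    """Blend float32 images in [0, 1] with a per-pixel IR weight map in [0, 1]."""
    lp_ir, lp_vis = laplacian_pyr(ir, levels), laplacian_pyr(vis, levels)
    gw = gaussian_pyr(w_ir, levels)
    fused = [w * a + (1 - w) * b for w, a, b in zip(gw, lp_ir, lp_vis)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):  # collapse the pyramid, coarse to fine
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0.0, 1.0)
```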
14

Liu, Xiaomin, Jun-Bao Li, and Jeng-Shyang Pan. "Feature Point Matching Based on Distinct Wavelength Phase Congruency and Log-Gabor Filters in Infrared and Visible Images". Sensors 19, no. 19 (September 29, 2019): 4244. http://dx.doi.org/10.3390/s19194244.

Abstract
Infrared and visible image matching methods have been rising in popularity with the emergence of more kinds of sensors, enabling more applications in visual navigation, precision guidance, image fusion, and medical image analysis. In such applications, image matching is utilized for localization, fusion, image analysis, and so on. In this paper, an infrared and visible image matching approach based on distinct wavelength phase congruency (DWPC) and log-Gabor filters is proposed, and this method is further modified for non-linear image matching across different physical wavelengths. Phase congruency (PC) theory is utilized to obtain PC images with intrinsic and rich image features for images containing complex intensity changes or noise. Then, the maximum and minimum moments of the PC images are computed to obtain the corners in the matched images. To obtain the descriptors, log-Gabor filters are utilized and overlapping subregions are extracted in the neighborhood of certain pixels. To improve the accuracy of the algorithm, the moments of the PCs in the original image and a Gaussian-smoothed image are combined to detect the corners. Since the matched images have different physical wavelengths, it is inappropriate to use the same PC wavelength for both; thus, in the experiments, the PC wavelength is varied with the physical wavelength. For realistic applications, the BiDimRegression method is used to compute the similarity between the two point sets in the infrared and visible images. The proposed approach is evaluated on four data sets with 237 pairs of visible and infrared images, and its performance is compared with state-of-the-art approaches: the edge-oriented histogram descriptor (EHD), phase congruency edge-oriented histogram descriptor (PCEHD), and log-Gabor histogram descriptor (LGHD) algorithms. The experimental results indicate that the accuracy rate of the proposed approach is 50% higher than that of the traditional approaches on infrared and visible images.
15

Wang, Qi, Xiang Gao, Fan Wang, Zhihang Ji, and Xiaopeng Hu. "Feature Point Matching Method Based on Consistent Edge Structures for Infrared and Visible Images". Applied Sciences 10, no. 7 (March 27, 2020): 2302. http://dx.doi.org/10.3390/app10072302.

Abstract
Infrared and visible image matching is an important research topic in the field of multi-modality image processing. Due to differences in image content, such as pixel intensities and gradients caused by the disparate spectra, infrared and visible image matching is challenging in terms of detection repeatability and matching accuracy. To improve the matching performance, a feature detection and description method based on consistent edge structures of images (DDCE) is proposed in this paper. First, consistent edge structures are detected to obtain similar contents of infrared and visible images. Second, common feature points of infrared and visible images are extracted based on the consistent edge structures. Third, feature descriptions are established according to the edge structure attributes, including edge length and edge orientation. Lastly, feature correspondences are calculated according to the distance between feature descriptions. Owing to the utilization of consistent edge structures, the proposed DDCE method can improve detection repeatability and matching accuracy. DDCE is evaluated on two public datasets and compared with several state-of-the-art methods. Experimental results demonstrate that DDCE achieves superior performance for infrared and visible image matching.
16

Wu, Yan Hai, Hao Zhang, Fang Ni Zhang, and Yue Hua Han. "Fusion of Visible and Infrared Images Based on Non-Sampling Contourlet and Wavelet Transform". Applied Mechanics and Materials 599-601 (August 2014): 1523–26. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1523.

Abstract
This paper gives a method for the fusion of visible and infrared images that combines the non-sampling contourlet transform (NSCT) and the wavelet transform. The method first applies contrast enhancement to the infrared image. Next, it performs NSCT decomposition on the visible image and the enhanced infrared image, and then decomposes the low-frequency component of that decomposition using the wavelet transform. Thirdly, different fusion rules are used for the high-frequency subbands of the NSCT decomposition and for the high- and low-frequency subbands of the wavelet decomposition. Finally, the fused image is obtained through wavelet and NSCT reconstruction. Experiments show that the method not only retains the texture details of the visible images but also highlights the targets in the infrared images, achieving a better fusion effect.
17

Li, Shengshi, Yonghua Zou, Guanjun Wang, and Cong Lin. "Infrared and Visible Image Fusion Method Based on a Principal Component Analysis Network and Image Pyramid". Remote Sensing 15, no. 3 (January 24, 2023): 685. http://dx.doi.org/10.3390/rs15030685.

Abstract
The aim of infrared (IR) and visible image fusion is to generate a more informative image for human observation or other computer vision tasks. Activity-level measurement and weight assignment are two key parts of image fusion. In this paper, we propose a novel IR and visible fusion method based on the principal component analysis network (PCANet) and an image pyramid. Firstly, we use a lightweight deep learning network, the PCANet, to obtain the activity-level measurement and weight assignment of IR and visible images. The activity-level measurement obtained by the PCANet has a stronger representation ability for focusing on IR target perception and visible detail description. Secondly, the weights and the source images are decomposed into multiple scales by the image pyramid, and the weighted-average fusion rule is applied at each scale. Finally, the fused image is obtained by reconstruction. The effectiveness of the proposed algorithm was verified on two datasets with more than eighty pairs of test images in total. Compared with nineteen representative methods, the experimental results demonstrate that the proposed method achieves state-of-the-art results in both visual quality and objective evaluation metrics.
18

Gao, Peng, Tian Tian, Tianming Zhao, Linfeng Li, Nan Zhang, and Jinwen Tian. "GF-Detection: Fusion with GAN of Infrared and Visible Images for Vehicle Detection at Nighttime". Remote Sensing 14, no. 12 (June 9, 2022): 2771. http://dx.doi.org/10.3390/rs14122771.

Abstract
Vehicles are important targets in remote sensing applications, and nighttime vehicle detection has been a hot research topic in recent years. Vehicles in visible images at nighttime have inadequate features for object detection, while infrared images retain the contours of vehicles but lose color information. Thus, it is valuable to fuse infrared and visible images to improve vehicle detection performance at nighttime. However, it is still a challenge to design effective fusion models due to the complexity of visible and infrared images. To improve nighttime vehicle detection performance, this paper proposes a fusion model of infrared and visible images with Generative Adversarial Networks (GAN) for vehicle detection, named GF-detection. GANs are widely used in image reconstruction and have recently been introduced into image fusion. To be specific, to exploit more features for the fusion, a GAN is utilized to fuse the infrared and visible images via image reconstruction. The generator fuses the image features and detection features and then generates reconstructed images for the discriminator to classify. Two branches, a visible branch and an infrared branch, are designed in the GF-detection model, with different feature extraction strategies adopted according to the characteristics of visible and infrared images. Detection features and a self-attention mechanism are added to the fusion model, aiming to build a detection-task-driven fusion model of infrared and visible images. Extensive experiments on nighttime images demonstrate the effectiveness of the proposed fusion model for nighttime vehicle detection.
19

Du, Qinglei, Han Xu, Yong Ma, Jun Huang, and Fan Fan. "Fusing Infrared and Visible Images of Different Resolutions via Total Variation Model". Sensors 18, no. 11 (November 8, 2018): 3827. http://dx.doi.org/10.3390/s18113827.

Abstract
In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images constantly suffer from markedly lower resolution compared with the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, the principle of existing fusion rules typically focuses on preserving texture details in source images, which may be inappropriate for fusing infrared thermal radiation information because it is characterized by pixel intensities, possibly neglecting the prominence of targets in fused images. Faced with such difficulties and challenges, we propose a novel method to fuse infrared and visible images of different resolutions and generate high-resolution resulting images to obtain clear and accurate fused images. Specifically, the fusion problem is formulated as a total variation (TV) minimization problem. The data fidelity term constrains the pixel intensity similarity of the downsampled fused image with respect to the infrared image, and the regularization term compels the gradient similarity of the fused image with respect to the visible image. The fast iterative shrinkage-thresholding algorithm (FISTA) framework is applied to improve the convergence rate. Our resulting fused images are similar to super-resolved infrared images, which are sharpened by the texture information from visible images. Advantages and innovations of our method are demonstrated by the qualitative and quantitative comparisons with six state-of-the-art methods on publicly available datasets.
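Under assumed notation (x the fused image, u the low-resolution infrared image, v the visible image, D a downsampling operator, and λ a balance weight; the symbols are ours, not the authors'), the objective described in the abstract takes the form:

```latex
\min_{x} \; \| D x - u \|_{2}^{2} \; + \; \lambda \, \| \nabla x - \nabla v \|_{1}
```

The first term ties the downsampled fused image to the infrared pixel intensities; the second compels the fused gradients to follow the visible image. Splitting the objective into a smooth fidelity term and a non-smooth ℓ1 term is what makes FISTA applicable.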
20

Chen, Xianglong, Haipeng Wang, Yaohui Liang, Ying Meng, and Shifeng Wang. "A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network". Sensors 22, no. 1 (December 31, 2021): 304. http://dx.doi.org/10.3390/s22010304.

Abstract
The presence of fake pictures affects the reliability of visible face images under specific circumstances. This paper presents a novel adversarial neural network named FTSGAN for infrared and visible image fusion; the FTSGAN model is used to fuse the features of infrared and visible face images to improve face recognition. In the FTSGAN model design, the Frobenius norm (F), total variation norm (TV), and structural similarity index measure (SSIM) are employed: the F and TV norms are used to constrain the gray level and the gradient of the image, while the SSIM is used to constrain the image structure. The FTSGAN fuses infrared and visible face images that contain bio-information for heterogeneous face recognition tasks. Experiments on hundreds of face images demonstrate its excellent performance. Principal component analysis (PCA) and linear discriminant analysis (LDA) are used for face recognition. The face recognition performance after fusion improved by 1.9% compared with that before fusion, and the final face recognition rate was 94.4%. The proposed method achieves better quality, a faster rate, and more robustness than methods that use only visible images for face recognition.
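Written out with assumed symbols (F the fused image, I_ir and I_vis the inputs, α and β balance weights; the exact pairing of terms with inputs is our reading of the abstract, not a published formula), a generator loss of this shape would be:

```latex
\mathcal{L}_{G} \;=\; \| F - I_{\mathrm{ir}} \|_{F}^{2}
\;+\; \alpha \, \mathrm{TV}(F)
\;+\; \beta \, \bigl( 1 - \mathrm{SSIM}(F,\, I_{\mathrm{vis}}) \bigr)
```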
21

Shen, Sen, Di Li, Liye Mei, Chuan Xu, Zhaoyi Ye, Qi Zhang, Bo Hong, Wei Yang, and Ying Wang. "DFA-Net: Multi-Scale Dense Feature-Aware Network via Integrated Attention for Unmanned Aerial Vehicle Infrared and Visible Image Fusion". Drones 7, no. 8 (August 6, 2023): 517. http://dx.doi.org/10.3390/drones7080517.

Abstract
Fusing infrared and visible images taken by an unmanned aerial vehicle (UAV) is a challenging task: infrared images distinguish the target from the background by differences in infrared radiation, but their low resolution makes the effect less pronounced, while the visible spectrum has high spatial resolution and rich texture but is easily affected by harsh conditions such as low light. Therefore, the fusion of infrared and visible light has the potential to provide complementary advantages. In this paper, we propose a multi-scale dense feature-aware network via integrated attention for infrared and visible image fusion, namely DFA-Net. Firstly, we construct a dual-channel encoder to extract the deep features of infrared and visible images. Secondly, we adopt a nested decoder to adequately integrate the features of the various scales of the encoder so as to realize a multi-scale feature representation of visible image detail texture and infrared image salient targets. Then, we present a feature-aware network via integrated attention to further fuse the feature information of different scales, which can focus on the specific advantageous features of infrared and visible images. Finally, we use unsupervised gradient estimation and intensity loss to learn significant fusion features of infrared and visible images. In addition, our proposed DFA-Net approach addresses the challenges of fusing infrared and visible images captured by a UAV. The results show that DFA-Net achieves excellent image fusion performance on nine quantitative evaluation indexes under a low-light environment.
22

Zhou, Ze Hua, and Min Tan. "Infrared Image and Visible Image Fusion Based on Wavelet Transform". Advanced Materials Research 756-759 (September 2013): 2850–56. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2850.

Abstract
For the same scene, fusing an infrared image and a visible image can simultaneously exploit the information in both original images, overcoming the limitations and differences of a single sensor in geometric, spectral, and spatial resolution and improving image quality, which helps to locate, identify, and explain physical phenomena and events. This paper puts forward an image fusion method based on the wavelet transform. For the wavelet-decomposed frequency bands, the principles for selecting the high-frequency and low-frequency coefficients are discussed separately, highlighting contour regions while attenuating fine detail. The fused image combines the characteristics of two or more source images, better matches human or machine visual characteristics, and supports further analysis and understanding of the image, as well as detection, recognition, and tracking of targets.
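A minimal sketch of such a wavelet fusion, assuming the common average rule for low-frequency coefficients and the max-absolute rule for high-frequency coefficients rather than the paper's exact selection principles (PyWavelets supplies the transform):

```python
import numpy as np
import pywt

def wavelet_fuse(ir: np.ndarray, vis: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two registered grayscale float images of equal size."""
    c_ir = pywt.wavedec2(ir, wavelet, level=level)
    c_vis = pywt.wavedec2(vis, wavelet, level=level)

    # Low-frequency approximation: average the two sources.
    fused = [(c_ir[0] + c_vis[0]) / 2.0]
    # High-frequency subbands: keep the larger-magnitude coefficient.
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    # Reconstruction; output may be one pixel larger for odd input sizes.
    return pywt.waverec2(fused, wavelet)
```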
23

Li, Liangliang, Ming Lv, Zhenhong Jia, Qingxin Jin, Minqin Liu, Liangfu Chen, and Hongbing Ma. "An Effective Infrared and Visible Image Fusion Approach via Rolling Guidance Filtering and Gradient Saliency Map". Remote Sensing 15, no. 10 (May 9, 2023): 2486. http://dx.doi.org/10.3390/rs15102486.

Abstract
To solve the problems of brightness and detail information loss in infrared and visible image fusion, an effective fusion method using rolling guidance filtering and a gradient saliency map is proposed in this paper. Rolling guidance filtering is used to decompose the input images into approximate layers and residual layers; an energy attribute fusion model is used to fuse the approximate layers; and the gradient saliency map is introduced, with corresponding weight matrices constructed, to fuse the residual layers. The fused image is generated by reconstructing the fused approximate-layer sub-image and residual-layer sub-images. Experimental results demonstrate the superiority of the proposed infrared and visible image fusion method.
24

Zhang, Hui, Xu Ma, and Yanshan Tian. "An Image Fusion Method Based on Curvelet Transform and Guided Filter Enhancement". Mathematical Problems in Engineering 2020 (June 27, 2020): 1–8. http://dx.doi.org/10.1155/2020/9821715.

Abstract
To improve the clarity of image fusion and address the fact that the fusion result is affected by the illumination and weather conditions of the visible image, a fusion method of infrared and visible images for night-vision context enhancement is proposed. First, a guided filter is used to enhance the details of the visible image. Then, the enhanced visible and infrared images are decomposed by the curvelet transform. An improved sparse representation is used to fuse the low-frequency part, while the high-frequency part is fused with parameter-adaptive pulse-coupled neural networks. Finally, the fusion result is obtained by the inverse curvelet transform. The experimental results show that the proposed method performs well in detail processing, edge protection, and preservation of source image information.
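The guided-filter detail-enhancement step can be sketched as a self-guided base/detail split with an amplified detail layer; the radius, eps, and gain below are illustrative assumptions, and cv2.ximgproc requires opencv-contrib:

```python
import cv2
import numpy as np

def guided_detail_boost(vis: np.ndarray, radius: int = 8,
                        eps: float = 0.01, gain: float = 2.5) -> np.ndarray:
    """Enhance visible-image details before fusion via a base/detail split."""
    vis = vis.astype(np.float32) / 255.0
    # Self-guided filtering yields an edge-preserving base layer.
    base = cv2.ximgproc.guidedFilter(guide=vis, src=vis, radius=radius, eps=eps)
    detail = vis - base          # fine structure removed by the filter
    return np.clip(base + gain * detail, 0.0, 1.0)  # amplified detail
```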
25

Zhao, Liangjun, Yun Zhang, Linlu Dong, and Fengling Zheng. "Infrared and visible image fusion algorithm based on spatial domain and image features". PLOS ONE 17, no. 12 (December 30, 2022): e0278055. http://dx.doi.org/10.1371/journal.pone.0278055.

Abstract
Multi-scale image decomposition is crucial for image fusion, extracting prominent feature textures from infrared and visible light images to obtain clear fused images with more texture. This paper proposes a fusion method for infrared and visible light images based on the spatial domain and image features to obtain high-resolution, texture-rich images. First, an efficient hierarchical image clustering algorithm based on superpixel fast pixel clustering directly performs multi-scale decomposition of each source image in the spatial domain, obtaining high-frequency, medium-frequency, and low-frequency layers and extracting the maximum and minimum values of each source image. Then, using the attribute parameters of each layer as fusion weights, high-definition fused images are obtained through adaptive feature fusion. Moreover, by performing multi-scale decomposition directly in the spatial domain, the proposed algorithm avoids the information loss caused by conversion between the spatial and frequency domains in traditional frequency-domain feature extraction. Eight image quality indicators are compared against other fusion algorithms. Experimental results show that this method outperforms the comparison methods in both subjective and objective measures, producing images with high definition and rich textures.
26

Li, Xiang, Yue Shun He, Xuan Zhan, and Feng Yu Liu. "A Rapid Fusion Algorithm of Infrared and the Visible Images Based on Directionlet Transform". Applied Mechanics and Materials 20-23 (January 2010): 45–51. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.45.

Abstract
Based on an analysis of the features of infrared and visible images, this paper proposes an improved fusion algorithm using the directionlet transform. The algorithm works as follows: first, the color visible images are separated to obtain the component images; then anisotropic decomposition is applied to the component images and the infrared images; after analyzing these images, they are processed according to regional-energy rules; finally, the intense color is incorporated to obtain the fused image. Simulation results show that this algorithm can effectively fuse infrared and visible images: the fused images maintain environmental details while underlining edge features, which suits fusion with strong edges and makes the algorithm robust and convenient.
27

Wang, Jingjing, Jinwen Ren, Hongzhen Li, Zengzhao Sun, Zhenye Luan, Zishu Yu, Chunhao Liang, Yashar E. Monfared, Huaqiang Xu, and Qing Hua. "DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion". Photonics 9, no. 3 (March 3, 2022): 150. http://dx.doi.org/10.3390/photonics9030150.

Abstract
Infrared images can provide clear contrast information to distinguish between the target and the background under any lighting conditions, while visible images provide rich texture details compatible with the human visual system. The fusion of a visible image and an infrared image will thus contain both comprehensive contrast information and texture details. In this study, a novel approach for the fusion of infrared and visible images is proposed based on a dual-discriminator generative adversarial network with a squeeze-and-excitation module (DDGANSE). Our approach establishes adversarial training between one generator and two discriminators. The goal of the generator is to generate images that are similar to the source images and contain the information from both infrared and visible source images; the purpose of the two discriminators is to increase the similarity between the generated image and the infrared and visible images, respectively. We experimentally demonstrated that, with continuous adversarial training, the images generated by DDGANSE retain the advantages of both infrared and visible images, with significant contrast information and rich texture details. Finally, we compared the performance of our proposed method with previously reported techniques for fusing infrared and visible images, using both quantitative and qualitative assessments. Our experiments on the TNO dataset demonstrate that the proposed method shows superior performance compared with other similar methods reported in the literature across various performance metrics.
28

Ma, Weihong, Kun Wang, Jiawei Li, Simon X. Yang, Junfei Li, Lepeng Song, and Qifeng Li. "Infrared and Visible Image Fusion Technology and Application: A Review". Sensors 23, no. 2 (January 4, 2023): 599. http://dx.doi.org/10.3390/s23020599.

Abstract
The images acquired by a single visible light sensor are very susceptible to light conditions, weather changes, and other factors, while the images acquired by a single infrared sensor generally have poor resolution, low contrast, low signal-to-noise ratio, and blurred visual effects. The fusion of visible and infrared imagery can avoid the disadvantages of the two single sensors and, by combining the advantages of both, significantly improve image quality. The fusion of infrared and visible images is widely used in agriculture, industry, medicine, and other fields. In this study, firstly, the architecture of mainstream infrared and visible image fusion technology and its applications were reviewed; secondly, the application status in robot vision, medical imaging, agricultural remote sensing, and industrial defect detection was discussed; thirdly, the evaluation indicators of the main image fusion methods were grouped into subjective and objective evaluation, the properties of current mainstream technologies were analyzed and compared, and the outlook for image fusion was assessed; finally, infrared and visible image fusion was summarized. The results show that the definition and efficiency of fused infrared and visible images have improved significantly. However, some problems remain, such as poor accuracy of the fused image and irretrievably lost pixels. There is a need to improve the adaptive design of traditional algorithm parameters and to combine innovation in fusion algorithms with the optimization of neural networks, so as to further improve image fusion accuracy, reduce noise interference, and improve the real-time performance of the algorithms.
29

Huo, Xing, Yinping Deng, and Kun Shao. "Infrared and Visible Image Fusion with Significant Target Enhancement". Entropy 24, no. 11 (November 10, 2022): 1633. http://dx.doi.org/10.3390/e24111633.

Abstract
Existing fusion rules focus on retaining detailed information in the source image, but as the thermal radiation information in infrared images is mainly characterized by pixel intensity, these fusion rules are likely to result in reduced saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant target enhancement, aiming to inject thermal targets from infrared images into visible images to enhance target saliency while retaining important details in visible images. First, the source image is decomposed with multi-level Gaussian curvature filtering to obtain background information with high spatial resolution. Second, the large-scale layers are fused using ResNet50 and maximizing weights based on the average operator to improve detail retention. Finally, the base layers are fused by incorporating a new salient target detection method. The subjective and objective experimental results on TNO and MSRS datasets demonstrate that our method achieves better results compared to other traditional and deep learning-based methods.
30

Liu, Yaochen, Lili Dong, Yuanyuan Ji, and Wenhai Xu. "Infrared and Visible Image Fusion through Details Preservation". Sensors 19, no. 20 (October 20, 2019): 4556. http://dx.doi.org/10.3390/s19204556.

Abstract
In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich, tiny details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that their features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our proposed method has many advantages, including (i) better visual quality of the fused image in subjective evaluation, and (ii) better objective assessment of those images.
31

Niu, Yi Feng, Sheng Tao Xu, and Wei Dong Hu. "Fusion of Infrared and Visible Image Based on Target Regions for Environment Perception". Applied Mechanics and Materials 128-129 (October 2011): 589–93. http://dx.doi.org/10.4028/www.scientific.net/amm.128-129.589.

Abstract
Infrared and visible image fusion is an important precondition for realizing target perception in unmanned aerial vehicles (UAVs), based on which UAVs can perform various missions. The details in visible images are abundant, while the target information is more prominent in infrared images. However, conventional fusion methods are mostly based on region segmentation, so a fused image suitable for target recognition cannot actually be acquired. In this paper, a novel fusion method for infrared and visible images based on target regions in the discrete wavelet transform (DWT) domain is proposed, which can gain more target information and preserve more details. Experimental results show that our method can generate better fused images for target recognition.
32

Yin, Ruyi, Bin Yang, Zuyan Huang, and Xiaozhi Zhang. "DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network". Sensors 23, no. 16 (August 11, 2023): 7097. http://dx.doi.org/10.3390/s23167097.

Abstract
Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework for the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively. The transformer architecture is introduced in the infrared feature extraction branch, which can force the network to focus on the local features of infrared images while still obtaining their contextual information. The visible feature extraction branch uses residual dense blocks to fully extract the rich background and texture detail information of visible images. In this way, it can provide better infrared targets and visible details for the fused image. Experimental results on multiple datasets indicate that DSA-Net outperforms state-of-the-art methods in both qualitative and quantitative evaluations. In addition, we also apply the fusion results to the target detection task, which indirectly demonstrates the fusion performances of our method.
33

Yongshi, Ye, Ma Haoyu, Nima Tashi, Liu Xinting, Yuan Yuchen, and Shang Zihang. "Object Detection Based on Fusion of Visible and Infrared Images". Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012021. http://dx.doi.org/10.1088/1742-6596/2560/1/012021.

Abstract
In consideration of the complementary characteristics of visible light and infrared images, this paper proposes a novel method for object detection based on the fusion of these two types of images, thereby enhancing detection accuracy even under harsh environmental conditions. Specifically, we employ an improved autoencoder (AE) network, which encodes and decodes the visible light and infrared images into a dual-scale image decomposition; by reconstructing the original images with the decoder, we highlight the details of the fused image. A YOLOv5 network is then constructed based on this fused image, and its parameters are adjusted accordingly to achieve accurate object detection. By exploiting the complementary information between the two image types, our method effectively enhances the precision of object detection.
34

Gu, Lei, and Juan Meng. "Wireless Sensor System of UAV Infrared Image and Visible Light Image Registration Fusion". Journal of Electrical and Computer Engineering 2022 (June 2, 2022): 1–15. http://dx.doi.org/10.1155/2022/9245014.

Abstract
The application of multisource sensors to drones must be supported by high-quality images. When two or more multisensor images of the same scene or target are interpreted simultaneously, the images obtained by the UAV sensors are limited by imaging time and shooting angle and may not be aligned in spatial position, which affects the fusion result. During imaging, differences in sensor, imaging angle, and environmental conditions cause the acquired sensor images to exhibit rotation, translation, and other spatial deformations, so direct image fusion is impossible. Therefore, before multisensor image fusion, the image registration process must be completed to ensure that the two images are spatially aligned. This paper analyzes the relevant principles and, based on the Powell search algorithm and an improved walking algorithm, proposes an algorithm combining the two. This paper also studies several traditional and neutrosophic image fusion methods. Combined with the fusion optimization algorithm proposed in this paper, the approach greatly reduces computation time and improves the performance and success rate of the optimization algorithm.
35

IIDA, Kenta, and Hitoshi KIYA. "Robust Image Identification without Visible Information for JPEG Images". IEICE Transactions on Information and Systems E101.D, no. 1 (2018): 13–19. http://dx.doi.org/10.1587/transinf.2017mup0005.

36

Pei, Soo-Chang, and Yi-Chong Zeng. "A Novel Image Recovery Algorithm for Visible Watermarked Images". IEEE Transactions on Information Forensics and Security 1, no. 4 (December 2006): 543–50. http://dx.doi.org/10.1109/tifs.2006.885031.

37

Choi, Kyuha, Changhyun Kim, Myung-Ho Kang, and Jong Beom Ra. "Resolution Improvement of Infrared Images Using Visible Image Information". IEEE Signal Processing Letters 18, no. 10 (October 2011): 611–14. http://dx.doi.org/10.1109/lsp.2011.2165842.

38

Liu, Yu, Xun Chen, Juan Cheng, Hu Peng, and Zengfu Wang. "Infrared and visible image fusion with convolutional neural networks". International Journal of Wavelets, Multiresolution and Information Processing 16, no. 03 (May 2018): 1850018. http://dx.doi.org/10.1142/s0219691318500182.

Abstract
The fusion of infrared and visible images of the same scene aims to generate a composite image which can provide a more comprehensive description of the scene. In this paper, we propose an infrared and visible image fusion method based on convolutional neural networks (CNNs). In particular, a siamese convolutional network is applied to obtain a weight map which integrates the pixel activity information from two source images. This CNN-based approach can deal with two vital issues in image fusion as a whole, namely, activity level measurement and weight assignment. Considering the different imaging modalities of infrared and visible images, the merging procedure is conducted in a multi-scale manner via image pyramids and a local similarity-based strategy is adopted to adaptively adjust the fusion mode for the decomposed coefficients. Experimental results demonstrate that the proposed method can achieve state-of-the-art results in terms of both visual quality and objective assessment.
APA, Harvard, Vancouver, ISO, and other styles
39

Jang, Hyoseon, Sangkyun Kim, Suhong Yoo, Soohee Han and Hong-Gyoo Sohn. "Feature Matching Combining Radiometric and Geometric Characteristics of Images, Applied to Oblique- and Nadir-Looking Visible and TIR Sensors of UAV Imagery". Sensors 21, no. 13 (July 4, 2021): 4587. http://dx.doi.org/10.3390/s21134587.

Full text
Abstract
Large amounts of information must be identified and produced in the course of carrying out projects of interest. Thermal infrared (TIR) images are extensively used because they can provide information that cannot be extracted from visible images. In particular, TIR oblique images facilitate the acquisition of information on a building’s facade that is challenging to obtain from a nadir image. When a TIR oblique image and the 3D information acquired from conventional visible nadir imagery are combined, a great synergy for identifying surface information can be created. However, it is an onerous task to match common points in the images. In this study, a robust matching method for image pairs that combine different wavelengths and geometries (i.e., visible nadir-looking vs. TIR oblique, and visible oblique vs. TIR nadir-looking) is proposed. Three main processes of phase congruency, histogram matching, and Image Matching by Affine Simulation (IMAS) were adjusted to accommodate the radiometric and geometric differences of matched image pairs. The method was applied to Unmanned Aerial Vehicle (UAV) images of building and non-building areas. The results were compared with frequently used matching techniques, such as scale-invariant feature transform (SIFT), speeded-up robust features (SURF), synthetic aperture radar–SIFT (SAR–SIFT), and Affine SIFT (ASIFT). The method outperforms the other matching methods in root mean square error (RMSE) and matching performance (matched and not matched). The proposed method is believed to be a reliable solution for pinpointing surface information through image matching with different geometries obtained via TIR and visible sensors.
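Of the three processes, histogram matching is the easiest to illustrate. A minimal sketch, assuming scikit-image is available and grayscale inputs (phase congruency and IMAS are not reproduced here):

```python
# Sketch: reduce radiometric differences before feature matching by
# matching the TIR intensity distribution to the visible image.
from skimage import exposure

def radiometric_align(tir, visible):
    """Return the TIR image with its histogram matched to the visible image."""
    return exposure.match_histograms(tir, visible)
```

A call such as `aligned = radiometric_align(tir_gray, vis_gray)` would then feed the geometric matching stage.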
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception". Entropy 24, no. 10 (September 21, 2022): 1327. http://dx.doi.org/10.3390/e24101327.

Full text
Abstract
Infrared-visible fusion has great potential in night-vision enhancement for intelligent vehicles. The fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods do not have explicit and effective rules, which leads to poor contrast and saliency of the target. In this paper, we propose the SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Specifically, the ASG module transfers the semantics of the target and background to the fusion process for target highlighting. The AVP module analyzes the visual features from the global structure and local details of the visible and fusion images and then guides the fusion network to adaptively generate a weight map of signal completion so that the resulting fusion images possess a natural and visible appearance. We construct a joint distribution function between the fusion images and the corresponding semantics and use the discriminator to improve the fusion performance in terms of natural appearance and target saliency. Experimental results demonstrate that the proposed ASG and AVP modules can effectively guide the image-fusion process by selectively preserving the details in visible images and the salient information of targets in infrared images. The SGVPGAN exhibits significant improvements over other fusion methods.
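The ASG and AVP modules are not specified in detail in the abstract, but the general shape of an adversarial fusion objective can be sketched in PyTorch as follows; the loss weights, the max-based intensity target, and the gradient operator are all assumptions, not the authors' formulation.

```python
# Sketch: a generator objective of the kind used in adversarial fusion.
# An adversarial term pushes the fused image toward a natural appearance;
# fidelity terms keep IR intensity (salient targets) and visible gradients.
import torch
import torch.nn.functional as F

def gradient(x):
    """Simple forward-difference gradient magnitude for (N, C, H, W) tensors."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))

def generator_loss(fused, ir, vis, d_fake, lam_adv=0.1, lam_grad=10.0):
    # d_fake: discriminator logits for the fused image.
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    intensity = F.l1_loss(fused, torch.max(ir, vis))   # keep salient IR targets
    grad = F.l1_loss(gradient(fused), gradient(vis))   # keep visible texture
    return lam_adv * adv + intensity + lam_grad * grad
```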
APA, Harvard, Vancouver, ISO, and other styles
41

Pavez, Vicente, Gabriel Hermosilla, Francisco Pizarro, Sebastián Fingerhuth and Daniel Yunge. "Thermal Image Generation for Robust Face Recognition". Applied Sciences 12, no. 1 (January 5, 2022): 497. http://dx.doi.org/10.3390/app12010497.

Full text
Abstract
This article shows how to create a robust thermal face recognition system based on the FaceNet architecture. We propose a method for generating thermal images to create a thermal face database with six different attributes (frown, glasses, rotation, normal, vocal, and smile) based on various deep learning models. First, we use StyleCLIP, which manipulates the latent space of the input visible image to add the desired attributes to the visible face. Second, we use the GANs N’ Roses (GNR) model, a multimodal image-to-image framework. It uses style and content maps to generate thermal images from visible images, using generative adversarial approaches. Using the proposed generator system, we create a database of synthetic thermal faces composed of more than 100k images corresponding to 3227 individuals. When trained and tested using the synthetic database, the Thermal-FaceNet model obtained 99.98% accuracy. Furthermore, when tested with a real database, the accuracy was more than 98%, validating the proposed thermal image generator system.
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Kai Wei, Tian Hua Chen, Su Xia Xing and Jing Xian Li. "Infrared and Visible Image Registration Base on SIFT Features". Key Engineering Materials 500 (January 2012): 383–89. http://dx.doi.org/10.4028/www.scientific.net/kem.500.383.

Full text
Abstract
In target tracking and recognition systems, infrared sensors and visible-light sensors are two of the most commonly used sensors; effective fusion of these two kinds of images can greatly enhance the accuracy and reliability of identification. The registration accuracy of infrared and visible-light images is improved by modifying the SIFT algorithm, allowing infrared and visible images to be registered more quickly and accurately. The method produces good registration results by equalizing the infrared image histogram, reasonably reducing the number of Gaussian blur levels when building the pyramid in the SIFT algorithm, adjusting thresholds appropriately, and limiting the range of gradient directions in the descriptor. The resulting features are invariant to rotation, image scale, and changes in illumination.
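A baseline version of this pipeline is straightforward with OpenCV: histogram-equalize the infrared image, match SIFT features under Lowe's ratio test, and estimate a RANSAC homography. The deeper modifications to the SIFT pyramid and descriptor described above are not reproduced; the ratio and RANSAC threshold below are conventional defaults.

```python
# Sketch: SIFT-based IR-to-visible registration with OpenCV (>= 4.4).
import cv2
import numpy as np

def register_ir_to_visible(ir_gray, vis_gray, ratio=0.75):
    ir_eq = cv2.equalizeHist(ir_gray)          # boost IR contrast first
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ir_eq, None)
    k2, d2 = sift.detectAndCompute(vis_gray, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    # Assumes at least 4 good matches survive the ratio test.
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(ir_gray, H, vis_gray.shape[1::-1])
```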
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, Yuqing, Guangyuan Fu, Hongqiao Wang and Shaolei Zhang. "The Fusion of Unmatched Infrared and Visible Images Based on Generative Adversarial Networks". Mathematical Problems in Engineering 2020 (March 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/3739040.

Full text
Abstract
Visible images offer clear texture information and high spatial resolution but are unreliable at night or under ambient occlusion. Infrared images can display target thermal radiation information by day and night, in varied weather, and under ambient occlusion. However, infrared images often lack good contour and texture information. Therefore, an increasing number of researchers are fusing visible and infrared images to obtain more information from them, which requires two completely matched images. However, it is difficult to obtain perfectly matched visible and infrared images in practice. In view of the above issues, we propose a new network model based on generative adversarial networks (GANs) to fuse unmatched infrared and visible images. Our method generates the corresponding infrared image from a visible image and fuses the two images together to obtain more information. The effectiveness of the proposed method is verified qualitatively and quantitatively through experimentation on public datasets. In addition, the fused images generated by the proposed method contain more abundant texture and thermal radiation information than those of other methods.
APA, Harvard, Vancouver, ISO, and other styles
44

Santoyo-Garcia, Hector, Eduardo Fragoso-Navarro, Rogelio Reyes-Reyes, Clara Cruz-Ramos and Mariko Nakano-Miyatake. "Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras". Security and Communication Networks 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/7903198.

Full text
Abstract
In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras found in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. The method thus enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one exists. We also take the Human Visual System (HVS) into consideration so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only a binary watermark pattern is supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, in addition to copyright protection.
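At its core, visible watermark embedding is an alpha blend. The toy sketch below uses a fixed global alpha on a grayscale image; the paper's contribution, an HVS-driven, content-adaptive strength applied directly in the Bayer CFA domain, is deliberately simplified away here.

```python
# Sketch: visible watermark embedding as a fixed-strength alpha blend.
import numpy as np

def embed_visible_watermark(image, watermark, alpha=0.25):
    """Blend a grayscale watermark into a grayscale image of the same shape."""
    img = image.astype(np.float32)
    wm = watermark.astype(np.float32)
    out = (1.0 - alpha) * img + alpha * wm
    return np.clip(out, 0, 255).astype(np.uint8)
```

An HVS-aware scheme would replace the scalar `alpha` with a per-pixel map derived from local luminance and texture, so the mark stays perceptible without obscuring detail.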
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Xilai, Xiaosong Li and Wuyang Liu. "CBFM: Contrast Balance Infrared and Visible Image Fusion Based on Contrast-Preserving Guided Filter". Remote Sensing 15, no. 12 (June 7, 2023): 2969. http://dx.doi.org/10.3390/rs15122969.

Full text
Abstract
Infrared (IR) and visible image fusion is an important data fusion and image processing technique that can accurately and comprehensively integrate the thermal radiation and texture details of source images. However, existing methods neglect the high-contrast fusion problem, leading to suboptimal fusion performance when thermal radiation target information in IR images is replaced by high-contrast information in visible images. To address this limitation, we propose a contrast-balanced framework for IR and visible image fusion. Specifically, a novel contrast balance strategy is proposed to process visible images and reduce energy while allowing for detailed compensation of overexposed areas. Moreover, a contrast-preserving guided filter is proposed to decompose the image into energy and detail layers to reduce high contrast and filter information. To effectively extract the active information in the detail layer and the brightness information in the energy layer, we propose a new weighted energy-of-Laplacian operator and a Gaussian distribution of the image entropy scheme to fuse the detail and energy layers, respectively. The fused result is obtained by adding the detail and energy layers. Extensive experimental results demonstrate that the proposed method can effectively reduce the high contrast and highlighted target information in an image while simultaneously preserving details. In addition, the proposed method exhibits superior performance compared to state-of-the-art methods in both qualitative and quantitative assessments.
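The energy/detail split can be approximated with the stock guided filter from opencv-contrib, as sketched below; the paper's contrast-preserving variant, its fusion operators, and the radius/eps values here are all assumptions.

```python
# Sketch: decompose an image into an energy (base) layer and a detail layer
# using a plain guided filter (requires the opencv-contrib-python package).
import cv2
import numpy as np

def energy_detail_split(img, radius=8, eps=0.04):
    f = img.astype(np.float32) / 255.0
    # Guide the filter with the image itself (edge-preserving smoothing).
    energy = cv2.ximgproc.guidedFilter(f, f, radius, eps)
    detail = f - energy          # high-frequency residual
    return energy, detail
```

The fused result in the paper is then `fused_energy + fused_detail`, with each layer fused by its own rule.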
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Jinfen, Bo Cheng, Xiaoping Zhang, Tengfei Long, Bo Chen, Guizhou Wang and Degang Zhang. "A TIR-Visible Automatic Registration and Geometric Correction Method for SDGSAT-1 Thermal Infrared Image Based on Modified RIFT". Remote Sensing 14, no. 6 (March 14, 2022): 1393. http://dx.doi.org/10.3390/rs14061393.

Full text
Abstract
High-resolution thermal infrared (TIR) remote sensing images can retrieve land surface temperature more accurately and describe the spatial pattern of the urban thermal environment. The Thermal Infrared Spectrometer (TIS), one of the sensors aboard SDGSAT-1, currently offers high spatial resolution among spaceborne thermal infrared sensors together with global data acquisition capability, and is an important complement to the existing international mainstream satellites. To produce standard data products rapidly and accurately, an automatic registration and geometric correction method needs to be developed. Unlike visible–visible image registration, thermal infrared images have blurred edge details and obvious non-linear radiometric differences from visible images, which makes TIR-visible image registration challenging. To address these problems, homomorphic filtering is employed to enhance TIR image details, and a modified RIFT algorithm is proposed to achieve TIR-visible image registration. Unlike RIFT, which uses MIM for feature description, the proposed modified RIFT uses a novel binary pattern string for descriptor construction. With sufficient and uniformly distributed ground control points, a two-step orthorectification framework, from the SDGSAT-1 TIS L1A image to the L4 orthoimage, is proposed in this study. The first experiment, with six TIR-visible image pairs captured over different landforms, is performed to verify the registration performance; the results indicate that homomorphic filtering and the modified RIFT greatly increase the number of corresponding points. The second experiment, with one scene of an SDGSAT-1 TIS image, is executed to test the proposed orthorectification framework. Subsequently, 52 GCPs are selected manually to evaluate the orthorectification accuracy. The results indicate that the proposed orthorectification framework helps improve geometric accuracy and provides a guarantee for subsequent thermal infrared applications.
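Homomorphic filtering itself is compact enough to sketch: take the log of the image, apply a high-emphasis transfer function in the Fourier domain, and exponentiate back. The Gaussian transfer function and the cutoff/gain values below are conventional choices, not those of the paper.

```python
# Sketch: homomorphic filtering to sharpen detail in a 2D grayscale image.
import numpy as np

def homomorphic_filter(img, d0=30.0, gamma_l=0.5, gamma_h=1.5):
    f = np.log1p(img.astype(np.float64))              # multiplicative -> additive
    F_img = np.fft.fftshift(np.fft.fft2(f))
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Gaussian high-emphasis transfer function: damp illumination (low freq),
    # boost reflectance/detail (high freq).
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0 ** 2))) + gamma_l
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F_img)))
    out = np.expm1(g)
    out = 255 * (out - out.min()) / (np.ptp(out) + 1e-8)
    return np.clip(out, 0, 255).astype(np.uint8)
```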
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Peng, Fuyu Li, Shanshan Yuan and Wanyi Li. "Unsupervised Image-Generation Enhanced Adaptation for Object Detection in Thermal Images". Mobile Information Systems 2021 (December 27, 2021): 1–6. http://dx.doi.org/10.1155/2021/1837894.

Full text
Abstract
Object detection in thermal images is an important computer vision task with many applications, such as unmanned vehicles, robotics, surveillance, and night vision. Deep learning-based detectors have achieved major progress but usually need large amounts of labelled training data. However, labelled data for object detection in thermal images are scarce and expensive to collect. How to take advantage of the large number of labelled visible images and adapt them to the thermal image domain remains an open problem. This paper proposes an unsupervised image-generation enhanced adaptation method for object detection in thermal images. To reduce the gap between the visible domain and the thermal domain, the proposed method generates simulated fake thermal images that are similar to the target images and preserves the annotation information of the visible source domain. The image generation includes a CycleGAN-based image-to-image translation and an intensity inversion transformation. The generated fake thermal images are used as a renewed source domain, and the off-the-shelf domain-adaptive Faster RCNN is then utilized to reduce the gap between the generated intermediate domain and the thermal target domain. Experiments demonstrate the effectiveness and superiority of the proposed method.
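The intensity inversion transformation is simple to state precisely: warm objects render bright in thermal imagery where a visible-derived translation may leave them dark, so the 8-bit intensities are flipped. A one-line sketch (the CycleGAN translation step is not reproduced):

```python
# Sketch: intensity inversion for 8-bit images.
import numpy as np

def intensity_inversion(img):
    """Invert 8-bit intensities, e.g. applied to visible-to-thermal translations."""
    return 255 - np.asarray(img, dtype=np.uint8)
```

In a training pipeline this would run on the CycleGAN outputs before they are used as the renewed source domain.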
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Shengshi, Guanjun Wang, Hui Zhang and Yonghua Zou. "Observing Individuals and Behavior of Hainan Gibbons (Nomascus hainanus) Using Drone Infrared and Visible Image Fusion Technology". Drones 7, no. 9 (August 22, 2023): 543. http://dx.doi.org/10.3390/drones7090543.

Full text
Abstract
The Hainan gibbon (Nomascus hainanus) is one of the most endangered primates in the world. Infrared and visible images taken by drones are an important and effective way to observe Hainan gibbons. However, a single infrared or visible image cannot simultaneously capture the movement tracks of Hainan gibbons and the appearance of the rainforest. The fusion of infrared and visible images of the same scene aims to generate a composite image which can provide a more comprehensive description of the scene. We propose, for the first time, a fusion method for infrared and visible images of the Hainan gibbon, termed Swin-UetFuse. Swin-UetFuse has a powerful global and long-range semantic information extraction capability, which makes it well suited to complex tropical rainforest environments. Firstly, the hierarchical Swin Transformer is applied as the encoder to extract features at different scales from the infrared and visible images. Secondly, the features of different scales are fused through the l1-norm strategy. Finally, Swin Transformer blocks and patch-expanding layers are utilized as the decoder to up-sample the fusion features and obtain the fused image. We performed experiments on 21 pairs of images from the Hainan gibbon dataset, and the experimental results demonstrate that the proposed method achieves excellent fusion performance. The infrared and visible image fusion technology of drones provides an important reference for the observation and protection of the Hainan gibbons.
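The l1-norm fusion strategy admits a small numerical sketch: the per-pixel l1-norm of each source's feature vector serves as an activity map, and the normalized activities weight the feature maps. The (C, H, W) layout and the normalization are assumptions; the Swin encoder and decoder are not reproduced.

```python
# Sketch: l1-norm activity-based fusion of two feature maps of shape (C, H, W).
import numpy as np

def l1_norm_fuse(feat_ir, feat_vis, eps=1e-8):
    a_ir = np.abs(feat_ir).sum(axis=0)     # l1 activity map, shape (H, W)
    a_vis = np.abs(feat_vis).sum(axis=0)
    w_ir = a_ir / (a_ir + a_vis + eps)     # normalized IR weight per pixel
    return w_ir[None] * feat_ir + (1.0 - w_ir)[None] * feat_vis
```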
APA, Harvard, Vancouver, ISO, and other styles
49

Sun, Ruizhou, Yukun Su and Qingyao Wu. "DENet: Disentangled Embedding Network for Visible Watermark Removal". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 2411–19. http://dx.doi.org/10.1609/aaai.v37i2.25337.

Full text
Abstract
Adding a visible watermark to an image is a common copyright-protection method for media. Meanwhile, public research on watermark removal can serve as an adversarial technology that aids the further development of watermarking. Existing watermark removal methods mainly adopt multi-task learning networks, which locate the watermark and restore the background simultaneously. However, these approaches view the task as an image-to-image reconstruction problem, where they only impose supervision after the final output, leaving the high-level semantic features shared between the different tasks. To this end, inspired by the two-stage coarse-refinement network, we propose a novel contrastive learning mechanism to disentangle the high-level semantic embeddings of the images and watermarks, making each network branch more task-oriented. Specifically, the proposed mechanism is leveraged for watermark image decomposition, which aims to decouple the clean-image and watermark hints in the high-level embedding space. This guarantees that the learned representation of the restored image enjoys more task-specific cues. In addition, we introduce a self-attention-based enhancement module, which promotes the network's ability to capture semantic information among different regions, leading to further improvement of the contrastive learning mechanism. To validate the effectiveness of our proposed method, extensive experiments are conducted on different challenging benchmarks. Experimental evaluations show that our approach can achieve state-of-the-art performance and yield high-quality images. The code is available at: https://github.com/lianchengmingjue/DENet.
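The contrastive mechanism can be illustrated with a generic InfoNCE-style loss that pulls restored-image embeddings toward clean-image embeddings and away from watermark embeddings; this is an illustration of the idea, not the DENet loss, and the temperature value is an assumption.

```python
# Sketch: a two-way contrastive loss over (N, D) embedding batches.
import torch
import torch.nn.functional as F

def contrastive_disentangle_loss(z_restored, z_clean, z_watermark, tau=0.07):
    z_r = F.normalize(z_restored, dim=1)
    pos = (z_r * F.normalize(z_clean, dim=1)).sum(dim=1) / tau      # attract
    neg = (z_r * F.normalize(z_watermark, dim=1)).sum(dim=1) / tau  # repel
    logits = torch.stack([pos, neg], dim=1)        # the positive must win
    labels = torch.zeros(z_r.size(0), dtype=torch.long, device=z_r.device)
    return F.cross_entropy(logits, labels)
```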
APA, Harvard, Vancouver, ISO, and other styles
50

Zheng, Binhao, Tieming Xiang, Minghuang Lin, Silin Cheng and Pengquan Zhang. "Real-Time Semantics-Driven Infrared and Visible Image Fusion Network". Sensors 23, no. 13 (July 3, 2023): 6113. http://dx.doi.org/10.3390/s23136113.

Full text
Abstract
This paper proposes a real-time semantics-driven infrared and visible image fusion framework (RSDFusion). A novel semantics-driven fusion strategy is introduced to maximize the retention of significant information from the source images in the fused image. First, a semantically segmented image of the source image is obtained using a pre-trained semantic segmentation model. Second, masks of significant targets are obtained from the semantically segmented image, and these masks are used to separate the targets in the source and fusion images. Finally, a local semantic loss on the separated targets is designed and combined with an overall structural-similarity loss to instruct the network to extract appropriate features for reconstructing the fusion image. Experimental results show that the proposed RSDFusion outperforms comparative methods in both subjective and objective evaluations on public datasets and that the main targets of the source images are better preserved in the fusion image.
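The loss structure described here is easy to sketch: a local term restricted to the segmentation masks of significant targets plus a global fidelity term. In the sketch below the global term is plain L1 as a stand-in for the paper's structural-similarity loss, and the weighting is an assumption.

```python
# Sketch: masked local semantic loss plus a global fidelity term.
import torch
import torch.nn.functional as F

def semantic_fusion_loss(fused, ir, vis, mask, lam_local=5.0):
    # Local term: inside target masks (from a pre-trained segmentation model),
    # keep the fused image close to the IR image so salient targets survive.
    local = F.l1_loss(fused * mask, ir * mask)
    # Global term: overall fidelity to the visible image (SSIM in the paper).
    global_term = F.l1_loss(fused, vis)
    return lam_local * local + global_term
```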
APA, Harvard, Vancouver, ISO, and other styles