Journal articles on the topic 'Multispectral pansharpening'

Consult the top 50 journal articles for your research on the topic 'Multispectral pansharpening.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Choi, Jaewan, Honglyun Park, and Doochun Seo. "Pansharpening Using Guided Filtering to Improve the Spatial Clarity of VHR Satellite Imagery." Remote Sensing 11, no. 6 (March 15, 2019): 633. http://dx.doi.org/10.3390/rs11060633.

Full text
Abstract:
Pansharpening algorithms are designed to enhance the spatial resolution of multispectral images using panchromatic images with high spatial resolutions. Panchromatic and multispectral images acquired from very high resolution (VHR) satellite sensors used as input data in the pansharpening process are characterized by spatial dissimilarities due to differences in their spectral/spatial characteristics and time lags between panchromatic and multispectral sensors. In this manuscript, a new pansharpening framework is proposed to improve the spatial clarity of VHR satellite imagery. This algorithm aims to remove the spatial dissimilarity between panchromatic and multispectral images using guided filtering (GF) and to generate the optimal local injection gains for pansharpening. First, we generate optimal multispectral images with spatial characteristics similar to those of panchromatic images using GF. Then, multiresolution analysis (MRA)-based pansharpening is applied using normalized difference vegetation index (NDVI)-based optimal injection gains and spatial details obtained through GF. The algorithm is applied to Korea multipurpose satellite (KOMPSAT)-3/3A satellite sensor data, and the experimental results show that the pansharpened images obtained with the proposed algorithm exhibit a superior spatial quality and preserve spectral information better than those based on existing algorithms.
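The detail-injection framework this abstract refers to (spatial details taken from the panchromatic image and injected into the upsampled multispectral bands with estimated gains) can be summarised in a few lines of numpy. The sketch below is a generic MRA baseline using a Gaussian lowpass and global regression gains; it does not reproduce the paper's guided filtering or NDVI-based local gains, and the filter parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def mra_detail_injection(ms, pan, ratio=4, sigma=2.0):
    """Generic MRA-style detail injection (illustrative baseline, not the authors' method).

    ms  : (H/ratio, W/ratio, B) low-resolution multispectral image
    pan : (H, W) panchromatic image
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    # Interpolate each MS band to the PAN grid.
    ms_up = np.stack([zoom(ms[..., b], ratio, order=3) for b in range(ms.shape[-1])], axis=-1)
    # Spatial details = PAN minus a lowpass approximation of PAN.
    pan_low = gaussian_filter(pan, sigma)
    detail = pan - pan_low
    fused = np.empty_like(ms_up)
    for b in range(ms_up.shape[-1]):
        band = ms_up[..., b]
        # Per-band injection gain from a global regression of the band on the lowpass PAN.
        gain = np.cov(band.ravel(), pan_low.ravel())[0, 1] / np.var(pan_low)
        fused[..., b] = band + gain * detail
    return fused
```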
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Junmin, Jing Ma, Rongrong Fei, Huirong Li, and Jiangshe Zhang. "Enhanced Back-Projection as Postprocessing for Pansharpening." Remote Sensing 11, no. 6 (March 25, 2019): 712. http://dx.doi.org/10.3390/rs11060712.

Full text
Abstract:
Pansharpening is the process of integrating a high spatial resolution panchromatic image with a low spatial resolution multispectral image to obtain a multispectral image with high spatial and spectral resolution. Over the last decade, several algorithms have been developed for pansharpening. In this paper, a technique called enhanced back-projection (EBP) is introduced and applied as a postprocessing step for pansharpening. The proposed EBP first enhances the spatial details of the pansharpening results by histogram matching and high-pass modulation, followed by a back-projection process, which takes into account the modulation transfer function (MTF) of the satellite sensor so that the pansharpening results obey the consistency property. EBP is validated on four datasets acquired by different satellites and with several commonly used pansharpening methods. The pansharpening results achieve substantial improvements with this postprocessing technique, which is widely applicable and requires no modification of existing pansharpening methods.
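The back-projection idea in the abstract, i.e. forcing the fused image to reproduce the original MS image once degraded, can be sketched as a simple fixed-point iteration. The numpy snippet below is a minimal illustration, not the paper's EBP: it omits the histogram matching and high-pass modulation, and a Gaussian blur stands in for the explicit MTF model; filter width and iteration count are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_projection(fused, ms, ratio=4, sigma=2.0, n_iter=5):
    """Minimal iterative back-projection enforcing the consistency property:
    blurring and decimating the fused image should reproduce the original MS.
    """
    fused = fused.astype(np.float64)
    ms = ms.astype(np.float64)
    for _ in range(n_iter):
        # Degrade the current fused image to MS resolution (blur + decimate).
        degraded = gaussian_filter(fused, sigma=(sigma, sigma, 0))[::ratio, ::ratio, :]
        residual = ms - degraded
        # Upsample the residual and inject it back into the fused image.
        fused = fused + zoom(residual, (ratio, ratio, 1), order=3)
    return fused
```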
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Wenqing, Zhiqiang Zhou, Xiaoqiao Zhang, Tu Lv, Han Liu, and Lili Liang. "DiTBN: Detail Injection-Based Two-Branch Network for Pansharpening of Remote Sensing Images." Remote Sensing 14, no. 23 (December 2, 2022): 6120. http://dx.doi.org/10.3390/rs14236120.

Full text
Abstract:
Pansharpening is one of the main research topics in the field of remote sensing image processing. In pansharpening, the spectral information from a low spatial resolution multispectral (LRMS) image and the spatial information from a high spatial resolution panchromatic (PAN) image are integrated to obtain a high spatial resolution multispectral (HRMS) image. As a prerequisite for the application of LRMS and PAN images, pansharpening has received extensive attention from researchers, and many pansharpening methods based on convolutional neural networks (CNN) have been proposed. However, most CNN-based methods regard pansharpening as a super-resolution reconstruction problem, which may not make full use of the feature information in two types of source images. Inspired by the PanNet model, this paper proposes a detail injection-based two-branch network (DiTBN) for pansharpening. In order to obtain the most abundant spatial detail features, a two-branch network is designed to extract features from the high-frequency component of the PAN image and the multispectral image. Moreover, the feature information provided by source images is reused in the network to further improve information utilization. In order to avoid the training difficulty for a real dataset, a new loss function is introduced to enhance the spectral and spatial consistency between the fused HRMS image and the input images. Experiments on different datasets show that the proposed method achieves excellent performance in both qualitative and quantitative evaluations as compared with several advanced pansharpening methods.
APA, Harvard, Vancouver, ISO, and other styles
4

Pérez-Bueno, Fernando, Miguel Vega, Javier Mateos, Rafael Molina, and Aggelos K. Katsaggelos. "Variational Bayesian Pansharpening with Super-Gaussian Sparse Image Priors." Sensors 20, no. 18 (September 16, 2020): 5308. http://dx.doi.org/10.3390/s20185308.

Full text
Abstract:
Pansharpening is a technique that fuses a low spatial resolution multispectral image and a high spatial resolution panchromatic one to obtain a multispectral image with the spatial resolution of the latter while preserving the spectral information of the multispectral image. In this paper we propose a variational Bayesian methodology for pansharpening. The proposed methodology uses the sensor characteristics to model the observation process and Super-Gaussian sparse image priors on the expected characteristics of the pansharpened image. The pansharpened image, as well as all model and variational parameters, are estimated within the proposed methodology. Using real and synthetic data, the quality of the pansharpened images is assessed both visually and quantitatively and compared with other pansharpening methods. Theoretical and experimental results demonstrate the effectiveness, efficiency, and flexibility of the proposed formulation.
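For orientation, the standard ingredients behind such a Bayesian formulation are sketched below in LaTeX as a commonly used degradation model; the paper's exact notation, priors, and variational estimation scheme may differ.

```latex
% Common pansharpening observation model (notation assumed; the paper's exact
% formulation and Super-Gaussian priors may differ):
\begin{align}
  \mathbf{y}_b &= \mathbf{D}\mathbf{H}\mathbf{x}_b + \mathbf{n}_b,
      \qquad b = 1,\dots,B
      \quad \text{(each MS band: blur $\mathbf{H}$, then downsample $\mathbf{D}$)} \\
  \mathbf{p}   &= \sum_{b=1}^{B} \lambda_b \mathbf{x}_b + \mathbf{n}_p
      \quad \text{(PAN as a spectral mix of the unknown high-resolution bands)}
\end{align}
% Sparse (e.g. Super-Gaussian) priors are placed on filtered versions of each
% $\mathbf{x}_b$, and the posterior over the image and the model parameters is
% approximated variationally.
```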
APA, Harvard, Vancouver, ISO, and other styles
5

He, Lin, Dahan Xi, Jun Li, and Jiawei Zhu. "A Spectral-Aware Convolutional Neural Network for Pansharpening." Applied Sciences 10, no. 17 (August 22, 2020): 5809. http://dx.doi.org/10.3390/app10175809.

Full text
Abstract:
Pansharpening aims at fusing a low-resolution multiband optical (MBO) image, such as a multispectral or a hyperspectral image, with the associated high-resolution panchromatic (PAN) image to yield a high spatial resolution MBO image. Though having achieved superior performances to traditional methods, existing convolutional neural network (CNN)-based pansharpening approaches are still faced with two challenges: alleviating the phenomenon of spectral distortion and improving the interpretation abilities of pansharpening CNNs. In this work, we develop a novel spectral-aware pansharpening neural network (SA-PNN). On the one hand, SA-PNN employs a network structure composed of a detail branch and an approximation branch, which is consistent with the detail injection framework; on the other hand, SA-PNN strengthens processing along the spectral dimension by using a spectral-aware strategy, which involves spatial feature transforms (SFTs) coupling the approximation branch with the detail branch as well as 3D convolution operations in the approximation branch. Our method is evaluated with experiments on real-world multispectral and hyperspectral datasets, verifying its excellent pansharpening performance.
APA, Harvard, Vancouver, ISO, and other styles
6

Guo, Yecai, Fei Ye, and Hao Gong. "Learning an Efficient Convolution Neural Network for Pansharpening." Algorithms 12, no. 1 (January 8, 2019): 16. http://dx.doi.org/10.3390/a12010016.

Full text
Abstract:
Pansharpening is a domain-specific task of satellite imagery processing, which aims at fusing a multispectral image with a corresponding panchromatic one to enhance the spatial resolution of the multispectral image. Most existing traditional methods fuse multispectral and panchromatic images in a linear manner, which greatly restricts the fusion accuracy. In this paper, we propose a highly efficient inference network to cope with pansharpening, which breaks the linear limitation of traditional methods. In the network, we adopt a dilated multilevel block coupled with a skip connection to perform local and overall compensation. By using the dilated multilevel block, the proposed model can make full use of the extracted features and enlarge the receptive field without introducing extra computational burden. Experimental results reveal that our network achieves competitive or even superior pansharpening performance compared with deeper models. As our network is shallow and trained with several techniques to prevent overfitting, our model is robust to the inconsistencies across different satellites.
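As a rough illustration of what a dilated multilevel block with a skip connection can look like, here is a small PyTorch sketch; the channel counts, dilation rates, and activations are assumptions and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class DilatedMultilevelBlock(nn.Module):
    """Sketch of a dilated multi-level block with a skip connection (sizes assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Parallel dilated convolutions enlarge the receptive field at constant cost.
        feats = torch.cat([self.act(branch(x)) for branch in self.branches], dim=1)
        # Skip connection: add the input back to the fused multi-level features.
        return x + self.fuse(feats)
```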
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Yong, Wei Tu, Shuying Huang, and Hangyuan Lu. "PCDRN: Progressive Cascade Deep Residual Network for Pansharpening." Remote Sensing 12, no. 4 (February 19, 2020): 676. http://dx.doi.org/10.3390/rs12040676.

Full text
Abstract:
Pansharpening is the process of fusing a low-resolution multispectral (LRMS) image with a high-resolution panchromatic (PAN) image. In the process of pansharpening, the LRMS image is often directly upsampled by a factor of 4, which may result in the loss of high-frequency details in the fused high-resolution multispectral (HRMS) image. To solve this problem, we put forward a novel progressive cascade deep residual network (PCDRN) with two residual subnetworks for pansharpening. The network resizes the MS image to the size of the PAN image in two steps and gradually fuses the LRMS image with the PAN image in a coarse-to-fine manner. To prevent overly smooth results and achieve high-quality fusion, a multitask loss function is defined to train our network. Furthermore, to eliminate checkerboard artifacts in the fusion results, we employ a resize-convolution approach instead of transposed convolution for upsampling LRMS images. Experimental results on the Pléiades and WorldView-3 datasets prove that PCDRN exhibits superior performance compared to other popular pansharpening methods in terms of quantitative and visual assessments.
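The resize-convolution trick mentioned at the end of the abstract (interpolation followed by a standard convolution, instead of transposed convolution) can be expressed in a few lines of PyTorch; the layer sizes below are assumptions, not PCDRN's actual configuration.

```python
import torch.nn as nn
import torch.nn.functional as F

class ResizeConvUp(nn.Module):
    """Resize-convolution upsampling sketch (layer sizes assumed).

    Interpolating first and then applying a standard convolution avoids the
    checkerboard artifacts that transposed convolution can introduce.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.conv(x)
```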
APA, Harvard, Vancouver, ISO, and other styles
8

Cao, Xiangyong, Yang Chen, and Wenfei Cao. "Proximal PanNet: A Model-Based Deep Network for Pansharpening." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 176–84. http://dx.doi.org/10.1609/aaai.v36i1.19892.

Full text
Abstract:
Recently, deep learning techniques have been extensively studied for pansharpening, which aims to generate a high resolution multispectral (HRMS) image by fusing a low resolution multispectral (LRMS) image with a high resolution panchromatic (PAN) image. However, existing deep learning-based pansharpening methods directly learn the mapping from LRMS and PAN to HRMS. These network architectures always lack sufficient interpretability, which limits further performance improvements. To alleviate this issue, we propose a novel deep network for pansharpening by combining the model-based methodology with the deep learning method. Firstly, we build an observation model for pansharpening using the convolutional sparse coding (CSC) technique and design a proximal gradient algorithm to solve this model. Secondly, we unfold the iterative algorithm into a deep network, dubbed as Proximal PanNet, by learning the proximal operators using convolutional neural networks. Finally, all the learnable modules can be automatically learned in an end-to-end manner. Experimental results on some benchmark datasets show that our network performs better than other advanced methods both quantitatively and qualitatively.
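The unfolding idea rests on a classical proximal-gradient (ISTA) iteration for sparse coding; a toy dense numpy version is shown below for orientation. Proximal PanNet works with convolutional sparse coding and replaces the soft-threshold proximal operator with learned CNN modules, so this is only a stand-in for the underlying algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    """Plain proximal-gradient (ISTA) iteration for min_z 0.5*||A z - y||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                          # gradient of the data-fidelity term
        z = soft_threshold(z - step * grad, step * lam)   # proximal (soft-threshold) step
    return z
```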
APA, Harvard, Vancouver, ISO, and other styles
9

Li, X. J., H. W. Yan, S. W. Yang, L. Kang, and X. M. Lu. "MULTISPECTRAL PANSHARPENING APPROACH USING PULSE-COUPLED NEURAL NETWORK SEGMENTATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 961–65. http://dx.doi.org/10.5194/isprs-archives-xlii-3-961-2018.

Full text
Abstract:
The paper proposes a novel pansharpening method based on the pulse-coupled neural network segmentation. In the new method, uniform injection gains of each region are estimated through PCNN segmentation rather than through a simple square window. Since PCNN segmentation agrees with the human visual system, the proposed method shows better spectral consistency. Our experiments, which have been carried out for both suburban and urban datasets, demonstrate that the proposed method outperforms other methods in multispectral pansharpening.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Xiaojun, Haowen Yan, Weiying Xie, Lu Kang, and Yi Tian. "An Improved Pulse-Coupled Neural Network Model for Pansharpening." Sensors 20, no. 10 (May 12, 2020): 2764. http://dx.doi.org/10.3390/s20102764.

Full text
Abstract:
Pulse-coupled neural network (PCNN) and its modified models are suitable for dealing with multi-focus and medical image fusion tasks. Unfortunately, PCNNs are difficult to directly apply to multispectral image fusion, especially when the spectral fidelity is considered. A key problem is that most fusion methods using PCNNs usually focus on the selection mechanism either in the space domain or in the transform domain, rather than a details injection mechanism, which is of utmost importance in multispectral image fusion. Thus, a novel pansharpening PCNN model for multispectral image fusion is proposed. The new model is designed to acquire the spectral fidelity in terms of human visual perception for the fusion tasks. The experimental results, examined by different kinds of datasets, show the suitability of the proposed model for pansharpening.
APA, Harvard, Vancouver, ISO, and other styles
11

Abdolahpoor, Asma, and Peyman Kabiri. "New texture-based pansharpening method using wavelet packet transform and PCA." International Journal of Wavelets, Multiresolution and Information Processing 18, no. 04 (May 7, 2020): 2050025. http://dx.doi.org/10.1142/s0219691320500253.

Full text
Abstract:
Image fusion is an important concept in remote sensing. Earth observation satellites provide both high-resolution panchromatic and low-resolution multispectral images. Pansharpening aims at fusing a low-resolution multispectral image with a high-resolution panchromatic image. Through this fusion, a multispectral image with high spatial and spectral resolution is generated. This paper reports a new method to improve the spatial resolution of the final multispectral image. The reported work proposes an image fusion method using the wavelet packet transform (WPT) and principal component analysis (PCA) based on the textures of the panchromatic image. Initially, adaptive PCA (APCA) is applied to both the multispectral and panchromatic images. Subsequently, WPT is used to decompose the first principal component of the multispectral and panchromatic images. Using WPT, high-frequency details of both the panchromatic and multispectral images are extracted. In areas with similar texture, spatial details extracted from the panchromatic image are injected into the multispectral image. Experimental results show that the proposed method provides promising results in fusing multispectral images with a high-spatial-resolution panchromatic image. Moreover, the results show that the proposed method successfully improves the spectral features of the multispectral image.
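The adaptive PCA (APCA) step builds on classical PCA component substitution: project the MS bands onto their principal components, replace the first component with a histogram-matched PAN, and transform back. The numpy sketch below shows only this substitution idea; the wavelet packet decomposition and texture-based injection of the paper are not included.

```python
import numpy as np

def pca_component_substitution(ms_up, pan):
    """Classic PCA component-substitution sketch.

    ms_up : (H, W, B) MS image already interpolated to the PAN grid
    pan   : (H, W) panchromatic image
    """
    h, w, b = ms_up.shape
    X = ms_up.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components of the MS bands (eigenvectors of the band covariance).
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    pcs = Xc @ eigvecs
    # Match the PAN mean/std to the first principal component, then substitute it.
    pan_flat = pan.reshape(-1).astype(np.float64)
    pc1 = pcs[:, 0]
    pcs[:, 0] = (pan_flat - pan_flat.mean()) / pan_flat.std() * pc1.std() + pc1.mean()
    fused = pcs @ eigvecs.T + mean
    return fused.reshape(h, w, b)
```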
APA, Harvard, Vancouver, ISO, and other styles
12

Lolli, Simone, Luciano Alparone, Andrea Garzelli, and Gemine Vivone. "Haze Correction for Contrast-Based Multispectral Pansharpening." IEEE Geoscience and Remote Sensing Letters 14, no. 12 (December 2017): 2255–59. http://dx.doi.org/10.1109/lgrs.2017.2761021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Golub, Y. I. "Evaluation of the results of pansharpening multispectral images." «System analysis and applied information science», no. 2 (June 27, 2022): 10–19. http://dx.doi.org/10.21122/2309-4923-2022-2-10-19.

Full text
Abstract:
When processing digital images obtained by remote sensing of the Earth, various methods are used to increase their resolution. However, distortions of a different nature may appear in the images, for example, luminance distortions (color, contrast, sharpness) and geometric distortions (deformation of object boundaries). Developers of automated image processing systems face the task of choosing, from dozens of methods, the one that introduces the least visually noticeable distortions, i.e. produces images of the best quality. In this article, the following problem was solved: to determine functions for assessing the quality of images formed as a result of multispectral satellite image pansharpening. A pansharpened image cannot be compared with a reference image, since no reference exists. To assess the quality of such images, we proposed using so-called no-reference evaluation measures. The article briefly describes methods for synthesizing a new high-resolution color image from four Earth remote sensing images. Functions for calculating quantitative estimates of the quality of the resulting images are discussed. Results of pansharpening several satellite images by different methods are presented, and graphs of the corresponding image quality assessments are given. To evaluate panchromatic fusion results, the following no-reference quality scores are recommended: FISH, LOCC, LOEN, NATU, SHAR, and WAVS. The clearest boundaries and most natural object colors were produced by the P+XS pansharpening algorithm based on a linear combination of spectral channels.
APA, Harvard, Vancouver, ISO, and other styles
14

Nie, Zihao, Lihui Chen, Seunggil Jeon, and Xiaomin Yang. "Spectral-Spatial Interaction Network for Multispectral Image and Panchromatic Image Fusion." Remote Sensing 14, no. 16 (August 21, 2022): 4100. http://dx.doi.org/10.3390/rs14164100.

Full text
Abstract:
Recently, with the rapid development of deep learning (DL), an increasing number of DL-based methods have been applied to pansharpening. Benefiting from the powerful feature extraction capability of deep learning, DL-based methods have achieved state-of-the-art performance in pansharpening. However, most DL-based methods simply fuse multi-spectral (MS) images and panchromatic (PAN) images by concatenation, which cannot make full use of the spectral information and spatial information of MS and PAN images, respectively. To address this issue, we propose a spectral-spatial interaction network (SSIN) for pansharpening. Different from previous works, we extract the features of PAN and MS images separately and then interact them repeatedly to incorporate spectral and spatial information progressively. To further enhance the spectral-spatial information fusion, we propose a spectral-spatial attention (SSA) module to yield a more effective spatial-spectral information transfer in the network. Extensive experiments on QuickBird, WorldView-4, and WorldView-2 images demonstrate that our SSIN significantly outperforms other methods in terms of both objective assessment and visual quality.
APA, Harvard, Vancouver, ISO, and other styles
15

Baiocchi, V., A. Bianchi, C. Maddaluno, and M. Vidale. "PANSHARPENING TECHNIQUES TO DETECT MASS MONUMENT DAMAGING IN IRAQ." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5/W1 (May 15, 2017): 121–26. http://dx.doi.org/10.5194/isprs-archives-xlii-5-w1-121-2017.

Full text
Abstract:
The recent mass destruction of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious safety reasons. For the same reasons, the use of classical aerial photogrammetry is not advisable, so the use of multispectral Very High Resolution (VHR) satellite imagery is the obvious choice. Nowadays, VHR satellite image resolutions are very close to those of airborne photogrammetric images, and they are usually acquired in multispectral mode. The combination of the various bands of the images is called pansharpening, and it can be carried out using different algorithms and strategies. The correct pansharpening methodology for a specific image must be chosen considering the specific multispectral characteristics of the satellite used and the particular application. In this paper, a first definition of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the possible capabilities of damage identification using VHR images.
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Wenqing, Zhiqiang Zhou, Han Liu, and Guo Xie. "MSDRN: Pansharpening of Multispectral Images via Multi-Scale Deep Residual Network." Remote Sensing 13, no. 6 (March 21, 2021): 1200. http://dx.doi.org/10.3390/rs13061200.

Full text
Abstract:
In order to acquire a high resolution multispectral (HRMS) image with the same spectral resolution as the multispectral (MS) image and the same spatial resolution as the panchromatic (PAN) image, pansharpening, a typical and active image fusion topic, has been well researched. Various pansharpening methods based on convolutional neural networks (CNN) with different architectures have been introduced by prior works. However, these methods do not consider the information of the source images at different scales, which may lead to the loss of high-frequency details in the fused image. This paper proposes a pansharpening method for MS images via a multi-scale deep residual network (MSDRN). The proposed method constructs a multi-level network to make better use of the scale information of the source images. Moreover, residual learning is introduced into the network to further improve the ability of feature extraction and simplify the learning process. A series of experiments are conducted on the QuickBird and GeoEye-1 datasets. Experimental results demonstrate that the MSDRN achieves superior or competitive fusion performance compared with state-of-the-art methods in both visual evaluation and quantitative evaluation.
APA, Harvard, Vancouver, ISO, and other styles
17

Song, Qun, Chen Ding, Junhua Ren, Lili Liu, and Hangyuan Lu. "An Adaptive Injection Model for Pansharpening." Computational Intelligence and Neuroscience 2023 (January 24, 2023): 1–10. http://dx.doi.org/10.1155/2023/4874974.

Full text
Abstract:
Pansharpening technology is used to acquire a multispectral image with high spatial resolution from a panchromatic (PAN) image and a multispectral (MS) image. The detail injection model is popular for its flexibility. However, the accuracy of the injection gain and the extracted details may greatly influence the quality of the pansharpened image. This paper proposes an adaptive injection model to solve these problems. For detail extraction, we present a Gaussian filter estimation algorithm by exploring the intrinsic character of the MS sensor and convolving the PAN image with the filter to adaptively optimize the details to be consistent with the character of the MS image. For the adaptive injection coefficient, we iteratively adjust the coefficient by balancing the spectral and spatial fidelity. By multiplying the optimized details and injection gain, the final HRMS is obtained with the injection model. The performance of the proposed model is analyzed and a large number of tests are carried out on various satellite datasets. Compared to some advanced pansharpening methods, the results prove that our method can achieve the best fusion quality both subjectively and objectively.
APA, Harvard, Vancouver, ISO, and other styles
18

Tsukamoto, Naoko, Yoshihiro Sugaya, and Shinichiro Omachi. "Pansharpening by Complementing Compressed Sensing with Spectral Correction." Applied Sciences 10, no. 17 (August 21, 2020): 5789. http://dx.doi.org/10.3390/app10175789.

Full text
Abstract:
Pansharpening (PS) is a process used to generate high-resolution multispectral (MS) images from high-spatial-resolution panchromatic (PAN) and high-spectral-resolution multispectral images. In this paper, we propose a method for pansharpening by focusing on a compressed sensing (CS) technique. The spectral reproducibility of the CS technique is high due to its image reproducibility, but the reproduced image is blurry. Although methods of complementing this incomplete reproduction have been proposed, it is known that the existing method may cause ringing artifacts. On the other hand, component substitution is another technique used for pansharpening. It is expected that the spatial resolution of the images generated by this technique will be as high as that of the high-resolution PAN image, because the technique uses the corrected intensity calculated from the PAN image. Based on these facts, the proposed method fuses the intensity obtained by the component substitution method and the intensity obtained by the CS technique to move the spatial resolution of the reproduced image close to that of the PAN image while reducing the spectral distortion. Experimental results showed that the proposed method can reduce spectral distortion and maintain spatial resolution better than the existing methods.
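The component-substitution ingredient that the paper combines with compressed sensing can be summarised by the classical GIHS scheme sketched below in numpy; the equal band weights and simple mean/std histogram matching are assumptions, and the CS-based reconstruction part of the method is not shown.

```python
import numpy as np

def gihs_component_substitution(ms_up, pan, weights=None):
    """Minimal component-substitution (GIHS-like) sketch.

    ms_up : (H, W, B) MS image already interpolated to the PAN grid
    pan   : (H, W) panchromatic image
    """
    ms_up = ms_up.astype(np.float64)
    pan = pan.astype(np.float64)
    b = ms_up.shape[-1]
    if weights is None:
        weights = np.full(b, 1.0 / b)            # equal band weights as a default assumption
    # Intensity component as a weighted combination of the MS bands.
    intensity = np.tensordot(ms_up, weights, axes=([-1], [0]))
    # Match the PAN mean/std to the intensity, then inject the corrected difference.
    pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
    detail = pan_matched - intensity
    return ms_up + detail[..., None]
```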
APA, Harvard, Vancouver, ISO, and other styles
19

Jin, Zi-Rong, Tian-Jing Zhang, Tai-Xiang Jiang, Gemine Vivone, and Liang-Jian Deng. "LAGConv: Local-Context Adaptive Convolution Kernels with Global Harmonic Bias for Pansharpening." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 1113–21. http://dx.doi.org/10.1609/aaai.v36i1.19996.

Full text
Abstract:
Pansharpening is a critical yet challenging low-level vision task that aims to obtain a higher-resolution image by fusing a multispectral (MS) image and a panchromatic (PAN) image. While most pansharpening methods are based on convolutional neural network (CNN) architectures with standard convolution operations, few attempts have been made with context-adaptive/dynamic convolution, which delivers impressive results on high-level vision tasks. In this paper, we propose a novel strategy to generate local-context adaptive (LCA) convolution kernels and introduce a new global harmonic (GH) bias mechanism, exploiting image local specificity as well as integrating global information, dubbed LAGConv. The proposed LAGConv can replace the standard convolution that is context-agnostic to fully perceive the particularity of each pixel for the task of remote sensing pansharpening. Furthermore, by applying the LAGConv, we provide an image fusion network architecture, which is more effective than conventional CNN-based pansharpening approaches. The superiority of the proposed method is demonstrated by extensive experiments implemented on a wide range of datasets compared with state-of-the-art pansharpening methods. Besides, more discussions testify that the proposed LAGConv outperforms recent adaptive convolution techniques for pansharpening.
APA, Harvard, Vancouver, ISO, and other styles
20

Amro, Israa, and Javier Mateos. "Multispectral image pansharpening based on the contourlet transform." Journal of Physics: Conference Series 206 (February 1, 2010): 012031. http://dx.doi.org/10.1088/1742-6596/206/1/012031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, Liuqing, Xiaofeng Zhang, and Hongbing Ma. "Sparse representation over shared coefficients in multispectral pansharpening." Tsinghua Science and Technology 23, no. 3 (June 2018): 315–22. http://dx.doi.org/10.26599/tst.2018.9010088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kaplan, N. H., and I. Erer. "Pansharpening of Multispectral Satellite Images via Lattice Structures." International Journal of Computer Applications 140, no. 7 (April 15, 2016): 9–14. http://dx.doi.org/10.5120/ijca2016909366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Alparone, Luciano, Stefano Baronti, Bruno Aiazzi, and Andrea Garzelli. "Spatial Methods for Multispectral Pansharpening: Multiresolution Analysis Demystified." IEEE Transactions on Geoscience and Remote Sensing 54, no. 5 (May 2016): 2563–76. http://dx.doi.org/10.1109/tgrs.2015.2503045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Wenqing, Han Liu, and Guo Xie. "Pansharpening of WorldView-2 Data via Graph Regularized Sparse Coding and Adaptive Coupled Dictionary." Sensors 21, no. 11 (May 21, 2021): 3586. http://dx.doi.org/10.3390/s21113586.

Full text
Abstract:
The spectral mismatch between a multispectral (MS) image and its corresponding panchromatic (PAN) image affects the pansharpening quality, especially for WorldView-2 data. To handle this problem, a pansharpening method based on graph regularized sparse coding (GRSC) and adaptive coupled dictionary is proposed in this paper. Firstly, the pansharpening process is divided into three tasks according to the degree of correlation among the MS and PAN channels and the relative spectral response of WorldView-2 sensor. Then, for each task, the image patch set from the MS channels is clustered into several subsets, and the sparse representation of each subset is estimated through the GRSC algorithm. Besides, an adaptive coupled dictionary pair for each task is constructed to effectively represent the subsets. Finally, the high-resolution image subsets for each task are obtained by multiplying the estimated sparse coefficient matrix by the corresponding dictionary. A variety of experiments are conducted on the WorldView-2 data, and the experimental results demonstrate that the proposed method achieves better performance than the existing pansharpening algorithms in both subjective analysis and objective evaluation.
APA, Harvard, Vancouver, ISO, and other styles
25

Jin, Cheng, Liang-Jian Deng, Ting-Zhu Huang, and Gemine Vivone. "Laplacian pyramid networks: A new approach for multispectral pansharpening." Information Fusion 78 (February 2022): 158–70. http://dx.doi.org/10.1016/j.inffus.2021.09.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kaplan, N. H., and I. Erer. "Bilateral Filtering-Based Enhanced Pansharpening of Multispectral Satellite Images." IEEE Geoscience and Remote Sensing Letters 11, no. 11 (November 2014): 1941–45. http://dx.doi.org/10.1109/lgrs.2014.2314389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Garzelli, Andrea. "Pansharpening of Multispectral Images Based on Nonlocal Parameter Optimization." IEEE Transactions on Geoscience and Remote Sensing 53, no. 4 (April 2015): 2096–107. http://dx.doi.org/10.1109/tgrs.2014.2354471.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yin, Junru, Jiantao Qu, Le Sun, Wei Huang, and Qiqiang Chen. "A Local and Nonlocal Feature Interaction Network for Pansharpening." Remote Sensing 14, no. 15 (August 4, 2022): 3743. http://dx.doi.org/10.3390/rs14153743.

Full text
Abstract:
Pansharpening based on deep learning (DL) has shown great advantages. Most convolutional neural network (CNN)-based methods focus on obtaining local features from multispectral (MS) and panchromatic (PAN) images, but ignore the nonlocal dependence on images. Therefore, Transformer-based methods are introduced to obtain long-range information on images. However, the representational capabilities of features extracted by CNN or Transformer alone are weak. To solve this problem, a local and nonlocal feature interaction network (LNFIN) is proposed in this paper for pansharpening. It comprises Transformer and CNN branches. Furthermore, a feature interaction module (FIM) is proposed to fuse different features and return to the two branches to enhance the representational capability of features. Specifically, a CNN branch consisting of multiscale dense modules (MDMs) is proposed for acquiring local features of the image, and a Transformer branch consisting of pansharpening Transformer modules (PTMs) is introduced for acquiring nonlocal features of the image. In addition, inspired by the PTM, a shift pansharpening Transformer module (SPTM) is proposed for the learning of texture features to further enhance the spatial representation of features. The LNFIN outperforms the state-of-the-art method experimentally on three datasets.
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Qin, Letong Han, Rui Tan, Hongfei Fan, Weiqi Li, Hongming Zhu, Bowen Du, and Sicong Liu. "Hybrid Attention Based Residual Network for Pansharpening." Remote Sensing 13, no. 10 (May 18, 2021): 1962. http://dx.doi.org/10.3390/rs13101962.

Full text
Abstract:
Pansharpening aims at fusing the rich spectral information of multispectral (MS) images and the spatial details of panchromatic (PAN) images to generate a fused image with both high spatial and spectral resolution. In general, existing pansharpening methods suffer from spectral distortion and a lack of spatial detail, which may reduce the accuracy of ground-object identification. To alleviate these problems, we propose a Hybrid Attention mechanism-based Residual Neural Network (HARNN). In the proposed network, we develop an encoder attention module in the feature extraction part to better utilize the spectral and spatial features of MS and PAN images. Furthermore, the fusion attention module is designed to alleviate spectral distortion and improve contour details of the fused image. A series of ablation and comparison experiments are conducted on GF-1 and GF-2 datasets. The fusion results, with fewer distorted pixels and more spatial details, demonstrate that HARNN performs the pansharpening task effectively and outperforms state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Weisheng, Minghao Xiang, and Xuesong Liang. "MDCwFB: A Multilevel Dense Connection Network with Feedback Connections for Pansharpening." Remote Sensing 13, no. 11 (June 5, 2021): 2218. http://dx.doi.org/10.3390/rs13112218.

Full text
Abstract:
In most practical applications of remote sensing images, high-resolution multispectral images are needed. Pansharpening aims to generate high-resolution multispectral (MS) images from the input of high spatial resolution single-band panchromatic (PAN) images and low spatial resolution multispectral images. Inspired by the remarkable results of other researchers in pansharpening based on deep learning, we propose a multilevel dense connection network with a feedback connection. Our network consists of four parts. The first part consists of two identical subnetworks to extract features from the PAN and MS images. The second part is a multilevel feature fusion and recovery network, which is used to fuse images in the feature domain and to encode and decode features at different levels so that the network can fully capture different levels of information. The third part is a continuous feedback operation, which refines shallow features by feedback. The fourth part is an image reconstruction network. High-quality images are recovered by making full use of multistage decoding features through dense connections. Experiments on different satellite datasets show that our proposed method is superior to existing methods in both subjective visual evaluation and objective evaluation indicators. Compared with the results of other models, our results achieve significant gains on the objective indices used to measure the spectral quality and spatial details of the generated image, namely the spectral angle mapper (SAM), the relative dimensionless global error in synthesis (ERGAS), and the structural similarity index (SSIM).
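For reference, the two spectral-quality indices named at the end of the abstract can be computed as follows for full-reference evaluation at reduced resolution (a minimal numpy sketch; SSIM is omitted here).

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle mapper (degrees) between reference and fused (H, W, B) images."""
    dot = np.sum(ref * fused, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1) + eps
    angles = np.arccos(np.clip(dot / norms, -1.0, 1.0))
    return np.degrees(angles.mean())

def ergas(ref, fused, ratio=4):
    """ERGAS (relative dimensionless global error in synthesis); ratio = PAN/MS resolution ratio."""
    band_terms = []
    for b in range(ref.shape[-1]):
        rmse = np.sqrt(np.mean((ref[..., b] - fused[..., b]) ** 2))
        band_terms.append((rmse / ref[..., b].mean()) ** 2)
    return 100.0 / ratio * np.sqrt(np.mean(band_terms))
```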
APA, Harvard, Vancouver, ISO, and other styles
31

Su, Haonan, Haiyan Jin, and Ce Sun. "Deep Pansharpening via 3D Spectral Super-Resolution Network and Discrepancy-Based Gradient Transfer." Remote Sensing 14, no. 17 (August 29, 2022): 4250. http://dx.doi.org/10.3390/rs14174250.

Full text
Abstract:
High-resolution (HR) multispectral (MS) images contain sharper detail and structure compared to the ground truth high-resolution hyperspectral (HS) images. In this paper, we propose a novel supervised learning method, which considers pansharpening as the spectral super-resolution of high-resolution multispectral images and generates high-resolution hyperspectral images. The proposed method learns the spectral mapping between high-resolution multispectral images and the ground truth high-resolution hyperspectral images. To consider the spectral correlation between bands, we build a three-dimensional (3D) convolution neural network (CNN). The network consists of three parts using an encoder–decoder framework: spatial/spectral feature extraction from high-resolution multispectral images/low-resolution (LR) hyperspectral images, feature transform, and image reconstruction to generate the results. In the image reconstruction network, we design the spatial–spectral fusion (SSF) blocks to reuse the extracted spatial and spectral features in the reconstructed feature layer. Then, we develop the discrepancy-based deep hybrid gradient (DDHG) losses with the spatial–spectral gradient (SSG) loss and deep gradient transfer (DGT) loss. The spatial–spectral gradient loss and deep gradient transfer loss are developed to preserve the spatial and spectral gradients from the ground truth high-resolution hyperspectral images and high-resolution multispectral images. To overcome the spectral and spatial discrepancy between two images, we design a spectral downsampling (SD) network and a gradient consistency estimation (GCE) network for hybrid gradient losses. In the experiments, it is seen that the proposed method outperforms the state-of-the-art methods in the subjective and objective experiments in terms of the structure and spectral preservation of high-resolution hyperspectral images.
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Yazhen, Guojun Liu, Rui Zhang, and Junmin Liu. "A Two-Stage Pansharpening Method for the Fusion of Remote-Sensing Images." Remote Sensing 14, no. 5 (February 24, 2022): 1121. http://dx.doi.org/10.3390/rs14051121.

Full text
Abstract:
The pansharpening (PS) of remote-sensing images aims to fuse a high-resolution panchromatic image with several low-resolution multispectral images to obtain a high-resolution multispectral image. In this work, a two-stage PS model is proposed by integrating the ideas of component replacement and the variational method. The global sparse gradient of the panchromatic image is extracted by a variational method, and the weight function is constructed by combining it with the gradient of the multispectral image, since the global sparse gradient can provide more robust gradient information. Furthermore, we refine the results in order to reduce spatial and spectral distortions. Experimental results show that our method has high generalization ability for QuickBird, Gaofen-1, and WorldView-4 satellite data. Experimental results evaluated by seven metrics demonstrate that the proposed two-stage method enhances spatial details and subjective visual effects better than other state-of-the-art methods. At the same time, in the quantitative evaluation, the proposed method showed considerable improvement over the other methods, on some metrics reaching a maximal improvement of 60%.
APA, Harvard, Vancouver, ISO, and other styles
33

Jiao, Jiao, Lingda Wu, and Kechang Qian. "A Segmentation-Cooperated Pansharpening Method Using Local Adaptive Spectral Modulation." Electronics 8, no. 6 (June 17, 2019): 685. http://dx.doi.org/10.3390/electronics8060685.

Full text
Abstract:
In order to improve the spatial resolution of multispectral (MS) images and reduce spectral distortion, a segmentation-cooperated pansharpening method using local adaptive spectral modulation (LASM) is proposed in this paper. By using the k-means algorithm for the segmentation of MS images, different connected component groups can be obtained according to their spectral characteristics. For spectral information modulation of fusion images, the LASM coefficients are constructed based on details extracted from images and local spectral relationships among MS bands. Moreover, we introduce a cooperative theory for the pansharpening process. The local injection coefficient matrix and LASM coefficient matrix are estimated based on the connected component groups to optimize the fusion result, and the parameters of the segmentation algorithm are adjusted according to the feedback from the pansharpening result. In the experimental part, degraded and real data sets from GeoEye-1 and QuickBird satellites are used to assess the performance of our proposed method. Experimental results demonstrate the validity and effectiveness of our method. Generally, the method is superior to several classic and state-of-the-art pansharpening methods in both subjective visual effect and objective evaluation indices, achieving a balance between the injection of spatial details and maintenance of spectral information, while effectively reducing the spectral distortion of the fusion image.
APA, Harvard, Vancouver, ISO, and other styles
34

Liu, Xuan, Ping Tang, Xing Jin, and Zheng Zhang. "From Regression Based on Dynamic Filter Network to Pansharpening by Pixel-Dependent Spatial-Detail Injection." Remote Sensing 14, no. 5 (March 3, 2022): 1242. http://dx.doi.org/10.3390/rs14051242.

Full text
Abstract:
Compared with hardware upgrading, pansharpening is a low-cost way to acquire high-quality images, which usually combines multispectral images (MS) in low spatial resolution with panchromatic images (PAN) in high spatial resolution. This paper proposes a pixel-dependent spatial-detail injection network (PDSDNet). Based on a dynamic filter network, PDSDNet constructs nonlinear mapping of the simulated panchromatic band from low-resolution multispectral bands through filtering convolution regression. PDSDNet reduces the possibility of spectral distortion and enriches spatial details by improving the similarity between the simulated panchromatic band and the real panchromatic band. Moreover, PDSDNet assumes that if an ideal multispectral image that has the same resolution with the panchromatic image exists, each band of it should have the same spatial details as in the panchromatic image. Thus, the details we fill into each multispectral band are the same and they can be extracted effectively in one pass. Experimental results demonstrate that PDSDNet can generate high-quality fusion images with multispectral images and panchromatic images. Compared with BDSD, MTF-GLP-HPM-PP, and PanNet, which are widely applied on IKONOS, QuickBird, and WorldView-3 datasets, pansharpened images of the proposed method have rich spatial details and present superior visual effects without noticeable spectral and spatial distortion.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Junmin, Yunqiao Feng, Changsheng Zhou, and Chunxia Zhang. "PWNet: An Adaptive Weight Network for the Fusion of Panchromatic and Multispectral Images." Remote Sensing 12, no. 17 (August 29, 2020): 2804. http://dx.doi.org/10.3390/rs12172804.

Full text
Abstract:
Pansharpening is a typical image fusion problem, which aims to produce a high resolution multispectral (HRMS) image by integrating a high spatial resolution panchromatic (PAN) image with a low spatial resolution multispectral (MS) image. Prior works have used either component substitution (CS)-based methods or multiresolution analysis (MRA)-based methods for this purpose. Although they are simple and easy to implement, they usually suffer from spatial or spectral distortions and cannot fully exploit the spatial and/or spectral information present in the PAN and MS images. By considering their complementary performances and with the goal of combining their advantages, we propose a pansharpening weight network (PWNet) to adaptively average the fusion results obtained by different methods. The proposed PWNet works by learning adaptive weight maps for different CS-based and MRA-based methods through an end-to-end trainable neural network (NN). As a result, the proposed PWNet inherits the data adaptability and flexibility of NNs, while maintaining the advantages of traditional methods. Extensive experiments on data sets acquired by three different kinds of satellites demonstrate the superiority of the proposed PWNet and its competitiveness with the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
36

Siok, Katarzyna, Ireneusz Ewiak, and Agnieszka Jenerowicz. "Multi-Sensor Fusion: A Simulation Approach to Pansharpening Aerial and Satellite Images." Sensors 20, no. 24 (December 11, 2020): 7100. http://dx.doi.org/10.3390/s20247100.

Full text
Abstract:
The growing demand for high-quality imaging data and the current technological limitations of imaging sensors require the development of techniques that combine data from different platforms in order to obtain comprehensive products for detailed studies of the environment. To meet the needs of modern remote sensing, the authors present an innovative methodology of combining multispectral aerial and satellite imagery. The methodology is based on the simulation of a new spectral band with a high spatial resolution which, when used in the pansharpening process, yields an enhanced image with a higher spectral quality compared to the original panchromatic band. This is important because spectral quality determines the further processing of the image, including segmentation and classification. The article presents a methodology of simulating new high-spatial-resolution images taking into account the spectral characteristics of the photographed types of land cover. The article focuses on natural objects such as forests, meadows, or bare soils. Aerial panchromatic and multispectral images acquired with a digital mapping camera (DMC) II 230 and satellite multispectral images acquired with the S2A sensor of the Sentinel-2 satellite were used in the study. Cloudless data with a minimal time shift were obtained. Spectral quality analysis of the generated enhanced images was performed using a method known as “consistency” or “Wald’s protocol first property”. The resulting spectral quality values clearly indicate less spectral distortion of the images enhanced by the new methodology compared to using a traditional approach to the pansharpening process.
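The "consistency" (Wald's protocol, first property) assessment used in the study amounts to degrading the sharpened image back to the MS scale and comparing it with the original MS bands. A minimal numpy sketch is given below; the Gaussian blur is a stand-in for the true sensor MTF, and RMSE is only one of several indices that could be used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def consistency_rmse(fused, ms, ratio=4, sigma=2.0):
    """Wald's first-property ('consistency') check.

    fused : (H, W, B) pansharpened image, ms : (H/ratio, W/ratio, B) original MS.
    Returns per-band RMSE; lower values indicate less spectral distortion.
    """
    rmses = []
    for b in range(ms.shape[-1]):
        # Degrade the sharpened band back to the MS scale (blur + decimate).
        degraded = gaussian_filter(fused[..., b].astype(np.float64), sigma)[::ratio, ::ratio]
        rmses.append(np.sqrt(np.mean((degraded - ms[..., b]) ** 2)))
    return np.array(rmses)
```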
APA, Harvard, Vancouver, ISO, and other styles
37

Arienzo, Alberto, Luciano Alparone, Andrea Garzelli, and Simone Lolli. "Advantages of Nonlinear Intensity Components for Contrast-Based Multispectral Pansharpening." Remote Sensing 14, no. 14 (July 8, 2022): 3301. http://dx.doi.org/10.3390/rs14143301.

Full text
Abstract:
In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component-substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method, in which the intensity component lies on a hyper-ellipsoidal surface instead of on a hyperspherical surface. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to the use of a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass filtered Pan. The regression of squared MS, instead of the Euclidean radius used by HCS, makes the intensity component no longer lie on a hypersphere in the vector space of the MS samples, but on a hyperellipsoid. Furthermore, before the fusion is accomplished, the interpolated MS bands are corrected for atmospheric haze, in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and a performance slightly superior to those of some of the most advanced methods.
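The core regression described in the abstract, fitting the squared lowpass-filtered PAN as a linear combination of the squared interpolated MS bands so that the intensity lies on a hyperellipsoid, can be sketched as follows; the haze correction, the multiplicative injection model, and all other details of the method are omitted, and the lowpass filter is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hyperellipsoidal_intensity(ms_up, pan, sigma=2.0):
    """Sketch of a nonlinear intensity via regression of squared bands (illustrative only).

    ms_up : (H, W, B) MS image interpolated to the PAN grid, pan : (H, W).
    """
    pan_low2 = gaussian_filter(pan.astype(np.float64), sigma).reshape(-1) ** 2
    X = ms_up.reshape(-1, ms_up.shape[-1]).astype(np.float64) ** 2
    # Multivariate linear regression between squared MS bands and squared lowpass PAN.
    w, *_ = np.linalg.lstsq(X, pan_low2, rcond=None)
    intensity2 = np.clip(X @ w, 0.0, None)
    return np.sqrt(intensity2).reshape(pan.shape)
```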
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Hui, Linhai Jing, Yunwei Tang, and Haifeng Ding. "An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data." Sensors 18, no. 2 (February 11, 2018): 557. http://dx.doi.org/10.3390/s18020557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Zhiqiang Zhou, Silong Peng, Bo Wang, Zhihui Hao, and Shaolin Chen. "An Optimized Approach for Pansharpening Very High Resolution Multispectral Images." IEEE Geoscience and Remote Sensing Letters 9, no. 4 (July 2012): 735–39. http://dx.doi.org/10.1109/lgrs.2011.2180504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Vivone, Gemine, Rocco Restaino, Mauro Dalla Mura, Giorgio Licciardi, and Jocelyn Chanussot. "Contrast and Error-Based Fusion Schemes for Multispectral Image Pansharpening." IEEE Geoscience and Remote Sensing Letters 11, no. 5 (May 2014): 930–34. http://dx.doi.org/10.1109/lgrs.2013.2281996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hu, Jie, Zhi He, and Jiemin Wu. "Deep Self-Learning Network for Adaptive Pansharpening." Remote Sensing 11, no. 20 (October 16, 2019): 2395. http://dx.doi.org/10.3390/rs11202395.

Full text
Abstract:
Deep learning (DL)-based paradigms have recently made many advances in image pansharpening. However, most of the existing methods directly downscale the multispectral (MSI) and panchromatic (PAN) images with a default blur kernel to construct the training set, which leads to deteriorated results when the real image does not obey this degradation model. In this paper, a deep self-learning (DSL) network is proposed for adaptive image pansharpening. First, rather than using a fixed blur kernel, a point spread function (PSF) estimation algorithm is proposed to obtain the blur kernel of the MSI. Second, an edge-detection-based pixel-to-pixel image registration method is designed to recover the local misalignments between the MSI and PAN. Third, the original data are downscaled by the estimated PSF and the pansharpening network is trained in the down-sampled domain. The high-resolution result can finally be predicted by the trained DSL network using the original MSI and PAN. Extensive experiments on three images collected by different satellites prove the superiority of our DSL technique compared with some state-of-the-art approaches.
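Training in the down-sampled domain, as described in the abstract, follows the usual reduced-resolution (Wald-style) protocol: degrade the MS and PAN images with the estimated PSF and decimate, so the original MS can serve as the reference. The numpy sketch below shows only this data-preparation step; the PSF estimation and registration steps of the DSL method are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

def make_reduced_scale_pair(ms, pan, psf, ratio=4):
    """Reduced-resolution training-pair generation from an estimated PSF.

    ms : (H, W, B) multispectral image, pan : (ratio*H, ratio*W) panchromatic image,
    psf : 2-D blur kernel estimated for the MS sensor.
    """
    ms_lr = np.stack(
        [convolve(ms[..., b].astype(np.float64), psf)[::ratio, ::ratio]
         for b in range(ms.shape[-1])],
        axis=-1,
    )
    pan_lr = convolve(pan.astype(np.float64), psf)[::ratio, ::ratio]
    return ms_lr, pan_lr, ms   # reduced-scale inputs, original MS as the training target
```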
APA, Harvard, Vancouver, ISO, and other styles
42

Raimundo, Javier, Serafin Lopez-Cuervo Medina, Juan F. Prieto, and Julian Aguirre de Mata. "Super Resolution Infrared Thermal Imaging Using Pansharpening Algorithms: Quantitative Assessment and Application to UAV Thermal Imaging." Sensors 21, no. 4 (February 10, 2021): 1265. http://dx.doi.org/10.3390/s21041265.

Full text
Abstract:
The lack of high-resolution thermal images is a limiting factor in fusion with other, higher-resolution sensors. Different families of algorithms have been designed in the field of remote sensing to fuse panchromatic images with multispectral images from satellite platforms, in a process known as pansharpening. Attempts have been made to transfer these pansharpening algorithms to thermal images in the case of satellite sensors. Our work analyses the potential of these algorithms when applied to thermal images from unmanned aerial vehicles (UAVs). We present a comparison, by means of a quantitative procedure, of these satellite-oriented pansharpening methods when they are applied to fuse high-resolution images with thermal images obtained from UAVs, in order to choose the method that offers the best quantitative results. This analysis, which allows the objective selection of which method to use with this type of image, has not been done until now. The selected algorithm is used here to fuse images from thermal sensors on UAVs with images from other sensors for the documentation of heritage, but it has applications in many other fields.
APA, Harvard, Vancouver, ISO, and other styles
43

Xie, Yuchen, Wei Wu, Haiping Yang, Ning Wu, and Ying Shen. "Detail Information Prior Net for Remote Sensing Image Pansharpening." Remote Sensing 13, no. 14 (July 16, 2021): 2800. http://dx.doi.org/10.3390/rs13142800.

Full text
Abstract:
Pansharpening, which fuses the panchromatic (PAN) band with the multispectral (MS) bands to obtain an MS image with the spatial resolution of the PAN image, has been a popular topic in remote sensing applications in recent years. Although deep-learning-based pansharpening algorithms have achieved better performance than traditional methods, the fusion often extracts insufficient spatial information from the PAN image, producing low-quality pansharpened images. To address this problem, this paper proposes a novel progressive PAN-injected fusion method based on superresolution (SR). The network extracts the detail features of the PAN image by using a two-stream PAN input; uses a feature fusion unit (FFU) to gradually inject low-frequency PAN features, with high-frequency PAN features added after subpixel convolution; uses a plain autoencoder to inject the extracted PAN features; and applies a structural similarity index measure (SSIM) loss to focus on the structural quality. Experiments performed on different datasets indicate that the proposed method outperforms several state-of-the-art pansharpening methods in both visual appearance and objective indexes, and the SSIM loss helps improve the pansharpened quality on the original dataset.
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Weiwei, Yan Zhang, Jianwei Zhang, and Yuhui Zheng. "Convolutional Neural Network for Pansharpening with Spatial Structure Enhancement Operator." Remote Sensing 13, no. 20 (October 11, 2021): 4062. http://dx.doi.org/10.3390/rs13204062.

Full text
Abstract:
Pansharpening aims to fuse the abundant spectral information of multispectral (MS) images and the spatial details of panchromatic (PAN) images, yielding a high-spatial-resolution MS (HRMS) image. Traditional methods only focus on the linear model, ignoring the fact that degradation process is a nonlinear inverse problem. Due to convolutional neural networks (CNNs) having an extraordinary effect in overcoming the shortcomings of traditional linear models, they have been adapted for pansharpening in the past few years. However, most existing CNN-based methods cannot take full advantage of the structural information of images. To address this problem, a new pansharpening method combining a spatial structure enhancement operator with a CNN architecture is employed in this study. The proposed method uses the Sobel operator as an edge-detection operator to extract abundant high-frequency information from the input PAN and MS images, hence obtaining the abundant spatial features of the images. Moreover, we utilize the CNN to acquire the spatial feature maps, preserving the information in both the spatial and spectral domains. Simulated experiments and real-data experiments demonstrated that our method had excellent performance in both quantitative and visual evaluation.
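The spatial structure enhancement operator referred to in the abstract is the Sobel edge detector; a minimal numpy/scipy sketch of the high-frequency map it would provide as an extra structural input is shown below (the CNN architecture itself is not reproduced).

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_highfreq(img):
    """Sobel gradient-magnitude map used as a structural (high-frequency) input channel."""
    img = img.astype(np.float64)
    gx = sobel(img, axis=0)   # vertical gradient
    gy = sobel(img, axis=1)   # horizontal gradient
    return np.hypot(gx, gy)
```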
APA, Harvard, Vancouver, ISO, and other styles
45

Singh, Preeti, Sarvpal Singh, and Marcin Paprzycki. "DICO: Dingo coot optimization-based ZF net for pansharpening." International Journal of Knowledge-based and Intelligent Engineering Systems 26, no. 4 (March 15, 2023): 271–88. http://dx.doi.org/10.3233/kes-221530.

Full text
Abstract:
With recent advancements in technology, there has been tremendous growth in the use of satellite imagery in applications such as defense, academia, resource exploration, and land-use mapping. Certain mission-critical applications need images of higher visual quality, but the images captured by the sensors normally suffer from a tradeoff between high spectral and high spatial resolution. Hence, to obtain images with high visual quality, the low-resolution multispectral (MS) image must be combined with the high-resolution panchromatic (PAN) image, which is accomplished by means of pansharpening. In this paper, an efficient pansharpening technique is devised using a hybrid optimized deep learning network. A Zeiler and Fergus network (ZF Net) is utilized to fuse the sharpened and upsampled MS image with the PAN image. A novel Dingo coot (DICO) optimization is created for updating the learning parameters and weights of the ZF Net. The devised DICO_ZF Net for pansharpening is evaluated with measures such as Peak Signal-to-Noise Ratio (PSNR) and Degree of Distortion (DD), attaining values of 50.177 dB and 0.063 dB, respectively.
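The two figures of merit quoted at the end of the abstract have simple, commonly used definitions, sketched below; the exact normalisation and peak value used by the authors are not stated, so these formulations are assumptions.

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def degree_of_distortion(reference, fused):
    """Degree of distortion taken as the mean absolute difference; lower is better."""
    return np.mean(np.abs(reference.astype(np.float64) - fused.astype(np.float64)))
```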
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Qingsheng, Chong Huang, and He Li. "Quality Assessment by Region and Land Cover of Sharpening Approaches Applied to GF-2 Imagery." Applied Sciences 10, no. 11 (May 26, 2020): 3673. http://dx.doi.org/10.3390/app10113673.

Full text
Abstract:
Existing pansharpening methods applied to recently acquired satellite data can produce spectral distortion, so quality assessments should be performed to address this. However, assessing quality over the whole image may not be sufficient, because major differences in one region or land cover can be masked by small differences in another region or land cover. It is therefore necessary to evaluate the performance of the pansharpening process for different regions and land covers. In this study, the widely used modified intensity-hue-saturation (mIHS), Gram–Schmidt spectral sharpening (GS), color normalized (CN) spectral sharpening, and principal component analysis (PCA) pansharpening methods were applied to Gaofen 2 (GF-2) imagery and evaluated by region and land-cover type (determined via an object-oriented image analysis technique with a supervised support vector machine), using several reliable no-reference quality indices at the native spatial scale. Both visual and quantitative analyses by region and land cover indicated that all four approaches satisfied the demand for improving the spatial resolution of the original GF-2 multispectral (MS) image, and that mIHS produced results superior to those of the GS, CN, and PCA methods by better preserving image colors. The results also revealed differences in pansharpening quality among land covers: for most land-cover types, the mIHS method preserved spectral information and spatial autocorrelation better than the other methods.
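The region- and land-cover-wise evaluation idea can be sketched as follows, using the spectral angle as an example index and an integer land-cover map as the stratification; the study itself relies on several no-reference indices at native scale, so this is an illustrative simplification.

```python
import numpy as np

def sam_map(ms, fused, eps=1e-12):
    """Per-pixel spectral angle (radians) between MS and fused spectra; arrays shaped (bands, H, W)."""
    dot = np.sum(ms * fused, axis=0)
    norms = np.linalg.norm(ms, axis=0) * np.linalg.norm(fused, axis=0) + eps
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

def score_by_land_cover(ms, fused, cover_labels):
    """Average spectral angle per land-cover class (cover_labels: (H, W) integer map)."""
    angles = sam_map(ms, fused)
    return {int(c): float(angles[cover_labels == c].mean()) for c in np.unique(cover_labels)}
```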
APA, Harvard, Vancouver, ISO, and other styles
47

Raimundo, J., S. Lopez-Cuervo Medina, and J. F. Prieto. "RESOLUTION ENHANCEMENT OF INFRARED THERMAL IMAGING BY PANSHARPENING ALGORITHMS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-1-2021 (August 28, 2021): 593–99. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-1-2021-593-2021.

Full text
Abstract:
One common tool in Cultural Heritage inspections is the thermal camera, which is sensitive to the infrared part of the electromagnetic spectrum. However, the resolution of these sensors is considerably lower than that of other kinds, such as visible-spectrum cameras; typically, thermal camera sensors do not exceed one megapixel. This limitation becomes a problem when trying to combine information from thermal images with data from much higher-resolution sensors, such as visible RGB cameras, in the same project. In remote sensing, algorithms have been designed to fuse multispectral images with panchromatic images (originally from satellite platforms) to enhance the resolution of lower-resolution images using higher-resolution ones; these processes are known as pansharpening. Although pansharpening procedures are widely known, they have hardly been tested with thermal imaging. The first approaches to merging thermal and visible-spectrum images to enhance the resolution of the original thermal image applied the intensity-hue-saturation (IHS) algorithm (Lagüela et al., 2012; Kuenzer and Dech, 2013). These works studied only one particular algorithm and did not include any quality assessment of the results. Our work contains a complete review of a larger set of pansharpening algorithms and provides an in-depth study of thermal image pansharpening with a numerical assessment. Our research enables the use of thermal sensors with a lower resolution than other types of sensors used simultaneously in the same project.
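For reference, the IHS-style fusion cited above can be written compactly in its generalized (additive) form; this is a textbook formulation applied to an upsampled thermal/MS stack, not the code evaluated in the paper, and the histogram-matching step is an assumption.

```python
import numpy as np

def gihs_pansharpen(ms_up, pan):
    """Generalised IHS (additive) fusion: inject the difference between the
    high-resolution band and the MS intensity into every band.
    ms_up: (bands, H, W) thermal/MS image upsampled to the PAN grid; pan: (H, W)."""
    intensity = ms_up.mean(axis=0)
    # Histogram-match the PAN band to the intensity so the injected detail is unbiased.
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    return ms_up + (pan_matched - intensity)[None, :, :]
```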
APA, Harvard, Vancouver, ISO, and other styles
48

Wu, Yuanyuan, Siling Feng, Cong Lin, Haijie Zhou, and Mengxing Huang. "A Three Stages Detail Injection Network for Remote Sensing Images Pansharpening." Remote Sensing 14, no. 5 (February 22, 2022): 1077. http://dx.doi.org/10.3390/rs14051077.

Full text
Abstract:
Multispectral (MS) pansharpening is crucial for improving the spatial resolution of MS images, as it has the potential to provide images with both high spatial and high spectral resolution. Deep-learning-based pansharpening is a topical approach for dealing with the distortion of spatio-spectral information. To improve the preservation of spatio-spectral information, we propose a novel three-stage detail-injection pansharpening network (TDPNet) for remote sensing images. First, we put forward a dual-branch multiscale feature extraction block, which extracts details of the panchromatic (PAN) image at four scales as well as the difference between the duplicated PAN and MS images. Next, cascade cross-scale fusion (CCSF) employs fine-scale fusion information as prior knowledge for the coarse-scale fusion to compensate for the information lost during downsampling and to retain high-frequency details; CCSF combines the fine-scale and coarse-scale fusion based on residual learning and the prior information of the four scales. Last, we design a multiscale detail compensation mechanism and a multiscale skip-connection block to reconstruct the injected details, strengthening spatial details while reducing parameters. Extensive experiments on three satellite data sets, at degraded and full resolution, confirm that TDPNet balances spectral information and spatial details and improves the fidelity of the sharpened MS images. Both quantitative and subjective evaluation results indicate that TDPNet outperforms the compared state-of-the-art approaches in generating MS images with high spatial resolution.
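TDPNet learns its injection end-to-end; for readers unfamiliar with the underlying detail-injection idea, a classical (non-learned) baseline is sketched below, with a Gaussian low-pass filter and high-pass-modulation gains as assumed choices.

```python
import numpy as np
from scipy import ndimage

def detail_injection(ms_up, pan, sigma=2.0):
    """Classic detail-injection baseline: high-pass PAN details, modulated per band.
    ms_up: (bands, H, W) MS image upsampled to the PAN grid; pan: (H, W)."""
    pan_low = ndimage.gaussian_filter(pan, sigma=sigma)
    details = pan - pan_low
    # Band-dependent injection gains (high-pass-modulation style).
    gains = ms_up / (pan_low[None, :, :] + 1e-12)
    return ms_up + gains * details[None, :, :]
```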
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Dong, Ying Li, Li Ma, Zongwen Bai, and Jonathan Chan. "Going Deeper with Densely Connected Convolutional Neural Networks for Multispectral Pansharpening." Remote Sensing 11, no. 22 (November 7, 2019): 2608. http://dx.doi.org/10.3390/rs11222608.

Full text
Abstract:
In recent years, convolutional neural networks (CNNs) have shown promising performance in the fusion of multispectral (MS) and panchromatic (PAN) images (MS pansharpening). However, small-scale data and the vanishing-gradient problem have prevented existing CNN-based fusion approaches from leveraging deeper networks that potentially have better representation ability to characterize the complex nonlinear mapping between the input (source) and target (fused) images. In this paper, we introduce a very deep network with dense blocks and residual learning to tackle these problems. The proposed network takes advantage of dense connections within dense blocks, which connect any two convolution layers, to facilitate gradient flow and implicit deep supervision during training. In addition, reusing feature maps reduces the number of parameters, which helps mitigate the overfitting caused by small-scale data. Residual learning is explored to reduce the difficulty of generating an MS image with high spatial resolution. The proposed network is evaluated via experiments on three datasets, achieving competitive or superior performance; e.g., the spectral angle mapper (SAM) is reduced by over 10% on GaoFen-2 compared with other state-of-the-art methods.
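The dense-connection pattern referred to in this abstract can be illustrated with a minimal PyTorch module; this is not the authors' architecture (which also uses residual learning and specific depths), only the connectivity idea, and the layer sizes here are assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: every layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```

For example, DenseBlock(in_channels=5) applied to a stack of four upsampled MS bands plus the PAN band would return a richer feature tensor for a subsequent reconstruction head (an assumed usage, not the paper's configuration).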
APA, Harvard, Vancouver, ISO, and other styles
50

Mateos, Javier, Miguel Vega, Rafael Molina, and Aggelos K. Katsaggelos. "Pansharpening of multispectral images using a TV-based super-resolution algorithm." Journal of Physics: Conference Series 139 (November 1, 2008): 012022. http://dx.doi.org/10.1088/1742-6596/139/1/012022.

Full text
APA, Harvard, Vancouver, ISO, and other styles