
Journal articles on the topic 'Hyperspectral and multispectral data fusion'


Consult the top 50 journal articles for your research on the topic 'Hyperspectral and multispectral data fusion.'


1

Chakravortty, S., and P. Subramaniam. "Fusion of Hyperspectral and Multispectral Image Data for Enhancement of Spectral and Spatial Resolution." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 1099–103. http://dx.doi.org/10.5194/isprsarchives-xl-8-1099-2014.

Abstract:
Hyperspectral image enhancement has long been a concern for the remote sensing community seeking detailed endmember detection. A hyperspectral sensor collects images in hundreds of narrow, contiguous spectral channels, whereas a multispectral sensor collects images in relatively broad wavelength bands. However, the spatial resolution of the hyperspectral image is comparatively lower than that of the multispectral image. As a result, spectral signatures from different endmembers originate within a single pixel, producing what are known as mixed pixels. This paper presents an approach for obtaining an image that has the spatial resolution of the multispectral image and the spectral resolution of the hyperspectral image, by fusing the two. The proposed methodology also addresses the band remapping problem, which arises because multispectral and hyperspectral images cover different spectral regions. We therefore apply algorithms that restore the spatial information of the hyperspectral image by fusing with each multispectral band only those hyperspectral bands that fall within its spectral range. The proposed methodology is applied over Henry Island, in the Sunderban eco-geographic province, using data collected by the Hyperion hyperspectral sensor and the LISS IV multispectral sensor.
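The band-restricted fusion idea above, sharpening each hyperspectral band only with the multispectral band whose wavelength range covers it, can be sketched as a simple ratio-based injection. The band grouping, synthetic arrays, and gain formula below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def band_grouped_fusion(hsi_lr, msi_hr, groups, scale):
    """Sharpen each hyperspectral band using only the multispectral band
    whose spectral range covers it (illustrative ratio-based injection)."""
    # Naive nearest-neighbour upsampling of the low-resolution HSI cube.
    hsi_up = np.kron(hsi_lr, np.ones((scale, scale, 1)))
    fused = np.empty_like(hsi_up)
    for ms_band, hs_bands in groups.items():
        # Synthesize a low-resolution view of this multispectral band.
        synth = hsi_up[:, :, hs_bands].mean(axis=2)
        gain = msi_hr[:, :, ms_band] / (synth + 1e-12)
        # Inject the multispectral spatial detail into the covered HS bands.
        fused[:, :, hs_bands] = hsi_up[:, :, hs_bands] * gain[:, :, None]
    return fused

# Toy example: 4 hyperspectral bands grouped under 2 multispectral bands.
rng = np.random.default_rng(0)
hsi = rng.uniform(0.1, 1.0, (8, 8, 4))     # 8x8 low-resolution HSI
msi = rng.uniform(0.1, 1.0, (16, 16, 2))   # 16x16 high-resolution MSI
fused = band_grouped_fusion(hsi, msi, {0: [0, 1], 1: [2, 3]}, scale=2)
print(fused.shape)  # (16, 16, 4)
```

By construction, averaging the fused bands in a group reproduces the corresponding multispectral band, so the injected spatial detail is consistent with the high-resolution observation.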
2

Mifdal, Jamila, Bartomeu Coll, Jacques Froment, and Joan Duran. "Variational Fusion of Hyperspectral Data by Non-Local Filtering." Mathematics 9, no. 11 (May 31, 2021): 1265. http://dx.doi.org/10.3390/math9111265.

Abstract:
The fusion of multisensor data has attracted a lot of attention in computer vision, particularly in the remote sensing community. Hyperspectral image fusion consists in merging the spectral information of a hyperspectral image with the geometry of a multispectral one in order to infer an image with high spatial and spectral resolutions. In this paper, we propose a variational fusion model with a nonlocal regularization term that encodes patch-based filtering conditioned on the geometry of the multispectral data. We further incorporate a radiometric constraint that injects the high frequencies of the scene into the fused product, with a band-by-band modulation according to the energy levels of the multispectral and hyperspectral images. The proposed approach proves robust to noise and aliasing. Experimental results demonstrate the performance of our method with respect to state-of-the-art techniques on data acquired by commercial hyperspectral cameras and Earth observation satellites.
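A stripped-down version of such a variational formulation keeps only two quadratic data-fit terms, spatial degradation against the hyperspectral observation and spectral degradation against the multispectral one, and drops the nonlocal regularizer and radiometric constraint entirely. Everything below (box-average degradation, the toy spectral response `R`, plain gradient descent) is an illustrative assumption, not the authors' model:

```python
import numpy as np

def downsample(X, s):
    """Box-average spatial degradation (H, W, B) -> (H/s, W/s, B)."""
    H, W, B = X.shape
    return X.reshape(H // s, s, W // s, s, B).mean(axis=(1, 3))

def upsample(X, s):
    return np.repeat(np.repeat(X, s, axis=0), s, axis=1)

def fuse(hsi_lr, msi_hr, R, s, steps=500, lr=0.5):
    """Gradient descent on ||down(X) - HSI||^2 + ||X R^T - MSI||^2."""
    X = upsample(hsi_lr, s)                       # initial guess
    for _ in range(steps):
        g_spatial = upsample(downsample(X, s) - hsi_lr, s) / s**2
        g_spectral = (X @ R.T - msi_hr) @ R
        X = X - lr * (g_spatial + g_spectral)
    return X

rng = np.random.default_rng(1)
truth = rng.uniform(0, 1, (8, 8, 6))              # unknown HR-HSI (toy)
R = np.kron(np.eye(2), np.ones((1, 3)) / 3)       # toy spectral response: 6 -> 2 bands
hsi_lr = downsample(truth, 2)                     # observed LR hyperspectral
msi_hr = truth @ R.T                              # observed HR multispectral
fused = fuse(hsi_lr, msi_hr, R, s=2)
# The estimate reproduces both observations almost exactly.
print(np.allclose(downsample(fused, 2), hsi_lr, atol=1e-6),
      np.allclose(fused @ R.T, msi_hr, atol=1e-6))
```

Without a regularizer the problem is underdetermined, which is exactly why models like the one above add nonlocal priors: they pick one plausible solution among all that fit the data terms.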
3

Gao, Jianhao, Jie Li, and Menghui Jiang. "Hyperspectral and Multispectral Image Fusion by Deep Neural Network in a Self-Supervised Manner." Remote Sensing 13, no. 16 (August 13, 2021): 3226. http://dx.doi.org/10.3390/rs13163226.

Abstract:
Compared with multispectral sensors, hyperspectral sensors obtain images with high spectral resolution at the cost of spatial resolution, which constrains the further and precise application of hyperspectral images. An appealing way to obtain high-resolution hyperspectral images is hyperspectral and multispectral image fusion. In recent years, many studies have found that deep learning-based fusion methods outperform traditional fusion methods thanks to the strong non-linear fitting ability of convolutional neural networks. However, the performance of deep learning-based methods depends heavily on the size and quality of the training dataset, which constrains their application when a training dataset is unavailable or of low quality. In this paper, we introduce a novel fusion method that operates in a self-supervised manner, addressing hyperspectral and multispectral image fusion without training datasets. Our method imposes two constraints constructed from the low-resolution hyperspectral image and a pseudo high-resolution hyperspectral image obtained by a simple diffusion method. Several simulated and real-data experiments are conducted on popular remote sensing hyperspectral datasets under the condition that training datasets are unavailable. Quantitative and qualitative results indicate that the proposed method outperforms traditional methods by a large margin.
4

Li, Jiaxin, Ke Zheng, Jing Yao, Lianru Gao, and Danfeng Hong. "Deep Unsupervised Blind Hyperspectral and Multispectral Data Fusion." IEEE Geoscience and Remote Sensing Letters 19 (2022): 1–5. http://dx.doi.org/10.1109/lgrs.2022.3151779.

5

Nikolakopoulos, K., Ev Gioti, G. Skianis, and D. Vaiopoulos. "AMELIORATING THE SPATIAL RESOLUTION OF HYPERION HYPERSPECTRAL DATA. THE CASE OF ANTIPAROS ISLAND." Bulletin of the Geological Society of Greece 43, no. 3 (January 24, 2017): 1627. http://dx.doi.org/10.12681/bgsg.11337.

Abstract:
In this study, seven fusion techniques, namely Ehlers, Gram-Schmidt, High Pass Filter, Local Mean Matching (LMM), Local Mean and Variance Matching (LMVM), Pansharp, and PCA, were used for the fusion of Hyperion hyperspectral data with ALI panchromatic data. The panchromatic data have a spatial resolution of 10 m, while the hyperspectral data have a spatial resolution of 30 m. All these fusion techniques were designed for use with classical multispectral data, so it is of interest to assess how the commonly used fusion algorithms perform on hyperspectral data. The study area is Antiparos Island in the Aegean Sea.
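Of the methods compared above, the High Pass Filter technique is the easiest to illustrate: spatial detail extracted from the panchromatic band is added to each upsampled hyperspectral band. A minimal numpy-only sketch on synthetic arrays (the box filter, nearest-neighbour upsampling, and standard-deviation weighting are illustrative choices, not the exact implementation evaluated in the paper):

```python
import numpy as np

def box_blur(img, k=5):
    """Mean filter with edge padding (the low-pass half of the HPF method)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_fusion(hs_lr, pan_hr, ratio, k=5):
    """Add the pan image's high-pass detail to each upsampled HS band."""
    highpass = pan_hr - box_blur(pan_hr, k)
    fused = np.empty(pan_hr.shape + (hs_lr.shape[2],))
    for b in range(hs_lr.shape[2]):
        up = np.kron(hs_lr[:, :, b], np.ones((ratio, ratio)))  # 30 m -> 10 m
        # Scale the injected detail by the band-to-pan standard deviation ratio.
        w = up.std() / (highpass.std() + 1e-12)
        fused[:, :, b] = up + w * highpass
    return fused

rng = np.random.default_rng(2)
hs = rng.uniform(0, 1, (10, 10, 3))    # Hyperion-like 30 m bands (toy)
pan = rng.uniform(0, 1, (30, 30))      # ALI-like 10 m panchromatic (toy)
fused = hpf_fusion(hs, pan, ratio=3)
print(fused.shape)  # (30, 30, 3)
```

Production implementations typically match the filter size to the resolution ratio and use a proper interpolation for upsampling; the structure of the method is the same.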
6

Chang, Chein-I., Meiping Song, Chunyan Yu, Yulei Wang, Haoyang Yu, Jiaojiao Li, Lin Wang, Hsiao-Chi Li, and Xiaorun Li. "Editorial for Special Issue “Advances in Hyperspectral Data Exploitation”." Remote Sensing 14, no. 20 (October 13, 2022): 5111. http://dx.doi.org/10.3390/rs14205111.

Abstract:
Hyperspectral imaging (HSI) has emerged as a promising, advanced technology in remote sensing and has demonstrated great potential in the exploitation of a wide variety of data. In particular, its capability has expanded from unmixing data samples and detecting targets at the subpixel scale to finding endmembers, which generally cannot be resolved by multispectral imaging. Accordingly, a wealth of new HSI research has been conducted and reported in the literature in recent years. The aim of this Special Issue, “Advances in Hyperspectral Data Exploitation”, is to provide a forum for scholars and researchers to publish and share their research ideas and findings, and to facilitate the utility of hyperspectral imaging in data exploitation and other applications. With this in mind, this Special Issue accepted and published 19 papers in various areas, which can be organized into 9 categories: I: Hyperspectral Image Classification, II: Hyperspectral Target Detection, III: Hyperspectral and Multispectral Fusion, IV: Mid-wave Infrared Hyperspectral Imaging, V: Hyperspectral Unmixing, VI: Hyperspectral Sensor Hardware Design, VII: Hyperspectral Reconstruction, VIII: Hyperspectral Visualization, and IX: Applications.
7

Hervieu, Alexandre, Arnaud Le Bris, and Clément Mallet. "FUSION OF HYPERSPECTRAL AND VHR MULTISPECTRAL IMAGE CLASSIFICATIONS IN URBAN AREAS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 457–64. http://dx.doi.org/10.5194/isprs-annals-iii-3-457-2016.

Abstract:
An energy-based approach is proposed for classification decision fusion in urban areas using multispectral and hyperspectral imagery at distinct spatial resolutions. Hyperspectral data provide a great ability to discriminate land-cover classes, while multispectral data, usually at higher spatial resolution, make possible a more accurate spatial delineation of the classes. Hence, the aim here is to achieve the most accurate classification maps by taking advantage of both data sources at the decision level: the spectral properties of the hyperspectral data and the geometrical resolution of the multispectral images. More specifically, the proposed method takes into account probability class membership maps in order to improve the classification fusion process. Such probability maps are available using standard classification techniques such as Random Forests or Support Vector Machines. Classification probability maps are integrated into an energy framework where minimization of a given energy leads to better classification maps. The energy is minimized using a graph-cut method called quadratic pseudo-boolean optimization (QPBO) with α-expansion. A first model is proposed that gives satisfactory results in terms of classification quality and visual interpretation. This model is compared to a standard Potts model adapted to the considered problem. Finally, the model is enhanced by integrating the spatial contrast observed in the data source of higher spatial resolution (i.e., the multispectral image). Results obtained using the proposed energy-based decision fusion process are shown on two urban multispectral/hyperspectral datasets. An improvement of 2-3% is observed with respect to a Potts formulation, and of 3-8% compared to a single hyperspectral-based classification.
8

Peng, Mingyuan, Guoyuan Li, Xiaoqing Zhou, Chen Ma, Lifu Zhang, Xia Zhang, and Kun Shang. "A Registration-Error-Resistant Swath Reconstruction Method of ZY1-02D Satellite Hyperspectral Data Using SRE-ResNet." Remote Sensing 14, no. 22 (November 21, 2022): 5890. http://dx.doi.org/10.3390/rs14225890.

Abstract:
ZY1-02D is a Chinese hyperspectral satellite equipped with a visible near-infrared multispectral camera and a hyperspectral camera. Its data are widely used in soil quality assessment, mineral mapping, water quality assessment, etc. However, due to limitations of the CCD design, the swath of the hyperspectral data is narrower than that of the multispectral data. In addition, stripe noise and collages exist in the hyperspectral data, and cloud contamination in the scene further affects availability. To solve these problems, this article uses a swath reconstruction method based on a spectral-resolution-enhancement network using ResNet (SRE-ResNet), which reconstructs hyperspectral data from wide-swath multispectral data by modeling the mapping between the two. Experiments show that the method (1) can effectively reconstruct wide swaths of hyperspectral data, (2) can remove noise present in the hyperspectral data, and (3) is resistant to registration error. Comparison experiments also show that SRE-ResNet outperforms existing fusion methods in both accuracy and time efficiency; thus, the method is suitable for practical application.
9

Guilloteau, Claire, Thomas Oberlin, Olivier Berné, Émilie Habart, and Nicolas Dobigeon. "Simulated JWST Data Sets for Multispectral and Hyperspectral Image Fusion." Astronomical Journal 160, no. 1 (June 18, 2020): 28. http://dx.doi.org/10.3847/1538-3881/ab9301.

10

Yokoya, Naoto, Takehisa Yairi, and Akira Iwasaki. "Coupled Nonnegative Matrix Factorization Unmixing for Hyperspectral and Multispectral Data Fusion." IEEE Transactions on Geoscience and Remote Sensing 50, no. 2 (February 2012): 528–37. http://dx.doi.org/10.1109/tgrs.2011.2161320.

11

Chang, Chein-I., Meiping Song, Junping Zhang, and Chao-Cheng Wu. "Editorial for Special Issue “Hyperspectral Imaging and Applications”." Remote Sensing 11, no. 17 (August 27, 2019): 2012. http://dx.doi.org/10.3390/rs11172012.

Abstract:
Due to the advent of sensor technology, hyperspectral imaging has become an emerging technology in remote sensing. Many problems that cannot be resolved by multispectral imaging can now be solved by hyperspectral imaging. The aim of this Special Issue, “Hyperspectral Imaging and Applications”, is to publish new ideas and technologies to facilitate the utility of hyperspectral imaging in data exploitation and to further explore its potential in different applications. This Special Issue has accepted and published 25 papers in various areas, which can be organized into 7 categories: Data Unmixing, Spectral Variability, Target Detection, Hyperspectral Image Classification, Band Selection, Data Fusion, and Applications.
12

Liu, Hui, Liangfeng Deng, Yibo Dou, Xiwu Zhong, and Yurong Qian. "Pansharpening Model of Transferable Remote Sensing Images Based on Feature Fusion and Attention Modules." Sensors 23, no. 6 (March 20, 2023): 3275. http://dx.doi.org/10.3390/s23063275.

Abstract:
The purpose of panchromatic sharpening of remote sensing images is to generate high-resolution multispectral images through software alone, without additional economic expenditure. The specific method is to fuse the spatial information of a high-resolution panchromatic image with the spectral information of a low-resolution multispectral image. This work proposes a novel model for generating high-quality multispectral images. The model fuses multispectral and panchromatic images in the feature domain of a convolutional neural network, so that the fused features can be used to restore clear images. Exploiting the feature extraction ability of convolutional neural networks, we first designed two subnetworks with the same structure but different weights to extract complementary features of the input images at a deeper level, and then used single-channel attention to optimize the fused features and improve the final fusion performance. We selected public datasets widely used in this field to verify the validity of the model. Experimental results on the GaoFen-2 and SPOT6 datasets show that this method fuses multispectral and panchromatic images effectively. Compared with classical and recent methods in this field, the pansharpened images obtained by our model achieve better results in both quantitative and qualitative analysis. In addition, to verify the transferability and generalization of the proposed model, we applied it directly to multispectral image sharpening tasks such as hyperspectral image sharpening. Experiments on the Pavia Center and Botswana public hyperspectral datasets show that the model also achieves good performance on hyperspectral data.
13

Vargas, Edwin, Kevin Arias, Fernando Rojas, and Henry Arguello. "Fusion of Hyperspectral and Multispectral Images Based on a Centralized Non-local Sparsity Model of Abundance Maps." Tecnura 24, no. 66 (October 1, 2020): 62–75. http://dx.doi.org/10.14483/22487638.16904.

Abstract:
Objective: Hyperspectral (HS) imaging systems are commonly used in a diverse range of applications that involve detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of those tasks. In recent years, fusing the information of an HS image with high spatial resolution multispectral (MS) or panchromatic (PAN) images has been widely studied to enhance the spatial resolution. Image fusion has been formulated as an inverse problem whose solution is an HS image assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries in order to regularize the conventional fusion problem.

Methodology: The dictionaries are learned from the estimated abundance data, taking advantage of the high correlation between abundance maps and the non-local self-similarity over the spatial domain. Then, conditionally on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm.

Results: Experimental results with real data show that the proposed method outperforms the state-of-the-art methods under different quantitative assessments.

Conclusions: We propose a hyperspectral and multispectral image fusion method based on a non-local centralized sparse representation on abundance maps. This model allows us to include the non-local redundancy of abundance maps in the fusion problem using spectral unmixing, and improves the performance of sparsity-based fusion approaches.
14

Yokoya, Naoto, Claas Grohnfeldt, and Jocelyn Chanussot. "Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature." IEEE Geoscience and Remote Sensing Magazine 5, no. 2 (June 2017): 29–56. http://dx.doi.org/10.1109/mgrs.2016.2637824.

15

Benhalouche, Fatima Zohra, Moussa Sofiane Karoui, Yannick Deville, and Abdelaziz Ouamri. "Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization." Journal of Applied Remote Sensing 11, no. 2 (May 10, 2017): 025008. http://dx.doi.org/10.1117/1.jrs.11.025008.

16

Ren, Kai, Weiwei Sun, Xiangchao Meng, Gang Yang, and Qian Du. "Fusing China GF-5 Hyperspectral Data with GF-1, GF-2 and Sentinel-2A Multispectral Data: Which Methods Should Be Used?" Remote Sensing 12, no. 5 (March 9, 2020): 882. http://dx.doi.org/10.3390/rs12050882.

Abstract:
The China GaoFen-5 (GF-5) satellite sensor, launched in 2018, collects hyperspectral data with 330 spectral bands, a 30 m spatial resolution, and a 60 km swath width. Its competitive advantages over other on-orbit or planned sensors are its number of bands, spectral resolution, and swath width. Unfortunately, its applications may be undermined by its relatively low spatial resolution. Therefore, fusion of GF-5 data with high spatial resolution multispectral data is required to further enhance its spatial resolution while preserving its spectral fidelity. This paper presents a comprehensive evaluation of fusing GF-5 hyperspectral data with three typical multispectral data sources (i.e., GF-1, GF-2, and Sentinel-2A (S2A)), based on quantitative metrics, classification accuracy, and computational efficiency. Datasets from three study areas in China were used to design numerous experiments, and the performances of nine state-of-the-art fusion methods were compared. Experimental results show that the LANARAS (proposed by Lanaras et al.), adaptive Gram–Schmidt (GSA), and modulation transfer function (MTF)-generalized Laplacian pyramid (GLP) methods are more suitable for fusing GF-5 with GF-1 data; MTF-GLP and GSA are recommended for fusing GF-5 with GF-2 data; and GSA and smoothing filter-based intensity modulation (SFIM) can be used to fuse GF-5 with S2A data.
17

Lin, Hong, Jian Long, Yuanxi Peng, and Tong Zhou. "Hyperspectral Multispectral Image Fusion via Fast Matrix Truncated Singular Value Decomposition." Remote Sensing 15, no. 1 (December 30, 2022): 207. http://dx.doi.org/10.3390/rs15010207.

Abstract:
Recently, methods for obtaining a high spatial resolution hyperspectral image (HR-HSI) by fusing a low spatial resolution hyperspectral image (LR-HSI) and a high spatial resolution multispectral image (HR-MSI) have become increasingly popular. However, most fusion methods require knowing the point spread function (PSF) or the spectral response function (SRF) in advance, which are uncertain and thus limit the practicability of these methods. To solve this problem, we propose a fast fusion method based on matrix truncated singular value decomposition (FTMSVD) that does not use the SRF, exploiting our finding of the similarity between the HR-HSI and the HR-MSI after truncated matrix singular value decomposition (TMSVD). We tested the FTMSVD method on two simulated datasets, Pavia University and CAVE, and on a real dataset in which the remote sensing images were generated by two different spectral cameras, Sentinel-2 and Hyperion. The experimental results demonstrate the advantages of FTMSVD on all datasets: compared with state-of-the-art non-blind methods, our method achieves more effective fusion results while reducing the fusion time to less than 1% of theirs; moreover, it improves the PSNR value by up to 16 dB compared with state-of-the-art blind methods.
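The matrix truncated SVD at the heart of such methods can be illustrated on a toy hyperspectral matrix (pixels × bands): because the spectra are mixed from a handful of materials, the matrix is effectively low-rank, and a truncated SVD reproduces it almost exactly. This sketches only the TMSVD building block, not the FTMSVD fusion itself; the sizes and rank below are made up:

```python
import numpy as np

def truncated_svd(X, r):
    """Rank-r truncated SVD approximation of a (pixels x bands) matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

rng = np.random.default_rng(3)
# Toy HSI matrix: 200 pixels x 50 bands mixed from 4 spectral signatures,
# so its numerical rank is 4.
abundances = rng.uniform(0, 1, (200, 4))
signatures = rng.uniform(0, 1, (4, 50))
X = abundances @ signatures
X4 = truncated_svd(X, 4)
print(np.allclose(X, X4))  # True: rank-4 data is reproduced exactly
```

On real data the rank is only approximate, so the truncation discards a small residual along with noise; that low-dimensional subspace is what makes SVD-based fusion fast.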
18

Zare, Marzieh, Mohammad Sadegh Helfroush, Kamran Kazemi, and Paul Scheunders. "Hyperspectral and Multispectral Image Fusion Using Coupled Non-Negative Tucker Tensor Decomposition." Remote Sensing 13, no. 15 (July 26, 2021): 2930. http://dx.doi.org/10.3390/rs13152930.

Abstract:
Fusing a low spatial resolution hyperspectral image (HSI) with a high spatial resolution multispectral image (MSI), aiming to produce a super-resolution hyperspectral image, has recently attracted increasing research interest. In this paper, a novel approach based on coupled non-negative tensor decomposition is proposed. The proposed method performs a Tucker tensor factorization of the low resolution hyperspectral image and the high resolution multispectral image under the constraint of non-negative tensor decomposition (NTD). Conventional matrix factorization methods essentially lose spatio-spectral structure information when stacking the 3D data structure of a hyperspectral image into a matrix form. Moreover, the spectral and spatial structural features, or their joint features, have to be imposed from outside as constraints to make the matrix factorization problem well-posed. The proposed method has the advantage of preserving the spatio-spectral structure of hyperspectral images. In this paper, the NTD is directly imposed on the coupled tensors of the HSI and MSI. Hence, the intrinsic spatio-spectral structure of the HSI is represented without loss, and spatial and spectral information can be interdependently exploited. Furthermore, multilinear interactions of the different modes of the HSI can be exactly modeled with the core tensor of the Tucker decomposition. The proposed method is straightforward and easy to implement. Unlike other state-of-the-art approaches, its complexity is linear in the size of the HSI cube. Experiments on two well-known datasets give promising results when compared with some recent methods from the literature.
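The Tucker factorization underlying the method can be sketched with a plain HOSVD on a toy cube: one factor matrix per mode (rows, columns, bands) plus a core tensor represents the HSI without flattening it into a matrix. This is only the decomposition building block, not the coupled non-negative fusion algorithm; shapes and ranks are made up:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Tucker decomposition via HOSVD: one factor matrix per mode plus a core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)   # project onto each mode's basis
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_product(T, U, m)
    return T

rng = np.random.default_rng(4)
hsi = rng.uniform(0, 1, (12, 12, 20))        # rows x cols x bands toy cube
core, factors = hosvd(hsi, (12, 12, 20))     # full ranks: lossless
print(np.allclose(hsi, reconstruct(core, factors)))  # True
```

In the coupled fusion setting, the spatial factors would be estimated from the MSI and the spectral factor from the HSI, with non-negativity enforced on all of them; the mode products are the same.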
19

Lu, Han, Danyu Qiao, Yongxin Li, Shuang Wu, and Lei Deng. "Fusion of China ZY-1 02D Hyperspectral Data and Multispectral Data: Which Methods Should Be Used?" Remote Sensing 13, no. 12 (June 16, 2021): 2354. http://dx.doi.org/10.3390/rs13122354.

Abstract:
ZY-1 02D is China’s first civil hyperspectral (HS) operational satellite, developed independently and successfully launched in 2019. It can collect HS data with a spatial resolution of 30 m, 166 spectral bands, a spectral range of 400~2500 nm, and a swath width of 60 km. Its competitive advantages over other on-orbit or planned satellites are its high spectral resolution and large swath width. Unfortunately, its relatively low spatial resolution may limit its applications. As a result, fusing ZY-1 02D HS data with high-spatial-resolution multispectral (MS) data is required to improve spatial resolution while maintaining spectral fidelity. This paper presents a comprehensive evaluation of the fusion of ZY-1 02D HS data with ZY-1 02D MS data (10 m spatial resolution), based on visual interpretation and quantitative metrics. Datasets from Hebei, China, were used in this experiment, and the performances of six common data fusion methods, namely Gram-Schmidt (GS), High Pass Filter (HPF), Nearest-Neighbor Diffusion (NND), Modified Intensity-Hue-Saturation (IHS), Wavelet Transform (Wavelet), and Color Normalized Sharpening (Brovey), were compared. The experimental results show that: (1) the HPF and GS methods are better suited for the fusion of ZY-1 02D HS and MS data, (2) the IHS and Brovey methods can improve the spatial resolution of ZY-1 02D HS data but introduce spectral distortion, and (3) the Wavelet and NND results have high spectral fidelity but poor spatial detail representation. The findings of this study could serve as a good reference for the practical application of ZY-1 02D HS data fusion.
20

Hall, Emma C., and Mark J. Lara. "Multisensor UAS mapping of Plant Species and Plant Functional Types in Midwestern Grasslands." Remote Sensing 14, no. 14 (July 18, 2022): 3453. http://dx.doi.org/10.3390/rs14143453.

Abstract:
Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR)- or structure-from-motion (SfM)-derived canopy height models (CHMs). Sensor-data fusion considered either a single observation period or near-monthly observation frequencies for integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest for hyperspectral and LiDAR-CHM fusions (78% and 89%, respectively), followed by multispectral and phenometric-SfM-CHM fusions (52% and 60%, respectively) and RGB and SfM-CHM fusions (45% and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracies between economical and exorbitant sensor networks, but highlight that off-the-shelf multispectral sensors may achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine learning image classifiers.
21

Weinmann, M., and M. Weinmann. "FUSION OF HYPERSPECTRAL, MULTISPECTRAL, COLOR AND 3D POINT CLOUD INFORMATION FOR THE SEMANTIC INTERPRETATION OF URBAN ENVIRONMENTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1899–906. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1899-2019.

Abstract:
In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data and LiDAR data acquired from aerial sensor platforms. We extract radiometric features based on the given RGB color imagery and the given hyperspectral data, and we also consider different transformations to potentially better data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures or specific assumptions about the scene. For the hyperspectral data, we involve techniques for dimensionality reduction and feature selection as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combination, we present results achieved for the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
22

Anshakov, G. P., A. V. Raschupkin, and Y. N. Zhuravel. "Hyperspectral and Multispectral Resurs-P Data Fusion for Increase of Their Informational Content." Computer Optics 39, no. 1 (2015): 77–82. http://dx.doi.org/10.18287/0134-2452-2015-39-1-77-82.

23

Zhou, Xinyu, Ye Zhang, Junping Zhang, and Shaoqi Shi. "Alternating Direction Iterative Nonnegative Matrix Factorization Unmixing for Multispectral and Hyperspectral Data Fusion." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 5223–32. http://dx.doi.org/10.1109/jstars.2020.3020586.

24

Guo, Fen Fen, Jian Rong Fan, Wen Qian Zang, Fei Liu, and Huai Zhen Zhang. "Research on Fusion Approach for Hyperspectral Image and Multispectral Image of HJ-1A." Advanced Materials Research 356-360 (October 2011): 2897–903. http://dx.doi.org/10.4028/www.scientific.net/amr.356-360.2897.

Abstract:
The HJ-1A satellite fills the vacancy of hyperspectral imaging (HSI) in China, making further study and application possible. However, compared with other HSI sources, its low spatial resolution is a major obstacle to application. In order to improve the HSI quality and make full use of existing remote sensing data, this paper proposes a fusion approach based on the 3D wavelet transform (3D-WT) to fuse HJ-1A HSI and multispectral imagery (MSI) using their 3D structure. In contrast to the currently mature principal component analysis (PCA) and Gram-Schmidt fusion approaches, the 3D-WT fusion approach uses all bands of the MSI to its advantage, and the fusion result performs better in both spatial and spectral quality.
25

Karimov, B., G. Karimova, and N. Amankulova. "Land Cover Classification Improvements by Remote Sensing Data Fusion." Bulletin of Science and Practice, no. 2 (February 15, 2023): 66–74. http://dx.doi.org/10.33619/2414-2948/87/07.

Abstract:
Computer processing and analysis of satellite data is an important task in Earth remote sensing. Such processing can range from an amateur photographer adjusting the contrast and brightness of images to a group of scientists using neural network classification to determine the types of minerals in a hyperspectral satellite image. This article implements a method of satellite data fusion that improves digital image interpretation and image quality for further analysis. For fusion, a 30 m resolution Landsat 5 multispectral image with six channels was taken, of which the three most significant and informative were used, together with a 15 m resolution panchromatic (monochrome) image. To evaluate the resolution of the source and resulting images before and after the fusion algorithm, image slices along a straight line crossing buildings, vegetation, roads, and industrial areas are presented. For testing, test areas taken from Google Earth and field-work results were used.
APA, Harvard, Vancouver, ISO, and other styles
26

Hu, Jingliang, Rong Liu, Danfeng Hong, Andrés Camero, Jing Yao, Mathias Schneider, Franz Kurz, Karl Segl, and Xiao Xiang Zhu. "MDAS: a new multimodal benchmark dataset for remote sensing." Earth System Science Data 15, no. 1 (January 9, 2023): 113–31. http://dx.doi.org/10.5194/essd-15-113-2023.

Full text
Abstract:
Abstract. In Earth observation, multimodal data fusion is an intuitive strategy to break the limitation of individual data. Complementary physical contents of data sources allow comprehensive and precise information retrieval. With current satellite missions, such as ESA Copernicus programme, various data will be accessible at an affordable cost. Future applications will have many options for data sources. Such a privilege can be beneficial only if algorithms are ready to work with various data sources. However, current data fusion studies mostly focus on the fusion of two data sources. There are two reasons; first, different combinations of data sources face different scientific challenges. For example, the fusion of synthetic aperture radar (SAR) data and optical images needs to handle the geometric difference, while the fusion of hyperspectral and multispectral images deals with different resolutions on spatial and spectral domains. Second, nowadays, it is still both financially and labour expensive to acquire multiple data sources for the same region at the same time. In this paper, we provide the community with a benchmark multimodal data set, MDAS, for the city of Augsburg, Germany. MDAS includes synthetic aperture radar data, multispectral image, hyperspectral image, digital surface model (DSM), and geographic information system (GIS) data. All these data are collected on the same date, 7 May 2018. MDAS is a new benchmark data set that provides researchers rich options on data selections. In this paper, we run experiments for three typical remote sensing applications, namely, resolution enhancement, spectral unmixing, and land cover classification, on MDAS data set. Our experiments demonstrate the performance of representative state-of-the-art algorithms whose outcomes can serve as baselines for further studies. 
The dataset is publicly available at https://doi.org/10.14459/2022mp1657312 (Hu et al., 2022a) and the code (including the pre-trained models) at https://doi.org/10.5281/zenodo.7428215 (Hu et al., 2022b).
APA, Harvard, Vancouver, ISO, and other styles
27

Mohammed Noori, Abbas, Sumaya Falih Hasan, Qayssar Mahmood Ajaj, Mustafa Ridha Mezaal, Helmi Z. M. Shafri, and Muntadher Aidi Shareef. "Fusion of Airborne Hyperspectral and WorldView2 Multispectral Images for Detailed Urban Land Cover Classification A case Study of Kuala Lumpur, Malaysia." International Journal of Engineering & Technology 7, no. 4.37 (December 13, 2018): 202. http://dx.doi.org/10.14419/ijet.v7i4.37.24102.

Full text
Abstract:
Detecting the features of urban areas in detail requires very high spatial and spectral resolution in images. Hyperspectral sensors usually offer high spectral resolution images with a low spatial resolution. By contrast, multispectral sensors produce high spatial resolution images with a poor spectral resolution. Therefore, numerous fusion algorithms and techniques have been proposed in recent years to obtain high-quality images with improved spatial and spectral resolutions by sensibly combining the data acquired for the same scene. This work aims to exploit the extracted information from images in an effective way. To achieve this objective, a new algorithm based on transformation was developed. This algorithm primarily depends on the Gram–Schmidt process for fusing images, removing distortions, and improving the appearance of images. Images are first fused by using the Gram–Schmidt pansharpening method. The obtained fused image is utilized in the classification process in different areas by using support vector machine (SVM). The classification result is evaluated using a matrix of errors. The overall accuracy produced from the hyperspectral, multispectral and fused images was 72.33%, 82.83%, and 89.34%, respectively. Results showed that the developed algorithm improved the image enhancement and image fusion. Moreover, the developed algorithm has the ability to produce an imaging product with high spatial resolution and high-quality spectral data.
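A minimal Gram–Schmidt-style pansharpening sketch (component substitution by detail injection) conveys the core of the fusion step described above; this is an illustration, not the authors' exact algorithm, and it assumes the multispectral cube has already been upsampled to the panchromatic grid:

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified Gram-Schmidt-style pansharpening.
    ms:  (bands, H, W) multispectral cube, upsampled to the pan grid.
    pan: (H, W) panchromatic image."""
    bands, H, W = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    sim_pan = X.mean(axis=0)                        # simulated low-res pan
    p = pan.ravel().astype(float)
    # match the real pan's mean/std to the simulated pan
    p = (p - p.mean()) / (p.std() + 1e-12) * sim_pan.std() + sim_pan.mean()
    # injection gains: covariance of each band with the simulated pan
    sp = sim_pan - sim_pan.mean()
    gains = ((X - X.mean(axis=1, keepdims=True)) @ sp) / (sp @ sp + 1e-12)
    sharp = X + gains[:, None] * (p - sim_pan)      # inject spatial detail
    return sharp.reshape(bands, H, W)
```

If the pan image carries no detail beyond the simulated one, the output reduces to the input multispectral cube, which is the expected spectral-fidelity behaviour of component substitution.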
APA, Harvard, Vancouver, ISO, and other styles
28

Ma, Fei, Feixia Yang, and Yanwei Wang. "Low-Rank Tensor Decomposition With Smooth and Sparse Regularization for Hyperspectral and Multispectral Data Fusion." IEEE Access 8 (2020): 129842–56. http://dx.doi.org/10.1109/access.2020.3009263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lin, Chia-Hsiang, Fei Ma, Chong-Yung Chi, and Chih-Hsiang Hsieh. "A Convex Optimization-Based Coupled Nonnegative Matrix Factorization Algorithm for Hyperspectral and Multispectral Data Fusion." IEEE Transactions on Geoscience and Remote Sensing 56, no. 3 (March 2018): 1652–67. http://dx.doi.org/10.1109/tgrs.2017.2766080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lu, Xiaochen, Dezheng Yang, Fengde Jia, and Yifeng Zhao. "Coupled Convolutional Neural Network-Based Detail Injection Method for Hyperspectral and Multispectral Image Fusion." Applied Sciences 11, no. 1 (December 30, 2020): 288. http://dx.doi.org/10.3390/app11010288.

Full text
Abstract:
In this paper, a detail-injection method based on a coupled convolutional neural network (CNN) is proposed for hyperspectral (HS) and multispectral (MS) image fusion with the goal of enhancing the spatial resolution of HS images. Owing to the excellent performance in spectral fidelity of the detail-injection model and the image spatial–spectral feature exploration ability of CNN, the proposed method utilizes a couple of CNN networks as the feature extraction method and learns details from the HS and MS images individually. By appending an additional convolutional layer, both the extracted features of two images are concatenated to predict the missing details of the anticipated HS image. Experiments on simulated and real HS and MS data show that compared with some state-of-the-art HS and MS image fusion methods, our proposed method achieves better fusion results, provides excellent spectrum preservation ability, and is easy to implement.
APA, Harvard, Vancouver, ISO, and other styles
31

Cessna, Janice, Michael G. Alonzo, Adrianna C. Foster, and Bruce D. Cook. "Mapping Boreal Forest Spruce Beetle Health Status at the Individual Crown Scale Using Fused Spectral and Structural Data." Forests 12, no. 9 (August 25, 2021): 1145. http://dx.doi.org/10.3390/f12091145.

Full text
Abstract:
The frequency and severity of spruce bark beetle outbreaks are increasing in boreal forests leading to widespread tree mortality and fuel conditions promoting extreme wildfire. Detection of beetle infestation is a forest health monitoring (FHM) priority but is hampered by the challenges of detecting early stage (“green”) attack from the air. There is indication that green stage might be detected from vertical gradients of spectral data or from shortwave infrared information distributed within a single crown. To evaluate the efficacy of discriminating “non-infested”, “green”, and “dead” health statuses at the landscape scale in Alaska, USA, this study conducted spectral and structural fusion of data from: (1) Unoccupied aerial vehicle (UAV) multispectral (6 cm) + structure from motion point clouds (~700 pts m−2); and (2) Goddard Lidar Hyperspectral Thermal (G-LiHT) hyperspectral (400 to 1000 nm, 0.5 m) + SWIR-band lidar (~32 pts m−2). We achieved 78% accuracy for all three health statuses using spectral + structural fusion from either UAV or G-LiHT and 97% accuracy for non-infested/dead using G-LiHT. We confirm that UAV 3D spectral (e.g., greenness above versus below median height in crown) and lidar apparent reflectance metrics (e.g., mean reflectance at 99th percentile height in crown), are of high value, perhaps capturing the vertical gradient of needle degradation. In most classification exercises, UAV accuracy was lower than G-LiHT indicating that collecting ultra-high spatial resolution data might be less important than high spectral resolution information. While the value of passive optical spectral information was largely confined to the discrimination of non-infested versus dead crowns, G-LiHT hyperspectral band selection (~400, 675, 755, and 940 nm) could inform future FHM mission planning regarding optimal wavelengths for this task. 
Interestingly, the selected regions mostly did not align with the band designations for our UAV multispectral data but do correspond to, e.g., Sentinel-2 red edge bands, suggesting a path forward for moderate scale bark beetle detection when paired with suitable structural data.
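The vertical-gradient crown metric mentioned above (greenness above versus below the median height in a crown) can be illustrated as follows; `points` and `greenness` are hypothetical per-point arrays from a photogrammetric point cloud, not the study's data structures:

```python
import numpy as np

def greenness_gradient(points, greenness):
    """Vertical greenness gradient within one crown: mean greenness of points
    above the median height minus that of points at or below it.
    points: (N, 3) array of x, y, z; greenness: (N,) per-point index."""
    z = points[:, 2]
    med = np.median(z)
    upper = greenness[z > med].mean()
    lower = greenness[z <= med].mean()
    return upper - lower
```

A positive value would indicate greener upper-crown foliage; beetle-attacked crowns degrading from the top down would be expected to show the opposite sign.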
APA, Harvard, Vancouver, ISO, and other styles
32

Huang, Leping, Zhongwen Hu, Xin Luo, Qian Zhang, Jingzhe Wang, and Guofeng Wu. "Stepwise Fusion of Hyperspectral, Multispectral and Panchromatic Images with Spectral Grouping Strategy: A Comparative Study Using GF5 and GF1 Images." Remote Sensing 14, no. 4 (February 20, 2022): 1021. http://dx.doi.org/10.3390/rs14041021.

Full text
Abstract:
Since hyperspectral satellite images (HSIs) usually hold low spatial resolution, improving the spatial resolution of hyperspectral imaging (HSI) is an effective solution to explore its potential for remote sensing applications, such as land cover mapping over urban and coastal areas. The fusion of HSIs with high spatial resolution multispectral images (MSIs) and panchromatic (PAN) images could be a solution. To address the challenging work of fusing HSIs, MSIs and PAN images, a novel easy-to-implement stepwise fusion approach was proposed in this study. The fusion of HSIs and MSIs was decomposed into a set of simple image fusion tasks through spectral grouping strategy. HSI, MSI and PAN images were fused step by step using existing image fusion algorithms. According to different fusion order, two strategies ((HSI+MSI)+PAN and HSI+(MSI+PAN)) were proposed. Using simulated and real Gaofen-5 (GF-5) HSI, MSI and PAN images from the Gaofen-1 (GF-1) PMS sensor as experimental data, we compared the proposed stepwise fusion strategies with the traditional fusion strategy (HSI+PAN), and compared the performances of six fusion algorithms under three fusion strategies. We comprehensively evaluated the fused results through three aspects: spectral fidelity, spatial fidelity and computation efficiency evaluation. The results showed that (1) the spectral fidelity of the fused images obtained by stepwise fusion strategies was better than that of the traditional strategy; (2) the proposed stepwise strategies performed better or comparable spatial fidelity than traditional strategy; (3) the stepwise strategy did not significantly increase the time complexity compared to the traditional strategy; and (4) we also provide suggestions for selecting image fusion algorithms using the proposed strategy. 
The study provided us with a reference for the selection of fusion strategies and algorithms in different application scenarios, and also provided an easy-to-implement solution and useful references for fusing HSI, MSI and PAN images.
APA, Harvard, Vancouver, ISO, and other styles
33

Fan, Shuxiang, Changying Li, Wenqian Huang, and Liping Chen. "Data Fusion of Two Hyperspectral Imaging Systems with Complementary Spectral Sensing Ranges for Blueberry Bruising Detection." Sensors 18, no. 12 (December 17, 2018): 4463. http://dx.doi.org/10.3390/s18124463.

Full text
Abstract:
Currently, the detection of blueberry internal bruising focuses mostly on single hyperspectral imaging (HSI) systems. Attempts to fuse different HSI systems with complementary spectral ranges are still lacking. A push broom based HSI system and a liquid crystal tunable filter (LCTF) based HSI system with different sensing ranges and detectors were investigated to jointly detect blueberry internal bruising in the lab. The mean reflectance spectrum of each berry sample was extracted from the data obtained by two HSI systems respectively. The spectral data from the two spectroscopic techniques were analyzed separately using feature selection method, partial least squares-discriminant analysis (PLS-DA), and support vector machine (SVM), and then fused with three data fusion strategies at the data level, feature level, and decision level. The three data fusion strategies achieved better classification results than using each HSI system alone. The decision level fusion integrating classification results from the two instruments with selected relevant features achieved more promising results, suggesting that the two HSI systems with complementary spectral ranges, combined with feature selection and data fusion strategies, could be used synergistically to improve blueberry internal bruising detection. This study was the first step in demonstrating the feasibility of the fusion of two HSI systems with complementary spectral ranges for detecting blueberry bruising, which could lead to a multispectral imaging system with a few selected wavelengths and an appropriate detector for bruising detection on the packing line.
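Of the three fusion strategies compared, decision-level fusion is the simplest to illustrate. Below is a hedged sketch of a weighted soft-vote rule combining class-probability vectors from the two HSI systems; the weights and class counts are hypothetical, not taken from the paper:

```python
import numpy as np

def decision_fusion(p1, p2, w1=0.5, w2=0.5):
    """Decision-level fusion: weighted average of two classifiers'
    class-probability vectors, returning (fused probs, predicted class)."""
    p = w1 * np.asarray(p1, dtype=float) + w2 * np.asarray(p2, dtype=float)
    return p / p.sum(), int(np.argmax(p))

# e.g. a VNIR-range classifier versus a SWIR-range classifier, two classes
probs, label = decision_fusion([0.6, 0.4], [0.2, 0.8])
```

Data-level fusion would instead concatenate the two mean spectra before classification, and feature-level fusion would concatenate selected features from each instrument.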
APA, Harvard, Vancouver, ISO, and other styles
34

Tong, Zhonggui, Yuxia Li, Jinglin Zhang, Lei He, and Yushu Gong. "MSFANet: Multiscale Fusion Attention Network for Road Segmentation of Multispectral Remote Sensing Data." Remote Sensing 15, no. 8 (April 8, 2023): 1978. http://dx.doi.org/10.3390/rs15081978.

Full text
Abstract:
With the development of deep learning and remote sensing technologies in recent years, many semantic segmentation methods based on convolutional neural networks (CNNs) have been applied to road extraction. However, previous deep learning-based road extraction methods primarily used RGB imagery as an input and did not take advantage of the spectral information contained in hyperspectral imagery. These methods can produce discontinuous outputs caused by objects with similar spectral signatures to roads. In addition, the images obtained from different Earth remote sensing sensors may have different spatial resolutions, enhancing the difficulty of the joint analysis. This work proposes the Multiscale Fusion Attention Network (MSFANet) to overcome these problems. Compared to traditional road extraction frameworks, the proposed MSFANet fuses information from different spectra at multiple scales. In MSFANet, multispectral remote sensing data is used as an additional input to the network, in addition to RGB remote sensing data, to obtain richer spectral information. The Cross-source Feature Fusion Module (CFFM) is used to calibrate and fuse spectral features at different scales, reducing the impact of noise and redundant features from different inputs. The Multiscale Semantic Aggregation Decoder (MSAD) fuses multiscale features and global context information from the upsampling process layer by layer, reducing information loss during the multiscale feature fusion. The proposed MSFANet network was applied to the SpaceNet dataset and self-annotated images from Chongzhou, a representative city in China. Our MSFANet performs better over the baseline HRNet by a large margin of +6.38 IoU and +5.11 F1-score on the SpaceNet dataset, +3.61 IoU and +2.32 F1-score on the self-annotated dataset (Chongzhou dataset). Moreover, the effectiveness of MSFANet was also proven by comparative experiments with other studies.
APA, Harvard, Vancouver, ISO, and other styles
35

Guo, Siyu, Xi’ai Chen, Huidi Jia, Zhi Han, Zhigang Duan, and Yandong Tang. "Fusing Hyperspectral and Multispectral Images via Low-Rank Hankel Tensor Representation." Remote Sensing 14, no. 18 (September 7, 2022): 4470. http://dx.doi.org/10.3390/rs14184470.

Full text
Abstract:
Hyperspectral images (HSIs) have high spectral resolution and low spatial resolution. HSI super-resolution (SR) can enhance the spatial information of the scene. Current SR methods have generally focused on the direct utilization of image structure priors, which are often modeled in global or local lower-order image space. The spatial and spectral hidden priors, which are accessible from higher-order space, cannot be taken advantage of when using these methods. To solve this problem, we propose a higher-order Hankel space-based hyperspectral image-multispectral image (HSI-MSI) fusion method in this paper. In this method, the higher-order tensor represented in the Hankel space increases the HSI data redundancy, and the hidden relationships are revealed by the nonconvex penalized Kronecker-basis-representation-based tensor sparsity measure (KBR). Weighted 3D total variation (W3DTV) is further applied to maintain the local smoothness in the image structure, and an efficient algorithm is derived under the alternating direction method of multipliers (ADMM) framework. Extensive experiments on three commonly used public HSI datasets validate the superiority of the proposed method compared with current state-of-the-art SR approaches in image detail reconstruction and spectral information restoration.
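The Hankel lifting underlying this family of methods can be illustrated on a 1D signal; stacking such matrices per band or per row is the higher-order embedding idea. This is a generic illustration, not the paper's tensor construction:

```python
import numpy as np

def hankelize(x, window):
    """Lift a 1D signal into a (window x K) Hankel matrix, H[i, j] = x[i + j].
    The redundancy of the embedding (each sample appears on an antidiagonal)
    is what low-rank priors exploit."""
    n = len(x)
    K = n - window + 1
    return np.array([x[i:i + K] for i in range(window)])
```

For a smooth signal the resulting matrix is close to low-rank, which is why penalties such as the KBR tensor sparsity measure become effective in this space.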
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Yi, Yizhe Yang, Qinwei Zhang, Runqing Duan, Junqi Liu, Yuchu Qin, and Xianzhi Wang. "Toward Multi-Stage Phenotyping of Soybean with Multimodal UAV Sensor Data: A Comparison of Machine Learning Approaches for Leaf Area Index Estimation." Remote Sensing 15, no. 1 (December 20, 2022): 7. http://dx.doi.org/10.3390/rs15010007.

Full text
Abstract:
Leaf Area Index (LAI) is an important parameter which can be used for crop growth monitoring and yield estimation. Many studies have been carried out to estimate LAI with remote sensing data obtained by sensors mounted on Unmanned Aerial Vehicles (UAVs) in major crops; however, most of the studies used only a single type of sensor, and the comparative study of different sensors and sensor combinations in the model construction of LAI was rarely reported, especially in soybean. In this study, three types of sensors, i.e., hyperspectral, multispectral, and LiDAR, were used to collect remote sensing data at three growth stages in soybean. Six typical machine learning algorithms, including Unary Linear Regression (ULR), Multiple Linear Regression (MLR), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), Support Vector Machine (SVM) and Back Propagation (BP), were used to construct prediction models of LAI. The results indicated that the hyperspectral and LiDAR data did not significantly improve the prediction accuracy of LAI. Comparison of different sensors and sensor combinations showed that the fusion of the hyperspectral and multispectral data could significantly improve the predictive ability of the models, and among all the prediction models constructed by different algorithms, the prediction model built by XGBoost based on multimodal data showed the best performance. Comparison of the models for different growth stages showed that the XGBoost-LAI model for the flowering stage and the universal models of the XGBoost-LAI and RF-LAI for three growth stages showed the best performances. The results of this study might provide some ideas for the accurate estimation of LAI, and also provide novel insights toward high-throughput phenotyping of soybean with multi-modal remote sensing data.
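As a minimal stand-in for the simplest of the compared models (multiple linear regression on fused features), ordinary least squares on concatenated hyperspectral and multispectral features might look like the sketch below; the feature arrays and their dimensions are hypothetical:

```python
import numpy as np

def fit_lai(hyper_feats, multi_feats, lai):
    """OLS fit of LAI on fused (concatenated) multimodal features.
    hyper_feats: (n, p1), multi_feats: (n, p2), lai: (n,)."""
    X = np.hstack([hyper_feats, multi_feats])
    X = np.column_stack([np.ones(len(X)), X])       # intercept column
    coef, *_ = np.linalg.lstsq(X, lai, rcond=None)
    return coef

def predict_lai(coef, hyper_feats, multi_feats):
    X = np.hstack([hyper_feats, multi_feats])
    return np.column_stack([np.ones(len(X)), X]) @ coef
```

The study's stronger models (RF, XGBoost, SVM, BP) replace this linear map with nonlinear learners but consume the same fused feature matrix.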
APA, Harvard, Vancouver, ISO, and other styles
37

Ahmad, Uzair, Abozar Nasirahmadi, Oliver Hensel, and Stefano Marino. "Technology and Data Fusion Methods to Enhance Site-Specific Crop Monitoring." Agronomy 12, no. 3 (February 23, 2022): 555. http://dx.doi.org/10.3390/agronomy12030555.

Full text
Abstract:
The digital farming approach merges new technologies and sensor data to optimize the quality of crop monitoring in agriculture. The successful fusion of technology and data is highly dependent on the parameter collection, the modeling adoption, and the technology integration being accurately implemented according to the specified needs of the farm. This fusion technique has not yet been widely adopted due to several challenges; however, our study here reviews current methods and applications for fusing technologies and data. First, the study highlights different sensors that can be merged with other systems to develop fusion methods, such as optical, thermal infrared, multispectral, hyperspectral, light detection and ranging and radar. Second, data fusion using the internet of things is reviewed. Third, the study shows different platforms that can be used as a source for the fusion of technologies, such as ground-based (tractors and robots), space-borne (satellites) and aerial (unmanned aerial vehicles) monitoring platforms. Finally, the study presents data fusion methods for site-specific crop parameter monitoring, such as nitrogen, chlorophyll, leaf area index, and aboveground biomass, and shows how the fusion of technologies and data can improve the monitoring of these parameters. The study further reveals limitations of the previous technologies and provides recommendations on how to improve their fusion with the best available sensors. The study reveals that among different data fusion methods, sensors and technologies, the airborne and terrestrial LiDAR fusion method for crop, canopy, and ground may be considered as a futuristic easy-to-use and low-cost solution to enhance the site-specific monitoring of crop parameters.
APA, Harvard, Vancouver, ISO, and other styles
38

Ling, Jianmei, Lu Li, and Haiyan Wang. "Improved Fusion of Spatial Information into Hyperspectral Classification through the Aggregation of Constrained Segment Trees: Segment Forest." Remote Sensing 13, no. 23 (November 27, 2021): 4816. http://dx.doi.org/10.3390/rs13234816.

Full text
Abstract:
Compared with traditional optical and multispectral remote sensing images, hyperspectral images have hundreds of bands, opening the possibility of fine classification of the earth’s surface. At the same time, a hyperspectral image jointly carries spatial and spectral information, and combining this spatial–spectral information for classification has become a hot research topic. Based on the idea of spatial–spectral classification, this paper proposes a novel hyperspectral image classification method based on a segment forest (SF). Firstly, the first principal component of the image is extracted by principal component analysis (PCA) for dimensionality reduction, and a segment forest is constructed from the reduced data to extract the non-local prior spatial information of the image. Secondly, the initial classification results and class probability distribution are obtained using a support vector machine (SVM), which extracts the spectral information of the image. Finally, the constructed segment forest is used to optimize the initial classification results and obtain the final classification. In this paper, three domestic and foreign public data sets were selected to verify segment forest classification. SF effectively improved the classification accuracy of SVM: overall accuracy was enhanced by 11.16% on Salinas, 15.89% on WHU-Hi-HongHu, and 19.56% on XiongAn. It was then compared with six decision-level spatial–spectral classification methods, including guided filtering (GF), Markov random field (MRF), random walk (RW), minimum spanning tree (MST), MST+, and segment tree (ST). The results show that segment forest-based hyperspectral image classification improves both accuracy and efficiency compared with the other algorithms, proving the algorithm’s effectiveness.
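The first step of the pipeline, extracting the first principal component of the HSI cube, can be sketched with numpy alone. This is an illustration of standard PCA on the band covariance matrix, not the paper's code:

```python
import numpy as np

def first_pc(cube):
    """First principal component image of a (bands, H, W) hyperspectral cube."""
    bands, H, W = cube.shape
    X = cube.reshape(bands, -1).astype(float)
    X = X - X.mean(axis=1, keepdims=True)           # centre each band
    cov = X @ X.T / (X.shape[1] - 1)                # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)                # eigenvectors, ascending
    return (vecs[:, -1] @ X).reshape(H, W)          # project onto leading one
```

The resulting single-band image is what the segment forest is built on to capture non-local spatial structure.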
APA, Harvard, Vancouver, ISO, and other styles
39

Jiang, Yufeng, Li Zhang, Min Yan, Jianguo Qi, Tianmeng Fu, Shunxiang Fan, and Bowei Chen. "High-Resolution Mangrove Forests Classification with Machine Learning Using Worldview and UAV Hyperspectral Data." Remote Sensing 13, no. 8 (April 15, 2021): 1529. http://dx.doi.org/10.3390/rs13081529.

Full text
Abstract:
Mangrove forests, as important ecological and economic resources, have suffered a loss in the area due to natural and human activities. Monitoring the distribution of and obtaining accurate information on mangrove species is necessary for ameliorating the damage and protecting and restoring mangrove forests. In this study, we compared the performance of UAV Rikola hyperspectral images, WorldView-2 (WV-2) satellite-based multispectral images, and a fusion of data from both in the classification of mangrove species. We first used recursive feature elimination‒random forest (RFE-RF) to select the vegetation’s spectral and texture feature variables, and then implemented random forest (RF) and support vector machine (SVM) algorithms as classifiers. The results showed that the accuracy of the combined data was higher than that of UAV and WV-2 data; the vegetation index features of UAV hyperspectral data and texture index of WV-2 data played dominant roles; the overall accuracy of the RF algorithm was 95.89% with a Kappa coefficient of 0.95, which is more accurate and efficient than SVM. The use of combined data and RF methods for the classification of mangrove species could be useful in biomass estimation and breeding cultivation.
APA, Harvard, Vancouver, ISO, and other styles
40

Ouerghemmi, W., A. Le Bris, N. Chehata, and C. Mallet. "A TWO-STEP DECISION FUSION STRATEGY: APPLICATION TO HYPERSPECTRAL AND MULTISPECTRAL IMAGES FOR URBAN CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 167–74. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-167-2017.

Full text
Abstract:
Very high spatial resolution multispectral images and lower spatial resolution hyperspectral images are complementary sources for urban object classification. The first enables a fine delineation of objects, while the second can better discriminate classes and capture richer land cover semantics. This paper presents a decision fusion scheme that takes advantage of the classification maps from both sources to produce a better classification map. The proposed method aims to deal with both semantic and spatial uncertainties and consists of two steps. First, class membership maps are merged at the pixel level; several fusion rules are considered and compared in this study. Second, the classification is obtained from a global regularization of a graphical model, involving a fit-to-data term related to the class membership measures and an image-based contrast-sensitive regularization term. Results are presented on three datasets. The classification accuracy is improved by up to 5% compared with the best single-source classification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
41

Brezini, Salah Eddine, and Yannick Deville. "Hyperspectral and Multispectral Image Fusion with Automated Extraction of Image-Based Endmember Bundles and Sparsity-Based Unmixing to Deal with Spectral Variability." Sensors 23, no. 4 (February 20, 2023): 2341. http://dx.doi.org/10.3390/s23042341.

Full text
Abstract:
The aim of fusing hyperspectral and multispectral images is to overcome the limitation of remote sensing hyperspectral sensors by improving their spatial resolutions. This process, also known as hypersharpening, generates an unobserved high-spatial-resolution hyperspectral image. To this end, several hypersharpening methods have been developed, however most of them do not consider the spectral variability phenomenon; therefore, neglecting this phenomenon may cause errors, which leads to reducing the spatial and spectral quality of the sharpened products. Recently, new approaches have been proposed to tackle this problem, particularly those based on spectral unmixing and using parametric models. Nevertheless, the reported methods need a large number of parameters to address spectral variability, which inevitably yields a higher computation time compared to the standard hypersharpening methods. In this paper, a new hypersharpening method addressing spectral variability by considering the spectra bundles-based method, namely the Automated Extraction of Endmember Bundles (AEEB), and the sparsity-based method called Sparse Unmixing by Variable Splitting and Augmented Lagrangian (SUnSAL), is introduced. This new method called Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV) was tested on both synthetic and real data. Experimental results showed that HSB-SV provides sharpened products with higher spectral and spatial reconstruction fidelities with a very low computational complexity compared to other methods dealing with spectral variability, which are the main contributions of the designed method.
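The sparsity-based unmixing step can be sketched as nonnegative l1-regularized least squares solved by projected ISTA. This is a generic stand-in, not the SUnSAL solver itself; `lam` and the iteration count are illustrative:

```python
import numpy as np

def sparse_unmix(y, E, lam=1e-3, iters=3000):
    """Unmix one pixel spectrum y against an endmember-bundle dictionary E
    (bands x atoms) by projected ISTA on
        0.5 * ||y - E a||^2 + lam * ||a||_1,   subject to a >= 0."""
    lr = 1.0 / np.linalg.norm(E, 2) ** 2            # step from Lipschitz bound
    a = np.zeros(E.shape[1])
    for _ in range(iters):
        grad = E.T @ (E @ a - y)
        a = np.maximum(a - lr * (grad + lam), 0.0)  # shrink, then project
    return a
```

With bundles, several columns of `E` represent variants of one material, so abundances are summed per bundle after unmixing; that aggregation step is omitted here.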
APA, Harvard, Vancouver, ISO, and other styles
42

Degerickx, Jeroen, Martin Hermy, and Ben Somers. "Mapping Functional Urban Green Types Using High Resolution Remote Sensing Data." Sustainability 12, no. 5 (March 10, 2020): 2144. http://dx.doi.org/10.3390/su12052144.

Full text
Abstract:
Urban green spaces are known to provide ample benefits to human society and hence play a vital role in safeguarding the quality of life in our cities. In order to optimize the design and management of green spaces with regard to the provisioning of these ecosystem services, there is a clear need for uniform and spatially explicit datasets on the existing urban green infrastructure. Current mapping approaches, however, largely focus on large land use units (e.g., park, garden), or broad land cover classes (e.g., tree, grass), not providing sufficient thematic detail to model urban ecosystem service supply. We therefore proposed a functional urban green typology and explored the potential of both passive (2 m-hyperspectral and 0.5 m-multispectral optical imagery) and active (airborne LiDAR) remote sensing technology for mapping the proposed types using object-based image analysis and machine learning. Airborne LiDAR data was found to be the most valuable dataset overall, while fusion with hyperspectral data was essential for mapping the most detailed classes. High spectral similarities, along with adjacency and shadow effects still caused severe confusion, resulting in class-wise accuracies <50% for some detailed functional types. Further research should focus on the use of multi-temporal image analysis to fully unlock the potential of remote sensing data for detailed urban green mapping.
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Xueliang, and Honge Ren. "DBMF: A Novel Method for Tree Species Fusion Classification Based on Multi-Source Images." Forests 13, no. 1 (December 28, 2021): 33. http://dx.doi.org/10.3390/f13010033.

Full text
Abstract:
Multi-source remote sensing data provide innovative technical support for tree species recognition. Despite noteworthy advancements in image fusion methods, tree species recognition accuracy remains relatively poor because the features from multi-source data for each pixel in the same region cannot be deeply exploited. In the present paper, a novel deep learning approach for hyperspectral imagery is proposed to improve the accuracy of tree species classification. The proposed method, named the double branch multi-source fusion (DBMF) method, can more deeply model the relationship between multi-source data and provide more effective information. The DBMF method does this by fusing spectral features extracted from a hyperspectral image (HSI) captured by the HJ-1A satellite and spatial features extracted from a multispectral image (MSI) captured by the Sentinel-2 satellite. The network has two branches; in the spatial branch, sandglass blocks are embedded into a convolutional neural network (CNN) to extract the corresponding spatial neighborhood features from the MSI and avoid the risk of information loss. Simultaneously, to make the transfer of useful spectral features more effective in the spectral branch, we employed bidirectional long short-term memory (Bi-LSTM) with a triple attention mechanism to extract the spectral features of each pixel in the low-resolution HSI. The feature information is fused to classify the tree species after the addition of a fusion activation function, which allows the network to obtain more interactive information. Finally, the fusion strategy allows for the prediction of the full classification map of three study areas. Experimental results on a multi-source dataset show that DBMF has a significant advantage over other state-of-the-art frameworks.
APA, Harvard, Vancouver, ISO, and other styles
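The two-branch fusion that the DBMF abstract describes can be illustrated schematically. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of late fusion, assuming the spectral branch (a Bi-LSTM in the paper) and the spatial branch (a sandglass CNN in the paper) have already produced per-pixel feature vectors, with `tanh` standing in for the fusion activation:

```python
import numpy as np

def fuse_features(spectral_feat, spatial_feat):
    """Fuse per-pixel spectral features (e.g. from a Bi-LSTM branch on the HSI)
    with spatial neighborhood features (e.g. from a CNN branch on the MSI)
    by concatenation followed by a fusion activation (tanh here)."""
    fused = np.concatenate([spectral_feat, spatial_feat], axis=-1)
    return np.tanh(fused)  # squashing activation puts both branches on a common scale

def classify(fused, weights, bias):
    """Linear classifier head over the fused feature vector."""
    logits = fused @ weights + bias
    return int(np.argmax(logits))

# Toy example: 8-dim spectral + 8-dim spatial features for one pixel, 3 classes
rng = np.random.default_rng(0)
spectral = rng.normal(size=8)
spatial = rng.normal(size=8)
fused = fuse_features(spectral, spatial)
label = classify(fused, rng.normal(size=(16, 3)), np.zeros(3))
```

In a trained network the classifier weights would of course be learned jointly with both branches; here they are random placeholders.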
44

Han, Yanling, Pengxia Cui, Yun Zhang, Ruyan Zhou, Shuhu Yang, and Jing Wang. "Remote Sensing Sea Ice Image Classification Based on Multilevel Feature Fusion and Residual Network." Mathematical Problems in Engineering 2021 (September 20, 2021): 1–10. http://dx.doi.org/10.1155/2021/9928351.

Full text
Abstract:
Sea ice disasters are already among the most serious marine disasters in the Bohai Sea region of China and have seriously affected coastal economic development and residents’ lives. Sea ice classification is an important part of sea ice detection. Hyperspectral and multispectral imagery contain rich spectral and spatial information and provide important data support for sea ice classification. At present, most sea ice classification methods focus on shallow learning based on spectral features, while the good performance of deep learning methods in remote sensing image classification provides a new idea for sea ice classification. However, the achievable network depth is limited by the input size in sea ice image classification, so the deep features in the image cannot be fully mined, which hinders further improvement of sea ice classification accuracy. Therefore, this paper proposes an image classification method based on multilevel feature fusion using a residual network. First, the PCA method is used to extract the first principal component of the original image, and the residual network is used to deepen the network. The FPN, PAN, and SPP modules increase the mining of features between layers and merge features from different layers to further improve the accuracy of sea ice classification. To verify the effectiveness of the method, sea ice classification experiments were performed on a hyperspectral image of Bohai Bay from 2008 and a multispectral image of Bohai Bay from 2020. The experimental results show that, compared with deep learning algorithms with fewer network layers, the proposed method uses the residual-network idea to deepen the network and carries out multilevel feature fusion through the FPN, PAN, and SPP modules, which effectively solves the problem of insufficient deep feature extraction and obtains better classification performance.
APA, Harvard, Vancouver, ISO, and other styles
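The first step of the pipeline above, extracting the first principal component of the image before feeding it to the residual network, can be sketched as follows (an illustrative SVD-based PCA stand-in, not the authors' code):

```python
import numpy as np

def first_principal_component(cube):
    """Project each pixel's spectrum onto the leading PCA axis.
    cube: (H, W, B) hyperspectral image -> (H, W) first-PC score image."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)                      # center the band space
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pc1 = x @ vt[0]                          # score on the leading component
    return pc1.reshape(h, w)

# Toy cube: 4x4 pixels, 10 bands
rng = np.random.default_rng(1)
cube = rng.normal(size=(4, 4, 10))
pc_image = first_principal_component(cube)
```

The resulting single-band image retains the direction of maximum spectral variance, which is why it is a common input reduction before a deep spatial network.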
45

Mallikharjuna Rao, K., B. Srinivasa Rao, B. Sai Chandana, and J. Harikiran. "Dimensionality reduction and hierarchical clustering in framework for hyperspectral image segmentation." Bulletin of Electrical Engineering and Informatics 8, no. 3 (September 1, 2019): 1081–87. http://dx.doi.org/10.11591/eei.v8i3.1451.

Full text
Abstract:
Hyperspectral data contain hundreds of narrow bands representing the same scene on earth, with each pixel having a continuous reflectance spectrum. The first attempts to analyse hyperspectral images were based on techniques developed for multispectral images, randomly selecting a few spectral channels, usually fewer than seven. This random selection of bands degrades the accuracy of segmentation algorithms on hyperspectral data. In this paper, a new framework is designed for the analysis of hyperspectral images that takes information from all the data channels, using a dimensionality reduction method based on subset selection together with hierarchical clustering. A methodology based on subset construction is used for selecting k informative bands from a d-band dataset. In this selection, similarity metrics such as Average Pixel Intensity (API), Histogram Similarity (HS), Mutual Information (MI) and Correlation Similarity (CS) are used to create k distinct subsets, and from each subset a single band is selected. The selected informative bands are merged into a single image using a hierarchical fusion technique. After the fused image is obtained, a hierarchical clustering algorithm is used for segmentation. The qualitative and quantitative analysis shows that the CS similarity metric in the dimensionality reduction algorithm yields the highest-quality segmented image.
APA, Harvard, Vancouver, ISO, and other styles
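The subset-based band selection described above can be sketched roughly as follows. This is a simplified illustration, not the paper's algorithm: it forms k contiguous subsets by cutting the band axis where adjacent-band correlation is weakest, then picks the band closest to each subset's mean:

```python
import numpy as np

def select_bands(cube, k):
    """Partition the B bands into k contiguous subsets by correlation
    similarity and pick the band closest to each subset's mean spectrum."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    corr = np.corrcoef(flat.T)                    # B x B band correlation matrix
    # greedy contiguous grouping: cut where adjacent-band correlation is weakest
    adj = np.array([corr[i, i + 1] for i in range(b - 1)])
    cuts = np.sort(np.argsort(adj)[: k - 1]) + 1  # k-1 weakest links -> k groups
    groups = np.split(np.arange(b), cuts)
    selected = []
    for g in groups:
        mean_band = flat[:, g].mean(axis=1)
        dists = [np.linalg.norm(flat[:, i] - mean_band) for i in g]
        selected.append(int(g[int(np.argmin(dists))]))
    return selected

rng = np.random.default_rng(2)
cube = rng.normal(size=(6, 6, 12))
bands = select_bands(cube, k=4)
```

The paper evaluates several similarity metrics (API, HS, MI, CS) for forming the subsets; the correlation-based grouping here only stands in for that family of criteria.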
46

Marques Junior, Ademir, Eniuce Menezes de Souza, Marianne Müller, Diego Brum, Daniel Capella Zanotta, Rafael Kenji Horota, Lucas Silveira Kupssinskü, Maurício Roberto Veronez, Luiz Gonzaga, and Caroline Lessio Cazarin. "Improving Spatial Resolution of Multispectral Rock Outcrop Images Using RGB Data and Artificial Neural Networks." Sensors 20, no. 12 (June 23, 2020): 3559. http://dx.doi.org/10.3390/s20123559.

Full text
Abstract:
Spectral information provided by multispectral and hyperspectral sensors has a great impact on remote sensing studies, easing the identification of carbonate outcrops that contribute to a better understanding of petroleum reservoirs. Sensors aboard satellites like the Landsat series, whose data are freely available, usually lack the spatial resolution that suborbital sensors have. Many techniques have been developed to improve spatial resolution through data fusion, but most have serious limitations regarding application and scale. Recently, super-resolution (SR) convolutional neural networks have been tested with encouraging results; however, they require large datasets and more time and computational power for training. To overcome these limitations, this work aims to increase the spatial resolution of multispectral bands from the Landsat satellite database using a modified artificial neural network that takes pixel kernels of a single high-spatial-resolution RGB image from Google Earth as input. The methodology was validated with a common dataset of indoor images as well as a specific Landsat 8 area. Different downscaled inputs were used for training, with validation against the ground truth of the original-size images, obtaining results comparable to recent works. With the method validated, we generated high-spatial-resolution spectral bands based on Google Earth RGB images over a carbonate outcrop area, which were then classified according to the soil spectral responses, taking advantage of the higher-spatial-resolution dataset.
APA, Harvard, Vancouver, ISO, and other styles
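The core idea, predicting a spectral band value from the RGB pixel kernel around each pixel, can be illustrated with a least-squares stand-in for the paper's neural network (an assumption-laden sketch on synthetic data, not the authors' model):

```python
import numpy as np

def fit_kernel_regressor(rgb, band, ksize=3):
    """Least-squares stand-in for the paper's ANN: learn to predict a
    spectral band value from the ksize x ksize RGB kernel around each pixel."""
    pad = ksize // 2
    h, w, _ = rgb.shape
    feats, targets = [], []
    for i in range(pad, h - pad):
        for j in range(pad, w - pad):
            patch = rgb[i - pad:i + pad + 1, j - pad:j + pad + 1]
            feats.append(patch.ravel())
            targets.append(band[i, j])
    X = np.column_stack([np.array(feats), np.ones(len(feats))])  # add bias term
    coef, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)
    return coef

def predict_pixel(rgb_patch, coef):
    """Apply the learned kernel weights to one RGB patch."""
    return float(np.append(rgb_patch.ravel(), 1.0) @ coef)

rng = np.random.default_rng(3)
rgb = rng.random(size=(8, 8, 3))
band = rgb.mean(axis=2)          # synthetic target band correlated with RGB
coef = fit_kernel_regressor(rgb, band)
pred = predict_pixel(rgb[0:3, 0:3], coef)
```

A neural network replaces the linear map with a learned nonlinear one, but the input/output contract (RGB kernel in, band value out) is the same.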
47

Zheng, Qiong, Wenjiang Huang, Qing Xia, Yingying Dong, Huichun Ye, Hao Jiang, Shuisen Chen, and Shanyu Huang. "Remote Sensing Monitoring of Rice Diseases and Pests from Different Data Sources: A Review." Agronomy 13, no. 7 (July 13, 2023): 1851. http://dx.doi.org/10.3390/agronomy13071851.

Full text
Abstract:
Rice is an important food crop in China, and diseases and pests are the main factors threatening its safe, ecological, and efficient production. The development of remote sensing technology provides an important means for non-destructive and rapid monitoring of the diseases and pests that threaten rice crops. This paper aims to provide insights into current and future trends in remote sensing for rice crop monitoring. First, we expound the mechanism of remote sensing monitoring of rice diseases and pests and introduce the applications of commonly used data sources (hyperspectral data, multispectral data, thermal infrared data, fluorescence, and multi-source data fusion) in remote sensing monitoring of rice diseases and pests. Secondly, we summarize current methods for monitoring rice diseases and pests, including statistical discriminant methods, machine learning, and deep learning algorithms. Finally, we provide a general framework to facilitate the monitoring of rice diseases or pests, which offers ideas and technical guidance for remote sensing monitoring of unknown diseases and pests, and we point out the challenges and future development directions of rice disease and pest remote sensing monitoring. This work provides new ideas and references for the subsequent monitoring of rice diseases and pests using remote sensing.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhao, Jing, Fangjiang Pan, Xiao Xiao, Lianbin Hu, Xiaoli Wang, Yu Yan, Shuailing Zhang, Bingquan Tian, Hailin Yu, and Yubin Lan. "Summer Maize Growth Estimation Based on Near-Surface Multi-Source Data." Agronomy 13, no. 2 (February 12, 2023): 532. http://dx.doi.org/10.3390/agronomy13020532.

Full text
Abstract:
Rapid and accurate estimation of crop chlorophyll content and the leaf area index (LAI) are both crucial for guiding field management and improving crop yields. This paper proposes an accurate monitoring method for LAI and soil plant analytical development (SPAD) values (which are closely related to leaf chlorophyll content and are used here in place of relative chlorophyll content) based on the fusion of ground–air multi-source data. Firstly, in 2020 and 2021, we collected unmanned aerial vehicle (UAV) multispectral data, ground hyperspectral data, UAV visible-light data, and environmental accumulated temperature data for multiple growth stages of summer maize. Secondly, the effective plant height (canopy height model (CHM)), effective accumulated temperature (growing degree days (GDD)), canopy vegetation indices (mainly spectral vegetation indices) and canopy hyperspectral features of maize were extracted, and sensitive features were screened by correlation analysis. Then, based on single-source and multi-source data, multiple linear regression (MLR), partial least-squares regression (PLSR) and random forest (RF) regression were used to construct LAI and SPAD inversion models. Finally, the LAI and SPAD prescription maps were generated and their trends analyzed. The results were as follows: (1) The correlations of the hyperspectral red-edge position and the first-order differential value in the red edge with LAI and SPAD were all greater than 0.5. The correlation of vegetation indices including a red and a near-infrared band with LAI and SPAD was above 0.75. The correlation of crop height and effective accumulated temperature with LAI and SPAD was above 0.7. (2) The inversion models based on multi-source data were more effective than those based on single-source data. The RF model with multi-source data fusion achieved the highest accuracy of all models: in the testing set, the R2 of the LAI and SPAD models was 0.9315 and 0.7767, and the RMSE was 0.4895 and 2.8387, respectively. (3) The absolute error between the extraction result of each model prescription map and the measured value was small: less than 0.4895 for the LAI prescription map generated by the RF model and less than 2.8387 for the SPAD prescription map. The LAI and SPAD of summer maize first increased and then decreased as the growth period advanced, in line with the actual growth conditions. The research results indicate that the proposed method can effectively monitor maize growth parameters and provide a scientific basis for summer maize field management.
APA, Harvard, Vancouver, ISO, and other styles
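The inversion step, regressing LAI on screened features such as a vegetation index and canopy height, can be sketched with a toy MLR on synthetic data (illustrative only; the feature set and coefficients are invented, not taken from the paper):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red/NIR reflectance."""
    return (nir - red) / (nir + red + 1e-12)

def fit_mlr(features, lai):
    """Multiple linear regression: LAI ~ [features, 1] via least squares."""
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, lai, rcond=None)
    return coef

rng = np.random.default_rng(4)
red = rng.uniform(0.05, 0.2, size=50)
nir = rng.uniform(0.3, 0.6, size=50)
height = rng.uniform(0.5, 2.5, size=50)       # canopy height (CHM surrogate)
vi = ndvi(red, nir)
lai = 3.0 * vi + 0.8 * height + 0.2           # synthetic ground truth
coef = fit_mlr(np.column_stack([vi, height]), lai)
pred = np.column_stack([vi, height, np.ones(50)]) @ coef
```

The paper's PLSR and RF models follow the same fit/predict pattern but handle collinear and nonlinear feature relationships better than plain MLR.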
49

Gao, Yunhao, Xiukai Song, Wei Li, Jianbu Wang, Jianlong He, Xiangyang Jiang, and Yinyin Feng. "Fusion Classification of HSI and MSI Using a Spatial-Spectral Vision Transformer for Wetland Biodiversity Estimation." Remote Sensing 14, no. 4 (February 11, 2022): 850. http://dx.doi.org/10.3390/rs14040850.

Full text
Abstract:
The rapid development of remote sensing technology provides a wealth of data for earth observation. Land-cover mapping indirectly enables biodiversity estimation at a coarse scale; therefore, accurate land-cover mapping is a precondition of biodiversity estimation. However, the wetland environment is complex, and the vegetation is mixed and patchy, so land-cover recognition based on remote sensing is full of challenges. This paper constructs a systematic framework for multisource remote sensing image processing. Firstly, the hyperspectral image (HSI) and multispectral image (MSI) are fused by a CNN-based method to obtain a fused image with high spatial-spectral resolution. Secondly, considering the sequentiality of spatial distribution and spectral response, a spatial-spectral vision transformer (SSViT) is designed to extract sequential relationships from the fused images. After that, an external attention module is utilized for feature integration, and pixel-wise prediction is performed for land-cover mapping. Finally, the land-cover map and benthos data at the sites are analyzed jointly to reveal the distribution pattern of benthos. Experiments on ZiYuan1-02D data of the Yellow River estuary wetland demonstrate the effectiveness of the proposed framework compared with several related methods.
APA, Harvard, Vancouver, ISO, and other styles
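The tokenization implied by the SSViT design, turning a pixel's spatial-spectral neighborhood into a sequence for a transformer encoder, might look roughly like this (a hypothetical sketch; the patch size and band-grouping scheme are assumptions, not the paper's):

```python
import numpy as np

def spectral_tokens(cube, i, j, patch=3, group=4):
    """Build a token sequence for pixel (i, j): the spectral axis of its
    patch x patch neighborhood is split into groups of `group` bands,
    each flattened into one token of the sequence fed to the encoder."""
    pad = patch // 2
    nb = cube[i - pad:i + pad + 1, j - pad:j + pad + 1]    # (patch, patch, B)
    b = nb.shape[2]
    tokens = [nb[:, :, s:s + group].ravel() for s in range(0, b, group)]
    return np.stack(tokens)                                # (B/group, patch*patch*group)

rng = np.random.default_rng(5)
fused = rng.normal(size=(8, 8, 12))   # stand-in for the fused HSI/MSI cube
seq = spectral_tokens(fused, 4, 4)
```

Self-attention over such a sequence is what lets the model relate spectral groups to one another across the spatial neighborhood.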
50

Qi, Guanghui, Chunyan Chang, Wei Yang, Peng Gao, and Gengxing Zhao. "Soil Salinity Inversion in Coastal Corn Planting Areas by the Satellite-UAV-Ground Integration Approach." Remote Sensing 13, no. 16 (August 5, 2021): 3100. http://dx.doi.org/10.3390/rs13163100.

Full text
Abstract:
Soil salinization is a significant factor affecting corn growth in coastal areas. How to use multi-source remote sensing data to achieve rapid, efficient and accurate soil salinity monitoring over a large area is worth further study. In this research, using the Kenli District of the Yellow River Delta as the study area, soil salinity in a corn planting area was inverted based on the integration of ground imaging hyperspectral, unmanned aerial vehicle (UAV) multispectral and Sentinel-2A satellite multispectral images. The UAV and ground images were fused, and a partial least-squares inversion model was constructed from the fused UAV image. The inversion model was then scaled up to the satellite by the TsHARP method, and finally, the accuracy of the satellite-UAV-ground inversion model and its results was verified. The results show that band fusion of the UAV and ground images effectively enriches the spectral information of the UAV image, improving the accuracy of the inversion model constructed from the fused UAV images. The soil salinity inversion results based on the satellite-UAV-ground integration were highly consistent with the measured soil salinity (R2 = 0.716 and RMSE = 0.727), and the inversion model had excellent universal applicability. This research integrated the advantages of multi-source data to establish a unified satellite-UAV-ground model, which improved the ability of large-scale remote sensing data to finely indicate soil salinity.
APA, Harvard, Vancouver, ISO, and other styles
