Journal articles on the topic 'Information extraction and fusion'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Information extraction and fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Yang, Zhongguo, Mingzhu Zhang, Zhongmei Zhang, Han Li, Chen Liu, and Sikandar Ali. "Lecture Information Service Based on Multiple Features Fusion." International Journal of Software Engineering and Knowledge Engineering 31, no. 04 (April 2021): 545–62. http://dx.doi.org/10.1142/s0218194021400076.

Abstract:
Information service is always a hot topic especially when the Web is accessible anywhere. In university, lecture information is very important for students and teachers who want to take part in academic meetings. Therefore, lecture news extraction is an important and imperative task. Many open information extraction methods have been proposed, but due to the high heterogeneity of websites, this task is still a challenge. In this paper, we propose a method based on fusing multiple features to locate lecture news on the university website. These features include the linked relationship between parent webpage and child webpages, the visual similarity, and the semantics of webpages. Additionally, this paper provides an information service based on a main content extraction algorithm for extracting the lecture information. Stable and invariant features enable the proposed method to adapt to various kinds of campus websites. The experiments conducted on 50 websites show the effectiveness and efficiency of the provided service.
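The weighted combination of the three page-level features named in the abstract (link relationship, visual similarity, semantics) can be sketched as a simple score-level fusion. The weights, scores, and URLs below are hypothetical illustrations, not values from the paper:

```python
# Hypothetical sketch of score-level feature fusion for locating a
# lecture-news page: each candidate child page carries three feature
# scores, combined by a weighted sum. Weights and scores are invented.

def fuse_scores(candidates, weights=(0.3, 0.3, 0.4)):
    """candidates: list of (url, link_score, visual_score, semantic_score)."""
    ranked = []
    for url, link_s, vis_s, sem_s in candidates:
        fused = weights[0] * link_s + weights[1] * vis_s + weights[2] * sem_s
        ranked.append((fused, url))
    ranked.sort(reverse=True)  # highest fused score first
    return ranked

pages = [
    ("u/news/lecture1.html", 0.9, 0.8, 0.95),
    ("u/admin/notice.html",  0.7, 0.4, 0.20),
]
best = fuse_scores(pages)[0][1]  # the page most likely to be lecture news
```

In practice the individual scores would come from separate extractors (link analysis, rendering-based visual comparison, text classification); only the final combination step is shown here.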
2

Zhang, Xin, Li Yang, and Yan Zhang. "Multi-Source Information Fusion Based on Data Driven." Applied Mechanics and Materials 40-41 (November 2010): 121–26. http://dx.doi.org/10.4028/www.scientific.net/amm.40-41.121.

Abstract:
Taking a data-driven method as its theoretical basis, this paper studies multi-source information fusion technology. Because it uses the fusion system's online and offline data, the approach does not rely on the system's mathematical model and avoids the problem of mechanistic system modeling. It uses principal component analysis, rough set theory, and the Support Vector Machine (SVM), fusing the three methods so that they complement one another: through information processing and feature extraction on the system's input data, the most important information is captured in a lower-dimensional space, realizing knowledge reduction. Information fusion is realized at three levels: the data level, the feature level, and the decision level. An example shows that the method reduces computational complexity, reduces information loss in the fusion process, and enhances fusion accuracy.
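Of the three fusion levels mentioned in the abstract, decision-level fusion is the simplest to illustrate: it is often realized as majority voting over classifier outputs. A minimal sketch with stand-in labels (the paper's actual PCA/rough-set/SVM pipeline is not reproduced here):

```python
# Illustrative sketch of decision-level fusion: majority voting over the
# labels produced by three classifiers (e.g., PCA-based, rough-set-based,
# and SVM-based). The classifiers are stand-ins; only the vote is shown.

from collections import Counter

def decision_fusion(votes):
    """votes: list of labels, one per classifier; returns the majority label."""
    label, _count = Counter(votes).most_common(1)[0]
    return label

fused = decision_fusion(["fault", "fault", "normal"])
```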
3

Liu, Xia, Zhijing Xu, and Kan Huang. "Multimodal Emotion Recognition Based on Cascaded Multichannel and Hierarchical Fusion." Computational Intelligence and Neuroscience 2023 (January 5, 2023): 1–18. http://dx.doi.org/10.1155/2023/9645611.

Abstract:
Humans express their emotions in a variety of ways, which inspires research on multimodal fusion-based emotion recognition that utilizes different modalities to achieve information complementation. However, extracting deep emotional features from different modalities and fusing them remains a challenging task. It is essential to exploit the advantages of different extraction and fusion approaches to capture the emotional information contained within and across modalities. In this paper, we present a novel multimodal emotion recognition framework called multimodal emotion recognition based on cascaded multichannel and hierarchical fusion (CMC-HF), where visual, speech, and text signals are simultaneously utilized as multimodal inputs. First, three cascaded channels based on deep learning technology perform feature extraction for the three modalities separately, enhancing deeper information extraction within each modality and improving recognition performance. Second, an improved hierarchical fusion module is introduced to promote intermodality interactions among the three modalities and further improve recognition and classification accuracy. Finally, to validate the effectiveness of the designed CMC-HF model, experiments are conducted on two benchmark datasets, IEMOCAP and CMU-MOSI. The results show an increase of almost 2%∼3.2% in four-class accuracy on the IEMOCAP dataset and an improvement of 0.9%∼2.5% in average class accuracy on the CMU-MOSI dataset compared with existing state-of-the-art methods. The ablation results indicate that the cascaded feature extraction method and the hierarchical fusion method make significant contributions to multimodal emotion recognition, suggesting that the three modalities contain deeper intermodality and intramodality information interactions. Hence, the proposed model has better overall performance and achieves higher recognition efficiency and better robustness.
4

Li, Deren, and Juliang Shao. "House Extraction with Multiresolution Analysis and Information Fusion." Geo-spatial Information Science 1, no. 1 (October 1998): 6–12. http://dx.doi.org/10.1080/10095020.1998.10553277.

5

Liu, Tingting, Jian Yin, and Qingfeng Qin. "MFHE: Multi-View Fusion-Based Heterogeneous Information Network Embedding." Applied Sciences 12, no. 16 (August 17, 2022): 8218. http://dx.doi.org/10.3390/app12168218.

Abstract:
Depending on the type of information network, information network embedding is classified into homogeneous information network embedding and heterogeneous information network (HIN) embedding. Compared with a homogeneous network, the composition of a HIN is more complex and contains richer semantics. At present, research on homogeneous information network embedding is relatively mature. However, directly applying a homogeneous information network model to a HIN causes incomplete information extraction, so it is necessary to build a specialized embedding model for HINs. Learning network embeddings based on meta-paths is an effective approach to extracting semantic information. Nevertheless, extracting a HIN embedding from only a single view causes information loss. To solve these problems, we propose a multi-view fusion-based HIN embedding model, called MFHE. MFHE includes four parts: node feature space transformation, subview information extraction, multi-view information fusion, and training. MFHE divides the HIN into different subviews based on meta-paths, accurately models the local information in the subviews using a multi-head attention mechanism, and then fuses subview information through a spatial matrix. Because we consider the relationships between subviews, MFHE is applicable to complex HIN embedding. Experiments conducted on the ACM and DBLP datasets demonstrate that MFHE is effective and improves on baseline HIN embedding methods.
6

Han, Yan Bin, Geng Shi Zhang, and Jin Ping Li. "A Feature Extraction Strategy Based on Multiple Color Information." Advanced Materials Research 433-440 (January 2012): 6175–81. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.6175.

Abstract:
In this paper, a feature extraction strategy based on the fusion of multiple color information was proposed. First, the method analyzes the transformation formulas between color spaces, mainly the transforms from RGB to other color spaces. Second, by analyzing how well each color space describes actual color information, the advantages and disadvantages of each color space are shown. Third, these conclusions show that an algorithm extracting the target feature from a single color space alone is defective, and a strategy based on fusing multiple color information is therefore proposed. Finally, the detailed fusion strategy is given: the probability distribution information of multiple colors is fused into a final probability distribution that serves as the target feature. The feature extraction strategy is verified with the CamShift algorithm. The results show that fusing multiple color information can improve the tracking performance for a moving target.
7

Wang, Wenya, and Sinno Jialin Pan. "Integrating Deep Learning with Logic Fusion for Information Extraction." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9225–32. http://dx.doi.org/10.1609/aaai.v34i05.6460.

Abstract:
Information extraction (IE) aims to produce structured information from an input text, e.g., Named Entity Recognition and Relation Extraction. Various attempts have been proposed for IE via feature engineering or deep learning. However, most of them fail to associate the complex relationships inherent in the task itself, which has proven to be especially crucial. For example, the relation between 2 entities is highly dependent on their entity types. These dependencies can be regarded as complex constraints that can be efficiently expressed as logical rules. To combine such logic reasoning capabilities with learning capabilities of deep neural networks, we propose to integrate logical knowledge in the form of first-order logic into a deep learning system, which can be trained jointly in an end-to-end manner. The integrated framework is able to enhance neural outputs with knowledge regularization via logic rules, and at the same time update the weights of logic rules to comply with the characteristics of the training data. We demonstrate the effectiveness and generalization of the proposed model on multiple IE tasks.
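One common way to turn a first-order rule into a differentiable training signal is a soft hinge penalty added to the task loss. The sketch below encodes a toy rule ("a works_for relation implies the head entity is a PERSON") with invented probabilities; the paper's joint end-to-end logic integration is more general than this:

```python
# Toy sketch of logic-rule regularization: penalize model confidence in a
# relation that exceeds confidence in the entity type the rule implies.
# The rule, probabilities, and weight are illustrative assumptions.

def rule_penalty(p_relation, p_head_is_person):
    # Hinge form of "works_for(h, t) -> PERSON(h)": no penalty when the
    # implied entity-type probability already dominates.
    return max(0.0, p_relation - p_head_is_person)

task_loss = 0.42                                 # stand-in for the neural loss
total_loss = task_loss + 0.5 * rule_penalty(0.9, 0.6)
```

A real implementation would compute the penalty from model logits inside the training loop so its gradient flows back into both predictions.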
8

Zhao, Jiang, Jiao Wang, and Meng Shang. "Fault Diagnosis Method of Time Domain and Time-Frequency Domain Based on Information Fusion." Applied Mechanics and Materials 300-301 (February 2013): 635–39. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.635.

Abstract:
To address the low accuracy of traditional pipe-leakage diagnosis methods, this paper proposes a diagnosis method based on neural network information fusion. It presents an algorithm for extracting time-domain feature indexes from the stress wave and an algorithm for extracting the energy of each frequency band via wavelet packet decomposition. Comparing the results obtained from the pressure-wave time-domain feature indexes, the time-frequency energy values, and the fusion of both kinds of information shows that the neural network information fusion method is feasible and effective for pipe-leakage diagnosis.
9

Yan, Zhiqiang, Hongyuan Wang, Qianhao Ning, and Yinxi Lu. "Robust Image Matching Based on Image Feature and Depth Information Fusion." Machines 10, no. 6 (June 8, 2022): 456. http://dx.doi.org/10.3390/machines10060456.

Abstract:
In this paper, we propose a robust image feature extraction and fusion method to effectively fuse image feature and depth information and improve the registration accuracy of RGB-D images. The proposed method directly splices the image feature point descriptors with the corresponding point cloud feature descriptors to obtain the fusion descriptor of the feature points. The fusion feature descriptor is constructed based on the SIFT, SURF, and ORB feature descriptors and the PFH and FPFH point cloud feature descriptors. Furthermore, the registration performance based on fusion features is tested through the RGB-D datasets of YCB and KITTI. ORBPFH reduces the false-matching rate by 4.66~16.66%, and ORBFPFH reduces the false-matching rate by 9~20%. The experimental results show that the RGB-D robust feature extraction and fusion method proposed in this paper is suitable for the fusion of ORB with PFH and FPFH, which can improve feature representation and registration, representing a novel approach for RGB-D image matching.
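The core fusion-descriptor idea above (splicing an image feature descriptor with a point-cloud feature descriptor) can be sketched as normalize-then-concatenate. The vectors below are toy values, not real descriptors (a real ORB descriptor has 256 binary dimensions and FPFH has 33 histogram bins):

```python
# Minimal sketch of the fusion descriptor: L2-normalize each part so
# neither modality dominates, then concatenate per feature point.
# Toy 2-D and 3-D vectors stand in for ORB/SIFT and PFH/FPFH outputs.

def l2_normalize(v):
    norm = sum(x * x for x in v) ** 0.5 or 1.0  # avoid division by zero
    return [x / norm for x in v]

def fuse_descriptor(image_desc, cloud_desc):
    return l2_normalize(image_desc) + l2_normalize(cloud_desc)

fused = fuse_descriptor([3.0, 4.0], [1.0, 0.0, 0.0])
```

Matching would then run nearest-neighbor search on the concatenated vectors instead of on either descriptor alone.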
10

Zhu, Danyao, Luhe Wan, and Wei Gao. "Fusion Method Evaluation and Classification Suitability Study of Wetland Satellite Imagery." Earth Sciences Research Journal 23, no. 4 (October 1, 2019): 339–46. http://dx.doi.org/10.15446/esrj.v23n4.84350.

Abstract:
Based on HJ-1A HSI data and Landsat-8 OLI data, RS image fusion experiments were carried out using three fusion methods: the principal component (PC) transform, the Gram–Schmidt (GS) transform, and the nearest neighbor diffusion (NND) algorithm. Four evaluation indexes, namely mean, standard deviation, information entropy, and average gradient, were selected to evaluate the fusion results in terms of image brightness, clarity, and information content. Wetland vegetation was classified by spectral angle mapping (SAM) to find a suitable fusion method for wetland vegetation information extraction. The results show that the PC fusion image contains the largest amount of information, the GS fusion image has certain advantages in maintaining brightness and clarity, and the NND fusion method can retain the spectral characteristics of the image to the maximum extent. Among the three fusion methods, the PC transform is the most suitable for wetland information extraction; it can retain more spectral information while improving spatial resolution, with a classification accuracy of 89.24% and a Kappa coefficient of 0.86.
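The four evaluation indexes named in the abstract are standard image statistics and can be sketched directly. The 2×2 toy grayscale image below is illustrative only:

```python
# Sketch of the four fusion-quality indexes: mean, standard deviation,
# Shannon entropy of the gray-level histogram, and average gradient
# (mean absolute difference between horizontal/vertical neighbors).

import math
from collections import Counter

def image_indexes(img):
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    hist = Counter(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    h, w = len(img), len(img[0])
    grads = [abs(img[i][j + 1] - img[i][j]) for i in range(h) for j in range(w - 1)]
    grads += [abs(img[i + 1][j] - img[i][j]) for i in range(h - 1) for j in range(w)]
    avg_grad = sum(grads) / len(grads)
    return mean, std, entropy, avg_grad

mean, std, entropy, avg_grad = image_indexes([[0, 0], [255, 255]])
```

Higher entropy indicates more information content, and a higher average gradient indicates greater clarity, which is how the abstract's brightness/clarity/information comparison is quantified.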
11

Zhou, Hong, and Jin. "Information Fusion for Multi-Source Material Data: Progress and Challenges." Applied Sciences 9, no. 17 (August 22, 2019): 3473. http://dx.doi.org/10.3390/app9173473.

Abstract:
The development of material science in the manufacturing industry has resulted in a huge amount of material data, which are often from different sources and vary in data format and semantics. The integration and fusion of material data can offer a unified framework for material data representation, processing, storage and mining, which can further help to accomplish many tasks, including material data disambiguation, material feature extraction, material-manufacturing parameter setting, and material knowledge extraction. On the other hand, the rapid advance of information technologies such as artificial intelligence and big data brings new opportunities for material data fusion. To the best of our knowledge, the community currently lacks a comprehensive review of the state-of-the-art techniques for material data fusion. This review first analyzes the special properties of material data and discusses the motivations of multi-source material data fusion. Then, we focus on the recent achievements of multi-source material data fusion. This review has a few unique features compared to previous studies. First, we present a systematic categorization and comparison framework for material data fusion according to the processing flow of material data. Second, we discuss the applications and impact of recent hot technologies in material data fusion, including artificial intelligence algorithms and big data technologies. Finally, we present some open problems and future research directions for multi-source material data fusion.
12

Saranya, S., and N. Sabiyath Fatima. "IoT Information Status Using Data Fusion and Feature Extraction Method." Computers, Materials & Continua 70, no. 1 (2022): 1857–74. http://dx.doi.org/10.32604/cmc.2022.019621.

13

Duan, Puhong, Xudong Kang, Pedram Ghamisi, and Yu Liu. "Multilevel Structure Extraction-Based Multi-Sensor Data Fusion." Remote Sensing 12, no. 24 (December 9, 2020): 4034. http://dx.doi.org/10.3390/rs12244034.

Abstract:
Multi-sensor data on the same area provide complementary information, which is helpful for improving the discrimination capability of classifiers. In this work, a novel multilevel structure extraction method is proposed to fuse multi-sensor data. This method is comprised of three steps: First, multilevel structure extraction is constructed by cascading morphological profiles and structure features, and is utilized to extract spatial information from multiple original images. Then, a low-rank model is adopted to integrate the extracted spatial information. Finally, a spectral classifier is employed to calculate class probabilities, and a maximum posteriori estimation model is used to decide the final labels. Experiments tested on three datasets including rural and urban scenes validate that the proposed approach can produce promising performance with regard to both subjective and objective qualities.
14

Gao, Quan-Xue, De-Yan Xie, Hui Xu, Yuan-Zheng Li, and Xi-Quan Gao. "Supervised Feature Extraction Based on Information Fusion of Local Structure and Diversity Information." Acta Automatica Sinica 36, no. 8 (September 30, 2010): 1107–14. http://dx.doi.org/10.3724/sp.j.1004.2010.01107.

15

Zhang, Zhili, Meng Lu, Shunping Ji, Huafen Yu, and Chenhui Nie. "Rich CNN Features for Water-Body Segmentation from Very High Resolution Aerial and Satellite Imagery." Remote Sensing 13, no. 10 (May 13, 2021): 1912. http://dx.doi.org/10.3390/rs13101912.

Abstract:
Accurately extracting water-bodies from very high resolution (VHR) remote sensing imagery is a great challenge. The boundaries of a water body are commonly hard to identify due to the complex spectral mixtures caused by aquatic vegetation, distinct lake/river colors, silts near the bank, shadows from surrounding tall plants, and so on. The diversity and semantic information of features need to be increased for a better extraction of water-bodies from VHR remote sensing images. In this paper, we address these problems by designing a novel multi-feature extraction and combination module. This module consists of three feature extraction sub-modules based on spatial and channel correlations in feature maps at each scale, which extract the complete target information from the local space, the larger space, and the between-channel relationship to achieve a rich feature representation. Simultaneously, to better predict the fine contours of water-bodies, we adopt a multi-scale prediction fusion module. Besides, to solve the semantic inconsistency of feature fusion between the encoding stage and the decoding stage, we apply an encoder-decoder semantic feature fusion module to promote fusion effects. We carry out extensive experiments on VHR aerial and satellite imagery. The results show that our method achieves state-of-the-art segmentation performance, surpassing classic and recent methods. Moreover, our proposed method is robust in challenging water-body extraction scenarios.
16

Zhang, Yu Si, Liu Liu, and Ti Yong Zhang. "Research on Intelligent Information Search Based on Web." Applied Mechanics and Materials 539 (July 2014): 434–37. http://dx.doi.org/10.4028/www.scientific.net/amm.539.434.

Abstract:
This paper analyzes the deficiencies of existing Web information extraction methods and their causes, and puts forward a page text information extraction method based on multi-feature fusion. Compared with previous methods, which select only a small number of features, the proposed method determines the text information using a variety of features and can better adapt to pages of various styles. Comparative experiments show that the method has higher accuracy and meets practical application needs in Web content extraction.
17

Shi, Li Juan, Ping Feng, Jian Zhao, Li Rong Wang, and Na Che. "Study on Dual Mode Fusion Method of Video and Audio." Applied Mechanics and Materials 734 (February 2015): 412–15. http://dx.doi.org/10.4028/www.scientific.net/amm.734.412.

Abstract:
Hearing-impaired students in class can rely only on sign language and thus receive less classroom information. To solve this problem, this paper studies a video and audio dual-mode fusion algorithm that combines lip reading, speech recognition, and information fusion technology. First, speech features are extracted and the speech signal is processed so that the speech is output synchronously as text. At the same time, video features are extracted and the voice and video signals are fused, turning voice information into visual information that hearing-impaired students can receive. Receiving text messages as visual information improves the speech recognition rate and thus meets the needs of classroom teaching for hearing-impaired students.
18

Yang, Shudi, Jiaxiong Wu, and Zhipeng Feng. "Dual-Fusion Active Contour Model with Semantic Information for Saliency Target Extraction of Underwater Images." Applied Sciences 12, no. 5 (February 28, 2022): 2515. http://dx.doi.org/10.3390/app12052515.

Abstract:
Underwater vision research is the foundation of marine-related disciplines. The target contour extraction is significant for target tracking and visual information mining. Aiming to resolve the problem that conventional active contour models cannot effectively extract the contours of salient targets in underwater images, we propose a dual-fusion active contour model with semantic information. First, the saliency images are introduced as semantic information and salient target contours are extracted by fusing Chan–Vese and local binary fitting models. Then, the original underwater images are used to supplement the missing contour information by using the local image fitting. Compared with state-of-the-art contour extraction methods, our dual-fusion active contour model can effectively filter out background information and accurately extract salient target contours. Moreover, the proposed model achieves the best results in the quantitative comparison of MAE (mean absolute error), ER (error rate), and DR (detection rate) indicators and provides reliable prior knowledge for target tracking and visual information mining.
19

Shao, Zhenfeng, Wenfu Wu, and Songjing Guo. "IHS-GTF: A Fusion Method for Optical and Synthetic Aperture Radar Data." Remote Sensing 12, no. 17 (August 28, 2020): 2796. http://dx.doi.org/10.3390/rs12172796.

Abstract:
Optical and Synthetic Aperture Radar (SAR) fusion is addressed in this paper. Intensity–Hue–Saturation (IHS) is an easily implemented fusion method and can separate Red–Green–Blue (RGB) images into three independent components; however, using this method directly for optical and SAR images fusion will cause spectral distortion. The Gradient Transfer Fusion (GTF) algorithm is proposed firstly for infrared and gray visible images fusion, which formulates image fusion as an optimization problem and keeps the radiation information and spatial details simultaneously. However, the algorithm assumes that the spatial details only come from one of the source images, which is inconsistent with the actual situation of optical and SAR images fusion. In this paper, a fusion algorithm named IHS-GTF for optical and SAR images is proposed, which combines the advantages of IHS and GTF and considers the spatial details from the both images based on pixel saliency. The proposed method was assessed by visual analysis and ten indices and was further tested by extracting impervious surface (IS) from the fused image with random forest classifier. The results show the good preservation of spatial details and spectral information by our proposed method, and the overall accuracy of IS extraction is 2% higher than that of using optical image alone. The results demonstrate the ability of the proposed method for fusing optical and SAR data effectively to generate useful data.
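The classic fast-IHS shortcut that the abstract builds on replaces the intensity component and shifts each band by the intensity difference. The sketch below shows only that shortcut on a single pixel, not the paper's IHS-GTF combination with pixel-saliency weighting:

```python
# Simplified per-pixel IHS-style fusion: intensity is taken as the RGB
# mean, replaced by a new (e.g., SAR-derived) intensity, and the
# difference is added back to each band. Pixel values are illustrative.

def ihs_fuse(rgb_pixel, new_intensity):
    r, g, b = rgb_pixel
    old_intensity = (r + g + b) / 3.0
    delta = new_intensity - old_intensity
    # Shifting all bands equally changes intensity but preserves hue.
    return (r + delta, g + delta, b + delta)

fused = ihs_fuse((90.0, 120.0, 150.0), 150.0)
```

Applying this shift uniformly is exactly what causes the spectral distortion the abstract mentions when the substituted intensity differs strongly from the optical one, which motivates combining IHS with GTF.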
20

Wang, Yong, Xiangqiang Zeng, Xiaohan Liao, and Dafang Zhuang. "B-FGC-Net: A Building Extraction Network from High Resolution Remote Sensing Imagery." Remote Sensing 14, no. 2 (January 7, 2022): 269. http://dx.doi.org/10.3390/rs14020269.

Abstract:
Deep learning (DL) shows remarkable performance in extracting buildings from high resolution remote sensing images. However, how to improve the performance of DL based methods, especially the perception of spatial information, is worth further study. For this purpose, we proposed a building extraction network with feature highlighting, global awareness, and cross level information fusion (B-FGC-Net). The residual learning and spatial attention unit are introduced in the encoder of the B-FGC-Net, which simplifies the training of deep convolutional neural networks and highlights the spatial information representation of features. The global feature information awareness module is added to capture multiscale contextual information and integrate the global semantic information. The cross level feature recalibration module is used to bridge the semantic gap between low and high level features to complete the effective fusion of cross level information. The performance of the proposed method was tested on two public building datasets and compared with classical methods, such as UNet, LinkNet, and SegNet. Experimental results demonstrate that B-FGC-Net exhibits improved accuracy of extraction and information integration for both small and large scale buildings. The IoU scores of B-FGC-Net on the WHU and INRIA Building datasets are 90.04% and 79.31%, respectively. B-FGC-Net is an effective and recommended method for extracting buildings from high resolution remote sensing images.
21

Yu, Wei, Xiao Yue Tang, Lin Gan, Shi Jun Li, Yun Lu Zhang, and Jun Wang. "Information Mining Based on Multi-Granularity News Fusion." Advanced Materials Research 850-851 (December 2013): 592–95. http://dx.doi.org/10.4028/www.scientific.net/amr.850-851.592.

Abstract:
To help Internet users effectively filter and extract the mass of information on the same topic, this paper proposes MGNF, a fusion algorithm for news documents of different granularities. Document granularity here covers both the newest microblog-style short documents and traditional long-form news documents: although their lengths differ, both serve as the basis for computing grassroots journalism and its dissemination patterns. By mining the different views expressed in documents of different granularities, potentially undiscovered information can be found. The experimental results verify the effectiveness of the proposed algorithm.
22

Chen, Chen, and Daohui Bi. "A Motion Image Pose Contour Extraction Method Based on B-Spline Wavelet." International Journal of Antennas and Propagation 2021 (October 26, 2021): 1–8. http://dx.doi.org/10.1155/2021/4553143.

Abstract:
In order to improve the accuracy of traditional motion image pose contour extraction and shorten the extraction time, a motion image pose contour extraction method based on B-spline wavelets is proposed. Moving images are acquired through the vision system, and an information fusion process is used to perform statistical analysis on the images containing motion information and determine the location of the motion area. Convolutional neural network technology is used to preprocess the initial motion image pose contour, and B-spline wavelet theory is used to detect the preprocessed contour. Combined with a heuristic search method, the pose contour points are obtained and the motion image pose contour extraction is completed. The simulation results show that the proposed method achieves higher accuracy and shorter extraction time when extracting motion image pose contours.
23

Zhang, Zixuan, Xuan Sun, and Yuxi Liu. "GMR-Net: Road-Extraction Network Based on Fusion of Local and Global Information." Remote Sensing 14, no. 21 (October 31, 2022): 5476. http://dx.doi.org/10.3390/rs14215476.

Abstract:
Road extraction from high-resolution remote-sensing images has high application values in various fields. However, such work is susceptible to the influence of the surrounding environment due to the diverse slenderness and complex connectivity of roads, leading to false judgment and omission during extraction. To solve this problem, a road-extraction network, the global attention multi-path dilated convolution gated refinement Network (GMR-Net), is proposed. The GMR-Net is facilitated by both local and global information. A residual module with an attention mechanism is first designed to obtain global and other aggregate information for each location’s features. Then, a multi-path dilated convolution (MDC) approach is used to extract road features at different scales, i.e., to achieve multi-scale road feature extraction. Finally, gated refinement units (GR) are proposed to filter out ambiguous features for the gradual refinement of details. Multiple road-extraction methods are compared in this study using the Deep-Globe and Massachusetts datasets. Experiments on these two datasets demonstrate that the proposed method achieves F1-scores of 87.38 and 85.70%, respectively, outperforming other approaches on segmentation accuracy and generalization ability.
24

Ma, Xiaolin, Kaiqi Wu, Hailan Kuang, and Xinhua Liu. "An Entity Relation Extraction Method Based on Dynamic Context and Multi-Feature Fusion." Applied Sciences 12, no. 3 (January 31, 2022): 1532. http://dx.doi.org/10.3390/app12031532.

Abstract:
A dynamic context selector is a kind of masking idea: it divides the feature matrix into regions and dynamically selects the information of a region as the model's input. A novel idea is to improve entity relation extraction (ERE) by applying such dynamic context during training. In reality, most existing models for the joint extraction of entities and relations are based on static context, which suffers from missing features and results in poor performance. To address this problem, we propose a span-based joint extraction method based on dynamic context and multi-feature fusion (SPERT-DC). The context area is picked dynamically with the help of a threshold in the feature-selecting layer of the model. We also use Bi-LSTM_ATT to improve compatibility with longer text in the feature-extracting layer, and we enhance context information by combining it with entity tags in the feature-fusion layer. Furthermore, the model outperforms prior work by up to 1% F1 score on the public dataset, verifying the efficiency of dynamic context in the ERE model.
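Threshold-based dynamic selection of context can be sketched at token level. The tokens, scores, and threshold below are invented for illustration; SPERT-DC's actual selector operates on feature regions rather than raw tokens:

```python
# Toy sketch of dynamic context selection: instead of a fixed static
# window, keep only the tokens whose relevance score to the candidate
# entity pair exceeds a threshold. Scores and threshold are invented.

def select_context(tokens, scores, threshold=0.5):
    return [t for t, s in zip(tokens, scores) if s > threshold]

ctx = select_context(
    ["John", "works", "for", "Acme", "in", "Berlin"],
    [0.9, 0.8, 0.7, 0.9, 0.2, 0.3],
)
```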
25

Lee, Dong-Hyuk, Kyoung-Mu Lee, and Sang-Uk Lee. "Information Fusion of Photogrammetric Imagery and Lidar for Reliable Building Extraction." Journal of Broadcast Engineering 13, no. 2 (March 31, 2008): 236–44. http://dx.doi.org/10.5909/jbe.2008.13.2.236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Miao, Zelang, Wenzhong Shi, Alim Samat, Gianni Lisini, and Paolo Gamba. "Information Fusion for Urban Road Extraction From VHR Optical Satellite Images." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9, no. 5 (May 2016): 1817–29. http://dx.doi.org/10.1109/jstars.2015.2498663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Fernández-Caballero, Antonio, María López, and Juan Serrano-Cuerda. "Thermal-Infrared Pedestrian ROI Extraction through Thermal and Motion Information Fusion." Sensors 14, no. 4 (April 10, 2014): 6666–76. http://dx.doi.org/10.3390/s140406666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yu, Ning, Jianyi Liu, and Yu Shi. "Span-Based Fine-Grained Entity-Relation Extraction via Sub-Prompts Combination." Applied Sciences 13, no. 2 (January 15, 2023): 1159. http://dx.doi.org/10.3390/app13021159.

Full text
Abstract:
With the development of information extraction technology, a variety of entity-relation extraction paradigms have been formed. However, approaches guided by these existing paradigms suffer from insufficient information fusion and too coarse extraction granularity, leading to difficulties extracting all triples in a sentence. Moreover, the joint entity-relation extraction model cannot easily adapt to the relation extraction task. Therefore, we need to design more fine-grained and flexible extraction methods. In this paper, we propose a new extraction paradigm based on existing paradigms. Then, based on it, we propose SSPC, a method for Span-based Fine-Grained Entity-Relation Extraction via Sub-Prompts Combination. SSPC first decomposes the task into three sub-tasks, namely S,R Extraction, R,O Extraction and S,R,O Classification and then uses prompt tuning to fully integrate entity and relation information in each part. This fine-grained extraction framework makes the model easier to adapt to other similar tasks. We conduct experiments on joint entity-relation extraction and relation extraction, respectively. The experimental results show that our model outperforms previous methods and achieves state-of-the-art results on ADE, TACRED, and TACREV.
APA, Harvard, Vancouver, ISO, and other styles
29

Chao Zhou, Chao Zhou. "The System Design and Realization on Inclusive Finance Economic Influence based on the Feature Extraction and Data Fusion." 電腦學刊 33, no. 1 (February 2022): 199–208. http://dx.doi.org/10.53106/199115992022023301018.

Full text
Abstract:
Inclusive finance is a financial system that can effectively and comprehensively serve all social classes and groups. This paper designs an inclusive-finance economic-impact system based on feature extraction and data fusion. From the perspective of the factor market, product market, and financial market, the system structure is designed according to the level of information fusion and divided into a data layer, a feature layer, and a decision layer. Different information is collected from sample data of 31 provinces across the country and fused to establish a regression model that maximizes the information retention rate. The influence of the development degree of the three markets and the development level of inclusive finance on the high-quality development of the regional economy is explored, improving the robustness of information feature fusion. The results show that the development level of inclusive finance and the development degree of the three markets significantly promote the high-quality development of the regional economy. Inclusive finance can promote the high-quality development of the regional economy by optimizing the development of the "three markets".
APA, Harvard, Vancouver, ISO, and other styles
30

Ma, Mingming, Yi Niu, Chang Liu, Fu Li, and Guangming Shi. "A Lightweight Multi-Level Information Network for Multispectral and Hyperspectral Image Fusion." Remote Sensing 14, no. 21 (November 6, 2022): 5600. http://dx.doi.org/10.3390/rs14215600.

Full text
Abstract:
The process of fusing the rich spectral information of a low-spatial-resolution hyperspectral image (LR-HSI) with the spatial information of a high-spatial-resolution multispectral image (HR-MSI), to obtain an HSI with the spatial resolution of the MSI, is called hyperspectral image fusion (HIF). To reconstruct hyperspectral images at video frame rate, we propose a lightweight multi-level information network (MINet) for multispectral and hyperspectral image fusion. Specifically, we develop a novel lightweight feature fusion model, a residual constraint block based on global variance fine-tuning (GVF-RCB), to complete the feature extraction and fusion of hyperspectral images. Further, we define a residual activity factor to judge the learning ability of the residual module, thereby verifying the effectiveness of the GVF-RCB. In addition, we use cascaded cross-level fusion to embed the different spectral bands of the upsampled LR-HSI in a progressive manner, compensating for spectral information lost at different levels while maintaining high-frequency spatial information throughout. Experiments on different datasets show that our MINet outperforms state-of-the-art methods in terms of objective metrics, in particular requiring only 30% of the running time and 20% of the parameters.
APA, Harvard, Vancouver, ISO, and other styles
31

Quan, Hong Wei, and Dong Liang Peng. "Research on Communication Reconnaissance Information Processing and Fusion." Applied Mechanics and Materials 552 (June 2014): 359–62. http://dx.doi.org/10.4028/www.scientific.net/amm.552.359.

Full text
Abstract:
In a complex electromagnetic signal environment, the reconnaissance equipment in a tactical communication system can continuously reconnoiter a variety of enemy communication signals and acquire a number of characteristic parameters in the time, frequency, and space domains through search analysis, feature extraction, direction finding, and comprehensive identification. After a series of signal processing, data mining, and information fusion steps, the characteristic parameters of the electromagnetic spectrum of the enemy's equipment can be obtained, providing a basis for the analysis and estimation of the battlefield electromagnetic situation. In this paper, a multi-hierarchical blackboard model is proposed for multi-source communication reconnaissance information mining and fusion, and the effectiveness of the method is validated in a simulation environment.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Chao, Haojin Hu, Yonghang Tai, Lijun Yun, and Jun Zhang. "Trustworthy Image Fusion with Deep Learning for Wireless Applications." Wireless Communications and Mobile Computing 2021 (June 29, 2021): 1–9. http://dx.doi.org/10.1155/2021/6220166.

Full text
Abstract:
To fuse infrared and visible images in wireless applications, the secure extraction and transmission of characteristic information is an important task. The fused image quality depends on the effectiveness of feature extraction and the transmission of image-pair characteristics. However, most fusion approaches based on deep learning do not make effective use of the features for image fusion, which results in missing semantic content in the fused image. In this paper, a novel trustworthy image fusion method is proposed to address these issues; it applies convolutional neural networks for feature extraction and blockchain technology to protect sensitive information. The new method effectively reduces the loss of feature information by feeding the output of each convolutional layer of the feature extraction network to the next layer along with the output of the previous layer; and, to ensure the similarity between the fused image and the original image, the feature map of the original input image is used as the input of the reconstruction network. Compared to other methods, the experimental results show that our proposed method achieves better quality and satisfies human perception.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Kun. "Web News Data Extraction Technology Based on Text Keywords." Complexity 2021 (April 16, 2021): 1–11. http://dx.doi.org/10.1155/2021/5529447.

Full text
Abstract:
In order to shorten the time users spend querying news on the Internet, this paper studies and designs a network news data extraction technology that obtains the main news information by extracting keywords from news text. First, the TF-IDF, TextRank, and LDA keyword extraction algorithms are analyzed to understand the keyword extraction process, and the TF-IDF algorithm is optimized using Zipf's law. By introducing the idea of model fusion, five schemes based on waterfall fusion and parallel combination fusion are designed, and the effects of the five schemes are verified by experiments. The designed extraction technology is found to work well for network news data extraction. News keyword extraction has great application prospects and can provide a basis for research on news key phrases, news abstracts, and so on.
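The parallel-combination fusion of keyword scorers mentioned in this abstract can be sketched in plain Python. The `tf_idf` and `parallel_fusion` functions, the toy corpus, the position score, and the weights below are all illustrative assumptions, not the paper's actual schemes:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF scores over a tokenised corpus."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append({w: (c / len(d)) * math.log(n / df[w])
                       for w, c in tf.items()})
    return scores

def parallel_fusion(score_maps, weights):
    """Parallel-combination fusion: a weighted sum of max-normalised
    scores from several extractors over the same word set."""
    fused = Counter()
    for scores, w in zip(score_maps, weights):
        top = max(scores.values()) or 1.0
        for word, s in scores.items():
            fused[word] += w * s / top
    return fused

docs = [
    "fusion model improves news keyword extraction".split(),
    "news website layout and fusion design".split(),
    "keyword extraction for news abstracts".split(),
]
tfidf = tf_idf(docs)
# fuse TF-IDF with a naive position score favouring early words
position = {w: 1.0 / (i + 1) for i, w in enumerate(docs[0])}
fused = parallel_fusion([tfidf[0], position], weights=(0.7, 0.3))
print(fused.most_common(2))
```

A waterfall scheme would instead feed the top candidates of one extractor into the next; the parallel variant shown here simply blends normalised scores.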
APA, Harvard, Vancouver, ISO, and other styles
34

Zhong, Hongye, and Jitian Xiao. "Enhancing Health Risk Prediction with Deep Learning on Big Data and Revised Fusion Node Paradigm." Scientific Programming 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/1901876.

Full text
Abstract:
With recent advances in health systems, the amount of health data is expanding rapidly in various formats. These data originate from many new sources, including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and the enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction with the revised fusion node and deep learning paradigms. The fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework's implementation.
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Ken, Jun Wang, Yang Yang, Yong Tang, Yong Zhou, and Jin Zhu. "A Video Key Frame Extraction Method Based on Multiview Fusion." Mobile Information Systems 2022 (July 22, 2022): 1–9. http://dx.doi.org/10.1155/2022/8931035.

Full text
Abstract:
A massive amount of video data is stored in real-time road monitoring systems, especially in high-speed scenes. Traditional methods of video key frame extraction suffer from heavy computation and long processing times. It is therefore imperative to reduce the massive video data generated by monitoring and to help researchers study key frames. Aiming at these problems, we propose an efficient key frame extraction method based on multiview fusion, in which an autoencoder is used to compress the video data. Specifically, all frames of the video undergo feature dimensionality reduction, and the reduced features are fused across multiple views. Finally, dynamic programming and clustering are used to extract key frames. The experimental results show that the proposed method has lower computational complexity in extracting key frames, while the mutual information in the extracted key frames is large. This illustrates the reliability and efficiency of the proposed method, which provides technical support for subsequent video research.
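The pipeline this abstract outlines (compress frames, cluster, pick one representative frame per cluster) can be illustrated with numpy. Here PCA stands in for the paper's autoencoder and a tiny k-means for its dynamic-programming-plus-clustering step, so all names, parameters, and the toy data are assumptions:

```python
import numpy as np

def reduce_dim(frames, k):
    """PCA via SVD: project flattened frames onto the top-k principal
    components (a stand-in for the paper's autoencoder compression)."""
    X = frames - frames.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

def kmeans_keyframes(feats, n_key, iters=20):
    """Cluster reduced features; return, for each centroid, the index
    of the nearest frame as a key frame (deterministic init at evenly
    spaced frames, so the sketch is reproducible)."""
    init = np.linspace(0, len(feats) - 1, n_key).astype(int)
    centroids = feats[init].copy()
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(n_key):
            if (labels == c).any():
                centroids[c] = feats[labels == c].mean(axis=0)
    d = np.linalg.norm(feats[:, None] - centroids[None], axis=2)
    return sorted(set(d.argmin(axis=0)))

# Toy "video": 30 flattened frames drawn from three distinct scenes.
rng = np.random.default_rng(0)
scenes = [rng.normal(loc, 0.1, size=(10, 64)) for loc in (0.0, 5.0, 10.0)]
frames = np.vstack(scenes)
keys = kmeans_keyframes(reduce_dim(frames, 4), n_key=3)
print(keys)   # one representative frame index per scene
```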
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Dong, Huihua Yang, Lemeng Wang, Yuying Shao, and Peng Peng. "Gated Fusion of Infrared and Visible Light Images Based on CNN." Journal of Physics: Conference Series 2025, no. 1 (September 1, 2021): 012065. http://dx.doi.org/10.1088/1742-6596/2025/1/012065.

Full text
Abstract:
As a new research direction, image fusion technology has attracted more and more attention in many fields. Infrared and visible images are two kinds of multimodal data with strong complementarity: a fused image of the two modes contains not only the radiation information of the infrared image but also the texture detail of the visible image. In this paper, a convolutional-neural-network-based encoding-fusing-decoding network model structure is used. In the encoding stage, Dense Blocks, which have an advantage in feature extraction, are adopted to extract image features. In the fusion stage, four fusion methods are compared and analyzed, and gated fusion is selected as the main method of the fusion layer. In the decoding stage, Residual Dense Blocks (RDB) are used to restore the fused features to the fused image. The fused image based on this method is sensitive to temperature characteristics and performs better in image quality: it has high contrast, a relatively smooth fusion effect, and a more natural overall visual appearance.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Xue Jun, D. L. Yang, and Ling Li Jiang. "Bearing Fault Diagnosis Based on Multi-Sensor Information Fusion with SVM." Applied Mechanics and Materials 34-35 (October 2010): 995–99. http://dx.doi.org/10.4028/www.scientific.net/amm.34-35.995.

Full text
Abstract:
This paper proposes a fault diagnosis method based on multi-sensor information fusion for rolling bearings. The energy values of multiple sensors are used as the feature vector, and a binary tree support vector machine (BT-SVM) is used for pattern recognition and fault diagnosis. Analysis of the training samples shows that the penalty factor and kernel-function parameters affect the recognition rate of bearing faults, so an approximate method to determine their optimum values is proposed. Compared with a traditional single sensor using the energy of EMD components as features, the results show that the proposed method significantly reduces feature-extraction time and improves diagnostic accuracy, which reaches 99.82%. The method is simple, effective, and fast in feature extraction, and meets the real-time requirements of bearing fault diagnosis.
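An energy-based feature vector of the kind this abstract describes can be sketched with numpy. Here spectral band energies stand in for the paper's per-IMF (EMD component) energies, and the function names, band count, and test signals are illustrative assumptions:

```python
import numpy as np

def band_energies(signal, n_bands=4):
    """Split the spectrum into equal bands and return the energy in
    each band, normalised to sum to 1 (a stand-in for the per-IMF
    energies obtained from empirical mode decomposition)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    e = np.array([b.sum() for b in bands])
    return e / e.sum()

def fused_feature_vector(sensor_signals, n_bands=4):
    """Concatenate per-sensor band energies into one feature vector
    for a downstream classifier (e.g. a binary-tree SVM)."""
    return np.concatenate([band_energies(s, n_bands) for s in sensor_signals])

t = np.linspace(0, 1, 1024, endpoint=False)
sensors = [np.sin(2 * np.pi * 10 * t),     # low-frequency sensor
           np.sin(2 * np.pi * 400 * t)]    # high-frequency sensor
fv = fused_feature_vector(sensors)
print(fv.round(2))
```

Each sensor contributes a 4-element energy profile, so the fused vector localises each sensor's dominant frequency in its own band.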
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Bao-Yuan, Yu-Kun Shen, and Kun Sun. "Research on Object Detection Algorithm Based on Multilayer Information Fusion." Mathematical Problems in Engineering 2020 (September 27, 2020): 1–13. http://dx.doi.org/10.1155/2020/9076857.

Full text
Abstract:
At present, object detectors based on convolutional neural networks generally rely on the last layer of features extracted by the feature extraction network. In the process of continuous convolution and pooling of deep features, position information cannot be completely transferred backward. This paper proposes a multiscale feature reuse detection model, which includes the basic feature extraction network DenseNet, a feature fusion network, a multiscale anchor region proposal network, and a classification and regression network. The fusion of high-dimensional and low-dimensional features not only strengthens the model's sensitivity to objects of different sizes but also strengthens the transmission of information, so that the feature map simultaneously carries rich deep semantic information and shallow location information, which significantly improves the robustness and detection accuracy of the model. The algorithm is trained and tested on the Pascal VOC2007 dataset. The experimental results show that the mean average precision over the objects in the dataset is 73.87%. Compared with the mainstream Faster RCNN and SSD detection models, the mean average precision of the DenseNet-based object detection algorithm is improved by 5.63% and 3.86%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
39

Dong, Yan, Yong Sheng Zhu, and Qiang Li. "Research on License Plate Recognition Based on Information Fusion." Advanced Materials Research 433-440 (January 2012): 7067–72. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.7067.

Full text
Abstract:
The information capacity of the characters in license plate images directly affects recognition accuracy. To improve the recognition rate of vehicle licenses, and considering the low cost of installing cameras nowadays, this paper proposes adopting images from two cameras at different angles. License plate location, character division, and feature extraction are performed separately for each image, and information fusion is then used to confirm the more reliable recognition result, which reduces the character error-recognition rate. Contrast experiments show that this method can improve the accuracy of license plate recognition.
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Yi, Min Chang, and Jie Xu. "High-Resolution Remote Sensing Image Information Extraction and Target Recognition Based on Multiple Information Fusion." IEEE Access 8 (2020): 121486–500. http://dx.doi.org/10.1109/access.2020.3006288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Zhao, Hongyang, and Qiang Xie. "An Improved TextRank Multi-feature Fusion Algorithm For Keyword Extraction of Educational Resources." Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012021. http://dx.doi.org/10.1088/1742-6596/2078/1/012021.

Full text
Abstract:
Given that traditional graph-model methods consider only statistical features or general semantic features when extracting keywords from the existing massive educational resources, and lack the ability to mine and exploit multi-factor semantic features, this paper proposes an improved TextRank-based algorithm for keyword extraction from educational resources. According to the characteristics of Chinese text and the shortcomings of the traditional TextRank algorithm, the improved algorithm fuses multiple features: the importance of words in the corpus, their location in the text, and their word attributes. Experimental results show that this method achieves higher accuracy, recall, and F-measure than traditional algorithms in keyword extraction from educational resources, improving the quality of extracted keywords and benefiting the utilization and management of educational resources.
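A minimal TextRank with a per-word feature weight folded into the rank update, in the spirit of the multi-feature fusion this abstract describes, might look like the following. The graph construction, the weight values, and the word list are illustrative assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

def textrank(words, feature_weight, window=2, d=0.85, iters=30):
    """TextRank over a co-occurrence graph; each node's incoming mass
    is biased by a per-word feature weight (corpus importance,
    position, or word attributes in the paper; supplied by the caller
    here)."""
    graph = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[i] != words[j]:
                graph[words[i]].add(words[j])
                graph[words[j]].add(words[i])
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        new = {}
        for w in graph:
            rank = sum(score[u] / len(graph[u]) for u in graph[w])
            new[w] = (1 - d) + d * feature_weight.get(w, 1.0) * rank
        score = new
    return sorted(score, key=score.get, reverse=True)

words = "remote sensing image fusion improves image keyword extraction".split()
# hypothetical multi-feature weights boosting two candidate keywords
weight = {"image": 1.5, "fusion": 1.3}
ranking = textrank(words, weight)
print(ranking[:3])
```

With a uniform `feature_weight` this reduces to plain TextRank; the weight map is where the fused statistical, positional, and attribute features would enter.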
APA, Harvard, Vancouver, ISO, and other styles
42

Fei, Yin, Gao Wei, and Song Zongxi. "Medical Image Fusion Based on Feature Extraction and Sparse Representation." International Journal of Biomedical Imaging 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/3020461.

Full text
Abstract:
As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation takes neither intrinsic structure nor time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed: a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), so that the results preserve more energy and edge information. The SM contains the local structure feature captured by a Laplacian of Gaussian (LOG), and the EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse-representation-based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.
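The decision-map idea (an SM from a Laplacian response, an EM from local mean-square deviation, and their sum SEM steering a per-pixel choice between sources) can be sketched with numpy. The shift-based Laplacian and 3x3 box statistics below are deliberate simplifications of the paper's LOG and windowed deviation, and the toy images are assumptions:

```python
import numpy as np

def local_mean(x):
    """3x3 box mean via shifts (wrap-around edges, fine for a sketch)."""
    return sum(np.roll(np.roll(x, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def structure_map(x):
    """|Laplacian| response as the structure information map (SM);
    the paper uses a Laplacian of Gaussian."""
    lap = 4 * x - (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                   + np.roll(x, 1, 1) + np.roll(x, -1, 1))
    return np.abs(lap)

def energy_map(x):
    """Local mean-square deviation as the energy information map (EM)."""
    m = local_mean(x)
    return local_mean((x - m) ** 2)

def fuse(a, b):
    """Choose, per pixel, the source with the larger SEM = SM + EM."""
    sem_a = structure_map(a) + energy_map(a)
    sem_b = structure_map(b) + energy_map(b)
    return np.where(sem_a >= sem_b, a, b)

a = np.zeros((8, 8))
a[2:4, 2:4] = 1.0            # source with a bright structure
b = np.zeros((8, 8))         # featureless source
f = fuse(a, b)
print(f[2:4, 2:4])
```

The structured source wins wherever it carries edge or energy information, so the bright patch survives into the fused result.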
APA, Harvard, Vancouver, ISO, and other styles
43

Ren, Jintong, Wunian Yang, Xin Yang, Xiaoyu Deng, He Zhao, Fang Wang, and Lei Wang. "Optimization of Fusion Method for GF-2 Satellite Remote Sensing Images based on the Classification Effect." Earth Sciences Research Journal 23, no. 2 (April 1, 2019): 163–69. http://dx.doi.org/10.15446/esrj.v23n2.80281.

Full text
Abstract:
With the successful launch of China's GF series satellites, it is increasingly important to study image data quality, the adaptability of processing methods, and information extraction methods. Panchromatic and multi-spectral data from the GF-2 images of China's sub-meter high-resolution remote sensing satellite are fused by PCA, Pansharp, Gram-Schmidt, and NNDiffuse fusion. The quality of the fused images is then evaluated subjectively and objectively. To evaluate the applicability of different classification algorithms, object-oriented classification based on machine learning algorithms such as KNN, SVM, and Random Trees is used to classify the different GF-2 fusion images. The results show that: (1) the Pansharp fusion image has the best visual effect among the GF-2 fusion images; quantitative evaluation shows that the Gram-Schmidt fusion image best retains brightness and information, while the Pansharp fusion image has the highest correlation with the original multi-spectral image; the NNDiffuse fusion image has the highest clarity, and the PCA fusion image performs worst in quantitative evaluation; (2) according to the applicability analysis of the fusion images under different classification algorithms for feature information extraction, the NNDiffuse method is the most suitable for fusing GF-2 image data, and classifying the fused images with the KNN or Random Trees algorithm is the most suitable choice.
APA, Harvard, Vancouver, ISO, and other styles
44

Puttinaovarat, Supattra, and Paramate Horkaew. "Multi-spectral and Topographic Fusion for Automated Road Extraction." Open Geosciences 10, no. 1 (September 14, 2018): 461–73. http://dx.doi.org/10.1515/geo-2018-0036.

Full text
Abstract:
Road geometry is pertinent information in various GIS studies. Reliable and updated road information thus calls for the conventional on-site survey to be replaced by more accurate and efficient remote sensing technology. Generally, this approach involves image enhancement and the extraction of relevant features, such as elongated gradients and intersecting corners. Thus far, its application is often impeded by the erroneous extraction of other urban peripherals with similar pixel characteristics. This paper therefore proposes the fusion of THEOS satellite imagery and topographic derivatives obtained from underlying Digital Surface Models (DSM). Multi-spectral indices in thematic layers and the surface properties of designated roads were both fed into state-of-the-art machine learning algorithms. The results were later fused, taking into account the consistently leveled road surface. The proposed technique was thus able to eliminate irrelevant urban structures such as buildings and other constructions otherwise left by conventional index-based extraction. The numerical assessment indicates a recall of 84.64%, a precision of 97.40%, and an overall accuracy of 97.78%, with a Kappa statistic of 0.89. The visual inspection reported herein also confirms consistency with the ground-truth reference.
APA, Harvard, Vancouver, ISO, and other styles
45

AlFawwaz, Bader M., Atallah AL-Shatnawi, Faisal Al-Saqqar, and Mohammad Nusir. "Multi-Resolution Discrete Cosine Transform Fusion Technique Face Recognition Model." Data 7, no. 6 (June 15, 2022): 80. http://dx.doi.org/10.3390/data7060080.

Full text
Abstract:
This work presents a Fusion Feature-Level Face Recognition Model (FFLFRM) based on a Multi-Resolution Discrete Cosine Transform (MDCT) fusion technique, comprising face detection, feature extraction, feature fusion, and face classification. It detects core facial characteristics as well as local and global features utilizing Local Binary Pattern (LBP) and Principal Component Analysis (PCA) extraction. The MDCT fusion technique is applied, followed by Artificial Neural Network (ANN) classification. Model testing used 10,000 faces derived from the Olivetti Research Laboratory (ORL) library. Model performance was evaluated against three state-of-the-art models based on Frequency Partition (FP), Laplacian Pyramid (LP), and Covariance Intersection (CI) fusion techniques, in terms of image features (low-resolution issues and occlusion) and facial characteristics (pose and expression, per se and in relation to illumination). The MDCT-based model yielded promising recognition results, with 97.70% accuracy, demonstrating effectiveness and robustness against these challenges. Furthermore, this work shows that the MDCT method used by the proposed FFLFRM is simpler, faster, and more accurate than the Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), and Discrete Wavelet Transform (DWT), and that it is an effective method for real-life facial applications.
APA, Harvard, Vancouver, ISO, and other styles
46

Krauss, T., P. d'Angelo, G. Kuschk, J. Tian, and T. Partovi. "3D-information fusion from very high resolution satellite sensors." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 651–56. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-651-2015.

Full text
Abstract:
In this paper we show the pre-processing and the potential for environmental applications of very high resolution (VHR) satellite stereo imagery such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, a dense digital surface model (DSM) is first generated. From this, a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived. Combining these elevation-based data with a spectral classification allows the detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can be used directly for the simulation and monitoring of environmental issues. Examples are the simulation of flooding, building-volume and population estimation, simulation of road noise, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and the sprawl of informal settlements, and much more. Outside urban areas, too, volume information brings literally a new dimension to earth observation tasks, such as the volume estimation of forests and illegal logging, the volume of (illegal) open-pit mining activities, the estimation of flooding or tsunami risks, dike planning, etc. We present the preprocessing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images, and derived digital terrain models (DTMs). From these components we show how monitoring and decision-fusion-based 3D change detection can be realized using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
47

Qi, Qingfu, Liyuan Lin, and Rui Zhang. "Feature Extraction Network with Attention Mechanism for Data Enhancement and Recombination Fusion for Multimodal Sentiment Analysis." Information 12, no. 9 (August 24, 2021): 342. http://dx.doi.org/10.3390/info12090342.

Full text
Abstract:
Multimodal sentiment analysis and emotion recognition represent a major research direction in natural language processing (NLP). With the rapid development of online media, people often express their emotions on a topic in the form of video, and the signals transmitted are multimodal, including language, visual, and audio. The traditional unimodal sentiment analysis method is therefore no longer applicable, and a fusion model of multimodal information must be established to obtain sentiment understanding. In previous studies, scholars used the feature-vector cascade method when fusing multimodal data at each time step in the middle layer. This method puts the information of each modality in the same position and does not distinguish between strong and weak modal information among the multiple modalities. At the same time, it does not attend to the embedding characteristics of multimodal signals across the time dimension. In response to these problems, this paper proposes a new method and model for processing multimodal signals that takes into account the delay and hysteresis characteristics of multimodal signals across the time dimension, with the aim of obtaining a multimodal fused feature representation for sentiment analysis. We evaluate our method on the multimodal sentiment analysis benchmark dataset CMU Multimodal Opinion Sentiment and Emotion Intensity Corpus (CMU-MOSEI). We compare our proposed method with the state-of-the-art model and show excellent results.
APA, Harvard, Vancouver, ISO, and other styles
48

Luo, Qiu, Qiming Xiong, Yao Liu, and Gui Zhang. "GeoEye Image Fusion Vegetation Information Extraction Based on Blue Noise Measurement Texture." IOP Conference Series: Earth and Environmental Science 67 (May 2017): 012007. http://dx.doi.org/10.1088/1755-1315/67/1/012007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Jiang, Mi, Zelang Miao, Paolo Gamba, and Bin Yong. "Application of Multitemporal InSAR Covariance and Information Fusion to Robust Road Extraction." IEEE Transactions on Geoscience and Remote Sensing 55, no. 6 (June 2017): 3611–22. http://dx.doi.org/10.1109/tgrs.2017.2677260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Guido, Rodrigo Capobianco. "A tutorial review on entropy-based handcrafted feature extraction for information fusion." Information Fusion 41 (May 2018): 161–75. http://dx.doi.org/10.1016/j.inffus.2017.09.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles