Academic literature on the topic 'Adversarial Information Fusion'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Adversarial Information Fusion.'


Journal articles on the topic "Adversarial Information Fusion"

1. Kott, Alexander, Rajdeep Singh, William M. McEneaney, and Wes Milks. "Hypothesis-driven information fusion in adversarial, deceptive environments." Information Fusion 12, no. 2 (2011): 131–44. http://dx.doi.org/10.1016/j.inffus.2010.09.001.

2. Wu, Zhaoli, Xuehan Wu, Yuancai Zhu, et al. "Research on Multimodal Image Fusion Target Detection Algorithm Based on Generative Adversarial Network." Wireless Communications and Mobile Computing 2022 (January 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/1740909.

Abstract:
In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion using unsupervised learning methods to reduce the differences between multimodal image information. Firstly, this paper improves the fusion model based on generative adversarial network and uses the fusion algorithm based on the dual discriminator generative adversarial network to generate high-quality IR-visible fused images and then blends the IR and visible images into a ternary dataset and combines the triple angular loss function to do migration …
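
A minimal sketch, assuming a PyTorch-style setup, of the dual-discriminator arrangement the abstract describes: one discriminator judges the fused image against infrared patches, the other against visible patches, and the generator is trained to fool both so the fused output keeps statistics of each modality. All module names, layer sizes, and losses below are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class Fuser(nn.Module):
        # Hypothetical generator: maps an infrared/visible pair to one fused image.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
        def forward(self, ir, vis):
            return self.net(torch.cat([ir, vis], dim=1))

    def make_disc():
        # Hypothetical patch-style critic scoring a single-channel image.
        return nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))

    G, D_ir, D_vis = Fuser(), make_disc(), make_disc()
    bce = nn.BCEWithLogitsLoss()
    ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    fused = G(ir, vis)

    def d_loss(D, real, fake):
        # Each discriminator treats its own modality as "real" and the fused image as "fake".
        return (bce(D(real), torch.ones_like(D(real))) +
                bce(D(fake), torch.zeros_like(D(fake))))

    loss_d = d_loss(D_ir, ir, fused.detach()) + d_loss(D_vis, vis, fused.detach())
    # The generator is rewarded for fooling both critics at once.
    loss_g = (bce(D_ir(fused), torch.ones_like(D_ir(fused))) +
              bce(D_vis(fused), torch.ones_like(D_vis(fused))))
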
3. Yuan, C., C. Q. Sun, X. Y. Tang, and R. F. Liu. "FLGC-Fusion GAN: An Enhanced Fusion GAN Model by Importing Fully Learnable Group Convolution." Mathematical Problems in Engineering 2020 (October 22, 2020): 1–13. http://dx.doi.org/10.1155/2020/6384831.

Abstract:
The purpose of image fusion is to combine the source images of the same scene into a single composite image with more useful information and better visual effects. Fusion GAN made a breakthrough in this field by proposing to use a generative adversarial network to fuse images. In some cases, while retaining infrared radiation information and gradient information at the same time, existing fusion methods ignore image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed FLGC-Fusion GAN.
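
The core ingredient named in the title, fully learnable group convolution, generalises ordinary group convolution by letting the network learn the channel grouping instead of fixing it. A rough sketch of that idea under PyTorch-style assumptions (the soft assignment matrix below is illustrative, not the paper's exact formulation):

    import torch
    import torch.nn as nn

    x = torch.rand(1, 64, 32, 32)

    # Standard group convolution: the 64 channels are split into 4 fixed, contiguous groups.
    fixed = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=4)
    y = fixed(x)

    # A fully learnable grouping would instead learn which input channel feeds which
    # group, e.g. via a soft channel-to-group assignment pushed towards binary values
    # during training, so the partition itself becomes a trainable parameter.
    assign = torch.softmax(torch.randn(64, 4, requires_grad=True), dim=1)
    grouped = torch.einsum('bchw,cg->bghw', x, assign)  # 4 learned channel mixtures
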
4. Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (2022): 1327. http://dx.doi.org/10.3390/e24101327.

Abstract:
Infrared-visible fusion has great potential in night-vision enhancement for intelligent vehicles. The fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods do not have explicit and effective rules, which leads to the poor contrast and saliency of the target. In this paper, we propose the SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. …
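
One common way to realise semantic guidance in a fusion objective is to let a target/semantic mask decide which source each pixel should follow, so salient targets inherit the infrared signature while the background keeps visible-band texture. The sketch below shows only this generic idea; the mask, weights, and generator stand-in are assumptions, not the SGVPGAN design.

    import torch
    import torch.nn.functional as F

    ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    fused = 0.5 * (ir + vis)           # stand-in for a fusion network's output
    mask = (ir > 0.8).float()          # crude "salient target" mask from hot IR pixels
    # Salient pixels are pulled towards the infrared image, the rest towards the visible image.
    loss = (mask * F.l1_loss(fused, ir, reduction='none') +
            (1 - mask) * F.l1_loss(fused, vis, reduction='none')).mean()
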
5. Jia, Ruiming, Tong Li, Shengjie Liu, Jiali Cui, and Fei Yuan. "Infrared Simulation Based on Cascade Multi-Scale Information Fusion Adversarial Network." Acta Optica Sinica 40, no. 18 (2020): 1810001. http://dx.doi.org/10.3788/aos202040.1810001.

6. Song, Xuhui, Hongtao Yu, Shaomei Li, and Huansha Wang. "Robust Chinese Named Entity Recognition Based on Fusion Graph Embedding." Electronics 12, no. 3 (2023): 569. http://dx.doi.org/10.3390/electronics12030569.

Abstract:
Named entity recognition is an important basic task in the field of natural language processing. The current mainstream named entity recognition methods are mainly based on the deep neural network model. The vulnerability of the deep neural network itself leads to a significant decline in the accuracy of named entity recognition when there is adversarial text in the text. In order to improve the robustness of named entity recognition under adversarial conditions, this paper proposes a Chinese named entity recognition model based on fusion graph embedding. Firstly, the model encodes and represents …
7. Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished with an adversarial game and directed by the unique loss functions. The generator with residual blocks and skip connections can extract deep features of source image pairs and generate an elementary fused image with infrared thermal radiation …
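
A minimal sketch of the generator side this abstract describes: residual blocks with identity skips plus a long skip connection from shallow to deep features, taking a stacked infrared/visible pair as input. Channel counts and depth are assumptions for illustration, not the paper's configuration.

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1))
        def forward(self, x):
            return x + self.body(x)            # identity skip connection

    class Generator(nn.Module):
        def __init__(self, blocks=4):
            super().__init__()
            self.head = nn.Conv2d(2, 64, 3, padding=1)   # stacked IR + visible input
            self.body = nn.Sequential(*[ResBlock(64) for _ in range(blocks)])
            self.tail = nn.Conv2d(64, 1, 3, padding=1)
        def forward(self, ir, vis):
            feat = self.head(torch.cat([ir, vis], dim=1))
            return torch.tanh(self.tail(self.body(feat) + feat))   # long skip connection

    fused = Generator()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
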
8. Tang, Wei, Yu Liu, Chao Zhang, Juan Cheng, Hu Peng, and Xun Chen. "Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks." Computational and Mathematical Methods in Medicine 2019 (December 4, 2019): 1–11. http://dx.doi.org/10.1155/2019/5450373.

Abstract:
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. …
9. He, Gang, Jiaping Zhong, Jie Lei, Yunsong Li, and Weiying Xie. "Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder." Remote Sensing 11, no. 22 (2019): 2691. http://dx.doi.org/10.3390/rs11222691.

Abstract:
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in spectral characteristics of different materials due to sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine with the panchromatic (PAN) image to competently represent the spatial information of HR HS images …
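
The spectral constraint can be read as a penalty on per-pixel spectral distortion between the network output and the input hyperspectral cube. A toy sketch of such a term using the spectral angle (the adversarial autoencoder itself is omitted, and all sizes and weights are assumptions, not the SCAAE implementation):

    import torch
    import torch.nn.functional as F

    def spectral_angle(x, y, eps=1e-8):
        # x, y: (batch, bands, H, W); mean angle between the spectra at every pixel.
        dot = (x * y).sum(dim=1)
        norm = x.norm(dim=1) * y.norm(dim=1) + eps
        return torch.acos((dot / norm).clamp(-1 + eps, 1 - eps)).mean()

    hs = torch.rand(2, 31, 16, 16)              # toy hyperspectral patch, 31 bands
    recon = hs + 0.01 * torch.randn_like(hs)    # stand-in for an autoencoder output
    loss = F.mse_loss(recon, hs) + 0.1 * spectral_angle(recon, hs)
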
10. Jin, Zhou-xiang, and Hao Qin. "Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring." Journal of Computers (電腦學刊) 33, no. 1 (2022): 31–41. http://dx.doi.org/10.53106/199115992022023301004.

Abstract:
Deblurring of motion images is a part of the field of image restoration. Deblurring motion images is difficult not only because the motion parameters are hard to estimate, but also because complex factors such as noise are involved, which makes the deblurring algorithm harder to design. Image deblurring can be divided into two categories: non-blind image deblurring, where the blur kernel is known, and blind image deblurring, where the blur kernel is unknown. Traditional motion-image deblurring networks ignore the non-uniformity of motion-blurred images and cannot effectively recover the high …
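
The non-blind versus blind distinction in this abstract comes down to whether the blur kernel in the forward model is known. A tiny illustration of that forward model (the kernel and image sizes are toy assumptions):

    import torch
    import torch.nn.functional as F

    x = torch.rand(1, 1, 32, 32)                       # sharp image
    k = torch.ones(1, 1, 5, 5) / 25.0                  # known 5x5 box blur kernel
    y = F.conv2d(x, k, padding=2) + 0.01 * torch.randn(1, 1, 32, 32)   # y = k * x + noise
    # Non-blind deblurring recovers x given both y and k; blind deblurring must recover
    # x (and implicitly k) from y alone, which is the harder setting GAN-based methods target.
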