Journal articles on the topic "Adversarial Information Fusion"

To see other types of publications on this topic, follow the link: Adversarial Information Fusion.

Format the source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Adversarial Information Fusion".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided that the relevant data are present in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Kott, Alexander, Rajdeep Singh, William M. McEneaney, and Wes Milks. "Hypothesis-driven information fusion in adversarial, deceptive environments." Information Fusion 12, no. 2 (April 2011): 131–44. http://dx.doi.org/10.1016/j.inffus.2010.09.001.

2

Wu, Zhaoli, Xuehan Wu, Yuancai Zhu, Jingxuan Zhai, Haibo Yang, Zhiwei Yang, Chao Wang, and Jilong Sun. "Research on Multimodal Image Fusion Target Detection Algorithm Based on Generative Adversarial Network." Wireless Communications and Mobile Computing 2022 (January 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/1740909.

Abstract:
In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion, using unsupervised learning methods to reduce the differences between multimodal image information. Firstly, the paper improves a fusion model based on a generative adversarial network and uses a fusion algorithm based on a dual-discriminator generative adversarial network to generate high-quality IR-visible fused images; it then combines the IR and visible images into a ternary dataset and applies a triplet angular loss function for transfer learning. Finally, the fused images are used as the input to the Faster R-CNN object detection algorithm, and a new non-maximum suppression algorithm is used to improve Faster R-CNN, which further raises the detection accuracy. Experiments show that the method achieves mutual complementation of multimodal feature information, makes up for the lack of information in single-modal scenes, and obtains good detection results for information from both modalities (infrared and visible light).
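
As background for the detection stage described above: non-maximum suppression prunes overlapping candidate boxes so that each detected target is reported only once. The sketch below is the standard IoU-based procedure that Faster R-CNN uses by default, written in plain NumPy; it is not the modified variant proposed in the paper, and the IoU threshold is an arbitrary assumption.

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        """Standard IoU-based non-maximum suppression.
        boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences."""
        x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
        areas = (x2 - x1) * (y2 - y1)
        order = scores.argsort()[::-1]      # visit boxes from highest to lowest score
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            # intersection of the current winner with every remaining box
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            # discard boxes that overlap the winner too strongly
            order = order[1:][iou < iou_thresh]
        return keep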
3

Yuan, C., C. Q. Sun, X. Y. Tang, and R. F. Liu. "FLGC-Fusion GAN: An Enhanced Fusion GAN Model by Importing Fully Learnable Group Convolution." Mathematical Problems in Engineering 2020 (October 22, 2020): 1–13. http://dx.doi.org/10.1155/2020/6384831.

Abstract:
The purpose of image fusion is to combine the source images of the same scene into a single composite image with more useful information and better visual effects. Fusion GAN has made a breakthrough in this field by proposing to use the generative adversarial network to fuse images. In some cases, while trying to retain infrared radiation information and gradient information at the same time, the existing fusion methods ignore image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed FLGC-Fusion GAN. In the generator, using the learnable group convolution improves the efficiency of the model and saves computing resources, so a better trade-off between the accuracy and speed of the model can be achieved. Besides, we take the residual dense block as the basic network building unit and use the perception characteristics of the inactive features as the content loss characteristics of the input, achieving the effect of deep network supervision. Experimental results on two public datasets show that the proposed method performs well in subjective visual performance and objective criteria and has obvious advantages over other current typical methods.
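
For readers unfamiliar with group convolution, the fragment below contrasts an ordinary dense convolution with a fixed grouped convolution in PyTorch; fully learnable group convolution additionally learns which channels belong to which group, which this baseline sketch does not attempt. The channel counts and number of groups are illustrative assumptions.

    import torch
    import torch.nn as nn

    # With groups=4, each group convolves 16 of the 64 input channels into 16 output
    # channels, so the grouped layer needs roughly a quarter of the dense layer's weights.
    conv_dense   = nn.Conv2d(64, 64, kernel_size=3, padding=1)
    conv_grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=4)

    x = torch.randn(1, 64, 32, 32)
    print(conv_dense(x).shape, conv_grouped(x).shape)        # both torch.Size([1, 64, 32, 32])
    print(sum(p.numel() for p in conv_dense.parameters()),   # 36928 parameters
          sum(p.numel() for p in conv_grouped.parameters())) # 9280 parameters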
4

Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (September 21, 2022): 1327. http://dx.doi.org/10.3390/e24101327.

Abstract:
Infrared-visible fusion has great potential in night-vision enhancement for intelligent vehicles. The fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods do not have explicit and effective rules, which leads to the poor contrast and saliency of the target. In this paper, we propose the SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Specifically, the ASG module transfers the semantics of the target and background to the fusion process for target highlighting. The AVP module analyzes the visual features from the global structure and local details of the visible and fusion images and then guides the fusion network to adaptively generate a weight map of signal completion so that the resulting fusion images possess a natural and visible appearance. We construct a joint distribution function between the fusion images and the corresponding semantics and use the discriminator to improve the fusion performance in terms of natural appearance and target saliency. Experimental results demonstrate that our proposed ASG and AVP modules can effectively guide the image-fusion process by selectively preserving the details in visible images and the salient information of targets in infrared images. The SGVPGAN exhibits significant improvements over other fusion methods.
5

Jia, Ruiming (贾瑞明), Tong Li (李彤), Shengjie Liu (刘圣杰), Jiali Cui (崔家礼), and Fei Yuan (袁飞). "Infrared Simulation Based on Cascade Multi-Scale Information Fusion Adversarial Network." Acta Optica Sinica 40, no. 18 (2020): 1810001. http://dx.doi.org/10.3788/aos202040.1810001.

6

Song, Xuhui, Hongtao Yu, Shaomei Li, and Huansha Wang. "Robust Chinese Named Entity Recognition Based on Fusion Graph Embedding." Electronics 12, no. 3 (January 22, 2023): 569. http://dx.doi.org/10.3390/electronics12030569.

Abstract:
Named entity recognition is an important basic task in the field of natural language processing. The current mainstream named entity recognition methods are mainly based on the deep neural network model. The vulnerability of the deep neural network itself leads to a significant decline in the accuracy of named entity recognition when there is adversarial text in the text. In order to improve the robustness of named entity recognition under adversarial conditions, this paper proposes a Chinese named entity recognition model based on fusion graph embedding. Firstly, the model encodes and represents the phonetic and glyph information of the input text through graph learning and integrates above-multimodal knowledge into the model, thus enhancing the robustness of the model. Secondly, we use the Bi-LSTM to further obtain the context information of the text. Finally, conditional random field is used to decode and label entities. The experimental results on OntoNotes4.0, MSRA, Weibo, and Resume datasets show that the F1 values of this model increased by 3.76%, 3.93%, 4.16%, and 6.49%, respectively, in the presence of adversarial text, which verifies the effectiveness of this model.
7

Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (January 11, 2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished with an adversarial game and directed by the unique loss functions. The generator with residual blocks and skip connections can extract deep features of source image pairs and generate an elementary fused image with infrared thermal radiation information and visible texture information, and more details in visible images are added to the final images through the discriminator. It is unnecessary to design the activity level measurements and fusion rules manually, which are now implemented automatically. Also, there are no complicated multi-scale transforms in this method, so the computational cost and complexity can be reduced. Experiment results demonstrate that the proposed method eventually gets desirable images, achieving better performance in objective assessment and visual quality compared with nine representative infrared and visible image fusion methods.
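
The generator described above is built from residual blocks with skip connections. A minimal sketch of such a block in PyTorch follows; the channel count and the absence of normalization layers are assumptions made for illustration, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions with an identity skip connection."""
        def __init__(self, channels=64):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.act(self.conv1(x))
            out = self.conv2(out)
            return self.act(out + x)   # the skip connection helps preserve source details

    # Example: features extracted from a concatenated infrared/visible pair
    feats = torch.randn(1, 64, 128, 128)
    print(ResidualBlock()(feats).shape)   # torch.Size([1, 64, 128, 128])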
8

Tang, Wei, Yu Liu, Chao Zhang, Juan Cheng, Hu Peng, and Xun Chen. "Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks." Computational and Mathematical Methods in Medicine 2019 (December 4, 2019): 1–11. http://dx.doi.org/10.1155/2019/5450373.

Abstract:
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that well extracts the functional information from the GFP image and the structural information from the phase-contrast image at the same time. The target of the discriminator is to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
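
The adversarial game mentioned in this abstract is the standard GAN minimax objective. In generic notation, with G the generator producing a fused image from a GFP/phase-contrast pair (g, p) and D the discriminator comparing fused images against real phase-contrast images x, it can be written as below; the paper's full loss additionally contains content terms that are not shown here.

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{pc}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{(g,\,p) \sim p_{\mathrm{data}}}\big[\log\big(1 - D(G(g, p))\big)\big]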
9

He, Gang, Jiaping Zhong, Jie Lei, Yunsong Li, and Weiying Xie. "Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder." Remote Sensing 11, no. 22 (November 18, 2019): 2691. http://dx.doi.org/10.3390/rs11222691.

Abstract:
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in spectral characteristics of different materials due to sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to competently represent the spatial information of HR HS images, which is more comprehensive and representative. In particular, based on the adversarial autoencoder (AAE) network, the SCAAE network is built with an added spectral constraint in the loss function so that spectral consistency and a higher quality of spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is induced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. By analyzing the results from the experiments executed on the tested data sets through different methods, it can be found that, in CC, SAM, and RMSE, the performance of the proposed algorithm is improved by about 1.42%, 13.12%, and 29.26% respectively on average, which is preferable to the well-performing HySure method. Compared to the MRA-based method, the improvement of the proposed method in the above three indexes is 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66%, respectively, better than the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to the state-of-the-art fusion methods in terms of subjective and objective evaluations.
10

Jin, Zhou-xiang, and Hao Qin. "Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring." 電腦學刊 33, no. 1 (February 2022): 031–41. http://dx.doi.org/10.53106/199115992022023301004.

Abstract:
Deblurring of motion images is part of the field of image restoration. Motion image deblurring is difficult not only because the motion parameters must be estimated but also because complex factors such as noise are involved, which makes the deblurring algorithm harder to design. Image deblurring can be divided into two categories: non-blind image deblurring with a known blur kernel, and blind image deblurring with an unknown blur kernel. Traditional motion image deblurring networks ignore the non-uniformity of motion-blurred images and cannot effectively recover high-frequency details or remove artifacts. In this paper, we propose a new generative adversarial network based on a multi-feature fusion strategy for motion image deblurring. An adaptive residual module composed of a deformable convolution module and a channel attention module is constructed in the generative network. The deformable convolution module learns the shape variables of motion-blurred image features and can dynamically adjust the shape and size of the convolution kernel according to the deformation information of the image, thus improving the ability of the network to adapt to image deformation. The channel attention module adjusts the extracted deformation features to obtain more high-frequency features and enhance the texture details of the restored image. Experimental results on the publicly available GOPRO dataset show that the proposed algorithm improves the peak signal-to-noise ratio (PSNR) and is able to reconstruct high-quality images with rich texture details compared to other motion image deblurring methods.
11

Ma, Xiaole, Zhihai Wang, Shaohai Hu, and Shichao Kan. "Multi-Focus Image Fusion Based on Multi-Scale Generative Adversarial Network." Entropy 24, no. 5 (April 21, 2022): 582. http://dx.doi.org/10.3390/e24050582.

Abstract:
The methods based on the convolutional neural network have demonstrated its powerful information integration ability in image fusion. However, most of the existing methods based on neural networks are only applied to a part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed that makes full use of image features by a combination of multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on the synthetic and Lytro datasets demonstrated the effectiveness and superiority of the proposed MsGAN compared to the state-of-the-art multi-focus image fusion methods.
12

Wang, Min, Congyan Lang, Liqian Liang, Songhe Feng, Tao Wang, and Yutong Gao. "Fine-Grained Semantic Image Synthesis with Object-Attention Generative Adversarial Network." ACM Transactions on Intelligent Systems and Technology 12, no. 5 (October 31, 2021): 1–18. http://dx.doi.org/10.1145/3470008.

Abstract:
Semantic image synthesis is a new rising and challenging vision problem accompanied by the recent promising advances in generative adversarial networks. The existing semantic image synthesis methods only consider the global information provided by the semantic segmentation mask, such as class label, global layout, and location, so the generative models cannot capture the rich local fine-grained information of the images (e.g., object structure, contour, and texture). To address this issue, we adopt a multi-scale feature fusion algorithm to refine the generated images by learning the fine-grained information of the local objects. We propose OA-GAN, a novel object-attention generative adversarial network that allows attention-driven, multi-fusion refinement for fine-grained semantic image synthesis. Specifically, the proposed model first generates multi-scale global image features and local object features, respectively, then the local object features are fused into the global image features to improve the correlation between the local and the global. In the process of feature fusion, the global image features and the local object features are fused through the channel-spatial-wise fusion block to learn ‘what’ and ‘where’ to attend in the channel and spatial axes, respectively. The fused features are used to construct correlation filters to obtain feature response maps to determine the locations, contours, and textures of the objects. Extensive quantitative and qualitative experiments on COCO-Stuff, ADE20K and Cityscapes datasets demonstrate that our OA-GAN significantly outperforms the state-of-the-art methods.
13

Wang, Jingjing, Jinwen Ren, Hongzhen Li, Zengzhao Sun, Zhenye Luan, Zishu Yu, Chunhao Liang, Yashar E. Monfared, Huaqiang Xu, and Qing Hua. "DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion." Photonics 9, no. 3 (March 3, 2022): 150. http://dx.doi.org/10.3390/photonics9030150.

Abstract:
Infrared images can provide clear contrast information to distinguish between the target and the background under any lighting conditions. In contrast, visible images can provide rich texture details and are compatible with the human visual system. The fusion of a visible image and infrared image will thus contain both comprehensive contrast information and texture details. In this study, a novel approach for the fusion of infrared and visible images is proposed based on a dual-discriminator generative adversarial network with a squeeze-and-excitation module (DDGANSE). Our approach establishes confrontation training between one generator and two discriminators. The goal of the generator is to generate images that are similar to the source images, and contain the information from both infrared and visible source images. The purpose of the two discriminators is to increase the similarity between the image generated by the generator and the infrared and visible images. We experimentally demonstrated that using continuous adversarial training, DDGANSE outputs images retain the advantages of both infrared and visible images with significant contrast information and rich texture details. Finally, we compared the performance of our proposed method with previously reported techniques for fusing infrared and visible images using both quantitative and qualitative assessments. Our experiments on the TNO dataset demonstrate that our proposed method shows superior performance compared to other similar reported methods in the literature using various performance metrics.
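
As background for the squeeze-and-excitation module named above, the sketch below is the textbook SE block in PyTorch: global average pooling, a two-layer bottleneck, and channel-wise rescaling. The reduction ratio of 16 is a common default and an assumption here, not necessarily the value used in DDGANSE.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation: reweight channels using globally pooled statistics."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = x.mean(dim=(2, 3))           # squeeze: per-channel global average
            w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
            return x * w                     # rescale each feature map

    print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])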
14

Minahil, Syeda, Jun-Hyung Kim, and Youngbae Hwang. "Patch-Wise Infrared and Visible Image Fusion Using Spatial Adaptive Weights." Applied Sciences 11, no. 19 (October 5, 2021): 9255. http://dx.doi.org/10.3390/app11199255.

Abstract:
In infrared (IR) and visible image fusion, the significant information is extracted from each source image and integrated into a single image with comprehensive data. We observe that the salient regions in the infrared image contain targets of interest. Therefore, we enforce spatial adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based fusion method is proposed for infrared and visible image fusion. Based on an end-to-end network structure with dual discriminators, a patch-wise discrimination is applied to reduce the blurry artifacts of previous image-level approaches. A new loss function is also proposed that uses the constructed weight maps to direct the adversarial training of the GAN so that the informative regions of the infrared images are preserved. Experiments are performed on two datasets and ablation studies are also conducted. The qualitative and quantitative analysis shows that we achieve competitive results compared to the existing fusion methods.
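
To make the idea of spatially adaptive weights concrete, the toy sketch below derives a weight map from the infrared image (here simply by smoothing and normalizing its intensities, an assumption made for illustration rather than the paper's construction) and uses it in a pixel-wise content loss that pushes the fused image toward the infrared image in salient regions and toward the visible image elsewhere.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_weight_map(ir, sigma=5.0):
        """Toy weight map: bright (hot) infrared regions receive larger weights."""
        s = gaussian_filter(ir.astype(np.float64), sigma)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    def weighted_content_loss(fused, ir, vis, w):
        """Weighted pixel loss: follow IR where w is high, the visible image where w is low."""
        return float(np.mean(w * (fused - ir) ** 2 + (1.0 - w) * (fused - vis) ** 2))

    ir, vis = np.random.rand(128, 128), np.random.rand(128, 128)
    fused = 0.5 * (ir + vis)
    print(weighted_content_loss(fused, ir, vis, saliency_weight_map(ir)))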
15

Abdalla, Younis, M. Tariq Iqbal, and Mohamed Shehata. "Copy-Move Forgery Detection and Localization Using a Generative Adversarial Network and Convolutional Neural-Network." Information 10, no. 9 (September 16, 2019): 286. http://dx.doi.org/10.3390/info10090286.

Abstract:
The problem of forged images has become a global phenomenon that is spreading mainly through social media. New technologies have provided both the means and the support for this phenomenon, but they are also enabling a targeted response to overcome it. Deep convolution learning algorithms are one such solution. These have been shown to be highly effective in dealing with image forgery derived from generative adversarial networks (GANs). In this type of algorithm, the image is altered such that it appears identical to the original image and is nearly undetectable to the unaided human eye as a forgery. The present paper investigates copy-move forgery detection using a fusion processing model comprising a deep convolutional model and an adversarial model. Four datasets are used. Our results indicate a significantly high detection accuracy performance (~95%) exhibited by the deep learning CNN and discriminator forgery detectors. Consequently, an end-to-end trainable deep neural network approach to forgery detection appears to be the optimal strategy. The network is developed based on two-branch architecture and a fusion module. The two branches are used to localize and identify copy-move forgery regions through CNN and GAN.
16

Huang, Min, and Jinghan Yin. "Research on Adversarial Domain Adaptation Method and Its Application in Power Load Forecasting." Mathematics 10, no. 18 (September 6, 2022): 3223. http://dx.doi.org/10.3390/math10183223.

Abstract:
Domain adaptation has been used to transfer the knowledge from the source domain to the target domain where training data is insufficient in the target domain; thus, it can overcome the data shortage problem of power load forecasting effectively. Inspired by Generative Adversarial Networks (GANs), adversarial domain adaptation transfers knowledge in adversarial learning. Existing adversarial domain adaptation faces the problems of adversarial disequilibrium and a lack of transferability quantification, which will eventually decrease the prediction accuracy. To address this issue, a novel adversarial domain adaptation method is proposed. Firstly, by analyzing the causes of the adversarial disequilibrium, an initial state fusion strategy is proposed to improve the reliability of the domain discriminator, thus maintaining the adversarial equilibrium. Secondly, domain similarity is calculated to quantify the transferability of source domain samples based on information entropy; through weighting in the process of domain alignment, the knowledge is transferred selectively and the negative transfer is suppressed. Finally, the Building Data Genome Project 2 (BDGP2) dataset is used to validate the proposed method. The experimental results demonstrate that the proposed method can alleviate the problem of adversarial disequilibrium and reasonably quantify the transferability to improve the accuracy of power load forecasting.
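
The abstract states that the transferability of source-domain samples is quantified with information entropy. As a hedged illustration only (the paper's exact weighting scheme is not reproduced here), the snippet below computes the Shannon entropy of a domain discriminator's output for each source sample and normalizes it into a transfer weight, so that samples the discriminator cannot confidently assign to either domain contribute more during alignment.

    import numpy as np

    def bernoulli_entropy(p, eps=1e-12):
        """Entropy of p = P(sample belongs to the target domain)."""
        p = np.clip(p, eps, 1.0 - eps)
        return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

    # Hypothetical domain-discriminator outputs for four source-domain samples.
    d_out = np.array([0.05, 0.35, 0.50, 0.95])
    h = bernoulli_entropy(d_out)
    weights = h / h.sum()   # ambiguous (domain-invariant) samples receive larger weights
    print(weights.round(3))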
17

Fu, Yu, Xiao-Jun Wu, and Tariq Durrani. "Image fusion based on generative adversarial network consistent with perception." Information Fusion 72 (August 2021): 110–25. http://dx.doi.org/10.1016/j.inffus.2021.02.019.

18

Ma, Jiayi, Pengwei Liang, Wei Yu, Chen Chen, Xiaojie Guo, Jia Wu, and Junjun Jiang. "Infrared and visible image fusion via detail preserving adversarial learning." Information Fusion 54 (February 2020): 85–98. http://dx.doi.org/10.1016/j.inffus.2019.07.005.

19

Nandhini Abirami, R., P. M. Durai Raj Vincent, Kathiravan Srinivasan, K. Suresh Manic, and Chuan-Yu Chang. "Multimodal Medical Image Fusion of Positron Emission Tomography and Magnetic Resonance Imaging Using Generative Adversarial Networks." Behavioural Neurology 2022 (April 14, 2022): 1–12. http://dx.doi.org/10.1155/2022/6878783.

Abstract:
Multimodal medical image fusion is a technique applied in the medical field to combine images from the same modality or different modalities in order to improve the visual content of the image for further operations such as image segmentation. Biomedical research and medical image analysis highly demand medical image fusion to perform a higher level of medical analysis. Multimodal medical fusion assists medical practitioners in visualizing the internal organs and tissues. Multimodal medical fusion of brain images helps medical practitioners to simultaneously visualize hard portions such as the skull and soft portions such as tissue. Brain tumor segmentation can be accurately performed by utilizing the image obtained after multimodal medical image fusion. The area of the tumor can be accurately located with the information obtained from both Positron Emission Tomography and Magnetic Resonance Imaging in a single fused image. This approach increases the accuracy in diagnosing the tumor and reduces the time consumed in diagnosing and locating it. The functional information of the brain is available in the Positron Emission Tomography image, while the anatomy of the brain tissue is available in the Magnetic Resonance Image. Thus, the spatial characteristics and functional information can be obtained from a single image using a robust multimodal medical image fusion model. The proposed approach uses a generative adversarial network to fuse Positron Emission Tomography and Magnetic Resonance Images into a single image. The results obtained from the proposed approach can be used for further medical analysis to locate the tumor and plan for further surgical procedures. The performance of the GAN-based model is evaluated using two metrics, namely, structural similarity index and mutual information. The proposed approach achieved a structural similarity index of 0.8551 and a mutual information of 2.8059.
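
Both reported metrics are standard. The snippet below shows how mutual information between a fused image and a source image is typically estimated from their joint histogram (the 64-bin discretization is an arbitrary assumption); SSIM can be computed analogously, for example with skimage.metrics.structural_similarity. This reproduces the kind of calculation behind figures such as the 2.8059 quoted above, not the authors' exact evaluation code.

    import numpy as np

    def mutual_information(a, b, bins=64):
        """Mutual information between two images, estimated from a joint histogram."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()                      # joint probability table
        px = pxy.sum(axis=1, keepdims=True)          # marginal of image a
        py = pxy.sum(axis=0, keepdims=True)          # marginal of image b
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    pet, mri = np.random.rand(256, 256), np.random.rand(256, 256)
    fused = 0.5 * (pet + mri)
    print(mutual_information(fused, mri))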
20

Zhao, Yuqing, Guangyuan Fu, Hongqiao Wang, and Shaolei Zhang. "The Fusion of Unmatched Infrared and Visible Images Based on Generative Adversarial Networks." Mathematical Problems in Engineering 2020 (March 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/3739040.

Abstract:
Visible images contain clear texture information and high spatial resolution but are unreliable under nighttime or ambient occlusion conditions. Infrared images can display target thermal radiation information under day, night, alternative weather, and ambient occlusion conditions. However, infrared images often lack good contour and texture information. Therefore, an increasing number of researchers are fusing visible and infrared images to obtain more information from them, which requires two completely matched images. However, it is difficult to obtain perfectly matched visible and infrared images in practice. In view of the above issues, we propose a new network model based on generative adversarial networks (GANs) to fuse unmatched infrared and visible images. Our method generates the corresponding infrared image from a visible image and fuses the two images together to obtain more information. The effectiveness of the proposed method is verified qualitatively and quantitatively through experimentation on public datasets. In addition, the generated fused images of the proposed method contain more abundant texture and thermal radiation information than other methods.
21

Zhao, Rui, Hengyu Li, Jingyi Liu, Huayan Pu, Shaorong Xie, and Jun Luo. "A video inpainting method for unmanned vehicle based on fusion of time series optical flow information and spatial information." International Journal of Advanced Robotic Systems 18, no. 5 (September 1, 2021): 172988142110531. http://dx.doi.org/10.1177/17298814211053103.

Abstract:
In this article, the problem of video inpainting is addressed by combining multiview spatial information with interframe information between video sequences. A vision system is an important way for autonomous vehicles to obtain information about the external environment. Loss or distortion of visual images caused by camera damage or contamination seriously impacts the vision system's ability to correctly perceive and understand the external environment. We solve the image restoration problem by combining the optical flow information between frames in the video with spatial information from multiple perspectives. To handle noise in single video frames, we propose a complete two-stage video repair method that combines the spatial information of images from different perspectives with the optical flow information of the video sequence to assist and constrain the repair of damaged images in the video. The method combines the interframe information of the preceding and following frames with the multiview image information in the video and performs video repair based on optical flow and a conditional generative adversarial network: it regards video inpainting as a pixel propagation problem, uses the interframe information in the video for inpainting, and introduces multiview information to assist the repair. The method was trained and tested on a dataset recorded in Zurich by a pair of cameras mounted on a mobile platform.
22

Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks." Remote Sensing 11, no. 5 (February 27, 2019): 487. http://dx.doi.org/10.3390/rs11050487.

Abstract:
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised learning mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. Experiments explored different fusion opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (Kitti2015 dataset and Trimbot2020 Garden dataset).
23

Lu, Ting, Kexin Ding, Wei Fu, Shutao Li, and Anjing Guo. "Coupled adversarial learning for fusion classification of hyperspectral and LiDAR data." Information Fusion 93 (May 2023): 118–31. http://dx.doi.org/10.1016/j.inffus.2022.12.020.

24

Fu, Weiyu, and Lixia Wang. "Component-Based Software Testing Method Based on Deep Adversarial Network." Security and Communication Networks 2022 (October 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/4231083.

Abstract:
With the continuous updating and application of software, the problems in current software are becoming more and more serious. Addressing this phenomenon, the application and testing methods of component-based software based on deep adversarial networks are discussed. The experiments show that: (1) some of the software has a high fusion rate, reaching an astonishing 95% adaptability; the instability and untapped potential of component-based software are addressed through the GAN and grey evaluation, and the evaluation system dispels users' distrust. (2) According to the data in the graphs and tables, the deep learning adversarial network overcomes the vulnerability and closedness of a general network, and the built-in test method, whose experimental data reach an average accuracy rate of 90%, is the best test method for this system. With the deep learning adversarial network, the average test level of component-based software reaches level 7, which shows that the emerging component-based software industry still has a long way to go.
25

Zhang, Jinsong, Haiyan Chen, and Zhiliang Wang. "Droplet Image Reconstruction Based on Generative Adversarial Network." Journal of Physics: Conference Series 2216, no. 1 (March 1, 2022): 012096. http://dx.doi.org/10.1088/1742-6596/2216/1/012096.

Abstract:
Abstract In the digital microfluidic experiments, the improper adjustments of the camera focus and background illumination lead to the phenomena of low illumination and blurred edges in the droplet image, which seriously interferes with information acquisition. Removing these blurred factors is an essential pretreatment step before information extraction. In this paper, a generative adversarial network model combining multi-scale convolution and attention mechanism is proposed to reconstruct the droplet image. The feature reconstruction module in generator can reconstruct the image feature maps from multiple scales. The fusion module is used to fuse the multi-scale feature maps into a reconstructed sharp image. The new model was trained on the data set which was made by the Style Transfer. Experimental results show that the proposed model can significantly improve the visual quality of images, effectively reduce the blur and improve the background illumination.
26

Yang, Yuanbo, Qunbo Lv, Baoyu Zhu, Xuefu Sui, Yu Zhang, and Zheng Tan. "One-Sided Unsupervised Image Dehazing Network Based on Feature Fusion and Multi-Scale Skip Connection." Applied Sciences 12, no. 23 (December 2, 2022): 12366. http://dx.doi.org/10.3390/app122312366.

Abstract:
Haze and mist caused by air quality, weather, and other factors can reduce the clarity and contrast of images captured by cameras, which limits the applications of automatic driving, satellite remote sensing, traffic monitoring, etc. Therefore, the study of image dehazing is of great significance. Most existing unsupervised image-dehazing algorithms rely on a priori knowledge and simplified atmospheric scattering models, but the physical causes of haze in the real world are complex, resulting in inaccurate atmospheric scattering models that affect the dehazing effect. Unsupervised generative adversarial networks can be used for image-dehazing algorithm research; however, due to the information inequality between haze and haze-free images, the existing bi-directional mapping domain translation model often used in unsupervised generative adversarial networks is not suitable for image-dehazing tasks, and it also does not make good use of extracted features, which results in distortion, loss of image details, and poor retention of image features in the haze-free images. To address these problems, this paper proposes an end-to-end one-sided unsupervised image-dehazing network based on a generative adversarial network that directly learns the mapping between haze and haze-free images. The proposed feature-fusion module and multi-scale skip connection based on residual network consider the loss of feature information caused by convolution operation and the fusion of different scale features, and achieve adaptive fusion between low-level features and high-level features, to better preserve the features of the original image. Meanwhile, multiple loss functions are used to train the network, where the adversarial loss ensures that the network generates more realistic images and the contrastive loss ensures a meaningful one-sided mapping from the haze image to the haze-free image, resulting in haze-free images with good quantitative metrics and visual effects. The experiments demonstrate that, compared with existing dehazing algorithms, our method achieved better quantitative metrics and better visual effects on both synthetic haze image datasets and real-world haze image datasets.
27

Zhang, Jiahuan, Keisuke Maeda, Takahiro Ogawa, and Miki Haseyama. "Regularization Meets Enhanced Multi-Stage Fusion Features: Making CNN More Robust against White-Box Adversarial Attacks." Sensors 22, no. 14 (July 20, 2022): 5431. http://dx.doi.org/10.3390/s22145431.

Abstract:
Regularization has become an important method in adversarial defense. However, the existing regularization-based defense methods do not discuss which features in convolutional neural networks (CNN) are more suitable for regularization. Thus, in this paper, we propose a multi-stage feature fusion network with a feature regularization operation, which is called Enhanced Multi-Stage Feature Fusion Network (EMSF2Net). EMSF2Net mainly combines three parts: multi-stage feature enhancement (MSFE), multi-stage feature fusion (MSF2), and regularization. Specifically, MSFE aims to obtain enhanced and expressive features in each stage by multiplying the features of each channel; MSF2 aims to fuse the enhanced features of different stages to further enrich the information of the feature, and the regularization part can regularize the fused and original features during the training process. EMSF2Net has proved that if the regularization term of the enhanced multi-stage feature is added, the adversarial robustness of CNN will be significantly improved. The experimental results on extensive white-box attacks on the CIFAR-10 dataset illustrate the robustness and effectiveness of the proposed method.
28

Fang, Jing, Xiaole Ma, Jingjing Wang, Kai Qin, Shaohai Hu, and Yuefeng Zhao. "A Noisy SAR Image Fusion Method Based on NLM and GAN." Entropy 23, no. 4 (March 30, 2021): 410. http://dx.doi.org/10.3390/e23040410.

Abstract:
The unavoidable noise often present in synthetic aperture radar (SAR) images, such as speckle noise, negatively impacts the subsequent processing of SAR images. Further, it is not easy to find an appropriate application for SAR images, given that the human visual system is sensitive to color and SAR images are gray. As a result, a noisy SAR image fusion method based on nonlocal matching and generative adversarial networks is presented in this paper. A nonlocal matching method is applied to processing source images into similar block groups in the pre-processing step. Then, adversarial networks are employed to generate a final noise-free fused SAR image block, where the generator aims to generate a noise-free SAR image block with color information, and the discriminator tries to increase the spatial resolution of the generated image block. This step ensures that the fused image block contains high resolution and color information at the same time. Finally, a fused image can be obtained by aggregating all the image blocks. By extensive comparative experiments on the SEN1–2 datasets and source images, it can be found that the proposed method not only has better fusion results but is also robust to image noise, indicating the superiority of the proposed noisy SAR image fusion method over the state-of-the-art methods.
29

Zhang, Liping, Weisheng Li, Hefeng Huang, and Dajiang Lei. "A Pansharpening Generative Adversarial Network with Multilevel Structure Enhancement and a Multistream Fusion Architecture." Remote Sensing 13, no. 12 (June 21, 2021): 2423. http://dx.doi.org/10.3390/rs13122423.

Abstract:
Deep learning has been widely used in various computer vision tasks. As a result, researchers have begun to explore the application of deep learning to pansharpening and have achieved remarkable results. However, most current pansharpening methods focus only on the mapping relationship between images, lack overall structure enhancement, and do not fully and completely investigate optimization goals and fusion rules. To address these problems, we propose a pansharpening generative adversarial network with multilevel structure enhancement and a multistream fusion architecture. This method first uses multilevel gradient operators to obtain the structural information of the high-resolution panchromatic image. Then, it combines the spectral features with multilevel gradient information and inputs them into two subnetworks of the generator for fusion training. We design a comprehensive optimization goal for the generator, which not only minimizes the gap between the fused image and the real image but also considers the adversarial loss between the generator and the discriminator and the multilevel structure loss between the fused image and the panchromatic image. It is worth mentioning that we comprehensively consider the spectral information and the multilevel structure as the input of the discriminator, which makes it easier for the discriminator to distinguish real and fake images. Experiments show that our proposed method is superior to state-of-the-art methods in both the subjective visual and objective assessments of fused images, especially in road and building areas.
30

Zhu, Baoyu, Qunbo Lv, and Zheng Tan. "Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data." Drones 7, no. 2 (January 30, 2023): 96. http://dx.doi.org/10.3390/drones7020096.

Abstract:
Drone and aerial remote sensing images are widely used, but their imaging environment is complex and prone to image blurring. Existing CNN deblurring algorithms usually use multi-scale fusion to extract features in order to make full use of aerial remote sensing blurred image information, but images with different degrees of blurring use the same weights, leading to increasing errors in the feature fusion process layer by layer. Based on the physical properties of image blurring, this paper proposes an adaptive multi-scale fusion blind deblurred generative adversarial network (AMD-GAN), which innovatively applies the degree of image blurring to guide the adjustment of the weights of multi-scale fusion, effectively suppressing the errors in the multi-scale fusion process and enhancing the interpretability of the feature layer. The research work in this paper reveals the necessity and effectiveness of a priori information on image blurring levels in image deblurring tasks. By studying and exploring the image blurring levels, the network model focuses more on the basic physical features of image blurring. Meanwhile, this paper proposes an image blurring degree description model, which can effectively represent the blurring degree of aerial remote sensing images. The comparison experiments show that the algorithm in this paper can effectively recover images with different degrees of blur, obtain high-quality images with clear texture details, outperform the comparison algorithm in both qualitative and quantitative evaluation, and can effectively improve the object detection performance of blurred aerial remote sensing images. Moreover, the average PSNR of this paper’s algorithm tested on the publicly available dataset RealBlur-R reached 41.02 dB, surpassing the latest SOTA algorithm.
31

Ma, Shuai, Jianfeng Cui, Weidong Xiao, and Lijuan Liu. "Deep Learning-Based Data Augmentation and Model Fusion for Automatic Arrhythmia Identification and Classification Algorithms." Computational Intelligence and Neuroscience 2022 (August 11, 2022): 1–17. http://dx.doi.org/10.1155/2022/1577778.

Abstract:
Automated ECG-based arrhythmia detection is critical for early cardiac disease prevention and diagnosis. Recently, deep learning algorithms have been widely applied for arrhythmia detection with great success. However, the lack of labeled ECG data and low classification accuracy can have a significant impact on the overall effectiveness of a classification algorithm. In order to better apply deep learning methods to arrhythmia classification, in this study, feature extraction and classification strategy based on generative adversarial network data augmentation and model fusion are proposed to address these problems. First, the arrhythmia sparse data is augmented by generative adversarial networks. Then, aiming at the identification of different types of arrhythmias in long-term ECG, a spatial information fusion model based on ResNet and a temporal information fusion model based on BiLSTM are proposed. The model effectively fuses the location information of the nearest neighbors through the local feature extraction part of the generated ECG feature map and obtains the correlation of the global features by autonomous learning in multiple spaces through the BiLSTM network in the part of the global feature extraction. In addition, an attention mechanism is introduced to enhance the features of arrhythmia-type signal segments, and this mechanism can effectively focus on the extraction of key information to form a feature vector for final classification. Finally, it is validated by the enhanced MIT-BIH arrhythmia database. The experimental results demonstrate that the proposed classification technique enhances arrhythmia diagnostic accuracy by 99.4%, and the algorithm has high recognition performance and clinical value.
32

Liu, Yu, Yu Shi, Fuhao Mu, Juan Cheng, and Xun Chen. "Glioma Segmentation-Oriented Multi-Modal MR Image Fusion With Adversarial Learning." IEEE/CAA Journal of Automatica Sinica 9, no. 8 (August 2022): 1528–31. http://dx.doi.org/10.1109/jas.2022.105770.

33

Gu, Yansong, Xinya Wang, Can Zhang, and Baiyang Li. "Advanced Driving Assistance Based on the Fusion of Infrared and Visible Images." Entropy 23, no. 2 (February 19, 2021): 239. http://dx.doi.org/10.3390/e23020239.

Abstract:
Obtaining key and rich visual information under sophisticated road conditions is one of the key requirements for advanced driving assistance. In this paper, a newfangled end-to-end model is proposed for advanced driving assistance based on the fusion of infrared and visible images, termed as FusionADA. In our model, we are committed to extracting and fusing the optimal texture details and salient thermal targets from the source images. To achieve this goal, our model constitutes an adversarial framework between the generator and the discriminator. Specifically, the generator aims to generate a fused image with basic intensity information together with the optimal texture details from source images, while the discriminator aims to force the fused image to restore the salient thermal targets from the source infrared image. In addition, our FusionADA is a fully end-to-end model, solving the issues of manually designing complicated activity level measurements and fusion rules existing in traditional methods. Qualitative and quantitative experiments on publicly available datasets RoadScene and TNO demonstrate the superiority of our FusionADA over the state-of-the-art approaches.
34

Chen, Lei, Jun Han, and Feng Tian. "Infrared and visible image fusion using two-layer generative adversarial network." Journal of Intelligent & Fuzzy Systems 40, no. 6 (June 21, 2021): 11897–913. http://dx.doi.org/10.3233/jifs-210041.

Abstract:
Infrared (IR) images can distinguish targets from their backgrounds based on differences in thermal radiation, whereas visible images can provide texture details with high spatial resolution. The fusion of IR and visible images has many advantages and can be applied to applications such as target detection and recognition. This paper proposes a two-layer generative adversarial network (GAN) to fuse these two types of images. In the first layer, the network generates fused images using two GANs: one uses the IR image as input and the visible image as ground truth, and the other uses the visible image as input and the IR image as ground truth. In the second layer, the network passes one of the two fused images generated in the first layer as input and the other as ground truth to a GAN to generate the final fused image. We adopt the TNO and INO datasets to verify our method and compare it with ten other methods using eight objective evaluation parameters. It is demonstrated that our method achieves better performance than the state of the art in preserving both texture details and thermal information.
35

Huang, Ningbo, Gang Zhou, Mengli Zhang, Meng Zhang, and Ze Yu. "Modelling the Latent Semantics of Diffusion Sources in Information Cascade Prediction." Computational Intelligence and Neuroscience 2021 (September 29, 2021): 1–12. http://dx.doi.org/10.1155/2021/7880215.

Abstract:
Predicting the information spread tendency can help products recommendation and public opinion management. The existing information cascade prediction models are devoted to extract the chronological features from diffusion sequences but treat the diffusion sources as ordinary users. Diffusion source, the first user in the information cascade, can indicate the latent topic and diffusion pattern of an information item to mine user potential common interests, which facilitates information cascade prediction. In this paper, for modelling the abundant implicit semantics of diffusion sources in information cascade prediction, we propose a Diffusion Source latent Semantics-Fused cascade prediction framework, named DSSF. Specifically, we firstly apply diffusion sources embedding to model the special role of the source users. To learn the latent interaction between users and diffusion sources, we proposed a co-attention-based fusion gate which fuses the diffusion sources’ latent semantics with user embedding. To address the challenge that the distribution of diffusion sources is long-tailed, we develop an adversarial training framework to transfer the semantics knowledge from head to tail sources. Finally, we conduct experiments on real-world datasets, and the results show that modelling the diffusion sources can significantly improve the prediction performance. Besides, this improvement is limited for the cascades from tail sources, and the adversarial framework can help.
36

Yang, Xiuzhu, Xinyue Zhang, Yi Ding, and Lin Zhang. "Indoor Activity and Vital Sign Monitoring for Moving People with Multiple Radar Data Fusion." Remote Sensing 13, no. 18 (September 21, 2021): 3791. http://dx.doi.org/10.3390/rs13183791.

Abstract:
The monitoring of human activity and vital signs plays a significant role in remote health-care. Radar provides a non-contact monitoring approach without privacy and illumination concerns. However, multiple people in a narrow indoor environment bring dense multipaths for activity monitoring, and the received vital sign signals are heavily distorted with body movements. This paper proposes a framework based on Frequency Modulated Continuous Wave (FMCW) and Impulse Radio Ultra-Wideband (IR-UWB) radars to address these challenges, designing intelligent spatial-temporal information fusion for activity and vital sign monitoring. First, a local binary pattern (LBP) and energy features are extracted from FMCW radar, combined with the wavelet packet transform (WPT) features on IR-UWB radar for activity monitoring. Then the additional information guided fusing network (A-FuseNet) is proposed with a modified generative and adversarial structure for vital sign monitoring. A Cascaded Convolutional Neural Network (CCNN) module and a Long Short Term Memory (LSTM) module are designed as the fusion sub-network for vital sign information extraction and multisensory data fusion, while a discrimination sub-network is constructed to optimize the fused heartbeat signal. In addition, the activity and movement characteristics are introduced as additional information to guide the fusion and optimization. A multi-radar dataset with an FMCW and two IR-UWB radars in a cotton tent, a small room and a wide lobby is constructed, and the accuracies of activity and vital sign monitoring achieve 99.9% and 92.3% respectively. Experimental results demonstrate the superiority and robustness of the proposed framework.
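
For reference, the local binary pattern feature mentioned above can be computed as in the short sketch below, using the basic 8-neighbour, radius-1 definition on a 2-D map; the exact LBP variant, radius, and input representation used in the paper are not specified in the abstract, so this is only the textbook form.

    import numpy as np

    def lbp_basic(img):
        """Basic 3x3 local binary pattern: each pixel is encoded by comparing
        its eight neighbours against the centre value."""
        img = np.asarray(img, dtype=np.float64)
        c = img[1:-1, 1:-1]
        # neighbour offsets ordered clockwise, starting at the top-left corner
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros(c.shape, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            code += (nb >= c).astype(np.int32) << bit
        return code

    tf_map = np.random.rand(64, 64)   # e.g. a time-frequency map derived from the FMCW radar
    hist = np.bincount(lbp_basic(tf_map).ravel(), minlength=256)   # 256-bin LBP histogram
    print(hist[:8])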
37

Chen, Xianglong, Haipeng Wang, Yaohui Liang, Ying Meng, and Shifeng Wang. "A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network." Sensors 22, no. 1 (December 31, 2021): 304. http://dx.doi.org/10.3390/s22010304.

Abstract:
The presence of fake pictures affects the reliability of visible face images under specific circumstances. This paper presents a novel adversarial neural network named FTSGAN for infrared and visible image fusion, and we utilize the FTSGAN model to fuse the face image features of infrared and visible images to improve face recognition. In the FTSGAN model design, the Frobenius norm (F), total variation norm (TV), and structural similarity index measure (SSIM) are employed. The F and TV are used to limit the gray level and the gradient of the image, while the SSIM is used to limit the image structure. The FTSGAN fuses infrared and visible face images that contain bio-information for heterogeneous face recognition tasks. Experiments based on the FTSGAN using hundreds of face images demonstrate its excellent performance. Principal component analysis (PCA) and linear discriminant analysis (LDA) are involved in the face recognition stage. The face recognition performance after fusion improved by 1.9% compared to that before fusion, and the final face recognition rate was 94.4%. The proposed method has better quality, a faster rate, and is more robust than methods that only use visible images for face recognition.
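
The three quantities named in this abstract have simple textbook forms, shown below for a fused/reference image pair scaled to [0, 1]: the Frobenius norm constrains the overall gray level, total variation constrains the gradients, and SSIM constrains the structure. The snippet illustrates the individual terms only; how FTSGAN weights and combines them is not reproduced here.

    import numpy as np

    def frobenius(a, b):
        """Frobenius norm of the difference: overall gray-level agreement."""
        return float(np.linalg.norm(a - b, ord='fro'))

    def total_variation(a):
        """Anisotropic total variation: penalizes large image gradients."""
        return float(np.abs(np.diff(a, axis=0)).sum() + np.abs(np.diff(a, axis=1)).sum())

    def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
        """Single-window (global) SSIM for images scaled to [0, 1]."""
        mu_a, mu_b = a.mean(), b.mean()
        va, vb = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        return float((2 * mu_a * mu_b + c1) * (2 * cov + c2)
                     / ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

    ir = np.random.rand(64, 64)
    fused = 0.8 * ir + 0.2 * np.random.rand(64, 64)
    print(frobenius(fused, ir), total_variation(fused), ssim_global(fused, ir))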
38

Yin, Jian, Zhibo Zhou, Shaohua Xu, Ruiping Yang, and Kun Liu. "A Generative Adversarial Network Fused with Dual-Attention Mechanism and Its Application in Multitarget Image Fine Segmentation." Computational Intelligence and Neuroscience 2021 (December 18, 2021): 1–16. http://dx.doi.org/10.1155/2021/2464648.

Abstract:
To address insignificant target morphological features, inaccurate detection and unclear boundaries of small-target regions, and overlapping boundaries between targets in complex multitarget image segmentation, a generative adversarial network fused with an attention mechanism (AM-GAN) is proposed, combining the image segmentation mechanism of generative adversarial networks with the feature enhancement of nonlocal attention. The generative network consists of a residual network and a nonlocal attention module; it uses the feature extraction and multiscale fusion of the residual network, together with the feature enhancement and global information fusion ability of nonlocal spatial-channel dual attention, to enhance the target features in the detection area and improve the continuity and clarity of the segmentation boundary. The adversarial network is a fully convolutional network that penalizes the loss of information in small-target regions by judging the authenticity of the predicted and label segmentations, improving the model's ability to detect small targets and the accuracy of multitarget segmentation. AM-GAN exploits the GAN's inherent ability to reconstruct and repair high-resolution images, together with the global receptive field of nonlocal attention that strengthens detail features, to automatically learn to focus on target structures of different shapes and sizes, highlight salient features useful for specific tasks, reduce the loss of image detail, improve the accuracy of small-target detection, and refine the segmentation boundaries of multiple targets. Taking medical MRI abdominal image segmentation as a verification experiment, targets such as the liver, left/right kidney, and spleen are selected for segmentation and abnormal tissue detection. On small and unbalanced sample datasets, the class pixel accuracy reaches 87.37%, the intersection over union is 92.42%, and the average Dice coefficient is 93%. Compared with the other methods in the experiment, segmentation precision and accuracy are greatly improved, showing that the proposed method is well suited to typical multitarget image segmentation problems such as small-target feature detection, boundary overlap, and offset deformation.
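A non-local spatial attention block of the kind named in this abstract is commonly implemented as embedded-Gaussian self-attention over all spatial positions. The sketch below assumes that standard form and an arbitrary channel-reduction factor, so it should be read as an illustration rather than the AM-GAN module itself.

# Sketch of a non-local (spatial self-attention) block of the kind AM-GAN
# embeds in its generator. The embedded-Gaussian form and channel reduction
# factor are common choices assumed here, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock2d(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = max(channels // reduction, 1)
        self.theta = nn.Conv2d(channels, inter, 1)   # query
        self.phi = nn.Conv2d(channels, inter, 1)     # key
        self.g = nn.Conv2d(channels, inter, 1)       # value
        self.out = nn.Conv2d(inter, channels, 1)     # project back, residual add

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, inter)
        k = self.phi(x).flatten(2)                     # (b, inter, hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, inter)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw): each position attends to all others
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection keeps local features

feat = torch.randn(1, 32, 24, 24)                      # toy feature map from the residual backbone
print(NonLocalBlock2d(32)(feat).shape)                 # torch.Size([1, 32, 24, 24])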
39

Liu, Shangwang, and Lihan Yang. "BPDGAN: A GAN-Based Unsupervised Back Project Dense Network for Multi-Modal Medical Image Fusion." Entropy 24, no. 12 (December 14, 2022): 1823. http://dx.doi.org/10.3390/e24121823.

Abstract:
Single-modality medical images often do not contain enough valid information to meet the requirements of clinical diagnosis, and diagnostic efficiency is limited when multiple images must be examined at the same time. Image fusion is a technique that combines functional modalities such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) with anatomical modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) to supply complementary information. Fusing two anatomical images (such as CT and MRI) is also often required in place of a single MRI, and the fused images can improve the efficiency and accuracy of clinical diagnosis. To achieve high-quality, high-resolution and detail-rich fusion without artificial priors, an unsupervised deep learning image fusion framework, the back project dense generative adversarial network (BPDGAN), is proposed in this paper. In particular, a novel network is constructed based on the back project dense block (BPDB) and the convolutional block attention module (CBAM). The BPDB effectively mitigates the impact of black backgrounds on image content, while the CBAM improves the performance of BPDGAN on texture and edge information. Qualitative and quantitative experiments demonstrate the superiority of BPDGAN: in terms of quantitative metrics, it outperforms state-of-the-art comparisons by approximately 19.58%, 14.84%, 10.40% and 86.78% on the AG, EI, Qabf and Qcv metrics, respectively.
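The convolutional block attention module (CBAM) mentioned here applies channel attention followed by spatial attention. The sketch below uses the usual default reduction ratio and 7×7 spatial kernel as assumptions, since the paper's own settings are not given in the abstract.

# Minimal sketch of a Convolutional Block Attention Module (CBAM) of the kind
# BPDGAN uses; the reduction ratio and 7x7 spatial kernel are common defaults
# assumed here rather than settings reported in the paper.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=8, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: pool spatially, then weight each channel.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool over channels, then weight each location.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))

feat = torch.randn(2, 32, 16, 16)
print(CBAM(32)(feat).shape)        # torch.Size([2, 32, 16, 16])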
40

Dong, Yu, Yihao Liu, He Zhang, Shifeng Chen, and Yu Qiao. "FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10729–36. http://dx.doi.org/10.1609/aaai.v34i07.6701.

Abstract:
Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attracted much research attention. Most existing learning-based dehazing methods are not fully end-to-end and still follow the traditional dehazing procedure: first estimate the medium transmission and the atmospheric light, then recover the haze-free image based on the atmospheric scattering model. In practice, however, due to the lack of priors and constraints, it is hard to estimate these intermediate parameters precisely. Inaccurate estimation further degrades dehazing performance, resulting in artifacts, color distortion and insufficient haze removal. To address this, we propose a fully end-to-end Generative Adversarial Network with Fusion-discriminator (FD-GAN) for image dehazing. With the proposed Fusion-discriminator, which takes frequency information as additional priors, our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts. Moreover, we synthesize a large-scale training dataset including various indoor and outdoor hazy images to boost performance, and we show that for learning-based dehazing methods the performance is strongly influenced by the training data. Experiments show that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images, with more visually pleasing dehazed results.
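A fusion-discriminator that receives frequency information as an additional prior can be approximated by feeding the discriminator the image together with a low-frequency (blurred) copy and a high-frequency (residual) copy. The Gaussian split and the channel sizes in the sketch below are assumptions for illustration, not FD-GAN's published design.

# Sketch of a fusion-discriminator that sees frequency information as extra
# priors: the image is concatenated with its low- and high-frequency components
# before being scored. The Gaussian split and channel sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def frequency_split(img, kernel_size=11, sigma=3.0):
    """Separate an image into low- and high-frequency parts with a Gaussian blur."""
    coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).repeat(img.size(1), 1, 1, 1).to(img.device)
    low = F.conv2d(img, kernel, padding=kernel_size // 2, groups=img.size(1))
    return low, img - low

class FusionDiscriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(                      # consumes image + low + high = 3x channels
            nn.Conv2d(in_ch * 3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # PatchGAN-style realness map
        )

    def forward(self, img):
        low, high = frequency_split(img)
        return self.net(torch.cat([img, low, high], dim=1))

dehazed = torch.rand(1, 3, 64, 64)
print(FusionDiscriminator()(dehazed).shape)            # torch.Size([1, 1, 15, 15])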
41

Dutta, Anjan, and Zeynep Akata. "Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-Based Image Retrieval." International Journal of Computer Vision 128, no. 10-11 (July 29, 2020): 2684–703. http://dx.doi.org/10.1007/s11263-020-01350-x.

Abstract:
Low-shot sketch-based image retrieval is an emerging task in computer vision that retrieves natural images relevant to hand-drawn sketch queries rarely seen during the training phase. Related prior works either require aligned sketch-image pairs that are costly to obtain or rely on an inefficient memory fusion layer for mapping the visual information to a semantic space. In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks, and introduce the few-shot setting for SBIR. For solving these tasks, we propose a semantically aligned paired cycle-consistent generative adversarial network (SEM-PCYC) for any-shot SBIR, where each branch of the generative adversarial network maps the visual information from sketch and image to a common semantic space via adversarial training. Each of these branches maintains a cycle consistency that only requires supervision at the category level and avoids the need for aligned sketch-image pairs. A classification criterion on the generators' outputs ensures that the visual-to-semantic space mapping is class-specific. Furthermore, we propose to combine textual and hierarchical side information via an auto-encoder that selects discriminating side information within the same end-to-end model. Our results demonstrate a significant boost in any-shot SBIR performance over the state-of-the-art on the extended versions of the challenging Sketchy, TU-Berlin and QuickDraw datasets.
42

Vizil’ter, Yu V., O. V. Vygolov, D. V. Komarov, and M. A. Lebedev. "Fusion of Images of Different Spectra Based on Generative Adversarial Networks." Journal of Computer and Systems Sciences International 58, no. 3 (May 2019): 441–53. http://dx.doi.org/10.1134/s1064230719030201.

43

Gao, Jianhao, Qiangqiang Yuan, Jie Li, Hai Zhang, and Xin Su. "Cloud Removal with Fusion of High Resolution Optical and SAR Images Using Generative Adversarial Networks." Remote Sensing 12, no. 1 (January 5, 2020): 191. http://dx.doi.org/10.3390/rs12010191.

Abstract:
The existence of clouds is one of the main factors that contributes to missing information in optical remote sensing images, restricting their further applications for Earth observation, so how to reconstruct the missing information caused by clouds is of great concern. Inspired by the image-to-image translation work based on convolutional neural network model and the heterogeneous information fusion thought, we propose a novel cloud removal method in this paper. The approach can be roughly divided into two steps: in the first step, a specially designed convolutional neural network (CNN) translates the synthetic aperture radar (SAR) images into simulated optical images in an object-to-object manner; in the second step, the simulated optical image, together with the SAR image and the optical image corrupted by clouds, is fused to reconstruct the corrupted area by a generative adversarial network (GAN) with a particular loss function. Between the first step and the second step, the contrast and luminance of the simulated optical image are randomly altered to make the model more robust. Two simulation experiments and one real-data experiment are conducted to confirm the effectiveness of the proposed method on Sentinel 1/2, GF 2/3 and airborne SAR/optical data. The results demonstrate that the proposed method outperforms state-of-the-art algorithms that also employ SAR images as auxiliary data.
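The robustness step mentioned between the two stages — randomly altering the contrast and luminance of the simulated optical image before it enters the fusion GAN — might look roughly like the following; the jitter ranges are assumptions chosen for the example.

# Sketch of the robustness step described above: randomly perturbing the
# contrast and luminance of the simulated optical image before step two.
# The jitter ranges are illustrative assumptions.
import torch

def jitter_contrast_luminance(sim_optical, contrast_range=(0.7, 1.3),
                              luminance_range=(-0.1, 0.1)):
    """sim_optical: (batch, channels, H, W) tensor with values in [0, 1]."""
    b = sim_optical.size(0)
    c = torch.empty(b, 1, 1, 1).uniform_(*contrast_range)    # per-sample contrast factor
    l = torch.empty(b, 1, 1, 1).uniform_(*luminance_range)   # per-sample brightness shift
    mean = sim_optical.mean(dim=(2, 3), keepdim=True)        # scale around the image mean
    return ((sim_optical - mean) * c + mean + l).clamp(0.0, 1.0)

simulated = torch.rand(4, 3, 128, 128)     # step-one output: SAR translated to optical
augmented = jitter_contrast_luminance(simulated)
# 'augmented' would then be stacked with the SAR image and the cloudy optical
# image as the fusion GAN's input in step two.
print(augmented.shape, augmented.min().item() >= 0.0)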
44

Chen, Zhuo, Ming Fang, Xu Chai, Feiran Fu, and Lihong Yuan. "U-GAN Model for Infrared and Visible Images Fusion." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 4 (August 2020): 904–12. http://dx.doi.org/10.1051/jnwpu/20203840904.

Abstract:
Infrared and visible image fusion is an effective way to compensate for the limitations of single-sensor imaging. The aim is to produce fused images that are suitable for human observation and conducive to subsequent application and processing. To address incomplete feature extraction, loss of detail, and the small number of samples in common datasets, which hampers training, an end-to-end network architecture for image fusion is proposed. U-net is introduced into image fusion, and the final fusion result is obtained with a generative adversarial network. Thanks to its particular convolutional structure, the important feature information is extracted to the maximum extent, and samples do not need to be cropped, which avoids a loss of fusion accuracy and also improves training speed. The features extracted by the U-net are then set against a discriminator that receives the infrared image, yielding the generator model. Experimental results show that the proposed algorithm produces fused images with clear outlines, prominent texture and distinct targets, and that SD, SF, SSIM, AG and other indicators are clearly improved.
45

Zhou, Tao, Qi Li, Huiling Lu, Xiangxiang Zhang, and Qianru Cheng. "Hybrid Multimodal Medical Image Fusion Method Based on LatLRR and ED-D2GAN." Applied Sciences 12, no. 24 (December 12, 2022): 12758. http://dx.doi.org/10.3390/app122412758.

Abstract:
To better preserve the anatomical structure information of Computed Tomography (CT) source images and highlight the metabolic information of lesion regions in Positron Emission Tomography (PET) source images, a hybrid multimodal medical image fusion method (LatLRR-GAN) based on latent low-rank representation (LatLRR) and a dual-discriminator generative adversarial network (ED-D2GAN) is proposed. First, exploiting the denoising capability of LatLRR, the source images are decomposed by LatLRR. Second, the ED-D2GAN model is put forward as the low-rank region fusion method, which can fully extract the information contained in the low-rank region images; encoder and decoder networks are used in the generator, and convolutional neural networks are used in the dual discriminators. Third, a threshold-adaptive weighting algorithm based on the region energy ratio is proposed as the salient-region fusion rule, which improves the overall sharpness of the fused image. The experimental results show that, compared with the best of the six other methods, the proposed approach is effective on multiple objective evaluation metrics, including average gradient, edge intensity, information entropy, spatial frequency and standard deviation; across the two experiments the results improve by 35.03%, 42.42%, 4.66%, 8.59% and 11.49% on average.
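A threshold-adaptive weighting rule driven by a local region energy ratio can be sketched as below. The window size, the threshold value and the mapping from energy ratio to weight are assumptions for illustration, not the rule's published parameters.

# Sketch of a salient-region fusion rule driven by a local energy ratio, in the
# spirit of the rule described above. Window size, threshold and the ratio-to-
# weight mapping are illustrative assumptions, not the paper's exact rule.
import torch
import torch.nn.functional as F

def energy_ratio_fusion(salient_ct, salient_pet, window=9, threshold=0.6):
    """Fuse two salient-region images (1, 1, H, W) by weighting with local energy."""
    kernel = torch.ones(1, 1, window, window) / window**2
    e_ct = F.conv2d(salient_ct**2, kernel, padding=window // 2)    # local energy maps
    e_pet = F.conv2d(salient_pet**2, kernel, padding=window // 2)
    ratio = e_ct / (e_ct + e_pet + 1e-8)                           # CT share of the local energy
    # Threshold-adaptive weights: take the dominant source where one side clearly
    # wins, otherwise blend proportionally to the energy ratio.
    w_ct = torch.where(ratio > threshold, torch.ones_like(ratio),
                       torch.where(ratio < 1 - threshold, torch.zeros_like(ratio), ratio))
    return w_ct * salient_ct + (1 - w_ct) * salient_pet

ct = torch.rand(1, 1, 64, 64)      # toy salient-region components from LatLRR
pet = torch.rand(1, 1, 64, 64)
print(energy_ratio_fusion(ct, pet).shape)   # torch.Size([1, 1, 64, 64])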
46

Yin, Xiao-Xia, Lihua Yin, and Sillas Hadjiloucas. "Pattern Classification Approaches for Breast Cancer Identification via MRI: State-Of-The-Art and Vision for the Future." Applied Sciences 10, no. 20 (October 15, 2020): 7201. http://dx.doi.org/10.3390/app10207201.

Abstract:
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multi-dimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease can be made possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
47

Hou, Jilei, Dazhi Zhang, Wei Wu, Jiayi Ma, and Huabing Zhou. "A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation." Entropy 23, no. 3 (March 21, 2021): 376. http://dx.doi.org/10.3390/e23030376.

Abstract:
This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which can consider not only the low-level features of infrared and visible images, but also the high-level semantic information. Source images can be divided into foregrounds and backgrounds by semantic masks. The generator with a dual-encoder-single-decoder framework is used to extract the feature of foregrounds and backgrounds by different encoder paths. Moreover, the discriminator’s input image is designed based on semantic segmentation, which is obtained by combining the foregrounds of the infrared images with the backgrounds of the visible images. Consequently, the prominence of thermal targets in the infrared images and texture details in the visible images can be preserved in the fused images simultaneously. Qualitative and quantitative experiments on publicly available datasets demonstrate that the proposed approach can significantly outperform the state-of-the-art methods.
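The discriminator input described here — the foreground of the infrared image combined with the background of the visible image through a semantic mask — reduces to a simple masked composition. The sketch below assumes single-channel images and a binary mask, which are illustrative choices rather than the paper's exact setup.

# Sketch of the mask-based composition described above: keep the foreground of
# the infrared image and the background of the visible image, selected by a
# binary semantic mask. Shapes and the single-channel mask are assumptions.
import torch

def compose_discriminator_input(ir, vis, mask):
    """ir, vis: (B, 1, H, W) images; mask: (B, 1, H, W) with 1 on foreground targets."""
    return mask * ir + (1 - mask) * vis   # thermal targets from IR, texture from visible

ir = torch.rand(1, 1, 128, 128)
vis = torch.rand(1, 1, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.8).float()   # stand-in for a segmentation mask
reference = compose_discriminator_input(ir, vis, mask)
print(reference.shape)   # torch.Size([1, 1, 128, 128])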
48

de Villiers, James G., and Rensu P. Theart. "Predicting mitochondrial fission, fusion and depolarisation event locations from a single z-stack." PLOS ONE 18, no. 3 (March 8, 2023): e0271151. http://dx.doi.org/10.1371/journal.pone.0271151.

Abstract:
This paper documents the development of a novel method to predict the occurrence and exact locations of mitochondrial fission, fusion and depolarisation events in three dimensions. This novel application of neural networks, which predicts these events using information encoded only in the morphology of the mitochondria, eliminates the need for time-lapse sequences of cells. The ability to predict these morphological mitochondrial events from a single image can not only democratise research but also revolutionise drug trials. The occurrence and location of these events were successfully predicted with a three-dimensional version of the Pix2Pix generative adversarial network (GAN) as well as a three-dimensional adversarial segmentation network called the Vox2Vox GAN. The Pix2Pix GAN predicted the locations of mitochondrial fission, fusion and depolarisation events with accuracies of 35.9%, 33.2% and 4.90%, respectively; the Vox2Vox GAN achieved accuracies of 37.1%, 37.3% and 7.43%. These accuracies are too low for the immediate adoption of the tools in life science research, but they indicate that the networks have modelled the mitochondrial dynamics to some degree and may therefore still be helpful as an indication of where events might occur when time-lapse sequences are not available. To our knowledge, the prediction of these morphological mitochondrial events has never been achieved before in the literature, and the results of this paper can serve as a baseline for future work.
49

Wang, Yang. "Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry, and Fusion." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–25. http://dx.doi.org/10.1145/3408317.

Abstract:
With the development of web technology, multi-modal or multi-view data has surged as a major stream of big data, where each modality/view encodes an individual property of the data objects. Different modalities are often complementary to each other, which has motivated a lot of research on fusing multi-modal feature spaces to comprehensively characterize the data objects. Most existing state-of-the-art methods focus on how to fuse the energy or information from multi-modal spaces to deliver superior performance over their single-modal counterparts. Recently, deep neural networks have proven a powerful architecture for capturing the nonlinear distribution of high-dimensional multimedia data, and this naturally extends to multi-modal data. Substantial empirical studies demonstrate the advantages of deep multi-modal methods, which can essentially deepen the fusion of multi-modal deep feature spaces. In this article, we provide a substantial overview of the existing state-of-the-art in the field of multi-modal data analytics, from shallow to deep spaces. Throughout this survey, we further indicate that the critical components of this field are collaboration, adversarial competition, and fusion over multi-modal spaces. Finally, we share our viewpoints regarding some future directions in this field.
50

Zhang, Junjie, Zhouyin Cai, Fansheng Chen, and Dan Zeng. "Hyperspectral Image Denoising via Adversarial Learning." Remote Sensing 14, no. 8 (April 7, 2022): 1790. http://dx.doi.org/10.3390/rs14081790.

Abstract:
Due to sensor instability and atmospheric interference, hyperspectral images (HSIs) often suffer from different kinds of noise which degrade the performance of downstream tasks. Therefore, HSI denoising has become an essential part of HSI preprocessing. Traditional methods tend to tackle one specific type of noise and remove it iteratively, resulting in drawbacks including inefficiency when dealing with mixed noise. Most recently, deep neural network-based models, especially generative adversarial networks, have demonstrated promising performance in generic image denoising. However, in contrast to generic RGB images, HSIs often possess abundant spectral information; thus, it is non-trivial to design a denoising network to effectively explore both spatial and spectral characteristics simultaneously. To address the above issues, in this paper, we propose an end-to-end HSI denoising model via adversarial learning. More specifically, to capture the subtle noise distribution from both spatial and spectral dimensions, we designed a Residual Spatial-Spectral Module (RSSM) and embed it in an UNet-like structure as the generator to obtain clean images. To distinguish the real image from the generated one, we designed a discriminator based on the Multiscale Feature Fusion Module (MFFM) to further improve the quality of the denoising results. The generator was trained with joint loss functions, including reconstruction loss, structural loss and adversarial loss. Moreover, considering the lack of publicly available training data for the HSI denoising task, we collected an additional benchmark dataset denoted as the Shandong Feicheng Denoising (SFD) dataset. We evaluated five types of mixed noise across several datasets in comparative experiments, and comprehensive experimental results on both simulated and real data demonstrate that the proposed model achieves competitive results against state-of-the-art methods. For ablation studies, we investigated the structure of the generator as well as the training process with joint losses and different amounts of training data, further validating the rationality and effectiveness of the proposed method.
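The joint generator objective mentioned here (reconstruction, structural and adversarial losses) can be sketched as follows. The specific norms, the least-squares adversarial form and the weights are assumptions for the example, not the losses used in the paper.

# Sketch of a joint generator objective of the kind described: a reconstruction
# term, a structural (gradient-consistency) term and an adversarial term.
# Norms, the least-squares adversarial form and the weights are assumptions.
import torch
import torch.nn.functional as F

def image_gradients(x):
    dh = x[..., :, 1:] - x[..., :, :-1]
    dv = x[..., 1:, :] - x[..., :-1, :]
    return dh, dv

def joint_generator_loss(denoised, clean, disc_logits, w_rec=1.0, w_struct=0.5, w_adv=0.01):
    rec = F.l1_loss(denoised, clean)                              # reconstruction term
    dh_p, dv_p = image_gradients(denoised)                        # structural term: match gradients
    dh_c, dv_c = image_gradients(clean)
    struct = F.l1_loss(dh_p, dh_c) + F.l1_loss(dv_p, dv_c)
    adv = F.mse_loss(disc_logits, torch.ones_like(disc_logits))   # least-squares GAN term
    return w_rec * rec + w_struct * struct + w_adv * adv

clean = torch.rand(2, 31, 32, 32)        # toy HSI patches with 31 spectral bands
denoised = clean + 0.01 * torch.randn_like(clean)
disc_logits = torch.randn(2, 1)          # discriminator scores for the denoised patches
print(joint_generator_loss(denoised, clean, disc_logits).item())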