Paper Title

An Attention-Guided and Wavelet-Constrained Generative Adversarial Network for Infrared and Visible Image Fusion

Paper Authors

Liu, Xiaowen; Wang, Renhua; Huo, Hongtao; Yang, Xin; Li, Jing

Abstract

GAN-based infrared and visible image fusion methods have gained ever-increasing attention due to their effectiveness and superiority. However, existing methods adopt the global pixel distribution of the source images as the basis for discrimination, which fails to focus on the key modality information. Moreover, dual-discriminator-based methods suffer from confrontation between the discriminators. To this end, we propose an attention-guided and wavelet-constrained GAN for infrared and visible image fusion (AWFGAN). In this method, two unique discrimination strategies are designed to improve fusion performance. Specifically, we introduce spatial attention modules (SAM) into the generator to obtain spatial attention maps, and the attention maps are then used to force the discrimination of infrared images to focus on the target regions. In addition, we extend the discrimination range of visible information to the wavelet subspace, which forces the generator to restore the high-frequency details of visible images. Ablation experiments demonstrate the effectiveness of our method in eliminating the confrontation between discriminators, and comparison experiments on public datasets demonstrate the effectiveness and superiority of the proposed method.
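The two discrimination strategies described in the abstract, attention-guided discrimination of infrared targets and wavelet-constrained discrimination of visible details, can be illustrated with a short sketch. The following is a minimal PyTorch example, not the authors' implementation: the CBAM-style spatial attention module, the Haar filter choice, the toy tensor shapes, and the hypothetical discriminators mentioned in the comments are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention (an assumed form of the paper's SAM):
    channel-wise average and max pooling followed by a convolution,
    producing a single-channel attention map in [0, 1]."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values    # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))


def haar_highpass(img: torch.Tensor) -> torch.Tensor:
    """Single-level Haar-style decomposition (up to a normalization constant);
    returns the concatenated high-frequency sub-bands (LH, HL, HH) of a
    single-channel image."""
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    kernels = torch.stack([lh, hl, hh]).unsqueeze(1).to(img)  # (3, 1, 2, 2)
    return F.conv2d(img, kernels, stride=2)                    # (B, 3, H/2, W/2)


# Illustration with toy data: mask the infrared/fused images with the
# generator's attention map before passing them to an infrared discriminator,
# and feed the Haar high-frequency sub-bands of the visible/fused images to a
# visible discriminator (hypothetical discriminators d_ir, d_vis not shown).
sam = SpatialAttention()
feat = torch.randn(1, 64, 128, 128)          # a generator feature map
ir, vis, fused = (torch.rand(1, 1, 128, 128) for _ in range(3))

attn = sam(feat)                              # (1, 1, 128, 128) attention map
masked_real_ir = attn * ir                    # target-focused real sample
masked_fake_ir = attn * fused                 # target-focused fake sample
vis_high, fused_high = haar_highpass(vis), haar_highpass(fused)
```

In this sketch, masking both real and generated samples with the same attention map keeps the infrared discriminator focused on salient target regions, while judging only the high-frequency sub-bands confines the visible discriminator to detail information; this separation is consistent with the abstract's goal of reducing the confrontation between the two discriminators.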
