Paper Title

MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion

Authors

Yicheng Wang, Shuang Xu, Junmin Liu, Zixiang Zhao, Chunxia Zhang, Jiangshe Zhang

Abstract

Multi-Focus Image Fusion (MFIF) is a promising image enhancement technique for obtaining all-in-focus images that meet visual needs, and it is a precondition for other computer vision tasks. One of the research trends in MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB). In this paper, we propose a network termed MFIF-GAN that attenuates the DSE by generating focus maps in which the foreground regions are correctly larger than the corresponding objects. The Squeeze-and-Excitation Residual module is employed in the network. By incorporating prior knowledge of the training conditions, the network is trained on a synthetic dataset based on an α-matte model. In addition, reconstruction and gradient regularization terms are combined in the loss functions to enhance boundary details and improve the quality of the fused images. Extensive experiments demonstrate that MFIF-GAN outperforms several state-of-the-art (SOTA) methods in visual perception, quantitative analysis, and efficiency. Moreover, an edge diffusion and contraction module is proposed for the first time to verify that the focus maps generated by our method are accurate at the pixel level.
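The abstract mentions two concrete mechanisms: compositing a fused image from a focus map in α-matte style, and a loss that combines a reconstruction term with a gradient regularization term. A minimal NumPy sketch of both ideas follows; the function names, the choice of an L1 reconstruction term, the finite-difference gradient, and the weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuse_with_focus_map(near_focus, far_focus, focus_map):
    """Alpha-matte style blend of two source images.

    focus_map has values in [0, 1]; 1 marks pixels where
    `near_focus` is the sharp source, 0 where `far_focus` is.
    """
    return focus_map * near_focus + (1.0 - focus_map) * far_focus

def gradient_magnitude(img):
    """Simple first-order finite-difference gradient magnitude
    (an assumed stand-in for the paper's gradient operator)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def fusion_loss(fused, reference, lam=0.1):
    """Reconstruction (L1) term plus a gradient regularization
    term that penalizes boundary-detail mismatch."""
    recon = np.mean(np.abs(fused - reference))
    grad = np.mean(np.abs(gradient_magnitude(fused)
                          - gradient_magnitude(reference)))
    return recon + lam * grad
```

With a perfect binary focus map, the blend returns each pixel from its sharp source, and the loss against a ground-truth all-in-focus image is zero; in the paper this composition is learned by the GAN rather than given.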
