Paper Title
Cloud removal in remote sensing images using generative adversarial networks and SAR-to-optical image translation
Paper Authors
Paper Abstract
Satellite images are often contaminated by clouds. Because satellite imagery serves such a wide range of applications, cloud removal has received much attention. As clouds thicken, removing them becomes more challenging; in such cases, it is common to use auxiliary images, such as near-infrared or synthetic aperture radar (SAR) data, for reconstruction. In this study, we address the problem with two generative adversarial networks (GANs): the first translates SAR images into optical images, and the second removes clouds using the translated images from the first GAN. We also propose dilated residual inception blocks (DRIBs) to replace the vanilla U-Net in the generator networks, and we use the structural similarity index measure (SSIM) in addition to the L1 loss function. Reducing the number of downsampling steps and expanding the receptive field with dilated convolutions increase the quality of the output images. We trained and tested both GANs on the SEN1-2 dataset, creating cloudy images by adding synthetic clouds to the optical images. The restored images are evaluated with PSNR and SSIM. We compare the proposed method with state-of-the-art deep learning models and achieve more accurate results in both the SAR-to-optical translation and cloud removal stages.
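To make the abstract's two architectural ideas concrete, below is a minimal PyTorch sketch of a dilated residual inception block and of a generator loss that blends SSIM with L1. The block layout (parallel 3×3 branches with dilation rates 1, 2, 4, a 1×1 fusion convolution, and a residual skip), the loss weight `alpha`, and the use of the third-party `pytorch_msssim` package for the SSIM term are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
from pytorch_msssim import ssim  # third-party SSIM implementation (assumed choice)


class DRIB(nn.Module):
    """Sketch of a dilated residual inception block (DRIB).

    Parallel 3x3 convolutions with growing dilation rates enlarge the
    receptive field without extra downsampling; a 1x1 convolution fuses
    the concatenated branches, and a residual skip preserves the input.
    The dilation rates (1, 2, 4) are a hypothetical configuration.
    """

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(len(dilations) * channels, channels, 1)

    def forward(self, x):
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(out)  # residual connection


def reconstruction_loss(fake, real, alpha=0.5):
    """Blend of (1 - SSIM) and L1, as the abstract combines both terms.

    The weight alpha is a hypothetical choice; images are assumed to be
    scaled to [0, 1].
    """
    l1 = torch.mean(torch.abs(fake - real))
    structural = 1.0 - ssim(fake, real, data_range=1.0)
    return alpha * structural + (1.0 - alpha) * l1


# Usage example with random tensors standing in for generator features,
# the generator output, and the cloud-free target image.
if __name__ == "__main__":
    features = torch.rand(1, 64, 128, 128)
    block = DRIB(64)
    print(block(features).shape)  # torch.Size([1, 64, 128, 128])

    fake = torch.rand(1, 3, 128, 128)
    real = torch.rand(1, 3, 128, 128)
    print(reconstruction_loss(fake, real).item())
```

Because every branch keeps the spatial resolution (padding equals the dilation rate for a 3×3 kernel), the block widens the receptive field without the information loss of additional downsampling, which is the motivation the abstract gives for using dilated convolutions.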