Paper Title
Image Stitching Based on Planar Region Consensus
Paper Authors
Paper Abstract
Image stitching for two images without a global transformation between them is notoriously difficult. In this paper, noticing the importance of planar structure under perspective geometry, we propose a new image stitching method which stitches images by aligning a set of matched dominant planar regions. Clearly different from previous methods resorting to plane segmentation, the key to our approach is to utilize rich semantic information directly from RGB images to extract planar image regions with a deep Convolutional Neural Network (CNN). We specifically design a new module to make full use of existing semantic segmentation networks to accommodate planar segmentation. To train the network, we contribute a dataset for planar region segmentation. With the planar region knowledge, a set of local transformations can be obtained by constraining matched regions, enabling more precise alignment in the overlapping area. We also use planar knowledge to estimate a transformation field over the whole image. The final mosaic is obtained by a mesh-based optimization framework which maintains high alignment accuracy and relaxes similarity transformation at the same time. Extensive experiments with quantitative comparisons show that our method can deal with different situations and outperforms state-of-the-art methods on challenging scenes.
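To illustrate the per-region alignment idea described above, the following is a minimal sketch, not the authors' implementation: given a pair of images and corresponding planar region masks (assumed to come from the segmentation network), it restricts feature matching to each matched region and estimates one local homography per region with RANSAC. The function name, the mask format, and the use of SIFT features are illustrative assumptions, not details specified by the paper.

```python
import cv2
import numpy as np

def per_region_homographies(img_a, img_b, region_pairs):
    """Estimate one local homography per pair of matched planar regions.

    region_pairs: list of (mask_a, mask_b) binary uint8 masks the same size as the
                  images, assumed to be produced by a planar-region segmentation step.
    Returns a list of 3x3 homographies (None where estimation fails).
    """
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    homographies = []
    for mask_a, mask_b in region_pairs:
        # Detect and describe features only inside the planar region of each image.
        kp_a, des_a = sift.detectAndCompute(img_a, mask_a)
        kp_b, des_b = sift.detectAndCompute(img_b, mask_b)
        if des_a is None or des_b is None:
            homographies.append(None)
            continue
        # Lowe's ratio test to keep distinctive matches within the region.
        good = []
        for pair in matcher.knnMatch(des_a, des_b, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        if len(good) < 4:
            homographies.append(None)
            continue
        src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC keeps only correspondences consistent with a single plane,
        # yielding a local transformation constrained by the matched region.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        homographies.append(H)
    return homographies
```

In the full method, such local transformations would feed the transformation field over the whole image and the mesh-based optimization; this sketch only covers the region-constrained estimation step.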