Paper Title

Deep Implicit Templates for 3D Shape Representation

Authors

Zerong Zheng, Tao Yu, Qionghai Dai, Yebin Liu

Abstract

Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming increasingly popular in the 3D vision community due to their compactness and strong representation power. However, unlike polygon-mesh-based templates, it remains challenging to reason about dense correspondences or other semantic relationships across shapes represented by DIFs, which limits their applications in texture transfer, shape analysis and so on. To overcome this limitation and also make DIFs more interpretable, we propose Deep Implicit Templates, a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations. Our key idea is to formulate DIFs as conditional deformations of a template implicit function. To this end, we propose Spatial Warping LSTM, which decomposes the conditional spatial transformation into multiple affine transformations and guarantees generalization capability. Moreover, the training loss is carefully designed in order to achieve high reconstruction accuracy while learning a plausible template with accurate correspondences in an unsupervised manner. Experiments show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
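The core formulation, a DIF expressed as a conditional deformation of a template implicit function with the warping decomposed into a sequence of affine steps, can be sketched minimally as follows. This is an illustration only: the template is a unit sphere rather than a learned network, and the affine steps `(A, b)` are fixed stand-ins for the per-step outputs that the paper's Spatial Warping LSTM would predict from a shape code.

```python
import numpy as np

def template_sdf(p):
    # Template implicit function T: signed distance to a unit sphere
    # (a stand-in for the learned template network in the paper).
    return np.linalg.norm(p, axis=-1) - 1.0

def warp(p, affines):
    # Conditional spatial transformation W(p, c), decomposed into a
    # sequence of affine steps, mirroring how the Spatial Warping LSTM
    # emits one affine transformation per step. Here the (A, b) pairs
    # are hard-coded for illustration instead of being predicted.
    for A, b in affines:
        p = p @ A.T + b
    return p

def deformed_sdf(p, affines):
    # DIF as a conditional deformation of the template:
    # D(p, c) = T(W(p, c))
    return template_sdf(warp(p, affines))

# Hypothetical "shape code": two affine steps that together scale
# query points by 2, so the represented shape is the sphere of radius 0.5.
affines = [(np.eye(3) * 2.0, np.zeros(3)), (np.eye(3), np.zeros(3))]
p = np.array([[0.5, 0.0, 0.0]])   # a point on that smaller sphere
print(deformed_sdf(p, affines))   # → [0.] (on the deformed surface)
```

Because every instance shape is produced by warping query points into the same template's coordinate frame, points on two different shapes that warp to the same template location are in correspondence, which is exactly what enables the unsupervised dense correspondences described above.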
