Paper Title
Continuous Surface Embeddings
Paper Authors
Paper Abstract
In this work, we focus on the task of learning and representing dense correspondences in deformable object categories. While this problem has been considered before, solutions so far have been rather ad hoc for specific object types (i.e., humans), often requiring significant manual work. However, scaling geometric understanding to all objects in nature requires more automated approaches that can also express correspondences between related, but geometrically different, objects. To this end, we propose a new, learnable image-based representation of dense correspondences. Our model predicts, for each pixel in a 2D image, an embedding vector of the corresponding vertex in the object mesh, thereby establishing dense correspondences between image pixels and 3D object geometry. We demonstrate that the proposed approach performs on par with or better than state-of-the-art methods for dense pose estimation for humans, while being conceptually simpler. We also collect a new in-the-wild dataset of dense correspondences for animal classes and demonstrate that our framework scales naturally to new deformable object categories.
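The core idea described above — matching per-pixel embedding vectors against per-vertex embedding vectors to establish image-to-mesh correspondences — can be illustrated with a minimal sketch. The function below is a hypothetical simplification (the array shapes, the use of cosine similarity, and the nearest-neighbor lookup are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def pixel_to_vertex(pixel_emb, vertex_emb):
    """Assign each pixel to its closest mesh vertex in embedding space.

    pixel_emb:  (H, W, D) array of per-pixel embedding vectors.
    vertex_emb: (V, D) array of per-vertex embedding vectors.
    Returns an (H, W) array of vertex indices.
    """
    # Normalize both sets so the dot product becomes cosine similarity.
    p = pixel_emb / np.linalg.norm(pixel_emb, axis=-1, keepdims=True)
    v = vertex_emb / np.linalg.norm(vertex_emb, axis=-1, keepdims=True)
    sim = p @ v.T          # (H, W, V) similarity of every pixel to every vertex
    return sim.argmax(-1)  # best-matching vertex index per pixel

# Toy example: a 2x2 "image" whose pixels copy vertex embeddings exactly,
# so each pixel should map back to the vertex it was copied from.
rng = np.random.default_rng(0)
verts = rng.normal(size=(3, 4))                 # 3 vertices, 4-dim embeddings
pixels = verts[[0, 1, 2, 1]].reshape(2, 2, 4)
correspondence = pixel_to_vertex(pixels, verts)
print(correspondence)
```

In a real pipeline the pixel embeddings would come from a learned image network and the vertex embeddings from a learned (or geometrically derived) function on the mesh; the nearest-neighbor step is the same either way.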