Title

Point Scene Understanding via Disentangled Instance Mesh Reconstruction

Authors

Jiaxiang Tang, Xiaokang Chen, Jingbo Wang, Gang Zeng

Abstract

Semantic scene reconstruction from point clouds is an essential and challenging task for 3D scene understanding. The task requires not only recognizing each instance in the scene but also recovering its geometry from the partially observed point cloud. Existing methods usually attempt to directly predict occupancy values of the complete object from incomplete point cloud proposals produced by a detection-based backbone. However, this framework often fails to reconstruct high-fidelity meshes, due to the obstruction of the many false positive object proposals and the ambiguity of incomplete point observations when learning occupancy values of complete objects. To circumvent these hurdles, we propose a Disentangled Instance Mesh Reconstruction (DIMR) framework for effective point scene understanding. A segmentation-based backbone is applied to reduce false positive object proposals, which further benefits our exploration of the relationship between recognition and reconstruction. Based on these accurate proposals, we leverage a mesh-aware latent code space to disentangle the processes of shape completion and mesh generation, relieving the ambiguity caused by incomplete point observations. Furthermore, with access to a CAD model pool at test time, our model can also improve reconstruction quality by performing mesh retrieval without extra training. We thoroughly evaluate the reconstructed mesh quality with multiple metrics and demonstrate the superiority of our method on the challenging ScanNet dataset. Code is available at \url{https://github.com/ashawkey/dimr}.
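
The abstract describes two ideas that a small sketch can make concrete: (1) shape completion and mesh generation are disentangled through a mesh-aware latent code space, and (2) that same latent space enables test-time mesh retrieval from a CAD pool without extra training. The PyTorch sketch below is a minimal illustration under assumed names and sizes (CompletionEncoder, OccupancyDecoder, retrieve_cad, and the 256-dim code are all hypothetical); it is not the DIMR implementation.

```python
# Minimal sketch of a disentangled completion/generation pipeline in the
# spirit of the abstract. All module names and dimensions are assumptions,
# not taken from the DIMR codebase.
import torch
import torch.nn as nn

class CompletionEncoder(nn.Module):
    """Shape completion: map a partial instance point cloud to a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points):               # points: (B, N, 3)
        feats = self.point_mlp(points)       # per-point features (B, N, D)
        return feats.max(dim=1).values       # global max-pool -> (B, D)

class OccupancyDecoder(nn.Module):
    """Mesh generation: predict occupancy at query points given a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, code, queries):        # code: (B, D), queries: (B, M, 3)
        code = code.unsqueeze(1).expand(-1, queries.shape[1], -1)
        occ = self.mlp(torch.cat([code, queries], dim=-1))
        return occ.squeeze(-1)               # occupancy logits (B, M)

def retrieve_cad(code, cad_codes):
    """Test-time retrieval: index of the nearest CAD latent code (no training)."""
    return torch.cdist(code, cad_codes).argmin(dim=1)

# Hypothetical usage: complete two partial instances, then either decode
# occupancies for mesh extraction or retrieve the closest CAD model.
encoder, decoder = CompletionEncoder(), OccupancyDecoder()
partial = torch.rand(2, 1024, 3)              # partial instance point clouds
codes = encoder(partial)                      # (2, 256) mesh-aware latent codes
occ = decoder(codes, torch.rand(2, 4096, 3))  # occupancy logits for meshing
cad_pool = torch.rand(500, 256)               # pre-encoded CAD model pool
nearest = retrieve_cad(codes, cad_pool)       # (2,) CAD index per instance
```

Because completion ends in a latent code rather than raw occupancies, the decoder can be swapped for nearest-neighbor lookup against pre-encoded CAD codes, which is why, as the abstract notes, retrieval requires no additional training.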
