Paper Title

SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware Feature Extraction

Authors

Jaehoon Choi, Dongki Jung, Donghwan Lee, Changick Kim

Abstract

Self-supervised monocular depth estimation has emerged as a promising method because it does not require ground-truth depth maps during training. As an alternative to ground-truth depth maps, the photometric loss provides self-supervision on depth prediction by matching the input image frames. However, the photometric loss causes various problems, resulting in less accurate depth values compared with supervised approaches. In this paper, we propose SAFENet, which is designed to leverage semantic information to overcome the limitations of the photometric loss. Our key idea is to exploit semantic-aware depth features that integrate semantic and geometric knowledge. Therefore, we introduce multi-task learning schemes to incorporate semantic awareness into the representation of depth features. Experiments on the KITTI dataset demonstrate that our method competes with or even outperforms the state-of-the-art methods. Furthermore, extensive experiments on different datasets show its better generalization ability and robustness to various conditions, such as low light or adverse weather.
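The photometric loss mentioned in the abstract is the standard self-supervision signal in this line of work: a source frame is warped into the target view using the predicted depth and pose, and the warped image is compared against the target. A common formulation (e.g. in Monodepth2-style pipelines) mixes SSIM with an L1 term; the sketch below assumes that formulation and the usual weight `alpha = 0.85`, neither of which is specified by this abstract:

```python
import numpy as np

def simplified_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (non-windowed) SSIM between two images in [0, 1].
    Real pipelines use a local 3x3 windowed SSIM; this is a simplification."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def photometric_loss(target, reconstructed, alpha=0.85):
    """Photometric reconstruction loss: weighted mix of (1 - SSIM)/2 and L1.
    `reconstructed` is the source frame warped into the target view using
    the predicted depth and relative pose (warping not shown here)."""
    l1 = np.abs(target - reconstructed).mean()
    ssim = simplified_ssim(target, reconstructed)
    return alpha * (1.0 - ssim) / 2.0 + (1.0 - alpha) * l1
```

A perfect reconstruction yields a loss of zero, and any photometric mismatch (occlusion, lighting change, low light) inflates the loss even when the depth is correct, which is the limitation the paper's semantic-aware features aim to mitigate.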
