Paper Title

Fast Uncertainty Quantification for Deep Object Pose Estimation

Paper Authors

Guanya Shi, Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Fabio Ramos, Animashree Anandkumar, Yuke Zhu

Paper Abstract

Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image is outside the training domain, for instance, with sim2real transfer. Efficient and robust uncertainty quantification (UQ) in pose estimators is critically needed in many robotic tasks. In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation. We ensemble 2-3 pre-trained models with different neural network architectures and/or training data sources, and compute their average pairwise disagreement to obtain the uncertainty quantification. We propose four disagreement metrics, including a learned metric, and show that the average distance (ADD) is the best learning-free metric, only slightly worse than the learned metric, which requires labeled target data. Our method has several advantages over prior art: 1) it requires no modification of the training process or the model inputs; and 2) it needs only one forward pass per model. We evaluate the proposed UQ method on three tasks, where our uncertainty quantification yields much stronger correlations with pose estimation errors than the baselines. Moreover, in a real robot grasping task, our method raises the grasping success rate from 35% to 90%.
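
To make the ensemble-disagreement idea concrete, below is a minimal Python sketch (not the authors' code) of the learning-free ADD variant described in the abstract. It assumes each estimator returns a pose as a 3x3 rotation matrix and a 3-vector translation; the names add_distance, ensemble_uncertainty, and model_points are illustrative, not from the paper.

    import numpy as np
    from itertools import combinations

    def add_distance(pose_a, pose_b, model_points):
        """Average distance (ADD) between two 6-DoF poses.

        Each pose is an (R, t) pair: R is a 3x3 rotation matrix,
        t a translation 3-vector. model_points is an (N, 3) array
        of points sampled from the object's 3D model.
        """
        R_a, t_a = pose_a
        R_b, t_b = pose_b
        pts_a = model_points @ R_a.T + t_a  # model points under pose A
        pts_b = model_points @ R_b.T + t_b  # model points under pose B
        return np.linalg.norm(pts_a - pts_b, axis=1).mean()

    def ensemble_uncertainty(poses, model_points):
        """Average pairwise ADD disagreement over an ensemble of poses."""
        pairs = combinations(range(len(poses)), 2)
        return np.mean([add_distance(poses[i], poses[j], model_points)
                        for i, j in pairs])

    # Hypothetical usage: three estimates of the same object's pose.
    rng = np.random.default_rng(0)
    model_points = rng.normal(size=(500, 3))  # stand-in for sampled mesh points
    poses = [(np.eye(3), np.array([0.0, 0.0, 0.5 + 0.01 * k])) for k in range(3)]
    print(ensemble_uncertainty(poses, model_points))  # small value -> models agree

Because each ensemble member is an off-the-shelf pre-trained model and the disagreement is computed purely from their outputs, this quantification needs only one forward pass per model and no change to training or inputs, which is what makes the method plug-and-play.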
