Paper Title
Action-Based Representation Learning for Autonomous Driving
Paper Authors
Abstract
Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).