Paper Title

DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and Interconnected Self-driving

Paper Authors

Ruiqing Mao, Jingyu Guo, Yukuan Jia, Yuxuan Sun, Sheng Zhou, Zhisheng Niu

Paper Abstract

Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence, including blind zones and limited long-range perception. However, the lack of suitable datasets has severely hindered the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving, a new simulated, large-scale, multi-scenario, multi-view, and multi-modality autonomous driving dataset that provides a ground-breaking benchmark platform for interconnected autonomous driving. DOLPHINS outperforms current datasets in six dimensions: temporally-aligned images and point clouds from both vehicles and Road Side Units (RSUs), enabling both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) collaborative perception; 6 typical scenarios with dynamic weather conditions, making it the most diverse interconnected autonomous driving dataset; meticulously selected viewpoints that provide full coverage of the key areas and of every object; 42,376 frames and 292,549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, composing the largest dataset for collaborative perception; Full-HD images and 64-line LiDARs yielding high-resolution data with sufficient detail; and well-organized APIs and open-source code ensuring the extensibility of DOLPHINS. We also construct benchmarks for 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. The experimental results show that raw-level fusion through V2X communication helps improve precision and reduces the need for expensive LiDAR equipment on vehicles when RSUs are present, which may accelerate the adoption of interconnected self-driving vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.
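The abstract refers to a raw-level (early) fusion scheme, in which point clouds shared over V2V/V2I links are transformed into the ego vehicle's LiDAR frame and merged before detection. The following is a minimal sketch of that idea, not the DOLPHINS reference implementation: the file names, the calibration format (a 4x4 extrinsic matrix), and the overall data layout are assumptions for illustration; the dataset's own APIs should be consulted for the actual loading interface.

```python
# Minimal sketch of raw-level (early) fusion for V2X collaborative perception.
# File names, calibration format, and data layout below are illustrative
# assumptions, not the DOLPHINS API.
import numpy as np


def load_point_cloud(path: str) -> np.ndarray:
    """Load an (N, 4) float32 point cloud stored as x, y, z, intensity."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)


def transform_points(points: np.ndarray, tf_4x4: np.ndarray) -> np.ndarray:
    """Apply a rigid transform (4x4 homogeneous matrix) to the xyz columns."""
    xyz1 = np.hstack([points[:, :3],
                      np.ones((points.shape[0], 1), dtype=points.dtype)])
    out = points.copy()
    out[:, :3] = (xyz1 @ tf_4x4.T)[:, :3]
    return out


def raw_level_fusion(ego_cloud: np.ndarray,
                     remote_cloud: np.ndarray,
                     remote_to_ego: np.ndarray) -> np.ndarray:
    """Fuse at the raw-data level: bring a remote (V2V/V2I) point cloud into
    the ego LiDAR frame and concatenate it with the ego point cloud."""
    remote_in_ego = transform_points(remote_cloud, remote_to_ego)
    return np.vstack([ego_cloud, remote_in_ego])


if __name__ == "__main__":
    ego = load_point_cloud("ego_lidar.bin")        # hypothetical file names
    rsu = load_point_cloud("rsu_lidar.bin")
    rsu_to_ego = np.load("rsu_to_ego_calib.npy")   # assumed 4x4 extrinsic
    fused = raw_level_fusion(ego, rsu, rsu_to_ego)
    # The fused cloud can then be fed to any single-agent 3D detector.
    print(fused.shape)
```

Because fusion happens before any feature extraction, the downstream detector is unchanged; the benefit reported in the abstract comes purely from the denser, occlusion-filling coverage contributed by the RSU or the other vehicle.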
