Paper title
Cross-view Transformers for real-time Map-view Semantic Segmentation
Paper authors
Paper abstract
We present cross-view transformers, an efficient attention-based model for map-view semantic segmentation from multiple cameras. Our architecture implicitly learns a mapping from individual camera views into a canonical map-view representation using a camera-aware cross-view attention mechanism. Each camera uses positional embeddings that depend on its intrinsic and extrinsic calibration. These embeddings allow a transformer to learn the mapping across different views without ever explicitly modeling it geometrically. The architecture consists of a convolutional image encoder for each view and cross-view transformer layers to infer a map-view semantic segmentation. Our model is simple, easily parallelizable, and runs in real-time. The presented architecture performs at state-of-the-art on the nuScenes dataset, with 4x faster inference speeds. Code is available at https://github.com/bradyz/cross_view_transformers.
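The two ideas the abstract describes, calibration-dependent positional embeddings and cross-view attention from map-view queries to image features, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it is a single-head NumPy toy, assuming one camera, a flattened feature grid, and hypothetical names (`camera_ray_embedding`, `cross_view_attention`, the `W_*` projection matrices).

```python
import numpy as np

def camera_ray_embedding(K, R, H, W):
    """Unproject each pixel into a world-space unit ray direction.

    K: (3, 3) camera intrinsics; R: (3, 3) camera-to-world rotation
    (extrinsics). Returns (H*W, 3) unit ray directions, serving as a
    geometry-aware positional embedding for the image features.
    """
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)], axis=0)  # (3, H*W)
    rays = R @ (np.linalg.inv(K) @ pix)                             # (3, H*W)
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)
    return rays.T                                                   # (H*W, 3)

def cross_view_attention(map_queries, img_feats, ray_embed, W_q, W_k, W_v):
    """Single-head cross-attention: learned map-view queries attend to
    image features whose keys are augmented with the camera-ray embedding,
    so the geometry enters through the embedding rather than an explicit
    projection model.
    """
    q = map_queries @ W_q                                       # (M, d)
    k = np.concatenate([img_feats, ray_embed], axis=1) @ W_k    # (N, d)
    v = img_feats @ W_v                                         # (N, d)
    logits = q @ k.T / np.sqrt(q.shape[1])
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                     # softmax rows
    return attn @ v                                             # (M, d)
```

With several cameras, each view gets its own `K` and `R`, and the per-view keys and values are concatenated before the softmax, which is what lets the same attention operation fuse all views into one canonical map-view representation.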