Paper Title

S-Graphs+: Real-time Localization and Mapping leveraging Hierarchical Representations

Paper Authors

Hriday Bavle, Jose Luis Sanchez-Lopez, Muhammad Shaheer, Javier Civera, Holger Voos

Paper Abstract

In this paper, we present an evolved version of Situational Graphs, which jointly models in a single optimizable factor graph (1) a pose graph, as a set of robot keyframes comprising associated measurements and robot poses, and (2) a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between them. Specifically, our S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging high-level information of the environment. To extract this high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulated and real data of indoor environments from varying construction sites, and on a real public dataset of several indoor office areas. On average over our datasets, S-Graphs+ outperforms the accuracy of the second-best method by a margin of 10.67%, while extending the robot's situational awareness with a richer scene model. Moreover, we make the software available as a Docker file.
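
To make the layering concrete, below is a minimal Python sketch of how the four layers and their relations could be represented as plain data. All class and field names here are hypothetical placeholders chosen for illustration; this is only a sketch of the hierarchy described in the abstract, not the authors' factor-graph implementation, which optimizes these variables jointly in real time.

```python
# Illustrative sketch (not the S-Graphs+ code) of the four-layer hierarchy:
# keyframes -> walls -> rooms -> floors. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Keyframe:                 # Layer 1: robot pose estimate
    id: int
    pose: List[float]           # e.g. SE(3) pose as [x, y, z, qx, qy, qz, qw]


@dataclass
class WallPlane:                # Layer 2: planar wall surface
    id: int
    coeffs: List[float]         # plane parameters [nx, ny, nz, d]
    seen_from: List[int]        # ids of keyframes that observed this plane


@dataclass
class Room:                     # Layer 3: a room groups a set of wall planes
    id: int
    wall_ids: List[int]         # e.g. four walls for a rectangular room


@dataclass
class Floor:                    # Layer 4: a floor level gathers its rooms
    id: int
    room_ids: List[int]


@dataclass
class SituationalGraph:         # container tying the four layers together
    keyframes: List[Keyframe] = field(default_factory=list)
    walls: List[WallPlane] = field(default_factory=list)
    rooms: List[Room] = field(default_factory=list)
    floors: List[Floor] = field(default_factory=list)


if __name__ == "__main__":
    # Toy example: one keyframe observing two walls grouped into one room on floor 0.
    g = SituationalGraph(
        keyframes=[Keyframe(0, [0, 0, 0, 0, 0, 0, 1])],
        walls=[WallPlane(0, [1, 0, 0, -2.0], [0]),
               WallPlane(1, [-1, 0, 0, -2.0], [0])],
        rooms=[Room(0, wall_ids=[0, 1])],
        floors=[Floor(0, room_ids=[0])],
    )
    print(len(g.walls), "walls in room", g.rooms[0].id, "on floor", g.floors[0].id)
```

In the actual system each of these entities would correspond to variables and factors in the optimizable graph (e.g. keyframe-to-plane measurement factors and room-to-wall relational factors), rather than plain containers as sketched here.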
