Paper Title
Robust tightly coupled pose estimation based on monocular vision, inertia, and wheel speed
Paper Authors
Paper Abstract
Visual SLAM methods are widely used for self-localization and mapping in complex environments. Visual-inertial SLAM, which combines a camera with an IMU, significantly improves robustness and renders the scale weakly observable, whereas the scale is unobservable in monocular visual SLAM. For ground mobile robots, introducing a wheel speed sensor can address the weak observability of scale and improve robustness under abnormal conditions. In this paper, a multi-sensor fusion SLAM algorithm using monocular vision, inertial, and wheel speed measurements is proposed. The sensor measurements are combined in a tightly coupled manner, and a nonlinear optimization method is used to maximize the posterior probability and solve for the optimal state estimate. Loop detection and back-end optimization are added to reduce or even eliminate the cumulative error of the estimated poses, ensuring global consistency of the trajectory and map. A wheel odometer pre-integration algorithm that combines the chassis speed and the IMU angular velocity avoids the repeated integration caused by changes of the linearization point during iterative optimization; state initialization based on the wheel odometer and IMU enables quick and reliable computation of the initial state values required by the state estimator in both stationary and moving conditions. Comparative experiments were carried out in room-scale scenes, building-scale scenes, and visual-loss scenarios. The results show that the proposed algorithm achieves high accuracy, with 2.2 m of cumulative error after traveling 812 m (0.28%, loop closure optimization disabled), strong robustness, and effective localization even in the event of sensor degradation such as loss of visual input. The accuracy and robustness of the proposed method are superior to those of monocular visual-inertial SLAM and a conventional wheel odometer.
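To make the wheel odometer pre-integration idea mentioned in the abstract concrete, the sketch below accumulates the chassis linear velocity and the IMU angular rate into a relative-motion term expressed in the body frame of the preceding keyframe, so the term does not have to be re-integrated when that keyframe's global pose is re-linearized during optimization. This is a minimal illustrative sketch, not the authors' implementation: the function names (skew, so3_exp, preintegrate_wheel_odometry) and the measurement format are assumptions, and the bias Jacobians and noise covariance propagation that a full pre-integration scheme would carry are omitted.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (so(3) hat operator)."""
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

def so3_exp(phi):
    """Exponential map from so(3) to SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    a = phi / theta
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(a, a)
            + np.sin(theta) * skew(a))

def preintegrate_wheel_odometry(measurements, gyro_bias):
    """Pre-integrate chassis speed and IMU angular rate between two keyframes.

    measurements: list of (dt, omega, v_body) tuples, where omega is the IMU
    angular rate (rad/s) and v_body the chassis linear velocity (m/s), both
    expressed in the body frame (assumed format, for illustration only).
    Returns (delta_R, delta_p): the relative rotation and translation
    expressed in the body frame of the first keyframe, independent of that
    keyframe's global pose and hence reusable across re-linearizations.
    """
    delta_R = np.eye(3)     # accumulated relative rotation
    delta_p = np.zeros(3)   # accumulated relative translation
    for dt, omega, v_body in measurements:
        # Rotate the body-frame velocity into the first keyframe's frame
        # before accumulating the translation increment.
        delta_p = delta_p + (delta_R @ np.asarray(v_body)) * dt
        # Rotation increment from the bias-corrected gyroscope measurement.
        delta_R = delta_R @ so3_exp((np.asarray(omega) - gyro_bias) * dt)
    return delta_R, delta_p
```

Under these assumptions, the optimizer can compare (delta_R, delta_p) against the relative pose implied by the two keyframe states to form a wheel odometer residual, without re-running the integration each time the keyframe poses are updated.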