Paper Title
FloatingFusion: Depth from ToF and Image-stabilized Stereo Cameras
Paper Authors
Paper Abstract
High-accuracy per-pixel depth is vital for computational photography, so smartphones now have multimodal camera systems with time-of-flight (ToF) depth sensors and multiple color cameras. However, producing accurate high-resolution depth is still challenging due to the low resolution and limited active illumination power of ToF sensors. Fusing RGB stereo and ToF information is a promising direction to overcome these issues, but a key problem remains: to provide high-quality 2D RGB images, the main color sensor's lens is optically stabilized, resulting in an unknown pose for the floating lens that breaks the geometric relationships between the multimodal image sensors. Leveraging ToF depth estimates and a wide-angle RGB camera, we design an automatic calibration technique based on dense 2D/3D matching that can estimate camera extrinsic, intrinsic, and distortion parameters of a stabilized main RGB sensor from a single snapshot. This lets us fuse stereo and ToF cues via a correlation volume. For fusion, we apply deep learning via a real-world training dataset with depth supervision estimated by a neural reconstruction method. For evaluation, we acquire a test dataset using a commercial high-power depth camera and show that our approach achieves higher accuracy than existing baselines.
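To make the single-snapshot calibration idea concrete, here is a minimal sketch (not the paper's actual implementation) that recovers the floating lens's intrinsics, distortion, and pose from dense 2D/3D correspondences using OpenCV. The inputs `pts3d` (ToF points lifted to 3D), `pts2d` (their matches in the main RGB image), the nominal intrinsics `K0`, and the function name are all assumptions for illustration; the paper's dense matching and optimization details differ.

```python
import numpy as np
import cv2

def calibrate_floating_lens(pts3d, pts2d, K0, image_size):
    """Illustrative single-view calibration from 2D/3D matches.

    pts3d:      (N, 3) 3D points back-projected from the ToF depth map
    pts2d:      (N, 2) dense matches of those points in the main RGB image
    K0:         nominal 3x3 intrinsics (e.g., factory calibration) as init
    image_size: (width, height) of the main RGB image
    """
    obj = np.ascontiguousarray(pts3d, dtype=np.float32).reshape(-1, 1, 3)
    img = np.ascontiguousarray(pts2d, dtype=np.float32).reshape(-1, 1, 2)
    # For non-coplanar 3D points, calibrateCamera requires an intrinsic guess.
    flags = cv2.CALIB_USE_INTRINSIC_GUESS
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [obj], [img], image_size, K0.astype(np.float64), None, flags=flags)
    R, _ = cv2.Rodrigues(rvecs[0])      # axis-angle -> 3x3 rotation matrix
    return K, dist, R, tvecs[0], rms    # intrinsics, distortion, pose, error
```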
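Likewise, a minimal sketch of a stereo correlation volume, assuming rectified feature maps `f1`, `f2` of shape `(B, C, H, W)` from a shared encoder and a dot-product cost (RAFT-style); the paper's exact volume construction and fusion network may differ:

```python
import torch

def correlation_volume(f1, f2, max_disp):
    """Build a (B, max_disp, H, W) matching-cost volume between rectified
    left features f1 and right features f2 (illustrative sketch only)."""
    B, C, H, W = f1.shape
    vol = f1.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            vol[:, 0] = (f1 * f2).sum(1) / C ** 0.5
        else:
            # correlate left features with right features shifted by d pixels
            vol[:, d, :, d:] = (f1[..., d:] * f2[..., :-d]).sum(1) / C ** 0.5
    return vol
```

One plausible way to inject the ToF cue into such a volume is to sample or re-weight it at the disparities implied by the (reprojected) ToF depth; this is an assumption about the fusion step, not a claim about the authors' network.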