Paper Title
A Deep Ordinal Distortion Estimation Approach for Distortion Rectification
Paper Authors
Paper Abstract
Distortion widely exists in images captured by popular wide-angle and fisheye cameras. Despite the long history of distortion rectification, accurately estimating the distortion parameters from a single distorted image remains challenging. The main reason is that these parameters are implicit in image features, which hinders networks from fully learning the distortion information. In this work, we propose a novel distortion rectification approach that obtains more accurate parameters with higher efficiency. Our key insight is that distortion rectification can be cast as the problem of learning an ordinal distortion from a single distorted image. To solve this problem, we design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution. In contrast to the implicit distortion parameters, the proposed ordinal distortion has a more explicit relationship with image features, and thus significantly boosts the distortion perception of neural networks. Considering the redundancy of distortion information, our approach uses only a part of the distorted image for ordinal distortion estimation, showing promising applications in efficient distortion rectification. To our knowledge, we are the first to unify the heterogeneous distortion parameters into a learning-friendly intermediate representation through ordinal distortion, bridging the gap between image features and distortion rectification. Experimental results demonstrate that our approach outperforms the state-of-the-art methods by a significant margin, with approximately 23% improvement in quantitative evaluation, while achieving the best performance in visual appearance. The code is available at https://github.com/KangLiao929/OrdinalDistortion.
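To make the notion of an ordinal distortion concrete, below is a minimal NumPy sketch under an assumed polynomial radial distortion model (a common parameterization; the paper's exact model, sampling scheme, and network are not specified in the abstract and may differ). Here the ordinal distortion is read as the sequence of distortion levels at increasing radii from the image center; because the level is linear in the coefficients, the heterogeneous parameters can be recovered from it by least squares. All names and the sampled radii are illustrative.

```python
import numpy as np

# Forward: distortion level at each normalized radius, assuming a
# polynomial radial model d(r) = 1 + k1*r^2 + k2*r^4 + ...
def ordinal_distortion(ks, radii):
    radii = np.asarray(radii, dtype=float)
    levels = np.ones_like(radii)
    for i, k in enumerate(ks):
        levels += k * radii ** (2 * (i + 1))
    return levels

# Inverse: recover the coefficients from an estimated ordinal distortion
# by linear least squares, since d(r) - 1 is linear in the k_i.
def params_from_ordinal(levels, radii, n_params=4):
    radii = np.asarray(radii, dtype=float)
    A = np.stack([radii ** (2 * (i + 1)) for i in range(n_params)], axis=1)
    ks, *_ = np.linalg.lstsq(A, np.asarray(levels) - 1.0, rcond=None)
    return ks

# Round trip with hypothetical coefficients and four sampled radii.
ks_true = [-0.2, 0.05, -0.01, 0.002]          # barrel-like distortion
radii = [0.25, 0.5, 0.75, 1.0]
levels = ordinal_distortion(ks_true, radii)   # an ordered (decreasing) sequence
print(params_from_ordinal(levels, radii))     # ~ ks_true
```

This round trip illustrates why such a representation is "learning-friendly": each level is tied to a specific image location and varies monotonically with radius, whereas the raw coefficients have no direct spatial interpretation.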