Paper Title
Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance
Paper Authors
Abstract
We study the challenging problem of recovering detailed motion from a single motion-blurred image. Existing solutions to this problem estimate a single image sequence without considering the motion ambiguity in each region. As a result, their outputs tend to converge to the mean of the multi-modal possibilities. In this paper, we explicitly account for such motion ambiguity, allowing us to generate multiple plausible solutions, all in sharp detail. The key idea is to introduce a motion guidance representation, a compact quantization of 2D optical flow into only four discrete motion directions. Conditioned on this motion guidance, a novel two-stage decomposition network steers the blur decomposition toward a specific, unambiguous solution. We propose a unified framework for blur decomposition that supports various interfaces for generating the motion guidance, including human input, motion information from adjacent video frames, and learning from a video dataset. Extensive experiments on synthesized datasets and real-world data show that the proposed framework is qualitatively and quantitatively superior to previous methods, and also offers the merit of producing physically plausible and diverse solutions. Code is available at https://github.com/zzh-tech/Animation-from-Blur.
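The core of the motion-guidance representation is quantizing a dense 2D optical-flow field into four discrete direction labels. The abstract does not spell out the binning scheme, so the sketch below is a hypothetical interpretation: flow angles are bucketed into four quadrants centered on the axis directions, with a magnitude threshold (an assumed parameter) separating static regions from moving ones.

```python
import numpy as np

def quantize_flow(flow, mag_thresh=0.5):
    """Quantize a dense 2D optical-flow field of shape (H, W, 2) into
    four discrete motion directions.

    This is a sketch of the motion-guidance idea; the exact bin layout
    and the magnitude threshold are assumptions, not the paper's spec.

    Returns an (H, W) integer map with labels:
      0 = static (flow magnitude below mag_thresh),
      1 = right, 2 = up, 3 = left, 4 = down.
    """
    u, v = flow[..., 0], flow[..., 1]
    mag = np.hypot(u, v)                  # per-pixel flow magnitude
    angle = np.arctan2(v, u)              # flow direction in (-pi, pi]
    # Shift by pi/4 so each 90-degree bin is centered on an axis
    # direction, then fold into {0, 1, 2, 3}.
    bins = np.floor((angle + np.pi / 4) / (np.pi / 2)).astype(int) % 4
    # Label moving pixels 1..4; pixels below the threshold stay 0.
    return np.where(mag >= mag_thresh, bins + 1, 0)
```

Such a discrete map is far easier for a user to draw, or for a network to predict, than a continuous flow field, which is presumably why the representation makes the decomposition problem unambiguous.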