Paper Title
SMPLpix: Neural Avatars from 3D Human Models
Paper Authors
Paper Abstract
Recent advances in deep generative models have led to an unprecedented level of realism for synthetically generated images of humans. However, one of the remaining fundamental limitations of these models is the ability to flexibly control the generative process, e.g., change the camera and human pose while retaining the subject identity. At the same time, deformable human body models like SMPL and its successors provide full control over pose and shape but rely on classic computer graphics pipelines for rendering. Such rendering pipelines require explicit mesh rasterization that (a) has no potential to fix artifacts or a lack of realism in the original 3D geometry and (b) until recently, was not fully incorporated into deep learning frameworks. In this work, we propose to bridge the gap between classic geometry-based rendering and the latest generative networks operating in pixel space. We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images, alleviating the need for traditional rasterization mechanisms. We train our model on a large corpus of human 3D models and corresponding real photos, and show the advantage over conventional differentiable renderers both in terms of the level of photorealism and rendering efficiency.
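To make the abstract's core idea concrete, below is a minimal sketch (in PyTorch) of a rasterization-free pipeline of this kind: sparse 3D mesh vertices are projected with camera intrinsics, their per-vertex features (here assumed to be RGB plus depth) are splatted into an image-sized tensor, and a convolutional image-to-image network turns that sparse "vertex image" into an RGB rendering. All names, shapes, and the network layout are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

def splat_vertices(verts_cam, feats, K, hw):
    """Project camera-space vertices with intrinsics K and scatter their
    per-vertex features into a (C, H, W) canvas. Simplified: overlapping
    vertices are resolved by last-write-wins rather than depth ordering."""
    H, W = hw
    proj = verts_cam @ K.T                                  # pinhole projection, (N, 3)
    uv = (proj[:, :2] / proj[:, 2:3]).round().long()        # pixel coordinates, (N, 2)
    mask = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    canvas = torch.zeros(feats.shape[1], H, W)
    canvas[:, uv[mask, 1], uv[mask, 0]] = feats[mask].T     # sparse feature splatting
    return canvas

class VertexToImageNet(nn.Module):
    """Toy encoder-decoder standing in for the neural rendering network."""
    def __init__(self, in_ch=4, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

# Hypothetical usage: N sparse vertices with RGB + depth features -> 256x256 image.
N, H, W = 6890, 256, 256
verts_cam = torch.rand(N, 3) + torch.tensor([0.0, 0.0, 2.0])     # vertices in front of the camera
feats = torch.cat([torch.rand(N, 3), verts_cam[:, 2:3]], dim=1)  # per-vertex RGB + depth
K = torch.tensor([[500.0, 0.0, W / 2], [0.0, 500.0, H / 2], [0.0, 0.0, 1.0]])
vertex_image = splat_vertices(verts_cam, feats, K, (H, W))       # (4, H, W) sparse input
rgb = VertexToImageNet()(vertex_image.unsqueeze(0))              # (1, 3, H, W) rendered image

In such a setup the network, trained against corresponding real photos, would be responsible for filling in the surface between the sparse projected vertices and compensating for geometric imperfections, which is what removes the need for an explicit rasterization step.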