Paper Title
RMGN: A Regional Mask Guided Network for Parser-free Virtual Try-on
Paper Authors
Paper Abstract
Virtual try-on (VTON) aims at fitting target clothes to reference person images, and is widely adopted in e-commerce. Existing VTON approaches can be narrowly categorized into Parser-Based (PB) and Parser-Free (PF) methods, according to whether they rely on parser information to mask the person's clothes and synthesize try-on images. Although abandoning parser information has improved the applicability of PF methods, the ability to synthesize details has also been sacrificed. As a result, distraction from the original clothes may persist in synthesized images, especially for complicated postures and high-resolution applications. To address this issue, we propose a novel PF method named Regional Mask Guided Network (RMGN). More specifically, a regional mask is proposed to explicitly fuse the features of target clothes and reference persons so that the persisting distraction can be eliminated. A posture awareness loss and a multi-level feature extractor are further proposed to handle complicated postures and synthesize high-resolution images. Extensive experiments demonstrate that our proposed RMGN outperforms both state-of-the-art PB and PF methods. Ablation studies further verify the effectiveness of the modules in RMGN.
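To make the fusion idea concrete, the following is a minimal sketch of how a regional mask could blend target-clothes features with reference-person features, assuming the mask acts as a learned per-pixel soft weight in [0, 1]. All class, module, and variable names here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: fuse warped-clothes features with person features
# via a predicted soft regional mask. Shapes and names are assumptions.
import torch
import torch.nn as nn


class RegionalMaskFusion(nn.Module):
    """Predicts a single-channel mask and blends two feature maps with it."""

    def __init__(self, channels: int):
        super().__init__()
        # Small head that predicts a mask in [0, 1] from the
        # concatenated clothes/person features.
        self.mask_head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, clothes_feat: torch.Tensor, person_feat: torch.Tensor):
        # Mask near 1 keeps the target-clothes feature at that location,
        # mask near 0 keeps the reference-person feature, suppressing
        # leftover traces of the original clothes.
        mask = self.mask_head(torch.cat([clothes_feat, person_feat], dim=1))
        fused = mask * clothes_feat + (1.0 - mask) * person_feat
        return fused, mask


if __name__ == "__main__":
    fusion = RegionalMaskFusion(channels=64)
    clothes = torch.randn(1, 64, 32, 32)  # warped target-clothes features
    person = torch.randn(1, 64, 32, 32)   # reference-person features
    fused, mask = fusion(clothes, person)
    print(fused.shape, mask.shape)  # (1, 64, 32, 32) and (1, 1, 32, 32)
```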