Paper Title
Clothes-Changing Person Re-identification with RGB Modality Only
Paper Authors
Paper Abstract
The key to addressing clothes-changing person re-identification (re-id) is to extract clothes-irrelevant features, e.g., face, hairstyle, body shape, and gait. Most current works mainly focus on modeling body shape from multi-modality information (e.g., silhouettes and sketches), but do not make full use of the clothes-irrelevant information in the original RGB images. In this paper, we propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images by penalizing the predictive power of the re-id model w.r.t. clothes. Extensive experiments demonstrate that, using RGB images only, CAL outperforms all state-of-the-art methods on widely used clothes-changing person re-id benchmarks. Moreover, compared with images, videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns to assist clothes-changing re-id. Since there is no publicly available clothes-changing video re-id dataset, we contribute a new dataset named CCVID and show that there is substantial room for improvement in modeling spatiotemporal information. The code and new dataset are available at: https://github.com/guxinqian/Simple-CCReID.
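The adversarial idea in the abstract can be sketched in code. Below is a minimal, hypothetical PyTorch illustration of one generic way to "penalize the predictive power of the re-id model w.r.t. clothes": a clothes classifier is trained on detached features, and the backbone is then penalized whenever that classifier can still predict clothes, here via an entropy-maximization term. This is an assumption-laden stand-in, not the paper's actual CAL formulation; the class name `ClothesAdversarialSketch` and the entropy term are invented for illustration, and the real loss lives in the linked repository.

```python
# Hypothetical sketch: penalizing a re-id model's predictive power w.r.t. clothes.
# This is NOT the paper's exact CAL; it swaps in a generic entropy-maximization
# adversarial term. See https://github.com/guxinqian/Simple-CCReID for the real loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClothesAdversarialSketch(nn.Module):
    """Two heads on shared backbone features: an identity classifier trained
    normally, and a clothes classifier used adversarially."""

    def __init__(self, feat_dim: int, num_ids: int, num_clothes: int):
        super().__init__()
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.clothes_head = nn.Linear(feat_dim, num_clothes)

    def forward(self, feats, id_labels, clothes_labels):
        # Identity loss keeps the features discriminative for re-id.
        id_loss = F.cross_entropy(self.id_head(feats), id_labels)

        # The clothes head learns to recognize clothes on *detached* features,
        # so this term trains the head without shaping the backbone.
        clothes_loss = F.cross_entropy(self.clothes_head(feats.detach()),
                                       clothes_labels)

        # Adversarial term: score the (non-detached) features with a frozen
        # copy of the clothes head and push its predictions toward uniform.
        # Minimizing the negative entropy drives clothes information out of feats.
        logits = F.linear(feats,
                          self.clothes_head.weight.detach(),
                          self.clothes_head.bias.detach())
        log_p = F.log_softmax(logits, dim=1)
        adv_loss = (log_p.exp() * log_p).sum(dim=1).mean()  # = -entropy

        return id_loss + clothes_loss + adv_loss

# Shape-only usage example with random tensors.
model = ClothesAdversarialSketch(feat_dim=128, num_ids=10, num_clothes=30)
feats = torch.randn(8, 128, requires_grad=True)
loss = model(feats,
             torch.randint(0, 10, (8,)),
             torch.randint(0, 30, (8,)))
loss.backward()
```

In a real training loop the clothes classifier and the backbone would typically be optimized in alternating steps; the `detach` calls above emulate that separation within a single forward pass.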