Paper Title

Joint Generative and Contrastive Learning for Unsupervised Person Re-identification

Paper Authors

Hao Chen, Yaohui Wang, Benoit Lagadec, Antitza Dantcheva, Francois Bremond

Paper Abstract

Recent self-supervised contrastive learning provides an effective approach for unsupervised person re-identification (ReID) by learning invariance from different views (transformed versions) of an input. In this paper, we incorporate a Generative Adversarial Network (GAN) and a contrastive learning module into one joint training framework. While the GAN provides online data augmentation for contrastive learning, the contrastive module learns view-invariant features for generation. In this context, we propose a mesh-based view generator. Specifically, mesh projections serve as references towards generating novel views of a person. In addition, we propose a view-invariant loss to facilitate contrastive learning between original and generated views. Deviating from previous GAN-based unsupervised ReID methods involving domain adaptation, we do not rely on a labeled source dataset, which makes our method more flexible. Extensive experimental results show that our method significantly outperforms state-of-the-art methods under both fully unsupervised and unsupervised domain adaptive settings on several large-scale ReID datasets.
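
To make the abstract's view-invariant loss concrete, the following is a minimal sketch, not the authors' implementation: an InfoNCE-style contrastive loss that pulls together features of an original image and its GAN-generated novel view, treating other identities in the batch as negatives. The function name view_invariant_loss, the temperature value, and the batch shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def view_invariant_loss(feats_orig, feats_gen, temperature=0.1):
    """feats_orig, feats_gen: (B, D) encoder features of original and generated views."""
    # L2-normalize so dot products become cosine similarities.
    z1 = F.normalize(feats_orig, dim=1)
    z2 = F.normalize(feats_gen, dim=1)
    # Similarity of each original view to every generated view in the batch.
    logits = z1 @ z2.t() / temperature  # (B, B)
    # The generated view of the same person (the diagonal) is the positive.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage: features from a shared encoder applied to real images and GAN outputs.
B, D = 32, 256
loss = view_invariant_loss(torch.randn(B, D), torch.randn(B, D))

In the joint framework described above, a loss of this kind would be minimized alongside the GAN's generation objectives, so that the encoder learns features invariant to the generated view changes.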
