Paper Title
DiP: Learning Discriminative Implicit Parts for Person Re-Identification
Paper Authors
Paper Abstract
In person re-identification (ReID) tasks, many works explore the learning of part features to improve performance over global image features. Existing methods extract part features explicitly, either by using a hand-designed image division or by using keypoints obtained from external visual systems. In this work, we propose to learn Discriminative implicit Parts (DiPs), which are decoupled from explicit body parts. DiPs can therefore learn to extract any discriminative features that help distinguish identities, beyond predefined body parts (such as accessories). Moreover, we propose a novel implicit position that gives a geometric interpretation for each DiP. The implicit position can also serve as a learning signal that encourages DiPs to be more position-equivariant with the identity in the image. Lastly, an additional DiP weighting is introduced to handle invisible or occluded situations and to further improve the feature representation of DiPs. Extensive experiments show that the proposed method achieves state-of-the-art performance on multiple person ReID benchmarks.
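To make the three ingredients in the abstract concrete (implicit part pooling, implicit positions, and per-part weighting), here is a minimal PyTorch-style sketch. Everything in it, including the ImplicitPartHead name, the query-based attention pooling, and the tensor shapes, is an illustrative assumption, not the paper's actual architecture.

```python
# A minimal sketch, assuming a query-based attention design for pooling
# implicit parts; names, shapes, and the attention mechanism are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitPartHead(nn.Module):
    """Pools K implicit part features from a backbone feature map.

    Each learnable query attends over all spatial locations, so a "part"
    is free to cover any discriminative region (not a predefined stripe
    or keypoint). The attention map also yields an implicit 2D position
    (its spatial expectation) and a scalar weight per part that can
    down-weight invisible or occluded parts.
    """

    def __init__(self, dim: int = 256, num_parts: int = 6):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_parts, dim))  # K learnable part queries
        self.weight_fc = nn.Linear(dim, 1)                        # per-part visibility weight

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) backbone feature map
        B, C, H, W = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)                 # (B, H*W, C)

        # Attention of each part query over all spatial tokens.
        attn = torch.einsum("kc,bnc->bkn", self.queries, tokens)  # (B, K, H*W)
        attn = F.softmax(attn / C ** 0.5, dim=-1)

        # Implicit part features: attention-weighted pooling of tokens.
        part_feats = torch.einsum("bkn,bnc->bkc", attn, tokens)   # (B, K, C)

        # Implicit position: attention-weighted expectation of (x, y) coords,
        # normalized to [0, 1]; gives each part a geometric interpretation.
        ys, xs = torch.meshgrid(
            torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij"
        )
        coords = torch.stack([xs, ys], dim=-1).view(1, 1, H * W, 2)
        positions = (attn.unsqueeze(-1) * coords).sum(dim=2)      # (B, K, 2)

        # Per-part weight for handling invisible or occluded parts.
        weights = torch.sigmoid(self.weight_fc(part_feats)).squeeze(-1)  # (B, K)
        return part_feats, positions, weights


# Usage: pool 6 implicit parts from a dummy 256-channel feature map.
head = ImplicitPartHead(dim=256, num_parts=6)
parts, pos, w = head(torch.randn(2, 256, 24, 8))
print(parts.shape, pos.shape, w.shape)  # (2, 6, 256) (2, 6, 2) (2, 6)
```

In such a design, the predicted positions could be supervised or regularized to track the identity's placement in the image (the position-equivariance mentioned in the abstract), and the per-part weights could rescale each part's contribution to the final matching distance.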