Paper Title

Label Structure Preserving Contrastive Embedding for Multi-Label Learning with Missing Labels

Paper Authors

Zhongchen Ma, Lisha Li, Qirong Mao, Songcan Chen

Paper Abstract

Contrastive learning (CL) has shown impressive advances in image representation learning, in both supervised multi-class classification and unsupervised learning. However, these CL methods cannot be directly adapted to multi-label image classification because of the difficulty of defining the positive and negative instances to contrast with a given anchor image in the multi-label scenario, let alone the missing-label one; borrowing the commonly used definition from contrastive multi-class learning would incur many false negative instances that are unfavorable for learning. In this paper, by introducing a label correction mechanism to identify missing labels, we first elegantly generate positives and negatives for the individual semantic labels of an anchor image, and then define a unique contrastive loss for multi-label image classification with missing labels (CLML). This loss accurately brings images close to their true positive and false negative images, and pushes them far away from their true negative images. Unlike existing multi-label CL losses, CLML also preserves low-rank global and local label dependencies in the latent representation space, where such dependencies have been shown to be helpful in dealing with missing labels. To the best of our knowledge, this is the first general multi-label CL loss for the missing-label scenario, and it can therefore be seamlessly paired with the loss of any existing multi-label learning method through a single hyperparameter. The proposed strategy improves the classification performance of the ResNet101 model by margins of 1.2%, 1.6%, and 1.3% on three standard datasets, MSCOCO, VOC, and NUS-WIDE, respectively. Code is available at https://github.com/chuangua/ContrastiveLossMLML.
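
A minimal sketch of the pairing described in the abstract: a label-aware contrastive term added to a standard multi-label classification loss through a single weighting hyperparameter. This is an illustration under simplifying assumptions (PyTorch; positives defined as images sharing at least one label after label correction); the names clml_loss, total_loss, and lambda_clml are hypothetical, not the authors' API. See the linked repository for the actual implementation, including the per-label positive/negative construction and the label-dependency terms.

```python
import torch
import torch.nn.functional as F

def clml_loss(features, labels, temperature=0.1):
    """Illustrative label-wise supervised contrastive loss (a sketch, not the paper's exact loss).

    features: (N, D) L2-normalized image embeddings.
    labels:   (N, C) binary label matrix after the (assumed) label-correction step,
              so recovered missing labels are already set to 1.
    Simplification: two images are positives for an anchor if they share >= 1 label.
    """
    sim = features @ features.t() / temperature            # (N, N) scaled similarities
    pos_mask = (labels.float() @ labels.float().t() > 0).float()
    pos_mask.fill_diagonal_(0)                             # exclude self-pairs from positives
    logits_mask = 1.0 - torch.eye(features.size(0), device=features.device)
    exp_sim = torch.exp(sim) * logits_mask                 # exclude the anchor from the denominator
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)           # avoid division by zero for lonely anchors
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count   # mean log-likelihood of positives per anchor
    return loss.mean()

def total_loss(logits, features, labels, lambda_clml=0.5):
    # Pair the usual multi-label BCE loss with the contrastive term via one
    # hyperparameter, mirroring the "seamless pairing" claim in the abstract.
    bce = F.binary_cross_entropy_with_logits(logits, labels.float())
    return bce + lambda_clml * clml_loss(F.normalize(features, dim=1), labels)
```

With this structure, any existing multi-label objective can stand in for the BCE term; only lambda_clml needs tuning, which is the single-hyperparameter property the abstract highlights.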
