Paper Title

OPERA: Omni-Supervised Representation Learning with Hierarchical Supervisions

Paper Authors

Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

Paper Abstract

The pretrain-finetune paradigm in modern computer vision facilitates the success of self-supervised learning, which tends to achieve better transferability than supervised learning. However, with the availability of massive labeled data, a natural question emerges: how to train a better model with both self and full supervision signals? In this paper, we propose Omni-suPErvised Representation leArning with hierarchical supervisions (OPERA) as a solution. We provide a unified perspective of supervisions from labeled and unlabeled data and propose a unified framework of fully supervised and self-supervised learning. We extract a set of hierarchical proxy representations for each image and impose self and full supervisions on the corresponding proxy representations. Extensive experiments on both convolutional neural networks and vision transformers demonstrate the superiority of OPERA in image classification, segmentation, and object detection. Code is available at: https://github.com/wangck20/OPERA.
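
The abstract describes extracting hierarchical proxy representations for each image and imposing self-supervision and full supervision on the corresponding levels. The snippet below is only a minimal sketch of that idea, not the official implementation (see the repository linked above): it assumes an InfoNCE-style contrastive loss for the self-supervised signal, a cross-entropy loss for the full supervision, and uses hypothetical module and parameter names (OmniSupervisedHead, proj_dim, lam) chosen purely for illustration.

```python
# Minimal sketch (not the official OPERA code) of hierarchical proxy
# representations with omni-supervision: an instance-level embedding
# receives a self-supervised loss, and a class-level embedding built on
# top of it receives a fully supervised loss. All dimensions, names, and
# the loss weight below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OmniSupervisedHead(nn.Module):
    """Maps a backbone feature to two hierarchical proxy representations."""

    def __init__(self, feat_dim: int, proj_dim: int = 128, num_classes: int = 1000):
        super().__init__()
        # Instance-level proxy: used for the self-supervised (contrastive) loss.
        self.instance_proj = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )
        # Class-level proxy is derived from the instance-level one,
        # giving the hierarchy: backbone feature -> instance -> class.
        self.class_proj = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(proj_dim, num_classes)

    def forward(self, feat: torch.Tensor):
        z_inst = F.normalize(self.instance_proj(feat), dim=-1)  # self-supervised level
        z_cls = self.class_proj(z_inst)                          # supervised level
        return z_inst, self.classifier(z_cls)


def info_nce(q: torch.Tensor, k: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Standard InfoNCE between two augmented views; matching rows are positives."""
    logits = q @ k.t() / temperature
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)


def omni_loss(head, feat_q, feat_k, labels, lam: float = 1.0) -> torch.Tensor:
    """Self-supervision on instance-level proxies + full supervision on
    class-level proxies; lam is an assumed weighting factor."""
    z_q, logits_q = head(feat_q)
    z_k, _ = head(feat_k)
    loss_self = info_nce(z_q, z_k.detach())
    loss_full = F.cross_entropy(logits_q, labels)
    return loss_self + lam * loss_full


if __name__ == "__main__":
    head = OmniSupervisedHead(feat_dim=2048, num_classes=10)
    feat_q = torch.randn(8, 2048)   # backbone features of augmented view 1
    feat_k = torch.randn(8, 2048)   # backbone features of augmented view 2
    labels = torch.randint(0, 10, (8,))
    print(omni_loss(head, feat_q, feat_k, labels).item())
```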
