Title
A Survey on Self-Supervised Learning Approaches for Improving Multimodal Representation Learning
Authors
Abstract
Recently, self-supervised learning has seen explosive growth and widespread use across a variety of machine learning tasks because of its ability to avoid the cost of annotating large-scale datasets. This paper gives an overview of prominent self-supervised learning approaches for multimodal learning. The presented approaches were aggregated through an extensive study of the literature and apply self-supervision in different ways. The approaches discussed are cross-modal generation, cross-modal pretraining, cyclic translation, and generating unimodal labels in a self-supervised fashion.