Paper Title
Odd-One-Out Representation Learning
Paper Authors
Paper Abstract
The effective application of representation learning to real-world problems requires both techniques for learning useful representations and robust ways to evaluate the properties of those representations. Recent work in disentangled representation learning has shown that unsupervised representation learning approaches rely on fully supervised disentanglement metrics, which assume access to labels for the ground-truth factors of variation. In many real-world settings, ground-truth factors are expensive to collect or difficult to model, as in perception. Here we empirically show that a weakly-supervised downstream task based on odd-one-out observations is suitable for model selection: scores on this task correlate strongly with performance on a difficult downstream abstract visual reasoning task. We also show that a bespoke metric-learning VAE model that performs well on this task also outperforms standard unsupervised models and a weakly-supervised disentanglement model across several metrics.
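To make the odd-one-out probe concrete, one simple reading is: encode each item of a triplet with the learned representation, then predict the odd one out as the item whose embedding is farthest from the other two. This is a minimal sketch of that idea, not the paper's implementation; the distance-based decision rule and the `predict_odd_one_out` name are illustrative assumptions:

```python
import numpy as np

def predict_odd_one_out(embeddings: np.ndarray) -> int:
    """Given a (3, d) array of representations for a triplet, predict the
    odd one out as the item with the largest total distance to the others."""
    # Pairwise Euclidean distances between the three embeddings.
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    # The odd item is far from both of the two similar items.
    return int(np.argmax(dists.sum(axis=1)))

# Toy example: items 0 and 1 share a latent factor value, item 2 differs.
z = np.array([[0.0, 1.0],
              [0.1, 1.0],
              [2.0, -1.0]])
print(predict_odd_one_out(z))  # → 2
```

Under this reading, model selection reduces to comparing triplet-level accuracy of this zero-shot rule across candidate representations, without ever requiring labels for the underlying factors.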