Paper Title
How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?
Paper Authors
Paper Abstract
Current language models have been criticised for learning language from text alone without connection between words and their meaning. Consequently, multimodal training has been proposed as a way for creating models with better language understanding by providing the lacking connection. We focus on pre-trained multimodal vision-and-language (VL) models for which there already are some results on their language understanding capabilities. An unresolved issue with evaluating the linguistic skills of these models, however, is that there is no established method for adapting them to text-only input without out-of-distribution uncertainty. To find the best approach, we investigate and compare seven possible methods for adapting three different pre-trained VL models to text-only input. Our evaluations on both GLUE and Visual Property Norms (VPN) show that care should be put into adapting VL models to zero-shot text-only tasks, while the models are less sensitive to how we adapt them to non-zero-shot tasks. We also find that the adaptation methods perform differently for different models and that unimodal model counterparts perform on par with the VL models regardless of adaptation, indicating that current VL models do not necessarily gain better language understanding from their multimodal training.
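To make the adaptation problem concrete, the snippet below sketches one straightforward way of feeding a text-only input to a pre-trained VL model: pairing the text with zeroed-out visual features so the model still receives input in its expected format. This is only a minimal illustration, not necessarily one of the seven adaptation methods or three models evaluated in the paper; the VisualBERT checkpoint name, the number of visual regions (36), and the feature dimensionality (2048) are assumptions chosen for the example.

```python
# Minimal sketch (assumption, not the paper's exact method): adapt a
# pre-trained VL model to text-only input by pairing the text with
# all-zero visual features in the format the model expects.
import torch
from transformers import BertTokenizer, VisualBertModel

# Assumed checkpoint; any VisualBERT variant with 2048-dim region
# features should behave the same way here.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")

inputs = tokenizer("The sky is blue.", return_tensors="pt")

# Placeholder visual input: 36 "regions" of zeroed 2048-dim features.
# Both numbers are illustrative assumptions, not values from the paper.
visual_embeds = torch.zeros(1, 36, 2048)
visual_attention_mask = torch.ones(1, 36, dtype=torch.long)
visual_token_type_ids = torch.ones(1, 36, dtype=torch.long)

outputs = model(
    **inputs,
    visual_embeds=visual_embeds,
    visual_attention_mask=visual_attention_mask,
    visual_token_type_ids=visual_token_type_ids,
)

# Use the [CLS] position as a text-only sentence representation.
text_representation = outputs.last_hidden_state[:, 0]
print(text_representation.shape)
```

The out-of-distribution concern raised in the abstract applies directly to choices like the one above: the model never saw all-zero visual features during pre-training, which is exactly why the paper compares several adaptation strategies rather than assuming any single one is safe.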