Paper Title

Focus! Relevant and Sufficient Context Selection for News Image Captioning

Authors

Mingyang Zhou, Grace Luo, Anna Rohrbach, Zhou Yu

Abstract

News Image Captioning requires describing an image by leveraging additional context from a news article. Previous works only coarsely leverage the article to extract the necessary context, which makes it challenging for models to identify relevant events and named entities. In our paper, we first demonstrate that by combining more fine-grained context that captures the key named entities (obtained via an oracle) and the global context that summarizes the news, we can dramatically improve the model's ability to generate accurate news captions. This raises the question: how can we automatically extract such key entities from an image? We propose to use the pre-trained vision-and-language retrieval model CLIP to localize the visually grounded entities in the news article, and then capture the non-visual entities via an open relation extraction model. Our experiments demonstrate that by simply selecting a better context from the article, we can significantly improve the performance of existing models and achieve new state-of-the-art performance on multiple benchmarks.
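The entity-selection step described above can be sketched as follows: score each named entity from the article against the image with an image-text similarity model (the paper uses CLIP) and keep the top-scoring entities as fine-grained visual context. This is a minimal illustration, not the paper's implementation: `clip_score` is a hypothetical stand-in for a real CLIP image-text similarity (e.g. cosine similarity of CLIP embeddings), and the entity list, scores, and `top_k` are made up for the example.

```python
# Sketch of CLIP-style visual grounding of article entities for caption context.
# `clip_score(image, entity)` is assumed to return an image-text similarity;
# in practice it would embed the image and the entity's text span with CLIP.

def select_visual_entities(entities, clip_score, image, top_k=3):
    """Rank named entities from the article by image-text similarity
    and keep the top_k as fine-grained visual context."""
    ranked = sorted(entities, key=lambda e: clip_score(image, e), reverse=True)
    return ranked[:top_k]

# Toy example with a fake scorer standing in for CLIP (scores are invented).
fake_scores = {"Angela Merkel": 0.31, "Berlin": 0.22, "Bundestag": 0.12, "2005": 0.02}
score = lambda img, ent: fake_scores[ent]
print(select_visual_entities(list(fake_scores), score, image=None, top_k=2))
# → ['Angela Merkel', 'Berlin']
```

Non-visual entities (e.g. dates, organizations mentioned but not depicted) would then be added by a separate open relation extraction step, as the abstract describes.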
