Paper Title
One-shot Scene Graph Generation
Paper Authors
Paper Abstract
As a structured representation of image content, the visual scene graph (visual relationship) acts as a bridge between computer vision and natural language processing. Existing models for the scene graph generation task notoriously require tens or hundreds of labeled samples. By contrast, human beings can learn visual relationships from a few or even a single example. Inspired by this, we design a task named One-Shot Scene Graph Generation, where each relationship triplet (e.g., "dog-has-head") comes from only one labeled example. The key insight is that, rather than learning from scratch, one can utilize rich prior knowledge. In this paper, we propose Multiple Structured Knowledge (Relational Knowledge and Commonsense Knowledge) for the one-shot scene graph generation task. Specifically, the Relational Knowledge represents the prior knowledge of relationships between entities extracted from the visual content; e.g., the visual relationships "standing in", "sitting in", and "lying in" may exist between "dog" and "yard". The Commonsense Knowledge encodes "sense-making" knowledge such as "dog can guard yard". By organizing these two kinds of knowledge in a graph structure, Graph Convolutional Networks (GCNs) are used to extract knowledge-embedded semantic features of the entities. Moreover, instead of extracting isolated visual features from each entity generated by Faster R-CNN, we utilize an Instance Relation Transformer encoder to fully explore their context information. On the constructed one-shot dataset, experimental results show that our method outperforms existing state-of-the-art methods by a large margin. Ablation studies also verify the effectiveness of the Instance Relation Transformer encoder and the Multiple Structured Knowledge.
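The abstract describes two components: a GCN that propagates structured knowledge (relational and commonsense edges) into entity semantics, and a Transformer encoder that contextualizes per-instance Faster R-CNN features. The following is a minimal, illustrative PyTorch sketch of how such a pipeline could be wired together; it is not the authors' released code, and all dimensions, layer counts, and names (e.g., `OneShotSGGSketch`, `GCNLayer`) are assumptions made for clarity.

```python
# Illustrative sketch only: a GCN branch over a knowledge graph plus a
# Transformer encoder over instance features, fused per detected entity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (V, V) adjacency with self-loops; symmetrically normalize
        # it as D^{-1/2} A D^{-1/2} before propagating node features.
        deg = adj.sum(dim=-1).clamp(min=1.0)
        d_inv_sqrt = deg.pow(-0.5)
        a_hat = d_inv_sqrt.unsqueeze(-1) * adj * d_inv_sqrt.unsqueeze(0)
        return F.relu(self.linear(a_hat @ h))


class OneShotSGGSketch(nn.Module):
    def __init__(self, vis_dim=1024, sem_dim=300, hid_dim=512, n_heads=8):
        super().__init__()
        # Knowledge branch: two GCN layers over the knowledge graph
        # (relational and commonsense edges merged into one adjacency here).
        self.gcn1 = GCNLayer(sem_dim, hid_dim)
        self.gcn2 = GCNLayer(hid_dim, hid_dim)
        # Visual branch: a Transformer encoder over Faster R-CNN instance
        # features, so each entity attends to every other entity in the image.
        layer = nn.TransformerEncoderLayer(
            d_model=vis_dim, nhead=n_heads, dim_feedforward=2048,
            batch_first=True)
        self.instance_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Linear(vis_dim + hid_dim, hid_dim)

    def forward(self, inst_feats, word_emb, adj, entity_ids):
        # inst_feats: (B, N, vis_dim) pooled RoI features per detected entity.
        # word_emb:   (V, sem_dim) embeddings for every knowledge-graph node.
        # adj:        (V, V) knowledge-graph adjacency (with self-loops).
        # entity_ids: (B, N) graph-node index of each detected entity.
        ctx = self.instance_encoder(inst_feats)          # context-aware visual
        know = self.gcn2(self.gcn1(word_emb, adj), adj)  # knowledge-embedded
        know_per_inst = know[entity_ids]                 # (B, N, hid_dim)
        return self.fuse(torch.cat([ctx, know_per_inst], dim=-1))
```

The fused per-entity features would then feed a relation classifier over subject-object pairs; that head, the knowledge-graph construction, and all training details are beyond this sketch and specific to the paper.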