Paper Title
Exploring Neural Entity Representations for Semantic Information
Paper Authors
Paper Abstract
Neural methods for embedding entities are typically extrinsically evaluated on downstream tasks and, more recently, intrinsically using probing tasks. Downstream task-based comparisons are often difficult to interpret due to differences in task structure, while probing task evaluations often look at only a few attributes and models. We address both of these issues by evaluating a diverse set of eight neural entity embedding methods on a set of simple probing tasks, demonstrating which methods are able to remember words used to describe entities, learn type, relationship and factual information, and identify how frequently an entity is mentioned. We also compare these methods in a unified framework on two entity linking tasks and discuss how they generalize to different model architectures and datasets.
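Below is a minimal sketch of what a probing-task evaluation of the kind described in the abstract might look like, not the authors' actual experimental code: a simple linear classifier is trained on frozen entity embeddings to predict a single property (here, entity type). The embedding matrix, labels, and dimensions are synthetic placeholders; a real experiment would load pretrained entity vectors from one of the eight embedding methods.

```python
# Hypothetical probing-task sketch: train a linear probe on frozen entity
# embeddings and measure how well a property (entity type) can be recovered.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for pretrained entity vectors and their type labels.
num_entities, dim, num_types = 1000, 300, 5
entity_embeddings = rng.normal(size=(num_entities, dim))      # frozen vectors
entity_types = rng.integers(0, num_types, size=num_entities)  # probed property

X_train, X_test, y_train, y_test = train_test_split(
    entity_embeddings, entity_types, test_size=0.2, random_state=0
)

# A linear probe: high held-out accuracy suggests the property is
# (linearly) encoded in the embedding space.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

The same setup can be repeated with different label sets (e.g., relation or frequency information) to compare which properties each embedding method captures.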