Paper Title

SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs

Paper Authors

Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, Jie Tang

Abstract

Entity alignment, aiming to identify equivalent entities across different knowledge graphs (KGs), is a fundamental problem for constructing Web-scale KGs. Over the course of its development, the label supervision has been considered necessary for accurate alignments. Inspired by the recent progress of self-supervised learning, we explore the extent to which we can get rid of supervision for entity alignment. Commonly, the label information (positive entity pairs) is used to supervise the process of pulling the aligned entities in each positive pair closer. However, our theoretical analysis suggests that the learning of entity alignment can actually benefit more from pushing unlabeled negative pairs far away from each other than pulling labeled positive pairs close. By leveraging this discovery, we develop the self-supervised learning objective for entity alignment. We present SelfKG with efficient strategies to optimize this objective for aligning entities without label supervision. Extensive experiments on benchmark datasets demonstrate that SelfKG without supervision can match or achieve comparable results with state-of-the-art supervised baselines. The performance of SelfKG suggests that self-supervised learning offers great potential for entity alignment in KGs. The code and data are available at https://github.com/THUDM/SelfKG.
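As a rough illustration of the idea described above, the following is a minimal, hypothetical PyTorch sketch of a contrastive-style objective in which the positive term collapses to a constant (after L2 normalization, an entity embedding is trivially "aligned" with itself), so optimization reduces to pushing sampled negatives away from the anchor. The function name, temperature value, and negative-sampling scheme are assumptions made for illustration, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F

def negative_only_alignment_loss(anchor_emb, negative_embs, tau=0.08):
    """Contrastive-style loss that only pushes unlabeled negatives apart.

    anchor_emb:    (d,) L2-normalized embedding of an entity from one KG.
    negative_embs: (n, d) L2-normalized embeddings of sampled entities
                   (from either KG) treated as negatives.
    tau:           temperature; the default here is illustrative only.
    """
    # Cosine similarities between the anchor and each sampled negative.
    sims = negative_embs @ anchor_emb  # shape (n,)
    # The "positive" similarity of an entity with itself is exactly 1 after
    # L2 normalization, so the positive term is the constant 1 / tau and
    # training reduces to minimizing the log-sum-exp over negatives.
    return -1.0 / tau + torch.logsumexp(sims / tau, dim=0)

# Toy usage with random, normalized embeddings.
torch.manual_seed(0)
anchor = F.normalize(torch.randn(128), dim=0)
negatives = F.normalize(torch.randn(4096, 128), dim=1)
print(negative_only_alignment_loss(anchor, negatives).item())
```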
