Title

Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions

Authors

Mayur Patidar, Prayushi Faldu, Avinash Singh, Lovekesh Vig, Indrajit Bhattacharya, Mausam

Abstract

When answering natural language questions over knowledge bases, missing facts, incomplete schema and limited scope naturally lead to many questions being unanswerable. While answerability has been explored in other QA settings, it has not been studied for QA over knowledge bases (KBQA). We create GrailQAbility, a new benchmark KBQA dataset with unanswerability, by first identifying various forms of KB incompleteness that make questions unanswerable, and then systematically adapting GrailQA (a popular KBQA dataset with only answerable questions). Experimenting with three state-of-the-art KBQA models, we find that all three models suffer a drop in performance even after suitable adaptation for unanswerable questions. In addition, these often detect unanswerability for wrong reasons and find specific forms of unanswerability particularly difficult to handle. This underscores the need for further research in making KBQA systems robust to unanswerability.
