Paper Title
Towards Data Distillation for End-to-end Spoken Conversational Question Answering
Paper Authors
Paper Abstract
In spoken question answering, QA systems are designed to answer questions by extracting contiguous text spans from the related speech transcripts. However, the most natural way for humans to seek or test knowledge is through conversation. We therefore propose a new Spoken Conversational Question Answering (SCQA) task, which aims to enable QA systems to model complex dialogue flows given speech utterances and text corpora. In this task, our main objective is to build a QA system that handles conversational questions in both spoken and textual form, and to explore whether spoken documents can provide the system with additional cues during information gathering. To this end, instead of relying on automatically generated, highly noisy speech transcripts, we propose a novel unified data distillation approach, DDNet, which directly fuses audio and text features to reduce the misalignment between automatic speech recognition hypotheses and the reference transcriptions. In addition, to evaluate the capacity of QA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 120k question-answer pairs. Experiments demonstrate that our proposed method achieves superior performance in spoken conversational question answering.
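The abstract mentions directly fusing audio and text features rather than relying solely on noisy ASR transcripts. Below is a minimal, hypothetical PyTorch sketch of one such cross-modal fusion scheme, where text tokens attend over audio frames; the module name, dimensions, and fusion strategy are illustrative assumptions, not the authors' actual DDNet implementation.

```python
# Hypothetical sketch: cross-attention fusion of audio and text features.
# All layer names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class AudioTextFusion(nn.Module):
    """Fuse audio frame features with text token features via cross-attention."""

    def __init__(self, audio_dim=512, text_dim=768, hidden_dim=768, num_heads=8):
        super().__init__()
        # Project both modalities into a shared feature space.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Text tokens (queries) attend over audio frames (keys/values) to
        # recover acoustic cues that may be lost or distorted in ASR output.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, text_feats, audio_feats):
        # text_feats:  (batch, num_tokens, text_dim)  e.g., transcript token embeddings
        # audio_feats: (batch, num_frames, audio_dim) e.g., speech encoder outputs
        q = self.text_proj(text_feats)
        kv = self.audio_proj(audio_feats)
        fused, _ = self.cross_attn(q, kv, kv)
        # Residual connection preserves the original textual signal.
        return self.norm(q + fused)


if __name__ == "__main__":
    fusion = AudioTextFusion()
    text = torch.randn(2, 128, 768)   # 2 passages, 128 subword tokens
    audio = torch.randn(2, 300, 512)  # 2 utterances, 300 acoustic frames
    print(fusion(text, audio).shape)  # torch.Size([2, 128, 768])
```

The fused representation could then feed a standard span-prediction QA head; the residual design keeps the text pathway intact so the model degrades gracefully when the audio signal is uninformative.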