Paper Title
One In A Hundred: Select The Best Predicted Sequence from Numerous Candidates for Streaming Speech Recognition
Paper Authors
Paper Abstract
RNN-Transducers and improved attention-based encoder-decoder models are widely applied to streaming speech recognition. Compared with these two end-to-end models, the CTC model is more efficient in training and inference. However, it cannot capture the linguistic dependencies between output tokens. Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and a two-stage inference method into the streaming CTC model. During inference, the CTC decoder first generates many candidates in a streaming fashion. The transformer decoder then selects the best candidate based on the corresponding acoustic encoded states. This second-stage transformer decoder can be regarded as a conditional language model. We assume that a sufficiently large and diverse set of candidates generated in the first stage can compensate for the CTC model's lack of language modeling ability. All experiments are conducted on the Chinese Mandarin dataset AISHELL-1. The results show that our proposed model can implement streaming decoding in a fast and straightforward way. Our model achieves up to a 20% reduction in character error rate over the baseline CTC model. In addition, our model can also perform non-streaming inference with only slight performance degradation.
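The two-stage inference the abstract describes can be sketched as a simple selection problem: the first pass proposes N candidate token sequences, and a second-pass scorer picks the best one. The sketch below is a minimal, hypothetical illustration in Python; `toy_score` and `KNOWN_BIGRAMS` are stand-ins for the paper's transformer decoder conditioned on acoustic encoder states, not the actual model.

```python
def select_best_candidate(candidates, score_fn):
    """Second-stage selection: return the candidate with the
    highest score under the second-pass scorer."""
    return max(candidates, key=score_fn)


# Toy stand-in for the conditional language model: reward candidates
# containing known bigrams (purely illustrative, not the paper's model).
KNOWN_BIGRAMS = {("hello", "world"), ("speech", "recognition")}


def toy_score(tokens):
    """Count how many adjacent token pairs are known bigrams."""
    return sum((a, b) in KNOWN_BIGRAMS for a, b in zip(tokens, tokens[1:]))


# First-pass output: several candidate transcriptions, e.g. from a
# streaming CTC beam search.
candidates = [
    ["hello", "word"],
    ["hollow", "world"],
    ["hello", "world"],
]

best = select_best_candidate(candidates, toy_score)
# best == ["hello", "world"]
```

In the actual system, the second-pass score would come from the transformer decoder attending over the encoder states, so the selection is conditioned on the audio rather than on text alone.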