Paper Title
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Paper Authors
Paper Abstract
We introduce the dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures, corresponding to two different levels of dependency between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously reported best translation performance in the multilingual settings, and also outperform bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.
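The abstract describes a dual-attention mechanism in which each decoder (ASR or ST) attends not only to the speech encoder but also to the other decoder. As a rough illustration of the parallel variant, below is a minimal PyTorch sketch of what one such decoder layer could look like; it is not the authors' released implementation, and all module names, dimensions, and hyperparameters (e.g. `DualAttentionDecoderLayer`, `d_model=256`) are illustrative assumptions rather than values from the paper or the linked repository.

```python
# Minimal sketch (not the authors' code) of a parallel dual-decoder layer
# with dual-attention. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class DualAttentionDecoderLayer(nn.Module):
    """One decoder layer that attends to (1) its own states, (2) the shared
    speech encoder output, and (3) the other decoder's hidden states
    (the dual-attention term)."""

    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.enc_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.dual_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x, other, enc_out):
        # Self-attention over this decoder's own states
        # (the causal mask is omitted in this sketch).
        x = self.norms[0](x + self.self_attn(x, x, x, need_weights=False)[0])
        # Standard cross-attention to the speech encoder output.
        x = self.norms[1](x + self.enc_attn(x, enc_out, enc_out, need_weights=False)[0])
        # Dual-attention: attend to the other decoder's hidden states
        # from the previous layer (parallel variant).
        x = self.norms[2](x + self.dual_attn(x, other, other, need_weights=False)[0])
        return self.norms[3](x + self.ffn(x))


# Toy usage: the ASR and ST decoders attend to each other in parallel.
if __name__ == "__main__":
    enc_out = torch.randn(2, 50, 256)           # speech encoder output
    asr_h = torch.randn(2, 10, 256)             # ASR decoder states (layer l-1)
    st_h = torch.randn(2, 12, 256)              # ST decoder states (layer l-1)
    asr_layer, st_layer = DualAttentionDecoderLayer(), DualAttentionDecoderLayer()
    new_asr = asr_layer(asr_h, st_h, enc_out)   # ASR decoder attends to ST
    new_st = st_layer(st_h, asr_h, enc_out)     # ST decoder attends to ASR
    print(new_asr.shape, new_st.shape)
```

In this reading, the cross variant would instead let each decoder attend to the other decoder's states at the same layer (a tighter dependency), whereas the parallel variant above only uses states already computed at the previous layer, which is what allows the claimed absence of an ASR/ST trade-off to be studied independently per decoder.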