Paper Title
A comprehensive study on self-supervised distillation for speaker representation learning
Paper Authors
Paper Abstract
In real application scenarios, it is often challenging to obtain a large amount of labeled data for speaker representation learning due to speaker privacy concerns. Self-supervised learning without labels has become an increasingly promising way to address this problem. Compared with contrastive learning, self-distillation approaches use only positive samples in the loss function and are therefore more attractive. In this paper, we present a comprehensive study on self-distillation-based self-supervised speaker representation learning, with a particular focus on critical data augmentation. Our proposed audio perturbation augmentation strategy pushes the performance of the speaker representation to a new limit. The experimental results show that our model achieves a new SoTA on the VoxCeleb1 speaker verification evaluation benchmark (i.e., equal error rates (EER) of 2.505%, 2.473%, and 4.791% on the Vox1-O, Vox1-E, and Vox1-H trials, respectively), while discarding all speaker labels in the training phase.
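To make the abstract's two key ingredients concrete, below is a minimal sketch (not the authors' released code) of (a) a DINO-style self-distillation loss that uses only positive pairs, i.e., two augmented views of the same utterance matched between a student and an EMA teacher, and (b) a simple speed-perturbation augmentation by resampling, as one plausible form of the audio perturbation the paper refers to. The function names, temperatures, and momentum value are illustrative assumptions, not values from the paper.

```python
# Sketch of a positive-pair-only self-distillation objective (DINO-style) and a
# simple audio speed perturbation. Hypothetical helper names; hyperparameters
# are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F
import torchaudio.functional as AF


def self_distillation_loss(student_out: torch.Tensor,
                           teacher_out: torch.Tensor,
                           center: torch.Tensor,
                           student_temp: float = 0.1,
                           teacher_temp: float = 0.04) -> torch.Tensor:
    """Cross-entropy between teacher and student output distributions.

    student_out, teacher_out: (batch, dim) projection-head outputs for two
    augmented views of the SAME utterances (positive pairs only, no negatives).
    center: running mean of teacher outputs, subtracted to help avoid collapse.
    """
    student_logp = F.log_softmax(student_out / student_temp, dim=-1)
    teacher_p = F.softmax((teacher_out - center) / teacher_temp, dim=-1).detach()
    return -(teacher_p * student_logp).sum(dim=-1).mean()


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.996) -> None:
    """Teacher weights are an exponential moving average of the student's."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(momentum).add_(s_param.data, alpha=1.0 - momentum)


def speed_perturb(waveform: torch.Tensor, sample_rate: int,
                  factor: float = 1.1) -> torch.Tensor:
    """Speed perturbation by resampling: treats the input as if recorded at
    sample_rate * factor, so the output is shorter/faster (and pitch-shifted)
    when factor > 1. One common form of audio perturbation augmentation."""
    return AF.resample(waveform, orig_freq=int(sample_rate * factor),
                       new_freq=sample_rate)
```

In a training loop, two differently perturbed views of each utterance would be encoded by the student and the teacher, `self_distillation_loss` would be applied symmetrically across the views, and `ema_update` would be called after each optimizer step; this is a generic recipe under the stated assumptions, not the paper's exact configuration.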