Paper Title

TiBERT: Tibetan Pre-trained Language Model

Paper Authors

Yuan Sun, Sisi Liu, Junjie Deng, Xiaobing Zhao

Paper Abstract

Pre-trained language models are trained on large-scale unlabeled text and achieve state-of-the-art results on many downstream tasks. However, current pre-trained language models are concentrated mainly on Chinese and English; for a low-resource language such as Tibetan, no monolingual pre-trained model is available. To promote the development of Tibetan natural language processing, this paper collects large-scale training data from Tibetan websites and uses SentencePiece to construct a vocabulary that covers 99.95% of the words in the corpus. We then train a Tibetan monolingual pre-trained language model, named TiBERT, on this data and vocabulary. Finally, we apply TiBERT to the downstream tasks of text classification and question generation and compare it with classic models and multilingual pre-trained models; the experimental results show that TiBERT achieves the best performance. Our model is published at http://tibert.cmli-nlp.com/
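The vocabulary step described in the abstract relies on SentencePiece. The paper does not publish its exact training command, so the snippet below is only a minimal sketch of how such a subword vocabulary is typically built with the SentencePiece Python API; the file names, vocabulary size, and the character_coverage value (which controls character rather than word coverage) are illustrative assumptions, not the authors' reported settings.

```python
# Minimal sketch: building a subword vocabulary with SentencePiece.
# File names, vocab_size, and character_coverage are illustrative
# assumptions; the abstract only states that the final vocabulary
# covers 99.95% of the words in the Tibetan corpus.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="tibetan_corpus.txt",    # one sentence per line, collected from Tibetan websites
    model_prefix="tibert_sp",      # writes tibert_sp.model and tibert_sp.vocab
    vocab_size=30000,              # hypothetical size, not reported in the abstract
    character_coverage=0.9995,     # SentencePiece coverage knob (characters, not words)
    model_type="unigram",          # SentencePiece's default segmentation algorithm
)

# Load the trained model and segment a sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="tibert_sp.model")
pieces = sp.encode("...", out_type=str)  # replace "..." with Tibetan text
print(pieces)
```

The trained `.model` file would then serve as the tokenizer for pre-training and for the downstream text-classification and question-generation experiments described in the abstract.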
