Title
Efficient HLA imputation from sequential SNPs data by Transformer
Authors
Abstract
Human leukocyte antigen (HLA) genes are associated with a variety of diseases; however, direct typing of HLA is time-consuming and costly. Thus, various imputation methods using sequential SNP data have been proposed based on statistical or deep learning models, e.g., a CNN-based model named DEEP*HLA. However, imputation efficiency is not sufficient for infrequent alleles, and a large reference panel is required. Here, we developed a Transformer-based model to impute HLA alleles, named "HLA Reliable IMputatioN by Transformer (HLARIMNT)", to take advantage of the sequential nature of SNP data. We validated the performance of HLARIMNT using two different reference panels: the Pan-Asian reference panel (n = 530) and the Type 1 Diabetes Genetics Consortium (T1DGC) reference panel (n = 5,225), as well as a mixture of those two panels (n = 1,060). HLARIMNT achieved higher accuracy than DEEP*HLA by several indices, especially for infrequent alleles. We also varied the size of the data used for training, and HLARIMNT imputed more accurately at every training data size. These results suggest that Transformer-based models may efficiently impute not only HLA types but also other gene types from sequential SNP data.
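The core idea of the abstract is to treat SNP genotypes around the HLA region as a sequence and let self-attention aggregate information across positions before classifying the HLA allele. The following is a minimal, untrained NumPy sketch of that pipeline, not the authors' HLARIMNT implementation: the sequence length, embedding size, number of candidate alleles, and all weights are hypothetical placeholders chosen only to illustrate the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 128 SNPs near the HLA region, 4 candidate alleles.
n_snps, d_model, n_alleles = 128, 16, 4

# Embed genotype dosages {0, 1, 2} and add a positional embedding,
# mirroring how a Transformer treats SNPs as an ordered sequence.
genotypes = rng.integers(0, 3, size=n_snps)           # simulated SNP dosages
tok_emb = rng.normal(0, 0.1, size=(3, d_model))       # genotype embedding table
pos_emb = rng.normal(0, 0.1, size=(n_snps, d_model))  # positional embedding
x = tok_emb[genotypes] + pos_emb                      # (n_snps, d_model)

# Single-head self-attention with untrained random weights (illustration only).
Wq, Wk, Wv = (rng.normal(0, 0.1, size=(d_model, d_model)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(d_model), axis=-1)   # (n_snps, n_snps)
h = attn @ v                                          # contextualized SNPs

# Pool across the SNP sequence and classify into HLA alleles.
W_out = rng.normal(0, 0.1, size=(d_model, n_alleles))
probs = softmax(h.mean(axis=0) @ W_out)               # allele probabilities
```

In a trained model the embeddings and weight matrices would be learned from a reference panel, and the output layer would cover the panel's full allele set; the point here is only the sequence-in, allele-distribution-out structure that distinguishes this approach from locus-independent statistical imputation.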