Paper Title
DiP-GNN: Discriminative Pre-Training of Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs. Specifically, a GNN is first pre-trained on a large-scale unlabeled graph and then fine-tuned on a separate small labeled graph for downstream applications, such as node classification. One popular pre-training method is to mask out a proportion of the edges and train a GNN to recover them. However, such a generative method suffers from graph mismatch: the masked graph input to the GNN deviates from the original graph. To alleviate this issue, we propose DiP-GNN (Discriminative Pre-training of Graph Neural Networks). Specifically, we train a generator to recover the identities of the masked edges, and simultaneously we train a discriminator to distinguish the generated edges from the original graph's edges. In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges. Extensive experiments on large-scale homogeneous and heterogeneous graphs demonstrate the effectiveness of the proposed framework.
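The sketch below illustrates the generator/discriminator pre-training loop the abstract describes, in plain PyTorch on a toy random graph. It is a minimal illustration, not the paper's implementation: the module names (SimpleGNN, disc_head), the 20% mask rate, and the ELECTRA-style training (generator trained with cross-entropy on the true endpoints, discriminator trained with binary cross-entropy on original-vs-generated edges, no adversarial gradient through sampling) are assumptions for exposition.

```python
# Minimal sketch of discriminative edge pre-training in the spirit of DiP-GNN.
# All names and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNN(nn.Module):
    """One-layer mean-aggregation GNN: H = ReLU(A_norm @ (X W))."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # row-normalize
        return F.relu((adj / deg) @ self.lin(x))

def edges_to_adj(edges, n):
    """Dense symmetric adjacency from a (2, E) edge-index tensor."""
    adj = torch.zeros(n, n)
    adj[edges[0], edges[1]] = 1.0
    adj[edges[1], edges[0]] = 1.0
    return adj

n, d, h = 100, 16, 32
x = torch.randn(n, d)                            # toy node features
edges = torch.randint(0, n, (2, 400))            # toy undirected edge list

gen_enc = SimpleGNN(d, h)                        # generator encoder
disc_enc = SimpleGNN(d, h)                       # discriminator encoder
disc_head = nn.Linear(2 * h, 1)                  # edge-level real/fake classifier
opt = torch.optim.Adam([*gen_enc.parameters(), *disc_enc.parameters(),
                        *disc_head.parameters()], lr=1e-3)

for step in range(5):
    # 1) Mask a proportion of edges; the generator only sees the rest.
    perm = torch.randperm(edges.size(1))
    n_mask = edges.size(1) // 5                  # 20% mask rate (assumed)
    masked, kept = edges[:, perm[:n_mask]], edges[:, perm[n_mask:]]

    # 2) Generator: for each masked edge's source node, score every node as
    #    the missing endpoint; train with cross-entropy on the true endpoint.
    h_gen = gen_enc(x, edges_to_adj(kept, n))
    logits = h_gen[masked[0]] @ h_gen.T          # (n_mask, n) endpoint scores
    gen_loss = F.cross_entropy(logits, masked[1])

    # 3) Rebuild the graph with sampled endpoints, so the discriminator's
    #    input graph is close to the original one (the abstract's key point).
    probs = F.softmax(logits, dim=-1).detach()   # no gradient through sampling
    sampled_dst = torch.multinomial(probs, 1).squeeze(1)
    gen_edges = torch.stack([masked[0], sampled_dst])
    full_edges = torch.cat([kept, gen_edges], dim=1)

    # 4) Discriminator: classify each edge as original (1) vs. generated (0).
    #    An edge the generator recovered exactly counts as original.
    h_disc = disc_enc(x, edges_to_adj(full_edges, n))
    pair = torch.cat([h_disc[full_edges[0]], h_disc[full_edges[1]]], dim=1)
    is_real = torch.cat([torch.ones(kept.size(1)),
                         (sampled_dst == masked[1]).float()])
    disc_loss = F.binary_cross_entropy_with_logits(
        disc_head(pair).squeeze(1), is_real)

    (gen_loss + disc_loss).backward()
    opt.step()
    opt.zero_grad()
```

After pre-training under this scheme, the discriminator encoder (disc_enc here) would be the network kept and fine-tuned on the downstream labeled graph, since it is the component that sees a graph closely matching the original.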