Paper Title
Unsupervised Adversarially-Robust Representation Learning on Graphs
Paper Authors
Paper Abstract
Unsupervised/self-supervised pre-training methods for graph representation learning have recently attracted increasing research interest, and they have been shown to generalize to various downstream applications. Yet, the adversarial robustness of such pre-trained graph learning models remains largely unexplored. More importantly, most existing defense techniques designed for end-to-end graph representation learning methods require pre-specified label definitions, and thus cannot be directly applied to the pre-training methods. In this paper, we propose an unsupervised defense technique to robustify pre-trained deep graph models, so that perturbations on the input graph can be successfully identified and blocked before the model is applied to different downstream tasks. Specifically, we introduce a mutual information-based measure, \textit{graph representation vulnerability (GRV)}, to quantify the robustness of graph encoders in the representation space. We then formulate an optimization problem to learn the graph representation by carefully balancing the trade-off between the expressive power and the robustness (\emph{i.e.}, GRV) of the graph encoder. The discrete nature of graph topology and the joint space of graph data make the optimization problem intractable to solve. To handle the above difficulty and to reduce computational expense, we further relax the problem and thus provide an approximate solution. Additionally, we explore a provable connection between the robustness of the unsupervised graph encoder and that of models on downstream tasks. Extensive experiments demonstrate that even without access to labels and tasks, our model is still able to enhance robustness against adversarial attacks on three downstream tasks (node classification, link prediction, and community detection) by an average of +16.5% compared with existing methods.
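The trade-off described in the abstract can be sketched in LaTeX as follows. This is an illustrative formulation only, not the paper's exact definitions: the encoder $f$, mutual information $I(\cdot;\cdot)$, perturbation budget $\mathcal{B}(\mathcal{G})$ (e.g., a bounded number of edge flips), and trade-off weight $\beta$ are all notational assumptions introduced here.

```latex
% Illustrative sketch (symbols assumed, not taken from the paper):
% GRV measures how much representational mutual information an
% adversary within budget B(G) can destroy.
\[
\mathrm{GRV}(f) \;=\; I\big(\mathcal{G};\, f(\mathcal{G})\big)
\;-\; \inf_{\mathcal{G}' \in \mathcal{B}(\mathcal{G})}
      I\big(\mathcal{G}';\, f(\mathcal{G}')\big)
\]
% The learning objective then balances expressive power
% (the first mutual-information term) against vulnerability:
\[
\max_{f} \;\; I\big(\mathcal{G};\, f(\mathcal{G})\big)
\;-\; \beta \,\mathrm{GRV}(f), \qquad \beta \ge 0
\]
```

Under this reading, $\beta = 0$ recovers a purely expressiveness-driven objective (as in standard mutual-information-based pre-training), while larger $\beta$ trades expressive power for robustness to perturbations within the budget.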