Paper Title
Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have recently received significant research attention due to their superior performance on a variety of graph-related learning tasks. Most current work focuses on either static or dynamic graph settings and addresses a single task, e.g., node/graph classification or link prediction. In this work, we investigate the question: can GNNs be applied to continually learning a sequence of tasks? To that end, we explore the Continual Graph Learning (CGL) paradigm and present ER-GNN, an experience-replay-based framework for CGL that alleviates the catastrophic forgetting problem in existing GNNs. ER-GNN stores knowledge from previous tasks as experiences and replays them when learning new tasks to mitigate catastrophic forgetting. We propose three experience node selection strategies, mean of feature, coverage maximization, and influence maximization, to guide the selection of experience nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of ER-GNN and shed light on incremental learning over graph (non-Euclidean) structures.
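The "mean of feature" strategy mentioned in the abstract can be pictured concretely: for each class in the current task, keep the nodes whose attribute vectors lie closest to that class's mean feature vector (its prototype), and replay those stored nodes alongside new-task data. Below is a minimal sketch of that selection step, under assumptions of our own; the function name `select_by_mean_of_feature`, the `budget_per_class` parameter, and the toy data are illustrative and not taken from the paper's code.

```python
# Minimal sketch of mean-of-feature experience node selection
# (illustrative only; names and buffer sizes are assumptions).
import numpy as np

def select_by_mean_of_feature(features, labels, budget_per_class):
    """Return indices of nodes closest to their class's mean feature vector."""
    selected = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        class_mean = features[idx].mean(axis=0)          # class prototype
        dists = np.linalg.norm(features[idx] - class_mean, axis=1)
        # keep the budget_per_class nodes nearest to the prototype
        selected.extend(idx[np.argsort(dists)[:budget_per_class]])
    return np.asarray(selected)

# Toy usage: 100 nodes, 16-dim features, 4 classes, keep 5 nodes per class.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
y = rng.integers(0, 4, size=100)
buffer = select_by_mean_of_feature(X, y, budget_per_class=5)
print(buffer)  # indices of experience nodes stored for later replay
```

When the next task arrives, the stored experience nodes would be mixed into each training batch so the model's loss covers both new-task nodes and replayed ones, which is the replay mechanism the abstract describes at a high level.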