Paper Title
Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have become the state-of-the-art method for many applications on graph-structured data. GNNs are a model for graph representation learning, which aims at learning to generate low-dimensional node embeddings that encapsulate structural and feature-related information. GNNs are usually trained in an end-to-end fashion, leading to highly specialized node embeddings. While this approach achieves great results in the single-task setting, the generation of node embeddings that can be used to perform multiple tasks (with performance comparable to single-task models) is still an open problem. We propose the use of meta-learning to allow the training of a GNN model capable of producing multi-task node embeddings. In particular, we exploit the properties of optimization-based meta-learning to learn GNNs that can produce general node representations by learning parameters that can quickly (i.e., with a few steps of gradient descent) adapt to multiple tasks. Our experiments show that the embeddings produced by a model trained with our purposely designed meta-learning procedure can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
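The core mechanism the abstract describes — learning parameters that adapt to each task with a few gradient steps — follows the optimization-based meta-learning pattern popularized by MAML. The minimal sketch below illustrates that pattern on toy scalar tasks, not the authors' actual GNN method: each "task" is a quadratic loss, the inner loop specializes a shared meta-parameter with a few gradient steps, and the outer loop uses a first-order approximation (FOMAML-style) to update the meta-parameter. All names (`theta`, `inner_lr`, `meta_lr`, the task centers) are illustrative assumptions.

```python
# Toy first-order meta-learning sketch (not the paper's GNN implementation).
# Each task t has loss L_t(w) = (w - c_t)^2; the meta-parameter theta is
# trained so that a few inner gradient steps adapt it well to any task.

def grad(w, c):
    # Gradient of the toy task loss (w - c)^2 with respect to w.
    return 2.0 * (w - c)

def adapt(theta, c, inner_lr=0.1, steps=3):
    """Inner loop: a few gradient steps specialize theta to one task."""
    w = theta
    for _ in range(steps):
        w = w - inner_lr * grad(w, c)
    return w

def meta_train(task_centers, meta_lr=0.05, iters=500):
    """Outer loop: update theta with post-adaptation gradients,
    averaged over tasks (first-order approximation of the meta-gradient)."""
    theta = 0.0
    for _ in range(iters):
        meta_g = sum(grad(adapt(theta, c), c) for c in task_centers)
        theta -= meta_lr * meta_g / len(task_centers)
    return theta

theta = meta_train([-1.0, 3.0])
```

For these two symmetric tasks, `theta` converges near the midpoint of the task optima: a single shared initialization from which each task is reachable in a few gradient steps, which is the property the paper exploits to obtain general-purpose node embeddings. Full MAML would additionally backpropagate through the inner-loop steps rather than using the first-order shortcut shown here.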