Paper Title
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning Under the Support-Query Shift
Paper Authors
Paper Abstract
Few-shot learning methods aim to embed data into a low-dimensional embedding space and then classify unseen query data against the seen support set. While these works assume that the support set and the query set lie in the same embedding space, a distribution shift usually occurs between them in the real world, i.e., the Support-Query Shift. Although optimal transportation has shown convincing results in aligning different distributions, we find that small perturbations in the images can significantly misguide the optimal transportation and thus degrade model performance. To relieve this misalignment, we first propose a novel adversarial data augmentation method, namely Perturbation-Guided Adversarial Alignment (PGADA), which generates hard examples in a self-supervised manner. In addition, we introduce Regularized Optimal Transportation to derive a smooth optimal transportation plan. Extensive experiments on three benchmark datasets show that our framework significantly outperforms eleven state-of-the-art methods.
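The "smooth" transportation plan mentioned in the abstract is typically obtained by adding an entropic regularization term to the optimal transport objective and solving it with Sinkhorn iterations. The sketch below is not the authors' code; it is a minimal, self-contained illustration of entropy-regularized optimal transport between support-set and query-set embeddings. The function name `sinkhorn_plan`, the squared-Euclidean cost, and the regularization strength `eps` are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the paper's code) of an
# entropy-regularized optimal transport plan between support and query
# embeddings, computed with plain Sinkhorn iterations.
import numpy as np

def sinkhorn_plan(support, query, eps=0.1, n_iters=200):
    """Return an (n, m) transport plan between support and query embeddings.

    support: (n, d) array of support-set embeddings
    query:   (m, d) array of query-set embeddings
    eps:     entropic regularization; larger values yield smoother plans
    """
    n, m = support.shape[0], query.shape[0]
    # Squared Euclidean cost between every support/query embedding pair.
    cost = ((support[:, None, :] - query[None, :, :]) ** 2).sum(-1)
    K = np.exp(-cost / eps)                          # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                         # Sinkhorn updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]               # transport plan

# Toy usage: align 5 support embeddings with 8 query embeddings in R^16.
rng = np.random.default_rng(0)
plan = sinkhorn_plan(rng.normal(size=(5, 16)), rng.normal(size=(8, 16)))
print(plan.shape, plan.sum())                        # (5, 8), mass ~1.0
```

A larger `eps` spreads each support point's mass over more query points, which makes the resulting plan less sensitive to small perturbations in individual embeddings; this is the intuition behind preferring a regularized plan when the embeddings may be adversarially perturbed, as the abstract describes.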