Paper Title
Bridging the Performance Gap between FGSM and PGD Adversarial Training
Paper Authors
Abstract
Deep learning achieves state-of-the-art performance in many tasks but is exposed to a fundamental vulnerability to adversarial examples. Among existing defense techniques, adversarial training with the projected gradient descent attack (adv.PGD) is considered one of the most effective ways to achieve moderate adversarial robustness. However, adv.PGD requires too much training time, since the projected gradient descent attack (PGD) takes multiple iterations to generate perturbations. On the other hand, adversarial training with the fast gradient sign method (adv.FGSM) requires far less training time, since the fast gradient sign method (FGSM) generates perturbations in a single step, but it fails to increase adversarial robustness. In this work, we extend adv.FGSM so that it achieves the adversarial robustness of adv.PGD. We demonstrate that the large curvature of the loss surface along the FGSM-perturbed direction accounts for the large gap in adversarial robustness between adv.FGSM and adv.PGD, and we therefore propose combining adv.FGSM with a curvature regularization term (adv.FGSMR) to bridge this performance gap. Experiments show that adv.FGSMR trains more efficiently than adv.PGD. Moreover, it achieves comparable adversarial robustness on the MNIST dataset under white-box attack, while on the CIFAR-10 dataset it outperforms adv.PGD under white-box attack and effectively defends against transferable adversarial attacks.
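To make the contrast in the abstract concrete, the following is a minimal PyTorch sketch of the two attacks it discusses, plus a finite-difference curvature penalty along the FGSM direction of the general kind the proposed method alludes to. The function names, the [0, 1] pixel range, and the exact form of the penalty are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


def fgsm_perturb(model, x, y, eps):
    """One-step FGSM: move eps along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Assumes inputs are images scaled to [0, 1].
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def pgd_perturb(model, x, y, eps, alpha, steps):
    """Multi-step PGD: repeat small signed-gradient steps, projecting
    back into the L-infinity ball of radius eps after each step."""
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep pixels valid
    return x_adv.detach()


def curvature_penalty(model, x, y, eps):
    """Hypothetical finite-difference curvature proxy: penalize the change
    of the input gradient along the FGSM direction (the paper's exact
    regularizer may differ). create_graph=True lets the penalty itself
    be backpropagated through the model parameters during training."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    g0, = torch.autograd.grad(loss, x, create_graph=True)
    x1 = (x + eps * g0.sign()).clamp(0, 1).detach().requires_grad_(True)
    loss1 = nn.functional.cross_entropy(model(x1), y)
    g1, = torch.autograd.grad(loss1, x1, create_graph=True)
    return ((g1 - g0) ** 2).sum() / eps ** 2
```

The sketch makes the training-cost argument visible: adv.PGD pays `steps` forward/backward passes per batch to build each perturbation, while adv.FGSM pays one; adding the curvature term costs roughly one extra gradient computation, which is where the claimed efficiency advantage of adv.FGSMR over adv.PGD would come from under these assumptions.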