Paper Title

Attacking deep networks with surrogate-based adversarial black-box methods is easy

Authors

Lord, Nicholas A., Mueller, Romain, Bertinetto, Luca

Abstract

A recent line of work on black-box adversarial attacks has revived the use of transfer from surrogate models by integrating it into query-based search. However, we find that existing approaches of this type underperform their potential, and can be overly complicated besides. Here, we provide a short and simple algorithm which achieves state-of-the-art results through a search which uses the surrogate network's class-score gradients, with no need for other priors or heuristics. The guiding assumption of the algorithm is that the studied networks are in a fundamental sense learning similar functions, and that a transfer attack from one to the other should thus be fairly "easy". This assumption is validated by the extremely low query counts and failure rates achieved: e.g. an untargeted attack on a VGG-16 ImageNet network using a ResNet-152 as the surrogate yields a median query count of 6 at a success rate of 99.9%. Code is available at https://github.com/fiveai/GFCS.
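The core idea in the abstract — a query-based search guided by a surrogate network's class-score gradients — can be illustrated with a minimal toy sketch. This is not the authors' GFCS implementation (see their repository for that); it uses hypothetical linear "models" and an assumed accept-if-improved search loop purely to show the mechanism: propose a step against the surrogate's gradient for the true class, and keep it only if the black-box victim's score confirms progress, counting each victim call as one query.

```python
import numpy as np

def surrogate_grad(x, w_sur, y):
    # Gradient of a linear surrogate's class-y score w_sur[y] . x w.r.t. x.
    # (A real attack would backpropagate through a surrogate network here.)
    return w_sur[y]

def victim_score(x, w_vic, y):
    # Black-box oracle: the score the victim assigns to the true class y.
    return float(w_vic[y] @ x)

def attack(x0, y, w_sur, w_vic, step=0.5, max_queries=50):
    """Drive down the victim's true-class score by stepping against the
    surrogate's gradient; each call to the victim oracle is one query."""
    x = x0.copy()
    best = victim_score(x, w_vic, y)
    queries = 1
    while queries < max_queries:
        g = surrogate_grad(x, w_sur, y)
        cand = x - step * g / (np.linalg.norm(g) + 1e-12)
        s = victim_score(cand, w_vic, y)
        queries += 1
        if s < best:  # victim confirms progress: accept the step
            x, best = cand, s
    return x, best, queries
```

Because the two (toy) models learn similar functions, the surrogate's gradient is a useful descent direction for the victim almost every query, which is the intuition behind the paper's very low query counts.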
