Paper Title

Dynamics-aware Adversarial Attack of Adaptive Neural Networks

Authors

An Tao, Yueqi Duan, Yingqi Wang, Jiwen Lu, Jie Zhou

Abstract

In this paper, we investigate the dynamics-aware adversarial attack problem of adaptive neural networks. Most existing adversarial attack algorithms are designed under a basic assumption -- the network architecture is fixed throughout the attack process. However, this assumption does not hold for many recently proposed adaptive neural networks, which adaptively deactivate unnecessary execution units based on inputs to improve computational efficiency. This results in a serious lagged-gradient issue: the attack learned at the current step becomes ineffective because the architecture changes afterward. To address this issue, we propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient. More specifically, we reformulate the gradients to be aware of the potential dynamic changes of network architectures, so that the learned attack better "leads" the next step than dynamics-unaware methods when the network architecture changes dynamically. Extensive experiments on representative types of adaptive neural networks for both 2D images and 3D point clouds show that our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods. Code is available at https://github.com/antao97/LGM.
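The lagged-gradient issue described above can be illustrated with a toy sketch. This is not the paper's actual LGM; it is a minimal, hypothetical scalar example in which an "adaptive network" hard-switches between two branches based on its input. A dynamics-unaware gradient differentiates only the currently active branch, while a softened gate (one simple way to make the gradient anticipate the branch switch) "leads" the upcoming architecture change:

```python
import math

# Toy adaptive "network": a hard gate activates one of two branches
# depending on the input (the gate flips at x = 1.0).
def hard_net(x):
    return 3.0 * x if x > 1.0 else 0.5 * x

# Dynamics-unaware ("lagged") gradient: differentiate only the branch
# that is active at the current input, ignoring the possible switch.
def lagged_grad(x):
    return 3.0 if x > 1.0 else 0.5

# Hypothetical dynamics-aware surrogate: replace the hard gate with a
# sigmoid so the gradient also reflects how close x is to the switch.
def soft_net(x, temp=20.0):
    g = 1.0 / (1.0 + math.exp(-temp * (x - 1.0)))  # soft gate around 1.0
    return g * 3.0 * x + (1.0 - g) * 0.5 * x

# Central-difference gradient of the softened network.
def leaded_grad(x, temp=20.0, eps=1e-5):
    return (soft_net(x + eps, temp) - soft_net(x - eps, temp)) / (2.0 * eps)
```

Just below the switch point (e.g. x = 0.99), the lagged gradient is a flat 0.5, while the softened gradient is much larger because it senses that a small attack step will flip the gate and activate the steeper branch.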
