Paper Title
Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles
Paper Authors
Paper Abstract
Deep neural network (DNN) models for object detection from camera images are widely adopted in autonomous vehicles. However, DNN models have been shown to be susceptible to adversarial image perturbations. In existing methods of generating adversarial image perturbations, an optimization takes each incoming image frame as the decision variable to generate an image perturbation. Therefore, given a new image, the typically computationally expensive optimization must start over, as no learning is carried over between the independent optimizations. Very few approaches have been developed for attacking online image streams while considering the underlying physical dynamics of autonomous vehicles, their mission, and the environment. We propose a multi-level stochastic optimization framework that monitors an attacker's capability of generating the adversarial perturbations. Based on this capability level, a binary decision (attack / do not attack) is introduced to enhance the effectiveness of the attacker. We evaluate our proposed multi-level image attack framework using simulations of vision-guided autonomous vehicles and physical tests with a small indoor drone in an office environment. The results show our method's capability to generate the image attack in real time while monitoring when the attacker is proficient, given state estimates.
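The per-frame optimization the abstract contrasts against can be illustrated with a minimal FGSM-style sketch: one gradient step per incoming image, with nothing learned between frames. This is a generic illustration, not the paper's multi-level framework; the toy linear "detector" and all names below are assumptions for demonstration only.

```python
import numpy as np

def score(image, w):
    """Toy stand-in for a DNN detector's confidence: a linear score."""
    return float(image.ravel() @ w)

def fgsm_perturb(image, w, eps=0.03):
    """One FGSM-style step against the score, bounded by eps per pixel.

    For the linear score, the gradient w.r.t. the image is simply w.
    A real attack would backpropagate through the detector instead.
    """
    grad = w.reshape(image.shape)
    # Step opposite the gradient sign to suppress the "detection",
    # then clip back to the valid pixel range [0, 1].
    return np.clip(image - eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # one incoming camera frame (toy)
w = rng.standard_normal(64)       # fixed "detector" weights (toy)
adv = fgsm_perturb(img, w)
# The perturbation is bounded and lowers the detector score;
# each new frame would require re-running this from scratch.
```

Because each frame is optimized independently, the cost is paid on every image in the stream, which is the inefficiency the proposed framework targets.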