Paper Title
ReluDiff: Differential Verification of Deep Neural Networks
Paper Authors
Paper Abstract
As deep neural networks are increasingly being deployed in practice, their efficiency has become an important issue. While there are compression techniques for reducing a network's size, energy consumption, and computational requirements, they only demonstrate empirically that accuracy is preserved; they lack formal guarantees for the compressed network, e.g., in the presence of adversarial examples. Existing verification techniques such as Reluplex, ReluVal, and DeepPoly provide formal guarantees, but they are designed to analyze a single network rather than the relationship between two networks. To fill this gap, we develop a new method for differential verification of two closely related networks. Our method consists of a fast but approximate forward interval analysis pass, followed by a backward pass that iteratively refines the approximation until the desired property is verified. We have two main innovations. During the forward pass, we exploit the structural and behavioral similarities of the two networks to more accurately bound the difference between their output neurons. Then, in the backward pass, we leverage gradient differences to more accurately compute the most beneficial refinement. Our experiments show that, compared to state-of-the-art verification tools, our method achieves orders-of-magnitude speedups and is able to prove many more properties.
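To make the forward-pass idea concrete, below is a minimal NumPy sketch of differential interval analysis for two bias-free, fully connected ReLU networks with identical architecture: it propagates interval bounds on the neuron-wise difference between the two networks alongside value bounds for the first network, rather than bounding each network separately and subtracting at the end. The function names (`interval_matmul`, `diff_bounds`) and the bias-free setting are illustrative assumptions; the sketch uses plain concrete intervals and omits both ReluDiff's tighter per-neuron case analysis and the backward refinement pass described in the abstract.

```python
import numpy as np

def interval_matmul(W, lo, hi):
    """Propagate the box [lo, hi] through the linear map W using the
    standard positive/negative weight split."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi, W_pos @ hi + W_neg @ lo

def diff_bounds(weights1, weights2, x_lo, x_hi):
    """Forward interval analysis of the neuron-wise difference between two
    bias-free, fully connected ReLU networks with identical architecture.

    Returns lower/upper bounds on (network2 - network1) at the output layer,
    given that both networks receive the same input from the box [x_lo, x_hi].
    """
    lo1, hi1 = x_lo, x_hi                                  # value bounds, network 1
    dlo, dhi = np.zeros_like(x_lo), np.zeros_like(x_lo)    # same input => zero difference

    for i, (W1, W2) in enumerate(zip(weights1, weights2)):
        # Difference after the linear layer:
        #   W2 @ x2 - W1 @ x1 = W2 @ (x2 - x1) + (W2 - W1) @ x1
        a_lo, a_hi = interval_matmul(W2, dlo, dhi)
        b_lo, b_hi = interval_matmul(W2 - W1, lo1, hi1)
        dlo, dhi = a_lo + b_lo, a_hi + b_hi

        # Value bounds of network 1 after the linear layer.
        lo1, hi1 = interval_matmul(W1, lo1, hi1)

        if i < len(weights1) - 1:                          # hidden layers apply ReLU
            # ReLU is monotone and 1-Lipschitz, so relu(x2) - relu(x1) always
            # lies between 0 and x2 - x1; widening the interval to include 0
            # is sound, though coarser than ReluDiff's per-neuron analysis.
            dlo, dhi = np.minimum(dlo, 0.0), np.maximum(dhi, 0.0)
            lo1, hi1 = np.maximum(lo1, 0.0), np.maximum(hi1, 0.0)

    return dlo, dhi

if __name__ == "__main__":
    # Hypothetical usage: a random network and a slightly perturbed copy,
    # standing in for an original network and its compressed version.
    rng = np.random.default_rng(0)
    w1 = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
    w2 = [W + 1e-3 * rng.normal(size=W.shape) for W in w1]
    lo, hi = diff_bounds(w1, w2, np.full(4, -1.0), np.full(4, 1.0))
    print("output difference bounds:", lo, hi)
```

If the resulting bounds on the output difference already fall inside the property of interest (e.g., |difference| below a tolerance), the property is verified; otherwise, the paper's backward pass would refine the over-approximation before retrying, which this sketch does not model.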