Paper Title

Generate and Verify: Semantically Meaningful Formal Analysis of Neural Network Perception Systems

Paper Authors

Chris R. Serrano, Pape M. Sylla, Michael A. Warren

Paper Abstract

Testing remains the primary method for evaluating the accuracy of neural network perception systems. Prior work on the formal verification of neural network perception models has been limited to notions of local adversarial robustness for classification with respect to individual image inputs. In this work, we propose a notion of global correctness for neural network perception models performing regression with respect to a generative neural network with a semantically meaningful latent space. That is, over the infinite set of images produced by the generative model on an interval of its latent space, we employ neural network verification to prove that the perception model will always produce estimates within some error bound of the ground truth. Where the perception model fails, we obtain semantically meaningful counterexamples that carry information about concrete states of the system of interest and can be used programmatically, without human inspection of the corresponding generated images. Our approach, Generate and Verify, provides a new technique for gaining insight into the failure cases of neural network perception systems and for providing meaningful guarantees of correct behavior in safety-critical applications.
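
To make the verified property concrete, the following Python sketch illustrates the Generate-and-Verify idea on toy fully connected networks using interval bound propagation (IBP). This is an illustration under assumed placeholder networks, not the paper's implementation or its verifier; all layer shapes, weights, and function names below are hypothetical.

import numpy as np

def ibp_affine(lo, hi, W, b):
    # Sound interval bounds for an affine layer x -> W @ x + b.
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def ibp_forward(layers, lo, hi):
    # Propagate an input interval through affine layers with ReLU between them.
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # no ReLU after the final layer
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def verify_cell(gen_layers, perc_layers, z_lo, z_hi, eps):
    # Certify that, for every latent z in [z_lo, z_hi], the perception
    # estimate on the generated image G(z) stays near the ground truth z.
    # Plain IBP does not track the correlation between z and the estimate,
    # so a "verified" result certifies |P(G(z)) - z| <= eps + (z_hi - z_lo);
    # in practice the latent interval is therefore split into small cells.
    img_lo, img_hi = ibp_forward(gen_layers, z_lo, z_hi)       # image bounds
    est_lo, est_hi = ibp_forward(perc_layers, img_lo, img_hi)  # estimate bounds
    ok = bool(np.all(est_lo >= z_lo - eps) and np.all(est_hi <= z_hi + eps))
    return ok, (est_lo, est_hi)  # on failure, the cell is the counterexample

# Tiny random networks standing in for a trained generator G and regressor P.
rng = np.random.default_rng(0)
G = [(0.3 * rng.normal(size=(8, 1)), np.zeros(8)),
     (0.3 * rng.normal(size=(4, 8)), np.zeros(4))]
P = [(0.3 * rng.normal(size=(8, 4)), np.zeros(8)),
     (0.3 * rng.normal(size=(1, 8)), np.zeros(1))]
verified, bounds = verify_cell(G, P, np.array([0.0]), np.array([0.05]), eps=0.1)
print(verified, bounds)

When the check fails, the violating latent cell itself names a concrete system state (for instance, the physical quantity a latent dimension encodes), which is what makes such counterexamples usable programmatically without rendering and inspecting the generated images.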
