Paper Title


Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective

Authors

Bohang Zhang, Du Jiang, Di He, Liwei Wang

Abstract


Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples. However, the relevant progress for the important $\ell_\infty$ perturbation setting is rather limited, and a principled understanding of how to design expressive $\ell_\infty$ Lipschitz networks is still lacking. In this paper, we bridge the gap by studying certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions. We derive two fundamental impossibility results that hold for any standard Lipschitz network: one for robust classification on finite datasets, and the other for Lipschitz function approximation. These results identify that networks built upon norm-bounded affine layers and Lipschitz activations intrinsically lose expressive power even in the two-dimensional case, and shed light on how recently proposed Lipschitz networks (e.g., GroupSort and $\ell_\infty$-distance nets) bypass these impossibilities by leveraging order statistic functions. Finally, based on these insights, we develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained (making certified robust training free). Extensive experiments show that our approach is scalable, efficient, and consistently yields better certified robustness across multiple datasets and perturbation radii than prior Lipschitz networks. Our code is available at https://github.com/zbh2047/SortNet.
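The abstract notes that networks such as $\ell_\infty$-distance nets bypass the impossibility results by using order-statistic-style units instead of norm-bounded affine layers. The sketch below is an illustrative (not paper-official) NumPy check of the key property: a single $\ell_\infty$-distance neuron $f(x) = \lVert x - w\rVert_\infty - b$ is 1-Lipschitz with respect to the $\ell_\infty$ norm, by the reverse triangle inequality. The names `linf_dist_neuron`, `w`, and `b` are assumptions for illustration.

```python
import numpy as np

def linf_dist_neuron(x, w, b=0.0):
    # An l_inf-distance neuron: f(x) = ||x - w||_inf - b.
    # 1-Lipschitz w.r.t. the l_inf norm, since by the reverse triangle
    # inequality | ||x-w||_inf - ||y-w||_inf | <= ||x-y||_inf.
    return np.max(np.abs(x - w)) - b

# Numerically spot-check the 1-Lipschitz property on random input pairs.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
for _ in range(1000):
    x, y = rng.normal(size=8), rng.normal(size=8)
    lhs = abs(linf_dist_neuron(x, w) - linf_dist_neuron(y, w))
    rhs = np.max(np.abs(x - y))
    assert lhs <= rhs + 1e-12
print("1-Lipschitz check passed")
```

Because the Lipschitz constant of each such unit is exactly bounded by construction, a certified $\ell_\infty$ radius follows directly from the output margin, with no extra certification procedure.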
