Paper Title
SmoothNets: Optimizing CNN architecture design for differentially private deep learning
Paper Authors
Paper Abstract
Arguably the most widely employed algorithm for training deep neural networks with Differential Privacy is DP-SGD, which requires clipping and noising of per-sample gradients. This reduces model utility compared to non-private training. Empirically, this accuracy degradation depends strongly on the model architecture. We investigated this phenomenon and, by combining components which exhibit good individual performance, distilled a new model architecture termed SmoothNet, which is characterised by increased robustness to the challenges of DP-SGD training. Experimentally, we benchmark SmoothNet against standard architectures on two benchmark datasets and observe that our architecture outperforms others, reaching an accuracy of 73.5\% on CIFAR-10 and 69.2\% on ImageNette, both at $\varepsilon=7.0$, a state-of-the-art result compared to prior architectural modifications for DP.
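The clipping-and-noising step the abstract refers to is the core of DP-SGD: each example's gradient is clipped to a fixed L2 norm, the clipped gradients are summed, and Gaussian noise calibrated to the clipping bound is added before averaging. Below is a minimal sketch of that step, assuming the per-sample gradients have already been materialised as a NumPy array; `dp_sgd_step` and its parameter names are illustrative, not code from the paper.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD gradient aggregation step (illustrative sketch).

    per_sample_grads: array of shape (batch_size, num_params),
    one flattened gradient row per training example.
    """
    # Clip each per-sample gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale

    # Add Gaussian noise scaled to the clipping bound (sensitivity),
    # then average over the batch to get the private update direction.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(clipped)

# Usage: a batch of 32 examples with 10 parameters each.
rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Because both the clipping bias and the added noise distort the update, different architectural components tolerate this step differently, which is the effect the paper's SmoothNet design targets.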