Paper Title
SS-Auto: A Single-Shot, Automatic Structured Weight Pruning Framework of DNNs with Ultra-High Efficiency
Paper Authors
Paper Abstract
Structured weight pruning is a representative model compression technique for DNNs, enabling hardware efficiency and inference acceleration. Previous works in this area leave substantial room for improvement, since sparse structures that combine different structured pruning schemes are not exploited fully and efficiently. To mitigate these limitations, we propose SS-Auto, a single-shot, automatic structured pruning framework that achieves row pruning and column pruning simultaneously. We adopt a soft-constraint-based formulation to alleviate the strong non-convexity of the l0-norm constraints used in state-of-the-art ADMM-based methods, yielding faster convergence and fewer hyperparameters. Instead of solving the problem directly, we propose a Primal-Proximal solution that avoids the pitfall of penalizing all weights equally, thereby enhancing accuracy. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that the proposed framework achieves ultra-high pruning rates while maintaining accuracy. Furthermore, significant inference speedup is observed through actual measurements on a smartphone.
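To make the notion of simultaneous row and column pruning concrete, the following is a minimal NumPy sketch that zeroes out whole rows and columns of a weight matrix by L2-norm magnitude. This is only an illustration of the sparse structure the abstract refers to; SS-Auto itself learns which rows and columns to prune automatically via its soft-constraint Primal-Proximal optimization, not by the simple magnitude thresholding used here, and the function name and keep-ratio parameters are hypothetical.

```python
import numpy as np

def structured_prune(W, row_keep_ratio=0.5, col_keep_ratio=0.5):
    """Illustrative row + column structured pruning of a 2-D weight matrix.

    Rows and columns with the smallest L2 norms are zeroed out entirely,
    so the surviving nonzeros form a dense sub-block that hardware can
    exploit for inference acceleration.
    """
    W = W.copy()

    # Row pruning: keep the rows with the largest L2 norms.
    row_norms = np.linalg.norm(W, axis=1)
    n_rows_keep = max(1, int(round(row_keep_ratio * W.shape[0])))
    pruned_rows = np.argsort(row_norms)[: W.shape[0] - n_rows_keep]
    W[pruned_rows, :] = 0.0

    # Column pruning: keep the columns with the largest L2 norms
    # (computed after row pruning, mirroring the simultaneous scheme).
    col_norms = np.linalg.norm(W, axis=0)
    n_cols_keep = max(1, int(round(col_keep_ratio * W.shape[1])))
    pruned_cols = np.argsort(col_norms)[: W.shape[1] - n_cols_keep]
    W[:, pruned_cols] = 0.0

    return W
```

With both keep ratios at 0.5, a 4x4 matrix retains a 2x2 dense block of its largest-norm rows and columns, i.e. a 75% structured pruning rate.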