Paper Title
SBPF: Sensitiveness Based Pruning Framework For Convolutional Neural Network On Image Classification
Paper Authors
Paper Abstract
Pruning techniques are used extensively to compress convolutional neural networks (CNNs) for image classification. However, most pruning methods require a well pre-trained model to provide supporting parameters, such as the ℓ1-norm, BatchNorm values, and gradient information, which may lead to inconsistent filter evaluation if the parameters of the pre-trained model are not well optimized. Therefore, we propose a sensitiveness based method that evaluates the importance of each layer from the perspective of inference accuracy by adding extra damage to the original model. Because accuracy is determined by the distribution of parameters across all layers rather than by any individual parameter, the sensitiveness based method is robust to parameter updates. Namely, we obtain similar importance evaluations for each convolutional layer whether the model is imperfectly or fully trained. For VGG-16 on CIFAR-10, even when the original model is trained for only 50 epochs, we obtain the same evaluation of layer importance as when the model is fully trained. We then remove a proportion of filters from each layer according to its quantified sensitiveness. Our sensitiveness based pruning framework is validated on VGG-16, a customized Conv-4, and ResNet-18 with CIFAR-10, MNIST, and CIFAR-100, respectively.
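The abstract describes the sensitiveness probe only at a high level. Below is a minimal PyTorch sketch of how such a probe could look: each convolutional layer is damaged in turn by silencing a fraction of its filters, and the resulting drop in inference accuracy is recorded as that layer's sensitiveness. The helper names (`evaluate_accuracy`, `layer_sensitiveness`), the choice of which filters to silence (here simply the first `damage_ratio` fraction), and the damage ratio itself are illustrative assumptions, not the authors' reference implementation.

```python
# A minimal sketch of a sensitiveness probe, assuming a PyTorch CNN
# classifier and a held-out evaluation loader. Not the paper's code.
import copy

import torch
import torch.nn as nn


@torch.no_grad()
def evaluate_accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` on `loader`."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


@torch.no_grad()
def layer_sensitiveness(model, loader, damage_ratio=0.5, device="cpu"):
    """For each Conv2d layer, zero out `damage_ratio` of its filters
    (the extra 'damage') and record the drop in inference accuracy.
    A larger drop marks the layer as more sensitive, so it should keep
    more of its filters when per-layer pruning ratios are assigned."""
    base_acc = evaluate_accuracy(model, loader, device)
    sensitiveness = {}
    for name, module in model.named_modules():
        if not isinstance(module, nn.Conv2d):
            continue
        # Work on a copy so the original model is never modified.
        damaged = copy.deepcopy(model)
        conv = dict(damaged.named_modules())[name]
        # Assumption: damage the first fraction of filters; random or
        # magnitude-based selection would also fit the description.
        n_damaged = int(conv.out_channels * damage_ratio)
        conv.weight[:n_damaged] = 0.0
        if conv.bias is not None:
            conv.bias[:n_damaged] = 0.0
        sensitiveness[name] = base_acc - evaluate_accuracy(
            damaged, loader, device
        )
    return sensitiveness
```

Under this reading, layers with a small accuracy drop tolerate more damage, so one plausible way to use the result is to assign each layer a pruning ratio that decreases with its measured sensitiveness; the exact mapping from sensitiveness to ratio is the paper's quantification step and is not reproduced here.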