SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration

Introduction

Accelerating the inference of convolutional neural networks (CNNs) is critical to their deployment in real-world applications. Among pruning approaches, sparsity learning frameworks have proven effective because they learn and prune models in an end-to-end, data-driven manner. However, existing works impose the same sparsity regularization on all filters indiscriminately, which rarely yields an optimal structured-sparse network. In this paper, we propose Saliency-Adaptive Sparsity Learning (SASL) for further optimization. We design a novel and effective per-filter estimate, saliency, measured from two aspects: the filter's importance to prediction performance and the computational resources it consumes. During sparsity learning, the regularization strength is adjusted according to saliency, so the optimized network better preserves prediction performance while zeroing out more computation-heavy filters. Computing saliency adds minimal overhead to training, making SASL very efficient. During the pruning phase, a hard sample mining strategy is used to optimize the proposed data-dependent criterion, improving both effectiveness and efficiency. Extensive experiments demonstrate the superior performance of our method. Notably, on the ILSVRC-2012 dataset, our approach reduces the FLOPs of ResNet-50 by 49.7% with negligible degradation of 0.39% top-1 and 0.05% top-5 accuracy.
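The abstract does not give SASL's exact formulas, but the core idea of saliency-adaptive regularization can be sketched. Below is a hypothetical illustration in Python: each filter's saliency combines a measure of prediction importance (e.g., the magnitude of a BatchNorm scale factor, as in network-slimming-style methods) with its FLOPs cost, and the per-filter sparsity penalty is made inversely proportional to saliency, so heavy but unimportant filters are pushed harder toward zero. The specific formulas (`imp / cost` and the rescaling by the mean) are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def adaptive_reg_strength(importance, flops, base_lambda=1e-4):
    """Sketch of saliency-adaptive regularization strengths (hypothetical).

    importance : per-filter importance scores (e.g., |BN scale factors|)
    flops      : per-filter computational cost
    Returns a per-filter regularization coefficient: low-saliency
    (unimportant, computation-heavy) filters get a larger penalty.
    """
    # Normalize both factors to shares across filters.
    imp = importance / importance.sum()
    cost = flops / flops.sum()
    # Assumed saliency: important-but-cheap filters score high.
    sal = imp / cost
    # Invert saliency so regularization concentrates on low-saliency
    # filters; rescale by the mean so the average strength stays near
    # base_lambda.
    return base_lambda * sal.mean() / sal

# Such per-filter coefficients would then weight an L1 penalty on the
# scale factors during training:
#   loss = task_loss + sum(lam_i * |gamma_i|)
```

For example, a filter with low importance but high FLOPs receives the largest coefficient, while an important, cheap filter receives the smallest, which matches the abstract's goal of preserving prediction performance while zeroing out computation-heavy filters.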

Paper

Jun Shi, Jianfeng Xu, Kazuyuki Tasaka, and Zhibo Chen, "SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration," IEEE Transactions on Circuits and Systems for Video Technology, Early Access, 2020.

Download

To get the source code, please check this website.