Prune Your Neurons Blindly: Neural Network Compression Through Structured Class-Blind Pruning

Abdullah Salama, Oleksiy Ostapenko, Tassilo Klein and Moin Nabi

SAP AI Research
Mar 26, 2019


International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), Brighton, UK

The high performance of deep learning models typically comes at the cost of considerable model size and computation time. These factors limit their applicability for deployment on memory- and battery-constrained devices such as mobile phones or embedded systems. In this work, we propose a novel pruning technique that eliminates entire filters and neurons according to their relative L1-norm compared to the rest of the network, yielding more compression and decreased parameter redundancy. The resulting network is non-sparse, yet much more compact, and requires no special infrastructure for deployment. We demonstrate the viability of our method by achieving 97.4%, 86.1%, 47.8% and 53% compression of LeNet-5, VGG-16, ResNet-56 and ResNet-110 respectively, exceeding state-of-the-art compression results reported on VGG-16 and ResNet without losing any performance compared to the baseline. Our approach not only exhibits good performance but is also easy to implement.
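To illustrate the core idea, here is a minimal sketch (written in PyTorch; this is not the authors' released code) of class-blind structured pruning: rather than pruning each layer against its own threshold, every filter in the network is ranked by its normalized L1-norm in one global list, and the smallest fraction is removed network-wide. The function names and the normalization by filter size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def class_blind_filter_ranking(model: nn.Module, prune_ratio: float = 0.5):
    """Rank every conv filter in the whole network by its normalized
    L1-norm and return the globally smallest fraction to prune."""
    scores = []  # (normalized L1-norm, layer name, filter index)
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight.data  # (out_channels, in_channels, kH, kW)
            # L1-norm per filter, divided by filter size so layers with
            # different kernel shapes are comparable in a single ranking
            norms = w.abs().sum(dim=(1, 2, 3)) / w[0].numel()
            for i, n in enumerate(norms.tolist()):
                scores.append((n, name, i))
    # "Class-blind": one global ranking over all layers at once
    scores.sort(key=lambda t: t[0])
    n_prune = int(prune_ratio * len(scores))
    return scores[:n_prune]

def apply_pruning_mask(model: nn.Module, pruned):
    """Zero out the selected filters -- a simple stand-in for physically
    removing them and shrinking the next layer's input channels."""
    modules = dict(model.named_modules())
    with torch.no_grad():
        for _, name, idx in pruned:
            modules[name].weight[idx].zero_()
            if modules[name].bias is not None:
                modules[name].bias[idx] = 0.0
```

After the lowest-ranked filters are removed, the network is fine-tuned to recover any lost accuracy; because whole filters are eliminated rather than individual weights, the result stays dense and runs on standard hardware, as noted in the abstract.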

PDF
