
Computer Engineering ›› 2021, Vol. 47 ›› Issue (10): 61-66. doi: 10.19678/j.issn.1000-3428.0059375

• Artificial Intelligence and Pattern Recognition •

  • About the authors: WEI Yue (born 1996), male, M.S. candidate; his main research interest is model compression and acceleration. CHEN Shichao, assistant researcher, Ph.D.; ZHU Fenghua, associate researcher, Ph.D.
  • Funding: Joint Fund of the National Natural Science Foundation of China and the Zhejiang Provincial Government for the Integration of Informatization and Industrialization (U1909204); Guangdong Basic and Applied Basic Research Foundation (2019B1515120030).

Pruning Method for Convolutional Neural Network Models Based on Sparse Regularization

WEI Yue1,2, CHEN Shichao2,3, ZHU Fenghua2, XIONG Gang2   

  1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China;
    2. State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China;
    3. Faculty of Information Technology, Macau University of Science and Technology, Macao 999078, China
  • Received:2020-08-27 Revised:2020-10-20 Published:2020-10-20


Abstract: Existing pruning methods for Convolutional Neural Network (CNN) models rely solely on the parameters' own information to evaluate their importance, which makes the evaluation inaccurate, easily leads to mispruning, and degrades the overall performance of the model. To address this problem, an improved pruning method for CNN models is proposed. The model is first trained with sparse regularization to obtain a deep CNN model with sparse parameters; structured pruning is then performed by combining the sparsity of the convolutional layers and the BN layers to remove redundant filters. Experimental results on the CIFAR-10, CIFAR-100, and SVHN datasets show that the proposed method effectively compresses the network model and reduces its computational complexity. In particular, on the SVHN dataset, the compressed VGG-16 model reduces the number of parameters and FLOPs by 97.3% and 91.2%, respectively, while image classification accuracy drops by only 0.57 percentage points.

Key words: deep learning, model pruning, Convolutional Neural Network(CNN), sparse constraint, model compression
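The abstract describes training with a sparsity constraint and then removing redundant filters by combining convolutional-layer and BN-layer sparsity. A minimal PyTorch sketch of that general idea (in the style of BN-scale-factor pruning) is shown below; the function names, the penalty weight `l1_lambda`, and the `keep_ratio` threshold are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, l1_lambda=1e-4):
    """L1 penalty on all BatchNorm scale factors (gamma); added to the
    task loss during training to drive channel-level sparsity."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return l1_lambda * penalty

def select_channels(bn, keep_ratio=0.5):
    """Indices of the channels with the largest |gamma|; the rest are
    treated as redundant and pruned."""
    gamma = bn.weight.detach().abs()
    k = max(1, int(keep_ratio * gamma.numel()))
    return torch.topk(gamma, k).indices.sort().values

# Toy example: prune one conv + BN block down to half its filters.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(8)
keep = select_channels(bn, keep_ratio=0.5)

# Build the pruned block by slicing out the kept filters.
pruned_conv = nn.Conv2d(3, len(keep), kernel_size=3, padding=1)
pruned_conv.weight.data = conv.weight.data[keep].clone()
pruned_conv.bias.data = conv.bias.data[keep].clone()

pruned_bn = nn.BatchNorm2d(len(keep))
for name in ("weight", "bias", "running_mean", "running_var"):
    getattr(pruned_bn, name).data = getattr(bn, name).data[keep].clone()

x = torch.randn(1, 3, 32, 32)
y = pruned_bn(pruned_conv(x))
print(y.shape)  # torch.Size([1, 4, 32, 32])
```

During sparse training, the total loss would be the task loss plus `bn_l1_penalty(model)`; after pruning, a short fine-tuning pass is typically used to recover accuracy.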
