
Pruning Method for Convolutional Neural Network Models Based on Sparse Regularization
Cite this article: WEI Yue, CHEN Shichao, ZHU Fenghua, XIONG Gang. Pruning Method for Convolutional Neural Network Models Based on Sparse Regularization[J]. Computer Engineering, 2021, 47(10): 61-66. DOI: 10.19678/j.issn.1000-3428.0059375
Authors: WEI Yue  CHEN Shichao  ZHU Fenghua  XIONG Gang
Affiliations: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Faculty of Information Technology, Macau University of Science and Technology, Macao 999078, China
Funding: NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization (U1909204); Guangdong Basic and Applied Basic Research Foundation (2019B1515120030).
Abstract: Existing pruning methods for convolutional neural network models rely only on a model's own parameter information, making it difficult to evaluate parameter importance accurately; this easily leads to mispruning and degrades overall model performance. An improved pruning method for convolutional neural network models is proposed: the model is trained with sparse regularization to obtain a deep convolutional neural network with sparse parameters, and structured pruning that combines the sparsity of the convolutional and BN layers removes redundant filters. Experiments on the CIFAR-10, CIFAR-100 and SVHN datasets show that the method effectively compresses the network model and reduces computational complexity.

Keywords: deep learning  model pruning  convolutional neural network  sparse constraint  model compression
Received: 2020-08-27
Revised: 2020-10-20

Pruning Method for Convolutional Neural Network Models Based on Sparse Regularization
WEI Yue,CHEN Shichao,ZHU Fenghua,XIONG Gang. Pruning Method for Convolutional Neural Network Models Based on Sparse Regularization[J]. Computer Engineering, 2021, 47(10): 61-66. DOI: 10.19678/j.issn.1000-3428.0059375
Authors:WEI Yue  CHEN Shichao  ZHU Fenghua  XIONG Gang
Affiliation:1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China;2. State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China;3. Faculty of Information Technology, Macau University of Science and Technology, Macao 999078, China
Abstract: Existing pruning algorithms for Convolutional Neural Network (CNN) models rely only on a model's own parameter information and therefore evaluate parameter importance inaccurately, which easily leads to mispruning and degrades model performance. To address this problem, an improved pruning method for CNN models is proposed. By training the model with sparse regularization, a deep convolutional neural network model with sparse parameters is obtained. Structured pruning is then performed by combining the sparsity of the convolutional layer and the BN layer to remove redundant filters. Experimental results on the CIFAR-10, CIFAR-100 and SVHN datasets show that the proposed pruning method effectively compresses the network model and reduces computational complexity. On the SVHN dataset in particular, the compressed VGG-16 network model reduces the parameter count and FLOPs by 97.3% and 91.2%, respectively, while image classification accuracy drops by only 0.57 percentage points.
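The sparsity-then-prune pipeline described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the L1 penalty on BN scale factors, and the keep-ratio thresholding are illustrative assumptions about how such a scheme is commonly realized. An L1 subgradient step shrinks the BN scale factors toward zero during training, and channels whose factors remain smallest in magnitude are masked out as redundant.

```python
import numpy as np

def l1_subgradient_step(gamma, lr, lam):
    """One sparsity-inducing update on BN scale factors gamma:
    a gradient step on the penalty lam * |gamma|, whose subgradient
    is sign(gamma). Repeated during training, this drives the scale
    factors of unimportant channels toward zero."""
    return gamma - lr * lam * np.sign(gamma)

def prune_mask(gamma, prune_ratio):
    """Return a boolean keep-mask over channels: keep the
    (1 - prune_ratio) fraction of channels with the largest |gamma|,
    treating small-|gamma| channels as redundant filters."""
    n_keep = max(1, int(len(gamma) * (1 - prune_ratio)))
    threshold = np.sort(np.abs(gamma))[::-1][n_keep - 1]
    return np.abs(gamma) >= threshold

# After sparse training, most gamma values cluster near zero;
# pruning half the channels removes exactly those near-zero ones.
gamma = np.array([0.9, 0.01, 0.5, 0.002])
mask = prune_mask(gamma, prune_ratio=0.5)  # [True, False, True, False]
```

In a real network, the mask would then be used to physically remove the corresponding filters from the preceding convolutional layer and the matching BN parameters, followed by fine-tuning to recover accuracy.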
Keywords:deep learning  model pruning  Convolutional Neural Network(CNN)  sparse constraint  model compression  
This article is indexed in Wanfang Data and other databases.
