Neural Network Model Compression Based on Adaptive Hierarchical Threshold Judgment
Cite this article: LU Peng, WAN Ying, ZOU Guoliang, CHEN Jinyu, ZHENG Zongsheng, WANG Zhenhua. Neural Network Model Compression Based on Adaptive Hierarchical Threshold Judgment[J]. Computer Engineering, 2022, 48(1): 112-118+126. DOI: 10.19678/j.issn.1000-3428.0060042
Authors: LU Peng  WAN Ying  ZOU Guoliang  CHEN Jinyu  ZHENG Zongsheng  WANG Zhenhua
Affiliation: College of Information, Shanghai Ocean University, Shanghai 201306, China
Funding: National Natural Science Foundation of China (41501419, 41671431)
Abstract: To cope with diverse application environments, Convolutional Neural Networks (CNNs) are built ever deeper to improve accuracy, at the cost of large numbers of parameters and substantial storage. To address the parameter redundancy and low computational efficiency of CNN convolutional layers, an adaptive dynamic pruning method based on layer-wise thresholds is proposed. An adaptive hierarchical threshold judgment algorithm is designed: the scale factors of the Batch Normalization (BN) layers are clustered, the classification breakpoint of each layer is found adaptively, and the final threshold is determined from it. The regularized input model is then pruned with this threshold, which avoids manually fixing a threshold from experience and reduces both the model size and the memory occupied at run time. The proposed method and the fixed-threshold, global-pruning method of LIU et al. are used to compress the VGGNet, ResNet, DenseNet, and LeNet models, and performance is tested on the CIFAR, SVHN, and MNIST datasets. Experimental results show that the method finds the optimal balance between model accuracy and pruning ratio: the test error of the pruned models is 0.02 to 1.52 percentage points lower than that of the comparison method, and the adaptive hierarchical threshold judgment algorithm also avoids the comparison method's problem of pruning away an entire layer during global pruning.

Keywords: deep learning  image recognition  convolutional neural network  model compression  network pruning
Received: 2020-11-18
Revised: 2021-01-05
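
The central step of the method summarized above is to derive a separate pruning threshold for each layer by clustering the Batch Normalization (BN) scale factors, instead of fixing a single threshold by hand. Below is a minimal sketch of that idea, not the paper's implementation: it assumes a PyTorch model, uses 2-means clustering (scikit-learn's KMeans) as a stand-in for the paper's clustering step, takes the midpoint between the two cluster centers as a layer's breakpoint, and the helper names layer_thresholds and channel_masks are hypothetical.

import torch.nn as nn
from sklearn.cluster import KMeans

def layer_thresholds(model):
    # One threshold per BatchNorm2d layer, obtained by clustering |gamma|.
    thresholds = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            gammas = module.weight.detach().abs().cpu().numpy().reshape(-1, 1)
            if gammas.shape[0] < 2:
                continue  # a single-channel layer cannot be split into two groups
            km = KMeans(n_clusters=2, n_init=10).fit(gammas)
            low, high = sorted(float(c) for c in km.cluster_centers_.ravel())
            thresholds[name] = (low + high) / 2.0  # breakpoint between the two clusters
    return thresholds

def channel_masks(model, thresholds):
    # Boolean keep-masks per BN layer; at least one channel is always kept,
    # so no layer can be pruned away entirely.
    masks = {}
    for name, module in model.named_modules():
        if name in thresholds:
            gamma = module.weight.detach().abs()
            keep = gamma > thresholds[name]
            if not keep.any():
                keep[gamma.argmax()] = True
            masks[name] = keep
    return masks

In a pipeline in the spirit of the abstract, masks produced this way would drive the removal of the corresponding convolution channels from a sparsity-regularized model, followed by fine-tuning of the pruned network.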

Neural Network Model Compression Based on Adaptive Hierarchical Threshold Judgment
LU Peng, WAN Ying, ZOU Guoliang, CHEN Jinyu, ZHENG Zongsheng, WANG Zhenhua. Neural Network Model Compression Based on Adaptive Hierarchical Threshold Judgment[J]. Computer Engineering, 2022, 48(1): 112-118+126. DOI: 10.19678/j.issn.1000-3428.0060042
Authors: LU Peng  WAN Ying  ZOU Guoliang  CHEN Jinyu  ZHENG Zongsheng  WANG Zhenhua
Affiliation:College of Information, Shanghai Ocean University, Shanghai 201306, China
Abstract: Faced with diversified application environments, the architecture depth of the Convolutional Neural Network (CNN) keeps increasing to improve accuracy, but deeper networks require a large number of parameters and considerable storage. An adaptive dynamic pruning method based on layer-wise thresholds is proposed to address the parameter redundancy and low computational efficiency of CNN convolutional layers. An adaptive hierarchical threshold judgment algorithm is designed to cluster the scale factors of the Batch Normalization (BN) layers, adaptively find the classification breakpoint of each layer, and determine the final threshold accordingly. The regularized input model is pruned with this threshold, which avoids defining a fixed threshold by hand from experience and reduces the model size and the memory occupied at run time. The proposed method and the fixed-threshold, global-pruning method of LIU et al. are used to compress the VGGNet, ResNet, DenseNet, and LeNet models, and model performance is tested on the CIFAR, SVHN, and MNIST datasets. Experimental results show that the proposed method finds the optimal balance between model accuracy and pruning rate. The test error of the pruned models is 0.02 to 1.52 percentage points lower than that of the comparison method, and the adaptive hierarchical threshold judgment algorithm also avoids the comparison method's problem of removing an entire layer during global pruning.
Keywords: deep learning  image recognition  Convolutional Neural Network (CNN)  model compression  network pruning
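
For contrast, the comparison method cited in the abstract (LIU et al.) uses a fixed threshold with global pruning. The sketch below shows one common way such a global threshold is obtained, under the assumption that it is the value at a chosen percentile of all BN scale factors pooled across layers; the function name global_threshold and the prune_ratio parameter are illustrative, not taken from the paper.

import torch
import torch.nn as nn

def global_threshold(model, prune_ratio=0.5):
    # Pool |gamma| from every BN layer and cut at a single global percentile.
    all_gammas = torch.cat([m.weight.detach().abs().flatten()
                            for m in model.modules()
                            if isinstance(m, nn.BatchNorm2d)])
    k = min(int(all_gammas.numel() * prune_ratio), all_gammas.numel() - 1)
    # Every channel in every layer whose scale factor falls below this value is
    # pruned, so a layer whose factors are uniformly small can lose all channels.
    return torch.sort(all_gammas).values[k].item()

Because this cut point ignores layer boundaries, it can strip a layer of every channel, which is exactly the failure mode the adaptive hierarchical threshold is designed to avoid.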