Fund: Natural Science Foundation of Ningxia (2018A0899)
Received: 2021-11-29
Revised: 2022-05-03

Unlabeled network pruning algorithm based on Bayesian optimization
Yuanyuan GAO, Zhenhua YU, Fang DU, Lijuan SONG. Unlabeled network pruning algorithm based on Bayesian optimization[J]. Journal of Computer Applications, 2023, 43(1): 30-36.
Authors:Yuanyuan GAO  Zhenhua YU  Fang DU  Lijuan SONG
Affiliation:School of Information Engineering,Ningxia University,Yinchuan Ningxia 750021,China
Collaborative Innovation Center for Ningxia Big Data and Artificial Intelligence Co-founded by Ningxia Municipality and Ministry of Education(Ningxia University),Yinchuan Ningxia 750021,China
Abstract: To address the excessive parameters and computational cost of Deep Neural Networks (DNNs), an unlabeled network pruning algorithm based on Bayesian optimization was proposed. Firstly, a global pruning strategy was used to effectively avoid the sub-optimal model compression ratio caused by pruning layer by layer. Secondly, the pruning process was independent of the labels of data samples, and the compression ratios of all layers were optimized by minimizing the distance between the output features of the pruned network and the baseline network. Thirdly, the Bayesian optimization algorithm was adopted to find the optimal pruning ratio of each layer, thereby improving the efficiency and accuracy of sub-network search. Experimental results show that when the VGG-16 network is compressed by the proposed algorithm on the CIFAR-10 dataset, the parameter compression ratio is 85.32% and the FLoating-point OPerations (FLOPs) compression ratio is 69.20%, with an accuracy loss of only 0.43%. It can be seen that the proposed algorithm compresses DNN models effectively, and the compressed model still maintains good accuracy.
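The label-free, per-layer search described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: a tiny two-layer NumPy network stands in for VGG-16, zeroing the smallest-L1-norm output units stands in for the pruning operator, and plain random search stands in for the Bayesian optimizer over per-layer pruning ratios. All names here are hypothetical; only the objective (minimize the output-feature distance between the pruned and baseline networks on unlabeled inputs) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "baseline network": two dense layers with ReLU (stand-in for VGG-16).
W1 = rng.normal(size=(32, 16))
W2 = rng.normal(size=(16, 8))

def forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

def prune(w, ratio):
    """Zero out the fraction `ratio` of output units with smallest L1 norm."""
    k = int(ratio * w.shape[1])  # columns = output units of this layer
    if k == 0:
        return w
    norms = np.abs(w).sum(axis=0)
    idx = np.argsort(norms)[:k]
    w = w.copy()
    w[:, idx] = 0.0
    return w

# Unlabeled calibration inputs: no class labels are used anywhere.
X = rng.normal(size=(256, 32))
baseline_out = forward(X, W1, W2)

def objective(ratios):
    """Distance between pruned and baseline output features (to minimize)."""
    out = forward(X, prune(W1, ratios[0]), prune(W2, ratios[1]))
    return float(np.mean((out - baseline_out) ** 2))

# Search per-layer ratios; random search stands in for Bayesian optimization.
best_r, best_loss = None, np.inf
for _ in range(100):
    r = rng.uniform(0.0, 0.9, size=2)
    loss = objective(r)
    if loss < best_loss:
        best_r, best_loss = r, loss
```

In the paper's setting, each candidate `ratios` vector would instead be proposed by the Bayesian optimizer's surrogate model, which makes the sub-network search far more sample-efficient than the uniform sampling shown here.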
Keywords: Deep Neural Network (DNN); model compression; network pruning; network structure search; Bayesian optimization