Convolutional neural network compression method based on dictionary learning
Citation: GENG Xu, ZHANG Yong-hui, ZHANG Jian. Convolutional neural network compression method based on dictionary learning [J]. Computer Engineering and Design, 2020, 41(4): 1024-1028.
Authors: GENG Xu  ZHANG Yong-hui  ZHANG Jian
Affiliation: School of Information and Communication Engineering, Hainan University, Haikou 570228, China
Funding: Hainan Provincial Natural Science Foundation; Hainan Provincial Department of Science and Technology project
Abstract: To address the large storage footprint of deep convolutional neural network models, a compression method based on K-SVD dictionary learning is proposed. The parameters of each convolution kernel are approximated by a linear combination of a few atoms from a learned dictionary, and the atoms' coefficients are quantized; storing a kernel's parameters then requires only the atom indices and their quantized coefficients, which compresses the model. Compression experiments with LeNet-C5 on the MNIST dataset and with DenseNet on the CIFAR-10 dataset show that the storage occupied by the network model is reduced to about 12% of the original while the accuracy fluctuates by less than 0.1%.

Keywords: convolutional neural network  dictionary learning  compression  quantization  sparse matrix
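
The paper's implementation is not part of this record; as a rough sketch of the pipeline the abstract describes, the following NumPy code learns a K-SVD dictionary over flattened convolution kernels, sparse-codes each kernel with a few atoms via orthogonal matching pursuit (scikit-learn's orthogonal_mp), and uniformly quantizes the nonzero coefficients to 8 bits so that only (atom index, quantized coefficient) pairs plus the shared dictionary need storing. All function names and hyperparameters here (64 atoms, sparsity 4, 8-bit quantization, random toy kernels) are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Minimal K-SVD sketch (illustrative, not the paper's code).
    Y: (d, n) matrix whose columns are flattened convolution kernels.
    Returns dictionary D (d, n_atoms) and sparse codes X (n_atoms, n) with Y ~= D @ X."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding step: at most `sparsity` atoms per kernel (OMP).
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # Dictionary update step: revise each atom via a rank-1 SVD of the
        # residual restricted to the kernels that actually use that atom.
        for k in range(n_atoms):
            users = np.flatnonzero(X[k])
            if users.size == 0:
                continue
            X[k, users] = 0.0
            E = Y[:, users] - D @ X[:, users]    # residual without atom k
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = S[0] * Vt[0]
    return D, X

def quantize_codes(X, n_bits=8):
    """Uniform symmetric quantization of the coefficients (assumed scheme).
    Only (atom index, quantized coefficient) pairs would be stored per kernel."""
    scale = np.abs(X).max() / (2 ** (n_bits - 1) - 1)
    Q = np.round(X / scale).astype(np.int8)      # 8-bit codes fit in int8
    return Q, scale

# Toy usage: 256 random 3x3x16 kernels, flattened to 144-dim columns.
kernels = np.random.randn(144, 256)
D, X = ksvd(kernels, n_atoms=64, sparsity=4)
Q, scale = quantize_codes(X)
approx = D @ (Q.astype(np.float32) * scale)
err = np.linalg.norm(kernels - approx) / np.linalg.norm(kernels)
print(f"relative reconstruction error: {err:.3f}")

# Storage accounting: per kernel, 4 (uint8 index, int8 coefficient) pairs,
# plus the shared float32 dictionary, versus 144 float32 values originally.
orig_bytes = kernels.size * 4
comp_bytes = D.size * 4 + kernels.shape[1] * 4 * 2
print(f"compressed/original: {comp_bytes / orig_bytes:.2%}")
```

The toy run uses random kernels, so its reconstruction error and ratio are not meaningful benchmarks; the point is the accounting. Because the dictionary is shared across all kernels, its cost amortizes as the network grows, which is how ratios like the roughly 12% reported in the abstract become plausible.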
