Survey of Deep Neural Network Model Compression

Citation: LEI Jie, GAO Xin, SONG Jie, WANG Xing-Lu, SONG Ming-Li. Survey of deep neural network model compression[J]. Journal of Software, 2018, 29(2): 251-266.
Authors: LEI Jie  GAO Xin  SONG Jie  WANG Xing-Lu  SONG Ming-Li
Affiliation: School of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China (all five authors)
Foundation items: National Natural Science Foundation of China (00000000, 00000000); Open Project of the xxxx State Key Laboratory for Novel Software Technology (KFKT00000000)
Received: 2017-05-02; Revised: 2017-07-24
Abstract: In recent years, deep neural networks have continually surpassed traditional methods on a variety of computer vision tasks and have become a research focus. Although deep neural networks are very powerful, their large number of parameters consumes considerable storage and computation time, making them hard to deploy on resource-limited hardware platforms such as mobile devices. The number of parameters in a deep neural network reflects its complexity to an extent, but recent research shows that not all parameters contribute to performance: some are of limited use, redundant, or even detrimental to the model. This survey first offers a systematic summary of existing research by domestic and foreign researchers on deep model compression, organized into methods based on network pruning, network distillation, and network decomposition. It then compares the compression performance of these methods on several public deep network models. Finally, it discusses possible directions and challenges for future work in this area.
Keywords: deep neural network; network compression; network pruning; network distillation; network decomposition
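To make the three families named in the abstract concrete, here is a minimal, self-contained NumPy sketch. It is not taken from the paper; the layer shapes, the 50% sparsity level, the rank, and the temperature are illustrative assumptions. Magnitude-based pruning zeroes the smallest weights, truncated SVD decomposes a weight matrix into two low-rank factors, and a distillation loss matches a student's temperature-softened outputs to a teacher's.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))  # weights of a toy fully connected layer

# --- Network pruning: zero out the smallest-magnitude weights ---
sparsity = 0.5  # illustrative: drop 50% of the weights
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print("nonzero fraction after pruning:", np.count_nonzero(W_pruned) / W.size)

# --- Network decomposition: truncated SVD yields two smaller factors ---
rank = 32  # illustrative rank
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]  # 256 x 32
B = Vt[:rank, :]            # 32 x 512
# A @ B approximates W with (256 + 512) * 32 parameters instead of 256 * 512
print("relative approximation error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))

# --- Network distillation: match softened teacher and student outputs ---
def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student outputs."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean()

teacher = rng.normal(size=(8, 10))  # toy logits: 8 samples, 10 classes
student = rng.normal(size=(8, 10))
print("distillation loss:", distill_loss(student, teacher))

In practice, pruning is followed by fine-tuning to recover accuracy, the two SVD factors replace the original layer as two consecutive smaller layers, and the distillation loss is minimized while training the smaller student network; the survey compares the resulting compression-accuracy trade-offs across public models.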