A survey of model compression for deep neural networks
Authors: 李江昀, 赵义凯, 薛卓尔, 蔡铮, 李擎
Affiliation: 1) School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Funding: National Natural Science Foundation of China (No. 61671054); Beijing Natural Science Foundation (No. 4182038)
Abstract: In recent years, deep neural networks have repeatedly set new state-of-the-art results on tasks such as computer vision and natural language processing, and have become one of the most actively studied research directions. Despite their remarkable performance, deep network models remain difficult to deploy on hardware-constrained embedded or mobile devices because of their enormous number of parameters and their high storage and computation costs. Related studies have found that deep models based on convolutional neural networks are inherently parameter-redundant, containing parameters that contribute nothing to the final result, which provides theoretical support for compressing deep network models. How to reduce model size while preserving model accuracy has therefore become a hot research problem. This paper classifies and summarizes the achievements and progress made by domestic and foreign researchers in model compression in recent years, evaluates their advantages and disadvantages, and discusses the open problems and future directions of model compression.

Keywords: deep neural network; model compression; deep learning; network pruning; network distillation
Received: 2019-03-27

A survey of model compression for deep neural networks
Affiliation: 1) School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China; 2) Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, Beijing 100083, China
Abstract: In recent years, deep neural networks (DNNs) have attracted increasing attention because of their excellent performance in computer vision and natural language processing. The success of deep learning stems from models having more layers and more parameters, which gives them stronger nonlinear fitting ability. Furthermore, the continuous updating of hardware makes it possible to train deep learning models quickly, and the development of deep learning is driven by the growing amounts of available annotated and unannotated data: large-scale data provide models with a greater learning space and stronger generalization ability. Although the performance of deep neural networks is significant, they are difficult to deploy on embedded or mobile devices with limited hardware because of their large number of parameters and high storage and computing costs. Recent studies have found that deep models based on convolutional neural networks are characterized by parameter redundancy, containing parameters that are irrelevant to the final model results, which provides theoretical support for the compression of deep network models. Therefore, determining ways to reduce model size while retaining model precision has become a hot research issue. Model compression refers to reducing a trained model through some operation to obtain a lightweight network with equivalent performance. After compression, the network has fewer parameters and usually requires less computation, which greatly reduces computational and storage costs and enables deployment of the model under restricted hardware conditions.
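The storage reduction described above can be illustrated with one of the simplest techniques in the quantization family the survey covers. The sketch below is a minimal illustration, not code from any surveyed work (the function names and the 8-bit range are assumptions): it maps 32-bit float weights to 8-bit integer codes plus a scale and zero point, cutting storage roughly fourfold at the cost of a small rounding error.

```python
def quantize_int8(weights):
    """Uniform affine quantization of float weights to 8-bit codes.

    Returns the integer codes plus the (scale, zero_point) pair
    needed to map the codes back to approximate float values.
    """
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255 or 1.0  # guard against all-equal weights
    zero_point = round(-w_min / scale)    # code that represents 0.0
    codes = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return codes, scale, zero_point


def dequantize(codes, scale, zero_point):
    """Recover approximate float weights from 8-bit codes."""
    return [(c - zero_point) * scale for c in codes]
```

Each reconstructed weight differs from the original by at most about one quantization step (`scale`), which is why networks often tolerate 8-bit storage with little accuracy loss.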
In this paper, the achievements and progress made in recent years by domestic and foreign scholars with respect to model compression were classified and summarized, and their advantages and disadvantages were evaluated, covering network pruning, parameter sharing, quantization, network decomposition, and network distillation. Existing problems and the future development of model compression were then discussed.
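Of the five technique families listed above, network pruning is the most direct to sketch. The following is a minimal, hypothetical illustration of magnitude-based weight pruning (the function name and threshold rule are assumptions, not taken from any surveyed work): it zeroes out the smallest-magnitude fraction of the weights, after which the remaining sparse weights can be stored and computed more cheaply.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of the weights.

    A simple one-shot pruning rule: rank weights by absolute value and
    set the bottom fraction to zero.  (Ties at the threshold may prune
    slightly more than the requested fraction.)
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

In practice, pruning is usually followed by fine-tuning so the surviving weights can compensate for the removed ones; the one-shot rule above only shows the selection step.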
Keywords: deep neural network; model compression; deep learning; network pruning; network distillation