
Staged Residual Binarization Algorithm for Binary Networks
Citation: REN Hong-Ping, CHEN Min-Jie, WANG Zi-Hao, YANG Chun, YIN Xu-Cheng. Staged residual binarization algorithm for binary networks[J]. Computer Systems & Applications, 2019, 28(1): 38-46
Authors: REN Hong-Ping  CHEN Min-Jie  WANG Zi-Hao  YANG Chun  YIN Xu-Cheng
Affiliation: School of Computer & Communication Engineering, University of Science & Technology Beijing, Beijing 100083, China
Abstract: Binary networks offer clear advantages in speed, energy consumption, and memory footprint, but binarization causes a substantial loss of accuracy in deep network models. To address this problem, this paper proposes a staged residual binarization optimization algorithm for binary networks that yields more accurate binary neural network models. We combine stochastic quantization with XNOR-Net and propose two improved algorithms, stochastic weight binarization with an approximation factor and deterministic weight binarization, together with a new staged residual binarization training algorithm for binary neural networks (BNNs), in order to approach the recognition accuracy of full-precision neural networks. Experiments show that the proposed staged residual binarization algorithm effectively improves the training accuracy of binary models without increasing the computational cost of the network at test time, thus preserving the binary network's advantages of high speed, small memory footprint, and low energy consumption.
Keywords: deep learning  binary networks  stochastic quantization  high-order residual quantization  staged residual binarization
Received: 2018-05-22
Revised: 2018-06-15
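The binarization schemes named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes the standard XNOR-Net formulation (B = sign(W) with approximation factor α = mean|W|), the usual stochastic binarization rule P(b = +1) = clip((w + 1)/2, 0, 1), and a residual scheme that re-binarizes the leftover error at each stage; all function names are hypothetical.

```python
import numpy as np

def binarize_deterministic(w):
    """XNOR-Net-style deterministic binarization: B = sign(w),
    scaled by the approximation factor alpha = mean(|w|)."""
    alpha = np.mean(np.abs(w))
    b = np.where(w >= 0, 1.0, -1.0)
    return alpha, b

def binarize_stochastic(w, rng):
    """Stochastic binarization: each weight becomes +1 with
    probability clip((w + 1) / 2, 0, 1), and -1 otherwise."""
    alpha = np.mean(np.abs(w))
    p = np.clip((w + 1.0) / 2.0, 0.0, 1.0)
    b = np.where(rng.random(w.shape) < p, 1.0, -1.0)
    return alpha, b

def staged_residual_binarize(w, stages=2):
    """Residual binarization: binarize w, subtract the binary
    approximation, then binarize the remaining residual, so the
    weights are approximated by sum_i alpha_i * B_i."""
    residual = w.astype(float)
    terms = []
    for _ in range(stages):
        alpha, b = binarize_deterministic(residual)
        terms.append((alpha, b))
        residual = residual - alpha * b
    return terms

w = np.array([0.7, -0.3, 0.1, -0.9])
terms = staged_residual_binarize(w, stages=2)
approx = sum(a * b for a, b in terms)
# Each extra stage shrinks the approximation error ||w - sum_i alpha_i B_i||.
```

At test time each stage is still a binary convolution (XNOR and popcount) scaled by its own α, which is why adding residual stages improves accuracy without giving up binary arithmetic.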
This article is indexed in Wanfang Data and other databases.