High performance multiply-accumulator for the convolutional neural networks accelerator
Authors:KONG Xin  CHEN Gang  GONG Guoliang  LU Huaxiang  MAO Wenyu
Affiliation:1. Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; 2. University of Chinese Academy of Sciences, Beijing 100089, China; 3. Center of Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; 4. Semiconductor Neural Network Intelligent Perception and Computing Technology Beijing Key Lab, Beijing 100083, China
Abstract:The multiply-accumulator (MAC) in existing convolutional neural network (CNN) accelerators generally suffers from a large area, high power consumption, and a long critical path. To address these problems, this paper presents a high-performance MAC based on transmission gates for CNN accelerators. We propose a new data accumulation and compression structure suited to the MAC, which reduces the hardware overhead. Moreover, we propose a new parallel adder architecture: compared with the Brent-Kung adder, it reduces the number of gate-delay stages and improves the computation speed without increasing hardware resources. In addition, we exploit the advantages of transmission gates to optimize each unit circuit of the MAC. A 16-by-8 fixed-point high-performance MAC built with these methods achieves a critical path delay of 1.173 ns, a layout area of 9049.41 μm², and an average power consumption of 4.153 mW at 800 MHz under the SMIC 130 nm TT corner. Compared with the traditional MAC under the same conditions, the speed is increased by 37.42%, the area is reduced by 47.84%, and the power consumption is reduced by 56.77%.
Keywords:multiply-accumulator  transmission gate  accumulation and compression  convolutional neural network  high performance
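
The baseline the abstract compares against is the Brent-Kung adder, a classic parallel-prefix design whose delay grows with the number of gate stages in its prefix tree. As background only (the paper's improved adder and its accumulation-compression structure are not reproduced here), the C sketch below models a classic 16-bit Brent-Kung adder at bit level; the operand width, the zero carry-in, and the names brent_kung_add and combine are illustrative assumptions, not from the paper.

#include <stdio.h>
#include <stdint.h>

#define N 16  /* operand width; illustrative, matches the 16-bit input of the abstract's MAC */

/* (G,P) combine operator shared by all parallel-prefix adders:
 * the high (left) group absorbs the low (right) group. */
static void combine(int *G, int *P, int g_lo, int p_lo) {
    *G = *G | (*P & g_lo);   /* group generates if high generates, or high propagates a low generate */
    *P = *P & p_lo;          /* group propagates only if both halves propagate */
}

/* Bit-level model of a classic 16-bit Brent-Kung adder (cin = 0). */
uint32_t brent_kung_add(uint16_t a, uint16_t b) {
    int G[N], P[N], p[N];
    for (int i = 0; i < N; i++) {
        int ai = (a >> i) & 1, bi = (b >> i) & 1;
        G[i] = ai & bi;          /* bit generate */
        P[i] = p[i] = ai ^ bi;   /* bit propagate (p[] kept separately for the sum) */
    }
    /* Up-sweep: merge (G,P) pairs over power-of-two spans. */
    for (int d = 1; d < N; d <<= 1)
        for (int i = 2*d - 1; i < N; i += 2*d)
            combine(&G[i], &P[i], G[i-d], P[i-d]);
    /* Down-sweep: fill in the remaining prefixes. */
    for (int d = N / 4; d >= 1; d >>= 1)
        for (int i = 3*d - 1; i < N; i += 2*d)
            combine(&G[i], &P[i], G[i-d], P[i-d]);
    /* After the scan, G[i-1] is the carry into bit i (carry-in is 0). */
    uint32_t sum = 0;
    int carry = 0;
    for (int i = 0; i < N; i++) {
        sum |= (uint32_t)((p[i] ^ carry) & 1) << i;
        carry = G[i];
    }
    sum |= (uint32_t)G[N-1] << N;  /* final carry-out as bit 16 */
    return sum;
}

int main(void) {
    uint16_t a = 0xFFFF, b = 0x0001;
    printf("%#x (expect %#x)\n", brent_kung_add(a, b), (uint32_t)a + b);
    return 0;
}

The up-sweep/down-sweep tree is what gives Brent-Kung its logarithmic number of gate-delay stages at modest wiring cost; per the abstract, the paper's contribution is a prefix structure with fewer delay stages at comparable hardware cost.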