Optimized Digital VLSI Design for Computation Components of Neural Network
Cite this article: LI Ang, WU Wei, QIAN Yi, WANG Qin. Optimized Digital VLSI Design for Computation Components of Neural Network[J]. Computer Engineering, 2008, 34(5): 254-256.
Authors: LI Ang  WU Wei  QIAN Yi  WANG Qin
Affiliation: Information Engineering School, University of Science & Technology Beijing, Beijing 100083, China
Abstract: In digital VLSI implementations of neural networks, computation components such as the activation function and the multiply-accumulate unit are the main design difficulties. Unlike traditional approaches built from multipliers and adders, the LMN method proposed in this paper starts from a look-up table (the function's truth table): logic minimization over the minterms yields the function's simplest logic expression, from which a gate-level circuit with a regular structure can be generated directly. Apart from wire delay, the circuit incurs only a few gate delays. The method is illustrated with a nonlinear function, and results show that when the fixed-point word length is small, it achieves better speed and error performance.

Keywords: neural network  VLSI design  nonlinear function  logic minimization
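The pipeline the abstract describes — tabulate the function at fixed-point precision, minimize the truth table of each output bit, and read the minimized sum-of-products off as a two-level AND-OR gate network — can be sketched as follows. This is an illustration only: the 4-bit widths, the input Q-format, the sigmoid as the nonlinear function, and the use of SymPy's Quine-McCluskey-based SOPform are assumptions for the sketch, not the paper's exact LMN procedure.

```python
# Sketch of the LUT + logic-minimization idea: quantize a nonlinear
# function, then minimize each output bit of its truth table into a
# sum-of-products expression (two-level AND-OR gate network).
# Bit widths, Q-format, and the sigmoid choice are assumptions here.
import math
from sympy import symbols
from sympy.logic import SOPform

IN_BITS, OUT_BITS = 4, 4          # assumed small fixed-point widths

def sigmoid_fixed(code):
    """Map a 4-bit unsigned input code in [0,16) to x in [-4,4),
    evaluate the logistic sigmoid, and quantize the result to 4 bits."""
    x = (code - 8) / 2.0                       # assumed step of 0.5
    y = 1.0 / (1.0 + math.exp(-x))             # sigmoid in (0, 1)
    return min(int(round(y * (2 ** OUT_BITS))), 2 ** OUT_BITS - 1)

# Build the truth table: for each output bit, collect the input
# minterms (bit vectors, MSB first) on which that bit is 1.
in_vars = symbols(f"x0:{IN_BITS}")             # x0 = MSB
minterms_per_bit = [[] for _ in range(OUT_BITS)]
for code in range(2 ** IN_BITS):
    bits = [(code >> (IN_BITS - 1 - i)) & 1 for i in range(IN_BITS)]
    y = sigmoid_fixed(code)
    for b in range(OUT_BITS):
        if (y >> (OUT_BITS - 1 - b)) & 1:
            minterms_per_bit[b].append(bits)

# Minimize each output bit; every resulting expression maps directly
# onto a regular two-level gate structure, so the combinational depth
# is a handful of gates, matching the abstract's delay claim.
for b, terms in enumerate(minterms_per_bit):
    expr = SOPform(in_vars, terms) if terms else False
    print(f"y{b} =", expr)
```

Each printed expression is one output bit of the circuit; in a real design the same table could also carry don't-care terms (SOPform's `dontcares` argument) for input codes that never occur, shrinking the network further.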
