An Analysis of the Neocognitron Learning Algorithm
Cite this article: Hong Jiarong, Li Xingyuan. An Analysis of the Neocognitron Learning Algorithm [J]. Journal of Software, 1994, 5(4): 35-39.
Authors: Hong Jiarong, Li Xingyuan
Affiliation: Department of Computer Science, Harbin Institute of Technology, Harbin 150006
Abstract: Existing analyses of the Neocognitron all use algebraic methods and therefore cannot study its dynamic behavior. This paper extends the Neocognitron and its learning algorithm to the continuous time domain and studies the Neocognitron by means of differential equations. For the general case of unsupervised learning, it gives a differential equation governing how the outputs of the U_S-layer neurons change during learning, states the condition under which they increase, and derives a necessary condition on the choice of initial values for the weights a and b. It further obtains an equivalent explicit function for the change of U_S after the representatives in the unsupervised algorithm have stabilized, and for the supervised case, showing that learning is then a process in which the U_S-layer neuron outputs approach a coefficient, and that the final state of supervised learning is independent of the initial values of the variable weights and of the learning rate. Factors affecting the final value of U_S and the speed of approach are also discussed.

Keywords: neural networks, learning algorithm, self-organization, Neocognitron
Received: 1991-10-22
Revised: 1992-01-27

ON THE LEARNING PROCESS OF THE NEOCOGNITRON
Hong Jiarong and Li Xingyuan. ON THE LEARNING PROCESS OF THE NEOCOGNITRON [J]. Journal of Software, 1994, 5(4): 35-39.
Authors:Hong Jiarong and Li Xingyuan
Abstract: Existing analyses of the Neocognitron have failed to discuss its dynamic characteristics during learning because they were confined to algebraic methods. This paper introduces differential equations to analyse the Neocognitron. For unsupervised learning, it derives a differential equation describing u_S, a condition under which u_S increases, and an inequality that the initial values of the variable weights a and b must satisfy. For fixed representatives, and for supervised learning, it obtains u_S as an explicit function of time. We show that, in this case, learning is a process in which u_S approaches a coefficient independent of the learning rate q and the initial values of the weights a and b.
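The abstract's central supervised-learning claim, that the final U_S output is independent of the learning rate and of the initial variable weights, can be checked numerically. The sketch below is an illustration under assumptions: it uses the commonly cited Neocognitron S-cell response u_S = r·φ[(1 + a·u)/(1 + (r/(1+r))·b·v) − 1] and reinforcement rule Δa = q·c·u_C, Δb = q·v due to Fukushima, not formulas reproduced from this paper, and all variable names and parameter values are illustrative.

```python
import numpy as np

def s_cell_output(a, b, u, c, r):
    """u_S = r * phi[(1 + a.u) / (1 + r/(1+r) * b * v) - 1], phi = max(., 0)."""
    v = np.sqrt(np.sum(c * u ** 2))          # weighted norm of the input
    num = 1.0 + np.dot(a, u)                 # excitatory part (variable weights a)
    den = 1.0 + (r / (1.0 + r)) * b * v      # inhibitory part (variable weight b)
    return r * max(num / den - 1.0, 0.0)

def train_on_pattern(u, c, r, q, a0, b0, steps):
    """Reinforce one representative S-cell on the same pattern `steps` times."""
    a, b = a0.astype(float).copy(), float(b0)
    v = np.sqrt(np.sum(c * u ** 2))
    for _ in range(steps):
        a += q * c * u                       # delta a = q * c * u_C
        b += q * v                           # delta b = q * v
    return s_cell_output(a, b, u, c, r)

u = np.array([0.0, 0.5, 1.0, 0.5])           # a fixed training pattern
c = np.ones_like(u)                          # fixed monotone weights
r = 4.0                                      # selectivity parameter

# Two very different learning rates and initial weights ...
uS_slow = train_on_pattern(u, c, r, q=0.1, a0=np.zeros(4), b0=0.0, steps=20000)
uS_fast = train_on_pattern(u, c, r, q=10.0, a0=np.full(4, 0.3), b0=1.0, steps=20000)

# ... converge to (nearly) the same final output, as the paper asserts.
print(uS_slow, uS_fast)
```

As the weights grow, the excitatory-to-inhibitory ratio tends to (1+r)/r, so in this sketch u_S approaches the same limit for any q > 0 and any non-negative initial weights, which is the independence property the abstract states.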
Keywords: neural networks, learning rule, self-organizing, Neocognitron
This article is indexed by CNKI, VIP (Weipu), and other databases.
