Authors: Noel E. Sharkey, Amanda J. C. Sharkey
Abstract:
A number of recent simulation studies have shown that when feedforward neural nets are trained with backpropagation to memorize sets of items in sequential blocks, without negative exemplars, severe retroactive interference, or catastrophic forgetting, results. Both formal analysis and simulation studies are employed here to show why and under what circumstances such retroactive interference arises. The conclusion is that, on the one hand, approximations to 'ideal' network geometries can entirely alleviate interference if the training data sets have been generated from a learnable function (rather than arbitrary pattern associations). All that is required is either a representative training set or enough sequential memory sets. However, this elimination of interference comes at the cost of a breakdown in discrimination between input patterns that have been learned and those that have not: catastrophic remembering. On the other hand, localized geometries for subfunctions eliminate the discrimination problem but are easily disrupted by new training sets and thus lead to catastrophic interference. The paper concludes with a formally guaranteed solution to the problems of interference and discrimination: the Hebbian Autoassociative Recognition Memory (HARM) model, which is essentially a neural net implementation of a simple look-up table. Although the HARM model requires considerable memory resources, when it is used as a yardstick against which to evaluate other proposed solutions, it is found to use the same or fewer resources than they do.
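
To make the phenomenon concrete, here is a minimal sketch (an illustration, not code from the paper) in which a small feedforward net is trained with backpropagation on two memory sets in sequential blocks, with no rehearsal of the first block and no negative exemplars. Accuracy on the first block typically collapses once the second block has been learned. The layer sizes, learning rate, and random pattern associations are arbitrary choices made only for the demonstration.

```python
# Sequential-block training of a small feedforward net with backpropagation,
# illustrating retroactive (catastrophic) interference. Not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        self.o = sigmoid(self.h @ self.W2 + self.b2)
        return self.o

    def train(self, X, T, epochs=3000):
        for _ in range(epochs):
            o = self.forward(X)
            # backpropagate squared error through the two sigmoid layers
            d_o = (o - T) * o * (1 - o)
            d_h = (d_o @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= self.lr * self.h.T @ d_o
            self.b2 -= self.lr * d_o.sum(0)
            self.W1 -= self.lr * X.T @ d_h
            self.b1 -= self.lr * d_h.sum(0)

def accuracy(net, X, T):
    return np.mean((net.forward(X) > 0.5) == (T > 0.5))

# two sequential memory sets of arbitrary pattern associations
X1, T1 = rng.integers(0, 2, (8, 12)).astype(float), rng.integers(0, 2, (8, 4)).astype(float)
X2, T2 = rng.integers(0, 2, (8, 12)).astype(float), rng.integers(0, 2, (8, 4)).astype(float)

net = MLP(12, 16, 4)
net.train(X1, T1)
print("block 1 after training block 1:", accuracy(net, X1, T1))
net.train(X2, T2)   # second block trained alone: no rehearsal, no negative exemplars
print("block 1 after training block 2:", accuracy(net, X1, T1))  # typically collapses
print("block 2 after training block 2:", accuracy(net, X2, T2))
```

The look-up-table idea behind HARM can likewise be gestured at with a Hebbian autoassociator; the details below (outer-product storage, a sign-match familiarity score, the pattern sizes) are assumptions for illustration, not the paper's specification. Learned patterns are reconstructed well and so are discriminated from novel ones, and storing a new pattern only adds to the weight matrix rather than overwriting earlier traces.

```python
# Hedged sketch of a Hebbian autoassociative recognition memory: bipolar patterns
# are stored by summing outer products; familiarity is the fraction of components
# whose sign is reproduced by the weight matrix. Details are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def store(W, patterns):
    for p in patterns:
        W += np.outer(p, p)            # Hebbian outer-product learning
    return W

def familiarity(W, p):
    return np.mean(np.sign(W @ p) == np.sign(p))   # fraction of components reconstructed

n = 64
known = [rng.choice([-1.0, 1.0], n) for _ in range(5)]
novel = rng.choice([-1.0, 1.0], n)

W = store(np.zeros((n, n)), known)
print("known pattern familiarity:", familiarity(W, known[0]))   # close to 1.0
print("novel pattern familiarity:", familiarity(W, novel))      # near chance (~0.5)
```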
Keywords: Backpropagation; Discrimination; Geometric Analysis; Interference; Memory Modelling; Neural Nets