Distributed Neural Network Learning Algorithm Based on Hebb Rule
Cite this article: TIAN Da-Xin, LIU Yan-Heng, LI Bin, WU Jing. Distributed Neural Network Learning Algorithm Based on Hebb Rule [J]. Chinese Journal of Computers, 2007, 30(8): 1379-1388.
Authors: TIAN Da-Xin  LIU Yan-Heng  LI Bin  WU Jing
Affiliation: College of Computer Science and Technology, Jilin University, Changchun 130012; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012; College of Mathematics, Jilin University, Changchun 130012
Funding: National Natural Science Foundation of China; Specialized Research Fund for the Doctoral Program of Higher Education
Abstract: As the amount of data in knowledge discovery and data mining keeps growing, scaling-up learning has become a hot research topic in KDD. This paper proposes a distributed neural network learning algorithm based on the Hebb rule to realize scaling-up learning. To improve learning speed, the complete dataset is partitioned into disjoint subsets, each learned by an independent sub-network. Based on an analysis of the algorithm's completeness and of the risk bounds of competitive Hebb learning, a growing-and-pruning policy is adopted so that partitioned learning does not degrade learning accuracy. In the experiments, the algorithm's learning ability is first tested on the benchmark circle-in-the-square problem and compared with SVM, ARTMAP, and BP neural networks; its performance on large-scale data is then evaluated on the USCensus1990 dataset from the UCI repository.

Keywords: scaling up  data partition  Hebb rule  distributed learning  competitive learning
Revised: 2007-03-07

Distributed Neural Network Learning Algorithm Based on Hebb Rule
TIAN Da-Xin,LIU Yan-Heng,LI Bin,WU Jing.Distributed Neural Network Learning Algorithm Based on Hebb Rule[J].Chinese Journal of Computers,2007,30(8):1379-1388.
Authors:TIAN Da-Xin  LIU Yan-Heng  LI Bin  WU Jing
Affiliation: 1. College of Computer Science and Technology, Jilin University, Changchun 130012; 2. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012; 3. College of Mathematics, Jilin University, Changchun 130012
Abstract: In the fields of knowledge discovery and data mining, the amount of data available for building classifiers or regression models is growing very fast. There is therefore a great need for scaling up inductive learning algorithms that can handle very large datasets while remaining computationally efficient and scalable. In this paper a distributed neural network based on the Hebb rule is presented to improve the speed and scalability of inductive learning. Speed is improved by running the algorithm on disjoint subsets instead of the entire dataset. To keep accuracy from being degraded relative to running a single algorithm on the entire dataset, a growing and pruning policy is adopted, based on an analysis of the completeness and risk bounds of competitive Hebb learning. In the experiments, the accuracy of the algorithm is tested on a small benchmark (circle-in-the-square) and compared with SVM, ARTMAP, and BP neural networks; performance on a large dataset (USCensus1990Data) is evaluated on data from the UCI repository.
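The scheme the abstract describes (disjoint data partitions learned by independent sub-networks, with winner-take-all competitive Hebb updates and a grow/prune policy) can be sketched roughly as below. This is a minimal illustrative sketch only: the class names, the nearest-prototype update rule, and the win-count pruning heuristic are assumptions for illustration, not the authors' exact algorithm or risk-bound-driven policy.

```python
class SubNetwork:
    """One sub-network: labeled prototypes trained by winner-take-all
    competitive (Hebb-style) updates, with simple grow/prune heuristics."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.protos = []  # list of [weight_vector, label, win_count]

    def _nearest(self, x):
        # index of the prototype closest to x (squared Euclidean distance)
        best, best_d = None, float("inf")
        for i, (w, _, _) in enumerate(self.protos):
            d = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
            if d < best_d:
                best, best_d = i, d
        return best

    def train(self, samples, epochs=5, prune_min_wins=1):
        for _ in range(epochs):
            for x, y in samples:
                i = self._nearest(x)
                if i is None or self.protos[i][1] != y:
                    # grow: add a prototype when the winner has the wrong label
                    self.protos.append([list(x), y, 0])
                else:
                    # competitive Hebb update: move the winner toward the input
                    w, lbl, wins = self.protos[i]
                    w = [wi + self.lr * (xi - wi) for wi, xi in zip(w, x)]
                    self.protos[i] = [w, lbl, wins + 1]
        # prune prototypes that rarely won (keep at least one)
        kept = [p for p in self.protos if p[2] >= prune_min_wins]
        if kept:
            self.protos = kept


class DistributedHebbNet:
    """Partition the dataset into disjoint subsets; each subset is learned
    by an independent sub-network. Prediction pools all prototypes."""

    def __init__(self, n_subnets=2, lr=0.1):
        self.subnets = [SubNetwork(lr) for _ in range(n_subnets)]

    def fit(self, samples):
        # disjoint round-robin partition of the full dataset
        for k, net in enumerate(self.subnets):
            net.train(samples[k::len(self.subnets)])

    def predict(self, x):
        # label of the nearest prototype across all sub-networks
        best_label, best_d = None, float("inf")
        for net in self.subnets:
            for w, label, _ in net.protos:
                d = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
                if d < best_d:
                    best_label, best_d = label, d
        return best_label
```

In this sketch, each sub-network sees only its own partition, so training the partitions can run independently (and in principle in parallel); the grow step stands in for the paper's growing policy, and the win-count threshold for its pruning policy.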
Keywords:scaling up  data partition  Hebb rule  distributed learning  competitive learning
This article is indexed by CNKI, VIP (Weipu), Wanfang Data, and other databases.
