A parameter learning algorithm for Bayesian networks on large-scale data sets
Cite this article: ZHANG Shao-zhong, ZHANG Jin-wen, ZHANG Zhi-yong, HAN Mei-jun, WANG Xiu-kun. Parameter learning for Bayesian networks with large data set[J]. Journal of Computer Applications, 2006, 26(7): 1689-1691
Authors: ZHANG Shao-zhong  ZHANG Jin-wen  ZHANG Zhi-yong  HAN Mei-jun  WANG Xiu-kun
Affiliations: Institute of Electronics and Information, Zhejiang Wanli University, Ningbo, Zhejiang 315100, China; Institute of Computer Science, Henan University of Science and Technology, Luoyang, Henan 471000, China; Department of Computer Science and Engineering, Dalian University of Technology, Dalian, Liaoning 116023, China
Abstract: Learning Bayesian networks can be divided into structure learning and parameter learning. The Expectation Maximization (EM) algorithm is commonly used for parameter learning from incomplete data, but its computation is relatively complex, its convergence is slow, and it is prone to local maxima, so the traditional EM algorithm has difficulty handling large-scale data sets. This paper examines these main problems of the EM algorithm and partitions the large data set into small sample sets for processing, which reduces the computational cost of the EM algorithm while also improving its accuracy. Experiments show that the improved EM algorithm achieves good performance.

Keywords: Bayesian networks  parameter learning  Expectation Maximization (EM) algorithm
Article ID: 1001-9081(2006)07-1689-03
Received: 2006-01-11
Revised: 2006-03-08

Parameter learning for Bayesian networks with large data set
ZHANG Shao-zhong, ZHANG Jin-wen, ZHANG Zhi-yong, HAN Mei-jun, WANG Xiu-kun. Parameter learning for Bayesian networks with large data set[J]. Journal of Computer Applications, 2006, 26(7): 1689-1691
Authors:ZHANG Shao-zhong  ZHANG Jin-wen  ZHANG Zhi-yong  HAN Mei-jun  WANG Xiu-kun
Affiliation:1. Institute of Electronics and Information, Zhejiang Wanli University, Ningbo Zhejiang 315100, China; 2. Institute of Computer Science, Henan University of Science and Technology, Luoyang Henan 471000, China; 3. Department of Computer Science and Engineering, Dalian University of Technology, Dalian Liaoning 116023, China
Abstract: The creation of Bayesian networks can be separated into two tasks, structure learning and parameter learning. The Expectation Maximization (EM) algorithm is a general method for parameter learning from incomplete data. The traditional EM algorithm has some shortcomings: it cannot deal with large data sets, its convergence is slow, and it easily results in a local maximum. To overcome these shortcomings, the large data set was divided into several small blocks and optimization was performed on those blocks. Experimental results indicate that the improved EM algorithm has advantages over standard EM.
Keywords:Bayesian networks  parameter learning  Expectation Maximization(EM) algorithm
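The block-partitioning idea in the abstract can be illustrated with a minimal sketch (not the authors' code; all names and the toy two-node network are illustrative): EM for a Bayesian network X → Y with binary variables, where some X values are missing. The E-step sufficient statistics are accumulated block by block, so only one small block of records needs to be examined at a time.

```python
# Block-wise EM sketch for a two-node Bayesian network X -> Y (binary),
# with some X values missing. Illustrative only; parameters and helper
# names are assumptions, not from the paper.
import random

def em_blockwise(records, n_iter=50, block_size=100):
    """records: list of (x, y), x in {0, 1, None}, y in {0, 1}."""
    p_x = 0.5                    # P(X = 1)
    p_y = [0.5, 0.5]             # P(Y = 1 | X = 0), P(Y = 1 | X = 1)
    for _ in range(n_iter):
        n_x1 = 0.0                               # expected count of X = 1
        n_xy = [[1e-9] * 2 for _ in range(2)]    # n_xy[x][y], smoothed
        # E-step: accumulate expected counts one small block at a time.
        for start in range(0, len(records), block_size):
            for x, y in records[start:start + block_size]:
                if x is None:
                    # Posterior P(X = 1 | Y = y) by Bayes' rule.
                    num = p_x * (p_y[1] if y else 1 - p_y[1])
                    den = num + (1 - p_x) * (p_y[0] if y else 1 - p_y[0])
                    w = num / den
                else:
                    w = float(x)
                n_x1 += w
                n_xy[1][y] += w
                n_xy[0][y] += 1 - w
        # M-step: re-estimate parameters from the expected counts.
        p_x = n_x1 / len(records)
        p_y = [n_xy[x][1] / (n_xy[x][0] + n_xy[x][1]) for x in (0, 1)]
    return p_x, p_y

# Usage: sample from known parameters, hide 30% of X, then recover them.
random.seed(0)
data = []
for _ in range(2000):
    x = int(random.random() < 0.7)
    y = int(random.random() < (0.9 if x else 0.2))
    data.append((None if random.random() < 0.3 else x, y))
p_x, p_y = em_blockwise(data)
print(round(p_x, 2), [round(p, 2) for p in p_y])
```

Because each block contributes additively to the expected counts, the blocks can also be processed independently and their statistics merged, which is what makes the partitioning useful on data sets too large to traverse as a whole.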
This article is indexed by CNKI, VIP, Wanfang Data, and other databases.