A Continuous Optimization Algorithm Based on Progressive Estimation of GMM
Citation: LI Bin, ZHONG Run-Tian, WANG Xian-Ji, ZHUANG Zhen-Quan. A Continuous Optimization Algorithm Based on Progressive Estimation of GMM [J]. Chinese Journal of Computers, 2007, 30(6): 979-985.
Authors: LI Bin  ZHONG Run-Tian  WANG Xian-Ji  ZHUANG Zhen-Quan
Affiliation: 1. Nature Inspired Computation and Applications Laboratory, University of Science and Technology of China, Hefei 230027; Department of Electronic Science and Technology, University of Science and Technology of China, Hefei 230027
2. Department of Electronic Science and Technology, University of Science and Technology of China, Hefei 230027
Funding: National Natural Science Foundation of China; Natural Science Foundation of Anhui Province
Abstract: In current estimation of distribution algorithms (EDAs), learning of the probabilistic model depends more or less on prior knowledge, which is often unavailable. To address this problem, this paper adopts the idea of ensemble learning to realize automatic learning of both the structure and the parameters of the probabilistic model in EDAs, and proposes a continuous-domain EDA based on a progressive learning strategy. The algorithm uses a greedy EM algorithm to learn a Gaussian mixture model (GMM) incrementally, so that the model structure and parameters are learned automatically without any prior knowledge. The performance of the algorithm is examined on a set of function optimization experiments and compared with other EDAs of the same kind. The results show that the method is effective and that, compared with similar EDAs, it achieves the same or better results with relatively fewer iterations.

Keywords: estimation of distribution algorithms  continuous optimization  greedy EM algorithm  progressive learning  Gaussian mixture model
Manuscript received: 2005-11-25; revised: 2007-02-05

A Continuous Optimization Algorithm Based on Progressive Estimation of Gaussian Mixture Model
LI Bin, ZHONG Run-Tian, WANG Xian-Ji, ZHUANG Zhen-Quan. A Continuous Optimization Algorithm Based on Progressive Estimation of Gaussian Mixture Model [J]. Chinese Journal of Computers, 2007, 30(6): 979-985.
Authors:LI Bin  ZHONG Run-Tian  WANG Xian-Ji  ZHUANG Zhen-Quan
Affiliation:1.Nature Inspired Computation and Applications Laboratory, University of Science and Technology of China, Hefei 230027; 2.Department of Electronic Science and Technology, University of Science and Technology of China, Hefei 230027
Abstract: In the Estimation of Distribution Algorithms (EDAs) reported in the literature, learning of the probabilistic model depends more or less on prior knowledge of the model structure, which is unavailable during evolutionary optimization. This paper proposes a new idea: learning the probabilistic model in EDAs by an approach similar to ensemble learning in machine learning, so that both the model parameters and the model structure are learned automatically. Based on this idea, a new EDA for continuous optimization is proposed, built on progressive learning of a Gaussian Mixture Model (GMM). A greedy EM algorithm is adopted to estimate the GMM progressively, which learns the model structure and parameters automatically without any prior knowledge. A set of experiments on selected function optimization problems is performed to evaluate the efficiency and performance of the new algorithm and to compare it with other EDAs. The experimental results confirm the feasibility and effectiveness of the idea, and also show that, within a relatively small number of generations, the new algorithm performs as well as or better than the compared EDAs.
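The abstract describes an EDA loop that fits a GMM to the selected (elite) individuals each generation and resamples the next population from it. The following is a minimal, hypothetical Python sketch of such a GMM-based EDA. For simplicity it fits a fixed two-component diagonal-covariance GMM with standard EM rather than the paper's greedy incremental EM, and minimizes the sphere function; all function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Objective to minimize; global optimum at the origin."""
    return np.sum(x ** 2, axis=1)

def fit_gmm(data, k=2, iters=50):
    """Fit a k-component diagonal-covariance GMM with plain EM."""
    n, d = data.shape
    means = data[rng.choice(n, k, replace=False)]
    variances = np.ones((k, d))
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        diff = data[:, None, :] - means[None, :, :]              # (n, k, d)
        log_p = -0.5 * np.sum(diff ** 2 / variances
                              + np.log(2 * np.pi * variances), axis=2)
        log_p += np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)                # for stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)                  # (n, k)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0) + 1e-12
        means = (resp.T @ data) / nk[:, None]
        diff = data[:, None, :] - means[None, :, :]
        variances = np.einsum('nk,nkd->kd', resp, diff ** 2) / nk[:, None] + 1e-6
        weights = nk / n
    return weights, means, variances

def sample_gmm(weights, means, variances, n):
    """Draw n samples from the fitted diagonal GMM."""
    comps = rng.choice(len(weights), size=n, p=weights)
    noise = rng.standard_normal((n, means.shape[1]))
    return means[comps] + noise * np.sqrt(variances[comps])

def eda_gmm(dim=5, pop=200, gens=60, elite_frac=0.3):
    """GMM-based EDA: select elite, fit model, resample, repeat."""
    pop_x = rng.uniform(-5, 5, (pop, dim))
    for _ in range(gens):
        fitness = sphere(pop_x)
        elite = pop_x[np.argsort(fitness)[: int(pop * elite_frac)]]
        w, mu, var = fit_gmm(elite)
        pop_x = sample_gmm(w, mu, var, pop)
        pop_x[0] = elite[0]        # elitism: keep the best-so-far individual
    return pop_x[np.argmin(sphere(pop_x))]

best = eda_gmm()
print(best)
```

Fitting with a fixed number of components, as above, is what the greedy EM of the paper avoids: there, components are added one at a time and the mixture size adapts to the selected population, so no prior choice of k is needed.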
Keywords: estimation of distribution algorithms  continuous optimization  greedy EM  progressive learning  Gaussian mixture model
This article is indexed by CNKI, VIP (Weipu), Wanfang Data, and other databases.