Text feature selection method based on information entropy and dynamic clustering
Cite this article: TANG Lili. Text feature selection method based on information entropy and dynamic clustering[J]. Computer Engineering and Applications, 2015, 51(19): 152-157
Author: TANG Lili
Affiliation: Rongzhi College, Chongqing Technology and Business University, Chongqing 400033, China
Abstract: Based on the structural characteristics of scientific literature, a four-layer mining model is constructed and a text feature selection method for classifying scientific literature is proposed. The method first divides a scientific document into four layers according to its structure, then extracts feature terms layer by layer for the first three layers using K-means clustering, and finally applies the Apriori algorithm to find the maximal frequent itemsets of the fourth layer, which serve as that layer's feature term set. To address the strong sensitivity of K-means to the choice of initial centers, the clustering objects are first weighted with information entropy to correct the inter-object distance function, and the weighting function values of the initial clustering are then used to select suitable initial cluster centers. In addition, a standard value is set for the termination condition of K-means to reduce the number of iterations and hence the learning time, and redundant information produced by dynamically changing data is removed to reduce interference during dynamic clustering, so that the algorithm clusters more accurately and efficiently. These measures allow the method to locate feature terms in a literature corpus more accurately than previous approaches, making it particularly suitable for scientific literature. Experimental results show that, on large data sets, the method combined with the improved K-means algorithm achieves high performance in scientific literature classification.

Keywords: K-means algorithm, dynamic clustering, feature selection, information entropy
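To make the entropy-weighting idea in the abstract concrete, the sketch below shows one plausible way to weight features by information entropy, fold the weights into the distance function, and pick initial centers from the weighting values. The weighting formula, the farthest-point selection rule, and all function names are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of entropy-weighted K-means initialization (illustrative only;
# the weighting formula and center-selection rule are assumptions, not the paper's method).
import numpy as np

def entropy_weights(X, eps=1e-12):
    """Weight each feature by 1 minus its normalized entropy over the objects:
    terms concentrated in few documents get higher weight."""
    P = X / (X.sum(axis=0, keepdims=True) + eps)                # column-wise distribution
    H = -(P * np.log(P + eps)).sum(axis=0) / np.log(len(X))     # normalized entropy in [0, 1]
    w = 1.0 - H
    return w / (w.sum() + eps)

def weighted_distance(x, y, w):
    """Euclidean distance corrected by the entropy-derived feature weights."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def pick_initial_centers(X, w, k):
    """Select k initial centers: start from the object with the largest weighted mass,
    then repeatedly take the object farthest (in weighted distance) from the chosen centers."""
    centers = [int(np.argmax((X * w).sum(axis=1)))]
    while len(centers) < k:
        gaps = [min(weighted_distance(x, X[c], w) for c in centers) for x in X]
        centers.append(int(np.argmax(gaps)))
    return X[centers]

# Tiny usage example on a made-up term-document matrix (rows = documents, columns = terms).
X = np.array([[3., 0., 1.], [2., 0., 1.], [0., 4., 0.], [0., 3., 1.]])
w = entropy_weights(X)
print(pick_initial_centers(X, w, k=2))
```

A standard value for the termination condition, as described in the abstract, could then be compared against the total weighted shift of the centers between iterations to stop the clustering early.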

Text feature selection method based on information entropy and dynamic clustering
TANG Lili. Text feature selection method based on information entropy and dynamic clustering[J]. Computer Engineering and Applications, 2015, 51(19): 152-157
Authors: TANG Lili
Affiliation: Rongzhi College of Chongqing Technology and Business University, Chongqing 400033, China
Abstract: By means of a four-layer mining model constructed from the structural characteristics of scientific literature, a text feature selection method for scientific literature classification is proposed. The method first divides a scientific document into four layers according to its structure, then selects features progressively for the first three layers with the K-means algorithm, and finally uses the Apriori algorithm to find the maximal frequent itemsets of the fourth layer, which serve as that layer's feature set. The K-means algorithm itself is also improved: information entropy is used to weight the clustering objects and correct the distance function, the weighting function values are used to select suitable initial cluster centers, a standard value is set for the termination condition to reduce iterations and learning time, and redundant information arising from dynamically changing data is removed to reduce interference during dynamic clustering, so that the algorithm clusters more accurately and efficiently. As a result, the proposed method can locate features in the literature corpus more accurately. Experimental results show that the method is feasible and effective, and achieves higher performance in scientific literature classification than previous methods.
Keywords: K-means algorithm, dynamic clustering, feature selection, information entropy
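The fourth-layer step described in the abstract relies on maximal frequent itemsets. The self-contained sketch below mines them with a level-wise Apriori pass over made-up term sets; the data, the support threshold, and the helper name apriori_maximal are hypothetical rather than taken from the paper.

```python
# Illustrative sketch of the fourth-layer step: mining maximal frequent term sets
# with a level-wise Apriori pass. Data, threshold, and names are hypothetical.

def apriori_maximal(transactions, min_support=0.5):
    """Return the maximal frequent itemsets: frequent sets with no frequent superset."""
    n = len(transactions)
    items = {t for tr in transactions for t in tr}
    frequent, level = [], [frozenset([i]) for i in items]
    while level:
        # Count support and keep only candidates that reach the threshold.
        level = [s for s in level
                 if sum(1 for tr in transactions if s <= tr) / n >= min_support]
        frequent.extend(level)
        # Join frequent k-itemsets into candidate (k+1)-itemsets for the next pass.
        level = list({a | b for a in level for b in level if len(a | b) == len(a) + 1})
    # A set is maximal if no other frequent set strictly contains it.
    return [s for s in frequent if not any(s < t for t in frequent)]

# Example: each "transaction" is the candidate term set of one fourth-layer segment.
segments = [{"cluster", "entropy", "kmeans"},
            {"cluster", "entropy"},
            {"cluster", "kmeans"}]
print(apriori_maximal(segments, min_support=0.6))
```

Only the maximal sets are kept as the fourth layer's feature collection, which avoids listing every frequent subset of a term combination that already qualifies as a whole.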