K-means Clustering Algorithm of Optimizing Initial Clustering Center
How to cite this article: GUO Yongkun, ZHANG Xinyou, LIU Liping, DING Liang, NIU Xiaolu. K-means Clustering Algorithm of Optimizing Initial Clustering Center[J]. Computer Engineering and Applications, 2020, 56(15): 172-178. DOI: 10.3778/j.issn.1002-8331.1910-0220
Authors: GUO Yongkun, ZHANG Xinyou, LIU Liping, DING Liang, NIU Xiaolu
Affiliation: 1. School of Computing, Jiangxi University of Traditional Chinese Medicine, Nanchang 330004, China; 2. School of Pharmacy, Jiangxi University of Traditional Chinese Medicine, Nanchang 330004, China
Abstract: To address the problem that the traditional K-means algorithm is highly sensitive to the initial centers and therefore produces unstable clustering results, an improved K-means clustering algorithm is proposed. The algorithm first computes the distances between samples and forms a set from the two closest points. Using a point-to-set distance formula, it then repeatedly adds the point closest to the set until the number of points in the set is greater than or equal to α (α is the ratio of the number of data points in the sample set to the number of clusters), after which the set is removed from the sample set. These steps are repeated to obtain K sets (K is the number of clusters), the mean of each set is taken as an initial center, and the final clustering result is obtained by running the K-means algorithm. On the Wine, Hayes-Roth, Iris, Tae, Heart-Statlog, Ionosphere, and Haberman datasets, the improved algorithm yields more stable clustering results than the traditional K-means and K-means++ algorithms; on the Wine, Iris, and Tae datasets, it achieves higher clustering accuracy than the K-means algorithm that optimizes the initial clustering centers by minimum variance, and it obtains the largest silhouette coefficients and F1 values on all seven datasets. For datasets with large density differences, the clustering results are more stable and accurate than those of the traditional K-means and K-means++ algorithms, and the algorithm is more efficient than the K-means algorithm that optimizes the initial clustering centers by minimum variance.

Keywords: K-means clustering algorithm; algorithm optimization; initial clustering center

K-means Clustering Algorithm of Optimizing Initial Clustering Center
GUO Yongkun, ZHANG Xinyou, LIU Liping, DING Liang, NIU Xiaolu. K-means Clustering Algorithm of Optimizing Initial Clustering Center[J]. Computer Engineering and Applications, 2020, 56(15): 172-178. DOI: 10.3778/j.issn.1002-8331.1910-0220
Authors: GUO Yongkun, ZHANG Xinyou, LIU Liping, DING Liang, NIU Xiaolu
Affiliation: 1. School of Computing, Jiangxi University of Traditional Chinese Medicine, Nanchang 330004, China; 2. School of Pharmacy, Jiangxi University of Traditional Chinese Medicine, Nanchang 330004, China
Abstract: An improved K-means clustering algorithm is proposed to address the problem that the traditional K-means algorithm is highly sensitive to the initial centers, which makes its clustering results unstable. The algorithm first calculates the distances between samples and forms a set from the two closest points. It then repeatedly adds the point closest to the set, according to a point-to-set distance formula, until the number of points in the set is greater than or equal to α (α is the ratio of the number of data points in the sample set to the number of clusters), after which the set is deleted from the sample set. These steps are repeated to obtain K sets (K is the number of clusters), the mean of each set is taken as an initial center, and the final clustering result is obtained by running the K-means algorithm. On the Wine, Hayes-Roth, Iris, Tae, Heart-Statlog, Ionosphere, and Haberman datasets, the improved algorithm produces more stable clustering results than the traditional K-means and K-means++ algorithms. On the Wine, Iris, and Tae datasets, it achieves higher clustering accuracy than the K-means algorithm that optimizes the initial clustering centers by minimum variance, and it obtains the largest silhouette coefficients and F1 values on all seven datasets. For datasets with large density differences, the improved algorithm is more stable and accurate than the traditional K-means and K-means++ algorithms, and more efficient than the K-means algorithm that optimizes the initial clustering centers by minimum variance.
Keywords: K-means clustering algorithm; algorithm optimization; initial clustering center
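The initialization procedure described in the abstract can be illustrated with a short sketch. The following Python/NumPy code is not from the paper; it is a minimal illustration under two assumptions the abstract does not fully specify: distances are Euclidean, and the point-to-set distance is taken as the minimum distance from a point to any member of the set.

import numpy as np

def init_centers(X, k):
    # Sketch of the initialization described in the abstract (not the authors' code).
    # X: (n, d) array of samples; k: number of clusters.
    n = X.shape[0]
    alpha = n // k                      # target size of each preliminary set
    remaining = list(range(n))          # indices of samples still available
    centers = []
    for _ in range(k):
        pts = X[remaining]
        # pairwise Euclidean distances among the remaining samples
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        # the two closest points start the set
        i, j = np.unravel_index(np.argmin(d), d.shape)
        members = {int(i), int(j)}
        # grow the set with the point closest to it (minimum distance to any member)
        while len(members) < alpha and len(members) < len(remaining):
            outside = [p for p in range(len(remaining)) if p not in members]
            dist_to_set = [min(d[p, m] for m in members) for p in outside]
            members.add(outside[int(np.argmin(dist_to_set))])
        # the mean of the set becomes one initial center; the set is removed from the pool
        idx = [remaining[m] for m in members]
        centers.append(X[idx].mean(axis=0))
        remaining = [r for r in remaining if r not in set(idx)]
    return np.array(centers)

The resulting centers can then be handed to a standard K-means implementation, for example scikit-learn's KMeans(n_clusters=k, init=centers, n_init=1), which runs Lloyd's iterations from exactly these initial centers.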