Similar Documents
 (20 similar documents found)
1.
To address the sensitivity of the traditional K-medoids clustering algorithm to initial center points and its high iteration count, a practical initialization method and a center search-and-update strategy are proposed. The new algorithm first uses the notion of density reachability to build a dense region around each object in the dataset, selects K dense regions that have high density and are far apart from one another, and takes the core object of each selected region as one of the K initial centers. Second, the search-and-update range of the K centers is restricted to the K selected dense regions. Tested on the Iris, Wine, and PId benchmark datasets, the new algorithm obtains good center points and dense regions and converges to an optimal or near-optimal solution within relatively few iterations.
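A minimal sketch of the density-plus-separation initialization idea described above; the dense-region construction via density reachability is simplified to an ε-neighborhood count, and the radius eps and the separation rule are assumptions, not taken from the paper:

import numpy as np
from scipy.spatial.distance import cdist

def density_init_medoids(X, k, eps):
    """Pick k initial medoids: high-density points that are far from each other.
    Density is approximated by the number of points within radius eps."""
    D = cdist(X, X)                      # pairwise distances
    density = (D < eps).sum(axis=1)      # ε-neighborhood count as a density proxy
    order = np.argsort(-density)         # candidates sorted by decreasing density
    centers = [order[0]]                 # densest point becomes the first medoid
    for idx in order[1:]:
        if len(centers) == k:
            break
        if D[idx, centers].min() > eps:  # keep only candidates far from chosen medoids
            centers.append(idx)
    return np.array(centers)

# usage (hypothetical parameters): medoid_idx = density_init_medoids(X, k=3, eps=0.5)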

2.
Clustering of mixed-type data has attracted wide attention in recent years. The K-prototype algorithm is an effective method for mixed-type data, but it usually initializes cluster centers by random selection, a strategy that often fails to guarantee the quality of the clustering results in practice. To address this problem, an outlier-detection-based strategy is adopted to choose initial centers for K-prototype, and a new initialization algorithm for mixed-type data clustering is proposed (initialization of K-prototype clustering based on outlier detection and density, IKP-ODD). Given a candidate object, IKP-ODD decides whether it can serve as an initial center by computing its distance outlier factor, its weighted density, and its weighted distances to the already-chosen initial centers. By using the distance outlier factor and the weighted density, IKP-ODD avoids selecting outliers as initial centers. When computing the weighted density of an object and the weighted distances between objects, the granular neighborhood entropy from neighborhood rough sets is used to measure the importance of each attribute, and attributes are weighted according to their importance, which effectively reflects the differences among attributes. Experiments on multiple UCI datasets show that, compared with existing initialization methods, IKP-ODD solves the initialization problem of K-prototype clustering better.

3.

To determine the initial cluster centers for K-means and similar clustering algorithms, the number of density-statistics intervals on each dimension is first determined from the sample size and the length of the value range on that dimension, and the mean of the samples falling in each density-peak interval that satisfies a screening condition is taken as a candidate initial cluster center. A relation tree of candidate initial cluster centers is then built from the mapping of density-peak intervals across dimensions, and the max-min distance algorithm is applied to obtain the initial cluster centers. Finally, to determine the optimal number of clusters, a clustering validity function is built from the within-cluster sample density and the cluster density. Experimental results on synthetic and UCI datasets demonstrate the effectiveness of the proposed algorithm.


4.
刘娟  万静 《计算机科学与探索》2021,15(10):1888-1899
Density peaks clustering is a density-based clustering algorithm. To overcome its sensitivity to parameters and its poor results on data with complex manifold structure, a new density peaks clustering algorithm based on the natural reverse nearest neighbor structure is proposed. First, the algorithm uses reverse nearest neighbors to compute the local density of each data object. Second, initial cluster centers are selected by combining representative points with density. Then, a density-adaptive distance is used to compute the distances between initial cluster centers, a decision graph is built over the initial cluster centers from the reverse-nearest-neighbor-based local density and the density-adaptive distance, and the final cluster centers are selected from the decision graph. Finally, each remaining data object is assigned to the cluster of its nearest initial cluster center. Experimental results on synthetic datasets and real UCI datasets show that, compared with the baseline algorithms, the proposed algorithm achieves better clustering quality and accuracy and is clearly stronger on data with complex manifold structure.

5.
Fuzzy c-means clustering is currently one of the most popular algorithms in cluster analysis, but its clustering quality is often affected by the initial parameters. To address this problem, an initialization method for fuzzy c-means based on grids and density is proposed. Grids and density are used to extract the cluster centers of the samples, which are then used to initialize the parameters of the fuzzy c-means algorithm and compensate for the weakness of the original algorithm. Experiments show that the method is feasible and effective.

6.
Rough K-means clustering and its derivative algorithms require the number of clusters to be given in advance and select initial cluster centers at random, which leads to low accuracy when partitioning data in the overlapping regions between clusters. To address these problems, a rough fuzzy K-means clustering algorithm based on a mixed measure and adaptive cluster adjustment is proposed. When computing the degree to which objects in the boundary region belong to different clusters, a mixed measure combining local density and distance is used, and a strategy that adaptively adjusts the number of clusters yields the optimal cluster count. The midpoint of the two closest samples inside a dense region of data objects is chosen as an initial cluster center, nearby objects whose local density exceeds the average density are assigned to that cluster, and the remaining initial centers are then selected in the same way, which makes the choice of initial centers more reasonable. Experiments on synthetic and UCI benchmark datasets show that the proposed algorithm is adaptive and achieves better clustering accuracy when handling spherical clusters with heavy overlap.

7.
In K-means-style multi-view clustering algorithms, the final clustering result is affected by the initial cluster centers. This work therefore studies how different initial-center selection methods affect K-means-style multi-view clustering and proposes a sampling-based active initial-center selection method (sampled clustering by fast search and find of density peaks, SDPC). The method uniformly samples the dataset, applies clustering by fast search and find of density peaks (DPC) to the sample, and then uses a K-means re-iteration strategy, which further improves the efficiency of initial-center selection and addresses the problem of choosing the number of clusters in multi-view clustering. Experiments verify the influence of different initialization methods on K-means-style multi-view clustering. Results on multi-view benchmark datasets show that global (kernel) K-means initialization suffers from excessive time complexity, AFKMC² (assumption-free K-Markov chain Monte Carlo) initialization is suitable for large-scale data, DPC can actively choose the number of clusters and the initial centers, and SDPC, compared with DPC, not only obtains the number of clusters actively but also achieves a good trade-off between clustering accuracy and efficiency.

8.
An Effective Initialization Method for K-means Cluster Centers
The traditional K-means algorithm selects initial cluster centers at random, which makes the clustering results unstable, while the existing max-min distance method tends to select initial centers that are too densely packed, which easily causes cluster conflicts. To address these problems, the max-min distance method is improved into a maximum distance product method. Building on the notion of density, the method selects as the next cluster center the high-density point whose product of distances to all previously initialized cluster centers is the largest. Theoretical analysis and comparative experiments show that, compared with the traditional K-means algorithm and the max-min distance method, this method converges faster and is more accurate and more stable.
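A minimal sketch of the maximum-distance-product selection rule described above; the ε-neighborhood density estimate and the mean-density threshold for "high-density points" are assumptions, and the paper's exact density definition may differ:

import numpy as np
from scipy.spatial.distance import cdist

def max_distance_product_init(X, k, eps):
    """Choose k initial centers: start from the densest point, then repeatedly
    pick the high-density point maximizing the product of its distances
    to all centers chosen so far."""
    D = cdist(X, X)
    density = (D < eps).sum(axis=1)                  # ε-neighborhood count as density
    high = np.where(density >= density.mean())[0]    # restrict to high-density points
    centers = [high[np.argmax(density[high])]]       # densest point first
    while len(centers) < k:
        prod = D[np.ix_(high, centers)].prod(axis=1) # product of distances to chosen centers
        prod[np.isin(high, centers)] = -np.inf       # never re-pick a chosen center
        centers.append(high[np.argmax(prod)])
    return X[centers]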

9.
The K-means clustering algorithm is simple, efficient, and widely used. The traditional algorithm chooses initial cluster centers at random, which makes it prone to local optima, and the value of K must be set manually. To obtain the most suitable initial cluster centers, an improved K-means algorithm based on distance and sample weights is proposed. The algorithm measures the closeness of sample points with a dimension-weighted Euclidean distance. After computing the density and weight of every sample, the point with the highest density is taken as the first initial cluster center and all samples in its cluster are removed; the next initial center is then found from the previous center, the weights of the remaining samples, and an introduced parameter τ_i, and this process is repeated until the dataset is empty, finally yielding k initial cluster centers automatically. Tests on UCI datasets show that, compared with the classical K-means, WK-means, ZK-means, and DCK-means algorithms, the improved K-means based on distance and weights achieves better clustering results.

10.
Most existing algorithms for matrix-object data obtain clustering results by randomly selecting initial cluster centers. To overcome the influence of different initial centers on the clustering results, a new initial-center selection algorithm for categorical matrix data is proposed. The density of a matrix object and the distance between matrix objects are defined from attribute-value frequencies, and the max-min distance algorithm is extended to select the initial cluster centers. Experimental results on seven real datasets show that the algorithm achieves better clustering results than the initial-center selection algorithms CAOICACD and BAIICACD.

11.
This paper proposes a new method for estimating the true number of clusters and the initial cluster centers in a dataset with many clusters. Observation points are placed in the data space, and the clusters are observed through the distributions of the distances between the observation points and the objects in the dataset. A Gamma Mixture Model (GMM) is built from a distance distribution to partition the dataset into subsets, and a GMM tree is obtained by recursively partitioning the dataset. From the leaves of the GMM tree, a set of initial cluster centers is identified and the true number of clusters is estimated. This method is implemented in the new GMM-Tree algorithm. Two GMM forest algorithms are further proposed that ensemble multiple GMM trees to handle high-dimensional data with many clusters. The GMM-P-Forest algorithm builds GMM trees in parallel, whereas the GMM-S-Forest algorithm builds a GMM forest sequentially. Experiments were conducted on 32 synthetic datasets and 15 real datasets to evaluate the performance of the new algorithms. The results show that the proposed algorithms outperform the popular existing methods Silhouette, Elbow, and Gap Statistic, as well as the recent method I-nice, in estimating the true number of clusters from high-dimensional complex data.

12.
Partitional clustering of categorical data is normally performed with the K-modes clustering algorithm, which works well for large datasets. Although the design and implementation of the K-modes algorithm are simple and efficient, it has the pitfall of randomly choosing the initial cluster centers on every new execution, which may lead to non-repeatable clustering results. This paper addresses the randomized center initialization problem of the K-modes algorithm by proposing a cluster center initialization algorithm. The proposed algorithm performs multiple clusterings of the data based on the values in different attributes and yields deterministic modes that are used as initial cluster centers. We propose a new method for selecting the most relevant attributes, namely Prominent attributes, compare it with an existing method that finds Significant attributes for unsupervised learning, and perform multiple clusterings of the data to find initial cluster centers. The proposed algorithm ensures fixed initial cluster centers and thus repeatable clustering results. Its worst-case time complexity is log-linear in the number of data objects. We evaluate the proposed algorithm on several categorical datasets, comparing it against random initialization and two other initialization methods, and show that the proposed method performs better in terms of accuracy and time complexity. The initial cluster centers computed by the proposed approach are close to the actual cluster centers of the different datasets we tested, which leads to faster convergence of the K-modes algorithm together with better clustering results.

13.
Clustering by fast search and find of density peaks (CFSFDP) clusters the data by finding density peaks. CFSFDP is based on two assumptions: a cluster center is a data point of higher density than its surrounding neighbors, and it lies at a relatively large distance from other cluster centers. Based on these assumptions, CFSFDP supports a heuristic approach, known as the decision graph, for manually selecting cluster centers. Manual selection of cluster centers is a major limitation of CFSFDP in intelligent data analysis. In this paper, we propose a fuzzy-CFSFDP method for adaptively and effectively selecting the cluster centers. It uses fuzzy rules, based on the aforementioned assumptions, for the selection of cluster centers. We performed a number of experiments on nine synthetic clustering datasets and compared the resulting clusters with those of state-of-the-art methods. The clustering results and the comparisons on synthetic data validate the robustness and effectiveness of the proposed fuzzy-CFSFDP method.
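A minimal sketch of the two CFSFDP quantities behind the decision graph (local density ρ and distance δ to the nearest denser point). Instead of the paper's fuzzy rules, centers are picked here by the largest γ = ρ·δ values, a common automatic rule; the cutoff distance dc and the γ ranking are assumptions:

import numpy as np
from scipy.spatial.distance import cdist

def cfsfdp_decision_values(X, dc):
    """For every point compute the CFSFDP quantities:
    rho   - number of neighbors within the cutoff distance dc
    delta - distance to the nearest point of higher density."""
    D = cdist(X, X)
    rho = (D < dc).sum(axis=1) - 1            # exclude the point itself
    delta = np.full(len(X), D.max())          # convention for the globally densest point
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]    # points denser than i
        if higher.size:
            delta[i] = D[i, higher].min()
    return rho, delta

def pick_centers(X, k, dc):
    """Select k cluster centers by the largest gamma = rho * delta."""
    rho, delta = cfsfdp_decision_values(X, dc)
    return np.argsort(-(rho * delta))[:k]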

14.
The leading partitional clustering technique, k-modes, is one of the most computationally efficient clustering methods for categorical data. However, the performance of the k-modes clustering algorithm, which converges to numerous local minima, strongly depends on the initial cluster centers. Currently, most cluster-center initialization methods are designed mainly for numerical data. Because categorical data lack the geometry of numerical data, those initialization methods are not applicable to categorical data. This paper proposes a novel initialization method for categorical data that is implemented in the k-modes algorithm. The method integrates distance and density to select initial cluster centers and overcomes the shortcomings of the existing initialization methods for categorical data. Experimental results illustrate that the proposed initialization method is effective and can be applied to large data sets thanks to its linear time complexity with respect to the number of data objects.
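A minimal sketch of the distance-plus-density idea for categorical data: density from attribute-value frequencies and distance by simple mismatch counting. The concrete combination rule below (densest object first, then maximizing distance × density) is an assumption, not the paper's formula:

import numpy as np

def matching_distance(a, b):
    """Simple mismatch (Hamming-style) distance between two categorical objects."""
    return np.sum(a != b)

def density_distance_init(X, k):
    """X: 2-D array of categorical values. The density of an object is the
    average frequency of its attribute values over the whole dataset."""
    n, m = X.shape
    freq = [{v: np.mean(X[:, j] == v) for v in np.unique(X[:, j])} for j in range(m)]
    density = np.array([np.mean([freq[j][X[i, j]] for j in range(m)]) for i in range(n)])
    centers = [int(np.argmax(density))]            # densest object first
    while len(centers) < k:
        score = np.array([min(matching_distance(X[i], X[c]) for c in centers) * density[i]
                          for i in range(n)])
        score[centers] = -1                        # never re-pick a chosen center
        centers.append(int(np.argmax(score)))
    return X[centers]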

15.
张亚萍  胡学钢 《微机发展》2007,17(11):33-35
The K-means algorithm is introduced into naive Bayes classification research, and a K-means-based naive Bayes classification algorithm is proposed. First, K-means is applied to the subset of complete records in the original dataset; for each record in the subset with missing values, its similarity to the k cluster centroids is computed, the record is assigned to the nearest cluster, and its missing values are filled with the corresponding attribute means of that cluster. The naive Bayes classification algorithm is then applied to the processed dataset. Experimental results show that, compared with plain naive Bayes, the K-means-based naive Bayes algorithm achieves higher classification accuracy.
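A minimal sketch of the pipeline described above on purely numeric data, using scikit-learn; the complete/incomplete split by NaN, the nearest-centroid assignment on observed attributes, and the choice of GaussianNB as the classifier are assumptions about the details:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def kmeans_impute_then_nb(X, y, k):
    """Fill missing values (NaN) with the attribute means of the nearest K-means
    cluster learned on the complete rows, then train a naive Bayes classifier."""
    complete = ~np.isnan(X).any(axis=1)
    km = KMeans(n_clusters=k, n_init=10).fit(X[complete])
    X_filled = X.copy()
    for i in np.where(~complete)[0]:
        row, miss = X[i], np.isnan(X[i])
        # distance to each centroid using only the observed attributes
        d = np.linalg.norm(km.cluster_centers_[:, ~miss] - row[~miss], axis=1)
        c = np.argmin(d)
        # attribute means of that cluster's (complete) members fill the gaps
        X_filled[i, miss] = X[complete][km.labels_ == c][:, miss].mean(axis=0)
    return GaussianNB().fit(X_filled, y), X_filled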

16.
Fuzzy c-means clustering with spatial constraints is considered a suitable algorithm for data clustering and data analysis. However, FCM still lacks sufficient robustness to noisy data, because its objective function uses the Euclidean distance to measure the relationship between objects. It is therefore only effective for clustering 'spherical' clusters and may not give reasonable results for 'non-compactly filled' spherical data such as 'annular-shaped' data. This paper examines these drawbacks of the general fuzzy c-means algorithm and introduces an extended Gaussian-kernel version of fuzzy c-means by replacing the Euclidean distance in the original FCM objective function. First, an initial kernel version of fuzzy c-means is proposed to simplify its computation, and it is then extended to an extended Gaussian-kernel version of fuzzy c-means. Effective methods are derived for constructing the membership matrix of the objects and for robustly updating the centers in the extended Gaussian version of fuzzy c-means. Furthermore, the paper proposes a new prototype-learning method and obtains initial cluster centers through a new mathematical center initialization for the new objective function, so as to minimize the number of iterations and obtain more accurate results. Initial experiments are conducted on artificially generated data to show how effectively the proposed Gaussian version of fuzzy c-means obtains clusters, and the proposed methods are then applied to cluster the Wisconsin breast cancer database into two clusters for the classes benign and malignant. To show the performance of the proposed fuzzy c-means with the new center initialization, this work compares its results with those of a recent fuzzy c-means algorithm and, in addition, uses the Silhouette method to validate the clusters obtained from the breast cancer datasets.
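A minimal sketch of Gaussian-kernel fuzzy c-means using the standard kernel-FCM update equations, in which the Euclidean distance of plain FCM is replaced by the kernel-induced term 1 − K(x, v); the random center initialization, σ, and m below are placeholders and not the paper's new initialization scheme:

import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel_fcm(X, c, m=2.0, sigma=1.0, iters=100, tol=1e-5, seed=0):
    """Kernel FCM with a Gaussian kernel K(x, v) = exp(-||x - v||^2 / sigma^2)."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]              # initial centers (random here)
    for _ in range(iters):
        K = np.exp(-cdist(X, V, "sqeuclidean") / sigma**2)   # n x c kernel values
        d = np.clip(1.0 - K, 1e-12, None)                    # kernel-induced distance term
        U = d ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)                    # membership matrix, rows sum to 1
        W = (U ** m) * K
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]           # kernel-weighted center update
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return U, V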

17.
Research on a K-means-Based Naive Bayes Classification Algorithm
The K-means algorithm is introduced into naive Bayes classification research, and a K-means-based naive Bayes classification algorithm is proposed. First, K-means is applied to the subset of complete records in the original dataset; for each record in the subset with missing values, its similarity to the k cluster centroids is computed, the record is assigned to the nearest cluster, and its missing values are filled with the corresponding attribute means of that cluster. The naive Bayes classification algorithm is then applied to the processed dataset. Experimental results show that, compared with plain naive Bayes, the K-means-based naive Bayes algorithm achieves higher classification accuracy.

18.
A Fast Fuzzy C-means Clustering Method for Color Image Segmentation
Fuzzy c-means (FCM) clustering for color image segmentation is simple, intuitive, and easy to implement, but its performance is affected by the initialization of the center points and its computational cost is high. To address this, a fast fuzzy clustering method (FFCM) is proposed. The method uses hierarchical subtractive clustering to divide the image data into a number of subsets with similar colors; the subset centers are used to initialize the cluster centers on the one hand, and fuzzy clustering is performed on the subset centers and their distribution densities on the other. Because the number of clustering samples is greatly reduced and hierarchical subtractive clustering is computationally cheap, the speed of the fuzzy c-means algorithm is improved substantially, and the number of clusters can then be determined quickly with a clustering validity index. Experiments show that the method does not require the number of clusters to be specified in advance and, without degrading the clustering quality, markedly speeds up fuzzy clustering and achieves fast segmentation of color images.
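A minimal sketch of the subtractive clustering step used above to pick color subset centers, following the standard potential/revision formulas; the radii ra and rb and the stopping ratio are assumptions:

import numpy as np
from scipy.spatial.distance import cdist

def subtractive_clustering(X, ra=0.5, rb=0.75, stop_ratio=0.15):
    """Return candidate centers: points of highest 'potential', where each selected
    center suppresses the potential of nearby points before the next pick."""
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    D2 = cdist(X, X, "sqeuclidean")
    P = np.exp(-alpha * D2).sum(axis=1)          # initial potential of every point
    centers, first = [], P.max()
    while True:
        i = int(np.argmax(P))
        if P[i] < stop_ratio * first:            # stop when remaining potential is small
            break
        centers.append(i)
        P -= P[i] * np.exp(-beta * D2[i])        # suppress potential near the new center
    return X[centers]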

19.
An Anomaly Detection Algorithm Based on Improved K-means Clustering
左进  陈泽茂 《计算机科学》2016,43(8):258-261
By improving the random selection of initial cluster centers in the traditional K-means algorithm, an anomaly detection algorithm based on improved K-means clustering is proposed. When selecting initial cluster centers, the tightness of all data points is first computed and outlier regions are excluded, and K initial centers are then selected evenly from the dense regions, avoiding the tendency of random selection to fall into local optima. By optimizing the selection process, the algorithm starts its iterations closer to the true cluster centers, which reduces the number of iterations and improves the clustering quality and the anomaly detection rate. Experiments show that the improved algorithm clearly outperforms the original in both clustering performance and anomaly detection.
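A minimal sketch of the initialization idea described above: tightness is approximated by the mean distance to the knn nearest neighbors, the loosest points are excluded as the outlier region, and the K centers are spread over the remaining dense points by a farthest-point rule; all of these concrete choices, and the final distance-based anomaly score, are assumptions:

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def tight_init_kmeans(X, k, knn=10, outlier_frac=0.05):
    """K-means whose initial centers are picked only among 'tight' (dense) points."""
    D = cdist(X, X)
    tightness = np.sort(D, axis=1)[:, 1:knn + 1].mean(axis=1)         # mean k-NN distance (small = dense)
    keep = np.argsort(tightness)[: int(len(X) * (1 - outlier_frac))]  # drop the loosest points
    centers = [keep[0]]                                               # tightest point first
    for _ in range(k - 1):
        # farthest-point rule over the dense points spreads the centers out
        d_min = D[np.ix_(keep, centers)].min(axis=1)
        centers.append(keep[int(np.argmax(d_min))])
    km = KMeans(n_clusters=k, init=X[centers], n_init=1).fit(X)
    # anomaly score: distance of each point to its own cluster center
    score = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    return km, score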

20.
The leading partitional clustering technique, k-modes, is one of the most computationally efficient clustering methods for categorical data. However, in k-modes-type algorithms the clustering performance depends on the initial cluster centers, and the number of clusters needs to be known or given in advance. This paper proposes a novel initialization method for categorical data that is implemented in k-modes-type algorithms. The proposed method can not only obtain good initial cluster centers but also provide a criterion for finding candidates for the number of clusters. The performance and scalability of the proposed method have been studied on real data sets. The experimental results illustrate that the proposed method is effective and can be applied to large data sets thanks to its linear time complexity with respect to the number of data points.
