17 similar documents found; search took 116 ms.
1.
2.
Fuzzy clustering is an important research topic in pattern recognition, machine learning, and image processing. The fuzzy C-means algorithm is the most widely used fuzzy clustering method, but it requires the number of clusters to be specified in advance. This paper proposes a new cluster validity index for evaluating clustering results. From the perspectives of partition entropy, membership degree, and geometric structure, the index defines three key measures: compactness, separation, and overlap. On this basis, a method for determining the optimal number of clusters is proposed. The new index and traditional validity indices were evaluated on six synthetic data sets and three real data sets. Experimental results show that the proposed index and method evaluate clustering results effectively and are well suited to determining the optimal number of clusters.
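The compactness/separation style of measurement described above can be illustrated with a small sketch. The paper's exact definitions of compactness, separation, and overlap are not given in the abstract, so common Xie-Beni-style quantities are used here as stand-ins; the function name and argument layout are assumptions for illustration only:

```python
import numpy as np

def compactness_separation(X, centers, U, m=2.0):
    """Membership-weighted compactness and centre separation for a fuzzy
    partition. X: (n, d) data, centers: (c, d), U: (c, n) memberships."""
    n = X.shape[0]
    # compactness: fuzzified squared distance of every sample to every centre
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # (c, n)
    compact = ((U ** m) * d2).sum() / n
    # separation: smallest squared distance between two distinct centres
    cd2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(cd2, np.inf)
    separate = cd2.min()
    return compact, separate
```

A validity index built from such measures is typically evaluated over a range of cluster counts, and the count giving the best compactness-to-separation trade-off is selected.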
3.
4.
The K-means algorithm clusters a data set given a fixed number of clusters k and randomly chosen initial centers. In practice k is usually unknown in advance, and random initial centers make the clustering results unstable. This paper proposes a new method for determining the optimal number of clusters for K-means: the number of clusters produced by the Affinity Propagation (AP) algorithm, with suitably chosen parameters, serves as the upper bound kmax of the search range; the Silhouette index is chosen as the validity criterion; and initial centers are selected following the max-min distance principle. Analyzing the clustering quality over this range yields the optimal number of clusters. Simulation experiments and analysis confirm the feasibility of the scheme.
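The max-min distance initialization mentioned above can be sketched as follows (the function name is an assumption; the rule itself is the standard one: seed with an extreme sample, then repeatedly add the sample farthest from all chosen centers):

```python
import numpy as np

def maxmin_centers(X, k):
    """Pick k initial centres by the max-min distance rule: start from the
    sample farthest from the global mean, then repeatedly add the sample
    whose distance to its nearest chosen centre is largest."""
    centers = [X[np.argmax(((X - X.mean(axis=0)) ** 2).sum(axis=1))]]
    for _ in range(1, k):
        # squared distance of each sample to its nearest chosen centre
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)
```

Because each new center maximizes the distance to the centers already chosen, the initial centers tend to land in different clusters, which is what stabilizes the subsequent K-means iterations.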
5.
《计算机科学与探索》 (Journal of Frontiers of Computer Science and Technology), 2016, (2): 230-247
The fast K-medoids algorithm and the variance-optimized-initialization K-medoids algorithm require the number of clusters to be supplied manually, may place initial centers in the same cluster, or cannot fully determine the initial cluster centers of a data set. To address these shortcomings, and inspired by density peaks clustering, two K-medoids algorithms that adaptively determine the number of clusters are proposed. The local density ρ_i of a sample x_i is measured as the reciprocal of the sum of its t-nearest-neighbor distances, a new distance δ_i is defined for each sample, and a decision graph of sample distance against sample density is constructed. Samples with high local density that lie far from other high-density samples fall in the upper-right region of the decision graph, well separated from the bulk of the data. Choosing these samples as initial centers ensures that the initial centers lie in different clusters and yields the number of clusters automatically. To further optimize the clustering result, the ratio of intra-cluster distance to inter-cluster distance is adopted as the clustering criterion function. Experiments on UCI and synthetic data sets compared the initial centers, iteration counts, clustering time, Rand index, Jaccard coefficient, Adjusted Rand Index, and clustering accuracy. The results show that the proposed K-medoids algorithms effectively identify the true number of clusters and reasonable initial centers, reduce the number of iterations and the clustering time, improve accuracy, and are robust to noisy data.
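The decision-graph construction described above can be sketched as follows. The ρ and δ definitions follow the abstract (ρ_i is the reciprocal of the sum of t-nearest-neighbor distances; δ_i is the distance to the nearest higher-density sample, or the largest distance for the densest sample); the function name and tie handling are assumptions:

```python
import numpy as np

def decision_graph(X, t=2):
    """rho_i = 1 / (sum of distances to the t nearest neighbours of x_i);
    delta_i = distance to the nearest sample of higher density (for the
    densest sample, the largest distance to any sample)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    rho = 1.0 / np.sort(D, axis=1)[:, 1:t + 1].sum(axis=1)  # skip self
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta
```

Samples with large ρ·δ sit in the upper-right of the decision graph; choosing them as initial medoids puts one center per cluster and fixes the cluster count at the same time.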
6.
7.
8.
Determining the optimal number of clusters for a data set is a key open problem in clustering research. To determine it more effectively, this paper proposes a method that combines an improved K-means algorithm with an algorithm-independent validity index Q(c). Theoretical analysis and experimental results demonstrate that the method performs well and is effective.
9.
10.
A fuzzy clustering algorithm based on an inter-cluster distance threshold    Cited: 1 (self-citations: 0, other citations: 1)
The fuzzy C-means (FCM) algorithm requires the number of clusters C to be set in advance, and a poor choice easily leads to misclassification. To address this problem, a method is proposed that searches for the optimal number of clusters using an inter-cluster distance threshold. The algorithm determines the optimal number of clusters adaptively, and simulation results confirm the effectiveness of the threshold-based approach.
11.
To better assess the quality of unsupervised clustering and to avoid evaluation failures caused by overlapping cluster centers, this paper analyses common clustering evaluation indices and proposes a new internal index: the separation of the whole sample set is defined as the product of the minimum squared distance between neighboring boundary points of different clusters and the number of samples within the cluster, balancing inter-cluster separation against intra-cluster compactness. A new density computation is also proposed: objects for which the ratio of the average distance of the sample set to the object's own average distance is large are treated as high-density points, and the maximum-product rule selects relatively dispersed objects of high density as initial centers, improving the representativeness of the K-medoids initial centers and the stability of the algorithm. On this basis, a clustering quality evaluation model is designed around the new internal index. Experimental results on the UCI and KDD CUP 99 data sets show that the new model clusters samples without prior knowledge effectively, evaluates the results reasonably, and reports the optimal number of clusters or an optimal range.
12.
Julius T. Tou, International Journal of Parallel Programming, 1979, 8(6): 541-547
A new technique for automatic clustering of multivariate data is proposed. A performance index for determining optimal clusters is introduced, expressed as the ratio of the minimum inter-set distance to the maximum intra-set distance. The optimal clusters are found when the performance index reaches its global maximum; if there are alternative groupings with the same number of clusters, the one with the largest performance index is chosen.
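The performance index described above is simple to state concretely. The sketch below uses Euclidean distances between all point pairs; the function name is an assumption:

```python
import numpy as np

def performance_index(X, labels):
    """Ratio of the minimum inter-set distance to the maximum intra-set
    distance over all point pairs; larger values indicate better clusters."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)        # ignore zero self-distances
    intra = D[same].max()                # largest within-cluster distance
    inter = D[labels[:, None] != labels[None, :]].min()
    return inter / intra
```

Comparing the index across candidate partitions, the one with the largest value (widest gap between clusters relative to their spread) is selected as optimal.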
13.
Sriparna Saha, Sanghamitra Bandyopadhyay, Pattern Recognition, 2010, 43(3): 738-751
In this paper, the problem of automatically clustering a data set is posed as a multiobjective optimization (MOO) problem in which a set of cluster validity indices is optimized simultaneously. The proposed multiobjective clustering technique uses a recently developed simulated-annealing-based multiobjective optimization method as the underlying optimization strategy. A variable number of cluster centers is encoded in each string, so the number of clusters varies from string to string over a range. Points are assigned to clusters using a newly developed point-symmetry-based distance rather than the usual Euclidean distance. Two cluster validity indices, the Euclidean-distance-based XB-index and the recently developed point-symmetry-distance-based Sym-index, are optimized simultaneously to determine the appropriate number of clusters in a data set. The proposed technique can therefore detect both the proper number of clusters and the appropriate partitioning for data sets with either hyperspherical or point-symmetric clusters. A new semi-supervised method is also proposed to select a single solution from the final Pareto-optimal front. The efficacy of the algorithm is shown on seven artificial and six real-life data sets of varying complexity, and results are compared with those of another multiobjective clustering technique, MOCK, and two single-objective genetic-algorithm-based automatic clustering techniques, VGAPS clustering and GCUK clustering.
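The point-symmetry-based distance mentioned above can be sketched following its commonly used definition (reflect the sample through the candidate center, average the distances from the reflected point to its knear nearest data points, and scale by the Euclidean distance to the center); details may differ from the paper's exact formulation:

```python
import numpy as np

def ps_distance(x, c, X, knear=2):
    """Point-symmetry distance of sample x w.r.t. centre c: reflect x
    through c, average the distances from the reflected point to its knear
    nearest data points, then scale by the Euclidean distance ||x - c||."""
    reflected = 2.0 * c - x
    d = np.sqrt(((X - reflected) ** 2).sum(axis=1))
    return np.sort(d)[:knear].mean() * np.sqrt(((x - c) ** 2).sum())
```

When a data point exists near the mirror image of x through c, the averaged neighbor distance is small, so symmetric cluster shapes score low even when they are not hyperspherical.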
14.
Although the traditional K-means algorithm converges quickly, the number of clusters K cannot be determined in advance and the algorithm is sensitive to the initial centers. To address these shortcomings, an optimized K-means algorithm based on density expectation and the Silhouette validity index is proposed. An initial-center selection scheme based on density expectation is given: the K samples within the density-expectation interval that are farthest apart are taken as the initial centers. This scheme effectively reduces the dependence of K-means on the initial centers and yields higher clustering quality. On this basis, the Silhouette validity index is used to analyse the clustering results for different values of K and determine the optimal number of clusters, which remedies the drawback that K cannot be fixed in advance. Experiments and analysis verify the feasibility and effectiveness of the proposed scheme.
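The Silhouette index used above to compare different values of K can be sketched directly from its definition (for each sample, a is the mean distance to its own cluster and b the smallest mean distance to another cluster); the function name is an assumption:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette: for each sample, a = mean distance to its own
    cluster (excluding itself), b = smallest mean distance to another
    cluster, s = (b - a) / max(a, b)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    labels = np.asarray(labels)
    scores = []
    for i in range(len(X)):
        own = labels == labels[i]
        a = D[i, own].sum() / max(own.sum() - 1, 1)
        b = min(D[i, labels == l].mean()
                for l in set(labels.tolist()) if l != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Running the clustering for each candidate K and keeping the K with the largest mean silhouette is the selection rule the abstract describes.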
15.
16.
Clustering trajectory data discovers and visualizes the structure present in the movement patterns of mobile objects and has numerous potential applications in traffic control, urban planning, astronomy, and animal science. In this paper, an automated technique for clustering trajectory data using a Particle Swarm Optimization (PSO) approach is proposed, with Dynamic Time Warping (DTW) distance, one of the most commonly used distance measures for trajectory data, as the dissimilarity measure. The proposed technique finds a (near-)optimal number of clusters as well as (near-)optimal cluster centers during the clustering process. To reduce the dimensionality of the search space and improve performance with respect to a given performance index, a Discrete Cosine Transform (DCT) representation of the cluster centers is used. The method admits various cluster validity indices as the objective function. Experimental results on both synthetic and real-world datasets indicate the superiority of the proposed technique over fuzzy C-means, fuzzy K-medoids, and two evolutionary clustering techniques from the literature.
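The DTW distance underlying the technique above is the classic dynamic-programming recurrence; the sketch below handles 1-D sequences with an absolute-difference local cost (real trajectories would use a multivariate point cost, which is a straightforward extension):

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences with
    absolute-difference local cost, computed by dynamic programming."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Because DTW aligns sequences of different lengths and speeds, it is a natural pairwise distance for trajectories, at the cost of O(nm) time per pair.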
17.
Kuang Yu Huang, Knowledge, 2011, 24(3): 420-426
This paper introduces a new hybrid cluster validity method based on particle swarm optimization for the widely studied problem of clustering and classifying complex datasets. The proposed method, designated the PSORS index method, combines a particle swarm optimization (PSO) algorithm, Rough Set (RS) theory, and a modified form of the Huang index function. In contrast to the Huang index method, which simply assigns a constant number of clusters to each attribute, the proposed method clusters the values of the individual attributes within the dataset and achieves both the optimal number of clusters and the optimal classification accuracy. The validity of the approach is investigated by comparing the classification results obtained on a real-world dataset with those of pseudo-supervised BPNN classification, decision-tree, and Huang index methods. There is good evidence that the PSORS index method not only clusters better than the compared methods but also achieves higher classification accuracy.