Similar Documents
18 similar documents found (search time: 149 ms).
1.
逄琳  刘方爱 《计算机应用》2016,36(6):1634-1638
Traditional clustering algorithms cluster a dataset repeatedly and are computationally inefficient on large datasets. To address this, an algorithm for determining the optimal number of clusters and the initial cluster centers based on hierarchical partitioning is proposed: Clustering Optimization based on Density of Hierarchical Division (CODHD). Built on hierarchical partitioning, the algorithm restructures the computation so that the dataset need not be clustered repeatedly. First, the dataset is scanned once to collect the statistics of all clustering features. Second, data partitions at different levels are generated bottom-up; for each partition, the density of every data point is computed, the maximum-density point is taken as a center, and the minimum distance from each center to any point of higher density is computed. Using the average of the sums of the products of center density and minimum distance as a validity index, a clustering-quality curve over the partitions at different levels is built incrementally. Finally, the optimal number of clusters and the initial cluster centers are estimated from the partition corresponding to the extreme point of the curve. Experimental results show that, compared with the Clustering Optimization in the Preprocessing Stage (COPS) algorithm, the proposed CODHD improves clustering accuracy by 30% and clustering efficiency by at least 14.24%. The proposed algorithm is both feasible and practical.
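The center-scoring step this family of methods relies on (each point's density multiplied by its distance to the nearest higher-density point) can be sketched as follows. This is a generic density-peaks-style scoring, not the authors' CODHD implementation; the `radius` neighbourhood parameter is a hypothetical stand-in for the density estimate obtained from the hierarchical partitions:

```python
import math

def center_scores(points, radius=1.0):
    """Score each point as density * delta: density counts neighbours
    within `radius`, and delta is the distance to the nearest point of
    strictly higher density (points with no denser point take their
    largest distance, as in density-peaks clustering)."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    density = [sum(1 for j in range(n) if j != i and dist[i][j] <= radius)
               for i in range(n)]
    scores = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if density[j] > density[i]]
        delta = min(higher) if higher else max(dist[i])
        scores.append(density[i] * delta)
    return scores

# two illustrative groups: a dense 4-point cluster and a sparse 2-point one
pts = [(0, 0), (0.3, 0), (0, 0.3), (0.3, 0.3), (5, 5), (5.3, 5)]
scores = center_scores(pts)
```

Points that score highest combine high density with isolation from denser points, which is exactly what makes them candidate cluster centers.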

2.
Building on the principles of traditional algorithms for determining the number of clusters in a dataset, a new algorithm, MHC, is proposed. It adopts a bottom-up strategy to generate dataset partitions at different levels, computes the clustering quality of each level's partition, and selects the optimal number of clusters according to that quality. A new validity index, the BIP index, is also designed to measure the clustering quality of different partitions; the index relies mainly on the geometric structure of the dataset. Experimental results show that the algorithm accurately determines the optimal number of clusters in multidimensional datasets.

3.
The Fuzzy Joint Points (FJP) clustering algorithm determines the number of clusters with the maximum-gap descent method, a subjective choice that hinders the algorithm's wider application. To address this, an adaptive FJP clustering algorithm based on a valid-nearest-neighbor cluster index is proposed: the Kernels-VCN index evaluates clustering validity, so that the optimal number of clusters is determined adaptively. The feasibility of the proposed algorithm is verified on UCI and synthetic datasets.

4.
To better assess the clustering quality of unsupervised clustering algorithms and to address the failure of clustering evaluation caused by overlapping cluster centers, commonly used clustering evaluation indices are analyzed and a new internal validity index is proposed: the product of the minimum sum of squared distances between neighboring boundary points of clusters and the number of samples within each cluster is taken as the separation of the whole sample set, balancing inter-cluster separation against intra-cluster compactness. A new density computation method is also proposed: objects with a large ratio of the sample set's average distance to their own average distance are treated as high-density points, and the maximum-product method selects relatively scattered, high-density objects as initial cluster centers, strengthening the representativeness of the initial centers of the K-medoids algorithm and the stability of the algorithm. On this basis, a clustering quality evaluation model is built around the new internal index. Experimental results on the UCI and KDD CUP 99 datasets show that the new model clusters samples without prior knowledge effectively, evaluates the results reasonably, and can report the optimal number of clusters or an optimal clustering range.

5.
When handling datasets of uneven density, the density peaks clustering algorithm tends to merge low-density clusters into high-density ones or to split a high-density cluster into several sub-clusters, and errors propagate during sample-point allocation. A density peaks clustering algorithm based on relative density is proposed. Information about the sample points in each point's natural nearest neighborhood is introduced to define a new local density, from which relative density is computed. After the cluster centers are determined from the decision graph, a density factor that accounts for density differences between clusters is proposed to compute each cluster's allocation distance, and the remaining sample points are assigned according to it, enabling the clustering of datasets with different shapes and densities. Experiments on synthetic and real datasets show that the algorithm improves the FMI, ARI, and NMI metrics by about 14, 26, and 21 percentage points on average over the classical density peaks clustering algorithm and three other clustering algorithms, and that it accurately identifies cluster centers and assigns the remaining points on datasets with large inter-cluster density differences.

6.
A hierarchical co-clustering algorithm for high-order heterogeneous data (cited: 1)
High-order heterogeneous data, which carry information from multiple feature spaces, arise widely in practice. Because high-order co-clustering algorithms can fuse information from multiple feature spaces to improve clustering quality, they have become a research focus in recent years. Most existing high-order co-clustering algorithms, however, are non-hierarchical, while high-order heterogeneous data often hide hierarchical cluster structure. To mine such hidden hierarchical cluster patterns more effectively, a high-order hierarchical co-clustering algorithm (HHCC) is proposed. It uses the Goodman-Kruskal τ measure to quantify the correlation of object variables and feature variables, assigning strongly correlated objects to the same object cluster and strongly correlated features to the same feature cluster. HHCC adopts a top-down hierarchical strategy: Goodman-Kruskal τ evaluates the clustering quality of objects and features at each level, a local search optimizes the index, the number of clusters is determined automatically, the clustering of each level is obtained, and a tree-structured cluster hierarchy is formed. Experimental results show that HHCC outperforms four classical homogeneous hierarchical clustering algorithms and five existing non-hierarchical high-order co-clustering algorithms.

7.
朱二周  孙悦  张远翔  高新  马汝辉  李学俊 《软件学报》2021,32(10):3085-3103
Clustering analysis is a research focus in statistics, pattern recognition, and machine learning; effective clustering analysis uncovers the inner structure and features of a dataset. However, because the task is unsupervised, existing clustering methods still suffer from unstable results and fail to correctly cluster datasets with diverse structures. To address these problems, first, a hybrid clustering algorithm, K-means-AHC, is proposed by combining the ideas of K-means and agglomerative hierarchical clustering. Second, using the idea of knee-point detection, a new clustering validity index, DAS (difference of average synthesis degree), is proposed to evaluate the quality of the results of K-means-AHC. Finally, combining K-means-AHC with the DAS index, an effective method is designed for finding a dataset's optimal number of clusters and optimal partition. Experiments applying K-means-AHC to datasets of various structures show that the algorithm improves the accuracy of clustering analysis without much extra time overhead, and that the new DAS index evaluates clustering results better than existing, commonly used validity indices.
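The knee-point detection idea can be illustrated with a short sketch. This uses a generic second-difference elbow heuristic on a quality-versus-k curve; it is not the paper's DAS index, and the `sse` values below are made-up illustration data:

```python
def knee_point(values):
    """Index of the knee of a decreasing quality curve, taken as the
    point with the largest second difference (a generic elbow
    heuristic, not the paper's DAS construction)."""
    if len(values) < 3:
        return 0
    second_diff = [values[i - 1] - 2 * values[i] + values[i + 1]
                   for i in range(1, len(values) - 1)]
    return 1 + second_diff.index(max(second_diff))

# hypothetical within-cluster error for k = 1..6 on data with 3 true clusters
sse = [100.0, 45.0, 12.0, 10.0, 9.0, 8.5]
best_k = knee_point(sse) + 1  # list index 0 corresponds to k = 1
```

The curve drops steeply until the true cluster count is reached and flattens afterwards; the largest second difference marks that turning point.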

8.
Rough K-means clustering and its derivatives require the number of clusters to be given in advance, and their random selection of initial cluster centers lowers partitioning accuracy in regions where clusters overlap. To address these problems, a rough fuzzy K-means clustering algorithm based on a hybrid measure and adaptive cluster adjustment is proposed. When computing the degree to which a data object in the boundary region belongs to different clusters, a hybrid measure combining local density and distance is used, and a strategy that adaptively adjusts the number of clusters yields the optimal cluster count. The midpoint of the two closest samples in a dense region is chosen as an initial cluster center, nearby objects whose local density exceeds the average density are assigned to that cluster, and the remaining initial centers are then selected, making the choice of initial centers more reasonable. Experiments on synthetic and UCI benchmark datasets show that the algorithm is adaptive and achieves better clustering accuracy on spherical datasets with severely overlapping clusters.

9.
To determine the optimal number of clusters of a dataset more effectively, a new algorithm is proposed. Borrowing the idea of hierarchical clustering, it generates all candidate partitions in a single pass, then selects the best partition according to a validity index, from which the optimal number of clusters is obtained. Theoretical analysis and experimental results demonstrate that the algorithm performs well.

10.
A clustering algorithm based on clonal selection (cited: 3)
罗印升  李人厚  张维玺 《控制与决策》2005,20(11):1261-1264
Combining the clonal selection principle with a typical partitioning clustering method, a clonal selection clustering algorithm is proposed. The algorithm can cluster datasets of arbitrary shape, automatically determines the number of clusters and produces descriptive information for each cluster, has low computational cost and easily set parameters, and suits datasets with real-valued continuous attributes. Experiments on both simulated and benchmark datasets show that the algorithm is effective.

11.
Clustering multi-dense, large-scale, high-dimensional numeric datasets is a challenging task due to the high time complexity of most clustering algorithms. Nowadays, data collection tools produce large amounts of data, so fast algorithms are a vital requirement for clustering such data. In this paper, a fast clustering algorithm called Dimension-based Partitioning and Merging (DPM) is proposed. In DPM, data is first partitioned into small dense volumes during the successive processing of dataset dimensions. Then, noise is filtered out using the dimensional densities of the generated partitions. Finally, a merging process constructs clusters based on partition-boundary data samples. The DPM algorithm automatically detects the number of clusters based on three insensitive tuning parameters, which eases its usage. Performance evaluation of the proposed algorithm on different datasets shows its speed and accuracy compared with competing clustering algorithms.

12.
An adaptive soft subspace clustering algorithm (cited: 6)
陈黎飞  郭躬德  姜青山 《软件学报》2010,21(10):2513-2523
Soft subspace clustering is an important technique for high-dimensional data analysis. Existing algorithms usually require the user to set several global key parameters in advance and do not optimize the subspaces. A new objective function for soft subspace clustering is proposed that minimizes the intra-cluster compactness of subspace clusters while maximizing the projected subspace in which each cluster lies. A new local feature-weighting scheme is derived from it, on which an adaptive k-means-type soft subspace clustering algorithm is built. During clustering, the algorithm dynamically computes its optimal parameters from the dataset and its current partition. Experimental results on real applications and synthetic datasets show that the algorithm substantially improves clustering accuracy and the stability of the results.

13.
Clustering methods are a powerful tool for discovering patterns in a given data set by organizing data into subsets of objects that share common features. Motivated by the independent use of several different partition criteria and by theoretical and empirical analysis of some of their properties, in this paper we introduce an incremental nested partition method that combines these partition criteria to find the inner structure of static and dynamic datasets. We prove that nesting relationships hold between the partitions obtained from these partition criteria, and we rigorously study the method's sensitivity when a new object arrives in the dataset. Our algorithm exploits these mathematical properties to obtain the hierarchy of clusterings. Moreover, we carry out a theoretical and experimental comparison of our method with classical hierarchical clustering methods such as single-link and complete-link, as well as with more recently introduced methods. Experimental results on databases from the UCI repository and the AFP and TDT2 news collections show the usefulness and capability of our method to reveal different levels of information hidden in datasets.

14.
A new method for determining the optimal number of clusters for the K-means algorithm (cited: 8)
The K-means clustering algorithm clusters a dataset given a fixed number of clusters k and randomly selected initial cluster centers. In practice, k is usually unknown in advance, and randomly chosen initial centers tend to make the clustering unstable. A new method for determining the optimal number of clusters for K-means is proposed: by setting the parameters of the Affinity Propagation (AP) algorithm, the number of clusters AP produces is used as the upper bound kmax of the search range; the Silhouette validity index is adopted, the initial cluster centers are set using the idea of the max-min distance algorithm, and the clustering quality is analyzed to determine the optimal number of clusters. Simulation experiments and analysis verify the feasibility of the scheme.

15.
Clustering has been widely used in different fields of science, technology, social science, etc. Naturally, clusters in a dataset have arbitrary (non-convex) shapes. One important class of clustering methods is distance-based. However, distance-based clustering methods usually find clusters of convex shapes. Classical single-link is a distance-based clustering method that can find arbitrarily shaped clusters. It scans the dataset multiple times and has a time requirement of O(n²), where n is the size of the dataset, which is potentially a severe problem for a large dataset. In this paper, we propose a distance-based clustering method, l-SL, to find arbitrarily shaped clusters in a large dataset. In this method, the leaders clustering method is first applied to the dataset to derive a set of leaders; subsequently, the single-link method (with a distance stopping criterion) is applied to the leaders set to obtain the final clustering. The l-SL method produces a flat clustering. It is considerably faster than the single-link method applied to the dataset directly. The clustering result of l-SL may deviate nominally from the final clustering of the single-link method (with the distance stopping criterion) applied to the dataset directly. To compensate for this deviation, an improvement method is also proposed. Experiments are conducted with standard real-world and synthetic datasets. Experimental results show the effectiveness of the proposed clustering methods for large datasets.
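The first stage of l-SL, the leaders method, is a single scan with a distance threshold. A minimal sketch, with `tau` as the user-chosen threshold and illustrative data:

```python
import math

def leaders(points, tau):
    """Single-scan leaders clustering: each point joins the first leader
    within distance tau, otherwise it becomes a new leader. Returns the
    leaders and each point's leader index."""
    leads, assign = [], []
    for p in points:
        for i, l in enumerate(leads):
            if math.dist(p, l) <= tau:
                assign.append(i)
                break
        else:                     # no leader close enough: p leads a new group
            leads.append(p)
            assign.append(len(leads) - 1)
    return leads, assign

pts = [(0, 0), (0.2, 0), (0.1, 0.1), (9, 9), (9.1, 9.0)]
leads, assign = leaders(pts, tau=0.5)
```

Single-link is then applied to `leads` only, which is far smaller than the dataset, giving the speedup the abstract claims.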

16.
The existing Sync algorithm has high time complexity, which severely limits it on large-sample datasets. A fast synchronization clustering algorithm for large samples, FCSLS (Fast Clustering by Synchronization on Large Sample), is proposed. First, a sampling method based on kernel density estimation (KDE) compresses the large-sample data; synchronization clustering is then performed on the compressed set, with the optimal number of clusters found automatically via the Davies-Bouldin index; finally, the remaining large-scale data are clustered to obtain the final result. Experiments on synthetic and real UCI datasets show that FCSLS finds clusters of arbitrary shape, density, and size on large-scale datasets without a preset number of clusters. Compared with fast compression methods based on reduced-set density estimation and center-constrained minimum enclosing balls, FCSLS greatly shortens the running time of synchronization clustering without loss of clustering accuracy.

17.
Classical clustering methods, such as partitioning and hierarchical clustering algorithms, often fail to deliver satisfactory results, given clusters of arbitrary shapes. Motivated by a clustering validity index based on inter-cluster and intra-cluster density, we propose that the clustering validity index be used not only globally to find optimal partitions of input data, but also locally to determine which two neighboring clusters are to be merged in a hierarchical clustering of Self-Organizing Map (SOM). A new two-level SOM-based clustering algorithm using the clustering validity index is also proposed. Experimental results on synthetic and real data sets demonstrate that the proposed clustering algorithm is able to cluster data in a better way than classical clustering algorithms on an SOM.  相似文献   

18.
Cluster ensemble first generates a large library of different clustering solutions and then combines them into a more accurate consensus clustering. It is commonly accepted that for a cluster ensemble to work well, the member partitions should differ from each other while the quality of each partition remains at an acceptable level. Many strategies have been used to generate the base partitions of a cluster ensemble. As in ensemble classification, many studies have focused on generating different partitions of the original dataset, i.e., clustering on different subsets (e.g., obtained by random sampling) or clustering in different feature spaces (e.g., obtained by random projection). However, little attention has been paid to the diversity and quality of the partitions these two approaches generate. In this paper, we propose a novel cluster generation method based on random sampling, which uses the nearest neighbor method to fill in the category information of the missing samples (abbreviated as RS-NN). We evaluate its performance in comparison with a k-means ensemble, a typical random projection method (Random Feature Subset, abbreviated as FS), and another random sampling method (Random Sampling based on Nearest Centroid, abbreviated as RS-NC). Experimental results indicate that the FS method always generates more diverse partitions, while the RS-NC method generates high-quality partitions. Our proposed method, RS-NN, generates base partitions with a good balance between quality and diversity and achieves significant improvement over the alternative methods. Furthermore, to introduce more diversity, we propose a dual random sampling method that combines the RS-NN and FS methods. The proposed method achieves higher diversity with good quality on most datasets.
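A common way to combine a library of base partitions is the co-association matrix, on which a final (e.g., hierarchical) clustering is run. This is a standard consensus construction, not necessarily the exact combination step used with RS-NN; the base partitions below are illustrative:

```python
def co_association(partitions):
    """Co-association matrix of a cluster ensemble: entry (i, j) is the
    fraction of base partitions that place samples i and j in the same
    cluster."""
    n = len(partitions[0])
    m = len(partitions)
    return [[sum(1 for p in partitions if p[i] == p[j]) / m
             for j in range(n)] for i in range(n)]

base = [
    [0, 0, 1, 1],   # three base partitions of four samples
    [0, 0, 1, 1],
    [0, 1, 1, 1],
]
ca = co_association(base)
```

Pairs that most base partitions agree on get entries near 1; diverse yet high-quality base partitions keep the matrix informative, which is the balance the abstract argues for.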
