Similar Literature
20 similar documents retrieved (search time: 203 ms)
1.
Traditional clustering algorithms are ill-suited to massive, high-dimensional data. To cluster massive data in a cloud computing environment by exploiting the parallel computing power of a cluster system, this paper presents a clustering-ensemble algorithm based on fractal dimension for cloud environments. The algorithm first improves a fractal-dimension-based clustering algorithm to make it better suited to parallel computation, and its output clusters serve as the initial ensemble members; these members are then fused with a voting-based consensus strategy. Finally, the fractal-dimension-based clustering ensemble is parallelized in the cloud computing environment. Comparative experiments on UCI data sets verify the effectiveness of the algorithm.
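The fractal-dimension computations these abstracts rely on usually reduce to a box-counting estimate: count the grid cells occupied at several resolutions and fit the slope of log N(s) against log s. Below is a minimal, self-contained sketch of that estimate; the function name and scale choices are illustrative, not taken from the paper.

```python
import math

def box_counting_dimension(points, scales=(2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of points in [0, 1]^d.

    For each grid resolution s, count the occupied cells N(s); the dimension
    is the least-squares slope of log N(s) versus log s.
    """
    logs, logn = [], []
    for s in scales:
        # map each coordinate to a cell index in [0, s-1]
        occupied = {tuple(min(int(c * s), s - 1) for c in p) for p in points}
        logs.append(math.log(s))
        logn.append(math.log(len(occupied)))
    # least-squares slope of log N against log s
    n = len(scales)
    mx, my = sum(logs) / n, sum(logn) / n
    num = sum((x - mx) * (y - my) for x, y in zip(logs, logn))
    den = sum((x - mx) ** 2 for x in logs)
    return num / den
```

Points along a line give a dimension near 1, a filled square near 2, which is the signal fractal clustering exploits to decide whether adding a point changes a cluster's self-similarity.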

2.
A two-stage unsupervised sequential forward fractal attribute reduction algorithm
Using the multifractal dimension of individual attributes and the change in fractal dimension after merging attributes as the measure of attribute relevance, and taking the difference between the fractal dimension of a candidate attribute subset and that of the full attribute set as the criterion for evaluating a subset, the fractal attribute reduction problem is transformed into a search for a maximally independent fractal attribute subset with a bounded number of attributes. To cope with the combinatorial explosion of searching a high-dimensional attribute space, a two-stage sequential forward unsupervised fractal attribute reduction algorithm combining relevance analysis with redundancy analysis is designed. The time and space complexity of the algorithm is analyzed, and experiments on standard and synthetic data sets show that it obtains a good attribute subset at a comparatively low cost of fractal dimension computation.

3.
A clustering algorithm based on grids and fractal dimension
This paper proposes a clustering algorithm based on grids and fractal dimension. It combines the advantages of grid clustering and fractal clustering, overcoming the reduced clustering quality of traditional grid-based algorithms and mitigating the high running time of fractal clustering. The algorithm first derives initial clusters from grid densities, then uses fractal ideas to assign the remaining, unassigned grid cells to clusters in turn. Experimental results show that it can discover clusters of arbitrary shape that are not spatially adjacent, and that it scales to massive, high-dimensional data.

4.

To determine the initial cluster centers for k-means and similar algorithms, the number of sample-density statistics intervals on each dimension is first derived from the total sample size and the length of the value range, and the mean of the samples in each interval whose density peak satisfies a screening condition is taken as a candidate initial center. Next, a candidate-center relation tree is built from the mapping of density-peak intervals onto the dimensions, and the max-min distance algorithm is applied to obtain the initial cluster centers. Finally, to determine the optimal number of clusters, a clustering validity function is built from the within-cluster sample density and the cluster density. Experimental results on synthetic and UCI data sets demonstrate the effectiveness of the proposed algorithm.
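The max-min distance rule mentioned above can be sketched as follows. This is a generic illustration of the rule, not the paper's full pipeline; seeding from the farthest pair is one common convention.

```python
def max_min_centers(points, k):
    """Pick k initial centers with the max-min distance rule: start from the
    two farthest points, then repeatedly add the point whose distance to its
    nearest already-chosen center is largest."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # seed with the farthest pair of points
    i, j = max(((i, j) for i in range(len(points))
                for j in range(i + 1, len(points))),
               key=lambda ij: d2(points[ij[0]], points[ij[1]]))
    centers = [points[i], points[j]]
    while len(centers) < k:
        # the point farthest from its nearest chosen center
        nxt = max(points, key=lambda p: min(d2(p, c) for c in centers))
        centers.append(nxt)
    return centers
```

On well-separated blobs the rule picks one center per blob, which is what makes it a popular deterministic initializer.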


5.
A new fractal clustering algorithm for data streams
This paper proposes a fractal clustering algorithm for data streams that measures the self-similarity between a data point and a cluster by the degree of change in the cluster's fractal dimension; even under noise it can discover arbitrarily shaped clusters that reflect the natural grouping of the stream. Experiments show that the resulting FClustream algorithm is an efficient data-stream clustering algorithm.

6.
廖纪勇  吴晟  刘爱莲 《控制与决策》2021,36(12):3083-3090
Choosing reasonable initial cluster centers is a precondition for correct clustering. To address the problems that the existing K-means algorithm selects centers randomly and cannot handle outliers, an improved K-means clustering algorithm is proposed that selects initial centers by a dissimilarity measure. The algorithm builds a dissimilarity matrix from the pairwise dissimilarities of the data objects and defines two measures, mean dissimilarity and overall dissimilarity, which are used to determine the initial cluster centers; in the subsequent iterations the median of the points in each cluster replaces the mean, eliminating the influence of outliers on clustering accuracy. In addition, the proposed algorithm produces the same result on every run and is robust in both initialization and outlier handling. Experiments on synthetic and UCI data sets show that, compared with three classical clustering algorithms and two K-means variants with improved center initialization, the proposed algorithm achieves better clustering performance.
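The median-instead-of-mean update described above is the core of a K-medians-style iteration. A minimal sketch of one assignment-and-update step, with illustrative names, might look like this:

```python
def k_medians_step(points, centers):
    """One assignment + update step of K-medians: assign each point to its
    nearest center (L1 distance), then move each center to the coordinate-wise
    median of its points, which is more robust to outliers than the mean."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    clusters = [[] for _ in centers]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
        clusters[nearest].append(p)

    def median(vals):
        s = sorted(vals)
        return s[len(s) // 2]

    # empty clusters keep their old center
    return [tuple(median([p[d] for p in cl]) for d in range(len(c))) if cl else c
            for cl, c in zip(clusters, centers)]
```

A single far-away outlier shifts a mean arbitrarily but leaves the coordinate-wise median at a central point of the cluster, which is the robustness property the abstract appeals to.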

7.
To address the inability of traditional clustering algorithms to handle the multi-view, high-dimensional data found in big data, an intelligent weighted K-means clustering algorithm based on chaotic particle swarm optimization is proposed. A coupling term between clusters is introduced into the clustering model to enlarge inter-cluster similarity. To remove the sensitivity to initial cluster centers, chaotic particle swarm optimization performs a global search for the optimal initial centers, view weights, and feature weights, and a precise perturbation strategy is introduced to improve its search performance. Experiments on the Apache Spark and Single Node platforms verify that the method clusters well on complex data sets with many views and high dimensionality.

8.
Efficient computation of the fractal dimension is a key problem in applying fractal theory in practice, and the high time and space complexity of traditional methods has become a major bottleneck for fractal techniques. Borrowing the idea of Z-ordering indexes, an improved multifractal dimension algorithm, ZBMFD (Z-ordering Based MultiFractal Dimension algorithm), is designed and implemented. It scans the data set once to build the bottom-level grid structure, then computes the fractal dimension by dynamically rewriting grid-coordinate codes to map lower-level grids to higher-level ones recursively. Experiments on real data sets show that the algorithm keeps O(N log N) time complexity while reducing the space complexity of fractal dimension computation, with accuracy comparable to existing algorithms, thereby extending the applicability of fractal techniques to high-dimensional and massive data.
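The grid-coordinate recoding that ZBMFD builds on is the classic Morton (Z-order) bit interleaving, under which right-shifting a 2-D cell code by 2 bits yields the code of the enclosing cell one grid level up, so a bottom-level grid can be coarsened without rescanning the data. A small illustrative sketch, not the paper's implementation:

```python
def morton2(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y) into a Z-order
    (Morton) code: bit i of x goes to position 2i, bit i of y to 2i+1."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code
```

The key invariant is `morton2(x, y) >> 2 == morton2(x >> 1, y >> 1)`: halving the grid resolution is a plain bit shift on the codes.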

9.
Cluster analysis is one of the most common techniques in data mining; data scale, dimensionality, and sparsity each constrain it in different ways. This paper proposes an effective clustering method for high-dimensional sparse data. It defines sparse similarity, the similarity of an equivalence relation, and a generalized equivalence relation. Initial equivalence classes are formed from the sparse similarity between objects and the equivalence-relation principle, and the initial relation is then revised using the similarity of the equivalence relation, making the final clustering more reasonable. The clustering process does not depend on the input order of the samples, and effective compression of high-dimensional sparse data improves efficiency as the dimensionality grows, making the algorithm well suited to clustering high-dimensional sparse data.

10.
An optimized grid-based clustering algorithm
Clustering is an important research topic in data mining. Compared with other algorithms, grid-based clustering can efficiently process massive low-dimensional data. However, because the number of cells grows exponentially with the dimensionality, high-dimensional data sets produce too many cells and efficiency drops. This paper designs a new grid-based clustering algorithm on top of the CD-Tree whose efficiency far exceeds that of traditional grid-based algorithms, together with a pruning optimization strategy that improves it further. Experiments show that, compared with traditional clustering algorithms, the CD-Tree-based algorithm scales significantly better in both data set size and dimensionality.
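A bare-bones version of the grid-clustering idea (hash points into cells, keep dense cells, flood-fill edge-adjacent dense cells into clusters) can be sketched as below; the CD-Tree structure and the pruning strategy of the paper are not reproduced here.

```python
from collections import defaultdict, deque

def grid_cluster(points, cell=1.0, min_pts=2):
    """Sketch of 2-D grid-based clustering: bucket points into cells, keep
    cells holding at least min_pts points, and flood-fill edge-adjacent
    dense cells into clusters."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    dense = {c for c, pts in cells.items() if len(pts) >= min_pts}

    seen, clusters = set(), []
    for start in dense:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            c = queue.popleft()
            comp.append(c)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (c[0] + dx, c[1] + dy)
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append([p for c in comp for p in cells[c]])
    return clusters
```

The exponential cell blow-up the abstract mentions is visible here: the `cells` dictionary would need one key per occupied cell in every dimension combination, which is exactly what tree-structured indexes like the CD-Tree are designed to compress.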

11.
In this paper, we present an agglomerative fuzzy k-means clustering algorithm for numerical data, an extension of the standard fuzzy k-means algorithm that adds a penalty term to the objective function to make the clustering process insensitive to the initial cluster centers. The new algorithm produces more consistent clustering results from different sets of initial cluster centers. Combined with cluster-validation techniques, it can determine the number of clusters in a data set, a well-known problem in k-means clustering. Experimental results on synthetic data sets (2 to 5 dimensions, 500 to 5000 objects, and 3 to 7 clusters), the BIRCH two-dimensional data set of 20000 objects and 100 clusters, and the UCI WINE data set of 178 objects, 17 dimensions, and 3 clusters demonstrate the effectiveness of the new algorithm in producing consistent clustering results and determining the correct number of clusters in different data sets, some of which have overlapping inherent clusters.
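For reference, the standard fuzzy k-means membership update that the paper extends looks like the following sketch; the paper's penalty term is omitted and the names are illustrative.

```python
def fuzzy_memberships(point, centers, m=2.0):
    """Standard fuzzy k-means membership update for one point:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)), where d_i is the distance to
    center i and m > 1 is the fuzzifier. A point that coincides with a
    center gets full membership in that cluster."""
    d = [sum((x - y) ** 2 for x, y in zip(point, c)) ** 0.5 for c in centers]
    if 0.0 in d:
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(d)))
            for i in range(len(d))]
```

Memberships always sum to 1 across clusters; a point equidistant from two centers receives 0.5 in each, which is the softness that makes the objective amenable to the penalty-based agglomeration the abstract describes.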

12.
Hierarchical co-clustering: off-line and incremental approaches
Clustering data is challenging for two reasons in particular. First, the dimensionality of the data is often very high, which makes cluster interpretation hard; moreover, with high-dimensional data the classic metrics fail to identify the real similarities between objects. Second, the observed phenomena evolve, so the datasets accumulate over time. In this paper we propose solutions to both problems. To tackle high dimensionality, we apply a co-clustering approach to the dataset that stores the occurrence of features in the observed objects. Co-clustering computes a partition of objects and a partition of features simultaneously. The novelty of our co-clustering solution is that it arranges the clusters hierarchically, in two coupled hierarchies: one on the objects and one on the features. The clusters at a given level of one hierarchy are coupled with the clusters at the same level of the other and together form the co-clusters, so each cluster in one hierarchy provides insights on the clusters of the other. A further novelty is that the number of clusters is potentially unlimited; nevertheless, the produced hierarchies remain compact, and therefore more readable, because our method allows multiple splits of a cluster at the lower level. As for the second challenge, accumulation makes the datasets intractably large over time, and an incremental solution relieves the issue by partitioning the problem. We therefore introduce an incremental version of our hierarchical co-clustering algorithm: it starts from an intermediate solution computed on the previous version of the data and updates the co-clustering results considering only the newly added block of data. This speeds up the computation relative to the original approach, which would recompute the result on the whole dataset, and the incremental algorithm returns approximately the same answer as the original version while saving much computational load. We validate the incremental approach on several high-dimensional datasets and compare it carefully with both the original version of our algorithm and state-of-the-art competitors. The results open the way to a novel usage of co-clustering algorithms in which it is advantageous to partition the data into several blocks and process them incrementally, "incorporating" data gradually into an ongoing co-clustering solution.

13.
Rough K-means clustering and its derivatives require the number of clusters to be given in advance and select initial centers at random, which lowers the accuracy of partitioning data in regions where clusters overlap. This paper proposes a rough-fuzzy K-means clustering algorithm based on a hybrid measure and adaptive cluster adjustment. When computing the degree to which a boundary-region object belongs to different clusters, a hybrid measure combining local density and distance is used, and a strategy that adaptively adjusts the number of clusters yields the optimal cluster count. The midpoint of the two closest samples in a dense region of data objects is chosen as an initial cluster center, nearby objects whose local density exceeds the average density are assigned to that cluster, and the remaining initial centers are then selected, making center initialization more reasonable. Experiments on synthetic data sets and UCI benchmark data sets show that the algorithm adapts well to spherical clusters with heavy overlap and achieves superior clustering accuracy.

14.
This paper proposes a new method for estimating the true number of clusters and initial cluster centers in a dataset with many clusters. The observation points are assigned to the data space to observe the clusters through the distributions of the distances between the observation points and the objects in the dataset. A Gamma Mixture Model (GMM) is built from a distance distribution to partition the dataset into subsets, and a GMM tree is obtained by recursively partitioning the dataset. From the leaves of the GMM tree, a set of initial cluster centers are identified and the true number of clusters is estimated. This method is implemented in the new GMM-Tree algorithm. Two GMM forest algorithms are further proposed to ensemble multiple GMM trees to handle high dimensional data with many clusters. The GMM-P-Forest algorithm builds GMM trees in parallel, whereas the GMM-S-Forest algorithm uses a sequential process to build a GMM forest. Experiments were conducted on 32 synthetic datasets and 15 real datasets to evaluate the performance of the new algorithms. The results have shown that the proposed algorithms outperformed the existing popular methods: Silhouette, Elbow and Gap Statistic, and the recent method I-nice in estimating the true number of clusters from high dimensional complex data.

15.
Rough k-means clustering describes uncertainty by assigning some objects to more than one cluster, and a rough cluster quality index based on decision theory is applicable to the evaluation of rough clustering. In this paper we analyze rough k-means clustering with respect to the selection of the threshold, the risk of assigning an object, and the uncertainty of objects. The analysis shows that clusters presented as interval sets with a single pair of lower and upper approximations, as in rough k-means clustering, are not adequate to describe clusters. This paper proposes an interval-set clustering based on decision theory. The lower and upper approximations in the proposed algorithm are hierarchical, constructed as outer-level and inner-level approximations. The uncertainty of objects in the outer-level upper approximation is described by the assignment of objects among different clusters; accordingly, the ambiguity of objects in the inner-level upper approximation is represented by local uniform factors of objects. In addition, interval-set clustering can be improved to obtain a satisfactory clustering result, with the optimal number of clusters and optimal parameter values, by taking advantage of the rough cluster quality index in the evaluation of clustering. The experimental results on synthetic and standard data demonstrate how to construct clusters with satisfactory lower and upper approximations in the proposed algorithm, and an experiment with promotional-campaign retail data illustrates the usefulness of interval-set clustering for improving rough k-means clustering results.
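The interval-set idea of lower and upper approximations can be illustrated with the usual rough k-means assignment rule: a point whose second-closest center is almost as near as its closest one is placed only in upper approximations. A hedged sketch with an assumed ratio threshold `epsilon` (the parameter name is illustrative):

```python
def rough_assign(point, centers, epsilon=1.25):
    """Rough k-means membership sketch: a point belongs to the lower
    approximation of its nearest cluster when no other center lies within a
    factor epsilon of the nearest distance; otherwise it joins the upper
    approximations of all clusters that are roughly as close."""
    d = [sum((x - y) ** 2 for x, y in zip(point, c)) ** 0.5 for c in centers]
    nearest = min(range(len(centers)), key=lambda i: d[i])
    close = [i for i in range(len(centers)) if d[i] <= epsilon * d[nearest]]
    if len(close) == 1:
        return {"lower": nearest, "upper": [nearest]}
    # boundary object: no lower approximation, several upper approximations
    return {"lower": None, "upper": close}
```

The paper's criticism is that this single-level lower/upper split is too coarse, which motivates its two-level (outer/inner) approximation hierarchy.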

16.
Evaluating inter-cluster relations plays an important role in determining key unknown information in many real-world settings and can be widely applied in criminal investigation, phylogenetic trees, metallurgy, biological grafting, and other fields. This paper proposes a method called "mode pattern + mutual information" for ranking inter-cluster relations. It uses mode patterns to find representative objects within each cluster and mutual information to measure the degree of association between clusters. Because the method judges how closely clusters are connected through inter-cluster relations, it differs from traditional classification and clustering. Experiments on graphics and cancer-diagnosis data demonstrate the effectiveness of the algorithm.
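The mutual-information measure used to score inter-cluster association is the standard discrete I(X;Y); a minimal sketch over paired label sequences:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X;Y) in bits between two paired label
    sequences: sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) p(y)))."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Identical sequences give I(X;Y) equal to the entropy of the sequence (maximal association), and independent sequences give 0, which is what makes MI usable as a ranking score between clusters' representative objects.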

17.
18.
We introduce a new approach for finding overlapping clusters given pairwise similarities of objects. In particular, we relax the problem of correlation clustering by allowing an object to be assigned to more than one cluster. At the core of our approach is an optimization problem in which each data point is mapped to a small set of labels, representing membership in different clusters. The objective is to find a mapping so that the given similarities between objects agree as much as possible with similarities taken over their label sets. The number of labels can vary across objects. To define a similarity between label sets, we consider two measures: (i) a 0–1 function indicating whether the two label sets have non-zero intersection and (ii) the Jaccard coefficient between the two label sets. The algorithm we propose is an iterative local-search method. The definitions of label set similarity give rise to two non-trivial optimization problems, which, for the measures of set-intersection and Jaccard, we solve using a greedy strategy and non-negative least squares, respectively. We also develop a distributed version of our algorithm based on the BSP model and implement it using a Pregel framework. Our algorithm uses as input pairwise similarities of objects and can thus be applied when clustering structured objects for which feature vectors are not available. As a proof of concept, we apply our algorithms on three different and complex application domains: trajectories, amino-acid sequences, and textual documents.
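The two label-set similarity measures the abstract names, the 0–1 intersection indicator and the Jaccard coefficient, are simple to state in code:

```python
def jaccard(a, b):
    """Jaccard coefficient |a ∩ b| / |a ∪ b| between two label sets;
    0 when both sets are empty, by convention."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def intersect01(a, b):
    """0-1 set-intersection similarity: 1 iff the label sets share a label."""
    return 1.0 if set(a) & set(b) else 0.0
```

The local search then tries to choose label sets so that these set-level similarities track the given pairwise object similarities as closely as possible.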

19.
张妨妨  钱雪忠 《计算机应用》2012,32(9):2476-2479
To overcome the defects that the traditional GK clustering algorithm cannot determine the number of clusters automatically and is sensitive to the initial cluster centers, an improved GK clustering algorithm is proposed. The algorithm first determines the optimal number of clusters with a new validity index based on a weighted sum of between-cluster separation and within-cluster compactness; it then determines the initial cluster centers using an improved entropy-based clustering idea; finally, it clusters with the determined cluster count and the new centers. Experimental results show that the new index accurately identifies the optimal number of clusters on data sets with overlapping classes, and the improved algorithm achieves higher clustering accuracy.

20.
张琳  陈燕  汲业  张金松 《计算机应用研究》2011,28(11):4071-4073
The traditional K-means algorithm must be given the number of clusters in advance and is sensitive to the choice of initial cluster centers. Using a density-based idea, outliers are excluded by setting an Eps neighborhood together with the minimum number of objects, minpts, that it must contain, and the non-duplicated core points are taken as initial cluster centers. The ratio of within-cluster distance to between-cluster distance serves as the evaluation criterion, and the cluster count that minimizes this criterion is taken as the optimal number of clusters; these improvements effectively overcome the shortcomings of K-means. Several examples illustrate the application of the improved algorithm and show that it clusters more accurately than the original and better achieves compact within-cluster and well-separated between-cluster structure.
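The Eps/minpts screening described above is the familiar core-point test from density-based clustering; a brute-force sketch, quadratic in the number of points and for illustration only:

```python
def core_points(points, eps, minpts):
    """Density-based screening: a point is a core point when its
    eps-neighborhood (itself included) contains at least minpts objects;
    everything else is treated as a potential outlier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [p for p in points
            if sum(1 for q in points if dist(p, q) <= eps) >= minpts]
```

Isolated points fail the test and are excluded from initialization, which is how the improved algorithm keeps outliers from ever becoming initial K-means centers.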
