Similar Documents
 20 similar documents found (search time: 203 ms)
1.
To address the problem of choosing the value of K in the unsupervised K-means clustering algorithm, a method is proposed that uses a genetic algorithm (GA) to optimize the parameter K. Experiments on seven data sets from the UCI machine learning repository show that the method is fairly effective.

2.
Using a Genetic Algorithm for K-Value Selection in the K-means Clustering Algorithm   (Cited by 6; self-citations: 0, by others: 6)
杨芳, 湛燕. 《微机发展》 (Microcomputer Development), 2003, 13(1): 25-26, 29
To address the problem of choosing the value of K in the unsupervised K-means clustering algorithm, a method is proposed that uses a genetic algorithm (GA) to optimize the parameter K. Experiments on seven data sets from the UCI machine learning repository show that the method is fairly effective.
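The entry gives no implementation details, but the idea can be sketched. Below is a minimal, hypothetical illustration (not the authors' code): a tiny genetic algorithm evolves a population of candidate K values, scoring each by the mean silhouette coefficient of a K-means run. The function names, the farthest-point seeding inside `kmeans`, and the silhouette fitness are all assumptions made for the sketch.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # deterministic farthest-point initialization keeps seeds spread out
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):  # standard Lloyd iterations
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def silhouette(X, labels):
    # mean silhouette coefficient, used here as the GA fitness
    if len(set(labels)) < 2:
        return -1.0
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    n, s = len(X), []
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean() if same.sum() > 1 else 0.0
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

def ga_select_k(X, k_min=2, k_max=8, pop=6, gens=5, seed=0):
    # evolve integer K values: keep the fitter half, mutate it by +/-1
    rng = np.random.default_rng(seed)
    population = rng.integers(k_min, k_max + 1, size=pop)
    best_k, best_fit = None, -2.0
    for _ in range(gens):
        fitness = np.array([silhouette(X, kmeans(X, k)) for k in population])
        if fitness.max() > best_fit:
            best_fit, best_k = fitness.max(), int(population[fitness.argmax()])
        parents = population[fitness.argsort()[::-1][:pop // 2]]
        children = np.clip(parents + rng.integers(-1, 2, size=len(parents)),
                           k_min, k_max)
        population = np.concatenate([parents, children])
    return best_k
```

For well-separated data the silhouette fitness peaks at the true cluster count, so the GA's search over K converges toward it without the user fixing K in advance.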

3.
Determining the number of clusters is a challenging open problem in cluster analysis. Existing methods for determining the cluster count mostly select initial cluster centers at random, which makes the number of iterations in the clustering process unstable. To address this, an algorithm for determining the number of clusters in mixed data is proposed that uses prior information carried by class labels to optimize the initial cluster centers. Experiments show that the algorithm is effective.

4.
A Multi-Center Clustering Algorithm Based on the Maximum-Minimum Distance Method   (Cited by 19; self-citations: 0, by others: 19)
周涓, 熊忠阳, 张玉芳, 任芳. 《计算机应用》 (Journal of Computer Applications), 2006, 26(6): 1425-1427
To overcome the shortcomings of the k-means algorithm, a new multi-center clustering algorithm is proposed. A two-stage maximum-minimum distance method searches for the best initial cluster centers; the original data set is split into small sub-clusters, which a merging algorithm then combines into the final clusters, so that several cluster centers jointly represent an elongated or otherwise large cluster. Simulation experiments show that the algorithm can determine the number of initial seeds automatically and cluster irregularly shaped data sets effectively, with clustering performance significantly better than k-means.
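The maximum-minimum distance seeding this entry builds on can be sketched in a few lines. This is a generic, hypothetical rendering of the classical max-min rule, not the paper's two-stage algorithm: seeds are added greedily as long as the best remaining candidate is farther from all existing seeds than a threshold `theta` times the distance between the first two seeds, so the number of seeds falls out automatically.

```python
import numpy as np

def maxmin_centers(X, theta=0.5):
    # first seed: the point farthest from the data mean (one common choice)
    centers = [X[np.linalg.norm(X - X.mean(axis=0), axis=1).argmax()]]
    # second seed: the point farthest from the first
    centers.append(X[np.linalg.norm(X - centers[0], axis=1).argmax()])
    base = np.linalg.norm(centers[1] - centers[0])
    while True:
        # distance from every point to its nearest existing seed
        dmin = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        i = dmin.argmax()
        # stop when the best remaining candidate is no longer "far enough"
        if dmin[i] <= theta * base:
            break
        centers.append(X[i])
    return np.array(centers)
```

On data with compact, separated groups the loop stops once every group holds a seed, which is exactly the "intelligently determine the seed count" behaviour the abstract describes.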

5.
Experiments on seven data sets were run with combinations of three clustering algorithms and three similarity measures to assess how these factors affect clustering performance. To simplify the choice of clustering parameters, a cluster-center selection algorithm for protein structure prediction is proposed. The results show that among the three similarity measures, RMSD yields the best clustering, while among the three clustering algorithms, SPICKER performs best, followed by the AP clustering algorithm.

6.
An Optimized Algorithm for Density-Based Selection of K-means Cluster Centers   (Cited by 2; self-citations: 0, by others: 2)
To address the traditional K-means algorithm's sensitivity to the initial cluster centers and the number of clusters, an algorithm that optimizes the selection of initial cluster centers is proposed. It determines k initial centers from the distribution density of the data objects together with the perpendicular midpoint of the two nearest points, and then optimizes the number of clusters with an equalization function to obtain the best clustering. Comparative experiments on standard UCI data sets show that the improved algorithm achieves higher accuracy and stability than the traditional one.

7.
Clustering mixed data usually faces an unavoidable problem: determining the number of clusters. Building on the Liang k-prototype algorithm, attribute weights are introduced, and three notions are redefined for mixed data: the sum of between-class entropies after removing a class (SBAE_M), a validity index (CUM), and a dissimilarity measure. A weighted algorithm for determining the number of clusters in mixed data is then proposed. Its basic idea is to cluster the mixed data with the new k-prototype algorithm, compute the CUM and SBAE_M of the result, remove the worst class, and reassign that class's objects using the new dissimilarity measure; the number of classes at which CUM reaches its maximum is taken as the number of clusters. The algorithm's effectiveness is verified on five UCI data sets.

8.
Mountain Clustering Based on a Niche Particle Swarm Optimization Algorithm   (Cited by 2; self-citations: 0, by others: 2)
Combining mountain clustering with a niche particle swarm optimization (PSO) algorithm yields a mountain clustering method based on niche PSO. First, a grid is constructed over the data space and a mountain function representing data density is built on it. The step of mountain clustering that selects cluster centers by sequentially destructing the mountain function is then replaced by niche PSO: running niche PSO as a multi-peak optimizer on the mountain function locates every peak, which determines both the number of cluster centers and the position of each center. Simulation experiments show that the new algorithm remedies several defects of traditional clustering algorithms.
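The grid-based mountain function this entry starts from is easy to show concretely. The sketch below is the classical version with sequential peak destruction (the very step the paper replaces with a niche PSO multi-peak search); grid size, `sigma`, `beta`, and the stopping fraction `tol` are illustrative choices, not values from the paper.

```python
import numpy as np

def mountain_centers(X, sigma=1.0, beta=1.5, tol=0.2, max_c=10):
    # evaluate the mountain (density) function on a grid over the data
    gx, gy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 20),
                         np.linspace(X[:, 1].min(), X[:, 1].max(), 20))
    V = np.column_stack([gx.ravel(), gy.ravel()])
    D = np.linalg.norm(V[:, None] - X[None], axis=2)
    M = np.exp(-D ** 2 / (2 * sigma ** 2)).sum(axis=1)
    centers, first = [], M.max()
    # classical sequential peak destruction; the paper replaces this loop
    # with a niche-PSO multi-peak search over the same mountain function
    while len(centers) < max_c and M.max() > tol * first:
        c = V[M.argmax()]
        centers.append(c)
        M = M - M.max() * np.exp(
            -np.linalg.norm(V - c, axis=1) ** 2 / (2 * beta ** 2))
    return np.array(centers)
```

Each accepted grid peak becomes a cluster center, so both the number and the positions of the centers emerge from the shape of the density landscape.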

9.
Spectral clustering can identify nonlinear data and outperforms traditional clustering. However, the scale parameter σ of the Gaussian kernel used to measure similarity and the number of clusters k strongly affect the result and normally must be set by hand. By replacing σ with the cosine of the angle between vectors and determining the number of clusters from jumps in the eigenvalues, a self-adaptive spectral clustering algorithm is proposed for nonlinear high-dimensional data: the data are explicitly mapped into a random feature space, where the clustering is carried out. Experiments on UCI data show that the algorithm performs better than traditional algorithms.
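The two tricks in this entry, a cosine affinity in place of the Gaussian kernel's σ and an eigenvalue jump ("eigengap") to pick k, can be sketched together. This is a generic illustration, not the paper's algorithm (in particular it omits the random-feature mapping); it assumes every point has at least one positive similarity so the degree normalization is well defined.

```python
import numpy as np

def eigengap_k(X, kmax=8):
    # affinity from the cosine of the angle between points,
    # replacing the Gaussian kernel and its hand-tuned sigma
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    W = np.clip(Xn @ Xn.T, 0.0, None)
    np.fill_diagonal(W, 0.0)
    # normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}
    d = np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - W / d[:, None] / d[None, :]
    # read the cluster count off the largest jump in the sorted eigenvalues
    vals = np.linalg.eigvalsh(L)[:kmax + 1]
    return int(np.diff(vals).argmax() + 1)
```

For a graph that nearly decomposes into c components, the Laplacian has c eigenvalues near zero followed by a jump, so the position of the largest gap estimates the number of clusters.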

10.
To overcome the K-means algorithm's sensitivity of clustering results to the choice of initial cluster centers, an improved algorithm for selecting the initial centers is proposed. The algorithm repeatedly finds the largest cluster and splits it, using its two mutually farthest data objects as the starting centers of the split, until the specified number of cluster centers is reached. Simulation experiments on the KDD CUP99 data set show that clustering from the centers produced by this algorithm yields better results than the original K-means algorithm.
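The split-the-largest-cluster procedure described here can be sketched directly. The following is a minimal, assumed rendering (function name and the mean-of-cluster final centers are illustrative): the biggest cluster is bisected by assigning each point to the nearer of the cluster's two mutually farthest members.

```python
import numpy as np

def split_seeds(X, k):
    # repeatedly split the largest cluster, seeding the split with
    # the cluster's two mutually farthest points
    clusters = [X]
    while len(clusters) < k:
        i = max(range(len(clusters)), key=lambda j: len(clusters[j]))
        C = clusters.pop(i)
        D = np.linalg.norm(C[:, None] - C[None], axis=2)
        a, b = np.unravel_index(D.argmax(), D.shape)
        to_a = (np.linalg.norm(C - C[a], axis=1)
                <= np.linalg.norm(C - C[b], axis=1))
        clusters += [C[to_a], C[~to_a]]
    # use each final cluster's mean as an initial center for K-means
    return np.array([c.mean(axis=0) for c in clusters])
```

Because every split is driven by the largest intra-cluster distance, the resulting seeds tend to land in distinct dense regions rather than in one crowded area, which is what stabilizes the subsequent K-means run.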

11.
Rapid technological advances imply that the amount of data stored in databases is rising very fast. Data mining can discover helpful implicit information in such large databases, and how to detect implicit, useful information with low time cost, high correctness, and a high noise-filtering rate on large databases is a priority concern in data mining, which is why so many clustering schemes have been proposed in recent decades. This investigation presents a new data clustering approach called PHD, an enhanced version of KIDBSCAN. PHD is a hybrid density-based algorithm that partitions the data set by K-means and then clusters the resulting partitions with IDBSCAN. Finally, the closest pairs of clusters are merged until the natural number of clusters of the data set is reached. Experimental results reveal that the proposed algorithm performs the entire clustering while efficiently reducing run-time cost. They also indicate that the new algorithm performs better than several existing well-known schemes such as the K-means, DBSCAN, IDBSCAN and KIDBSCAN algorithms. Consequently, the proposed PHD algorithm is efficient and effective for data clustering in large databases.

12.
An Incremental Density-Based Grid Clustering Algorithm   (Cited by 29; self-citations: 0, by others: 29)
A density-based grid clustering algorithm, GDcA, is proposed to discover clusters of arbitrary shape in large spatial databases. The algorithm first partitions the data space into cells of equal volume and then clusters the cells; only cells whose density is at least a given threshold are expanded, which greatly reduces the time complexity. Building on GDcA, an incremental clustering algorithm, IGDcA, is presented, suited to batch updates of the data.
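The cell-counting-plus-expansion idea behind this kind of algorithm can be sketched generically. The code below is not GDcA itself but an assumed minimal version of the same scheme: hash points into grid cells, keep only cells holding at least `min_pts` points, and flood-fill over adjacent dense cells so arbitrarily shaped groups of cells become clusters.

```python
import numpy as np
from collections import deque

def grid_cluster(X, cell=1.0, min_pts=3):
    # map each 2-D point to an integer grid cell and count occupancy
    cells = {}
    for p in X:
        key = tuple((p // cell).astype(int))
        cells.setdefault(key, []).append(p)
    dense = {k for k, pts in cells.items() if len(pts) >= min_pts}
    # flood-fill over (8-)adjacent dense cells: each connected
    # component of dense cells becomes one cluster
    labels, cid = {}, 0
    for start in dense:
        if start in labels:
            continue
        q = deque([start])
        labels[start] = cid
        while q:
            cx, cy = q.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in dense and nb not in labels:
                        labels[nb] = cid
                        q.append(nb)
        cid += 1
    return labels, cid
```

Only dense cells are ever expanded, so the work scales with the number of occupied cells rather than with pairwise point distances, which is the source of the speedup the abstract claims.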

13.
An ensemble of clustering solutions or partitions may be generated for a number of reasons. If the data set is very large, clustering may be done on disjoint subsets of tractable size. The data may be distributed at different sites, for which a distributed clustering solution with a final merging of partitions is a natural fit. In this paper, two new approaches to combining partitions, represented by sets of cluster centers, are introduced. The advantage of these approaches is that they provide a final partition of data comparable to the best existing approaches, yet scale to extremely large data sets: they can be 100,000 times faster while using much less memory. The new algorithms are compared against the best existing cluster-ensemble merging approaches, against clustering all the data at once, and against a clustering algorithm designed for very large data sets. The comparison is done for fuzzy and hard k-means based clustering algorithms. It is shown that the centroid-based ensemble merging algorithms presented here generate partitions of quality comparable to the best label-vector approach or to clustering all the data at once, while providing very large speedups.
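The centroid-based merging idea scales because only the cluster centers of each base partition, not the raw points, reach the combination step. The sketch below is an assumed minimal illustration of that principle, not the paper's algorithms: each disjoint chunk is clustered independently, and the small pooled set of chunk centroids is then clustered once more.

```python
import numpy as np

def kmeans(X, k, iters=30):
    # farthest-point initialization, then standard Lloyd iterations
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers

def merge_by_centroids(chunks, k):
    # cluster each disjoint chunk independently, keep only its k centroids,
    # then cluster the small pooled centroid set itself
    pooled = np.vstack([kmeans(chunk, k) for chunk in chunks])
    return kmeans(pooled, k)
```

The final merge operates on `k * num_chunks` centroids instead of the full data, which is why this style of combination can be orders of magnitude faster than clustering everything at once.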

14.
One of the central problems in information retrieval, data mining, computational biology, statistical analysis, computer vision, geographic analysis, pattern recognition, and distributed protocols is the classification of data according to some clustering rule. Often the data is noisy, and even approximate classification is of extreme importance. The difficulty of such classification stems from the fact that the data usually has many incomparable attributes, which often leads to clustering problems in high-dimensional spaces. Since they require measuring the distance between every pair of data points, standard algorithms for computing exact clustering solutions use quadratic or "nearly quadratic" running time; i.e., O(dn^(2−α(d))) time, where n is the number of data points, d is the dimension of the space, and α(d) approaches 0 as d grows. In this paper, we show (for three fairly natural clustering rules) that computing an approximate solution can be done much more efficiently. More specifically, for agglomerative clustering (used, for example, in the AltaVista search engine), for the clustering defined by sparse partitions, and for a clustering based on minimum spanning trees, we derive randomized (1 + ε)-approximation algorithms with running times Õ(d²n^(2−γ)), where γ > 0 depends only on the approximation parameter ε and is independent of the dimension d.

15.
This paper presents parallel algorithms for determining the number of partitions of a given integer N, where the partitions may be subject to restrictions, such as being composed of distinct parts, of a given number of parts, and/or of parts belonging to a specified set. We present a series of adaptive algorithms suitable for varying numbers of processors. The fastest of these algorithms computes the number of partitions of n with largest part equal to k, for 1 ≤ k ≤ n ≤ N, in time O(log²(N)) using O(N²/log N) processors. Parallel logarithmic-time algorithms that generate partitions uniformly at random, using these quantities, are also presented.

16.
The selection of the most appropriate clustering algorithm is not a straightforward task, given that there is no clustering algorithm capable of determining the actual groups present in any dataset. A potential solution is to use different clustering algorithms to produce a set of partitions (solutions) and then select the best partition according to a specified validation measure; these measures are generally biased toward one or more clustering algorithms. Nevertheless, in several real cases, it is important to have more than one solution as the output. To address these problems, we present a hybrid partition selection algorithm, HSS, which accepts as input a set of base partitions potentially generated by clustering algorithms with different biases, and aims to return a reduced yet diverse set of partitions (solutions). HSS comprises three steps: (i) the application of a multiobjective algorithm to a set of base partitions to generate a Pareto Front (PF) approximation; (ii) the division of the solutions from the PF approximation into a certain number of regions; and (iii) the selection of a solution per region by applying the Adjusted Rand Index. We compare the results of our algorithm with those of another selection strategy, ASA. Furthermore, we test HSS as a post-processing tool for two clustering algorithms based on multiobjective evolutionary computing: MOCK and MOCLE. The experiments revealed the effectiveness of HSS in selecting a reduced number of partitions while maintaining their quality.

17.
A clustering ensemble combines in a consensus function the partitions generated by a set of independent base clusterers. In this study, both the employment of particle swarm clustering (PSC) and ensemble pruning (i.e., selective reduction of base partitions) using evolutionary techniques in the design of the consensus function are investigated. In the proposed ensemble, PSC plays two roles. First, it is used as a base clusterer. Second, it is employed in the consensus function, arguably the most challenging element of the ensemble. The proposed consensus function exploits a representation for the base partitions that makes cluster alignment unnecessary, allows for the combination of partitions with different numbers of clusters, and supports both disjoint and overlapping (fuzzy, probabilistic, and possibilistic) partitions. Results on both synthetic and real-world data sets show that the proposed ensemble can produce statistically significantly better partitions, in terms of the validity indices used, than the best base partition available in the ensemble. In general, a small number of selected base partitions (below 20% of the total) yields the best results. Moreover, results produced by the proposed ensemble compare favorably to those of state-of-the-art clustering algorithms, and especially to swarm-based clustering ensemble algorithms.

18.
Clustering data streams has drawn lots of attention in the last few years due to their ever-growing presence. Data streams put additional challenges on clustering, such as limited time and memory and one-pass clustering. Furthermore, discovering clusters with arbitrary shapes is very important in data stream applications. Data streams are infinite and evolve over time, and we do not have any knowledge about the number of clusters. In a data stream environment, due to various factors, some noise appears occasionally. Density-based methods are a remarkable class of data stream clustering algorithms: they can discover clusters of arbitrary shape and detect noise, and they do not need the number of clusters in advance. Due to data stream characteristics, however, traditional density-based clustering is not directly applicable. Recently, many density-based clustering algorithms have been extended to data streams. The main idea in these algorithms is to use density-based methods in the clustering process while overcoming the constraints imposed by the data stream's nature. The purpose of this paper is to shed light on the literature on density-based clustering over data streams. We not only summarize the main density-based clustering algorithms for data streams and discuss their uniqueness and limitations, but also explain how they address the challenges of clustering data streams. Moreover, we investigate the evaluation metrics used in validating cluster quality and measuring algorithms' performance. It is hoped that this survey will serve as a steppingstone for researchers studying data stream clustering, particularly density-based algorithms.

19.
Clustering is an important research topic that has practical applications in many fields. It has been demonstrated that fuzzy clustering, using algorithms such as the fuzzy C-means (FCM), has clear advantages over crisp and probabilistic clustering methods. Like most clustering algorithms, however, FCM and its derivatives need the number of clusters in the given data set as one of their initializing parameters. The main goal of this paper is to develop an effective fuzzy algorithm for automatically determining the number of clusters. After a brief review of the relevant literature, we present a new algorithm for determining the number of clusters in a given data set and a new validity index for measuring the "goodness" of clustering. Experimental results and comparisons are given to illustrate the performance of the new algorithm.
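The general recipe this entry describes, run a fuzzy clusterer for each candidate cluster count and pick the count that optimizes a validity index, can be sketched as follows. This is an assumed illustration, not the paper's algorithm or its new index: it pairs a plain fuzzy C-means with the well-known Xie-Beni index (smaller is better), and the initialization, `m`, and `kmax` choices are all placeholders.

```python
import numpy as np

def fcm(X, k, m=2.0, iters=60):
    # farthest-point initial centers, then alternate membership/center updates
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        D = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-10
        inv = D ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # fuzzy memberships
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

def xie_beni(X, centers, U, m=2.0):
    # compactness / separation validity index (smaller is better)
    D2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    compact = ((U ** m) * D2).sum()
    sep = min(((centers[i] - centers[j]) ** 2).sum()
              for i in range(len(centers))
              for j in range(len(centers)) if i != j)
    return compact / (len(X) * max(sep, 1e-12))

def auto_k(X, kmax=6):
    # run FCM for each candidate k and keep the best-scoring one
    scores = {k: xie_beni(X, *fcm(X, k)) for k in range(2, kmax + 1)}
    return min(scores, key=scores.get)
```

Too few clusters inflates the compactness term while too many clusters collapses the separation term, so the index is minimized near the natural cluster count.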

20.
An Efficient Clustering Algorithm for Large-Scale Transaction Databases   (Cited by 13; self-citations: 0, by others: 13)
陈宁, 陈安, 周龙骧. 《软件学报》 (Journal of Software), 2001, 12(4): 475-484
The clustering of large-scale transaction databases is studied, and a two-phase clustering algorithm, CATD, is proposed. The algorithm first divides the database into partitions; within each partition, a hierarchical clustering algorithm performs local clustering, preliminarily grouping transactions into sub-clusters whose number is controlled by an inter-cluster distance parameter. All sub-clusters are then clustered globally, with noise identified at the same time. Thanks to the partitioning scheme and a support-vector representation of clusters, the algorithm scans the database only once and clusters in main memory, so it can handle large-scale databases.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号