Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Knowledge, 2006, 19(1): 77-83
Ensemble methods that train multiple learners and then combine their predictions have been shown to be very effective in supervised learning. This paper explores ensemble methods for unsupervised learning. Here, an ensemble comprises multiple clusterers, each trained by the k-means algorithm from different initial points. The clusters discovered by different clusterers are aligned, i.e. similar clusters are assigned the same label, by counting their overlapping data items. Four methods are then developed to combine the aligned clusterers. Experiments show that clustering performance can be significantly improved by ensemble methods, and that using mutual information to select a subset of clusterers for weighted voting is a good choice. Since the proposed methods work by analyzing the clustering results rather than the internal mechanisms of the component clusterers, they are applicable to diverse kinds of clustering algorithms.
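A minimal sketch of the align-then-vote idea described above; `align_labels` and `vote` are hypothetical names, and the greedy most-overlap relabeling is a simplification of the paper's alignment step:

```python
from collections import Counter

def align_labels(reference, labels):
    # Map each cluster of `labels` to the reference label it overlaps most.
    # Greedy: in pathological cases two clusters may collapse onto one label.
    mapping = {}
    for c in set(labels):
        members = [i for i, l in enumerate(labels) if l == c]
        overlap = Counter(reference[i] for i in members)
        mapping[c] = overlap.most_common(1)[0][0]
    return [mapping[l] for l in labels]

def vote(partitions):
    # Align every partition to the first one, then majority-vote per item.
    ref = partitions[0]
    aligned = [ref] + [align_labels(ref, p) for p in partitions[1:]]
    return [Counter(col).most_common(1)[0][0] for col in zip(*aligned)]
```

The mutual-information-based clusterer selection mentioned in the abstract would filter `partitions` before voting.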

2.
The traditional k-means algorithm selects its initial cluster centers randomly, which makes the clustering results unstable. To address this problem, an algorithm is proposed that improves the selection of initial k-means cluster centers based on a dispersion measure. The algorithm first treats all objects as one large cluster, then repeatedly selects, from the cluster containing the most objects, the two objects with the largest and smallest dispersion as initial cluster centers, and assigns the remaining objects of that cluster to the nearest initial cluster, until the number of clusters equals the specified value k. These k clusters are then used as the initial clusters for the k-means algorithm. The proposed algorithm, the traditional k-means algorithm, and the max-min distance clustering algorithm were tested on several data sets. The experimental results show that the improved k-means algorithm selects a unique set of initial cluster centers, reduces the number of iterations in the clustering process, and produces stable clustering results with higher accuracy.

3.
Research on selecting the optimal number of clusters and initial cluster centers (cited: 2; self: 0; others: 2)
In the traditional k-means algorithm, the number of clusters k cannot be determined in advance, and the initial cluster centers are selected randomly, which tends to make the clustering results unstable and the accuracy low. This paper uses the SSE (sum of squared errors) to select the number of clusters k, and selects initial cluster centers under the principle that the region around a cluster center should be relatively dense while the distances between cluster centers should be relatively large; this keeps the initial centers from being concentrated in a small region and prevents convergence to a local optimum. Experiments on standard UCI data sets show that the method selects the optimal value of k and a unique set of initial centers, achieving high clustering accuracy and a small sum of squared errors.
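The SSE-based choice of k can be illustrated with a toy 1-D sketch; `kmeans_1d` and `sse_curve` are illustrative names, and the naive "first k points" initialization merely stands in for the density-and-distance seeding proposed above:

```python
def kmeans_1d(points, k, iters=20):
    """Minimal 1-D Lloyd's algorithm with a naive 'first k points'
    initialization (illustrative only)."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = sum(cl) / len(cl)
    sse = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, sse

def sse_curve(points, k_max):
    """SSE for k = 1..k_max; the bend ('elbow') suggests the best k."""
    return [kmeans_1d(points, k)[1] for k in range(1, k_max + 1)]
```

On three well-separated blobs, the curve drops sharply up to k = 3 and flattens afterwards.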

4.
This paper presents a new k-means type algorithm for clustering high-dimensional objects in subspaces. In high-dimensional data, clusters of objects often exist in subspaces rather than in the entire space. For example, in text clustering, clusters of documents of different topics are categorized by different subsets of terms or keywords. The keywords for one cluster may not occur in the documents of other clusters. This is a data sparsity problem faced in clustering high-dimensional data. In the new algorithm, we extend the k-means clustering process to calculate a weight for each dimension in each cluster and use the weight values to identify the subsets of important dimensions that categorize different clusters. This is achieved by including the weight entropy in the objective function that is minimized in the k-means clustering process. An additional step is added to the k-means clustering process to automatically compute the weights of all dimensions in each cluster. The experiments on both synthetic and real data have shown that the new algorithm can generate better clustering results than other subspace clustering algorithms. The new algorithm is also scalable to large data sets.
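The entropy-weighting step can be sketched as a per-cluster weight update; `update_weights` is a hypothetical name and `gamma` stands for the entropy-regularization parameter, so this illustrates the idea rather than the authors' exact formulation:

```python
import math

def update_weights(cluster_points, center, gamma=1.0):
    # Per-dimension dispersion of the cluster around its center.
    dims = len(center)
    D = [sum((p[j] - center[j]) ** 2 for p in cluster_points)
         for j in range(dims)]
    # Entropy-regularized closed form: w_j proportional to exp(-D_j / gamma),
    # normalized to sum to 1; compact dimensions receive large weights.
    e = [math.exp(-d / gamma) for d in D]
    s = sum(e)
    return [v / s for v in e]
```

A dimension along which the cluster is tight gets nearly all the weight, which is how the algorithm identifies the subspace that categorizes the cluster.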

5.
The k-means algorithm is widely used for clustering because of its computational efficiency. Given n points in d-dimensional space and the number of desired clusters k, k-means seeks a set of k cluster centers so as to minimize the sum of the squared Euclidean distance between each point and its nearest cluster center. However, the algorithm is very sensitive to the initial selection of centers and is likely to converge to partitions that are significantly inferior to the global optimum. We present a genetic algorithm (GA) for evolving centers in the k-means algorithm that simultaneously identifies good partitions for a range of values around a specified k. The set of centers is represented using a hyper-quadtree constructed on the data. This representation is exploited in our GA to generate an initial population of good centers and to support a novel crossover operation that selectively passes good subsets of neighboring centers from parents to offspring by swapping subtrees. Experimental results indicate that our GA finds the global optimum for data sets with known optima and finds good solutions for large simulated data sets.

6.
Clustering is a data analysis technique, particularly useful when there are many dimensions and little prior information about the data. Partitional clustering algorithms are efficient but suffer from sensitivity to the initial partition and noise. We propose here k-attractors, a partitional clustering algorithm tailored to numeric data analysis. As a preprocessing (initialization) step, it uses maximal frequent itemset discovery and partitioning to define the number of clusters k and the initial cluster "attractors." During its main phase the algorithm uses a distance measure, which is adapted with high precision to the way initial attractors are determined. We applied k-attractors as well as the k-means, EM, and FarthestFirst clustering algorithms to several datasets and compared results. The comparison favored k-attractors in convergence speed and cluster-formation quality in most cases; it outperforms the three other algorithms except on datasets of very small cardinality that contain only a few frequent itemsets. On the downside, its initialization phase adds an overhead that can be deemed acceptable only when it contributes significantly to the algorithm's accuracy.

7.
To address the shortcomings of the k-means algorithm in DDoS attack detection, namely its sensitivity to the initial cluster centers and its requirement that the number of clusters be specified in advance, an Adaptive Clustering Algorithm based on a dynamic index and initial-center selection is proposed, and a DDoS attack detection model is built with it. The model was tested on the LLS_DDoS_1.0 data set and compared with the k-means algorithm. The experimental results show that the algorithm improves the detection rate of DDoS attacks and reduces the false alarm rate, verifying the effectiveness of the detection method.

8.
An improved algorithm for selecting initial k-means cluster centers (cited: 3; self: 0; others: 3)
In the traditional k-means clustering algorithm, the clustering results fluctuate with the choice of initial cluster centers. To address this drawback, an algorithm that optimizes the initial cluster centers is proposed. The algorithm computes a density parameter for each data object and then selects k objects located in high-density regions as the initial cluster centers. Comparative experiments on standard UCI data sets show that, for a given number of clusters, k-means with the improved center selection achieves higher accuracy and stability than k-means with randomly selected initial centers.
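A sketch of density-based seeding; `density_centers` and `radius` are our illustrative names, and the rule of skipping candidates within `radius` of an already chosen center is an assumption added here to keep the k centers apart:

```python
def density_centers(points, k, radius):
    """Pick k initial centers from high-density regions: density = number
    of neighbours within `radius`; greedily take the densest point, skipping
    candidates too close to an already chosen center."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Density parameter: neighbour count within the radius.
    dens = [(sum(dist2(p, q) <= radius ** 2 for q in points), i)
            for i, p in enumerate(points)]
    centers = []
    for _, i in sorted(dens, reverse=True):
        if all(dist2(points[i], c) > radius ** 2 for c in centers):
            centers.append(points[i])
        if len(centers) == k:
            break
    return centers
```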

9.
An improved rough clustering method based on a genetic algorithm (cited: 2; self: 0; others: 2)
Traditional clustering algorithms partition data objects by hard computing, yet in practice the boundaries between classes are often not clear-cut. Rough set theory provides a way to handle the uncertainty of boundary objects, so it is combined here with the k-means method. Moreover, the traditional k-means method requires the number of clusters k to be given in advance, which is difficult in practice; and although k-means has strong local search ability, it easily falls into local optima, while genetic algorithms can find global optima but converge too quickly. Accordingly, an improved rough clustering method based on a genetic algorithm is proposed. The algorithm generates the number of clusters dynamically, uses the max-min principle to generate initial cluster centers, and handles boundary objects with the upper and lower approximations of rough set theory. The algorithm was validated on the UCI Iris data set; the experimental results show that it achieves high accuracy and more stable overall performance.

10.
A clustering ensemble combines in a consensus function the partitions generated by a set of independent base clusterers. In this study, both the employment of particle swarm clustering (PSC) and ensemble pruning (i.e., selective reduction of base partitions) using evolutionary techniques in the design of the consensus function are investigated. In the proposed ensemble, PSC plays two roles. First, it is used as a base clusterer. Second, it is employed in the consensus function, arguably the most challenging element of the ensemble. The proposed consensus function exploits a representation for the base partitions that makes cluster alignment unnecessary, allows for the combination of partitions with different numbers of clusters, and supports both disjoint and overlapping (fuzzy, probabilistic, and possibilistic) partitions. Results on both synthetic and real-world data sets show that the proposed ensemble can produce statistically significantly better partitions, in terms of the validity indices used, than the best base partition available in the ensemble. In general, a small number of selected base partitions (below 20% of the total) yields the best results. Moreover, results produced by the proposed ensemble compare favorably to those of state-of-the-art clustering algorithms, and especially to swarm-based clustering ensemble algorithms.

11.
In k-means clustering, we are given a set of n data points in d-dimensional space Rd and an integer k, and the problem is to determine a set of k points in Rd, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's (1982) algorithm. We present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation.

12.
A consensus clustering algorithm based on a voting mechanism (cited: 1; self: 0; others: 1)
Taking the one-pass clustering algorithm as the basic algorithm for partitioning data, this paper studies the cluster-ensemble problem. The one-pass clustering algorithm is run repeatedly with randomly chosen thresholds and random data-input orders to obtain different clustering results; these results are mapped to a co-association matrix over pairs of patterns, and a voting mechanism is applied to the matrix to obtain the final partition. The proposed ensemble algorithm was evaluated on real and synthetic data sets and compared with related clustering algorithms; the experimental results show that it is effective and feasible.
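The co-association voting described above can be sketched as follows; linking items that co-occur in more than half of the base partitions and taking connected components is one common reading of this kind of evidence-accumulation voting, not necessarily the paper's exact rule:

```python
def coassociation(partitions):
    """Co-association matrix: entry (i, j) is the fraction of base
    partitions that put items i and j in the same cluster."""
    n = len(partitions[0])
    m = len(partitions)
    return [[sum(p[i] == p[j] for p in partitions) / m for j in range(n)]
            for i in range(n)]

def vote_partition(partitions, threshold=0.5):
    # Link i and j when they co-occur in more than `threshold` of the
    # partitions, then take connected components as final clusters.
    C = coassociation(partitions)
    n = len(C)
    labels, cur = [-1] * n, 0
    for i in range(n):
        if labels[i] == -1:
            stack, labels[i] = [i], cur
            while stack:  # depth-first flood fill over the linked pairs
                u = stack.pop()
                for v in range(n):
                    if labels[v] == -1 and C[u][v] > threshold:
                        labels[v] = cur
                        stack.append(v)
            cur += 1
    return labels
```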

13.
The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real-world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categorical domains and domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow for clustering objects described by mixed numeric and categorical attributes. We use the well-known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real-world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications.
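A minimal k-modes sketch following the description above (matching dissimilarity, modes instead of means, frequency-based mode updates); the "first k objects" initialization is our simplification:

```python
from collections import Counter

def mismatch(a, b):
    """Simple matching dissimilarity: number of attributes that differ."""
    return sum(x != y for x, y in zip(a, b))

def k_modes(objects, k, iters=10):
    modes = [list(o) for o in objects[:k]]  # naive init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for o in objects:
            j = min(range(k), key=lambda c: mismatch(o, modes[c]))
            clusters[j].append(o)
        for j, cl in enumerate(clusters):
            if cl:  # frequency-based update: per-attribute mode
                modes[j] = [Counter(col).most_common(1)[0][0]
                            for col in zip(*cl)]
    labels = [min(range(k), key=lambda c: mismatch(o, modes[c]))
              for o in objects]
    return modes, labels
```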

14.
Research on cluster-ensemble methods based on hierarchical clustering (cited: 1; self: 0; others: 1)
Clustering ensembles are more robust and accurate than individual clustering methods. An ensemble consists of two parts: generating the individual members and fusing their results. Here, the individual members are obtained with the k-means algorithm and then fused using single-linkage, complete-linkage, and average-linkage hierarchical clustering. The ARI (Adjusted Rand Index) was used to evaluate the performance of the ensemble methods. The experimental results show that average linkage outperforms single and complete linkage in ensemble performance. The relationship between the clustering accuracy of the fusion methods and the ensemble size is also studied and discussed.

15.
An algorithm for finding the optimal number of clusters with FCM (cited: 2; self: 0; others: 2)
In algorithms that use FCM to find the optimal number of clusters, the cluster centers must be re-initialized every time FCM is run, and since FCM is sensitive to the initial centers, such algorithms are very unstable. This paper improves the algorithm by introducing a merging function so that the centers of the (c-1)-cluster partition depend on the centers of the c-cluster partition. Simulation experiments show that the new algorithm is more stable and runs noticeably faster than the original.

16.
Although the traditional K-means clustering algorithm converges quickly, the number of clusters k cannot be determined in advance and the algorithm is sensitive to the initial centers. To address these drawbacks, an optimized K-means algorithm based on density expectation and the Silhouette cluster validity index is proposed. An initial-center selection scheme based on density expectation is given: the k samples within the expected-density interval that are farthest apart are taken as the initial cluster centers. This scheme effectively reduces the dependence of K-means on the initial centers and thus yields higher clustering quality. On this basis, the Silhouette validity index is used to analyze the clustering results for different values of k and determine the optimal number of clusters, remedying the inability to fix k in advance. The experimental results and analysis verify the feasibility and effectiveness of the proposed scheme.
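The Silhouette criterion used above can be sketched for 1-D data; to pick k, one computes this score for the partition produced at each candidate k and keeps the maximum (singleton clusters are scored with the a = 0 convention here):

```python
def silhouette(points, labels):
    """Mean silhouette coefficient s(i) = (b - a) / max(a, b), where a is
    the mean distance to the point's own cluster (excluding itself) and b
    the smallest mean distance to any other cluster."""
    def d(p, q):
        return abs(p - q)
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    total = 0.0
    for p, l in zip(points, labels):
        own = clusters[l]
        a = (sum(d(p, q) for q in own) / (len(own) - 1)) if len(own) > 1 else 0.0
        b = min(sum(d(p, q) for q in cl) / len(cl)
                for m, cl in clusters.items() if m != l)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return total / len(points)
```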

17.
To address problems such as the dependence of the number of clusters on human experience when clustering algorithms are applied to educational big data, a new clustering validity index is proposed: intra-cluster compactness is measured by the sum of distances from all samples in a cluster to the cluster center, and inter-cluster separation by the minimum, over all pairs of clusters, of the sum of distances between their samples. By balancing compactness against separation, the optimal partition is obtained. Tests on the UCI and KDD CUP99 data sets show that the new index gives valid and reliable evaluations of clustering quality. On this basis, a new cluster analysis model is designed by combining the index with the affinity propagation algorithm, and the model is used to cluster the vocational abilities of students at a university. The results show that the new model accurately determines the number of clusters k and effectively uncovers students' vocational tendencies, providing a basis for analyzing students' career potential and for enterprises' talent selection.

18.
Fuzzy c-means (FCM) and its variants suffer from two problems, local minima and cluster validity, which have a direct impact on the formation of the final clustering. Two kinds of strategies, optimization and center initialization, address the problem of local minima. This paper proposes a center-initialization approach based on a minimum spanning tree to keep FCM from local minima. With regard to cluster validity, various strategies have been proposed. On the basis of the fuzzy cluster validity index, this paper proposes a selection model that combines multiple pairs of a fuzzy clustering algorithm and a cluster validity index to identify the number of clusters and simultaneously select the optimal fuzzy clustering for a dataset. The promising performance of the proposed center-initialization method and selection model is demonstrated by experiments on real datasets.
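The MST-based seeding can be sketched on 1-D data: build the minimum spanning tree with Prim's algorithm, delete the k-1 longest edges, and take each remaining component's mean as an initial center. The name `mst_centers` is ours, and this is one common MST seeding scheme that may differ in detail from the paper's construction:

```python
def mst_centers(points, k):
    n = len(points)
    def d(i, j):
        return abs(points[i] - points[j])
    # Prim's algorithm: grow the tree one cheapest edge at a time.
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        w, u, v = min((d(u, v), u, v) for u in in_tree
                      for v in range(n) if v not in in_tree)
        in_tree.add(v)
        edges.append((w, u, v))
    edges.sort()
    keep = edges[: n - k]  # drop the k-1 longest edges
    # Union-find over the kept edges to recover the k components.
    comp = list(range(n))
    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x
    for _, u, v in keep:
        comp[find(u)] = find(v)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(points[i])
    return sorted(sum(g) / len(g) for g in groups.values())
```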

19.
A fuzzy k-modes algorithm for clustering categorical data (cited: 12; self: 0; others: 12)
This correspondence describes extensions to the fuzzy k-means algorithm for clustering categorical data. By using a simple matching dissimilarity measure for categorical objects and modes instead of means for clusters, a new approach is developed, which allows the use of the k-means paradigm to efficiently cluster large categorical data sets. A fuzzy k-modes algorithm is presented and the effectiveness of the algorithm is demonstrated with experimental results.

20.
In this article, a cluster validity index and its fuzzification is described, which can provide a measure of goodness of clustering on different partitions of a data set. The maximum value of this index, called the PBM-index, across the hierarchy provides the best partitioning. The index is defined as a product of three factors, maximization of which ensures the formation of a small number of compact clusters with large separation between at least two clusters. We have used both the k-means and the expectation maximization algorithms as underlying crisp clustering techniques. For fuzzy clustering, we have utilized the well-known fuzzy c-means algorithm. Results demonstrating the superiority of the PBM-index in appropriately determining the number of clusters, as compared to three other well-known measures, the Davies-Bouldin index, Dunn's index and the Xie-Beni index, are provided for several artificial and real-life data sets.
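A crisp 1-D sketch of the PBM index; `pbm_index` is our name for the illustration, computing the published form PBM = ((1/K) * (E1/EK) * DK)^2, where E1 is the total distance of all points to the global mean, EK the total within-cluster distance to the cluster means, and DK the largest distance between two cluster means (larger is better):

```python
def pbm_index(points, labels):
    # Group points by label and compute cluster means.
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    K = len(clusters)
    centers = {l: sum(c) / len(c) for l, c in clusters.items()}
    # E1: total distance to the global mean (a constant for the data set).
    g = sum(points) / len(points)
    E1 = sum(abs(p - g) for p in points)
    # EK: total within-cluster distance; DK: largest center separation.
    EK = sum(abs(p - centers[l]) for l, c in clusters.items() for p in c)
    DK = max(abs(centers[a] - centers[b]) for a in centers for b in centers)
    return ((1.0 / K) * (E1 / EK) * DK) ** 2
```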


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号