20 similar documents found.
1.
In the traditional k-means clustering algorithm, the clustering result fluctuates with the choice of initial cluster centers. To address this weakness, an algorithm for optimizing the initial cluster centers is proposed: a density parameter is computed for each data object, and k points lying in high-density regions are selected as the initial centers. Experiments on standard UCI datasets show that, with the number of clusters given, k-means seeded with the centers chosen by the improved method achieves higher accuracy and stability than k-means with randomly chosen initial centers.
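As a rough illustration of the kind of scheme this abstract describes (not the authors' implementation), the sketch below defines each point's density parameter as the number of neighbours within the mean pairwise distance, an assumed definition, and then picks high-density points that are far from already-chosen centers:

```python
import numpy as np

def density_init(X, k):
    """Pick k initial cluster centers from high-density regions (sketch).

    A point's density parameter here is the number of points within the
    mean pairwise distance; after the densest point, each next center
    favors high density far from the centers chosen so far.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    radius = d.mean()                        # neighborhood radius (assumed definition)
    density = (d < radius).sum(axis=1)       # density parameter per point
    centers = [int(np.argmax(density))]      # densest point is the first center
    for _ in range(k - 1):
        # score = density times distance to the nearest chosen center
        score = density * d[:, centers].min(axis=1)
        score[centers] = -1.0                # never re-pick a chosen center
        centers.append(int(np.argmax(score)))
    return X[centers]
```

The returned rows can be passed directly as initial centers to any standard Lloyd-style k-means loop.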
2.
Stability of a learning algorithm with respect to small input perturbations is an important property, as it implies that the derived models are robust to the presence of noisy features and/or data sample fluctuations. The qualitative nature of the stability property hinders the development of practical, stability-optimizing data mining algorithms, as several issues naturally arise, such as: how "much" stability is enough, or how stability can be effectively associated with intrinsic data properties. In this work we take these issues into account and explore the effect of stability maximization in the continuous (PCA-based) k-means clustering problem. Our analysis is based on both mathematical optimization and statistical arguments that complement each other and allow for a solid interpretation of the algorithm's stability properties. Interestingly, we derive that stability maximization naturally introduces a tradeoff between cluster separation and variance, leading to the selection of features that have a high cluster separation index that is not artificially inflated by the features' variance. The proposed algorithmic setup is based on a Sparse PCA approach that selects the features that maximize stability in a greedy fashion. In our study, we also analyze several properties of Sparse PCA relevant to stability that promote Sparse PCA as a viable feature selection mechanism for clustering. The practical relevance of the proposed method is demonstrated in the context of cancer research, where we consider the problem of detecting potential tumor biomarkers using microarray gene expression data. The application of our method to a leukemia dataset shows that the tradeoff between cluster separation and variance leads to the selection of features corresponding to important biomarker genes. Some of them have relatively low variance and are not detected without the direct optimization of stability in Sparse-PCA-based k-means. Apart from the qualitative evaluation, we have also verified our approach as a feature selection method for k-means clustering using four cancer research datasets. The quantitative empirical results illustrate the practical utility of our framework as a feature selection mechanism for clustering.
3.
Adapting k-means for supervised clustering (total citations: 2; self-citations: 1; citations by others: 1)
k-means is traditionally viewed as an algorithm for the unsupervised clustering of a heterogeneous population into a number of more homogeneous groups of objects. However, it is not necessarily guaranteed to group the same types (classes) of objects together. In such cases, some supervision is needed to partition objects which have the same label into one cluster. This paper demonstrates how the popular k-means clustering algorithm can be profitably modified to be used as a classifier algorithm. The output field itself cannot be used in the clustering, but it is used in developing a suitable metric defined on the other fields. The proposed algorithm combines Simulated Annealing with the modified k-means algorithm. We apply the proposed algorithm to real data sets and compare the output of the resultant classifier to that of C4.5.
4.
5.
DBSCAN (density-based spatial clustering of applications with noise) is one of the most widely used density-based clustering algorithms. However, its time complexity is too high (…
6.
周涛 (Zhou Tao), 《计算机工程与应用》 (Computer Engineering and Applications), 2010, 46(26): 7-10
Rough clustering is an effective approach among clustering algorithms for uncertain data. By analyzing the rough k-means algorithm, this paper identifies drawbacks in the setting of its three parameters wl, wu, and ε, and proposes an adaptive rough k-means clustering algorithm. The algorithm further improves the clustering quality of rough k-means and reduces its sensitivity to noise. Experiments verify the effectiveness of the algorithm.
7.
A. C. S. Santos, International Journal of Remote Sensing, 2016, 37(13): 3005-3020
Hyperspectral images usually have large volumes of data comprising hundreds of spectral bands. Removal of redundant bands can both reduce computational time and improve classification performance. This work proposes and analyses a band-selection method based on the k-means clustering strategy combined with a classification approach using entropy filtering. Experimental results in different terrain images show that our method can significantly reduce the number of bands while maintaining an accurate classification.
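One plausible reading of this abstract's idea, offered as a sketch rather than the authors' method, is to cluster the bands themselves with k-means and keep from each cluster the band with the highest histogram entropy; both the grouping and the entropy criterion below are assumptions:

```python
import numpy as np

def select_bands(cube, k, iters=15, seed=0):
    """Band-selection sketch: cluster spectral bands with k-means, then
    keep from each cluster the band with the highest Shannon entropy.
    `cube` has shape (n_bands, height, width)."""
    bands = cube.reshape(cube.shape[0], -1).astype(float)  # one row per band
    rng = np.random.default_rng(seed)
    centers = bands[rng.choice(len(bands), k, replace=False)]
    for _ in range(iters):                                 # plain Lloyd iterations
        lab = ((bands[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                centers[j] = bands[lab == j].mean(0)

    def entropy(v):  # Shannon entropy of a band's intensity histogram
        p, _ = np.histogram(v, bins=32)
        p = p[p > 0] / p.sum()
        return -(p * np.log(p)).sum()

    chosen = []
    for j in range(k):                 # one representative band per cluster
        idx = np.flatnonzero(lab == j)
        if idx.size:
            chosen.append(int(idx[np.argmax([entropy(bands[b]) for b in idx])]))
    return sorted(chosen)
```

A downstream classifier would then be trained on `cube[chosen]` instead of the full band stack.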
8.
Automated variable weighting in k-means type clustering (total citations: 9; self-citations: 0; citations by others: 9)
Huang JZ, Ng MK, Rong H, Li Z, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(5): 657-668
This paper proposes a k-means type clustering algorithm that can automatically calculate variable weights. A new step is introduced to the k-means clustering process to iteratively update variable weights based on the current partition of the data, and a formula for weight calculation is proposed. The convergence theorem of the new clustering process is given. The variable weights produced by the algorithm measure the importance of variables in clustering and can be used for variable selection in data mining applications where large and complex real data are often involved. Experimental results on both synthetic and real data have shown that the new algorithm outperforms the standard k-means type algorithms in recovering clusters in data.
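A minimal sketch of this alternating scheme is below. The weight update w_t = 1 / Σ_s (D_t/D_s)^(1/(β-1)), with D_t the within-cluster dispersion of variable t, follows the published idea of down-weighting variables that spread widely inside clusters; the farthest-point initialization and constants are assumptions of this sketch, not the paper's choices:

```python
import numpy as np

def w_kmeans(X, k, beta=2.0, iters=20, seed=0):
    """k-means with automated variable weighting (sketch): a weight
    update step is added after the usual assignment and center steps."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    # farthest-point initialization (keeps the sketch robust)
    idx = [int(rng.integers(n))]
    for _ in range(k - 1):
        d2min = ((X[:, None] - X[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(np.argmax(d2min)))
    centers = X[idx].astype(float)
    w = np.full(m, 1.0 / m)                     # start with equal weights
    for _ in range(iters):
        # assignment step with weighted squared distances
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w ** beta).sum(-1)
        labels = d2.argmin(1)
        # center update step
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
        # weight update step: D_t = within-cluster dispersion of variable t
        D = ((X - centers[labels]) ** 2).sum(0) + 1e-12
        w = 1.0 / ((D[:, None] / D[None, :]) ** (1.0 / (beta - 1))).sum(1)
    return labels, w
```

With β = 2 the weights sum to 1, and a noisy variable with large within-cluster spread receives a small weight, which is exactly the variable-selection signal the abstract describes.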
9.
10.
A k-means clustering with a new privacy-preserving concept, user-centric privacy preservation, is presented. In this framework, users can conduct data mining using their private information by storing it in their local storage. After the computation, they obtain only the mining result, without disclosing private information to others. In most cases, the number of parties that can join conventional privacy-preserving data mining has been assumed to be only two. In our framework, we assume large numbers of parties join the protocol; therefore, not only scalability but also asynchronism and fault tolerance are important. Considering this, we propose a k-means algorithm combined with a decentralized cryptographic protocol and a gossip-based protocol. The computational complexity is O(log n) with respect to the number of parties n, and experimental results show that our protocol is scalable even with one million parties.
11.
A reference-region-based k-means text clustering algorithm (total citations: 4; self-citations: 1; citations by others: 4)
k-means is a widely used text clustering algorithm; its main drawbacks are that the final number of clusters k and the corresponding initial centers must be specified manually. To address these drawbacks, a reference-region-based initialization method is proposed that automatically generates the initial partition for k-means; during reference-region generation, a maximum-slope (absolute value) method is designed to determine the threshold automatically. Theoretical analysis and experimental results show that the improved algorithm effectively raises the accuracy of text clustering with acceptable efficiency.
12.
Building on an existing k-means cluster-center selection algorithm proposed in 2010, an improvement is made: a density parameter is computed for each sample from the pairwise distances, and the sample with the largest density parameter value is chosen as an initial cluster center. When the largest density parameter value is not unique, a rule for reasonably choosing among the tied samples is given; the k initial cluster centers are obtained in turn, yielding a new k-means cluster-center selection algorithm. Experiments show that the proposed algorithm achieves higher accuracy than the compared algorithm.
13.
14.
Fast and exact out-of-core and distributed k-means clustering (total citations: 1; self-citations: 2; citations by others: 1)
Clustering has been one of the most widely studied topics in data mining, and k-means has been one of the most popular clustering algorithms. k-means requires several passes over the entire dataset, which can make it very expensive for large disk-resident datasets. In view of this, a lot of work has been done on various approximate versions of k-means, which require only one or a small number of passes over the entire dataset.

In this paper, we present a new algorithm, called fast and exact k-means clustering (FEKM), which typically requires only one or a small number of passes over the entire dataset and provably produces the same cluster centres as the original k-means algorithm. The algorithm uses sampling to create initial cluster centres and then takes one or more passes over the entire dataset to adjust these cluster centres. We provide theoretical analysis to show that the cluster centres thus reported are the same as the ones computed by the original k-means algorithm. Experimental results from a number of real and synthetic datasets show speedups between a factor of 2 and 4.5 compared with k-means.

This paper also describes and evaluates a distributed version of FEKM, which we refer to as DFEKM. This algorithm is suitable for analysing data that is distributed across loosely coupled machines. Unlike previous work in this area, DFEKM provably produces the same results as the original k-means algorithm. Our experimental results show that DFEKM is clearly better than two other possible options for exact clustering on distributed data: downloading all data and running sequential k-means, or running parallel k-means on a loosely coupled configuration. Moreover, even in a tightly coupled environment, DFEKM can outperform parallel k-means if there is a significant load imbalance.
Ruoming Jin is currently an assistant professor in the Computer Science Department at Kent State University. He received a BE and a ME degree in computer engineering from Beihang University (BUAA), China in 1996 and 1999, respectively. He earned his MS degree in computer science from University of Delaware in 2001, and his Ph.D. degree in computer science from the Ohio State University in 2005. His research interests include data mining, databases, processing of streaming data, bioinformatics, and high performance computing. He has published more than 30 papers in these areas. He is a member of ACM and SIGKDD.
Anjan Goswami studied robotics at the Indian Institute of Technology at Kanpur. While working with IBM, he became interested in studying computer science. He then obtained a master's degree from the University of South Florida, where he worked on computer vision problems. He then transferred to the PhD program in computer science at OSU, where he did a master's thesis on efficient clustering algorithms for massive, distributed and streaming data. On successful completion of this, he decided to join a web-service-provider company to do research in designing and developing high-performance search solutions for very large structured data. Anjan's favourite recreations are studying and predicting technology trends, nature photography, hiking, literature and soccer.
Gagan Agrawal is an Associate Professor of Computer Science and Engineering at the Ohio State University. He received his B.Tech degree from Indian Institute of Technology, Kanpur, in 1991, and M.S. and Ph.D. degrees from University of Maryland, College Park, in 1994 and 1996, respectively. His research interests include parallel and distributed computing, compilers, data mining, grid computing, and data integration. He has published more than 110 refereed papers in these areas. He is a member of ACM and IEEE Computer Society. He received a National Science Foundation CAREER award in 1998.
15.
16.
To address problems with case-retrieval algorithms in CBR systems, the case base is clustered following the idea of the k-means algorithm, and a case-retrieval algorithm is designed on top of the clustering. The selection rules for sample cases are analyzed, and the case-retrieval algorithm is discussed in detail. Experimental results show that the method effectively improves both the recall of case-retrieval results and retrieval efficiency.
17.
The k-means clustering algorithm has been widely studied for intrusion detection. Building on k-means, this paper proposes an improved k-means algorithm, and applies both the original and the improved algorithms to intrusion detection. Experimental results show that the improved k-means algorithm avoids the inherent drawbacks of k-means and achieves relatively high detection performance.
18.
To address the long execution time and suboptimal clustering results of the original k-means under MapReduce modelling, a divide-and-conquer k-means clustering method based on MapReduce is proposed. A divide-and-conquer strategy handles the large dataset: the whole dataset is split into smaller chunks stored in each machine's main memory; distributed across the available machines, each chunk of the dataset is clustered independently by its assigned machine; a minimum weighted distance determines the cluster to which each data point should be assigned, and convergence is checked. Experimental results show that, compared with the traditional k-means clustering method and the streaming k-means clustering method, the proposed method is faster and produces better results.
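The chunk-then-merge scheme described above can be sketched in a few lines; this is an illustrative single-machine simulation, not the paper's MapReduce implementation, and the weighted-recombination step is an assumption about how the per-chunk results are merged:

```python
import numpy as np

def lloyd(X, k, w=None, iters=25, seed=0):
    """Plain weighted Lloyd's k-means, used as the building block."""
    rng = np.random.default_rng(seed)
    w = np.ones(len(X)) if w is None else w
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            m = lab == j
            if m.any() and w[m].sum() > 0:
                C[j] = np.average(X[m], axis=0, weights=w[m])
    return C, lab

def divide_and_conquer_kmeans(X, k, n_chunks=4):
    """Divide-and-conquer sketch: cluster each chunk independently
    (the map step), then cluster the per-chunk centers, weighted by
    how many points each center represents (the reduce step)."""
    centers, weights = [], []
    for chunk in np.array_split(X, n_chunks):
        C, lab = lloyd(chunk, k)
        centers.append(C)
        weights.append(np.bincount(lab, minlength=k))
    final, _ = lloyd(np.vstack(centers), k,
                     w=np.concatenate(weights).astype(float))
    return final
```

In an actual MapReduce job, each loop iteration over a chunk would run as an independent map task, and the weighted recombination would be the reduce task.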
19.
20.
When the density peaks clustering (CFSFDP) algorithm handles datasets with multiple density peaks, manually selecting cluster centers easily leads to mis-partitioned clusters. To address this, an improved density peaks clustering algorithm combined with genetic k-means is proposed. Among the candidate cluster centers found by CFSFDP, the global search capability of a genetic k-means based on variable-length chromosome encoding automatically finds the optimal cluster centers, while the crossover probability of the genetic k-means is determined adaptively to avoid premature convergence. Experimental results on UCI datasets show that the improved algorithm attains better clustering quality with fewer iterations, verifying the feasibility and effectiveness of the proposed algorithm.