Similar Documents
20 similar documents found
1.
To address the two main factors that affect k-means clustering quality, the number of clusters and the initial centers, a k-means algorithm based on dual genetic optimization is proposed. An outer genetic algorithm controls the number of clusters while an inner genetic algorithm controls the initial cluster centers, and clustering quality is evaluated using the between-cluster distance, the within-cluster distance, and their ratio. When the algorithm terminates, it yields both a near-optimal number of clusters and near-optimal initial centers for that number of clusters. In addition, given the different roles of the inner and outer genetic algorithms, different encoding strategies are adopted for each, and an elitist strategy is used to retain high-quality individuals. Tests on UCI datasets show that the algorithm is practical and provides a useful reference for data mining.
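The abstract evaluates candidate clusterings by the ratio of between-cluster to within-cluster distance. The sketch below shows one way such a fitness function could look; it is not the authors' implementation, and the function name and the simple mean-based distances are illustrative assumptions.

```python
import numpy as np

def clustering_fitness(X, centers):
    """Score a candidate set of centers by the ratio of mean between-cluster
    distance to mean within-cluster distance (larger is better)."""
    # assign every sample to its nearest center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # mean within-cluster distance: each sample to its own center
    intra = d[np.arange(len(X)), labels].mean()
    # mean between-cluster distance: pairwise distances of the centers
    i, j = np.triu_indices(len(centers), 1)
    inter = np.linalg.norm(centers[i] - centers[j], axis=1).mean()
    return inter / (intra + 1e-12)

# toy usage: compare two candidate "chromosomes" (sets of centers)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
good = np.array([[0.0, 0.0], [6.0, 6.0]])
bad = rng.normal(3, 1, (2, 2))
print(clustering_fitness(X, good), clustering_fitness(X, bad))
```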

2.
In outlier detection, the data can be viewed as a heavy mixture of normal and abnormal points in space; provided the loss of normal points is kept small, outliers are usually contained in the set of samples farthest from the cluster centers. Motivated by this idea, an interpolation-based outlier detection method for high-dimensional sparse data is proposed. Building on K-means, it applies a genetic algorithm to interpolate the original data, which resolves the tendency of K-means to merge sparse data points. Experimental results show that, compared with outlier detection based on conventional K-means clustering and several typical detection methods based on improved K-means, the proposed method loses fewer normal points and achieves higher detection accuracy and precision.

3.
Although the K-means algorithm is widely used, its performance and stability depend heavily on initialization, especially the choice of initial cluster centers. Reasonable cluster centers should lie in dense regions of the data; based on this assumption, an initialization tuning algorithm that relies on local data density is proposed. The algorithm uses a local density function of the data and selects initial cluster centers in high-density regions. Compared with similar algorithms, it has the following features: it can discover the local density of the data distribution on its own; it performs better on datasets with many classes; it is robust to outliers and noise; and it is easy to implement.
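As an illustration of the general idea (not the paper's exact local-density function), the sketch below picks initial centers from high-density regions while keeping them apart; the neighbourhood radius and the simple neighbour-count density are assumptions of this sketch.

```python
import numpy as np

def density_init(X, k, radius):
    """Pick k initial centers from high local-density regions.
    Local density here is simply the number of neighbours within `radius`;
    each new center must lie outside the neighbourhoods already chosen."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = (d < radius).sum(axis=1)
    available = np.ones(len(X), dtype=bool)
    centers = []
    for _ in range(k):
        idx = np.argmax(np.where(available, density, -1))
        centers.append(X[idx])
        available &= d[idx] > radius   # rule out points near the chosen center
    return np.array(centers)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, (40, 2)) for c in ([0, 0], [5, 0], [0, 5])])
print(density_init(X, 3, radius=1.5))
```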

4.
The success rates of expert or intelligent systems depend on the selection of the correct data clusters. The k-means algorithm is a well-known method for solving data clustering problems, but it suffers not only from a high dependency on its initial solution but also from the distance function used. A number of algorithms have been proposed to address the centroid initialization problem, but the solutions they produce are not optimal clusters. This paper proposes three algorithms: (i) the search algorithm C-LCA, an improved League Championship Algorithm (LCA); (ii) a search clustering algorithm using C-LCA (SC-LCA); and (iii) a hybrid clustering algorithm, the hybrid of k-means and Chaotic League Championship Algorithm (KSC-LCA), which has two computation stages. The C-LCA employs chaotic adaptation for the retreat and approach parameters, rather than constants, which can enhance the search capability. Furthermore, to overcome the limitation of the original k-means algorithm, whose Euclidean distance cannot handle categorical attributes properly, we adopt the Gower distance and a mechanism for handling the discrete-value requirement of categorical attributes. The proposed algorithms can handle not only pure numeric data but also mixed-type data, and can find the best centroids containing categorical values. Experiments were conducted on 14 datasets from the UCI repository. The SC-LCA and KSC-LCA competed with 16 established algorithms, including the k-means, k-means++, and global k-means algorithms, four search clustering algorithms, and nine hybrids of the k-means algorithm with several state-of-the-art evolutionary algorithms. The experimental results show that the SC-LCA produces the clusters with the highest F-measure on the pure categorical dataset, and the KSC-LCA produces the clusters with the highest F-measure on the pure numeric and mixed-type datasets tested. Out of 14 datasets, the SC-LCA produced centroids with better F-measures than those of the k-means algorithm on 13. On the Tic-Tac-Toe dataset, which contains only categorical attributes, the SC-LCA achieves an F-measure of 66.61, which is 21.74 points above that of the k-means algorithm (44.87). The KSC-LCA produced better centroids than the k-means algorithm on all 14 datasets, with a maximum F-measure improvement of 11.59 points. In terms of computational cost, however, the SC-LCA and KSC-LCA took more NFEs than k-means and its variants, although the KSC-LCA ranks first and the SC-LCA fourth among the hybrid clustering and search clustering algorithms that we tested. The SC-LCA and KSC-LCA are therefore general and effective clustering algorithms that can be used when an expert or intelligent system requires accurate, high-speed cluster selection.
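The abstract replaces Euclidean distance with the Gower distance to handle mixed numeric/categorical data. A minimal sketch of the Gower distance itself is given below; it is not taken from the paper, and the column layout and range normalisation are assumptions of this sketch.

```python
def gower_distance(x, y, is_categorical, ranges):
    """Gower distance between two mixed-type records.
    Numeric features contribute |x - y| / range, categorical features
    contribute 0 for a match and 1 for a mismatch; the result is the mean."""
    parts = []
    for xi, yi, cat, r in zip(x, y, is_categorical, ranges):
        if cat:
            parts.append(0.0 if xi == yi else 1.0)
        else:
            parts.append(abs(float(xi) - float(yi)) / r if r > 0 else 0.0)
    return sum(parts) / len(parts)

# toy usage on (age, income, colour); ranges are for the numeric columns only
is_cat = [False, False, True]
ranges = [50.0, 4000.0, 0.0]
a = (25, 3000, "red")
b = (40, 1000, "blue")
print(gower_distance(a, b, is_cat, ranges))   # (0.3 + 0.5 + 1.0) / 3 = 0.6
```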

5.
Li Min, Yang Chao, Sun Qiao, Ma Wen-Jing, Cao Wen-Long, Ao Yu-Long. Journal of Computer Science and Technology, 2019, 34(1): 77-93

With the advent of the big data era, the amount of sampled data and the dimensionality of data features are growing rapidly. Fast and efficient clustering of unlabeled samples based on feature similarity is highly desirable. As a fundamental primitive for data clustering, the k-means operation is receiving increasing attention today. To achieve high-performance k-means computation on modern multi-core/many-core systems, we propose a matrix-based fused framework that attains high performance by carrying out the computation on a distance matrix while improving memory reuse through the fusion of the distance-matrix computation and the nearest-centroid reduction. We implement and optimize the parallel k-means algorithm on the SW26010 many-core processor, the major horsepower of Sunway TaihuLight. In particular, we design a task mapping strategy for load-balanced task distribution, a data sharing scheme to reduce the memory footprint, and a register blocking strategy to increase data locality. Optimization techniques such as instruction reordering and double buffering are further applied to improve the sustained performance. Block-size tuning and performance modeling are also discussed. Experiments on both randomly generated and real-world datasets show that our parallel implementation of k-means on SW26010 sustains a double-precision performance of over 348.1 Gflops, which is 46.9% of the peak performance and 84% of the theoretical upper bound on a single core group, and achieves nearly ideal scalability to the whole SW26010 processor of four core groups. Performance comparisons with the previous state of the art on both CPU and GPU are also provided to show the superiority of our optimized k-means kernel.
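A NumPy sketch of the matrix-based formulation the paper builds on: the squared distances are expanded so that the bulk of the work is one matrix multiply, followed immediately by the nearest-centroid reduction. This only illustrates the idea, not the SW26010 implementation.

```python
import numpy as np

def assign_step(X, C):
    """One k-means assignment step in matrix form: squared distances are
    expanded as ||x||^2 + ||c||^2 - 2 x.c, so the heavy work is a single
    matrix multiply, and the nearest-centroid reduction follows directly."""
    x_sq = (X * X).sum(axis=1, keepdims=True)        # n x 1
    c_sq = (C * C).sum(axis=1)                       # k
    d2 = x_sq + c_sq - 2.0 * X @ C.T                 # n x k distance matrix
    return d2.argmin(axis=1)

def update_step(X, labels, k):
    """Recompute centroids as the mean of their assigned samples
    (empty clusters are not handled in this sketch)."""
    return np.array([X[labels == j].mean(axis=0) for j in range(k)])

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 16))
C = X[rng.choice(len(X), 4, replace=False)]
for _ in range(10):
    labels = assign_step(X, C)
    C = update_step(X, labels, 4)
print(np.bincount(labels))
```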


6.
Liu Tong, Zhu Jingting, Zhou Jukai, Zhu YongXin, Zhu Xiaofeng. Multimedia Tools and Applications, 2019, 78(23): 33279-33296
The classic k-means clustering algorithm randomly selects centroids for initialization, which may yield unstable clustering results. Moreover, random...

7.
To address the problems that outliers substantially reduce the clustering accuracy of the k-means++ algorithm and that the process of determining cluster centers can leak the privacy of the clustered data, the DPk-means++ algorithm is proposed. Outliers are flagged to lessen their impact on clustering accuracy, and differential privacy is applied to the k-means++ clustering algorithm to protect the privacy of the clustered data. When selecting the initial cluster centers and when iteratively recomputing the mean centers, the Laplace mechanism is used to inject noise, preventing data privacy leakage. Analysis of how a dynamically varying privacy budget affects clustering accuracy, together with comparative experiments against similar algorithms, shows that DPk-means++ provides a higher level of privacy protection while preserving the accuracy of the clustering results.
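As an illustration of how the Laplace mechanism is typically injected into a k-means update (not the DPk-means++ code from the abstract), the sketch below adds Laplace noise to the per-cluster sums and counts; the [0, 1] feature scaling and the sensitivity bound are simplifying assumptions.

```python
import numpy as np

def dp_centroid_update(X, labels, k, epsilon, rng):
    """Differentially private centroid update: Laplace noise is added to the
    per-cluster coordinate sums and to the per-cluster counts before dividing.
    Assumes features are scaled to [0, 1], so changing one record changes
    (sum, count) by at most d + 1 in L1 norm (a simplifying assumption)."""
    n, d = X.shape
    scale = (d + 1) / epsilon
    centers = np.empty((k, d))
    for j in range(k):
        members = X[labels == j]
        noisy_sum = members.sum(axis=0) + rng.laplace(0.0, scale, d)
        noisy_cnt = max(len(members) + rng.laplace(0.0, scale), 1.0)
        centers[j] = np.clip(noisy_sum / noisy_cnt, 0.0, 1.0)
    return centers

rng = np.random.default_rng(3)
X = rng.random((500, 2))                     # features already in [0, 1]
labels = (X[:, 0] > 0.5).astype(int)         # a crude 2-way assignment for the demo
print(dp_centroid_update(X, labels, k=2, epsilon=1.0, rng=rng))
```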

8.
Document clustering using synthetic cluster prototypes
The use of centroids as prototypes for clustering text documents with the k-means family of methods is not always the best choice for representing text clusters, owing to the high dimensionality, sparsity, and low quality of text data. Especially when we seek clusters with a small number of objects, the use of centroids may lead to poor solutions near bad initial conditions. To overcome this problem, we propose the idea of a synthetic cluster prototype, which is computed by first selecting a subset of the cluster's objects (instances), then computing a representative of these objects, and finally selecting important features. In this spirit, we introduce the MedoidKNN synthetic prototype, which favors the representation of the dominant class in a cluster. These synthetic cluster prototypes are incorporated into the generic spherical k-means procedure, leading to a robust clustering method called k-synthetic prototypes (k-sp). Comparative experimental evaluation demonstrates the robustness of the approach, especially for small datasets and clusters overlapping in many dimensions, and its superior performance against traditional and subspace clustering methods.

9.
Data clustering has proven to be an effective method for discovering structure in medical datasets. The majority of clustering algorithms produce exclusive clusters, meaning that each sample can belong to only one cluster. However, most real-world medical datasets contain inherently overlapping information, which is best explained by overlapping clustering methods that allow a sample to belong to more than one cluster. One of the simplest and most efficient overlapping clustering methods is overlapping k-means (OKM), an extension of the traditional k-means algorithm. Being an extension of k-means, the OKM method also suffers from sensitivity to the initial cluster centroids. In this paper, we propose a hybrid method that combines the k-harmonic means and overlapping k-means algorithms (KHM-OKM) to overcome this limitation. The main idea behind KHM-OKM is to use the output of the KHM method to initialize the cluster centers of the OKM method. We test the proposed method using the FBCubed metric, which has been shown to be the most effective measure for evaluating overlapping clustering algorithms with respect to homogeneity, completeness, rag bag, and the cluster size-quantity tradeoff. According to the results on ten publicly available medical datasets, the KHM-OKM algorithm outperforms the original OKM algorithm and can serve as an efficient method for clustering medical datasets.

10.
The K-means clustering algorithm is one of the best-known methods in the clustering field, yet it relies entirely on Euclidean distance and ignores how dispersed the sample features are, so samples at cluster boundaries are easily misclustered, the algorithm tends to converge to local optima, and clustering accuracy is low. To address these shortcomings of the traditional K-means algorithm, a likelihood K-means clustering algorithm is proposed: for all samples in each cluster it takes into account the dispersion of the sample features in every dimension and computes the likelihood that a sample belongs to a given cluster, which effectively improves clustering accuracy. The superiority of the likelihood K-means algorithm is verified on synthetic and benchmark datasets, and its application to pattern recognition of gas-path component faults and sensor faults in turbofan engines demonstrates its practicality and effectiveness for turbofan engine fault diagnosis.
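One common way to let per-dimension dispersion influence the assignment, sketched below, is to assign samples by a diagonal-Gaussian log-likelihood instead of plain Euclidean distance; this illustrates the general idea, not the authors' exact likelihood formulation.

```python
import numpy as np

def likelihood_assign(X, means, stds):
    """Assign each sample to the cluster with the highest diagonal-Gaussian
    log-likelihood, so dimensions with large dispersion count for less than
    they would under plain Euclidean distance."""
    # log N(x | mean, diag(std^2)) summed over dimensions, constant dropped
    ll = -0.5 * ((((X[:, None, :] - means[None, :, :]) / stds[None, :, :]) ** 2)
                 + 2.0 * np.log(stds[None, :, :])).sum(axis=2)
    return ll.argmax(axis=1)

rng = np.random.default_rng(4)
# cluster 0 is tight, cluster 1 is very spread out along both axes
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 2.0, (100, 2))])
means = np.array([[0.0, 0.0], [3.0, 3.0]])
stds = np.array([[0.3, 0.3], [2.0, 2.0]])
print(np.bincount(likelihood_assign(X, means, stds)))
```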

11.
An improved k-means clustering algorithm
To address the drawback of randomly selecting the initial cluster centers in the k-means algorithm, a new method is proposed that selects the initial cluster centers according to the distribution of the data samples. Experimental results show that the improved algorithm yields better clustering performance and achieves higher classification accuracy.

12.
To address the classification of massive power company data, an improved k-means method is proposed. Building on k-means, PCA is applied for dimensionality reduction and the canopy algorithm is used to optimize the number of clusters and the initial cluster centers. The improved k-means algorithm is then used to cluster residential electricity consumption, and an LSTM forecasting model is built on the clustering results. Simulation experiments on the electricity data of 90 households in a residential area compare the forecasts of LSTM models built with conventional clustering, with the improved clustering, and without clustering. The results show that the LSTM model built without any clustering has the lowest forecasting accuracy, while the LSTM model built with the improved k-means algorithm achieves the best accuracy.
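A minimal sketch of the clustering part of such a pipeline (PCA for dimensionality reduction followed by k-means) using scikit-learn; the canopy pre-step and the downstream LSTM forecaster are not shown, and the toy load profiles are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Reduce daily load profiles to a few principal components, then cluster the
# households; the canopy initialization and the LSTM forecaster described in
# the abstract are outside the scope of this sketch.
rng = np.random.default_rng(6)
profiles = rng.random((90, 96))              # 90 households x 96 readings/day (toy data)
reduced = PCA(n_components=5, random_state=0).fit_transform(profiles)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)
print(np.bincount(labels))
```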

13.
To address the k-means algorithm's sensitivity to distant outliers and the difficulty of determining the value of k, and building on an analysis of existing improved k-means algorithms, the elbow method is introduced to preprocess the data, and an improved k-means algorithm that adaptively adjusts k based on the sum of squared errors (SSE) is proposed. Simulation experiments on real datasets from a machine learning repository show that both the removal of distant outliers and the adaptive adjustment of k are feasible, and that the improved algorithm achieves higher accuracy and better clustering quality.
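A sketch of choosing k from the SSE curve with a simple elbow criterion, using scikit-learn's KMeans; the second-difference rule used to locate the elbow is one of several possible choices and is an assumption of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def choose_k_by_elbow(X, k_max=10):
    """Run k-means for k = 1..k_max, record the SSE (inertia), and pick the
    elbow as the k where the second difference of the SSE curve is largest."""
    sse = np.array([KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                    for k in range(1, k_max + 1)])
    # the curve bends most sharply where the second difference peaks
    second_diff = sse[:-2] - 2 * sse[1:-1] + sse[2:]
    return int(np.argmax(second_diff)) + 2, sse   # +2 maps the index back to k

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.0, random_state=0)
best_k, sse = choose_k_by_elbow(X)
print("chosen k:", best_k)
```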

14.
Kernel clustering and sequence analysis methods for unsupervised anomaly detection
A kernel function is used to construct a feature space for the data, in which the kernel function is combined with the RA algorithm to select initial cluster centers. On the basis of kernel k-means clustering, the data are partitioned into large and small clusters; anomalous instances are then separated out of the large clusters to discover the R2L, U2R, and PROBE attacks that appear as low-probability events in the experimental data, and closed sequential patterns are mined from the large clusters to obtain sequence rules describing them, from which the presence of DoS attacks is judged. Algorithm analysis and experimental results show that the proposed method achieves a high detection rate while reducing the false alarm rate.

15.
Salient object detection based on spectral residual and multi-resolution analysis
Based on the characteristics of the human visual system, a salient object detection method that combines the spectral residual with multi-resolution analysis is proposed. The method computes the spectral residual of the intensity, color, and orientation features of the image at different scales to build a sequence of multi-resolution saliency maps, then uses linear interpolation to combine the maps at different resolutions into three feature saliency maps. The k-means algorithm clusters each feature saliency map into two classes, the feature saliency map whose cluster centers are farthest apart is selected as the final saliency map, and dynamic thresholding finally yields the salient object region of the image. Experiments on salient object detection in natural images show that the method is stable and practical and produces satisfactory detection results.

16.
In this paper, we present "k-means+ID3", a method that cascades k-means clustering and ID3 decision tree learning to classify anomalous and normal activities in a computer network, an active electronic circuit, and a mechanical mass-beam system. The k-means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, representing a density region of normal or anomalous instances, we build an ID3 decision tree, which refines the decision boundaries by learning the subgroups within the cluster. To obtain a final classification decision, the decisions of the k-means and ID3 methods are combined using two rules: 1) the nearest-neighbor rule and 2) the nearest-consensus rule. We perform experiments on three data sets: 1) network anomaly data (NAD), 2) Duffing equation data (DED), and 3) mechanical system data (MSD), which contain measurements from three distinct application domains: computer networks, an electronic circuit implementing a forced Duffing equation, and a mechanical system, respectively. Results show that the detection accuracy of the k-means+ID3 method is as high as 96.24 percent at a false-positive rate of 0.03 percent on NAD; the total accuracy is as high as 80.01 percent on MSD and 79.9 percent on DED.
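A compact sketch of the cascade using scikit-learn, with an entropy-based decision tree standing in for ID3 and the routing of test points simplified to "nearest cluster decides"; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def fit_kmeans_trees(X, y, k):
    """Cascade: partition the training data with k-means, then fit one
    entropy-based decision tree (an ID3 stand-in) inside each cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    trees = {}
    for j in range(k):
        mask = km.labels_ == j
        trees[j] = DecisionTreeClassifier(criterion="entropy",
                                          random_state=0).fit(X[mask], y[mask])
    return km, trees

def predict(km, trees, X):
    """Route each test point to its nearest cluster and let that cluster's
    tree decide (a simplified version of the nearest-neighbor rule)."""
    clusters = km.predict(X)
    return np.array([trees[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(clusters, X)])

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy "normal vs anomaly" labels
km, trees = fit_kmeans_trees(X[:300], y[:300], k=3)
print((predict(km, trees, X[300:]) == y[300:]).mean())
```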

17.
Research on differentially private k-means clustering
A k-means clustering method based on differential privacy protection is studied. The state of the art in privacy-preserving data mining and privacy-preserving cluster analysis is first reviewed, and the basic principles and methods of differential privacy are briefly introduced. To address the poor usability of the clustering results produced by differentially private k-means, a new IDP k-means clustering method is proposed and proven to satisfy ε-differential privacy. Simulation experiments show that, at the same level of privacy protection, the IDP k-means method improves clustering usability considerably compared with the differentially private k-means method.

18.
An improved algorithm for selecting initial cluster centers for k-means
In the traditional k-means clustering algorithm, the clustering result fluctuates with the choice of initial cluster centers. To address this drawback, an algorithm for optimizing the initial cluster centers is proposed: a density parameter is computed for every data object, and the k points lying in high-density regions are selected as the initial cluster centers. Experiments on standard UCI datasets show that, for a given number of clusters, the k-means algorithm with initial centers selected by the improved method achieves higher accuracy and stability than the k-means algorithm with randomly selected initial centers.

19.
To address the low efficiency and insufficient optimization of the k-means clustering algorithm, a mutation-based iterative k-means algorithm (ik-means) is proposed. The algorithm randomly selects a proportion of positions in the initial solution vector produced by k-means (with random initialization), randomly mutates the cluster labels at those positions, and optimizes the result; repeated iterations yield the corresponding optimized solution. Experiments show that, on the same datasets and with the same number of calls to the basic k-means algorithm, ik-means runs more efficiently and produces better-optimized solutions than k-means.

20.
The k-means algorithm is well known for its efficiency in clustering large data sets. However, because it works only on numeric values, it cannot be used to cluster real-world data containing categorical values. In this paper we present two algorithms that extend the k-means algorithm to categorical domains and to domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes during clustering so as to minimize the clustering cost function. With these extensions, the k-modes algorithm enables clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow clustering of objects described by mixed numeric and categorical attributes. We use the well-known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real-world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical for data mining applications.
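The two building blocks the abstract names for k-modes, the simple matching dissimilarity and the frequency-based mode update, can be sketched as follows; this is an illustration of those two operations only, not the full k-modes algorithm.

```python
from collections import Counter

def matching_dissimilarity(a, b):
    """Simple matching dissimilarity: number of attributes on which the two
    categorical records disagree."""
    return sum(x != y for x, y in zip(a, b))

def update_mode(records):
    """Frequency-based mode update: per attribute, take the most common value."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*records))

# toy usage on categorical records
cluster = [("red", "small", "round"),
           ("red", "large", "round"),
           ("blue", "small", "round")]
mode = update_mode(cluster)
print(mode, matching_dissimilarity(cluster[1], mode))
```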
