Similar Literature
20 similar documents found (search time: 171 ms)
1.
王莉  周献中  沈捷 《控制与决策》2012,27(11):1711-1714
The rough K-means clustering algorithm proposed by Lingras is easily affected by random initial cluster centers and by outliers, and may produce coincident clusters or fail to converge. To address this, an improved rough K-means algorithm is proposed: the K objects with the greatest potential are selected as the initial cluster centers, and each object's membership in the lower or upper approximation is determined by its distance to the cluster centers relative to the other centers, making the partition of the boundary region more reasonable. A generalized classification accuracy is defined that considers objects in both the lower approximations and the boundary regions, and therefore evaluates algorithm performance more faithfully. Simulation results show that the algorithm attains high classification accuracy, converges quickly, and overcomes the adverse influence of outliers.
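The relative-distance assignment rule described above can be sketched as follows; the ratio threshold `zeta` and the function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rough_assign(x, centers, zeta=1.3):
    """Assign a point to lower/upper approximations by relative distance.

    A sketch of the rough K-means rule: if only the nearest center is
    within a factor `zeta` (hypothetical threshold) of the minimum
    distance, the point belongs to that cluster's lower approximation;
    otherwise it is a boundary object in several upper approximations.
    """
    d = np.linalg.norm(centers - x, axis=1)
    nearest = int(np.argmin(d))
    close = [j for j in range(len(centers)) if d[j] <= zeta * d[nearest]]
    if len(close) == 1:
        return {"lower": nearest, "upper": close}   # certain member
    return {"lower": None, "upper": close}          # boundary object
```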

2.
A Method for Detecting Approximately Duplicate Records Based on Conditional Probability Distributions   Cited by: 3 (self-citations: 0, others: 3)
Data integration often produces approximately duplicate records, and detecting such duplicates is an active topic in data-quality research. This paper proposes an efficient dynamic clustering algorithm based on conditional probability distributions for detecting approximately duplicate records. In judging whether two records are approximately equivalent, the method remedies earlier algorithms' neglect of sequence structure by defining an inter-record distance based on conditional probability distributions; a criterion function for evaluating clustering quality is then chosen according to the nearest-neighbor function criterion, and a dynamic clustering algorithm completes the clustering of the sequence data set. Clustering experiments on simulated data with this method produced good results.

3.
A Collaborative Possibilistic Fuzzy Clustering Algorithm   Cited by: 1 (self-citations: 0, others: 1)
Fuzzy C-means clustering (FCM) is sensitive to noisy data, while possibilistic C-means clustering (PCM) is highly sensitive to the initial centers and prone to coincident clusters. Collaborative clustering exploits the collaboration between different feature subsets and can be combined with other algorithms to improve their clustering performance. Building on PCM, this paper combines it with collaborative clustering and proposes a collaborative possibilistic C-means fuzzy clustering algorithm (C-FCM). Based on the improved PCM, the algorithm improves clustering quality on the data sets considered. Tests on the Wine and Iris data sets show that the method outperforms PCM, demonstrating its effectiveness.

4.
An Adaptive-Weight Rough K-Means Clustering Algorithm   Cited by: 2 (self-citations: 0, others: 2)
In the original Rough K-means algorithm, the upper and lower approximations of each cluster use fixed, empirically chosen weights, a practice of questionable rigor. To address this, an adaptive-weight rough K-means clustering algorithm is designed. At each iteration, it dynamically computes each sample's weight for a cluster from the current partition, reducing the original algorithm's dependence on the initial weights. In addition, the algorithm expresses sample weights through Gaussian distance ratios within the approximation sets, yielding more accurate clustering results across a variety of data distributions. Experimental results show that the adaptive-weight rough K-means algorithm is a superior clustering algorithm.
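A minimal sketch of the Gaussian distance-based sample weight mentioned above; the bandwidth `sigma` is a hypothetical parameter, and in practice weights would be renormalized within each cluster's approximation set (the abstract does not give the exact formula):

```python
import numpy as np

def adaptive_weight(x, center, sigma=1.0):
    """Gaussian weight of a sample with respect to one cluster center.

    Samples near the center receive weight close to 1; distant samples
    receive exponentially smaller weights, so the centroid update
    depends less on boundary objects.
    """
    d2 = float(np.sum((x - center) ** 2))
    return float(np.exp(-d2 / (2 * sigma ** 2)))
```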

5.
To overcome the k-means algorithm's sensitivity to the spatial distribution of the data, a support vector clustering algorithm is proposed that combines K-means with a one-class support vector machine formulated as a linear program; each iteration of the algorithm solves this linear-programming one-class SVM. The algorithm is simple to implement and, compared with support vector clustering under quadratic programming, greatly reduces the computational complexity while maintaining good clustering quality. Simulation comparisons with k-means and self-organizing-map clustering on both synthetic and real data demonstrate the algorithm's effectiveness and feasibility.

6.
Graph-relaxation-based optimization provides an effective analytical solution as an alternative to approximate iterative methods and is simple to implement. However, because the required matrix inverse takes polynomial time, its running speed is unsatisfactory, and the method becomes infeasible on larger data sets. Two fast approximation schemes for graph-relaxation-based clustering are proposed: one based on k-means and the other on random projection trees. Extensive experiments show that these algorithms run markedly faster with only a very small change in clustering accuracy. Specifically, on large-scale data the algorithms are more accurate than k-means, and while preserving accuracy they run far faster than the original graph-relaxation clustering. Notably, they allow a single machine to cluster data sets with millions of samples within minutes.

7.
K-means is one of the best-known methods in the clustering field, yet it relies entirely on Euclidean distance, ignoring the per-dimension dispersion of the sample features. As a result, samples near cluster boundaries are easily mis-clustered, the algorithm tends to converge to local optima, and accuracy is low. To remedy these shortcomings of traditional K-means, a likelihood K-means clustering algorithm is proposed: for all samples of each cluster, it accounts for the dispersion of each feature dimension and computes the likelihood of a sample belonging to each cluster, which effectively improves clustering accuracy. The superiority of likelihood K-means is verified on synthetic and benchmark data sets, and its application to recognizing gas-path component faults and sensor faults of a turbofan engine demonstrates its practicality and effectiveness in turbofan fault diagnosis.
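The per-dimension-dispersion idea can be illustrated with a diagonal-Gaussian log-likelihood assignment. This is a hedged sketch under the assumption of independent Gaussian features; the paper's exact likelihood definition is not given in the abstract:

```python
import numpy as np

def likelihood_assign(x, means, variances):
    """Assign `x` to the cluster with the highest diagonal-Gaussian
    log-likelihood, so a cluster with large spread in some dimension
    can claim points that plain Euclidean distance would mis-assign.

    `means` and `variances` are (k, d) arrays of per-cluster,
    per-dimension statistics.
    """
    # log N(x | mu, diag(sigma^2)) summed over dimensions, per cluster
    ll = -0.5 * np.sum(np.log(2 * np.pi * variances)
                       + (x - means) ** 2 / variances, axis=1)
    return int(np.argmax(ll))
```

For example, a point at 1.8 lies nearer (in Euclidean terms) to a tight cluster at 0 than to a wide cluster at 4, yet the likelihood rule assigns it to the wide cluster.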

8.
Among the many clustering algorithms, spectral clustering, a representative graph clustering method, has attracted wide attention for its adaptability to complex data distributions and its good clustering quality. Its high time complexity, however, makes it hard to apply to large-scale data. To improve the usability of spectral clustering on large data sets, a fast graph clustering algorithm based on key-node selection is proposed, comprising three main steps: first, a fast node-importance measure that fully accounts for both cohesion and separation; second, key nodes are selected to represent the original data set and a bipartite graph is built, from which approximate eigenvectors of the data are obtained by singular value decomposition; third, multiple runs of approximate eigenvectors are ensembled to make the approximate spectral clustering result more robust. The algorithm reduces the time complexity from the O(n³) of spectral clustering to O(t(n + 2n²)), enhancing its usability on large-scale data sets. Experiments comparing it with seven representative spectral clustering algorithms on five benchmark data sets show that it identifies complex cluster structures in the data more efficiently than the alternatives.
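The bipartite-graph-plus-SVD step can be sketched as below. This is an illustrative landmark-style approximation, not the paper's algorithm: the importance-based key-node selection is replaced by a given index set, and `sigma` is a hypothetical affinity bandwidth:

```python
import numpy as np

def approx_spectral_embed(X, landmarks, sigma=1.0, k=2):
    """Approximate spectral embedding via a bipartite graph between all
    n points and a few key nodes (landmarks).

    Builds the n x m affinity matrix to the key nodes, row-normalizes it,
    and takes the top-k left singular vectors as approximate eigenvectors;
    a k-means step on the embedding would complete the clustering.
    """
    L = X[landmarks]                                    # key nodes (m, d)
    d2 = ((X[:, None, :] - L[None, :, :]) ** 2).sum(-1) # squared distances
    B = np.exp(-d2 / (2 * sigma ** 2))                  # n x m affinity
    B = B / B.sum(axis=1, keepdims=True)                # row-stochastic
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    return U[:, :k]                                     # approximate eigenvectors
```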

9.
To address the limitation that the classical k-means method can only cluster static data, this paper proposes an improved dynamic k-means algorithm, called Dynamical K-means, that can handle dynamic data. Building on classical k-means, it analyzes each newly arriving sample in the evolving data set and, depending on how the clustering objective function actually changes, either performs a local update of the most similar cluster or re-runs classical k-means globally, thereby effectively detecting whether or not cluster concept drift has occurred. This achieves online clustering of dynamic data and avoids the slowness of classical k-means, which must re-cluster the entire data set whenever the data change. Experiments on standard data sets and synthetic social-network data show that, compared with classical k-means, the proposed dynamic k-means handles dynamic data clustering quickly and efficiently and effectively detects the concept drift arising during dynamic clustering.
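The local-update-versus-full-recluster decision for one arriving sample might look like the sketch below. The distance threshold `drift_tol` stands in for the objective-function test described in the abstract, and all names are illustrative:

```python
import numpy as np

def update_online(centers, counts, x, drift_tol=5.0):
    """Process one newly arrived sample.

    If the sample is close to an existing centroid, that centroid is
    updated incrementally (local update); otherwise concept drift is
    flagged and the caller should re-run full k-means.
    Returns (centers, counts, drift_detected).
    """
    d = np.linalg.norm(centers - x, axis=1)
    j = int(np.argmin(d))
    if d[j] <= drift_tol:
        counts[j] += 1
        centers[j] += (x - centers[j]) / counts[j]   # running-mean update
        return centers, counts, False
    return centers, counts, True                      # trigger global re-clustering
```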

10.
A Spectral Clustering Algorithm Based on Rough Set Theory   Cited by: 1 (self-citations: 1, others: 0)
Spectral clustering uses eigenvectors to construct a simplified data space, reducing data dimensionality while making the distribution structure of the data in the subspace more apparent. Existing spectral clustering algorithms mostly produce crisp (exact-set) results, whereas overlap is widespread in real data sets. A new spectral clustering algorithm based on rough set theory is proposed; its main idea is a rough-set extension of spectral clustering, so that each cluster in the result is described by a lower and an upper approximation and overlapping regions may exist between clusters. Experiments show that, compared with existing spectral clustering algorithms, the proposed algorithm improves both stability and accuracy.

11.
So that a classifier trained on activity samples at one intensity level can correctly classify activities at other intensity levels, a stochastic approximation model (SAM) for activity recognition is proposed. In the training phase, features are extracted from accelerometer time-series data and fed to a clustering algorithm; the data are clustered by activity, and the cluster means and variances are combined into the corresponding SAM. In the recognition phase, test samples are compared against the SAM of each activity class. A model is built for each activity using clustering and stochastic approximation, and activities are then classified with a heuristic stochastic-approximation nearest-neighbor method. Experiments combining two clustering algorithms, k-means and Gaussian mixture models, verify that the proposed stochastic approximation model outperforms several other popular activity-classification schemes.

12.
Clustering aggregation, also known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach; these algorithms are inefficient when the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical basis of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. Experimental results on several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice accuracy.
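Data fragments follow directly from the definition above: points belong to the same fragment exactly when every input clustering assigns them the same label. A minimal sketch (function name illustrative):

```python
from collections import defaultdict

def data_fragments(labelings):
    """Compute data fragments from several clusterings of the same points.

    `labelings` is a list of label lists, one per clustering, each with
    one cluster id per point. Points sharing the same tuple of labels
    across all clusterings form one fragment.
    """
    frags = defaultdict(list)
    for i, key in enumerate(zip(*labelings)):
        frags[key].append(i)
    return sorted(frags.values())
```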

13.
In Ad Hoc networks, performance degrades significantly as the network grows. Network clustering, in which nodes are organized hierarchically on the basis of proximity, relieves this performance degradation. Finding the weakly connected dominating set (WCDS) is a promising approach for clustering wireless Ad Hoc networks. Finding the minimum WCDS in a unit disk graph is an NP-hard problem, and a host of approximation algorithms has been proposed. In this article, we first propose a centralized approximation algorithm called DLA-CC, based on distributed learning automata (DLA), for finding a near-optimal solution to the minimum WCDS problem. Then, we propose a DLA-based clustering algorithm called DLA-DC for clustering wireless Ad Hoc networks. The proposed cluster formation algorithm is a distributed implementation of DLA-CC, in which the dominator nodes and their closed neighbors assume the roles of cluster-heads and cluster members, respectively. We compute the worst-case running time and message complexity of the clustering algorithm for finding a near-optimal cluster-head set, and we argue that a proper choice of the learning rate trades off the running time and message complexity of the algorithm against the cluster-head set size (clustering optimality). Simulation results show the superiority of the proposed algorithms over existing methods.

14.
The analysis of microarray data is fundamental to microbiology. Although clustering has long been recognized as central to the discovery of gene functions and to disease diagnostics, researchers have found the construction of good algorithms a surprisingly difficult task. In this paper, we address this problem with a component-based approach to clustering-algorithm design for class retrieval from microarray data. The idea is to break existing algorithms into independent building blocks for typical sub-problems, which are in turn reassembled in new ways to generate as-yet unexplored methods. As a test, 432 algorithms were generated and evaluated on published microarray data sets. We found their top performers to be better than the original, component-providing ancestors and also competitive with a set of recently proposed new algorithms. Finally, we identified components that showed consistently good performance for clustering microarray data and that should be considered in further development of clustering algorithms.

15.
Shell clustering algorithms are ideally suited for computer vision tasks such as boundary detection and surface approximation, particularly when the boundaries have jagged or scattered edges and when the range data is sparse. This is because shell clustering is insensitive to local aberrations, can be performed directly in image space, and, unlike traditional approaches, neither assumes dense data nor uses additional features such as curvatures and surface normals. The shell clustering algorithms introduced in Part I of this paper assume that the number of clusters is known, however, which is not the case in many boundary detection and surface approximation applications. This problem can be overcome by considering cluster validity. We introduce a validity measure called surface density that is explicitly meant for the type of applications considered in this paper, and we show through theoretical derivations that surface density is relatively invariant to the size and partiality (incompleteness) of the clusters. We describe unsupervised clustering algorithms that use the surface density measure and other measures to determine the optimum number of shell clusters automatically, and we illustrate the application of the proposed algorithms to boundary detection in intensity images and to surface approximation in range images.

16.
This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on a weighted norm to measure the distance between the feature vectors and their prototypes. The development of the LVQ and clustering algorithms is based on the minimization of a reformulation function under the constraint that the generalized mean of the norm weights be constant. According to the proposed formulation, the norm weights can be computed from the data in an iterative fashion together with the prototypes. An error analysis provides guidelines for selecting the parameter involved in the definition of the generalized mean in terms of the feature variances. The algorithms produced by this formulation are easy to implement and almost as fast as clustering algorithms relying on the Euclidean norm. An experimental evaluation on four data sets indicates that the proposed algorithms consistently outperform clustering algorithms relying on the Euclidean norm and are strong competitors to non-Euclidean algorithms, which are computationally more demanding.
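The weighted norm itself is simple; the sketch below shows the distance computation only (in the paper the weights are learned jointly with the prototypes under the constant-generalized-mean constraint, which is not reproduced here):

```python
import numpy as np

def weighted_dist2(x, v, w):
    """Squared weighted norm ||x - v||_w^2 = sum_i w_i * (x_i - v_i)^2.

    A feature with a small weight w_i contributes less to the distance,
    so the prototype competition can de-emphasize noisy dimensions.
    """
    return float(np.sum(w * (x - v) ** 2))
```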

17.
Hierarchical clustering of mixed data based on distance hierarchy   Cited by: 1 (self-citations: 0, others: 1)
Data clustering is an important data mining technique that partitions data according to some similarity criterion. Abundant algorithms have been proposed for clustering numerical data, and some recent research tackles the problem of clustering categorical or mixed data. Unlike the subtraction scheme used for numerical attributes, there is no standard for measuring distance between categorical values. In this article, we propose a distance representation scheme, the distance hierarchy, which facilitates expressing the similarity between categorical values and also unifies distance measurement over numerical and categorical values. We then apply the scheme to mixed data clustering, in particular by integrating it with a hierarchical clustering algorithm. Consequently, this integrated approach can uniformly handle numerical and categorical data and also enables one to take the similarity between categorical values into consideration. Experimental results show that the proposed approach produces better clustering results than conventional clustering algorithms when categorical attributes are present and their values have different degrees of similarity.
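One simple way to realize a distance hierarchy is to give each categorical value a root path in a concept tree and measure distance by the links outside the common prefix. This is an illustrative sketch with unit link weights, not the paper's exact scheme:

```python
def hier_dist(path_a, path_b, link_weight=1.0):
    """Distance between two categorical values given their root paths
    in a distance hierarchy.

    Values sharing a longer prefix (a deeper common ancestor) are closer;
    each link is assumed to carry the same weight `link_weight`.
    """
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    return link_weight * (len(path_a) + len(path_b) - 2 * common)
```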

18.
Generally, abnormal points (noise and outliers) cause cluster analysis to produce low accuracy, especially in fuzzy clustering. Such data not only remain in clusters but also shift the centroids away from their true positions. Traditional fuzzy clustering such as Fuzzy C-Means (FCM) always assigns data to all clusters, which is unreasonable in some circumstances. By reformulating the objective function as an exponential equation, the algorithm aggressively selects data into the clusters. However, noisy data and outliers cannot be properly handled by the clustering process; they are forced into a cluster by the general probabilistic constraint that the membership degrees across all clusters sum to one. To improve on this weakness, the possibilistic approach relaxes this condition to improve membership assignment. Nevertheless, possibilistic clustering algorithms generally suffer from coincident clusters because their membership equations ignore the distance to other clusters. Although some possibilistic clustering approaches do not generate coincident clusters, most of them require the right combination of multiple parameters for the algorithms to work. In this paper, we theoretically study Possibilistic Exponential Fuzzy Clustering (PXFCM), which integrates the possibilistic approach with exponential fuzzy clustering. PXFCM has only one parameter and not only partitions the data but also filters noisy data or detects them as outliers. Comprehensive experiments show that PXFCM produces high accuracy in both clustering results and outlier detection without generating the coincidence problem.
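The possibilistic idea, that memberships need not sum to one so an outlier can have low membership everywhere, can be illustrated with exponential memberships. This is an illustrative sketch, not PXFCM's actual update equations; the outlier threshold is a hypothetical choice:

```python
import math

def possibilistic_memberships(d2, eta):
    """Exponential possibilistic memberships u_j = exp(-d_j^2 / eta_j).

    `d2` holds squared distances to each cluster center and `eta` the
    per-cluster scale parameters. Because memberships are not forced to
    sum to one, a point far from every center gets uniformly tiny
    memberships and can be flagged as an outlier.
    """
    u = [math.exp(-d / e) for d, e in zip(d2, eta)]
    is_outlier = max(u) < 0.05   # hypothetical outlier cutoff
    return u, is_outlier
```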

19.
Clustering Incomplete Data Using Kernel-Based Fuzzy C-means Algorithm   Cited by: 3 (self-citations: 0, others: 3)

20.
Clustering analysis is an important research topic in data mining. In many practical applications, the data to be clustered are very high-dimensional; document data and gene microarray data, for example, can reach thousands of dimensions, and in high-dimensional spaces the data are sparsely distributed. Under these conditions, many classical clustering algorithms that work well on low-dimensional data often fail on high-dimensional data. This paper proposes a new genetic-algorithm-based method for clustering high-dimensional data. The method uses the global search capability of a genetic algorithm to search the feature space for effective clustering feature subspaces. To assess each feature dimension's role in subspace clustering, a fitness function based on the dimensions' contribution rates to subspace clustering is designed. Experiments on synthetic and real data, together with a comparison against the k-means algorithm, demonstrate the feasibility and effectiveness of the method.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号