Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
Clustering is an important unsupervised learning technique widely used to discover the inherent structure of a given data set. Some existing clustering algorithms use a single prototype to represent each cluster, which may not adequately model clusters of arbitrary shape and size and hence limits clustering performance on complex data structures. This paper proposes a clustering algorithm that represents each cluster by multiple prototypes. Squared-error clustering is used to produce a number of prototypes that locate the regions of high density, because of its low computational cost and good performance. A separation measure is proposed to evaluate how well two prototypes are separated. Multiple prototypes with small separations are grouped into a given number of clusters using an agglomerative method. New prototypes are iteratively added to improve poor cluster separations. As a result, the proposed algorithm can discover clusters of complex structure and is robust to initial settings. Experimental results on both synthetic and real data sets demonstrate the effectiveness of the proposed clustering algorithm.
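A minimal sketch of the multi-prototype idea described above, assuming scikit-learn is available: k-means deliberately over-produces prototypes, and prototypes that are poorly separated are then merged agglomeratively into the requested number of clusters. Plain Euclidean distance between prototypes with average linkage stands in for the paper's own separation measure.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def multi_prototype_clustering(X, n_clusters, n_prototypes=30, random_state=0):
    """Represent each cluster by several prototypes (sketch).

    1) Over-partition the data with squared-error (k-means) clustering.
    2) Group the resulting prototypes agglomeratively; Euclidean distance
       between prototypes stands in for the paper's separation measure.
    3) Each point inherits the cluster label of its prototype.
    """
    km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=random_state).fit(X)
    prototypes = km.cluster_centers_                      # many small, dense regions
    merge = AgglomerativeClustering(n_clusters=n_clusters,
                                    linkage="average").fit(prototypes)
    proto_to_cluster = merge.labels_                      # prototype -> final cluster
    return proto_to_cluster[km.labels_]                   # point -> final cluster

# Example: labels = multi_prototype_clustering(X, n_clusters=3)
```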

2.
Robust projected clustering
Projected clustering partitions a data set into several disjoint clusters, plus outliers, so that each cluster exists in a subspace. Subspace clustering enumerates clusters of objects in all subspaces of a data set, and it tends to produce many overlapping clusters. Such algorithms have been extensively studied for numerical data, but only a few have been proposed for categorical data. Typical drawbacks of existing projected and subspace clustering algorithms for numerical or categorical data are that they rely on parameters whose appropriate values are difficult to set or that they are unable to identify projected clusters with few relevant attributes. We present P3C, a robust algorithm for projected clustering that can effectively discover projected clusters in the data while minimizing the number of required parameters. P3C does not need the number of projected clusters as input and can discover, under very general conditions, the true number of projected clusters. P3C is effective in detecting very low-dimensional projected clusters embedded in high-dimensional spaces. P3C positions itself between projected and subspace clustering in that it can compute both disjoint and overlapping clusters. P3C is the first projected clustering algorithm for both numerical and categorical data.

3.
This paper presents a new partitioning algorithm, designated the Adaptive C-Populations (ACP) clustering algorithm, capable of identifying natural subgroups and influential minor prototypes in an unlabeled dataset. In contrast to traditional Fuzzy C-Means clustering algorithms, which partition the whole dataset equally, adaptive clustering algorithms such as the one presented in this study identify the natural subgroups in unlabeled datasets. In this paper, data points within a small, dense region located at a relatively large distance from any of the major cluster centers are considered to form a minor prototype. The aim of ACP is to adaptively separate these isolated minor clusters from the major clusters in the dataset. The study first introduces the mathematical model of the proposed ACP algorithm and demonstrates its convergence to a stable solution. The ability of ACP to detect minor prototypes is then confirmed by applying it to three datasets of different sizes and characteristics.

4.
FDBSCAN: A Fast DBSCAN Algorithm
Cluster analysis is an important technique with broad application prospects in data mining, statistical data analysis, pattern matching, and image processing. Many clustering algorithms have been proposed. Among them, DBSCAN is a density-based spatial clustering algorithm with excellent performance. Using the density-based clustering concept, DBSCAN can discover clusters of arbitrary shape and handle noise effectively with only one user-supplied parameter. This paper proposes a method to speed up DBSCAN. The new algorithm expands a cluster using only representative objects, rather than all objects, in the neighborhood of a core object as seeds, thereby reducing the number of region queries and lowering the I/O cost. Experimental results show that FDBSCAN can effectively …
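A minimal sketch of the seed-reduction idea, assuming scikit-learn for the radius queries: the expansion loop is standard DBSCAN, except that only a few representative neighbors of each core object are pushed onto the seed list instead of the whole neighborhood, which is what cuts the number of region queries. The abstract does not say how FDBSCAN selects its representative objects, so the rule used here (the neighbors farthest from the core object) is only an illustrative stand-in, and the result may differ slightly from a full DBSCAN run.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def _representatives(X, center, neigh, n_reps):
    """Pick a few representative seeds from a neighborhood: here simply the
    neighbors farthest from the core object (an illustrative choice; the
    original FDBSCAN uses its own representative-object selection)."""
    d = np.linalg.norm(X[neigh] - X[center], axis=1)
    return neigh[np.argsort(d)[-n_reps:]]

def fdbscan_like(X, eps, min_pts, n_reps=4):
    """DBSCAN-style clustering that expands clusters from a few representative
    neighbors of each core object instead of all of them (sketch)."""
    nn = NearestNeighbors(radius=eps).fit(X)
    labels = np.full(len(X), -1)                          # -1 = noise / unassigned
    visited = np.zeros(len(X), dtype=bool)
    cluster_id = 0
    for p in range(len(X)):
        if visited[p] or labels[p] != -1:
            continue
        visited[p] = True
        neigh = nn.radius_neighbors(X[p:p + 1], return_distance=False)[0]
        if len(neigh) < min_pts:
            continue                                      # p is not a core object
        labels[neigh[labels[neigh] == -1]] = cluster_id   # neighborhood joins cluster
        seeds = list(_representatives(X, p, neigh, n_reps))
        while seeds:
            q = seeds.pop()
            if visited[q]:
                continue
            visited[q] = True
            q_neigh = nn.radius_neighbors(X[q:q + 1], return_distance=False)[0]
            if len(q_neigh) >= min_pts:                   # q is a core object: expand
                labels[q_neigh[labels[q_neigh] == -1]] = cluster_id
                seeds.extend(_representatives(X, q, q_neigh, n_reps))
        cluster_id += 1
    return labels
```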

5.
Reasonable cluster prototypes are a prerequisite for correct clustering. To address problems in existing clustering algorithms such as unreasonable prototype selection and biased estimation of the number of clusters, a clustering algorithm based on a filtering model (CA-FM) is proposed. The algorithm uses the proposed filtering model to remove the boundary and noise objects that interfere with the clustering process, generates an adjacency matrix from the nearest-neighbor relations among core objects, and computes the number of clusters by traversing the matrix. The data objects are then sorted by a density factor, and cluster prototypes are selected from them. Finally, the remaining objects are assigned to the corresponding clusters according to their minimum distance to higher-density objects, forming the final clustering. Experimental results on synthetic datasets, UCI datasets, and face recognition datasets verify the effectiveness of the algorithm; compared with similar algorithms, CA-FM achieves higher clustering accuracy.
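The filtering model and the adjacency-matrix traversal are not detailed in the abstract, so the sketch below (assuming NumPy and scikit-learn) only illustrates the last two steps: objects are ranked by a simple k-nearest-neighbor density estimate standing in for the paper's density factor, and every non-prototype object is assigned to the cluster of its nearest denser object. The argument prototype_idx is a hypothetical input holding the prototypes chosen in the earlier steps.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_factor_assignment(X, prototype_idx, k=10):
    """Assign non-prototype objects by following each object's nearest
    higher-density neighbor (sketch of CA-FM's final step; a k-NN density
    estimate stands in for the paper's density factor)."""
    n = len(X)
    dist_k, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    density = 1.0 / (dist_k[:, 1:].mean(axis=1) + 1e-12)   # simple k-NN density

    labels = np.full(n, -1)
    prototype_idx = np.asarray(prototype_idx)
    for c, p in enumerate(prototype_idx):                   # prototypes seed the clusters
        labels[p] = c

    # process objects from densest to sparsest, so each object's nearest
    # denser neighbor is already labeled when the object is reached
    for i in np.argsort(-density):
        if labels[i] != -1:
            continue
        denser = np.where(density > density[i])[0]
        if len(denser) == 0:                                # densest point, no prototype:
            d = np.linalg.norm(X[prototype_idx] - X[i], axis=1)
            labels[i] = labels[prototype_idx[np.argmin(d)]]
            continue
        d = np.linalg.norm(X[denser] - X[i], axis=1)
        labels[i] = labels[denser[np.argmin(d)]]
    return labels

# Example: labels = density_factor_assignment(X, prototype_idx=[12, 57, 203])
```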

6.
Fuzzy C-means (FCM) clustering has been widely and successfully used in many real-world applications. However, the FCM algorithm is sensitive to the initial prototypes, and it cannot handle non-traditional curved clusters. In this paper, a multi-center fuzzy C-means algorithm based on transitive closure and spectral clustering (MFCM-TCSC) is presented. In this algorithm, initial guesses of the cluster-center locations or the membership values are not necessary. Multiple centers are adopted to represent the non-spherical shape of clusters, so the algorithm can handle non-traditional curved clusters. The algorithm contains three phases. First, the dataset is partitioned into subclusters by the FCM algorithm with multiple centers. Then, the subclusters are merged by spectral clustering. Finally, the final result is obtained from these two clustering results. When merging subclusters, we adopt the lattice similarity method as the distance between two subclusters, which has an explicit form when the fuzzy membership values of the subclusters are used as features. Experimental results on two artificial datasets, a UCI dataset, and real image segmentation show that the proposed method clearly outperforms the traditional FCM algorithm and spectral clustering in efficiency and robustness.
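A rough sketch of the three-phase idea under some substitutions: a small hand-rolled FCM over-partitions the data into subclusters, spectral clustering merges the subcluster centers, and every point inherits the merged label of its subcluster. The lattice-similarity distance used in the paper is replaced by a plain RBF affinity on the centers, and the number of subclusters is a free parameter.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means (used here only to over-partition the data)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]       # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)             # membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

def multicenter_fcm_spectral(X, n_clusters, n_subclusters=20):
    """Sketch of the MFCM-TCSC idea: over-partition with FCM, then merge the
    subcluster centers with spectral clustering; an RBF affinity on the
    centers replaces the paper's lattice-similarity merge."""
    centers, U = fcm(X, n_subclusters)
    sub_labels = U.argmax(axis=1)                              # point -> subcluster
    merge = SpectralClustering(n_clusters=n_clusters, affinity="rbf").fit(centers)
    return merge.labels_[sub_labels]                           # point -> final cluster
```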

7.
A New Clustering Analysis Algorithm
A new unsupervised clustering algorithm is presented. Rather than being based on an objective function, the algorithm applies an iterative operation directly to the data, so that the data are regrouped while preserving class characteristics until the final classification is reached. Experiments on one class of data show that the algorithm is fairly robust in determining the number of clusters without supervision; in addition, it outperforms the HCM and FCM algorithms in accurate assignment of data, unsupervised clustering, determinism, and applicability to special cluster distributions.

8.
In cluster analysis, a fundamental problem is to determine the best estimate of the number of clusters; this is known as the automatic clustering problem. Because of the lack of prior domain knowledge, it is difficult to choose an appropriate number of clusters, especially when the data have many dimensions, when clusters differ widely in shape, size, and density, and when groups overlap. In the late 1990s, the automatic clustering problem gave rise to a new era in cluster analysis with the application of nature-inspired metaheuristics. Since then, researchers have developed several new algorithms in this field. This paper presents an up-to-date review of all major nature-inspired metaheuristic algorithms used thus far for automatic clustering. The main components involved in formulating metaheuristics for automatic clustering are also presented, such as encoding schemes, validity indices, and proximity measures. A total of 65 automatic clustering approaches are reviewed, based on single-solution, single-objective, and multiobjective metaheuristics, with usage percentages of 3%, 69%, and 28%, respectively. Single-objective clustering algorithms are adequate to group linearly separable clusters efficiently, but there is now a strong tendency to use multiobjective algorithms to address non-linearly separable problems. Finally, a discussion and some emerging research directions are presented.

9.
The self-organizing map (SOM) has been widely used in many industrial applications. Classical clustering methods based on the SOM often fail to deliver satisfactory results, especially when clusters have arbitrary shapes. In this paper, after some preprocessing techniques for filtering out noise and outliers, we propose a new two-level SOM-based clustering algorithm using a clustering validity index based on inter-cluster and intra-cluster density. Experimental results on synthetic and real data sets demonstrate that the proposed clustering algorithm clusters data better than classical SOM-based clustering algorithms and finds an optimal number of clusters.
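One way to realize a two-level SOM clustering, assuming the third-party minisom package: level one trains the SOM, level two clusters its codebook vectors, and the number of clusters is chosen by a validity index. The silhouette score is used here as a stand-in for the paper's inter-/intra-cluster density index.

```python
import numpy as np
from minisom import MiniSom                        # third-party SOM implementation
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def two_level_som_clustering(X, som_shape=(8, 8), k_range=range(2, 9), seed=0):
    """Two-level SOM clustering (sketch): level 1 trains a SOM, level 2
    clusters the SOM codebook vectors; the number of clusters is chosen by a
    validity index (silhouette here, standing in for the paper's
    density-based index)."""
    som = MiniSom(som_shape[0], som_shape[1], X.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.random_weights_init(X)
    som.train_random(X, 2000)

    codebook = som.get_weights().reshape(-1, X.shape[1])    # one vector per SOM node

    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        node_labels = AgglomerativeClustering(n_clusters=k).fit_predict(codebook)
        score = silhouette_score(codebook, node_labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, node_labels

    # map each sample to the cluster of its best-matching SOM node
    bmu = np.array([np.ravel_multi_index(som.winner(x), som_shape) for x in X])
    return best_labels[bmu], best_k
```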

10.
An Adaptive Subspace Clustering Algorithm for High-Dimensional Data Streams
Clustering high-dimensional data streams is a research hotspot in data mining. Because data streams are large, fast-changing, and high-dimensional, many clustering algorithms cannot achieve good clustering quality on them. This paper proposes SAStream, an adaptive subspace clustering algorithm for high-dimensional data streams. The algorithm improves the micro-cluster structure of HPStream and defines candidate clusters; the distance from a newly arriving data point to a candidate cluster's centroid is computed only in the corresponding subspace, which reduces the number of micro-clusters examined during clustering. The resulting micro-clusters are stored in a pyramidal time frame, and a temporal decay function is used to delete expired micro-clusters. When the stream rate is high, the boundary radius and the cluster selection factor are adjusted automatically according to the monitored system resource usage, thereby tuning the clustering granularity. Experimental results show that the algorithm achieves good clustering quality and fast data processing.
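A minimal sketch of a micro-cluster summary with an exponential time-decay function, in the spirit of HPStream-style stream clustering; SAStream's subspace dimension selection, candidate clusters, and pyramidal time frame are omitted, and the decay rate and expiry threshold are illustrative parameters.

```python
import numpy as np

class DecayedMicroCluster:
    """Minimal micro-cluster summary with exponential time decay (sketch)."""

    def __init__(self, x, t, decay_lambda=0.01):
        self.decay_lambda = decay_lambda
        self.t = t                      # time of last update
        self.w = 1.0                    # decayed number of points
        self.ls = np.array(x, float)    # decayed linear sum
        self.ss = self.ls ** 2          # decayed squared sum

    def _decay(self, t):
        f = 2.0 ** (-self.decay_lambda * (t - self.t))   # decay since last update
        self.w *= f
        self.ls *= f
        self.ss *= f
        self.t = t

    def insert(self, x, t):
        self._decay(t)
        x = np.asarray(x, float)
        self.w += 1.0
        self.ls += x
        self.ss += x ** 2

    def centroid(self):
        return self.ls / self.w

    def is_expired(self, t, weight_threshold=0.1):
        """A micro-cluster whose decayed weight drops below the threshold
        can be deleted from the summary structure."""
        f = 2.0 ** (-self.decay_lambda * (t - self.t))
        return self.w * f < weight_threshold
```

In a stream setting, each arriving point would be inserted into the nearest non-expired micro-cluster within the boundary radius, and expired micro-clusters would be dropped periodically.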

11.
Probabilistic clustering algorithms have been widely applied in cluster analysis, but they do not answer the question of how to choose the best number of clusters. This paper first analyzes common methods for determining the number of clusters in probabilistic clustering, and then, to address the problem that the Monte Carlo cross-validation algorithm cannot handle dispersed posterior probabilities, proposes an improved Monte Carlo cross-validation algorithm (iMCCV). Experimental results show that the algorithm can effectively determine the best value of K.
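For reference, a sketch of the standard Monte Carlo cross-validation procedure that the paper improves on (not the iMCCV refinement itself): for each candidate K, a Gaussian mixture is fit on repeated random training splits and scored by held-out log-likelihood, and the K with the best average score is chosen. The split sizes and the mixture model are illustrative choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import ShuffleSplit

def mccv_select_k(X, k_range=range(1, 11), n_splits=20, test_size=0.5, seed=0):
    """Choose K by Monte Carlo cross-validation: repeatedly fit a Gaussian
    mixture on a random training split and score the held-out
    log-likelihood; the K with the best average held-out score wins."""
    splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
    mean_score = {}
    for k in k_range:
        scores = []
        for train_idx, test_idx in splitter.split(X):
            gmm = GaussianMixture(n_components=k, random_state=seed).fit(X[train_idx])
            scores.append(gmm.score(X[test_idx]))    # mean held-out log-likelihood
        mean_score[k] = np.mean(scores)
    best_k = max(mean_score, key=mean_score.get)
    return best_k, mean_score
```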

12.
The two basic tasks of cluster analysis are determining the number of clusters in a data set and locating those clusters. Most clustering methods focus only on the latter. To perform cluster analysis when the number of clusters is unknown, this paper proposes DCBIT, a new dynamic clustering algorithm combining an artificial immune network with Tabu search. The new algorithm has two stages: an artificial immune network algorithm first obtains a set of candidate cluster centers, and Tabu search then performs dynamic clustering on this candidate set. Simulation results show that, compared with existing methods, the new method has a better convergence probability and convergence speed.

13.
Clustering is a branch of unsupervised machine learning with wide applications in the information age. However, many existing clustering algorithms require a fixed number of neighbors for density computation, require the number of clusters to be specified in advance, or require many iterations of cumulative information updates; these issues cause the model to lose some data characteristics and increase the amount of computation, resulting in high time complexity. To address these problems, inspired by firefly luminescence and the transmission and exchange of light information, a firefly luminescent information navigation clustering algorithm (FLINCA) is proposed. The method consists of two modules, one generating fireflies from the data and one gathering fireflies into trees: data points are treated as fireflies whose brightness is determined with an adaptive number of neighbors, an initial clustering is obtained from the brightness, and clusters are then merged according to the firefly tree to produce the final clustering. Experiments comparing FLINCA with 12 different algorithms on 4 clustering benchmark datasets and 3 multidimensional real datasets show that it achieves good clustering results. This indicates that FLINCA has broad application value in clustering, can effectively solve the problems of traditional clustering algorithms, and improves the accuracy of clustering results.

14.
Clustering Incomplete Data Using Kernel-Based Fuzzy C-means Algorithm

15.
李金泽  徐喜荣  潘子琦  李晓杰 《计算机科学》2017,44(Z6):424-427, 450
Clustering has become a new research hotspot in machine learning in recent years. To cluster sample spaces of arbitrary shape, researchers have proposed excellent algorithms such as spectral clustering and graph-theoretic clustering. This paper first introduces the basic ideas of the classical NJW spectral clustering algorithm and the NeiMu graph-theoretic clustering algorithm, and then proposes an improved adaptive spectral clustering NJW algorithm. The advantage of the adaptive NJW algorithm is that it can automatically determine the number of clusters without tuning parameters, overcoming the drawbacks of the classical NJW algorithm, which requires the number of clusters to be set in advance and the parameter δ to be tuned repeatedly before a classification can be obtained. The adaptive NJW algorithm is compared with the classical NJW algorithm and with the NeiMu graph-theoretic clustering algorithm on standard UCI datasets and on measured datasets. Experimental results show that the adaptive NJW algorithm is convenient, fast, and practical.
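A sketch of the classical NJW algorithm with an eigengap heuristic bolted on to pick the number of clusters automatically; the scale parameter is set by the median pairwise distance as a rule of thumb. This is not the paper's adaptive scheme, only an illustration of the kind of self-tuning it argues for.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def njw_spectral_clustering(X, k_max=10, sigma=None):
    """Classical NJW spectral clustering with the eigengap heuristic for
    choosing the number of clusters (sketch; the median pairwise distance
    replaces manual tuning of the scale parameter)."""
    D = squareform(pdist(X))
    if sigma is None:
        sigma = np.median(D)                              # rule-of-thumb scale
    A = np.exp(-D ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)

    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L_sym = D_inv_sqrt @ A @ D_inv_sqrt                   # normalized affinity (NJW)

    eigvals, eigvecs = np.linalg.eigh(L_sym)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]    # largest first

    # eigengap heuristic: number of clusters = position of the largest gap
    gaps = eigvals[:k_max - 1] - eigvals[1:k_max]
    k = max(int(np.argmax(gaps)) + 1, 2)

    # rows of the top-k eigenvectors, normalized to unit length, then k-means
    Y = eigvecs[:, :k]
    Y = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Y)
    return labels, k
```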

16.
A Partitioning Clustering Algorithm Based on the Bee Colony Principle
Most existing partitioning clustering algorithms are constrained by the number of clusters K. To address this, a partitioning clustering algorithm based on the bee colony principle is proposed. By introducing the honey-gathering mechanism of a bee colony, cluster centers are treated as food sources, and data objects are grouped through the self-organizing process of searching for food sources. During clustering, a compactness function is introduced to evaluate cluster centers (locally), and a separation function is introduced to determine the optimal number of clusters (globally). Compared with traditional partitioning clustering algorithms, this algorithm can cluster without the number of clusters being specified. Simulation experiments show that the proposed algorithm not only searches effectively for the optimal number of clusters but also achieves high accuracy; its time complexity is only O(n·k³) (k < …

17.
Clustering analysis is an important topic in artificial intelligence, data mining, and pattern recognition research. Conventional clustering algorithms, for instance the well-known Fuzzy C-means clustering algorithm (FCM), assume that all attributes are equally relevant to all clusters. However, in most domains, especially for high-dimensional datasets, some attributes are irrelevant, and some relevant ones are less important than others with respect to a specific class. In this paper, such imbalances between the attributes are considered and a new weighted fuzzy kernel-clustering algorithm (WFKCA) is presented. WFKCA performs clustering in a kernel feature space mapped by Mercer kernels. Compared with the conventional hard kernel-clustering algorithm, WFKCA can yield meaningful prototypes (cluster centers) for the clusters. Numerical convergence properties of WFKCA are also discussed. WFKCA is further extended to WFKCA2, which is demonstrated to be a useful tool for clustering incomplete data. Numerical examples demonstrate the effectiveness of the new WFKCA algorithm.
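A sketch of plain kernel fuzzy c-means with a Gaussian kernel, using the standard updates in which the kernel-induced squared distance is 2(1 − K(x, v)); the attribute weights that distinguish WFKCA, and the WFKCA2 handling of incomplete data, are omitted here.

```python
import numpy as np

def kernel_fcm(X, c, m=2.0, sigma=1.0, iters=100, tol=1e-5, seed=0):
    """Kernel fuzzy c-means with a Gaussian kernel (sketch). Prototypes live
    in the input space and distances are computed through the kernel; the
    attribute weighting of WFKCA is not included."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]           # initial prototypes
    for _ in range(iters):
        # Gaussian kernel between every point and every prototype
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-d2 / (2 * sigma ** 2))
        dist = (1.0 - K) + 1e-12                          # kernel-induced distance
        U = dist ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)                 # membership update
        W = (U ** m) * K                                  # weights for prototype update
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]
        if np.abs(V_new - V).max() < tol:
            V = V_new
            break
        V = V_new
    return V, U
```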

18.
Many classical graph clustering algorithms suffer from input parameters that are difficult to determine, excessive time complexity, and low clustering accuracy. This paper proposes NGCC, a parameter-free graph clustering algorithm based on core vertices. After assigning similar vertices to the same cluster, the algorithm uses the PageRank algorithm to discover core vertices and form initial clusters. The remaining unlabeled vertices are then assigned to form the final cluster structure. Experimental results show that, without requiring any parameters, NGCC achieves clustering quality comparable to or better than classical graph clustering algorithms on datasets of different scales, and is more widely applicable.

19.
Research on clustering uncertain data generally assumes that the uncertain data follow some distribution, from which a probability density function or probability distribution function representing the data is obtained; however, such an assumption can rarely be guaranteed to match the distribution of uncertain data in real application systems. Existing density-based algorithms are sensitive to initial parameters and, when clustering uncertain data of non-uniform density, cannot discover clusters of arbitrary density. To address these shortcomings, an interval-number-based algorithm for ordering uncertain data objects to identify the clustering structure (UD-OPTICS) is proposed. The algorithm uses interval-number theory together with relevant statistics of the uncertain data to represent uncertain data more reasonably, introduces low-complexity concepts and computation methods for the interval core distance and interval reachability distance, and uses them to measure the similarity between uncertain data objects, expanding clusters and ordering objects to identify the clustering structure. The algorithm can discover clusters of arbitrary density. Experimental results show that UD-OPTICS achieves high clustering accuracy with low complexity.
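A small sketch of the interval-number representation, assuming scikit-learn: each uncertain attribute is an interval [lo, hi], a pairwise distance is computed from interval midpoints and half-widths, and the precomputed matrix is handed to an off-the-shelf OPTICS implementation. The distance used here is only a stand-in; the paper's interval core distance and interval reachability distance are not reproduced.

```python
import numpy as np
from sklearn.cluster import OPTICS

def interval_distance_matrix(lo, hi):
    """Pairwise distance between uncertain objects represented as interval
    numbers [lo, hi] per attribute: Euclidean distance over interval
    midpoints and half-widths (a simple stand-in for the paper's
    interval-based distances)."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    dm = ((mid[:, None, :] - mid[None, :, :]) ** 2).sum(axis=2)
    dr = ((rad[:, None, :] - rad[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(dm + dr)

# lo, hi: arrays of shape (n_objects, n_attributes) giving each uncertain
# attribute as an interval; OPTICS then orders the objects on the
# precomputed interval distances.
# D = interval_distance_matrix(lo, hi)
# labels = OPTICS(min_samples=5, metric="precomputed").fit(D).labels_
```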

20.
Major problems exist in both crisp and fuzzy clustering algorithms. The fuzzy c-means type of algorithm uses weights determined by a power m of inverse distances that remains fixed over all iterations and over all clusters, even though smaller clusters should have a larger m. Our method uses a different "distance" for each cluster that changes over the early iterations to fit the clusters. Comparisons show improved results. We also address other perplexing problems in clustering: (i) finding the optimal number K of clusters; (ii) assessing the validity of a given clustering; (iii) preventing the selection of seed vectors as initial prototypes from affecting the clustering; (iv) preventing the order of merging from affecting the clustering; and (v) permitting the clusters to form more natural shapes rather than forcing them into normed balls of the distance function. We employ a relatively large number K of uniformly randomly distributed seeds and then thin them to leave fewer uniformly distributed seeds. Next, the main loop iterates by assigning the feature vectors and computing new fuzzy prototypes. Our fuzzy merging then merges any clusters that are too close to each other. We use a modified Xie-Beni validity measure as the goodness-of-clustering measure for multiple values of K in a user-interaction approach where the user selects two parameters (for eliminating clusters and merging clusters after viewing the results thus far). The algorithm is compared with the fuzzy c-means on the iris data and on the Wisconsin breast cancer data.
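For reference, the standard Xie-Beni validity index (the paper uses a modified version): fuzzy within-cluster compactness divided by n times the minimum squared separation between centers. Lower values indicate a better clustering, so it can be evaluated for several K and the minimizing K selected.

```python
import numpy as np

def xie_beni_index(X, centers, U, m=2.0):
    """Standard Xie-Beni validity index for a fuzzy partition: total fuzzy
    within-cluster scatter divided by n times the minimum squared distance
    between cluster centers. Lower is better."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n, c)
    compactness = ((U ** m) * d2).sum()
    c = len(centers)
    separation = min(((centers[i] - centers[j]) ** 2).sum()
                     for i in range(c) for j in range(c) if i != j)
    return compactness / (len(X) * separation)
```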
