Similar articles
A total of 20 similar articles were found (search time: 15 ms).
1.
Traditional partition-based clustering algorithms require the number of clusters to be specified manually, and because they impose a rigid partition, large or elongated clusters may be split apart, leading to incorrect clustering results. Density peak clustering is a density-based clustering algorithm proposed in recent years; it does not require the number of clusters to be specified in advance and can discover non-spherical clusters. By introducing the density peak idea into partition-based clustering, a fast clustering algorithm based on density peaks and partitioning (DDBSCAN) is proposed. The algorithm first obtains a set of cluster core objects (density peaks) that describe the "skeleton" of each cluster, then assigns the surrounding points to the nearest core object, and finally merges clusters by examining the density at the partition boundaries. Experiments show that the algorithm adapts well to data sets with clusters of arbitrary shape and varying size, and converges faster than traditional density-based clustering algorithms.
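A minimal sketch of the density-peak step described above, assuming a cutoff-kernel local density as in CFSFDP: points that combine high density with a large distance to any denser point are taken as core objects, and every other point is assigned to its nearest peak. The final merging of clusters across low-density boundaries is omitted, and all names and parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def density_peak_assign(X, dc, n_peaks):
    """Pick density peaks (high local density, far from any denser point)
    and assign every point to its nearest peak."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    rho = (d < dc).sum(axis=1) - 1                              # local density, self excluded
    delta = np.full(len(X), d.max())                            # distance to nearest denser point
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        if denser.size:
            delta[i] = d[i, denser].min()
    peaks = np.argsort(-(rho * delta))[:n_peaks]                # most prominent peaks
    labels = np.argmin(d[:, peaks], axis=1)                     # assign to nearest peak
    return peaks, labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(3, 0.3, (60, 2))])
    peaks, labels = density_peak_assign(X, dc=0.5, n_peaks=2)
    print("peaks:", peaks, "cluster sizes:", np.bincount(labels))
```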

2.
A good clustering algorithm should require few user-specified parameters, be insensitive to noise, discover clusters of arbitrary shape, handle high-dimensional data, and be interpretable and scalable. Applying cluster analysis to geographic information systems makes it possible to generalize and synthesize GIS data. This paper proposes a clustering algorithm based on distance-threshold adjacency, which adds objects one by one to existing clusters via distance-threshold reachability; it can discover clusters of arbitrary shape and separates noise well. In the experiments the algorithm is applied to data mining in a geographic information system, and the results show that it achieves satisfactory GIS clustering.
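One plausible reading of distance-threshold reachability, sketched below: a point joins an existing cluster if it lies within the threshold of any member, and clusters reached by the same point are merged; isolated points remain singletons (noise-like). This is a reconstruction from the abstract, not the paper's implementation.

```python
import numpy as np

def threshold_reachability_clustering(X, eps):
    """Single pass over the points: join (and merge) every cluster that
    already contains a member within eps, otherwise start a new cluster."""
    clusters = []                               # each cluster is a list of point indices
    for i, x in enumerate(X):
        reached = [c for c in clusters
                   if np.min(np.linalg.norm(X[c] - x, axis=1)) <= eps]
        if not reached:
            clusters.append([i])                # start a new cluster
        else:
            merged = [idx for c in reached for idx in c] + [i]
            for c in reached:
                clusters.remove(c)              # merge all reachable clusters
            clusters.append(merged)
    return clusters

if __name__ == "__main__":
    pts = np.array([[0, 0], [0.4, 0], [5, 5], [5.3, 5.1], [10, 0]])
    print(threshold_reachability_clustering(pts, eps=0.6))
    # -> [[0, 1], [2, 3], [4]]; the isolated point stays a singleton (noise-like)
```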

3.
To discover clusters effectively, especially clusters of arbitrary shape, many density-based clustering algorithms have been proposed in recent years, such as DBSCAN, OPTICS, DENCLUE and CLIQUE. This paper proposes a new density-based clustering algorithm, CODU (clustering by ordering dense units). The basic idea is to sort the unit subspaces by density; for each unit, if its density is greater than that of its surrounding neighbors, a new cluster is formed. Because the number of units is much smaller than the number of data objects, the algorithm is efficient. A new data visualization method is also proposed, which maps data objects into three-dimensional space as stimulus spectra so that the clustering results can be displayed clearly.
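A rough sketch of the "order dense units" idea under stated assumptions: points are binned into grid cells, cells are visited from densest to sparsest, a cell denser than all of its neighbors starts a new cluster, and every other cell joins its densest already-labeled neighbor. The cell size and the neighbor rule are assumptions, and the visualization step is not reproduced.

```python
import numpy as np
from itertools import product

def codu_like(X, cell_size):
    """Visit grid cells from densest to sparsest: a cell with no already-labeled
    neighbor (i.e. denser than all its neighbors) starts a new cluster,
    otherwise it joins its densest labeled neighbor."""
    counts = {}
    for p in X:
        key = tuple(int(v) for v in np.floor(p / cell_size))
        counts[key] = counts.get(key, 0) + 1

    offsets = [o for o in product((-1, 0, 1), repeat=X.shape[1]) if any(o)]
    labels, next_id = {}, 0
    for cell in sorted(counts, key=counts.get, reverse=True):    # densest first
        neighbors = [tuple(a + b for a, b in zip(cell, o)) for o in offsets]
        labeled = [n for n in neighbors if n in labels]
        if not labeled:
            labels[cell] = next_id            # local density maximum -> new cluster
            next_id += 1
        else:
            best = max(labeled, key=lambda n: counts[n])
            labels[cell] = labels[best]       # join the densest labeled neighbor
    return labels                             # maps cell -> cluster id

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])
    print("clusters:", len(set(codu_like(X, cell_size=1.0).values())))
```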

4.
Clustering is an important unsupervised learning technique widely used to discover the inherent structure of a given data set. Some existing clustering algorithms use a single prototype to represent each cluster, which may not adequately model clusters of arbitrary shape and size and hence limits clustering performance on complex data structures. This paper proposes a clustering algorithm that represents one cluster by multiple prototypes. Squared-error clustering is used to produce a number of prototypes that locate the regions of high density, because of its low computational cost and yet good performance. A separation measure is proposed to evaluate how well two prototypes are separated. Multiple prototypes with small separations are grouped into a given number of clusters by an agglomerative method. New prototypes are iteratively added to improve poor cluster separations. As a result, the proposed algorithm can discover clusters of complex structure and is robust to initial settings. Experimental results on both synthetic and real data sets demonstrate the effectiveness of the proposed clustering algorithm.
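A rough scikit-learn sketch of the multiple-prototypes idea: k-means supplies many prototypes, which are then merged into the requested number of clusters. Single-linkage agglomeration on the prototype centres stands in for the paper's separation measure, so this is only an approximation of the method, with all parameter values chosen for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_moons

# Two half-moon clusters: a single prototype per cluster models them poorly.
X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)

n_prototypes, n_clusters = 20, 2
km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(X)

# Merge prototypes into the requested number of clusters (agglomerative step).
# Single linkage on the prototype centres is a simple stand-in for the paper's
# separation measure; it also follows elongated shapes reasonably well.
merge = AgglomerativeClustering(n_clusters=n_clusters, linkage="single")
proto_to_cluster = merge.fit_predict(km.cluster_centers_)

labels = proto_to_cluster[km.labels_]        # map each point via its prototype
print("points per cluster:", np.bincount(labels))
```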

5.
To address the problems that existing clustering algorithms cannot handle data containing clusters of different densities well, or cannot distinguish adjacent clusters whose densities differ only slightly, a new clustering algorithm based on strict nearest neighbors and shared nearest neighbors is proposed. By constructing a shared strict-nearest-neighbor graph, sample points remain connected in regions of uniform density but are disconnected in adjacent regions of different density, and noise points and outliers are removed as far as possible. The algorithm can handle data containing clusters of different densities and has relatively low time complexity on high-dimensional data. Experimental results show that it can effectively find clusters of different sizes, shapes and densities.
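An illustrative sketch of a shared strict-nearest-neighbor graph, assuming "strict" means mutual k-nearest neighbors: two points are connected only if each is among the other's k nearest neighbors and they share enough neighbors, and connected components of the resulting graph are taken as clusters. The thresholds and the noise-removal details are assumptions, not the published algorithm.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

def shared_mutual_knn_clusters(X, k, min_shared):
    """Connect i and j only if they are mutual k-nearest neighbors and share at
    least min_shared of their k neighbors; components are the clusters."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neigh = [set(row[1:]) for row in idx]            # k neighbors, excluding self

    rows, cols = [], []
    n = len(X)
    for i in range(n):
        for j in neigh[i]:
            if i in neigh[j] and len(neigh[i] & neigh[j]) >= min_shared:
                rows.append(i)
                cols.append(j)
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.2, (60, 2)), rng.normal(2, 0.6, (60, 2))])
    labels = shared_mutual_knn_clusters(X, k=8, min_shared=3)
    print("number of components:", len(set(labels)))
```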

6.
张梅  陈梅  李明 《计算机工程与科学》2021,43(12):2243-2252
To address the drawbacks of clustering algorithms in detecting arbitrary clusters, such as low accuracy, many iterations and poor results, DBLCM, a density clustering algorithm with boundary-point partitioning based on a local center metric, is proposed. Under the constraint of the local center metric, data points are divided into a core region or a boundary region. Points in the core region form initial clusters by an assignment scheme in which mutual nearest neighbors are clustered first, and points in the boundary region are assigned with reference to the cluster of the closest point among their mutual nearest neighbors, yielding the final clusters. To verify its effectiveness, DBLCM is compared with three classical algorithms and three recently proposed, well-performing algorithms on two-dimensional data sets containing clusters of arbitrary shape and density and on multi-dimensional data sets of arbitrary dimensionality. In addition, to examine the sensitivity of the parameter k in DBLCM, the correlation between the value of k and cluster quality is tested on the data sets used. Experimental results show that DBLCM has high recognition accuracy, detects arbitrary clusters well and requires no iteration, and that its overall performance is better than that of the six compared algorithms.

7.
Clustering has become a classical problem in databases, data warehouses, pattern recognition, artificial intelligence, and computer graphics. Applications in large spatial databases, point-based graphics, etc., give rise to new requirements for clustering algorithms: automatic discovery of arbitrarily shaped and/or non-homogeneous clusters, discovery of clusters located in low-dimensional hyperspace, and detection of cluster boundaries. On that account, a new clustering and boundary-detecting algorithm, ADACLUS, is proposed. It is based on a specially constructed adaptive influence function and therefore discovers clusters of arbitrary shapes and diverse densities, adequately captures cluster boundaries, and is robust to noise. Normally ADACLUS performs clustering fully automatically without any preliminary parameter settings, but it also gives the user the optional possibility to set three parameters with clear meaning in order to adjust the clustering for special applications. The algorithm was tested on various two-dimensional data sets and exhibited its effectiveness in discovering clusters of complex shapes and diverse densities. The linear complexity of ADACLUS gives it an advantage over some well-known algorithms.

8.
Clustering is an unsupervised learning problem: a procedure that partitions data objects into matching clusters, so that data objects in the same cluster are quite similar to each other and dissimilar to those in other clusters. Density-based clustering algorithms find clusters based on the density of data points in a region. DBSCAN is one such algorithm: it can discover clusters with arbitrary shapes, requires only two input parameters, and has proved very effective for analyzing large and complex spatial databases. However, DBSCAN needs a large amount of memory and often has difficulties with high-dimensional data and clusters of very different densities. The partitioning-based DBSCAN algorithm (PDBSCAN) was proposed to solve these problems, but PDBSCAN yields poor results when the density of the data is non-uniform, and, to some extent, both DBSCAN and PDBSCAN are sensitive to the initial parameters. In this paper, we propose a new hybrid algorithm based on PDBSCAN. We use a modified ant clustering algorithm (ACA) and design a new partitioning algorithm based on 'point density' (PD) in the data preprocessing phase. We name the new hybrid algorithm PACA-DBSCAN. The performance of PACA-DBSCAN is compared with DBSCAN and PDBSCAN on five data sets. Experimental results indicate the superiority of the PACA-DBSCAN algorithm.

9.
SUDBC: a fast clustering algorithm based on spatial-cell density (cited: 3, self-citations: 0, citations by others: 3)
As data sets grow ever larger, clustering algorithms are required to have high efficiency, good scalability, the ability to discover clusters of arbitrary shape, and insensitivity to noise. This paper proposes SUDBC, a fast clustering algorithm based on spatial-cell density. The algorithm first partitions the data to be clustered into a number of spatial cells and then, based on cell density, merges neighboring cells whose density exceeds a given threshold into one cluster. Experimental results verify that SUDBC can handle data of arbitrary shape and is insensitive to noise.
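A small sketch of the cell-merging step as described in the abstract: points are binned into equal-size cells, cells whose point count reaches a threshold are kept as dense, and neighboring dense cells are merged by a flood fill. Parameter names and values are illustrative, not taken from SUDBC.

```python
import numpy as np
from itertools import product

def dense_cell_clusters(X, cell_size, min_count):
    """Bin points into equal-size cells, keep cells with at least min_count
    points, and merge neighboring dense cells into clusters by flood fill."""
    counts = {}
    for p in X:
        key = tuple(int(v) for v in np.floor(p / cell_size))
        counts[key] = counts.get(key, 0) + 1
    dense = {k for k, c in counts.items() if c >= min_count}

    offsets = [o for o in product((-1, 0, 1), repeat=X.shape[1]) if any(o)]
    labels, cluster = {}, 0
    for cell in dense:
        if cell in labels:
            continue
        stack = [cell]                         # flood fill over neighboring dense cells
        while stack:
            c = stack.pop()
            if c in labels:
                continue
            labels[c] = cluster
            for o in offsets:
                nb = tuple(a + b for a, b in zip(c, o))
                if nb in dense and nb not in labels:
                    stack.append(nb)
        cluster += 1
    return labels                              # maps dense cell -> cluster id

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.4, (150, 2)), rng.normal(5, 0.4, (150, 2))])
    cells = dense_cell_clusters(X, cell_size=0.5, min_count=3)
    print("clusters:", len(set(cells.values())))
```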

10.
K-means clustering can be highly accurate when the number of clusters and the initial cluster centres are appropriate, while an inappropriate choice of either decreases its accuracy; determining these values, however, is problematic. To solve this, we used density-based spatial clustering of applications with noise (DBSCAN), because it does not require a predetermined number of clusters; however, it has some significant drawbacks: with high-dimensional data and data of potentially different densities, its accuracy decreases to some degree. Therefore, the objective of this research is to improve the efficiency of DBSCAN through a selection of region clusters based on density DBSCAN, so as to automatically find the appropriate number of clusters and initial cluster centres for K-means clustering. In the proposed method, DBSCAN performs the clustering and the appropriate clusters are selected by considering the density of each cluster; the appropriate region data are then chosen from the selected clusters. The experimental results yield the appropriate number of clusters and initial cluster centres for K-means clustering. In addition, the results of the selection of region clusters based on density DBSCAN are more accurate than those obtained by traditional methods, including DBSCAN and K-means, and by related methods such as partitioning-based DBSCAN (PDBSCAN) and PDBSCAN with the ant clustering algorithm (PACA-DBSCAN).
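A hypothetical scikit-learn illustration of the general idea: DBSCAN proposes the number of clusters and their centres, which then seed K-means. The data set, the parameter values and the use of per-cluster means as centres are assumptions made for the sake of the example, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

# Synthetic data with an unknown (to the algorithm) number of blobs.
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.7, random_state=0)

# Let DBSCAN discover the clusters; noise points are labeled -1 and ignored.
db = DBSCAN(eps=0.8, min_samples=10).fit(X)
cluster_ids = [c for c in set(db.labels_) if c != -1]
centers = np.array([X[db.labels_ == c].mean(axis=0) for c in cluster_ids])

# Hand the cluster count and centres to K-means as its initialisation.
km = KMeans(n_clusters=len(centers), init=centers, n_init=1).fit(X)
print("clusters found by DBSCAN:", len(centers))
print("points per K-means cluster:", np.bincount(km.labels_))
```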

11.
Classical clustering methods, such as partitioning and hierarchical clustering algorithms, often fail to deliver satisfactory results when given clusters of arbitrary shapes. Motivated by a clustering validity index based on inter-cluster and intra-cluster density, we propose that the clustering validity index be used not only globally, to find optimal partitions of input data, but also locally, to determine which two neighboring clusters are to be merged in a hierarchical clustering of a Self-Organizing Map (SOM). A new two-level SOM-based clustering algorithm using the clustering validity index is also proposed. Experimental results on synthetic and real data sets demonstrate that the proposed clustering algorithm clusters data on an SOM better than classical clustering algorithms do.

12.
An effective grid- and density-based clustering algorithm (cited: 12, self-citations: 0, citations by others: 12)
胡泱  陈刚 《计算机应用》2003,23(12):64-67
This paper discusses the concepts, techniques and algorithms of clustering in data mining, and proposes a grid- and density-based algorithm. Its advantages are that it can automatically discover the subspaces that contain interesting knowledge and mine all the clusters that exist in them, and that it handles high-dimensional data and large data tables well. The final results are expressed in DNF (disjunctive normal form).

13.
魏方圆  黄德才 《计算机科学》2017,44(Z11):442-447
Research on clustering methods for uncertain data is receiving increasing attention. Among existing methods, the UIDK-means and U-PAM algorithms inherit the defects of partition-based algorithms: they cannot recognize clusters of arbitrary shape and are sensitive to noise. The FDBSCAN algorithm assumes that the probability distribution function or probability density function of the uncertain data is known in advance, but this information is often hard to obtain in practice. To address these shortcomings, UID-DBSCAN, a clustering algorithm for multi-dimensional uncertain data based on interval numbers, is proposed. The algorithm represents uncertain data reasonably using interval numbers combined with the statistical information of the data, measures the similarity between uncertain data objects with an interval-number distance function of low computational complexity, introduces for the first time concepts such as the density, density-reachability and density-connectedness of interval numbers and uses them in cluster expansion, and adaptively selects the algorithm's density parameters from the statistical characteristics of the data set to achieve automatic clustering. Experimental results show that UID-DBSCAN can effectively recognize noise, handle clusters of arbitrary shape, and achieve high clustering accuracy with low computational complexity.
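A sketch of plugging an interval-number distance into DBSCAN. Here each attribute is an interval stored as a (lower, upper) pair, and the distance is simply the Euclidean distance between the endpoint vectors; this is an assumed stand-in for the paper's low-complexity interval distance, and the adaptive parameter selection is not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def interval_distance(a, b):
    """Assumed low-cost distance between interval-number objects: each object is
    a flat vector [l1, u1, l2, u2, ...] of interval endpoints, compared with the
    ordinary Euclidean distance."""
    return np.linalg.norm(a - b)

# Two groups of 2-D interval objects, one row per object: [l1, u1, l2, u2].
rng = np.random.default_rng(0)
low = rng.normal(0, 0.2, (40, 2))
objs_a = np.column_stack([low[:, 0], low[:, 0] + 0.1, low[:, 1], low[:, 1] + 0.1])
high = rng.normal(3, 0.2, (40, 2))
objs_b = np.column_stack([high[:, 0], high[:, 0] + 0.1, high[:, 1], high[:, 1] + 0.1])
X = np.vstack([objs_a, objs_b])

labels = DBSCAN(eps=0.8, min_samples=5, metric=interval_distance).fit_predict(X)
print("clusters:", set(labels))   # e.g. {0, 1}, possibly with -1 for noise
```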

14.
A subspace clustering discovery and maintenance algorithm for high-dimensional data streams (cited: 5, self-citations: 2, citations by others: 3)
With the emergence of numerous data stream applications in recent years, research on data mining algorithms for the data stream model has become an important frontier topic. This paper proposes SHStream, a subspace clustering discovery and maintenance algorithm for high-dimensional data streams based on the Hoeffding bound. The algorithm divides the stream into segments (the segment length is determined by the Hoeffding bound), performs subspace clustering on each segment, and iteratively refines the result until the required clustering accuracy is met; to cope with the dynamic nature of the stream, it also adjusts and maintains the clustering results. The algorithm can effectively handle high-dimensional data streams and cluster data with arbitrarily shaped distributions. Experiments on real and synthetic data sets show that the algorithm has good applicability and effectiveness.
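A small sketch of the Hoeffding bound that governs the segment length: with probability 1 − δ, the sample mean of n observations with value range R deviates from the true mean by at most ε = sqrt(R² ln(1/δ) / (2n)). How SHStream maps this onto segment lengths is assumed here (a doubling search for the smallest n that meets a target ε).

```python
import math

def hoeffding_bound(value_range, confidence, n):
    """Hoeffding bound: with probability 1 - delta, the sample mean of n
    observations with range R is within eps of the true mean."""
    delta = 1.0 - confidence
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def segment_length(value_range, confidence, target_eps):
    """Smallest power-of-two n for which the Hoeffding eps drops below
    target_eps; one plausible way to pick a stream segment length."""
    n = 1
    while hoeffding_bound(value_range, confidence, n) > target_eps:
        n *= 2
    return n

print(hoeffding_bound(value_range=1.0, confidence=0.95, n=1000))   # ~0.0387
print(segment_length(value_range=1.0, confidence=0.95, target_eps=0.05))  # 1024
```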

15.
To address the problems that parameter settings in existing density clustering algorithms depend on experience and that clustering accuracy is low in environments with complex densities, a fuzzy clustering method is proposed that splits and merges density clusters at the maximum-density connection points between clusters. The density of data points is computed with a Gaussian mixture model to form a high-dimensional discrete density space; the data space is then made continuous through a coarse grid, and an interpolation algorithm assigns densities to empty cells, yielding a continuous high-dimensional density space. After sorting the data points by density, density maxima are identified by testing whether a point is continuously reachable from the set of points of higher density; the neighborhoods of the maxima are then expanded in density order, and conflicts during expansion reveal the maximum-density connection points at sparse boundaries, splitting the density clusters. Finally, the membership degree between density clusters is computed from the maximum-density connection points, a membership threshold is set, and related neighboring clusters are merged to complete the clustering. Simulation comparisons with several density clustering algorithms show that the method greatly reduces the dependence on empirical parameters, uses a globally uniform merging membership, and improves class recognition under multiple densities.

16.
周世波  徐维祥 《控制与决策》2018,33(11):1921-1930
Clustering is an important research direction in data mining. To address problems found in complex data sets, such as uneven density across clusters, diverse cluster shapes and the identification of cluster centers, the k-nearest-neighbor information of each sample point is introduced to compute its relative density and, drawing on the cluster-center identification method of the clustering-by-fast-search-and-find-of-density-peaks (CFSFDP) algorithm, a clustering algorithm based on relative density and the decision graph is proposed, which quickly and accurately identifies the cluster centers of data sets with arbitrary distribution shapes and clusters them effectively. Experimental results on seven classes of typical test data sets show that the proposed algorithm has good applicability; compared with the classical DBSCAN and CFSFDP algorithms, it achieves better clustering without significantly increasing time complexity and adapts to a wider range of data set types.
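A sketch of the decision-graph quantities under stated assumptions: a k-NN based relative density ρ (here the inverse of the mean distance to the k nearest neighbors) and, for each point, the distance δ to the nearest denser point; points with large ρ·δ are cluster-center candidates, as in CFSFDP. The exact density formula and the center-selection rule are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def decision_graph(X, k):
    """Compute a k-NN relative density rho and, per point, the distance delta
    to the nearest point of higher density (the decision-graph coordinates)."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    rho = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)      # k-NN relative density
    pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    delta = np.empty(len(X))
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        delta[i] = pairwise[i, denser].min() if denser.size else pairwise[i].max()
    return rho, delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (80, 2)), rng.normal(3, 0.8, (80, 2))])
    rho, delta = decision_graph(X, k=10)
    centers = np.argsort(-(rho * delta))[:2]             # two most prominent peaks
    print("candidate centers:", centers)
```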

17.
余莉  甘淑  袁希平  李佳田 《计算机应用》2016,36(5):1267-1272
Spatial clustering is one of the main research directions in spatial data mining and knowledge discovery, but the uneven spatial density of point targets, the diversity of their distribution shapes and the presence of the "multi-bridge" linkage problem prevent distance- and density-based clustering algorithms from identifying highly aggregated point targets efficiently and effectively. A point-target clustering method based on spatial proximity is proposed: Voronoi modeling is used to identify the spatial proximity relations between point targets, the Voronoi region of influence defines the similarity criterion, and a tree structure is finally built to recognize the aggregation patterns of the point targets. In the experiments the proposed algorithm is compared with K-means and density-based spatial clustering of applications with noise (DBSCAN); the results show that it can discover point-target clusters of uneven density and arbitrary shape, accurately separates clusters linked by "bridges", and is suitable for aggregation-pattern recognition of heterogeneously distributed spatial point targets.
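A sketch of clustering by spatial proximity using the Delaunay triangulation, the dual of the Voronoi diagram: natural-neighbor edges much longer than the average edge are pruned, and the remaining connected components are reported as clusters. The pruning rule is an assumed stand-in for the paper's Voronoi-influence similarity criterion and tree construction.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def voronoi_neighbor_clusters(points, factor=1.5):
    """Build the natural-neighbor (Delaunay) graph, drop edges longer than
    factor * mean edge length, and return connected-component labels."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                 # collect unique triangle edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((simplex[a], simplex[b]))
            edges.add((i, j))
    edges = np.array(list(edges))
    lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    keep = edges[lengths < factor * lengths.mean()]
    graph = csr_matrix((np.ones(len(keep)), (keep[:, 0], keep[:, 1])),
                       shape=(len(points), len(points)))
    _, labels = connected_components(graph, directed=False)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 0.5, (80, 2)), rng.normal(6, 0.5, (80, 2))])
    print("clusters found:", len(set(voronoi_neighbor_clusters(pts))))  # typically 2
```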

18.
FDBSCAN: a fast DBSCAN algorithm (cited: 19, self-citations: 0, citations by others: 19)
Cluster analysis is an important technique with broad application prospects in data mining, statistical data analysis, pattern matching and image processing. Many clustering algorithms have been proposed; among them, DBSCAN is a density-based spatial clustering algorithm with excellent performance. Using the density-based clustering concept, DBSCAN can discover classes of arbitrary shape and handle noise effectively with only one user-supplied parameter. This paper proposes a way to speed up DBSCAN: the new algorithm expands a class using representative objects chosen from all objects in a core object's neighborhood as seed objects, which reduces the number of region queries and lowers the I/O cost. Experimental results show that FDBSCAN can effectively …
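A simplified reconstruction of the seed-reduction idea: when a core object is expanded, only a few representative neighbors (here, the farthest points inside the radius) are enqueued as new seeds, so fewer region queries are issued than in plain DBSCAN. The original paper chooses representatives more carefully to preserve the expansion; this sketch trades some accuracy for brevity, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fdbscan_like(X, eps, min_pts, n_reps=4):
    """DBSCAN-style expansion where only n_reps representative neighbors of each
    core point are used as new seeds (fewer region queries than plain DBSCAN)."""
    nn = NearestNeighbors(radius=eps).fit(X)
    labels = np.full(len(X), -1)                 # -1 = noise / unassigned
    expanded = np.zeros(len(X), dtype=bool)
    cluster = -1
    for start in range(len(X)):
        if labels[start] != -1 or expanded[start]:
            continue
        seeds, started = [start], False
        while seeds:
            p = seeds.pop()
            if expanded[p]:
                continue
            expanded[p] = True
            d, nbrs = nn.radius_neighbors(X[p:p + 1])
            d, nbrs = d[0], nbrs[0]
            if len(nbrs) < min_pts:
                continue                          # border/noise point: do not expand
            if not started:
                cluster += 1
                started = True
            labels[nbrs[labels[nbrs] == -1]] = cluster    # neighborhood joins cluster
            reps = nbrs[np.argsort(d)[::-1][:n_reps]]     # only the farthest reps
            seeds.extend(int(i) for i in reps)            # become new seeds
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(4, 0.3, (200, 2))])
    print(np.unique(fdbscan_like(X, eps=0.5, min_pts=5), return_counts=True))
```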

19.
Although many clustering algorithms are available in the literature, existing algorithms are usually afflicted by practical problems of one form or another, including parameter dependence and the inability to generate clusters of arbitrary shapes. In this paper we aim to solve these two problems by merging the merits of dominant sets and density-based clustering algorithms. We first apply histogram equalization to eliminate the parameter-dependence problem of the dominant sets algorithm. Since the clusters obtained this way are usually smaller than the real ones, a density-threshold-based cluster-growing step is then used to improve the clustering results, with the involved parameters determined from the initial clusters. This is followed by a second cluster-growing step that makes use of the density relationship between neighboring data. Data clustering experiments and comparison with other algorithms validate the effectiveness of the proposed algorithm.

20.
A density-based incremental grid clustering algorithm (cited: 29, self-citations: 0, citations by others: 29)
This paper proposes GDcA, a density-based grid clustering algorithm that discovers clusters of arbitrary shape in large spatial databases. The algorithm first partitions the data space into cells of equal volume and then clusters the cells; only cells whose density is not below a given threshold are expanded, which greatly reduces the time complexity. Building on GDcA, an incremental clustering algorithm, IGDcA, is presented, which is suitable for batch updates of the data.
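A tiny sketch of the incremental step as read from the abstract: a batch of new points only changes the counts of the cells it falls into, so only cells that newly cross the density threshold (and their neighbors) need to be re-merged with the existing clusters. The function and parameter names are illustrative, not taken from IGDcA.

```python
import numpy as np

def cell_key(p, cell_size):
    """Grid cell index of a point (equal-volume cells)."""
    return tuple(int(v) for v in np.floor(p / cell_size))

def batch_update(counts, new_points, cell_size, min_count):
    """Update per-cell counts for a batch of new points and report which cells
    have just crossed the density threshold and therefore need re-clustering."""
    newly_dense = set()
    for p in new_points:
        k = cell_key(p, cell_size)
        counts[k] = counts.get(k, 0) + 1
        if counts[k] == min_count:            # crossed the threshold just now
            newly_dense.add(k)
    return counts, newly_dense

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    counts, _ = batch_update({}, rng.normal(0, 0.4, (100, 2)), 0.5, 3)
    counts, new_cells = batch_update(counts, rng.normal(0, 0.4, (50, 2)), 0.5, 3)
    print("cells that just became dense:", len(new_cells))
    # Only these cells and their neighbors need to be merged into existing clusters.
```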
