Similar Documents
19 similar documents found (search time: 359 ms)
1.
The DBSCAN clustering algorithm cannot effectively cluster datasets whose density varies. The VDBSCAN algorithm uses the k-dist plot to automatically obtain a neighborhood radius for the data objects at each density level, thereby solving the clustering problem for datasets with several density levels. The k-VDBSCAN algorithm obtains the value of k automatically, reducing the influence of the parameter k on the final clustering result of VDBSCAN. Building on the original k-VDBSCAN algorithm, and exploiting the characteristics of the distances between data objects in the dataset itself, an improved algorithm for automatically obtaining k is proposed. Theoretical analysis and experimental results show that the improved algorithm obtains the parameter k automatically and effectively, and clearly improves both the clustering results and the running time.
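As a rough illustration of the k-dist idea that VDBSCAN and k-VDBSCAN build on (not the paper's improved k-selection procedure), the sketch below computes each point's distance to its k-th nearest neighbour and sorts it; plateaus in this curve suggest one neighbourhood radius per density level. The value k=4 and the toy two-density dataset are assumptions made for the demo.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_dist_curve(X, k=4):
    """Return the distance from each point to its k-th nearest neighbor, sorted ascending."""
    # k+1 neighbors because each point is its own nearest neighbor at distance 0
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    return np.sort(dists[:, k])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two clusters with very different densities
    dense = rng.normal(0.0, 0.05, size=(200, 2))
    sparse = rng.normal(5.0, 0.8, size=(200, 2))
    X = np.vstack([dense, sparse])
    curve = k_dist_curve(X, k=4)
    # Plateaus in the sorted k-dist curve correspond to density levels;
    # VDBSCAN-style methods read one eps per plateau.
    print("k-dist quantiles:", np.quantile(curve, [0.25, 0.5, 0.75, 0.95]).round(3))
```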

2.
贾洪杰  丁世飞  史忠植 《软件学报》2015,26(11):2836-2846
Spectral clustering transforms the clustering problem into a graph-partitioning problem and is a clustering method based on algebraic graph theory. To optimize the graph-partitioning objective, one usually exploits properties of the Rayleigh quotient: the original data points are mapped into a low-dimensional feature space formed by eigenvectors of the Laplacian matrix and then clustered there. However, during spectral clustering the space complexity of storing the similarity matrix is O(n²), and the eigendecomposition of the Laplacian matrix generally takes O(n³) time; such complexity is unacceptable for large-scale data. It has been proved that Normalized Cut graph clustering and weighted kernel k-means are both equivalent to a matrix trace-maximization problem, so the weighted kernel k-means algorithm can be used to optimize the Normalized Cut objective, avoiding the eigendecomposition of the Laplacian matrix. However, weighted kernel k-means must compute the kernel matrix, whose space complexity is still O(n²). To meet this challenge, an approximate weighted kernel k-means algorithm is proposed that uses only part of the kernel matrix to solve the spectral clustering problem on large data. Theoretical analysis and experimental comparisons show that approximate weighted kernel k-means clusters about as well as weighted kernel k-means while greatly reducing the time and space complexity.
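To make the trace-maximization connection concrete, the following is a minimal weighted kernel k-means iteration on a precomputed kernel matrix, i.e. a sketch of the exact algorithm that the paper approximates; the approximate variant, which uses only part of the kernel matrix, is not reproduced. The RBF kernel and the uniform weights in the demo are assumptions.

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=20, seed=0):
    """Plain weighted kernel k-means on a precomputed kernel matrix K (n x n)
    with per-point weights w; returns cluster labels."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if idx.size == 0:                 # re-seed an empty cluster
                idx = rng.integers(0, n, size=1)
            wc = w[idx]
            sw = wc.sum()
            # ||phi(x_i) - m_c||^2 up to the K_ii term, which is the same for every cluster
            second = K[:, idx] @ wc / sw
            third = wc @ K[np.ix_(idx, idx)] @ wc / sw**2
            dist[:, c] = -2.0 * second + third
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / 2.0)                     # RBF kernel (assumed for the demo)
    print(np.bincount(weighted_kernel_kmeans(K, np.ones(len(X)), k=2)))
```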

3.
丁世飞  徐晓  王艳茹 《软件学报》2020,31(11):3321-3333
Clustering by fast search and find of density peaks (DPC) is an effective algorithm that quickly locates cluster centers from local density and relative distance. DPC finds density peaks as cluster centers via a decision graph, does not require the number of clusters to be specified in advance, and can discover clusters of arbitrary shape. However, both the local density and the relative distance are computed from a similarity matrix that depends only on a distance metric, so on complex data, especially data that is unevenly distributed or high-dimensional, the results of DPC are unsatisfactory. In addition, DPC has no unified measure of local density; different measures must be chosen for different datasets. Third, the cutoff distance dc considers only the global distribution of the data and ignores local information, so changing dc affects the clustering result, particularly on small datasets. To address these drawbacks, an optimized density peaks clustering algorithm based on a dissimilarity measure (DDPC) is proposed: a block-based dissimilarity measure is introduced to compute the similarity matrix, the K-nearest-neighbor information of each sample is computed from this new similarity matrix, and the local density is then redefined from that K-nearest-neighbor information. Experiments on classical datasets show that DDPC outperforms the optimized DPC variants FKNN-DPC and DPC-KNN and obtains satisfactory results on unevenly distributed and high-dimensional datasets; at the same time, it unifies the measurement of local density and avoids the influence of the cutoff distance dc on the clustering result in the traditional DPC algorithm.
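For orientation, here is a minimal sketch of the two quantities behind the DPC decision graph, with the local density taken from K-nearest-neighbour distances in the spirit of the KNN-based variants discussed above; the block-based dissimilarity measure of DDPC itself is not reproduced, and the density kernel and k=8 are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dpc_scores(X, k=8):
    """Return (rho, delta): KNN-based local density and distance to the nearest
    point of higher density, the two axes of the DPC decision graph."""
    n = len(X)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    # Higher density when the k nearest neighbors are close
    rho = np.exp(-dists[:, 1:].mean(axis=1) ** 2)
    full = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) ** 0.5
    delta = np.empty(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = full[i].max() if higher.size == 0 else full[i, higher].min()
    return rho, delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(4, 0.3, (100, 2))])
    rho, delta = dpc_scores(X)
    # Points with both large rho and large delta are density peaks (cluster centers)
    print("candidate centers:", np.argsort(rho * delta)[-2:])
```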

4.
Rough clustering is an effective family of algorithms for clustering under uncertainty. By analyzing the rough k-means algorithm, this paper points out weaknesses in how its three parameters w_l, w_u, and ε are set, and proposes an adaptive rough k-means clustering algorithm that further improves the clustering quality of rough k-means and reduces its sensitivity to noise. Finally, experiments verify the effectiveness of the algorithm.
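For reference, a minimal sketch of one Lingras-style rough k-means iteration showing where the three parameters w_l, w_u and ε enter (here ε acts as a distance-ratio threshold for boundary membership); the fixed values used below are assumptions, whereas adapting them is exactly what the paper proposes and is not reproduced here.

```python
import numpy as np

def rough_kmeans_step(X, centers, w_l=0.7, w_u=0.3, eps=1.2):
    """One assignment + update step of Lingras-style rough k-means.
    eps is the distance-ratio threshold deciding boundary membership."""
    k = len(centers)
    lower = [[] for _ in range(k)]
    boundary = [[] for _ in range(k)]
    for x in X:
        d = np.linalg.norm(centers - x, axis=1)
        nearest = d.argmin()
        # Clusters whose distance is within a factor eps of the nearest one
        close = np.where(d <= eps * d[nearest])[0]
        if close.size > 1:
            for c in close:              # ambiguous point: upper approximations only
                boundary[c].append(x)
        else:
            lower[nearest].append(x)     # unambiguous point: lower approximation
    new_centers = centers.copy()
    for c in range(k):
        lo = np.mean(lower[c], axis=0) if lower[c] else None
        bd = np.mean(boundary[c], axis=0) if boundary[c] else None
        if lo is not None and bd is not None:
            new_centers[c] = w_l * lo + w_u * bd
        elif lo is not None:
            new_centers[c] = lo
        elif bd is not None:
            new_centers[c] = bd
    return new_centers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(10):
        centers = rough_kmeans_step(X, centers)
    print(centers.round(2))
```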

5.
Approximation complexity of k-Median and analysis of a local search approximation algorithm   (Cited by 1: self-citations 0, other citations 1)
Approximation algorithms for the k-Median problem have long been a focus of computer science research, but most existing results concern Euclidean and metric spaces; for general distance spaces, results on k-Median have been lacking for years. This paper considers k-Median in general distance spaces. Let dmax/dmin denote the maximum, over all client points of a k-Median instance, of the ratio of the longest to the shortest edge incident to that client. It is first proved that the k-Median problem with dmax/dmin ≤ ω+ε admits no polynomial-time approximation algorithm with ratio less than 1+(ω−1)/e unless NP ⊆ DTIME(n^O(log log n)); from this it follows that metric k-Median cannot be approximated within 1+2/e unless NP ⊆ DTIME(n^O(log log n)). A local search algorithm for k-Median is then given; the analysis shows that if dmax/dmin ≤ ω, its approximation ratio is 1+(ω−1)/2. The result also applies to metric k-Median: when ω ≤ 5, the local search algorithm achieves an approximation ratio of 3 for metric k-Median, better than the existing bound of 3+2/p. Computer experiments further examine the practical performance of the local search algorithm for k-Median and ways to improve it.
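A hedged sketch of a single-swap local search for k-Median over an explicit distance matrix, the general scheme analysed above; the random initialisation, the toy instance and the accept-any-improvement stopping rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kmedian_cost(D, medians):
    """Sum over clients of the distance to the closest open median."""
    return D[:, medians].min(axis=1).sum()

def local_search_kmedian(D, k, seed=0):
    """Single-swap local search: keep swapping one open median for one closed
    point while the total connection cost strictly decreases."""
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    medians = list(rng.choice(n, size=k, replace=False))
    best = kmedian_cost(D, medians)
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for cand in range(n):
                if cand in medians:
                    continue
                trial = medians[:i] + [cand] + medians[i + 1:]
                cost = kmedian_cost(D, trial)
                if cost < best:
                    medians, best, improved = trial, cost, True
    return sorted(medians), best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.2, (30, 2)) for c in (0.0, 3.0, 6.0)])
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    print(local_search_kmedian(D, k=3))
```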

6.
The quantum-behaved QPSO algorithm, proposed on the basis of PSO, is applied to gene expression datasets. The QPSO gene-clustering algorithm assigns N genes to K user-specified clusters so as to minimize the Total Within-Cluster Variation (TWCV) function. Drawing on the strengths of k-means, the particle swarm is re-initialized with the result of k-means clustering, yielding the KQPSO and KPSO algorithms that combine k-means with QPSO and PSO, respectively. A comparison of the results of the five clustering algorithms k-means, PSO, QPSO, KPSO, and KQPSO on four experimental datasets shows that QPSO performs well for gene expression data analysis.
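For reference, the TWCV objective that the swarm minimizes is the sum, over clusters, of the squared deviations of each gene's expression profile from its cluster mean; a minimal computation on assumed toy data:

```python
import numpy as np

def twcv(X, labels, k):
    """Total Within-Cluster Variation: sum of squared deviations of each
    row of X from the mean of its assigned cluster."""
    total = 0.0
    for c in range(k):
        members = X[labels == c]
        if len(members):
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(12, 5))                 # 12 genes, 5 conditions
    labels = rng.integers(0, 3, size=12)         # a candidate particle's assignment
    print(round(twcv(X, labels, k=3), 3))
```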

7.
谢民主  陈建二  王建新 《软件学报》2007,18(9):2070-2082
The individual haplotyping MSR (minimum SNP removal) problem asks how to remove the fewest SNP (single-nucleotide polymorphism) sites from an individual's sequenced fragment data so that the individual's haplotypes can be determined. For this problem, Bafna et al. gave an algorithm with time complexity O(2^k·n²·m), where m is the total number of DNA fragments, n is the number of SNP sites, and k is the number of holes (sites with missing values) in a fragment. Since a Mate-Pair fragment can contain up to 100 holes, Bafna's algorithm is usually infeasible when the fragment data include Mate-Pairs. Exploiting the characteristics of fragment data, a new algorithm with time complexity O((n−1)(k1−1)k2²·2^h + (k1+1)·2^h + nk2 + mk1) is proposed, where k1 is the maximum number of SNP sites covered by a single fragment (at most n), k2 is the maximum number of fragments covering the same SNP site (usually at most 19), and h is the maximum number of fragments that cover the same SNP site but have a missing value there (at most k2). The time complexity of the new algorithm has no direct dependence on the maximum number k of holes in a fragment, so it remains efficient when the data contain Mate-Pair fragments, giving it good scalability and practical value.

8.
Objective: Hyperspectral images have a very large number of bands, which leads to the "curse of dimensionality" in interpretation and classification. To address this, building on the k-means clustering algorithm, a hyperspectral image classification algorithm based on entropy-weighted k-means clustering with global information is proposed that accounts for the importance of each band to each cluster as well as between-cluster information. Method: First, band weights are introduced to characterize the importance of each band to each cluster, and an entropy measure is defined to express these weights. Second, to avoid locally optimal clustering, a between-cluster distance measure is introduced to achieve globally optimal clustering. Finally, both measures are incorporated into the k-means objective function, and the optimal classification is obtained by minimizing this objective. Results: To verify the effectiveness of the proposed method, the land-cover classes in the reference maps of the Salinas and Pavia University hyperspectral images were merged according to the degree of difference in their spectral reflectance, and the merged maps were used as new reference classifications. The proposed algorithm and the traditional k-means algorithm were applied to the Salinas and Pavia University images, and the results were evaluated qualitatively and quantitatively. For the merged land-cover classes, whose spectral reflectance differs substantially, the proposed algorithm gives visually better classifications than traditional k-means; in terms of accuracy, its overall accuracies are 92.20% and 82.96%, versus 83.39% and 67.06% for k-means, i.e. 8.81 and 15.9 percentage points higher. Conclusion: An entropy-weighted k-means clustering algorithm with global information is proposed for hyperspectral image classification; the experiments show that it classifies land-cover targets with different degrees of spectral-reflectance difference well.
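A hedged sketch of the entropy-regularized band-weight update that entropy-weighted k-means relies on: bands with small within-cluster dispersion receive large weights, and an entropy term keeps the weights spread out. The between-cluster (global-information) term proposed in this paper is not included, and gamma and the toy data are assumptions.

```python
import numpy as np

def update_band_weights(X, labels, centers, gamma=1.0):
    """Per-cluster band weights: bands with small within-cluster dispersion get
    large weight, with an entropy term (strength gamma) keeping weights spread out."""
    k, d = centers.shape
    W = np.full((k, d), 1.0 / d)
    for c in range(k):
        members = X[labels == c]
        if len(members) == 0:
            continue
        disp = ((members - centers[c]) ** 2).sum(axis=0)   # per-band dispersion D_cj
        logits = -disp / gamma
        logits -= logits.max()                             # numerical stability
        W[c] = np.exp(logits) / np.exp(logits).sum()
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))        # 100 pixels, 6 "bands" (toy stand-in)
    labels = rng.integers(0, 2, size=100)
    centers = np.vstack([X[labels == c].mean(axis=0) for c in range(2)])
    print(update_band_weights(X, labels, centers).round(3))
```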

9.
An adaptive soft subspace clustering algorithm   (Cited by 6: self-citations 0, other citations 6)
陈黎飞  郭躬德  姜青山 《软件学报》2010,21(10):2513-2523
Soft subspace clustering is an important tool for high-dimensional data analysis. Existing algorithms usually require the user to set several global key parameters in advance and do not optimize the subspaces themselves. A new objective function for soft subspace clustering is proposed that minimizes the within-cluster compactness of each subspace cluster while maximizing the projected subspace in which each cluster lies. A new local feature-weighting scheme is derived from this objective, and on that basis an adaptive k-means-type soft subspace clustering algorithm is proposed that dynamically computes the optimal algorithm parameters during clustering from the dataset and its current partition. Experiments on real-world applications and synthetic datasets show that the algorithm substantially improves clustering accuracy and the stability of the clustering results.

10.
高晓莉  惠小静  朱乃调 《软件学报》2017,28(7):1629-1639
This paper first makes an axiomatic extension of the n-valued Goguen propositional logic, Goguen_{~,△}, denoted Π_{~,△}. Using the induced functions of formulas, the Γ-k truth degree of a formula under the connective k (where k is either ~ or △) relative to a locally finite theory Γ is defined. Properties of the Γ-k truth degree in Π_{~,△}, such as the MP rule and the HS rule, are discussed. Finally, the Γ-k similarity degree and the Γ-k pseudo-distance between two formulas are defined in Π_{~,△}, and several good properties of the Γ-k similarity degree and Γ-k pseudo-distance of formulas under the connective k relative to a locally finite theory Γ are obtained.

11.
Color quantization is an important operation with many applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, k-means has not received much respect in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, we investigate the performance of k-means as a color quantizer. We implement fast and exact variants of k-means with several initialization schemes and then compare the resulting quantizers to some of the most popular quantizers in the literature. Experiments on a diverse set of images demonstrate that an efficient implementation of k-means with an appropriate initialization strategy can in fact serve as a very effective color quantizer.
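A minimal color quantizer along these lines, using scikit-learn's k-means with k-means++ initialization; the synthetic random "image" is an assumption standing in for real RGB data.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize(pixels, n_colors=16, seed=0):
    """Map each RGB pixel to the nearest of n_colors cluster centroids (the palette)."""
    km = KMeans(n_clusters=n_colors, init="k-means++", n_init=1, random_state=seed)
    labels = km.fit_predict(pixels.reshape(-1, 3).astype(np.float64))
    palette = km.cluster_centers_
    return palette[labels].reshape(pixels.shape).astype(np.uint8), palette

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
    quantized, palette = quantize(img, n_colors=8)
    print(quantized.shape, len(palette))
```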

12.
The rough k-means clustering algorithm and its extensions have been introduced and successfully applied to real-life data where clusters do not necessarily have crisp boundaries. Experiments with the rough k-means clustering algorithm have shown that it provides a reasonable set of lower and upper bounds for a given dataset. However, the same weight was used for all data objects in a lower or upper approximation when computing the new centre of each cluster, ignoring the different impacts of objects within the same approximation. An improved rough k-means clustering based on a weighted distance measure with a Gaussian function is proposed in this paper. The validity of this algorithm is demonstrated by simulation and experimental analysis.

13.
In this paper, we present a fast global k-means clustering algorithm by making use of the cluster membership and geometrical information of a data point. This algorithm is referred to as MFGKM. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS. MCS determines more than one starting point for each step of cluster split, while the available fast and modified global k-means clustering algorithms select one starting point for each cluster split. Our proposed method MFGKM can obtain the least distortion, while MFGKM+MCS may give the least computing time. Compared to the modified global k-means clustering algorithm, our method MFGKM can reduce the computing time and number of distance calculations by factors of 3.78-5.55 and 21.13-31.41, respectively, with an average distortion reduction of 5,487 for the Statlog data set. Compared to the fast global k-means clustering algorithm, our method MFGKM+MCS can reduce the computing time by a factor of 5.78-8.70 with an average distortion reduction of 30,564 on the same data set. The performance of our proposed methods is more remarkable when a data set with higher dimension is divided into more clusters.

14.
Clustering a large volume of data in a distributed environment is a challenging issue. Data stored across multiple machines are huge in size, and the solution space is large. A genetic algorithm deals effectively with a large solution space and provides better solutions. In this paper, we propose a novel clustering algorithm for distributed datasets that combines a genetic algorithm (GA) using the Mahalanobis distance with the k-means clustering algorithm. The proposed algorithm has two phases: in phase 1, the GA is applied in parallel to data chunks located across different machines, with the Mahalanobis distance used as the fitness value; since it accounts for the covariance between data points, it provides a better representation of the initial data. In phase 2, k-means with k-means++ initialization is applied to the intermediate output to obtain the final result. The proposed algorithm is implemented on the Hadoop framework, which is inherently designed to handle distributed datasets in a fault-tolerant manner. Extensive experiments were conducted on multiple real-life and synthetic datasets to measure the performance of the proposed algorithm, and the results were compared with the MapReduce-based algorithms mrk-means, parallel k-means, and scaling GA.
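As a small illustration of the fitness idea only (not the authors' Hadoop/MapReduce pipeline), the Mahalanobis distance from each point to its nearest candidate centre can be computed from the data covariance as below; aggregating these distances into a single fitness value per GA chromosome is an assumption made for the demo.

```python
import numpy as np

def mahalanobis_fitness(X, centers):
    """Sum of Mahalanobis distances from each point to its nearest candidate centre,
    using the covariance of X; smaller is fitter."""
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diffs = X[:, None, :] - centers[None, :, :]               # (n, k, d)
    d2 = np.einsum("nkd,de,nke->nk", diffs, cov_inv, diffs)   # squared Mahalanobis
    return np.sqrt(np.maximum(d2, 0)).min(axis=1).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=200)
    centers = X[rng.choice(len(X), size=3, replace=False)]    # one GA chromosome
    print(round(mahalanobis_fitness(X, centers), 2))
```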

15.
Clustering is a popular data analysis and data mining technique. A popular approach to clustering is based on k-means, in which the data are partitioned into K clusters. However, the k-means algorithm depends strongly on the initial state and converges to a locally optimal solution. This paper presents a new hybrid evolutionary algorithm for the nonlinear partitional clustering problem. The proposed hybrid evolutionary algorithm, called FAPSO-ACO-K, combines FAPSO (fuzzy adaptive particle swarm optimization), ACO (ant colony optimization), and k-means, and can find better cluster partitions. The performance of the proposed algorithm is evaluated on several benchmark data sets. The simulation results show that it outperforms other algorithms such as PSO, ACO, simulated annealing (SA), the combination of PSO and SA (PSO–SA), the combination of ACO and SA (ACO–SA), the combination of PSO and ACO (PSO–ACO), the genetic algorithm (GA), Tabu search (TS), honey bee mating optimization (HBMO), and k-means on the partitional clustering problem.

16.
While reducing the dimensionality of a corpus, concept decomposition (CD) based on fuzzy K-means (FKM) clustering provides better approximation than CD based on spherical k-means clustering. However, performance of the FKM algorithm is limited by its distance metric and it is proved that assignment of feature weights can improve the performance of FKM. Our work builds upon this analysis and proposes two approaches to feature weight selection. Using four testing document collections, we demonstrate that the CD based on the proposed feature-weighted FKM provides better approximation than the CD based on FKM while maintaining the quality of retrieval.

17.
Clustering is a very powerful data mining technique for topic discovery from text documents. The partitional clustering algorithms, such as the family of k-means, are reported performing well on document clustering. They treat the clustering problem as an optimization process of grouping documents into k clusters so that a particular criterion function is minimized or maximized. Usually, the cosine function is used to measure the similarity between two documents in the criterion function, but it may not work well when the clusters are not well separated. To solve this problem, we applied the concepts of neighbors and link, introduced in [S. Guha, R. Rastogi, K. Shim, ROCK: a robust clustering algorithm for categorical attributes, Information Systems 25 (5) (2000) 345–366], to document clustering. If two documents are similar enough, they are considered as neighbors of each other. And the link between two documents represents the number of their common neighbors. Instead of just considering the pairwise similarity, the neighbors and link involve the global information into the measurement of the closeness of two documents. In this paper, we propose to use the neighbors and link for the family of k-means algorithms in three aspects: a new method to select initial cluster centroids based on the ranks of candidate documents; a new similarity measure which uses a combination of the cosine and link functions; and a new heuristic function for selecting a cluster to split based on the neighbors of the cluster centroids. Our experimental results on real-life data sets demonstrated that our proposed methods can significantly improve the performance of document clustering in terms of accuracy without increasing the execution time much.
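A minimal sketch of the neighbor and link notions borrowed from ROCK as used above: two documents are neighbors when their cosine similarity reaches a threshold, and the link between two documents is their number of common neighbors. The threshold value and the random toy vectors are assumptions.

```python
import numpy as np

def cosine_matrix(X):
    """Pairwise cosine similarities between the rows of X."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Xn = X / norms
    return Xn @ Xn.T

def link_matrix(X, theta=0.3):
    """links[i, j] = number of documents that are neighbors of both i and j,
    where 'neighbor' means cosine similarity >= theta."""
    A = (cosine_matrix(X) >= theta).astype(int)
    np.fill_diagonal(A, 0)            # a document is not its own neighbor here
    return A @ A                      # common-neighbor counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((6, 20))           # 6 toy documents in a 20-term space
    print(link_matrix(X, theta=0.8))
```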

18.
We present the global k-means algorithm which is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N (with N being the size of the data set) executions of the k-means algorithm from suitable initial positions. We also propose modifications of the method to reduce the computational load without significantly affecting solution quality. The proposed clustering methods are tested on well-known data sets and they compare favorably to the k-means algorithm with random restarts.
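A compact sketch of the incremental scheme described above: with k−1 centers fixed, every data point is tried in turn as the starting position of the k-th center, k-means is run from each such start, and the lowest-inertia solution is kept. This is the plain global k-means; the paper's modifications for reducing the computational load are not reproduced, and the toy data are an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def global_kmeans(X, k_max):
    """Incrementally grow from 1 to k_max centers; each new center is seeded,
    in turn, at every data point and the lowest-inertia solution is kept."""
    centers = X.mean(axis=0, keepdims=True)          # optimal 1-means solution
    for k in range(2, k_max + 1):
        best_inertia, best_centers = np.inf, None
        for x in X:
            init = np.vstack([centers, x])
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if km.inertia_ < best_inertia:
                best_inertia, best_centers = km.inertia_, km.cluster_centers_
        centers = best_centers
    return centers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.2, (40, 2)) for c in (0.0, 2.0, 4.0)])
    print(global_kmeans(X, k_max=3).round(2))
```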

19.
The phenoregion delineation facilitates more effective monitoring and more accurate forecasting of land-surface phenology (LSP), and thereby can greatly improve natural resources management. This article delineated a series of phenoregion maps by applying the Dynamic-Time-Warping (DTW)-based k-means++ clustering on normalized difference vegetation index (NDVI) time series. The DTW distance, a well-known shape-based similarity measure for time series data, was used as the distance measure instead of the traditional Euclidean distance in k-means++ clustering. These phenoregion maps were compared with the ones clustered based on the similarity of phenological forcing variables. The results demonstrated that the DTW-based k-means++ clustering can capture much more homogeneous phenological cycles within each phenoregion; the two types of phenoregion maps have a medium degree of spatial concordance, and their representativeness of vegetation types is comparable. The phenocycle-based phenoregion map with 15 phenoregions was selected as the optimal one, based on the criteria of cluster cohesion and separation.
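To make the distance swap concrete, here is a self-contained DTW distance plus the assignment step of a k-means-style loop that uses DTW in place of the Euclidean distance; this is a generic sketch, not the authors' NDVI processing chain, and the toy sinusoidal "phenology" curves are assumptions.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def assign_by_dtw(series, centroids):
    """Assign each series to the centroid with the smallest DTW distance."""
    return np.array([min(range(len(centroids)), key=lambda c: dtw(s, centroids[c]))
                     for s in series])

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 40)
    series = [np.sin(t), np.sin(t + 0.4), np.cos(t), np.cos(t + 0.4)]  # toy NDVI-like curves
    centroids = [np.sin(t), np.cos(t)]
    print(assign_by_dtw(series, centroids))      # expected: [0 0 1 1]
```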
