Similar Documents
 20 similar documents found (search time: 31 ms)
1.
An Adaptive Soft Subspace Clustering Algorithm   Cited in total: 6 (self-citations: 0, by others: 6)
陈黎飞  郭躬德  姜青山 《软件学报》2010,21(10):2513-2523
Soft subspace clustering is an important technique for high-dimensional data analysis. Existing algorithms typically require the user to set several global key parameters in advance and do not optimize the subspaces themselves. This paper proposes a new objective function for soft subspace clustering, which minimizes the intra-cluster compactness of the subspace clusters while maximizing the projected subspace in which each cluster resides. A new local feature weighting scheme is derived from this objective, and on that basis an adaptive k-means-type soft subspace clustering algorithm is proposed. During clustering, the algorithm dynamically computes optimal parameter values from the data set and its current partition. Experiments on real-world applications and synthetic data sets show that the algorithm substantially improves clustering accuracy and the stability of the clustering results.

2.
Density Conscious Subspace Clustering for High-Dimensional Data   Cited in total: 2 (self-citations: 0, by others: 2)
Instead of finding clusters in the full feature space, subspace clustering is an emerging task that aims at detecting clusters embedded in subspaces. Most previous work in the literature takes density-based approaches, in which a cluster is regarded as a high-density region in a subspace. However, the identification of dense regions in previous work fails to consider a critical problem, called "the density divergence problem" in this paper, which refers to the phenomenon that region densities vary across subspace cardinalities. Ignoring this problem, previous work uses a single density threshold to discover dense regions in all subspaces, which incurs a serious loss of clustering accuracy (in either recall or precision of the resulting clusters) across subspace cardinalities. To tackle the density divergence problem, this paper devises a novel subspace clustering model that discovers clusters based on the relative region densities in the subspaces, where clusters are regarded as regions whose densities are relatively high compared with the other region densities in a subspace. Based on this idea, different density thresholds are adaptively determined to discover clusters in different subspace cardinalities. Because previous techniques cannot be applied in this novel clustering model, we also devise an innovative algorithm, referred to as DENCOS (DENsity COnscious Subspace clustering), which adopts a divide-and-conquer scheme to efficiently discover clusters satisfying the different density thresholds in different subspace cardinalities. As validated by extensive experiments on various data sets, DENCOS discovers clusters in all subspaces with high quality and outperforms previous work in efficiency.

3.
To effectively discover clusters, especially arbitrarily shaped ones, many density-based clustering algorithms have been proposed in recent years, such as DBSCAN, OPTICS, DENCLUE, and CLIQUE. This paper proposes a new density-based clustering algorithm, CODU (clustering by ordering dense units). The basic idea is to sort the unit subspaces by density; a subspace forms a new cluster if its density is higher than that of its neighbors. Since the number of subspaces is far smaller than the number of data objects, the algorithm is efficient. A new data visualization method is also proposed, which maps data objects into three-dimensional space as stimulus spectra, so that the clustering results are displayed clearly.
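The ordering-by-density idea can be illustrated with a small, hypothetical sketch (the function name, the 2-D grid simplification, and the `min_count` cutoff are ours, not the paper's):

```python
from collections import Counter

def dense_unit_seeds(points, cell=1.0, min_count=2):
    """Hypothetical sketch of the CODU idea: bin 2-D points into unit cells,
    visit cells in decreasing order of density, and start a new cluster at
    every cell that is strictly denser than all 8 of its neighbouring cells."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y in points)
    seeds = []
    for (cx, cy), c in sorted(counts.items(), key=lambda kv: -kv[1]):
        neighbours = [counts.get((cx + dx, cy + dy), 0)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        if c >= min_count and c > max(neighbours):
            seeds.append((cx, cy))  # local density maximum starts a cluster
    return seeds
```

Because the loop runs over occupied cells rather than data objects, the cost is governed by the (much smaller) number of non-empty units, which is the efficiency argument made in the abstract.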

4.
When dealing with high-dimensional data, clustering faces the curse of dimensionality: in such data sets, clusters of objects exist in subspaces rather than in the whole feature space. Subspace clustering algorithms have been introduced to tackle this problem. However, noisy data points present in this type of data can have a great impact on the clustering results. To overcome both problems simultaneously, fuzzy soft subspace clustering with noise detection (FSSC-ND) is proposed. The algorithm is based on entropy-weighting soft subspace clustering and noise clustering, and uses a new objective function and update rules to achieve these goals and produce more interpretable clustering results. Several experiments were conducted on artificial and UCI benchmark datasets to assess the performance of the proposed algorithm. In addition, a number of cancer gene expression datasets are used to evaluate its performance on high-dimensional data. The results of these experiments demonstrate the superiority of FSSC-ND over the state-of-the-art clustering algorithms developed in earlier research.

5.
To address the low accuracy of existing subspace clustering methods when clusters overlap, this paper proposes an overlapping subspace clustering algorithm based on a probabilistic model. First, a mixed-norm subspace representation is used to segment the high-dimensional data into several subspaces. Then a probabilistic model with exponential-family distributions identifies the overlapping portions of the data within the subspaces and assigns each point to the correct subspace, yielding the clustering result; during parameter estimation, an alternating maximization method finds the optimum of the objective function. Experiments on synthetic and UCI data sets show that the algorithm achieves good clustering performance and is suitable for relatively large data sets.

6.
Robust projected clustering   Cited in total: 4 (self-citations: 2, by others: 2)
Projected clustering partitions a data set into several disjoint clusters, plus outliers, so that each cluster exists in a subspace. Subspace clustering enumerates clusters of objects in all subspaces of a data set, and it tends to produce many overlapping clusters. Such algorithms have been extensively studied for numerical data, but only a few have been proposed for categorical data. Typical drawbacks of existing projected and subspace clustering algorithms, for numerical or categorical data, are that they rely on parameters whose appropriate values are difficult to set or that they are unable to identify projected clusters with few relevant attributes. We present P3C, a robust algorithm for projected clustering that can effectively discover projected clusters in the data while minimizing the number of required parameters. P3C does not need the number of projected clusters as input, and can discover, under very general conditions, the true number of projected clusters. P3C is effective in detecting very low-dimensional projected clusters embedded in high-dimensional spaces. P3C positions itself between projected and subspace clustering in that it can compute both disjoint and overlapping clusters. P3C is the first projected clustering algorithm for both numerical and categorical data.

7.
The well-known clustering algorithm DBSCAN is founded on the density notion of clustering. However, the use of the global density parameter ε-distance makes DBSCAN unsuitable for datasets of varying density, and guessing a good value for it is not straightforward. In this paper, we generalise the algorithm in two ways. First, we adaptively determine the key input parameter ε-distance, which makes DBSCAN independent of domain knowledge and satisfies the unsupervised notion of clustering. Second, deriving ε-distance by examining the data distribution of each dimension makes the approach suitable for subspace clustering, which detects clusters enclosed in various subspaces of high-dimensional data. Experimental results illustrate that our approach can efficiently find clusters of varying sizes, shapes, and densities.
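The abstract does not spell out the per-dimension derivation of ε-distance, so as a stand-in the sketch below uses the common k-distance heuristic to pick ε automatically, then runs a plain DBSCAN with it (the heuristic and all names here are our assumptions, not the paper's method):

```python
import numpy as np

def estimate_eps(X, k=4):
    """k-distance heuristic for choosing eps automatically: the mean
    distance from each point to its k-th nearest neighbour."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)               # column 0 is each point's distance to itself
    return d[:, k].mean()        # column k is the k-th nearest neighbour

def dbscan(X, eps, min_pts=4):
    """Plain DBSCAN over a precomputed distance matrix; label -1 = noise."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    neighbours = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cid = -1
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue             # already clustered, or not a core point
        cid += 1
        labels[i] = cid
        stack = list(neighbours[i])
        while stack:             # grow the cluster outward from core points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(neighbours[j]) >= min_pts:
                    stack.extend(neighbours[j])
    return labels
```

With ε derived from the data rather than supplied by the user, the only remaining inputs are structural (k, min_pts), which is the sense in which such a variant becomes "independent of domain knowledge".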

8.
How can the challenges of the "curse of dimensionality" and scalability in clustering be addressed simultaneously? In this paper, we propose arbitrarily oriented synchronized clusters (ORSC), a novel, effective, and efficient method for subspace clustering inspired by synchronization. Synchronization is a basic phenomenon prevalent in nature, capable of controlling even highly complex processes such as opinion formation in a group; control of such processes is achieved by simple operations based on interactions between objects. Relying on a weighted interaction model and iterative dynamic clustering, our approach ORSC (a) naturally detects correlation clusters in arbitrarily oriented subspaces, including arbitrarily shaped nonlinear correlation clusters. Our approach is (b) robust against noise and outliers. In contrast to previous methods, ORSC is (c) easy to parameterize, since there is no need to specify the subspace dimensionality or other difficult parameters; instead, all interesting subspaces are detected in a fully automatic way. Finally, (d) ORSC outperforms most comparison methods in runtime efficiency and is highly scalable to large and high-dimensional data sets. Extensive experiments have demonstrated the effectiveness and efficiency of our approach.

9.
吴涛  陈黎飞  钟韵宁  孔祥增 《计算机应用研究》2023,40(11):3303-3308+3314
To address the difficulty of defining a subspace dissimilarity measure in traditional K-means-type soft subspace clustering, this paper proposes a probability-distance-based model of subspace dissimilarity and, on that basis, an adaptive projected clustering algorithm. First, grounded in subspace clustering theory, a formula is given for the dissimilarity between the soft subspaces associated with the clusters. Second, this is combined with soft subspace clustering to define an optimization objective for clustering, and a clustering procedure is derived using a local search strategy. A series of experiments on synthetic and real data sets shows that introducing subspace comparison lets the algorithm learn better soft subspaces for the clusters; compared with existing mainstream subspace clustering algorithms, the proposed algorithm substantially improves clustering accuracy and is well suited to high-dimensional data analysis.

10.
Clustering high-dimensional data has become a challenge in data mining due to the curse of dimensionality. To solve this problem, subspace clustering has been defined as an extension of traditional clustering that seeks to find clusters in subspaces spanned by different combinations of dimensions within a dataset. This paper presents a new subspace clustering algorithm that calculates local feature weights automatically in an EM-based clustering process. In the algorithm, the features are locally weighted using a new unsupervised weighting method, as a means to minimize a proposed clustering criterion that takes into account both the average intra-cluster compactness and the average inter-cluster separation for subspace clustering. To capture accurate subspace information, an additional outlier detection process identifies possible local outliers of subspace clusters, and is embedded between the E-step and M-step of the algorithm. The method has been evaluated on real-world gene expression data and on high-dimensional artificial data with outliers, and the experimental results have shown its effectiveness.

11.
Projective clustering by histograms   Cited in total: 5 (self-citations: 0, by others: 5)
Recent research suggests that clustering for high-dimensional data should involve searching for "hidden" subspaces with lower dimensionalities, in which patterns can be observed when data objects are projected onto the subspaces. Discovering such interattribute correlations and the location of the corresponding clusters is known as the projective clustering problem. We propose an efficient projective clustering technique by histogram construction (EPCH). The histograms help to generate "signatures", where a signature corresponds to some region in some subspace, and signatures with a large number of data objects are identified as the regions of subspace clusters. Hence, projected clusters and their corresponding subspaces can be uncovered. Compared with the best previous methods known to us, this approach is more flexible in that less prior knowledge of the data set is required, and it is also much more efficient. Our experiments compare the behavior and performance of this approach and other projective clustering algorithms under different data characteristics. The results show that our technique is scalable to very large databases, and that it returns accurate clustering results.
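The signature idea can be sketched with 1-D histograms (a deliberate simplification of EPCH; the function and parameter names are ours): per dimension, mark bins whose count is well above the uniform level, then give each point a signature recording which dense bin, if any, it occupies in each dimension. Frequent signatures indicate projected clusters, and the positions where the signature is not `None` reveal the cluster's relevant dimensions.

```python
import numpy as np
from collections import Counter

def histogram_signatures(X, bins=10, factor=2.0):
    """For each dimension, flag bins with count > factor * (n / bins) as
    dense; each point's signature is the tuple of its dense-bin index per
    dimension (None where its bin is not dense). Returns signature counts."""
    n, d = X.shape
    ids = np.empty((n, d), dtype=object)
    for j in range(d):
        hist, edges = np.histogram(X[:, j], bins=bins)
        dense = hist > factor * n / bins          # well above the uniform level
        idx = np.clip(np.digitize(X[:, j], edges) - 1, 0, bins - 1)
        for i in range(n):
            ids[i, j] = int(idx[i]) if dense[idx[i]] else None
    return Counter(tuple(row) for row in ids)
```

On data where one dimension carries two tight clusters and another is uniform noise, the two most frequent signatures carry a bin index in the clustered dimension and `None` in the noise dimension, which is exactly the subspace information a projective clustering needs.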

12.
Multi-view subspace clustering learns, from subspaces, a unified representation shared by all views and mines the latent cluster structure of the data. As a clustering method for high-dimensional data, subspace clustering is one of the research hotspots in multi-view clustering. Multi-view low-rank sparse subspace clustering combines low-rank representation with sparsity constraints; while constructing the affinity matrix, the low-rank sparse constraints capture both the global and the local structure of the data, improving subspace clustering performance. Three-way decision is a decision-making idea based on rough set models, often applied in clustering algorithms to reflect the uncertain relationship between objects and clusters. Building on the three-way decision idea, this paper designs a voting scheme as the decision criterion and combines it with multi-view sparse subspace clustering in a unified framework, yielding a new algorithm. Experiments on several synthetic and real data sets show that the algorithm improves the accuracy of multi-view clustering.

13.
A Variance-Weight-Matrix Model for Subspace Clustering of High-Dimensional Data   Cited in total: 1 (self-citations: 1, by others: 0)
When handling high-dimensional data, clustering often reduces to a problem of partitioning subspaces. Extensive experimental evidence shows that the same attribute is not equally important to every subspace of high-dimensional data. Therefore, building on the FCM algorithm, a variance weight matrix model is introduced, yielding a new clustering algorithm called WM-FCM. The algorithm adjusts the weight values through successive clustering iterations so that the important attributes stand out more prominently within each subspace, achieving better clustering results. Experiments on simulated and UCI data sets show that the improved algorithm is effective.

14.
This paper presents a new k-means-type algorithm for clustering high-dimensional objects in subspaces. In high-dimensional data, clusters of objects often exist in subspaces rather than in the entire space. For example, in text clustering, clusters of documents on different topics are categorized by different subsets of terms or keywords, and the keywords for one cluster may not occur in the documents of other clusters. This is the data sparsity problem faced in clustering high-dimensional data. In the new algorithm, we extend the k-means clustering process to calculate a weight for each dimension in each cluster, and use the weight values to identify the subsets of important dimensions that categorize different clusters. This is achieved by including the weight entropy in the objective function that is minimized in the k-means clustering process. An additional step is added to the k-means clustering process to automatically compute the weights of all dimensions in each cluster. Experiments on both synthetic and real data have shown that the new algorithm generates better clustering results than other subspace clustering algorithms, and that it is scalable to large data sets.
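The additional weight-computation step described above can be sketched in closed form: each cluster's dimension weights are a softmax of the negative per-dimension dispersion, so compact dimensions receive large weights and each cluster's subspace emerges from the weights themselves. This is a minimal illustration, not the paper's exact update; `gamma` stands for the entropy-regularization parameter, and dividing the dispersion by the cluster size is our normalization choice.

```python
import numpy as np

def update_dimension_weights(X, labels, centers, gamma=1.0):
    """One entropy-weighted k-means weight step: w[c, j] is proportional to
    exp(-D[c, j] / gamma), where D[c, j] is the (size-normalized) dispersion
    of cluster c along dimension j. Weights of each cluster sum to 1."""
    k, d = centers.shape
    W = np.full((k, d), 1.0 / d)                  # uniform fallback for empty clusters
    for c in range(k):
        pts = X[labels == c]
        if len(pts) == 0:
            continue
        D = ((pts - centers[c]) ** 2).sum(axis=0) / len(pts)
        e = np.exp(-D / gamma)
        W[c] = e / e.sum()                        # small dispersion -> large weight
    return W
```

Interleaving this step with the usual assignment and centroid updates (using the weighted squared distance for assignment) gives a k-means-type loop of the kind the abstract describes.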

15.
A major challenge in subspace clustering is that it may generate an explosive number of clusters with high computational complexity, which severely restricts its usage; the problem worsens as the dimensionality of the data increases. In this paper, we propose to summarize the set of subspace clusters into k representative clusters to alleviate the problem. Typically, subspace clusters can themselves be clustered into k groups, and the set of representative clusters can be selected from each group; in this way, only the most representative subspace clusters are returned to the user. Unfortunately, once the size of the set of representative clusters is specified, finding the optimal set is NP-hard. To solve this problem efficiently, we present two approximate methods: PCoC and HCoC. The greatest advantage of our methods is that they need only a subset of subspace clusters as input instead of the complete set: only the clusters in low-dimensional subspaces are computed and assembled into representative clusters in high-dimensional subspaces. The approximate results can be found in polynomial time. Our performance study shows both the effectiveness and the efficiency of these methods.

16.
To improve cluster ensembles for categorical data, a new correlated random subspace clustering-ensemble model is proposed. Using rough set theory, the model decomposes the categorical attributes into correlated and uncorrelated subsets, randomly generates multiple correlated subspaces over the correlated subset, and clusters the categorical data in each; the final partition is obtained by ensembling several high-quality, diverse clustering results. In addition, the rough-set reduct concept is applied to determine the number of attributes in the correlated subspaces, effectively avoiding the influence of this parameter on the clustering result. Experiments on UCI data sets show that the new model outperforms existing models, demonstrating its effectiveness.

17.
Building on Dempster-Shafer evidence theory, this paper defines creditable subspaces and presents CSL (creditable subspace labeling), a greedy algorithm that discovers all of them. The method iteratively discovers the set Cs of creditable subspaces of the original feature space; according to the needs of the application domain, the user then applies a traditional clustering algorithm to each creditable subspace in Cs to obtain the clustering results. Experimental results show that CSL correctly recovers the true subspaces of the original feature space, offering traditional clustering algorithms a new route to clustering in high-dimensional data spaces.

18.
An Effective Grid- and Density-Based Clustering Algorithm   Cited in total: 12 (self-citations: 0, by others: 12)
胡泱  陈刚 《计算机应用》2003,23(12):64-67
This paper discusses the concepts, techniques, and algorithms of clustering in data mining, and proposes a grid- and density-based algorithm. Its advantages are that it automatically discovers the subspaces containing interesting knowledge and mines all the clusters within them, and that it handles high-dimensional data and large data tables well. The final result is expressed in DNF form.

19.
A Grid-Based Subspace Clustering Algorithm for High-Dimensional Data Streams   Cited in total: 4 (self-citations: 0, by others: 4)
Based on an analysis of grid clustering methods, and combining the bottom-up and top-down grid approaches, this paper designs a subspace clustering algorithm that processes high-dimensional data streams online. By exploiting the data-compression capability of the bottom-up grid method and the high-dimensional capability of the top-down grid method, the algorithm quickly identifies clusters lying in different subspaces from a single scan of the data stream. Theoretical analysis and experiments on several data sets show that the algorithm achieves high accuracy and efficiency.

20.
Subspace clustering finds sets of objects that are homogeneous in subspaces of high-dimensional datasets, and has been successfully applied in many domains. In recent years, a new breed of subspace clustering algorithms, which we denote enhanced subspace clustering algorithms, has been proposed to (1) handle the increasing abundance and complexity of data and (2) improve the clustering results. In this survey, we present these enhanced approaches to subspace clustering by discussing the problems they solve, their cluster definitions, and their algorithms. Besides enhanced subspace clustering, we also present basic subspace clustering and the related work in high-dimensional clustering.
