Similar Literature
20 similar documents found
1.
K-means-type clustering algorithms for mixed data consisting of numeric and categorical attributes suffer from the cluster center initialization problem: the final clustering results depend on the initial cluster centers. Random initialization is a popular technique, but it produces inconsistent results across runs. The K-Harmonic means clustering algorithm overcomes this problem for pure numeric data. In this paper, we extend K-Harmonic means to mixed datasets. We propose a definition of a cluster center and a distance measure, and use them with the K-Harmonic means cost function in the proposed algorithm. Experiments on pure categorical and mixed datasets suggest that the proposed algorithm is quite insensitive to cluster center initialization. Comparative studies with other clustering algorithms show that it produces better clustering results.
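The harmonic-mean idea behind K-Harmonic means can be sketched for the pure-numeric case; the function name and the p = 2 exponent below are illustrative choices, not the paper's mixed-data formulation:

```python
import numpy as np

def khm_cost(X, centers, p=2, eps=1e-8):
    """K-Harmonic means cost: for each point, the harmonic average of
    its distances to all K centers, summed over the dataset."""
    # d[i, j] = ||x_i - c_j||^p
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** p
    k = centers.shape[0]
    # the harmonic average dampens the influence of any single bad
    # center, which is the source of KHM's insensitivity to initialization
    return float(np.sum(k / np.sum(1.0 / (d + eps), axis=1)))
```

Because every center contributes to every point's cost, a poorly placed initial center still receives a useful gradient instead of an empty cluster.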

2.
Finite mixture models are increasingly used for model-based cluster analysis. To tackle block clustering, which aims to organize the data into homogeneous blocks, we recently proposed a block mixture model, considered it under the classification maximum likelihood approach, and developed a new algorithm for simultaneous partitioning based on the classification EM algorithm. From the estimation point of view, however, the classification maximum likelihood approach yields inconsistent parameter estimates, so in this paper we consider the block clustering problem under the maximum likelihood approach. Unfortunately, the classical EM algorithm cannot be applied directly to the block mixture model: difficulties arise from the dependence structure in the model, and approximations are required. Considering block clustering under a fuzzy approach, we propose a fuzzy block clustering algorithm to approximate the EM algorithm. To illustrate our approach, we study the case of binary data using a Bernoulli block mixture.

3.
Mixing matrix estimation in instantaneous blind source separation (BSS) can be performed by exploiting the sparsity and disjoint orthogonality of source signals. Approaches for estimating the unknown mixing process therefore typically apply clustering algorithms to the mixtures in a parametric domain where the signals have a sparse representation. In this paper, we propose two algorithms that perform discriminative clustering of the mixture signals to estimate the mixing matrix. For overdetermined BSS, we develop an algorithm that performs linear discriminant analysis based on similarity measures and combines it with K-hyperline clustering. For underdetermined source separation, we propose discriminative clustering in a high-dimensional feature space obtained by an implicit mapping, using the kernel trick. Simulations on synthetic data demonstrate improved mixing matrix estimation with the proposed algorithms in comparison to other clustering methods. Finally, we estimate mixing matrices from speech mixtures by clustering single-source points in the time-frequency domain, and show that the proposed algorithms achieve a higher signal-to-interference ratio than baseline algorithms.
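K-hyperline clustering, which the paper combines with discriminant analysis, represents each cluster by a unit direction through the origin. A minimal sketch (the function name and the eigenvector-based update are my own simplification, not the paper's discriminative variant):

```python
import numpy as np

def k_hyperline(X, k, n_iter=30, seed=0):
    """K-hyperline clustering: each cluster is a line through the origin
    with unit direction w_j; a point joins the line maximizing |<x, w_j>|,
    and w_j is updated to the principal eigenvector of the cluster scatter."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(k, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.abs(X @ W.T).argmax(axis=1)        # assignment step
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                # np.linalg.eigh returns eigenvalues in ascending order,
                # so the last column is the principal eigenvector
                _, vecs = np.linalg.eigh(pts.T @ pts)
                W[j] = vecs[:, -1]
    return labels, W
```

Since the assignment uses |<x, w>|, scaled copies of a mixture direction always land in the same cluster, which is exactly what makes this prototype suitable for sparse-mixture columns.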

4.
The CFSFDP (clustering by fast search and find of density peaks) algorithm requires cluster centers to be picked manually from a decision graph, and when it finally splits each cluster into a core and a halo it assigns some edge regions of the cluster to the halo, making the partition unreasonable. To address these problems, this paper proposes a clustering algorithm with automatic cluster-center selection and an optimized core/halo split. Using ideas from anomaly detection, it finds outliers among the cluster-center weights and takes those outliers as the cluster centers; it then introduces an intra-cluster local density to split cluster cores and halos more reasonably. Comparative experiments show that the proposed algorithm automates center selection better than CFSFDP and yields more accurate clustering results.
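The decision-graph quantities underlying CFSFDP can be sketched as follows (cutoff-kernel density; names are illustrative, and the paper's contribution of selecting centers as outliers of the center weight is not reproduced here):

```python
import numpy as np

def density_peaks_scores(X, dc):
    """CFSFDP decision-graph quantities: local density rho (number of
    neighbors within cutoff dc) and delta, the distance to the nearest
    point of strictly higher density. Cluster centers have both large."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (d < dc).sum(axis=1) - 1            # exclude the point itself
    delta = np.full(len(X), d.max())          # convention for densest points
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        if higher.size:
            delta[i] = d[i, higher].min()
    return rho, delta
```

Points with large rho * delta stand out as candidate centers; the paper's idea is to detect them automatically as anomalies instead of reading them off the decision graph.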

5.
杜航原  张晶  王文剑   《智能系统学报》2020,15(6):1113-1120
To address the design of the consensus function in clustering ensembles, this paper proposes a deep self-supervised clustering ensemble algorithm. The algorithm first computes a sample similarity matrix from the base clustering partitions using the weighted connected-triple algorithm; with the similarity matrix expressing adjacency, the base clusterings are transformed from a feature-space data representation into a graph representation. On this basis, the consensus problem for the base clusterings is recast as a graph clustering problem over that graph representation. To solve it, a self-supervised clustering-ensemble model is built with graph neural networks: a graph autoencoder learns a low-dimensional embedding of the graph, and the target distribution of the clustering ensemble is estimated from the likelihood distribution of the embedding; in turn, the ensemble objective guides the embedding process, ensuring that the learned embedding and the ensemble result are jointly optimal. Simulation experiments on a large number of datasets show that the algorithm improves the accuracy of clustering-ensemble results over algorithms such as HGPA, CSPA, and MCLA.
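As a simpler stand-in for the weighted connected-triple similarity used in the first step, a plain co-association matrix over the base clusterings can be sketched (the paper's triple weighting is not reproduced):

```python
import numpy as np

def coassociation(base_labelings):
    """Pairwise similarity from a clustering ensemble: the fraction of
    base clusterings that put samples i and j in the same cluster.
    Thresholding this matrix yields the adjacency of the ensemble graph."""
    B = np.asarray(base_labelings)        # shape: (n_clusterings, n_samples)
    S = np.zeros((B.shape[1], B.shape[1]))
    for lab in B:
        S += (lab[:, None] == lab[None, :])
    return S / B.shape[0]
```

In the paper's pipeline, a similarity matrix of this kind is what turns the set of base partitions into graph data for the autoencoder.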

6.
This paper studies underdetermined blind source separation under a non-strictly sparse condition. Different from current approaches in the literature, we propose a new and more effective algorithm to estimate the mixing matrix from noisy output data sets. After introducing a clustering prototype in the orthogonal complement space and extending the normal-vector clustering prototype, we present a new method that combines fuzzy clustering with an eigenvalue decomposition technique to estimate the mixing matrix in the non-strictly sparse setting. A convergent algorithm for estimating the mixing matrix is established, and numerical simulations demonstrate the effectiveness of the proposed approach.

7.
To improve the precision and stability of the K-medoids algorithm, and to remove its need for a manually specified number of clusters and its sensitivity to initial medoids, an improved K-medoids algorithm based on density-weight Canopy is proposed. The algorithm first computes a density value for every sample, takes the sample with the highest density as the first cluster center, and removes that density cluster from the dataset; it then selects the remaining cluster centers by computing weights for the remaining samples; finally, density-weight Canopy is used as a preprocessing step for K-medoids, and its output supplies the number of clusters and the initial medoids. Simulations on real UCI datasets and synthetic datasets show that the algorithm achieves high precision and good stability.
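The first step described above, picking the densest sample and deleting its density cluster, can be sketched like this (the radius choice and names are illustrative; the paper's weight formula for the later centers is omitted):

```python
import numpy as np

def first_density_center(X, radius):
    """One step of the density-weight Canopy idea: the sample with the
    most neighbors inside `radius` becomes the first cluster center, and
    its whole density cluster is removed before later centers are chosen."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (d < radius).sum(axis=1)
    c = int(np.argmax(rho))
    remaining = X[d[c] >= radius]   # drop the density cluster of the center
    return X[c], remaining
```

Repeating this on the shrinking dataset yields both k and the initial medoids, which is what the preprocessing pass feeds to K-medoids.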

8.
Partitional clustering of categorical data is normally performed with the K-modes algorithm, which works well for large datasets. Although K-modes is simple and efficient, it randomly chooses initial cluster centers on every new execution, which can lead to non-repeatable clustering results. This paper addresses this randomized initialization problem by proposing a cluster center initialization algorithm. The proposed algorithm performs multiple clusterings of the data based on the values in different attributes and yields deterministic modes to be used as initial cluster centers. We propose a new method for selecting the most relevant attributes, called Prominent attributes, compare it with an existing method for finding Significant attributes for unsupervised learning, and perform multiple clusterings of the data to find initial cluster centers. The algorithm guarantees fixed initial cluster centers and thus repeatable clustering results. Its worst-case time complexity is log-linear in the number of data objects. We evaluate the algorithm on several categorical datasets against random initialization and two other initialization methods, and show that it performs better in terms of accuracy and time complexity. The initial centers it computes are close to the actual cluster centers of the data we tested, leading to faster convergence of K-modes and better clustering results.
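The two primitives of K-modes, the per-attribute mode that serves as a cluster center and the simple matching distance, can be sketched as:

```python
from collections import Counter

def cluster_mode(records):
    """Cluster center for categorical data: the most frequent value
    in each attribute (the 'mode' of K-modes)."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*records))

def matching_distance(a, b):
    """Simple matching dissimilarity: the number of attributes on
    which two categorical records disagree."""
    return sum(x != y for x, y in zip(a, b))
```

The initialization problem the paper tackles comes from the first primitive: modes computed from a random subset differ between runs, so fixing them deterministically fixes the whole clustering.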

9.
Unsupervised clustering of datasets containing severe outliers is a difficult task. We propose a cluster-dependent multi-metric clustering approach that is robust to severe outliers. A dataset is modeled as a set of clusters, each contaminated by noise of an unknown, cluster-dependent level that accounts for that cluster's outliers. Under this model, a multi-metric Lp-norm transformation is proposed and learned that maps each cluster to the most Gaussian distribution possible by minimizing a non-Gaussianity measure. The approach consists of two consecutive phases, multi-metric location estimation (MMLE) and multi-metric iterative chi-square cutoff (ICSC), and algorithms for both are proposed. It is proved that the MMLE algorithm solves a multi-objective optimization problem and in fact learns a cluster-dependent multi-metric Lq-norm distance and/or a cluster-dependent multi-kernel in data space for each cluster. Experiments on heavy-tailed alpha-stable mixture datasets, Gaussian mixture datasets with radial and diffuse outliers added, and the real Wisconsin breast cancer and lung cancer datasets show that the proposed method outperforms many existing robust clustering and outlier detection methods in both clustering and outlier detection performance.

10.
11.
To better evaluate the quality of unsupervised clustering and to fix evaluation failures caused by overlapping cluster centers, this paper analyzes commonly used clustering validity indices and proposes a new internal index: the product of the minimum sum of squared distances between neighboring boundary points of different clusters and the number of samples within the clusters is taken as the separation of the whole sample set, balancing inter-cluster separation against intra-cluster compactness. A new density computation is also proposed, in which objects with a large ratio of the sample set's average distance to their own average distance are treated as high-density points, and the maximum-product method selects relatively dispersed, high-density objects as initial cluster centers, strengthening the representativeness of the K-medoids initial centers and the stability of the algorithm. On this basis, a clustering-quality evaluation model is designed around the new internal index. Experiments on the UCI and KDD CUP 99 datasets show that the model clusters samples without prior knowledge effectively, evaluates them reasonably, and can report the optimal number of clusters or an optimal clustering range.

12.
In this paper, we define a validity measure for fuzzy criterion clustering, a novel non-distance-based approach to fuzzy clustering that also addresses the cluster validity problem. The model is then recast as a bilevel fuzzy criterion clustering problem, and we propose an algorithm for this model that solves the validity and clustering problems together. Our approach is validated on sample problems.

13.
An EM algorithm for the block mixture model
Although many clustering procedures aim to construct an optimal partition of objects or, sometimes, of variables, there are other methods, called block clustering methods, which consider the two sets simultaneously and organize the data into homogeneous blocks. We recently proposed a new mixture model, the block mixture model, to take this situation into account; it allows one to embed simultaneous clustering of objects and variables in a mixture approach. We have studied this probabilistic model under the classification likelihood approach and developed a new algorithm for simultaneous partitioning based on the classification EM algorithm. In this paper, we consider the block clustering problem under the maximum likelihood approach, with the goal of estimating the parameters of the model. Unfortunately, the EM algorithm cannot be applied directly to the block mixture model: difficulties arise from the dependence structure in the model, and approximations are required. Using a variational approximation, we propose a generalized EM algorithm to estimate the parameters of the block mixture model and, to illustrate our approach, we study the case of binary data using a Bernoulli block mixture.
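For orientation, the ordinary row-only Bernoulli mixture that the block model generalizes admits a direct EM; a minimal sketch (this is not the paper's variational block EM, and all names are my own):

```python
import numpy as np

def bernoulli_mixture_em(X, k, n_iter=50, seed=0):
    """EM for a plain Bernoulli mixture over the rows of a binary matrix:
    the E-step computes responsibilities, the M-step re-estimates the
    mixing proportions pi and the Bernoulli parameters theta."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    theta = rng.uniform(0.25, 0.75, size=(k, d))
    for _ in range(n_iter):
        # E-step: log p(x_i | component j) + log pi_j, normalized in log space
        log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: clipping keeps theta inside (0, 1) so the logs stay finite
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return r.argmax(axis=1), theta
```

The block model's difficulty, as the abstract notes, is that row and column labels are dependent, so this per-row factorization of the E-step is exactly what no longer holds and must be approximated variationally.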

14.
张林  刘辉 《自动化学报》2012,38(10):1709-1713
A model-based clustering algorithm is proposed for Illumina GoldenGate methylation microarray data. The algorithm builds an infinite beta mixture model with a Dirichlet process prior, so that the clustering structure is established from both the data and the model. Experimental results show that the algorithm effectively estimates the number of clusters, the mixing weight of each cluster, and each cluster's characteristics, achieving good clustering performance.
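The Dirichlet process prior used here is often simulated through its stick-breaking construction; a truncated sketch (the truncation level and names are my choices):

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, seed=0):
    """Truncated stick-breaking draw of Dirichlet-process mixing weights:
    w_j = v_j * prod_{l<j} (1 - v_l) with v_j ~ Beta(1, alpha).
    Smaller alpha concentrates mass on fewer clusters."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=truncation)
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * leftover
```

Because the number of components with non-negligible weight is random, the number of clusters does not need to be fixed in advance, which is how the algorithm estimates it from the data.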

15.
Data clustering is a fundamental unsupervised learning task in domains such as data mining, computer vision, information retrieval, and pattern recognition. In this paper, we propose and analyze a new clustering approach based on both hierarchical Dirichlet processes and the generalized Dirichlet distribution, which leads to an interesting statistical framework for data analysis and modelling. Our approach can be viewed as a hierarchical extension of the infinite generalized Dirichlet mixture model previously proposed in Bouguila and Ziou (IEEE Trans Neural Netw 21(1):107-122, 2010). The proposed approach tackles the problem of modelling grouped data, where observations are organized into groups that remain statistically linked by sharing mixture components. The resulting clustering model is learned with a principled variational Bayes inference algorithm that we have developed. Extensive experiments and simulations on two challenging applications, image categorization and web service intrusion detection, demonstrate our model's usefulness and merits.

16.
To address initialization dependence, parameters converging to boundary values, and the tendency to fall into local optima when estimating the parameters of a multinomial finite mixture model, the minimum message length criterion is introduced to optimize the parameter-estimation process. On that basis, users' rating behaviors are clustered with a clustering algorithm based on the multinomial finite mixture model, and the cluster-membership probabilities obtained from the model are used to improve the Slope One algorithm. Experimental results show that the minimum message length criterion clearly improves the clustering of the multinomial finite mixture model, and that the improved algorithm clearly outperforms the Slope One recommender based on user clustering.
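The Slope One baseline being improved predicts a rating from average item-to-item deviations; a minimal weighted Slope One over a dict-of-dicts rating matrix (names are illustrative, and the paper's cluster-probability weighting is not shown):

```python
def slope_one(ratings, user, target):
    """Weighted Slope One: for every item the user rated, shift that
    rating by the average (target - item) deviation observed across
    users who rated both, weighted by how many such users there are."""
    num = den = 0.0
    for item, r_ui in ratings[user].items():
        if item == target:
            continue
        diffs = [ratings[u][target] - ratings[u][item]
                 for u in ratings if target in ratings[u] and item in ratings[u]]
        if diffs:
            dev = sum(diffs) / len(diffs)
            num += (r_ui + dev) * len(diffs)
            den += len(diffs)
    return num / den if den else None
```

The paper's improvement, roughly, replaces the hard per-user computation with one weighted by the soft cluster memberships from the mixture model.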

17.
An optimized density-based algorithm for K-means cluster center selection
To address the sensitivity of the traditional K-means algorithm to the initial cluster centers and to the number of clusters, an algorithm that optimizes initial-center selection is proposed. The algorithm determines k initial cluster centers from the distribution density of the data objects and from the perpendicular midpoints of the closest pairs of points, and then optimizes the number of clusters with an equalization function to obtain the best clustering. Comparative experiments on standard UCI datasets show that the improved algorithm achieves higher accuracy and stability than the traditional one.

18.
The traditional K-means algorithm is highly sensitive to initial cluster centers and easily falls into local optima. This paper combines a genetic algorithm with K-means and proposes an improved genetic algorithm based on K-means clustering, built on a variable-length encoding of the actual number of cluster centers. New crossover and mutation operators are designed, and the clustering validity index DB-Index is used as the objective function. The algorithm solves the cluster-center optimization problem well; compared with the two previous algorithms, the improved algorithm improves clustering quality and speeds up global convergence.
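The DB-Index used as the genetic algorithm's objective can be sketched as follows (lower is better; variable names are mine):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: the average, over clusters, of the worst
    ratio (s_i + s_j) / d(c_i, c_j), where s is the mean within-cluster
    scatter and d the distance between centroids. Lower is better."""
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    s = np.array([np.linalg.norm(X[labels == k] - cents[i], axis=1).mean()
                  for i, k in enumerate(ks)])
    total = 0.0
    for i in range(len(ks)):
        total += max((s[i] + s[j]) / np.linalg.norm(cents[i] - cents[j])
                     for j in range(len(ks)) if j != i)
    return total / len(ks)
```

Because the index is defined for any number of clusters, it pairs naturally with the variable-length chromosomes: candidate partitions with different k compete under one fitness.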

19.
The traditional spectral clustering algorithm is sensitive to initialization. To overcome this defect, the Canopy algorithm is introduced to "coarsely" cluster the samples and obtain initial cluster centers, which are then used as the input to K-means; the result is a clustering algorithm fusing Canopy and spectral clustering (Canopy-SC), which removes the blindness of initial-center selection in traditional spectral clustering, and it is applied to face-image clustering. Compared with traditional spectral clustering, Canopy-SC obtains better cluster centers and clustering results, with higher clustering accuracy. Experimental results demonstrate the effectiveness and feasibility of the algorithm.
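The "coarse" Canopy pass that supplies initial centers can be sketched in its simplest form (only the strong-threshold removal step is kept; the threshold value is illustrative):

```python
import numpy as np

def canopy_centers(X, t2):
    """Simplified Canopy pre-clustering: repeatedly take an unprocessed
    point as a canopy center and remove every point within the strong
    threshold t2; the surviving centers can seed K-means, as in Canopy-SC."""
    idx = list(range(len(X)))
    centers = []
    while idx:
        c = idx[0]
        centers.append(X[c])
        d = np.linalg.norm(X[idx] - X[c], axis=1)
        idx = [i for i, di in zip(idx, d) if di > t2]
    return np.array(centers)
```

The number of canopies found also suggests the number of clusters, which is why the coarse pass removes the blind choice of initial centers.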

20.
Basing cluster analysis on mixture models has become a classical and powerful approach. Until now this approach, which explains some classic clustering criteria such as the well-known k-means criterion and yields more general criteria, has been developed to classify a set of objects measured on a set of variables. For this kind of data, while most clustering procedures are designed to construct an optimal partition of objects or, sometimes, of variables, there exist other methods, called block clustering methods, which consider the two sets simultaneously and organize the data into homogeneous blocks. In this work, a new mixture model called the block mixture model is proposed to take this situation into account. This model allows one to embed simultaneous clustering of objects and variables in a mixture approach. We first consider this probabilistic model in a general context and develop a new algorithm for simultaneous partitioning based on the CEM algorithm. Then, we focus on the case of binary data and show that our approach extends a block clustering method previously proposed for this case. Simplicity, fast convergence, and the ability to process large data sets are the major advantages of the proposed approach.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号