Similar Articles
20 similar articles found (search time: 15 ms)
1.
A double-exponent fuzzy subspace clustering algorithm based on feature-weighted distance (Total citations: 2; self: 2; other: 0)
The traditional fuzzy c-means (FCM) algorithm measures dissimilarity between data points with the Euclidean distance, which performs poorly on high-dimensional data sets. Starting from the FCM objective function, this paper replaces the Euclidean distance with a feature-weighted distance and introduces two exponents, γ and β, into the constraints, yielding a double-exponent fuzzy subspace clustering algorithm based on feature-weighted distance; the convergence of the algorithm is also discussed. Experiments show that the proposed algorithm effectively extracts the relevant features of each class in high-dimensional data sets and achieves good clustering results on real data sets.
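The abstract does not spell out the objective function, but the core ingredient, a feature-weighted distance raised to a weight exponent, can be sketched as follows. The weight vector `w` and exponent `gamma` below are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# Hypothetical sketch of a feature-weighted squared distance for subspace
# clustering: d(x, v) = sum_j w_j**gamma * (x_j - v_j)**2, where w is a
# per-cluster feature-weight vector (summing to 1) and gamma > 1 is a
# weight exponent.
def feature_weighted_dist2(X, v, w, gamma=2.0):
    return ((w**gamma) * (X - v) ** 2).sum(axis=1)

X = np.array([[1.0, 100.0], [1.1, -50.0], [0.9, 300.0]])
v = np.array([1.0, 0.0])
w = np.array([0.99, 0.01])        # nearly all weight on feature 0
d = feature_weighted_dist2(X, v, w)
# the noisy second feature is suppressed, so points cluster by feature 0
```

Because irrelevant features get weights near zero, distances are dominated by the features that actually define the subspace of each cluster.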

2.
The weighting exponent m is called the fuzzifier and can influence the performance of fuzzy c-means (FCM); it is generally suggested that m ∈ [1.5, 2.5]. On the basis of a robustness analysis of FCM, a new guideline for selecting the parameter m is proposed. We show that a large m value makes FCM more robust to noise and outliers; however, m values greater than the theoretical upper bound make the sample mean the unique optimizer. A simple and efficient way to avoid this unexpected case in fuzzy clustering is to assign a cluster core to each cluster, and we also discuss clustering algorithms that extend FCM to include cluster cores in fuzzy clusters. When the theoretical upper bound is large, we suggest implementing FCM with a suitably large m value; otherwise, we suggest the clustering methods with cluster cores. When the data set contains noise and outliers, the fuzzifier m = 4 is recommended for both FCM and cluster-core-based methods in the large-upper-bound case.
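The effect of the fuzzifier can be seen in a minimal textbook FCM sketch (not the paper's code): with a larger m, the maximum memberships shrink toward 1/c, which is what flattens the influence of outliers.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=60, seed=0):
    """Minimal fuzzy c-means: alternate the center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy partition: rows sum to 1
    for _ in range(n_iter):
        V = (U**m).T @ X / (U**m).sum(axis=0)[:, None]     # weighted centers
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U = d2 ** (-1.0 / (m - 1))             # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (60, 2)), rng.normal(5, 0.2, (60, 2))])
U2, V2 = fcm(X, 2, m=2.0)
U4, _ = fcm(X, 2, m=4.0)
# with m = 4 the partition is fuzzier: maximum memberships move toward 1/c
```

For m just above 1 the partition approaches a hard one; as m grows the memberships flatten, which increases robustness until m exceeds the theoretical upper bound discussed in the abstract.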

3.
In the fuzzy c-means (FCM) clustering algorithm, almost no data point has a membership value of 1. Moreover, noise and outliers may prevent the FCM algorithm from producing appropriate clustering results. Embedding FCM into switching regressions, called fuzzy c-regressions (FCRs), inherits the same drawbacks. In this paper, we propose alpha-cut implemented fuzzy clustering algorithms, referred to as FCMalpha, which allow data points to belong completely to one cluster. The proposed FCMalpha algorithms form a cluster core for each cluster; data points inside a cluster core have a membership value of 1, which resolves the drawbacks of FCM. On the other hand, the fuzziness index m plays different roles in FCM and FCMalpha. We find that the clustering results obtained by FCMalpha are more robust to noise and outliers than those of FCM when a larger m is used. Moreover, the cluster cores generated by FCMalpha work for clusters of various shapes, so FCMalpha is well suited for embedding into switching regressions; this embedding is called FCRalpha. The proposed FCRalpha provides better results than FCR in environments with noise or outliers. Numerical examples show the robustness and superiority of our proposed methods.

4.
The generalized fuzzy c-means clustering algorithm with improved fuzzy partition (GFCM) is a modified version of the fuzzy c-means clustering algorithm (FCM). Under appropriate parameters, GFCM can converge more rapidly than FCM. However, GFCM is sensitive to noise in gray images. To overcome this sensitivity, a kernel version of GFCM with spatial information is proposed. In this method, a term for the spatial constraints derived from the image is first introduced into the objective function of GFCM, and the kernel-induced distance is then adopted in place of the Euclidean distance in the new objective function. Experimental results show that the proposed method behaves well in both segmentation performance and convergence speed for gray images corrupted by noise.

5.
The fuzzy c-means (FCM) algorithm is an important clustering method in pattern recognition, and the fuzziness parameter m in the FCM algorithm is a key parameter that can significantly affect the clustering result. A cluster validity index (CVI) is a criterion function for validating clustering results and thereby determining the optimal number of clusters for a data set. From the perspective of cluster validation, we propose a novel method to select the optimal value of m in FCM using four well-known CVIs for fuzzy clustering: XB, VK, VT, and SC. In this method, the optimal value of m is the one at which the CVIs reach their minimum values. Experimental results on four synthetic data sets and four real data sets demonstrate that the suitable range of m is [2, 3.5] and the optimal interval is [2.5, 3].
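Of the four CVIs named above, the Xie-Beni (XB) index is the easiest to sketch: it is the ratio of fuzzy compactness to the minimum separation between centers, and each candidate m is scored by the index of the partition it produces. The FCM loop below is a generic textbook version, not the paper's code:

```python
import numpy as np

def fcm(X, c, m, n_iter=60, seed=0):
    """Generic fuzzy c-means used only to produce a partition for scoring."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        V = (U**m).T @ X / (U**m).sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U = d2 ** (-1.0 / (m - 1)); U /= U.sum(axis=1, keepdims=True)
    return U, V

def xie_beni(X, U, V, m):
    """XB index: fuzzy compactness over minimum center separation (lower is better)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    compact = (U**m * d2).sum()
    sep = min(((V[i] - V[j]) ** 2).sum()
              for i in range(len(V)) for j in range(len(V)) if i != j)
    return compact / (len(X) * sep)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(4, 0.3, (50, 2))])
scores = {m: xie_beni(X, *fcm(X, 2, m), m) for m in (1.5, 2.0, 2.5, 3.0, 3.5)}
best_m = min(scores, key=scores.get)   # m with the smallest XB index
```

The paper's method applies the same idea with all four indexes; which m wins depends on the data set.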

6.
In fuzzy clustering, the fuzzy c-means (FCM) clustering algorithm is the best-known and most widely used method. Since the FCM memberships do not always explain the degrees of belonging well, Krishnapuram and Keller proposed a possibilistic approach to clustering to correct this weakness of FCM. However, the performance of Krishnapuram and Keller's approach depends heavily on its parameters. In this paper, we propose another possibilistic clustering algorithm (PCA) based on the FCM objective function and the partition coefficient (PC) and partition entropy (PE) validity indexes. The resulting membership is an exponential function, so it is robust to noise and outliers, and the parameters in PCA are easy to handle. Moreover, the PCA objective function can be viewed as a potential function, or mountain function, so the prototypes of PCA correspond to the peaks of the estimated function. To validate the clustering results obtained by PCA, we generalize the validity indexes of FCM; this generalization makes each validity index workable in both fuzzy and possibilistic clustering models. By combining these generalized validity indexes, an unsupervised possibilistic clustering method is proposed. Numerical examples and real-data experiments based on the proposed PCA and the generalized validity indexes show their effectiveness and accuracy.

7.
A fast fuzzy c-means clustering algorithm combining hard and fuzzy clustering (Total citations: 2; self: 1; other: 1)
This paper discusses an improvement to the fuzzy c-means clustering method: building on the original algorithm, a fast fuzzy c-means clustering algorithm combining hard and soft clustering is proposed. The fast algorithm runs a hard c-means pass before the fuzzy c-means stage. Since hard clustering completes much faster than fuzzy clustering, the hard cluster centers are used as the initial values for the fuzzy cluster center iteration, which speeds up the convergence of fuzzy c-means; this is significant when clustering large amounts of data. Data simulations verify that this fast fuzzy c-means algorithm has a shorter iterative adjustment process, converges faster, and clusters well compared with standard fuzzy c-means.
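The hard-then-fuzzy pipeline described above can be sketched as follows: a few Lloyd (hard c-means) iterations produce cheap initial centers, which then seed the fuzzy c-means iteration. This is a generic sketch under those assumptions, not the paper's implementation:

```python
import numpy as np

def hcm(X, c, n_iter=20, seed=0):
    """Hard c-means (Lloyd's algorithm) to obtain cheap initial centers."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        # keep the old center if a cluster happens to go empty
        V = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else V[i]
                      for i in range(c)])
    return V

def fcm_from_centers(X, V, m=2.0, n_iter=30):
    """Fuzzy c-means whose center iteration starts from the given centers."""
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U = d2 ** (-1.0 / (m - 1)); U /= U.sum(axis=1, keepdims=True)
        V = (U**m).T @ X / (U**m).sum(axis=0)[:, None]
    return U, V

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(6, 0.2, (40, 2))])
U, V = fcm_from_centers(X, hcm(X, 2), m=2.0)
```

Starting the fuzzy iteration from hard-clustering centers rather than random ones typically cuts the number of expensive fuzzy iterations, which is the speedup the abstract describes.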

8.
Effective fuzzy c-means clustering algorithms for data clustering problems (Total citations: 3; self: 0; other: 3)
Clustering is a well-known technique for identifying intrinsic structure and extracting useful information from large amounts of data, and fuzzy c-means is one of the most extensively used clustering techniques. However, the standard fuzzy c-means objective function becomes computationally demanding for large amounts of data and for data objects with measurement uncertainty, and fuzzy c-means also struggles to set optimal parameters for the clustering method. The goal of this paper is therefore an alternative generalization of FCM clustering, called quadratic entropy based fuzzy c-means, designed to deal with more complicated data. The paper develops effective quadratic entropy fuzzy c-means using combinations of a regularization function, quadratic terms, mean distance functions, and kernel distance functions, giving a complete framework for constructing effective quadratic entropy based fuzzy clustering algorithms. It establishes an effective way of estimating memberships and updating centers by minimizing the proposed objective functions, and, to reduce the number of iterations of the proposed techniques, introduces a new algorithm to initialize the cluster centers. Cluster validity and the choice of the number of clusters are handled with the silhouette method. For the first time, synthetic control chart time series are segmented directly using the proposed methods to examine their performance; the results show that the proposed clustering techniques have advantages over standard FCM and the very recent ClusterM-k-NN in segmenting synthetic control chart time series.

9.
Most traditional fast clustering algorithms are based on fuzzy c-means (FCM), but FCM is sensitive to the initial cluster centers and to noisy data and easily converges to local minima, so its clustering accuracy is limited. Possibilistic c-means clustering largely solves FCM's sensitivity to noise but tends to produce coincident clusters. Combining FCM with possibilistic c-means resolves the coincident-cluster problem. To further improve convergence speed and robustness, this paper proposes a fast kernel-based possibilistic clustering algorithm, which introduces the idea of kernel clustering and uses the sample variance to optimize the parameter η in the objective function. Experimental results on standard and synthetic data sets show that this fast kernel-based possibilistic clustering algorithm improves clustering accuracy and speeds up convergence.

10.
Fuzzy clustering is an important clustering technique in data mining. Clustering algorithms for the data stream model have been studied extensively, but existing algorithms are all hard clustering; to our knowledge, no published work addresses fuzzy clustering over data streams. This paper proposes a weighted fuzzy clustering algorithm for the data stream model. Experiments on real and synthetic data sets show that it achieves better clustering performance than traditional fuzzy clustering algorithms.

11.
In cluster analysis, the fuzzy c-means (FCM) clustering algorithm is the best-known and most widely used method. Bezdek et al. proved that it converges to either a local minimum or a saddle point, Wei and Mendel produced efficient optimality tests for FCM fixed points, and Yu et al. recently proposed a weighting exponent selection for FCM. Inspired by these results, we unify several alternative FCM algorithms into one model, called the generalized fuzzy c-means (GFCM). The GFCM model covers a wide variety of FCM algorithms and easily leads to new and interesting clustering algorithms. Moreover, we construct a general optimality test for GFCM fixed points, which is applied to theoretically choose the parameters of the GFCM model. The experimental results demonstrate the precision of the theoretical analysis.

12.
The Gath–Geva (GG) algorithm is one of the most popular methodologies for fuzzy c-means (FCM)-type clustering of data comprising numeric attributes; it assumes data arising from clusters of Gaussian form, a much more flexible construction than the spherical-cluster assumption of the original FCM. In this paper, we introduce an extension of the GG algorithm for the effective handling of data with mixed numeric and categorical attributes. Traditionally, fuzzy clustering of such data is conducted by means of the fuzzy k-prototypes algorithm, which merely executes the original FCM algorithm with a different dissimilarity functional suitable for mixed numeric and categorical attributes. In contrast, we provide a novel FCM-type algorithm employing a fully probabilistic dissimilarity functional for handling data with mixed-type attributes. Our approach utilizes a fuzzy objective function regularized by Kullback–Leibler (KL) divergence information and is formulated on the basis of a set of probabilistic assumptions about the form of the derived clusters. We evaluate the efficacy of the proposed approach on benchmark data and compare it with competing fuzzy and non-fuzzy clustering algorithms.

13.
Wu Ziheng, Wu Zhongcheng, Zhang Jun 《Neural Computing & Applications》2017, 28(10): 3113-3118

The fuzzy c-means clustering algorithm (FCM), often used in pattern recognition, is an important method that has been applied successfully in a large number of practical applications. The FCM algorithm assumes that every data point is equally significant, which is clearly inappropriate when the importance of each data point should be adjusted adaptively. In this paper, considering the different importance of each data point, a new clustering algorithm based on FCM is proposed in which an adaptive weight vector W and an adaptive exponent p are introduced; the optimal values of the fuzziness parameter m and the adaptive exponent p are determined by SA-PSO when the objective function reaches its minimum value. In this method, particle swarm optimization (PSO) is integrated with simulated annealing (SA), which improves the global search ability of PSO. Experimental results demonstrate that the proposed algorithm can avoid local optima and significantly improve clustering performance.


14.
G. Gan, J. Wu 《Pattern Recognition》2008, 41(6): 1939-1947
We establish the convergence of the fuzzy subspace clustering (FSC) algorithm by applying Zangwill's convergence theorem. We show that the iteration sequence produced by the FSC algorithm terminates at a point in the solution set S or there is a subsequence converging to a point in S. In addition, we present experimental results that illustrate the convergence properties of the FSC algorithm in various scenarios.

15.
In this paper we present a new distance metric that incorporates the distance variation in a cluster to regularize the distance between a data point and the cluster centroid. It is then applied to the conventional fuzzy C-means (FCM) clustering in data space and the kernel fuzzy C-means (KFCM) clustering in a high-dimensional feature space. Experiments on two-dimensional artificial data sets, real data sets from public data libraries and color image segmentation have shown that the proposed FCM and KFCM with the new distance metric generally have better performance on non-spherically distributed data with uneven density for linear and nonlinear separation.
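The abstract does not give the exact metric, so the following is only a plausible sketch of the idea: divide the squared distance to each centroid by that cluster's membership-weighted mean squared distance, so that broad, sparse clusters do not lose points to tight, dense ones. All function and variable names here are hypothetical:

```python
import numpy as np

def regularized_fcm(X, c, m=2.0, n_iter=60, seed=0):
    """FCM variant with a distance normalized by per-cluster spread (sketch)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        V = (U**m).T @ X / (U**m).sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # per-cluster distance variation: membership-weighted mean squared distance
        sigma = (U**m * d2).sum(axis=0) / (U**m).sum(axis=0)
        d2n = d2 / sigma                     # scale-normalized distances
        U = d2n ** (-1.0 / (m - 1)); U /= U.sum(axis=1, keepdims=True)
    return U, V

rng = np.random.default_rng(4)
# one tight cluster and one broad cluster of uneven density
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 1.0, (50, 2))])
U, V = regularized_fcm(X, 2)
```

With plain Euclidean distance, points on the inner edge of the broad cluster can be pulled toward the tight cluster; normalizing by cluster spread counteracts this, which matches the abstract's claim about uneven-density data.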

16.
17.
In practical cluster analysis tasks, an efficient clustering algorithm should be less sensitive to parameter configurations and tolerate the existence of outliers. Based on the neural gas (NG) network framework, we propose an efficient prototype-based clustering (PBC) algorithm called the enhanced neural gas (ENG) network. Several problems associated with traditional PBC algorithms and the original NG algorithm, such as sensitivity to initialization, sensitivity to input sequence ordering, and the adverse influence of outliers, can be effectively tackled in our new scheme. In addition, our new algorithm can establish the topology relationships among the prototypes, and all topology-wise badly located prototypes can be relocated to represent more meaningful regions. Experimental results on synthetic and UCI datasets show that our algorithm possesses superior performance in comparison to several PBC algorithms and their improved variants, such as hard c-means, fuzzy c-means, NG, fuzzy possibilistic c-means, credibilistic fuzzy c-means, hard/fuzzy robust clustering and alternative hard/fuzzy c-means, in static data clustering tasks with a fixed number of prototypes.

18.
Based on a joint analysis of the standard fuzzy c-means clustering algorithm and the conditional fuzzy c-means clustering algorithm, this paper modifies the fuzzy partition space and further relaxes the constraints on the fuzzy partition matrix, giving an extended conditional fuzzy c-means clustering algorithm. The partition matrix and prototypes of the algorithm do not depend on the context constraints or on the sum of memberships in the fuzzy partition matrix. Experimental results show that the algorithm can obtain different cluster prototypes and achieves good clustering results.

19.
Although there has been much research on cluster analysis with feature (or variable) weights, little effort has been made regarding sample weights in clustering. In practice, not every sample in a data set has the same importance in cluster analysis, so it is interesting to obtain proper sample weights for clustering a data set. In this paper, we consider a probability distribution over a data set to represent its sample weights, and we apply the maximum entropy principle to compute these sample weights automatically for clustering. This method can generate sample-weighted versions of most clustering algorithms, such as k-means, fuzzy c-means (FCM), and expectation-maximization (EM). The proposed sample-weighted clustering algorithms are robust for data sets with noise and outliers. Furthermore, we analyze the convergence properties of the proposed algorithms. The study uses numerical data and real data sets for demonstration and comparison; experimental results actually demonstrate that the proposed sample-weighted clustering algorithms are effective and robust clustering methods.
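The abstract does not state the closed form, but a maximum-entropy treatment of weights subject to a distortion constraint typically yields a Gibbs distribution over the samples. The sketch below uses hypothetical names (`beta` controls how far the weights may deviate from uniform) and simply downweights points far from every center:

```python
import numpy as np

# Illustrative sketch only: weight w_k proportional to exp(-d_k / beta),
# where d_k is the squared distance from sample k to its nearest cluster
# center. The weights form a probability distribution over the data set,
# and outliers far from all centers receive weights near zero.
def max_entropy_sample_weights(X, V, beta=1.0):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    w = np.exp(-d2 / beta)
    return w / w.sum()

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [50.0, 50.0]])
V = np.array([[0.05, 0.0], [5.05, 5.0]])   # two centers; the last point is an outlier
w = max_entropy_sample_weights(X, V)
# the outlier at (50, 50) receives a weight near zero
```

Plugging such weights into the weighted-mean center update of k-means, FCM, or EM gives the sample-weighted variants the abstract describes, with outliers contributing almost nothing to the centers.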

20.
Fuzzy c-means (FCM) clustering algorithms have been widely used to solve clustering problems. Yang and Yu [1] extended these to optimization procedures with respect to an arbitrary probability distribution and showed that the optimal cluster centers are fixed points of these generalized FCM clustering algorithms. Convergence is an important theoretical issue for such algorithms. In this paper, we present the convergence properties of the generalized FCM clustering algorithms: global convergence, local convergence, and the rate of convergence.
