Similar Documents
20 similar documents found.
1.
Clustering is the process of grouping objects that are similar, where similarity between objects is usually measured by a distance metric. The groups formed by a clustering method are referred to as clusters. Clustering is a widely used activity with applications ranging from biology to economics. Each clustering technique has advantages and disadvantages, and some algorithms require input parameters that strongly affect the result. In most cases it is not possible to choose the best distance metric, the best clustering method, and the best input argument values for a given data set. Multiple clusterings can therefore be obtained with several distance metrics, several clustering methods, and several input argument values, and these can be combined into a new, better-quality final clustering. We propose a family of algorithms for combining multiple clusterings that are memory efficient, scalable, robust, and intuitive. Our new algorithms offer tremendous speed gains and low memory requirements by working at the cluster level, while producing very good quality final clusters. Extensive experimental evaluations on some very challenging artificially generated and real data sets from a diverse set of domains establish the usefulness of our methods.
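The paper's own combiners work at the cluster level and are not detailed in the abstract, so the following is only a generic evidence-accumulation sketch of combining multiple clusterings: base clusterings vote through a point-level co-association matrix, which the cluster-level algorithms above are specifically designed to avoid for memory reasons. All parameter choices here are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# 1) Several base clusterings with different k and seeds.
labelings = [KMeans(n_clusters=k, n_init=5, random_state=s).fit_predict(X)
             for k in (2, 3, 4, 5) for s in (0, 1)]

# 2) Co-association matrix: fraction of clusterings placing i and j together.
coassoc = np.zeros((len(X), len(X)))
for lab in labelings:
    coassoc += lab[:, None] == lab[None, :]
coassoc /= len(labelings)

# 3) Final clustering, treating 1 - co-association as a distance.
final = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                linkage="average").fit_predict(1 - coassoc)
print(np.bincount(final))
```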

2.
Clustering is an important unsupervised learning technique widely used to discover the inherent structure of a given data set. Some existing clustering algorithms use a single prototype to represent each cluster, which may not adequately model clusters of arbitrary shape and size and hence limits clustering performance on complex data structures. This paper proposes a clustering algorithm that represents one cluster by multiple prototypes. Squared-error clustering is used to produce a number of prototypes that locate the regions of high density, because of its low computational cost and good performance. A separation measure is proposed to evaluate how well two prototypes are separated. Multiple prototypes with small separations are grouped into a given number of clusters by an agglomerative method, and new prototypes are iteratively added to improve poor cluster separations. As a result, the proposed algorithm can discover clusters of complex structure and is robust to initial settings. Experimental results on both synthetic and real data sets demonstrate the effectiveness of the proposed clustering algorithm.
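A minimal sketch of the multi-prototype idea: squared-error (k-means) clustering produces many prototypes in dense regions, and the prototypes are then merged agglomeratively into the target number of clusters. The paper's separation measure and iterative prototype addition are not reproduced; single linkage on prototype positions stands in for them.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)

# Many k-means prototypes capture the dense regions of the two moons.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)

# Merge prototypes into the final clusters, then relabel points through
# the prototype each point belongs to.
proto_labels = AgglomerativeClustering(
    n_clusters=2, linkage="single").fit_predict(km.cluster_centers_)
labels = proto_labels[km.labels_]
print(np.bincount(labels))
```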

3.
4.
The task of discovering natural groupings of input patterns, or clustering, is an important aspect of machine learning and pattern analysis. In this paper, we study the widely used spectral clustering algorithm, which clusters data using eigenvectors of a similarity/affinity matrix derived from a data set. In particular, we aim to solve two critical issues in spectral clustering: (1) how to automatically determine the number of clusters, and (2) how to perform effective clustering given noisy and sparse data. An analysis of the characteristics of eigenspace is carried out, which shows that (a) not every eigenvector of a data affinity matrix is informative and relevant for clustering; (b) eigenvector selection is critical because using uninformative/irrelevant eigenvectors could lead to poor clustering results; and (c) the corresponding eigenvalues cannot be used for relevant eigenvector selection given a realistic data set. Motivated by the analysis, a novel spectral clustering algorithm is proposed which differs from previous approaches in that only informative/relevant eigenvectors are employed for determining the number of clusters and performing clustering. The key element of the proposed algorithm is a simple but effective relevance learning method that measures the relevance of an eigenvector according to how well it can separate the data set into different clusters. Our algorithm was evaluated using synthetic data sets as well as real-world data sets generated from two challenging visual learning problems. The results demonstrate that our algorithm is able to estimate the cluster number correctly and reveal the natural grouping of the input data/patterns even given sparse and noisy data.
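The following sketch illustrates relevance-based eigenvector selection under stated assumptions: each leading eigenvector of the normalized affinity matrix is scored by how well it alone separates the data (here a simple gap-to-spread ratio of a one-dimensional 2-means split, a stand-in for the paper's relevance learning), and only high-scoring eigenvectors feed the final k-means step.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=1)

# Normalized Gaussian affinity and its leading eigenvectors.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2 / (2 * np.median(D2)))
d = A.sum(1)
L = A / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(L)
vecs = vecs[:, ::-1][:, :10]                   # 10 leading eigenvectors

def relevance(v):
    """Separation quality of a 1-D 2-means split of one eigenvector."""
    km = KMeans(n_clusters=2, n_init=5, random_state=0).fit(v.reshape(-1, 1))
    gap = np.ptp(km.cluster_centers_)          # distance between the 2 centers
    within = np.mean(np.abs(v - km.cluster_centers_[km.labels_].ravel()))
    return gap / (within + 1e-12)

scores = np.array([relevance(vecs[:, j]) for j in range(vecs.shape[1])])
selected = vecs[:, scores > scores.mean()]     # keep informative eigenvectors
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(selected)
print(selected.shape[1], np.bincount(labels))
```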

5.
Data with mixed numeric and categorical attributes are common in the real world, and k-prototypes is one of the principal algorithms for clustering such data. To address the shortcomings of existing mixed-attribute clustering algorithms, an improved k-prototypes algorithm based on distributed centroids and a new dissimilarity measure is proposed. The new algorithm first introduces the distributed centroid to represent the cluster center of the categorical attributes, then combines the mean with the distributed centroid to represent the cluster center of mixed attributes, and proposes a new dissimilarity measure for computing the distance between a data object and a cluster center; the new measure accounts for the differing importance of attributes in the clustering process. Simulation experiments on three real data sets show that the clustering accuracy of the proposed algorithm is superior to that of traditional clustering algorithms, which verifies its effectiveness.
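A minimal sketch of the distributed-centroid idea under stated assumptions: the categorical part of a cluster center stores category frequencies rather than a single mode, and the dissimilarity to a cluster mixes squared Euclidean distance on numeric attributes with one minus the frequency of the object's category on categorical ones. The attribute weights are placeholders for the paper's importance-aware measure.

```python
import numpy as np

def mixed_distance(x_num, x_cat, center_num, center_freq, w_num=1.0, w_cat=1.0):
    """Distance of one object to one cluster center (numeric + categorical)."""
    d_num = np.sum((x_num - center_num) ** 2)
    # Distributed centroid: center_freq[j] maps category -> relative frequency.
    d_cat = sum(1.0 - center_freq[j].get(x_cat[j], 0.0) for j in range(len(x_cat)))
    return w_num * d_num + w_cat * d_cat

# Build one cluster center from its members.
members_num = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])
members_cat = [("red", "A"), ("red", "B"), ("blue", "A")]
center_num = members_num.mean(axis=0)
center_freq = []
for j in range(2):
    col = [m[j] for m in members_cat]
    center_freq.append({c: col.count(c) / len(col) for c in set(col)})

print(mixed_distance(np.array([1.0, 2.0]), ("red", "A"), center_num, center_freq))
```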

6.
A Novel Density-Based Clustering Framework by Using Level Set Method
In this paper, a new density-based clustering framework is proposed by adopting the assumption that cluster centers in data space can be regarded as target objects in image space. First, level set evolution is adopted to find an approximation of the cluster centers using a new initial boundary formation scheme. Three types of initial boundaries are defined so that each of them can evolve to approach the cluster centers in a different way. To avoid the long iteration time of level set evolution in data space, an efficient termination criterion is presented to stop the evolution process once no more cluster centers can be found. Then, a new effective density representation called level set density (LSD) is constructed from the evolution results. Finally, valley seeking clustering is used to group data points into the corresponding clusters based on the LSD. Experiments on synthetic and real data sets demonstrate the efficiency and effectiveness of the proposed clustering framework. Comparisons with the DBSCAN, OPTICS, and valley seeking clustering methods further show that the proposed framework successfully avoids the overfitting phenomenon and resolves the confusion between cluster boundary points and outliers.

7.
In this paper, a novel clustering method in the kernel space is proposed. It effectively integrates several existing algorithms into an iterative clustering scheme that can handle clusters with arbitrary shapes. In our proposed approach, a reasonable initial core for each cluster is estimated. This allows us to adopt a cluster-growing technique, and the growing cores offer partial hints on cluster association. Consequently, methods used for classification, such as support vector machines (SVMs), become useful in our approach. To obtain initial clusters effectively, the notion of the incomplete Cholesky decomposition is adopted so that fuzzy c-means (FCM) can be used to partition the data in a kernel-defined space. Then one-class and multiclass soft-margin SVMs are adopted to detect the data within the main distributions (the cores) of the clusters and to repartition the data into new clusters iteratively. The structure of the data set is explored by pruning the data in the low-density regions of the clusters; data are then gradually added back to the main distributions to assure exact cluster boundaries. Unlike the ordinary SVM algorithm, whose performance relies heavily on the kernel parameters given by the user, in our approach the parameters are estimated naturally from the data set. Experimental evaluations on two synthetic data sets and four University of California Irvine real data benchmarks indicate that the proposed algorithms outperform several popular clustering algorithms, such as FCM, support vector clustering (SVC), hierarchical clustering (HC), self-organizing maps (SOM), and non-Euclidean norm fuzzy c-means (NEFCM). © 2009 Wiley Periodicals, Inc.

8.
This article describes a clustering technique that can automatically detect any number of well-separated clusters, which may be of any shape, convex and/or non-convex. This is in contrast to most other techniques, which assume a value for the number of clusters and/or a particular cluster structure. The proposed technique is based on iterative partitioning of the relative neighborhood graph, coupled with a post-processing step for merging small clusters. Techniques for improving the efficiency of the proposed scheme are implemented. The clustering scheme is able to detect outliers in data and to indicate the inherent hierarchical nature of the clusters present in a data set. Moreover, the proposed technique is able to identify the situation in which the data have no natural clusters at all. Results demonstrating the effectiveness of the clustering scheme are provided for several data sets.
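A rough sketch of clustering on the relative neighborhood graph (RNG): an edge (i, j) survives only if no third point is closer to both endpoints than they are to each other. The paper's iterative partitioning and small-cluster merging are replaced here by a single heuristic cut of unusually long edges, so this only illustrates the graph construction and component extraction.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
n = len(X)
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

# RNG edge (i, j): no k satisfies max(D[i, k], D[j, k]) < D[i, j].
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if not np.any(np.maximum(D[i], D[j]) < D[i, j])]

# Heuristic partitioning: drop edges much longer than typical RNG edges.
lengths = np.array([D[i, j] for i, j in edges])
keep = lengths <= lengths.mean() + 2 * lengths.std()
rows = [i for (i, j), ok in zip(edges, keep) if ok]
cols = [j for (i, j), ok in zip(edges, keep) if ok]
graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

n_comp, labels = connected_components(graph, directed=False)
print(n_comp, np.bincount(labels))
```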

9.
The number of features extracted in real-world applications is growing dramatically, degrading the performance of machine learning methods and making feature reduction necessary. In particular, for fault diagnosis in rotating machinery, the number of extracted features is sizable, in order to collect all the available information from several monitored signals. Several approaches achieve data reduction using supervised or unsupervised strategies; the supervised ones are the most reliable, but their main disadvantage is requiring prior knowledge of the fault condition. This work proposes a new unsupervised algorithm for feature selection based on attribute clustering and rough set theory. Rough set theory is used to compute similarities between features through the relative dependency. The clustering approach combines classification based on distance with clustering based on prototypes to group similar features, without requiring the number of clusters as an input. Additionally, the algorithm has an evolving property that allows dynamic adjustment of the cluster structure during the clustering process, even when a new set of attributes feeds the algorithm. This gives the algorithm an incremental learning property and avoids a retraining process. These properties constitute the main contribution and significance of the proposed algorithm. Two fault diagnosis problems of fault severity classification in gears and bearings are studied to test the algorithm. Classification results show that the proposed algorithm selects adequate features as accurately as other feature selection and reduction approaches.
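The rough-set relative dependency and the evolving cluster structure are specific to the paper; the sketch below swaps in plain absolute correlation as the feature-similarity measure and a fixed-cutoff hierarchical grouping, keeping only the overall pattern of attribute clustering followed by one representative feature per group.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 4))
# 8 features, of which 4 are near-duplicates of the first 4.
X = np.column_stack([base, base + 0.05 * rng.normal(size=(200, 4))])

corr = np.abs(np.corrcoef(X, rowvar=False))
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)

# Group similar features; the 0.3 cutoff (i.e. |corr| > 0.7) is illustrative.
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=0.3, criterion="distance")

# Keep one representative per group: the feature most tied to its group-mates.
selected = [idx[np.argmax(corr[np.ix_(idx, idx)].sum(1))]
            for g in np.unique(groups)
            for idx in [np.where(groups == g)[0]]]
print(sorted(int(i) for i in selected))
```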

10.
A good clustering algorithm should require few user input parameters, be insensitive to noise, discover clusters of arbitrary shape, handle high-dimensional data, and be interpretable and scalable. Applying cluster analysis in geographic information systems enables the generalization and synthesis of GIS data. This paper proposes a clustering algorithm based on distance-threshold adjacency, which adds objects one by one to known clusters through distance-threshold reachability; it can discover clusters of arbitrary shape and separates noise data well. In the experiments, the algorithm is applied to data mining in a geographic information system, and the results show that it achieves satisfactory GIS clustering.
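A minimal sketch of distance-threshold-reachability clustering, assuming illustrative values for the threshold and the minimum cluster size: a cluster grows by repeatedly absorbing any unassigned object within the threshold of a current member, and groups that stay very small are treated as noise.

```python
import numpy as np
from collections import deque

def threshold_clustering(X, eps, min_size=3):
    labels = np.full(len(X), -1)              # -1 = unassigned
    cluster = 0
    for start in range(len(X)):
        if labels[start] != -1:
            continue
        labels[start] = cluster
        queue, members = deque([start]), [start]
        while queue:                          # grow by threshold reachability
            i = queue.popleft()
            near = np.where((np.linalg.norm(X - X[i], axis=1) <= eps)
                            & (labels == -1))[0]
            labels[near] = cluster
            queue.extend(near)
            members.extend(near)
        if len(members) < min_size:
            labels[np.array(members)] = -2    # too small: noise (final)
        else:
            cluster += 1
    labels[labels == -2] = -1
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(4, 0.3, (60, 2))])
print(np.bincount(threshold_clustering(X, eps=0.6) + 1))
```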

11.
In cluster analysis, one of the most challenging and difficult problems is determining the number of clusters in a data set, which is a basic input parameter for most clustering algorithms. To solve this problem, many algorithms have been proposed for either numerical or categorical data sets. However, these algorithms are not very effective for a mixed data set containing both numerical and categorical attributes. To overcome this deficiency, a generalized mechanism is presented in this paper by integrating Rényi entropy and complement entropy. The mechanism can uniformly characterize within-cluster entropy and between-cluster entropy and identify the worst cluster in a mixed data set. To evaluate the clustering results for mixed data, an effective cluster validity index is also defined in this paper. Furthermore, by introducing a new dissimilarity measure into the k-prototypes algorithm, we develop an algorithm to determine the number of clusters in a mixed data set. The performance of the algorithm has been studied on several synthetic and real-world data sets. Comparisons with other clustering algorithms show that the proposed algorithm is more effective in detecting the optimal number of clusters and generates better clustering results.

12.
Clustering symbolic data is a necessary process in many scientific disciplines, and fuzzy c-means clustering for interval data (IFCM) is one of the most popular algorithms. This paper presents an adaptive fuzzy c-means clustering algorithm for interval-valued data based on an interval-dividing technique. The method produces a fuzzy partition and a prototype for each fuzzy cluster by optimizing an objective function, with an adaptive distance between a pattern and its cluster center that varies with each iteration and may be either different from one cluster to another or the same for all clusters. The novel part of this approach is that it takes into account every point in both intervals when computing the distance between a cluster and its representative. Experiments are conducted on synthetic data sets and a real data set. To compare the comprehensive performance of the proposed method with four existing methods, the corrected Rand index, the value of the objective function, and the number of iterations are used as evaluation criteria. Clustering results demonstrate that the proposed algorithm has remarkable advantages.
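A crisp stand-in for interval clustering under stated assumptions: each interval attribute is encoded by its lower and upper bounds, and a vertex-type squared distance sums the differences of both bounds, so both endpoints influence the assignment (the point the adaptive IFCM variant emphasizes). The fuzzy memberships and adaptive distances of the paper are not reproduced.

```python
import numpy as np

def interval_kmeans(lo, hi, k, iters=50, seed=0):
    """Crisp k-means on interval data encoded as (lower, upper) coordinates."""
    X = np.hstack([lo, hi])                   # vertex-type representation
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):                    # guard against empty clusters
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

rng = np.random.default_rng(1)
lo = np.vstack([rng.normal(0, 0.2, (30, 1)), rng.normal(3, 0.2, (30, 1))])
hi = lo + rng.uniform(0.1, 0.5, lo.shape)     # upper bounds above lower bounds
print(np.bincount(interval_kmeans(lo, hi, 2)))
```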

13.
This paper studies the problem of point cloud simplification by searching for a subset of the original input data set according to a specified data reduction ratio (desired number of points). The unique feature of the proposed approach is that it aims at minimizing the geometric deviation between the input and simplified data sets. The underlying simplification principle is based on clustering of the input data set: the cluster representation partitions the input data set into a fixed number of point clusters, each represented by a single representative point. The set of representatives is then taken as the simplified data set, and the resulting geometric deviation is evaluated against the input data set on a cluster-by-cluster basis. Because a change to a representative selection only affects the configuration of a few neighboring clusters, an efficient scheme is employed to update the overall geometric deviation during the search process. The search involves two interrelated steps: it first focuses on a good layout of the clusters and then on fine-tuning the local composition of each cluster. The effectiveness and performance of the proposed approach are validated and illustrated through case studies using synthetic as well as practical data sets.
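The cluster-representation baseline the search builds on can be sketched as follows: partition the cloud into as many clusters as the desired number of output points, keep per cluster the input point nearest the centroid, and evaluate the deviation cluster by cluster. The deviation-minimizing search itself (cluster layout plus local fine-tuning) is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cloud = rng.normal(size=(2000, 3))       # stand-in for a scanned point cloud
target = 100                             # desired number of simplified points

km = KMeans(n_clusters=target, n_init=4, random_state=0).fit(cloud)
reps = np.empty((target, 3))
deviation = 0.0
for c in range(target):
    pts = cloud[km.labels_ == c]
    # Representative: the original point nearest the cluster centroid.
    reps[c] = pts[np.argmin(np.linalg.norm(pts - km.cluster_centers_[c], axis=1))]
    # Cluster-wise geometric deviation: worst member-to-representative distance.
    deviation = max(deviation, np.linalg.norm(pts - reps[c], axis=1).max())
print(reps.shape, round(float(deviation), 3))
```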

14.
To address common problems of existing clustering algorithms, such as low clustering quality, strong parameter dependence, and difficulty identifying outliers, a clustering algorithm based on data fields is proposed. The algorithm computes a potential value for each data object; cluster centers are identified by the property that a center's potential is larger than that of its surrounding neighbors while it lies relatively far from the other cluster centers, and outliers are selected by the property that their potential equals zero. Finally, each remaining data object is assigned to the cluster of its nearest neighbor with higher potential, completing the clustering. Simulation experiments show that, without manual parameter tuning, the algorithm accurately finds the cluster centers and outliers, clusters well, and is independent of the shape of the data set.
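A sketch of the data-field idea, closely following the density-peaks pattern it resembles: each point's potential is a sum of Gaussian contributions from the other points, centers combine high potential with a large distance to any higher-potential point, near-zero potential marks outliers, and the remaining points follow their nearest higher-potential neighbor. The impact factor sigma, the number of centers, and the outlier cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=2)
n = len(X)
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
sigma = np.median(D) / 3                          # illustrative impact factor
potential = np.exp(-(D / sigma) ** 2).sum(1) - 1.0   # drop the self-term

# delta: distance to the nearest point of strictly higher potential.
order = np.argsort(-potential)
delta, parent = np.empty(n), np.empty(n, dtype=int)
for rank, i in enumerate(order):
    if rank == 0:
        delta[i], parent[i] = D[i].max(), i       # global potential maximum
    else:
        higher = order[:rank]
        j = higher[np.argmin(D[i, higher])]
        delta[i], parent[i] = D[i, j], j

centers = np.argsort(-(potential * delta))[:3]    # 3 clusters assumed
labels = np.full(n, -1)
labels[centers] = np.arange(len(centers))
for i in order:                                   # descend by potential
    if labels[i] == -1:
        labels[i] = labels[parent[i]]
labels[potential < 1e-3] = -1                     # near-zero potential: outlier
print(np.bincount(labels[labels >= 0]))
```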

15.
Spectral clustering is an important clustering method that relies heavily on the affinity matrix. However, conventional spectral clustering methods (1) treat every data point equally and are therefore easily affected by outliers, (2) are sensitive to initialization, and (3) require the number of clusters to be specified. To overcome these problems, we propose a novel spectral clustering algorithm that learns an intrinsic affinity matrix, uses local PCA to resolve the intersections, and further exploits a robust clustering procedure that is insensitive to initialization and automatically generates clusters without requiring the number of clusters as input. Experimental results on both artificial and real high-dimensional data sets show that the proposed method outperforms the compared clustering methods in terms of four clustering metrics.

16.
As a data mining method, clustering, one of the most important tools in information retrieval, organizes data by unsupervised learning and thus requires no training data. However, some text clustering algorithms cannot update existing clusters incrementally and instead have to recompute a new clustering from scratch. In view of this, this paper presents a novel bottom-up incremental conceptual hierarchical text clustering approach using a CFu-tree representation (ICHTC-CF), which starts with each item as a separate cluster. Term-based feature extraction is used to summarize a cluster during the process, and the Comparison Variation measure criterion is adopted for judging whether the closest pair of clusters can be merged or a previous cluster should be split. Moreover, our incremental clustering method is not sensitive to the input data order. Experimental results show that our method outperforms k-means, CLIQUE, single-linkage clustering, and complete-linkage clustering, indicating that the new technique is efficient and feasible.
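A minimal incremental sketch: each incoming document joins the most similar existing cluster when the similarity clears a threshold and otherwise starts a new cluster. The CFu-tree representation, the Comparison Variation merge/split criterion, and the order-insensitivity machinery of ICHTC-CF are not reproduced; the threshold is an assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["clustering of text documents", "document clustering methods",
        "stock market prices", "market price prediction",
        "text mining and clustering"]
X = TfidfVectorizer().fit_transform(docs)

threshold, clusters = 0.15, []            # each cluster: list of doc indices
for i in range(X.shape[0]):
    # Best single-link similarity of doc i to each existing cluster.
    sims = [cosine_similarity(X[i], X[members]).max() for members in clusters]
    if sims and max(sims) >= threshold:
        clusters[int(np.argmax(sims))].append(i)
    else:
        clusters.append([i])              # no cluster is close enough: new one
print(clusters)
```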

17.
In this paper the problem of automatically clustering a data set is posed as a multiobjective optimization (MOO) problem in which a set of cluster validity indices is optimized simultaneously. The proposed multiobjective clustering technique uses a recently developed simulated-annealing-based multiobjective optimization method as the underlying optimization strategy. A variable number of cluster centers is encoded in each string, so the number of clusters present in different strings varies over a range. Points are assigned to clusters based on the newly developed point-symmetry-based distance rather than the Euclidean distance. Two cluster validity indices, the Euclidean-distance-based XB-index and a recently developed point-symmetry-distance-based index, the Sym-index, are optimized simultaneously in order to determine the appropriate number of clusters present in a data set. The proposed clustering technique is thus able to detect both the proper number of clusters and the appropriate partitioning for data sets having either hyperspherical or point-symmetric clusters. A new semi-supervised method is also proposed to select a single solution from the final Pareto-optimal front of the proposed multiobjective clustering technique. The efficacy of the proposed algorithm is shown on seven artificial data sets and six real-life data sets of varying complexity. Results are also compared with those obtained by another multiobjective clustering technique, MOCK, and by two single-objective genetic-algorithm-based automatic clustering techniques, VGAPS clustering and GCUK clustering.
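The point-symmetry distance at the heart of the assignment step can be sketched directly: reflect a point through the candidate center, average the distances from the reflected point to its knear nearest data neighbors, and scale by the Euclidean distance to the center. This follows the usual PS-distance definition; knear = 2 is a common choice but an assumption here.

```python
import numpy as np
from scipy.spatial import cKDTree

def ps_distance(x, center, tree, knear=2):
    """Point-symmetry distance of x with respect to a candidate center."""
    reflected = 2.0 * center - x               # mirror image of x about center
    d_sym, _ = tree.query(reflected, k=knear)  # nearest data points to mirror
    return d_sym.mean() * np.linalg.norm(x - center)

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
ring = np.column_stack([np.cos(t), np.sin(t)])  # a point-symmetric cluster
tree = cKDTree(ring)
# Small value: the ring is symmetric about the origin.
print(ps_distance(ring[0], np.zeros(2), tree))
```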

18.
Many clustering models define good clusters as extrema of objective functions, and optimization of these models is often done with an alternating optimization (AO) algorithm driven by necessary conditions for local extrema. We abandon the objective-function model in favor of a generalized model called alternating cluster estimation (ACE). ACE uses an alternating iteration architecture, but the membership and prototype functions are selected directly by the user. Virtually every clustering model can be realized as an instance of ACE. Out of a large variety of possible non-AO instances, we present two examples: 1) an algorithm with a dynamically changing prototype function that extracts representative data, and 2) a computationally efficient algorithm with hyperconic membership functions that allows easy extraction of membership functions. We illustrate these non-AO instances on three problems: a) simple clustering of plane data, where we show that an unmatched ACE algorithm overcomes some problems of fuzzy c-means (FCM-AO) and possibilistic c-means (PCM-AO); b) function approximation by clustering on a simple artificial data set; and c) function approximation on a 12-input, 1-output real-world data set. ACE models work quite well in all three cases.
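A minimal ACE sketch with the user-selected hyperconic membership function u = max(0, 1 - d/r): the loop alternates between computing memberships from fixed prototypes and updating prototypes as membership-weighted means, with the radius r chosen by the user rather than derived from an objective function.

```python
import numpy as np
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=3)
rng = np.random.default_rng(0)
V = X[rng.choice(len(X), 3, replace=False)]   # initial prototypes (a copy)
r = 3.0                                       # user-chosen hyperconic radius

for _ in range(30):
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
    U = np.maximum(0.0, 1.0 - d / r)          # hyperconic membership function
    for c in range(len(V)):                   # membership-weighted mean update
        if U[:, c].sum() > 0:
            V[c] = (U[:, c, None] * X).sum(0) / U[:, c].sum()
print(np.round(V, 2))
```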

19.
There are many important issues that need to be resolved for identification of a fuzzy rule-based system using clustering. We address three of them: 1) deciding on the proper domain(s) of clustering; 2) deciding on the number of rules; and 3) obtaining an initial estimate of the parameters of the fuzzy system. We argue that one should start with separate clustering of X (input) and Y (output), and we propose a scheme to establish correspondence between the clusters obtained in X and Y. The correspondence dictates whether further splitting or merging of clusters is needed. If X and Y do not exhibit strong cluster substructures, clustering of X* (the input data augmented by the output data), exploiting the results of the separate clustering of X and Y and of the correspondence scheme, is recommended. We argue that the usual cluster validity indices are not suitable for finding the number of rules, and the proposed scheme does not use any cluster validity index. Three methods are suggested for obtaining the initial estimate of the membership functions (MFs). The proposed scheme is used to identify the rule base needed to realize a self-tuning fuzzy PI-type controller, and its performance is found to be quite satisfactory.

20.
An Algorithm for Automatically Determining the Number of Clusters in Network Intrusion Detection
To address the problem that the fuzzy C-means algorithm (FCM) requires the number of clusters to be specified in advance in intrusion detection, an algorithm that automatically determines the number of clusters is proposed (the fuzzy C-means and support vector machine algorithm, F-CMSVM). It first divides the target data set into two classes with the fuzzy C-means algorithm, then uses a support vector machine (SVM) with a fuzzy membership function to evaluate whether the target data set is separable, and iterates this computation to obtain the final clustering. The SVM takes the membership matrix produced by the fuzzy C-means algorithm as its fuzzy membership function, so that different input samples receive different penalty values, yielding an optimal separating hyperplane. The algorithm requires neither labeled training data nor a pre-specified number of clusters, and is therefore a truly unsupervised algorithm. Simulation experiments on the KDD CUP 1999 data set show that the algorithm not only obtains the best number of clusters but also detects intrusions well.
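A very rough sketch of the F-CMSVM loop under stated assumptions: split the data in two with fuzzy c-means, pass the FCM memberships to an SVM as per-sample weights, accept the split only if the SVM separates the halves cleanly, and recurse on separable halves. The support-vector-fraction test used below is an assumed stand-in for the paper's separability evaluation, not its actual criterion.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

def fcm2(X, m=2.0, iters=50, seed=0):
    """Fuzzy c-means with c=2; returns the (n, 2) membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(2), size=len(X))
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(1, keepdims=True))
    return U

def split(X, depth=0, max_depth=3):
    """Recursively bisect with FCM while an SVM deems the halves separable."""
    if depth == max_depth or len(X) < 20:
        return [X]
    U = fcm2(X)
    y = U.argmax(1)
    if len(np.unique(y)) < 2:
        return [X]
    # FCM memberships become per-sample SVM weights (different penalties).
    svm = SVC(kernel="rbf").fit(X, y, sample_weight=U.max(1))
    # Assumed separability heuristic: a clean split needs few support vectors.
    if len(svm.support_) > 0.25 * len(X):
        return [X]
    return split(X[y == 0], depth + 1) + split(X[y == 1], depth + 1)

X, _ = make_blobs(n_samples=300, centers=3, random_state=4)
parts = split(X)
print(len(parts), [len(p) for p in parts])
```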
