Similar Documents (20 results)
1.
An ensemble of clustering solutions or partitions may be generated for a number of reasons. If the data set is very large, clustering may be done on tractable-size disjoint subsets. The data may be distributed at different sites, for which a distributed clustering solution with a final merging of partitions is a natural fit. In this paper, two new approaches to combining partitions, represented by sets of cluster centers, are introduced. The advantage of these approaches is that they provide a final partition of data comparable to the best existing approaches, yet scale to extremely large data sets; they can be 100,000 times faster while using much less memory. The new algorithms are compared against the best existing cluster-ensemble merging approaches, against clustering all the data at once, and against a clustering algorithm designed for very large data sets. The comparison is done for fuzzy and hard k-means based clustering algorithms. It is shown that the centroid-based ensemble merging algorithms presented here generate partitions of quality comparable to the best label-vector approach or to clustering all the data at once, while providing very large speedups.
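The general flavor of centroid-based merging can be sketched briefly: cluster each tractable subset separately, then cluster the collected centers, weighted by their cluster sizes, to obtain the final prototypes. The sketch below is an illustrative reading of that idea using NumPy and scikit-learn, not the paper's exact algorithms; the subset count and k are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def merge_partitions_by_centroids(X, n_subsets=10, k=3, seed=0):
    """Cluster disjoint subsets, then cluster the collected centers.

    A hedged sketch of centroid-based ensemble merging: the final
    k-means runs on n_subsets * k centers instead of all of X,
    which is what makes this style of approach scale.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    centers, weights = [], []
    for part in np.array_split(idx, n_subsets):
        km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(X[part])
        centers.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k))
    centers = np.vstack(centers)
    weights = np.concatenate(weights)
    # Weight each center by its cluster size when merging.
    final = KMeans(n_clusters=k, n_init=5, random_state=seed)
    final.fit(centers, sample_weight=weights)
    return final.cluster_centers_
```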

2.
To address two drawbacks of the generalized noise clustering (GNC) algorithm, namely its strong dependence on parameters and the need to run FCM before GNC just to compute those parameters, a fast generalized noise clustering (FGNC) algorithm is proposed based on the GNC objective function and the possibilistic clustering algorithm (PCA). FGNC computes the parameters of the GNC objective function by a non-parametric method, so it is parameter-independent and clusters faster than GNC. Simulation experiments on a synthetic noisy data set and two real data sets show that FGNC handles noisy data well: its cluster centers are closer to the true centers, its clustering accuracy is higher, and its clustering time is shorter.

3.
Existing hierarchical clustering algorithms have difficulty handling incomplete data sets, and the relation between a sample and a cluster is itself uncertain. To address both issues, a set-pair granular hierarchical clustering algorithm for incomplete data, SPGCURE, is proposed. First, missing values are handled with set-pair information granules: instead of deleting or imputing missing attributes as in earlier algorithms, the discrepancy degree of the set-pair connection degree represents a missing attribute value, and an improved set-pair information distance measure is proposed to gauge the closeness of incomplete samples. Second, based on the improved set-pair distance, the intra-cluster average distance of each cluster is defined, yielding a set-pair granular hierarchical clustering expressed by a positive identity region Cs (the sample definitely belongs to the cluster), a boundary region Cu (the sample may belong to the cluster), and a negative opposite region Co (the sample does not belong to the cluster). SPGCURE applies to both complete and incomplete data. Finally, experiments on five classic UCI data sets against common classic and improved clustering algorithms show that SPGCURE delivers good clustering performance on accuracy, F-measure, adjusted Rand index, and normalized mutual information.

4.
A multi-scale spectral clustering algorithm
A multi-scale spectral clustering algorithm is proposed. Unlike traditional spectral clustering, it clusters the eigenvectors of the unnormalized Laplacian matrix with an improved k-means algorithm. Unlike traditional k-means, the improved k-means assigns data points to cluster centers in a novel way: the distance between a data point and a cluster center is computed by comparing the cluster center's distance from the origin and introducing a scale parameter. Experiments show that the improved algorithm achieves satisfactory results on synthetic data sets and better clustering results on real data sets.
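The abstract does not spell out the modified distance, so the following is only a guessed illustration: the Euclidean distance to a center is rescaled by the center's own distance from the origin and a scale parameter sigma, both of which are assumptions here.

```python
import numpy as np

def scaled_distance(x, c, sigma=1.0):
    """Hedged sketch of a scale-aware point-to-center distance.

    The paper compares a center's distance from the origin and
    introduces a scale parameter; the exact formula is not given in
    the abstract, so this normalization is an illustrative guess.
    """
    return np.linalg.norm(x - c) / (sigma * max(np.linalg.norm(c), 1e-12))
```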

5.
As one of the most important techniques in data mining, cluster analysis has attracted increasing attention in this big-data era. Most clustering algorithms face challenges including difficulty in determining cluster centers, low clustering accuracy, uneven clustering efficiency across data sets, and sensitive parameter dependence. Aiming at the difficulty of cluster-center determination and at parameter dependence, a novel fast cluster-center determination algorithm is proposed in this paper. Cluster centers are assumed to be data points with higher density and larger distance from other data points of higher density. Normal distribution curves are fitted to the distribution of the density-distance product, and the singular points outside a chosen confidence interval are shown, by theoretical analysis and simulation, to be cluster centers. Finally, starting from these centers, a time-scan clustering assigns the remaining points by density to complete the clustering. Since the density radius is a sensitive parameter in computing each point's density, a hill-climbing algorithm is used to make the density radius self-adaptive. The proposed algorithms are evaluated on abundant typical benchmark data sets against other clustering algorithms in terms of both clustering quality and time complexity.
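The selection rule lends itself to a compact sketch: compute each point's density and its distance to the nearest denser point, fit the product by a normal distribution, and take the points outside the confidence interval as centers. The cutoff radius and the z = 3 interval below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pick_centers(X, radius, z=3.0):
    """Hedged sketch: centers are outliers of the density-distance product.

    rho_i counts neighbors within `radius`; delta_i is the distance to
    the nearest point of higher density. gamma = rho * delta is fitted
    by a normal distribution, and points beyond z standard deviations
    are taken as cluster centers (the confidence-interval rule from
    the abstract; z=3 is an assumption).
    """
    D = squareform(pdist(X))
    rho = (D < radius).sum(axis=1) - 1
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    gamma = rho * delta
    return np.where(gamma > gamma.mean() + z * gamma.std())[0]
```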

6.
Information granules, such as fuzzy sets, capture essential knowledge about data and the key dependencies between them. Quite commonly, information granules (fuzzy sets) are the result of fuzzy clustering and can therefore be succinctly represented in the form of fuzzy partition matrices. Interestingly, the same data set can be represented from various standpoints, and this multifaceted view yields a collection of different partition matrices reflecting higher-order granular knowledge about the data. The levels of specificity of the clusters the data are organized into can differ considerably: the larger the number of clusters, the more detailed the available insight into the structure of the data. Given the granularity of the resulting constructs (rather than the plain data themselves), one can view a collection of partition matrices as a certain type of network of knowledge. Considering the variety of knowledge sources encountered across the network, we are interested in forming a consensus between them. In a nutshell, this leads to the construction of fuzzy partition matrices which “reconcile” the knowledge captured by the individual partition matrices. Given that the granularity of the knowledge sources under consideration can vary substantially, we develop a unified optimization perspective by introducing fuzzy proximity matrices induced by the corresponding partition matrices. The optimization is then realized on the basis of these proximity matrices. We offer a detailed algorithm and illustrate its performance using a series of numeric experiments.
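A partition-induced proximity matrix can be sketched compactly. One common construction (assumed here; the paper defines its own) aggregates co-membership over clusters with the minimum t-norm, giving an n-by-n proximity that no longer depends on the number of clusters, which is what lets partitions of different granularity be compared.

```python
import numpy as np

def induced_proximity(U):
    """Proximity matrix induced by a fuzzy partition matrix U (c x n).

    prox[j, k] aggregates, over clusters, how strongly patterns j and k
    co-occur; the minimum t-norm used here is one common choice and is
    an assumption, not necessarily the paper's own definition.
    """
    c, n = U.shape
    prox = np.zeros((n, n))
    for i in range(c):
        prox += np.minimum.outer(U[i], U[i])
    return np.clip(prox, 0.0, 1.0)
```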

7.
The k-means algorithm is widely used for clustering because of its computational efficiency. Given n points in d-dimensional space and the number of desired clusters k, k-means seeks a set of k cluster centers so as to minimize the sum of the squared Euclidean distances between each point and its nearest cluster center. However, the algorithm is very sensitive to the initial selection of centers and is likely to converge to partitions that are significantly inferior to the global optimum. We present a genetic algorithm (GA) for evolving centers in the k-means algorithm that simultaneously identifies good partitions for a range of values around a specified k. The set of centers is represented using a hyper-quadtree constructed on the data. This representation is exploited in our GA to generate an initial population of good centers and to support a novel crossover operation that selectively passes good subsets of neighboring centers from parents to offspring by swapping subtrees. Experimental results indicate that our GA finds the global optimum for data sets with known optima and finds good solutions for large simulated data sets.

8.
This paper presents a fuzzy clustering algorithm for the extraction of a smooth curve from unordered noisy data. In this method, the input data are first clustered into different regions using the fuzzy c-means algorithm and each region is represented by its cluster center. Neighboring cluster centers are linked to produce a graph according to the average class membership values. Loops in the graph are removed to form a curve according to spatial relations of the cluster centers. The input samples are then reclustered using the fuzzy c-means (FCM) algorithm, with the constraint that the curve must be smooth. The method has been tested with both open and closed curves with good results.

9.
An algorithm for automatic cluster-center selection based on density peaks and grids
夏庆亚 《计算机科学》2017,44(Z11):403-406
In the clustering-by-fast-search-and-find-of-density-peaks algorithm (DPC), the pairwise computation between data points is expensive and the final number of cluster centers must be picked manually from a decision graph. To address both problems, GADPC, an improved algorithm that automatically selects cluster centers based on density peaks and grids, is proposed. First, borrowing the idea of the CLIQUE grid clustering algorithm, it no longer operates on individual points but maps them onto a grid and treats grid cells as the clustering objects, reducing the point-to-point distance computations and the number of clustering passes in DPC. Second, an improved criterion determines the number of cluster centers automatically and more precisely. Finally, grid edge points and noise points are handled using the point objects inside a cell and the similarity between adjacent cells. Experiments on the synthetic data-mining data sets provided by UEF (University of Eastern Finland) and on UCI real data sets show, by the Rand Index, that the improved algorithm's clustering quality on large data sets is no lower than that of DPC and K-means, while the processing efficiency of DPC is improved.
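The grid-mapping step is easy to sketch: bin the points into a uniform grid, keep the occupied cells as the new clustering objects, and use each cell's point count as its density. The resolution below is an arbitrary illustrative choice, and the sketch covers only this first step of GADPC.

```python
import numpy as np

def grid_map(X, n_bins=20):
    """Hedged sketch of the grid-mapping step of a GADPC-style algorithm.

    Points are binned into a uniform grid; each occupied cell becomes a
    clustering object whose density is its point count and whose
    representative is the mean of its points.
    """
    lo, hi = X.min(axis=0), X.max(axis=0)
    cells = np.floor((X - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    keys, inverse = np.unique(cells, axis=0, return_inverse=True)
    density = np.bincount(inverse)
    reps = np.array([X[inverse == i].mean(axis=0) for i in range(len(keys))])
    return reps, density
```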

10.
Fuzzy Ants and Clustering
A swarm-intelligence-inspired approach to clustering data is described. The algorithm consists of two stages. In the first stage of the algorithm, ants move the cluster centers in feature space. The cluster centers found by the ants are evaluated using a reformulated fuzzy C-means (FCM) criterion. In the second stage, the best cluster centers found are used as the initial cluster centers for the FCM algorithm. Results on 18 data sets show that the partitions found using the ant initialization are better optimized than those obtained from random initializations. The use of a reformulated fuzzy partition validity metric as the optimization criterion is shown to enable determination of the number of cluster centers in the data for several data sets. Hard C-means (HCM) was also used after reformulation, and the partitions obtained from the ant-based algorithm were better optimized than those from randomly initialized HCM.

11.
To improve the accuracy of support vector machines on large-scale data sets, an SVM algorithm based on the kernel space and sample-center angles is proposed. In the kernel feature space, the centers of the two classes and the hyper-normal plane through them are computed; for each training sample, the ratio of its distance to the hyper-normal plane over its distance to the midpoint of the two centers is obtained, and the n samples with the smallest ratios replace the training set. The mathematical model shows that the algorithm needs no explicit computation in the kernel space and retains more support vectors than existing reduction strategies of the same kind. Simulation experiments show that, compared with similar algorithms, it attains higher training accuracy with essentially no loss of training speed.

12.
In this paper, we introduce a new algorithm for clustering and aggregating relational data (CARD). We assume that data is available in a relational form, where we only have information about the degrees to which pairs of objects in the data set are related. Moreover, we assume that the relational information is represented by multiple dissimilarity matrices. These matrices could have been generated using different sensors, features, or mappings. CARD is designed to aggregate pairwise distances from multiple relational matrices, partition the data into clusters, and learn a relevance weight for each matrix in each cluster simultaneously. The cluster dependent relevance weights offer two advantages. First, they guide the clustering process to partition the data set into more meaningful clusters. Second, they can be used in subsequent steps of a learning system to improve its learning behavior. The performance of the proposed algorithm is illustrated by using it to categorize a collection of 500 color images. We represent the pairwise image dissimilarities by six different relational matrices that encode color, texture, and structure information.
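The aggregation step alone can be sketched as a relevance-weighted sum of the dissimilarity matrices. CARD learns such weights per cluster jointly with the partition; the sketch below takes the weights as given, for illustration only.

```python
import numpy as np

def aggregate_dissimilarity(R, v):
    """Hedged sketch of the aggregation step only, not the full CARD loop.

    R: array of shape (m, n, n) holding m relational dissimilarity
    matrices; v: relevance weights of shape (m,) for one cluster.
    CARD learns such weights per cluster while it partitions; here
    they are assumed given.
    """
    v = np.asarray(v, dtype=float)
    return np.tensordot(v / v.sum(), R, axes=1)  # weighted sum over matrices
```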

13.
The traditional maximum-entropy fuzzy C-means clustering algorithm (MEFCM) suits only spherical or ellipsoidal clusters. To handle data whose distribution is disordered or highly correlated and hard to partition, a Mercer kernel function is introduced so that previously latent features become salient, improving the clustering. Moreover, in practical problems the samples of most data sets differ in importance (weight), so a weighting scheme is designed around the differing importance of each sample; and to overcome the sensitivity of clustering algorithms to the choice of initial cluster centers, a weighted kernel maximum-entropy fuzzy clustering algorithm with optimized initial cluster centers (WKMEFCM) is proposed. Experiments verify that, compared with the original MEFCM, the algorithm clusters more stably and accurately, achieving a better partition.

14.
An anomaly detection algorithm based on improved K-means clustering
左进  陈泽茂 《计算机科学》2016,43(8):258-261
By improving the random selection of initial cluster centers in the traditional K-means algorithm, an anomaly detection algorithm based on improved K-means clustering is proposed. When choosing initial centers, the compactness of every data point is computed first and outlier regions are excluded; K initial centers are then selected evenly from where the data are dense, avoiding the tendency of random selection to fall into local optima. The optimized selection puts the algorithm closer to the true cluster centers before iteration begins, reducing the number of iterations and improving clustering quality and the anomaly detection rate. Experiments show that the improved algorithm clearly outperforms the original in both clustering performance and anomaly detection.
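A plausible reading of the initialization follows (the compactness measure is not specified in the abstract, so the inverse mean distance used below is an assumption): score every point's compactness, drop the least compact tail as outlier regions, then spread K centers far apart within the dense remainder.

```python
import numpy as np

def dense_region_init(X, k, q=0.9):
    """Hedged sketch of compactness-based initial center selection.

    Compactness of a point is taken here as the inverse of its mean
    distance to all other points; points below the (1-q) quantile of
    compactness are treated as outlier regions and excluded, and k
    centers are picked far apart within the dense remainder.
    """
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    compact = 1.0 / (D.mean(axis=1) + 1e-12)
    dense = np.where(compact >= np.quantile(compact, 1 - q))[0]
    centers = [dense[np.argmax(compact[dense])]]
    while len(centers) < k:  # farthest-point heuristic inside the dense region
        d_to_c = D[np.ix_(dense, centers)].min(axis=1)
        centers.append(dense[np.argmax(d_to_c)])
    return X[centers]
```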

15.
The linear dynamic model (LDM), also known as the Kalman filter model, has been the subject of research in the engineering, control, and more recently, machine learning and speech technology communities. The Gaussian noise processes are usually assumed to have diagonal, or occasionally full, covariance matrices. A number of recent papers have considered modelling the precision rather than covariance matrix of a Gaussian distribution, and this work applies such ideas to the LDM. A Gaussian precision matrix P can be factored into the form P = U^T S U, where U is a transform and S a diagonal matrix. By varying the form of U, the covariance can be specified as being diagonal or full, or used to model a given set of spatial dependencies. Furthermore, the transform and scaling components can be shared between models, allowing richer distributions with only marginally more parameters than required to specify diagonal covariances.

The method described in this paper allows the construction of models with an appropriate number of parameters for the amount of available training data. We provide illustrative experimental results on synthetic and real speech data in which models with factored precision matrices and automatically-selected numbers of parameters are as good as or better than models with diagonal covariances on small data sets and as good as models with full covariance matrices on larger data sets.
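The factored form makes the Gaussian evaluation cheap and the parameter sharing explicit. The sketch below is a minimal illustration, assuming an invertible shared transform U and positive scales s; it is not the paper's estimation procedure, just the density computation the factorization implies.

```python
import numpy as np

def gaussian_logpdf_factored(x, mu, U, s):
    """Log-density of N(mu, P^{-1}) with precision P = U^T diag(s) U.

    With U shared between models and only s model-specific, each extra
    model costs just d extra scale parameters beyond the mean, which is
    the economy the factorization offers. Assumes U invertible, s > 0.
    """
    d = len(x)
    y = U @ (x - mu)  # transform into the decorrelated space
    # log|P| = sum(log s) + 2 log|det U|
    logdet_P = np.log(s).sum() + 2 * np.linalg.slogdet(U)[1]
    return 0.5 * (logdet_P - d * np.log(2 * np.pi) - np.dot(s * y, y))
```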


16.
Reservoir operation optimization (ROO) is a complicated, dynamically constrained nonlinear problem that is important in the context of reservoir system operation. In this study, parallel deterministic dynamic programming (PDDP) and a hierarchical adaptive genetic algorithm (HAGA) are proposed to solve the problem, which involves many conflicting objectives and constraints. In the PDDP method, multiple threads exhibit better speed-up than a single thread and perform well for up to four threads. In the HAGA, an adaptive dynamic parameter-control mechanism determines the parameter settings, and an elite individual is preserved in the archive from the first hierarchy to the second. Compared with other methods, the HAGA gives a better operational result with greater effectiveness and robustness because of the population diversity created by the archive operator. Comparison of the HAGA and PDDP results reveals two contradictory objectives in the ROO problem: economy and reliability. The simulation results show that, compared with the proposed PDDP, the proposed HAGA integrated with the parallel model performs better in terms of power-generation benefit and computational efficiency.

17.
Many special purpose algorithms exist for extracting information from streaming data. Constraints are imposed on the total memory and on the average processing time per data item. These constraints are usually satisfied by deciding in advance the kind of information one wishes to extract, and then extracting only the data relevant for that goal. Here, we propose a general data representation that can be computed using modest memory requirements with limited processing power per data item, and yet permits the application of an arbitrary data mining algorithm chosen and/or adjusted after the data collection process has begun. The new representation allows for the at-once analysis of a significantly larger number of data items than would be possible using the original representation of the data. The method depends on a rapid computation of a factored form of the original data set. The method is illustrated with two real datasets, one with dense and one with sparse attribute values.

18.
Because K-means results are easily affected by the initial cluster centers, an improved algorithm for selecting them is proposed. The algorithm repeatedly finds the largest current cluster and splits it, using the two data objects farthest apart within it as the new starting centers, until the specified number of cluster centers is reached. Simulation experiments on the KDD CUP99 data set show that clustering from the centers obtained this way yields better results than the original K-means algorithm.
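The splitting scheme is simple enough to sketch directly: keep a set of centers, repeatedly pick the largest current cluster, and replace its center with the two objects farthest apart in it. The nearest-center reassignment below is an assumption about a detail the abstract leaves open.

```python
import numpy as np

def split_init(X, k):
    """Hedged sketch of splitting-based initial center selection.

    Repeatedly take the largest current cluster and split it, using
    the two objects farthest apart in it as the new centers, until k
    centers exist.
    """
    clusters = [np.arange(len(X))]
    centers = [X.mean(axis=0)]
    while len(centers) < k:
        big = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        pts = X[clusters[big]]
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        a, b = np.unravel_index(np.argmax(D), D.shape)
        centers.pop(big)
        clusters.pop(big)
        centers += [pts[a], pts[b]]
        # reassign every point to its nearest current center
        C = np.array(centers)
        labels = np.argmin(
            np.linalg.norm(X[:, None] - C[None, :], axis=2), axis=1)
        clusters = [np.where(labels == j)[0] for j in range(len(C))]
    return np.array(centers)
```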

19.
The leading partitional clustering technique for categorical data, k-modes, is among the most computationally efficient clustering methods. However, the performance of the k-modes algorithm, which converges to numerous local minima, strongly depends on the initial cluster centers. Most existing methods for initializing cluster centers are designed for numerical data; because categorical data lack the geometry those methods rely on, they are not applicable to categorical data. This paper proposes a novel initialization method for categorical data, implemented within the k-modes algorithm. The method integrates distance and density to select the initial cluster centers, overcoming the shortcomings of existing initialization methods for categorical data. Experimental results illustrate that the proposed initialization method is effective and, thanks to its linear time complexity in the number of data objects, applicable to large data sets.
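A sketch of one standard way to combine distance and density for categorical data follows (the exact combination in the paper may differ): the density of an object is the average frequency of its attribute values, the first center is the densest object, and each further center maximizes density times distance to the nearest chosen center.

```python
import numpy as np

def kmodes_init(X, k):
    """Hedged sketch of distance-plus-density initialization for k-modes.

    Density of an object is the average frequency of its attribute
    values; distance is simple matching (Hamming proportion). Further
    centers maximize density * distance-to-nearest-chosen-center, one
    common way to combine the two criteria the paper integrates.
    X is an (n, d) array of categorical codes.
    """
    n, d = X.shape
    dens = np.zeros(n)
    for j in range(d):
        vals, counts = np.unique(X[:, j], return_counts=True)
        freq = dict(zip(vals, counts))
        dens += np.array([freq[v] for v in X[:, j]]) / n
    dens /= d
    centers = [int(np.argmax(dens))]
    while len(centers) < k:
        dist = np.array([[(X[i] != X[c]).mean() for c in centers]
                         for i in range(n)]).min(axis=1)
        centers.append(int(np.argmax(dens * dist)))
    return X[centers]
```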

20.
Reducing the time complexity of the fuzzy c-means algorithm
In this paper, we present an efficient implementation of the fuzzy c-means clustering algorithm. The original algorithm alternates between estimating the centers of the clusters and the fuzzy memberships of the data points. The size of the membership matrix is on the order of the original data set, a prohibitive size if this technique is to be applied to very large data sets with many clusters. Our implementation eliminates the storage of this data structure by combining the two updates into a single update of the cluster centers. This change significantly affects the asymptotic runtime, as the new algorithm is linear with respect to the number of clusters, while the original is quadratic. Eliminating the membership matrix also reduces the overhead of repeatedly accessing a large data structure. Empirical evidence is presented to quantify the savings achieved by this new method.
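The combined update is easy to sketch: inside each center update, recompute the membership of one point at a time from its distances to the current centers, accumulate the weighted sums, and discard it. The sketch below is a simplified reading of that idea, not the paper's optimized implementation.

```python
import numpy as np

def fcm_low_memory(X, c, m=2.0, iters=50, seed=0):
    """Hedged sketch of FCM with the membership matrix eliminated.

    Memberships are recomputed per point inside the center update and
    never stored, so extra memory is O(c*d) instead of O(n*c), which
    is the combined update the abstract describes, in simplified form.
    """
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        num = np.zeros_like(V)
        den = np.zeros(c)
        for x in X:  # one pass; u_j(x) is used and immediately discarded
            d = np.linalg.norm(x - V, axis=1) + 1e-12
            u = d ** -p
            u /= u.sum()
            w = u ** m
            num += w[:, None] * x
            den += w
        V = num / den[:, None]
    return V
```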
