Similar Documents
19 similar documents found (search time: 234 ms)
1.
张岩金  白亮 《计算机科学》2021,48(4):111-116
Because large volumes of categorical (symbolic) data are generated in real applications, categorical data clustering has become an important research area within cluster analysis. Many categorical data clustering algorithms have been proposed, but when they are applied in big-data environments they still suffer from high computational cost and slow running speed. This paper proposes a fast categorical data clustering algorithm based on a symbolic relation graph. The algorithm replaces the original data with the symbolic relation graph, reducing the size of the dataset and effectively addressing this problem. Extensive...

2.
A Random Projection Clustering Algorithm for High-Dimensional Categorical Data
Real-world data are often distributed in high-dimensional spaces, where the relationships among data points are very sparse when viewed over the whole vector space, so how to reduce dimensionality when clustering high-dimensional data has attracted wide attention. This paper introduces a random projection clustering algorithm suited to high-dimensional categorical data. It selects cluster-relevant dimensions by frequency, randomly generates candidate cluster centres and relevant dimensions, keeps those with the best projected-clustering quality, and thereby extends projection clustering to categorical data spaces. Experimental results confirm the practicality and effectiveness of the algorithm.

3.
Clustering is one of the most active research branches in data mining, and clustering techniques are also widely applied in other scientific fields. A large number of clustering algorithms have been proposed; among them the density-based DBSCAN algorithm has drawn much attention for its many advantages, and improved variants such as FDBSCAN and LSNCCP were proposed to reduce DBSCAN's number of region queries and its I/O cost. As applications evolve, incremental clustering is becoming increasingly important, yet existing incremental clustering algorithms have serious limitations. Based on LSNCCP, this paper proposes an effective incremental clustering algorithm, which can also be used to optimise the performance of LSNCCP itself.

4.
A Grid-Based Incremental Clustering Algorithm
Existing grid-based clustering algorithms are efficient and can handle high-dimensional data, but the clustering quality of traditional grid algorithms depends heavily on the granularity of the grid partition. This paper therefore proposes a grid-based incremental clustering algorithm, IGrid. IGrid retains the efficiency of traditional grid clustering while partitioning the grid space dynamically and incrementally by a dimension radius to improve clustering quality. Experiments on real and synthetic datasets show that IGrid outperforms traditional grid clustering algorithms in both accuracy and efficiency.
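The abstract above rests on the basic grid-clustering step of hashing points into cells so that dense cells can seed clusters. A minimal sketch of that step with a fixed cell size (IGrid itself refines the partition incrementally by a dimension radius, which is not reproduced here):

```python
from collections import defaultdict

def grid_cells(points, cell_size):
    """Assign points to grid cells by integer division of each
    coordinate; in grid-based clustering, cells whose point count
    exceeds a density threshold become cluster seeds."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(p)
    return cells
```

Neighbouring dense cells are then merged into clusters, which is what makes grid methods fast: cost depends on the number of occupied cells, not on pairwise point distances.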

5.
To address the weaknesses of current social tagging systems in tag browsing and retrieval, a social tag clustering algorithm based on weighted network partitioning is proposed. The algorithm partitions the tag co-occurrence network according to the coreness and similarity of tag nodes, and after clustering it automatically generates characteristic tags to represent each cluster. Experiments show that the algorithm achieves good clustering results.

6.
To address the high parameter sensitivity and the high time and space complexity of existing incremental clustering algorithms, an incremental clustering algorithm based on representative points is proposed. First, a representative-point clustering algorithm clusters the static database. Then, based on the relationship between each newly added point and the existing representative points, the new point is either added to the cluster of an existing representative or promoted to a new representative. Finally, the representative points are clustered again. Experimental results show that the algorithm has low parameter sensitivity, high efficiency, and a small memory footprint.
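The incremental decision the abstract describes — attach a new point to a nearby representative or promote it to a new one — can be sketched as follows; the `radius` threshold and the promotion rule are illustrative assumptions, not the paper's exact criteria:

```python
def assign_or_promote(point, representatives, radius, dist):
    """Incremental step: attach `point` to the nearest existing
    representative if it lies within `radius` of one, otherwise
    promote `point` itself to a new representative (mutating the
    `representatives` list in place)."""
    if representatives:
        nearest = min(representatives, key=lambda r: dist(point, r))
        if dist(point, nearest) <= radius:
            return nearest  # point joins that representative's cluster
    representatives.append(point)
    return point  # point becomes a new representative
```

Only the representatives are ever re-clustered afterwards, which is where the low time and space cost reported in the abstract comes from.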

7.
A High-Performance Density-Based Incremental Clustering Algorithm
刘建晔  李芳 《计算机工程》2006,32(21):76-78
This paper proposes and validates a high-performance density-based incremental clustering algorithm. The main contributions are: (1) extracting and cleaning the data using partitioning and sampling techniques; (2) clustering the data using density and grid techniques; (3) an incremental algorithm that, after the threshold changes, recomputes clusters only for the affected points; and (4) an incremental clustering algorithm for data insertions and deletions in a dynamic environment. Experiments show that the algorithm handles high-dimensional data well, filters noise effectively, and greatly reduces clustering time.

8.
Subspace clustering is an effective approach to clustering high-dimensional data; its principle is to cluster the data in subspaces that are as small as possible while preserving as much of the original information as possible. Building on existing subspace clustering work, this paper introduces a new subspace search strategy that combines cluster size and information entropy to compute the weights of subspace dimensions, and then uses the subspace feature vectors to compute cluster similarity. The algorithm clusters in the style of agglomerative hierarchical clustering, overcoming the drawbacks of using information entropy or traditional similarity measures alone. Tests on three typical categorical datasets (Zoo, Votes, and Soybean) show that, compared with other algorithms, it not only improves clustering accuracy but is also highly stable.

9.
邓广彪 《数字社区&智能家居》2014,(31):7237-7240,7243
When data are added to a database and the minimum support is adjusted, the association rules in the database change. To quickly obtain the frequent itemsets, and hence the changed association rules, from a database in which both the data volume and the minimum support change simultaneously, this paper analyses the FIM and AIUA algorithms and proposes My_FIM_AIUA, an incremental association rule mining algorithm that combines the advantages of both. The algorithm reduces the number of database scans and the number of candidate itemsets. Experiments show that My_FIM_AIUA finds frequent itemsets quickly when the data volume and the minimum support change at the same time, speeding up incremental association rule mining.

11.
Self-Organizing Neural Networks for Categorical Data
As an excellent tool for clustering and dimensionality reduction, the self-organizing map (SOM, Self-Organizing Feature Maps) has been widely applied. Its shortcoming is that it only handles numeric data, which is insufficient for data mining applications that frequently involve categorical-valued data or mixed numeric and categorical-valued data. This paper proposes a new overlap-based distance function and uses it in SOM training. Experimental results show that good clustering quality can be achieved without additional time or space cost.
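The overlap idea amounts to a simple-matching dissimilarity: count the attributes on which two categorical vectors disagree. A minimal sketch of such a distance and its use to pick a SOM best-matching unit (the paper's exact overlap function and training rule may differ):

```python
def overlap_distance(x, y):
    """Overlap (simple-matching) dissimilarity between two categorical
    vectors: the fraction of attributes on which they disagree.
    0 means identical, 1 means they differ on every attribute."""
    assert len(x) == len(y), "vectors must have the same attributes"
    return sum(a != b for a, b in zip(x, y)) / len(x)

def best_matching_unit(x, units):
    """SOM step: index of the unit closest to x under overlap distance."""
    return min(range(len(units)), key=lambda i: overlap_distance(x, units[i]))
```

Swapping this in for Euclidean distance is what lets a SOM operate on categorical attributes without first encoding them numerically.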

12.
NLOF: A New Density-Based Local Outlier Detection Algorithm
The density-based local outlier factor algorithm (LOF) has high time complexity and is unsuitable for outlier detection on large-scale or high-dimensional datasets. After analysing LOF, this paper proposes a new local outlier detection algorithm, NLOF. Its main idea is to exploit known information as much as possible to optimise the neighbourhood queries for neighbouring objects; all neighbourhood computations follow this idea. First, the dataset is preprocessed with the clustering algorithm DBSCAN to obtain a preliminary set of anomalous data. Then the local outlier factors of the objects in this preliminary set are computed as in LOF. When computing the local outlier factor of a data object, a leave-one-out partition information entropy increment is introduced: the leave-one-out entropy difference determines the attribute weights, quantifying each attribute's weight, and a weighted distance is used when computing distances between objects. NLOF is thoroughly validated on real datasets; the results show that it improves outlier detection accuracy, lowers time complexity, and achieves effective local outlier detection.
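The entropy-based attribute weighting can be illustrated with a simplified stand-in: weight each categorical attribute by the entropy of its value distribution, normalise, and use the weights in a mismatch distance. This is an assumed simplification of the paper's leave-one-out partition-entropy increment, not the authors' exact formula:

```python
import math
from collections import Counter

def attribute_entropy(column):
    """Shannon entropy (bits) of one attribute's value distribution."""
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

def entropy_weights(data):
    """Per-attribute weights from value entropies, normalised to sum to 1.
    (A stand-in for the paper's leave-one-out entropy-difference weights.)"""
    ents = [attribute_entropy(col) for col in zip(*data)]
    total = sum(ents) or 1.0
    return [e / total for e in ents]

def weighted_distance(x, y, w):
    """Weighted mismatch distance between two categorical objects."""
    return sum(wi for xi, yi, wi in zip(x, y, w) if xi != yi)
```

Attributes whose values barely vary carry near-zero weight, so they no longer dominate the distances fed into the outlier-factor computation.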

13.
    
A variety of clustering algorithms exists to group objects having similar characteristics, but implementing many of them is challenging when dealing with categorical data. Some algorithms cannot handle categorical data at all, while others cannot handle the uncertainty inherent in categorical data; handling both is a prerequisite for clustering categorical data. The minimum-minimum roughness (MMR) algorithm was proposed to address these problems using rough set theory, and many algorithms have since been developed to improve the handling of hybrid data. This research proposes information-theoretic dependency roughness (ITDR), another technique for categorical data clustering that takes into account the information-theoretic attribute dependency degree of categorical-valued information systems, going beyond its predecessors MMR, MMeR, SDR, and standard-deviation of standard-deviation roughness (SSDR). Experimental results on two benchmark UCI datasets show that ITDR outperforms the baseline categorical data clustering techniques with respect to computational complexity and the purity of clusters.

14.
Almost all subspace clustering algorithms proposed so far are designed for numeric datasets. In this paper, we present a k-means type clustering algorithm that finds clusters in data subspaces in mixed numeric and categorical datasets. In this method, we compute the contribution of attributes to different clusters, and we propose a new cost function for a k-means type algorithm. One advantage of this algorithm is its complexity, which is linear in the number of data points. The algorithm is also useful for describing cluster formation in terms of the contribution of attributes to different clusters. It is tested on various synthetic and real datasets to show its effectiveness; the clustering results are explained using the attribute weights in the clusters and are also compared with published results.
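A k-prototypes-style cost of the kind this abstract builds on can be sketched as a per-object distance that mixes weighted squared Euclidean terms for the numeric attributes with weighted mismatches for the categorical ones; the weights and the `gamma` trade-off parameter here are illustrative, not the paper's exact cost function:

```python
import numpy as np

def mixed_distance(x_num, x_cat, c_num, c_cat, w_num, w_cat, gamma=1.0):
    """Distance from one object to a cluster centre over mixed attributes:
    per-attribute-weighted squared Euclidean terms for the numeric part,
    plus gamma-scaled weighted mismatches for the categorical part."""
    diff = np.asarray(x_num, dtype=float) - np.asarray(c_num, dtype=float)
    num_part = float(np.sum(np.asarray(w_num) * diff ** 2))
    cat_part = gamma * sum(w for w, a, b in zip(w_cat, x_cat, c_cat) if a != b)
    return num_part + cat_part
```

Summing this distance over all objects and their assigned centres gives the objective a k-means-style alternating procedure would minimise, and inspecting the learned weights is what lets the method describe each cluster's subspace.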

15.
Frequent itemset mining was initially proposed and has been studied extensively in the context of association rule mining. In recent years, several studies have also extended its application to transaction or document clustering. However, most frequent itemset based clustering algorithms need to first mine a large intermediate set of frequent itemsets in order to identify a subset of the most promising ones that can be used for clustering. In this paper, we study how to directly find a subset of high-quality frequent itemsets that can be used as a concise summary of the transaction database and to cluster the categorical data. By exploring key properties of the subset of itemsets that we are interested in, we propose several search space pruning methods and design an efficient algorithm called SUMMARY. Our empirical results show that SUMMARY runs very fast even when the minimum support is extremely low, scales very well with respect to the database size, and, surprisingly, as a pure frequent itemset mining algorithm is very effective in clustering categorical data and summarizing dense transaction databases. Jianyong Wang received the Ph.D. degree in computer science in 1999 from the Institute of Computing Technology, Chinese Academy of Sciences. Since then, he has worked as an assistant professor in the Department of Computer Science and Technology, Peking (Beijing) University, in the areas of distributed systems and Web search engines, and has visited the School of Computing Science at Simon Fraser University, the Department of Computer Science at the University of Illinois at Urbana-Champaign, and the Digital Technology Center and the Department of Computer Science at the University of Minnesota, mainly working in the area of data mining. He is currently an associate professor in the Department of Computer Science and Technology at Tsinghua University, P.R. China. George Karypis received his Ph.D. degree in computer science at the University of Minnesota, where he is currently an associate professor in the Department of Computer Science and Engineering. His research interests span the areas of parallel algorithm design, data mining, bioinformatics, information retrieval, applications of parallel processing in scientific computing and optimization, sparse matrix computations, parallel preconditioners, and parallel programming languages and libraries. His research has resulted in the development of software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), parallel Cholesky factorization (PSPASES), collaborative filtering-based recommendation algorithms (SUGGEST), clustering high-dimensional datasets (CLUTO), and finding frequent patterns in diverse datasets (PAFI). He has coauthored over ninety journal and conference papers on these topics and the book "Introduction to Parallel Computing" (Addison Wesley, 2003, 2nd edition). In addition, he serves on the program committees of many conferences and workshops on these topics and is an associate editor of the IEEE Transactions on Parallel and Distributed Systems.

16.
When imbalanced data are labelled through traditional clustering analysis, the clustering result tends to favour the classes with more samples, which introduces labelling errors. To address this problem, an improved labelling method based on a balance-constrained clustering algorithm is proposed, which effectively improves the labelling accuracy of clustering on imbalanced data. The method comprises initial clustering, expert-knowledge adjustment, data balancing, and balance-constrained clustering. Initial clustering assigns initial class labels to the imbalanced data; expert-knowledge adjustment corrects the labels of mislabelled data; balancing the data yields a balanced dataset; and balance-constrained clustering performs the final, precise label assignment on the balanced data. Simulations show that this method effectively improves labelling accuracy on imbalanced data.

17.
In this paper, we develop a semi-supervised regression algorithm to analyze data sets that contain both categorical and numerical attributes. This algorithm partitions the data sets into several clusters and at the same time fits a multivariate regression model to each cluster. This framework allows one to incorporate both multivariate regression models for the numerical variables (supervised learning methods) and k-mode clustering algorithms for the categorical variables (unsupervised learning methods). The estimates of the regression models and the k-mode parameters can be obtained simultaneously by minimizing a function that is the weighted sum of the least-square errors of the multivariate regression models and the dissimilarity measures among the categorical variables. Both synthetic and real data sets are presented to demonstrate the effectiveness of the proposed method.

18.
BIRCH: A New Data Clustering Algorithm and Its Applications
Data clustering is an important technique for exploratory data analysis and has been studied for several years. It has been shown to be useful in many practical domains such as data classification and image processing. Recently, there has been a growing emphasis on exploratory analysis of very large datasets to discover useful patterns and/or correlations among attributes. This is called data mining, and data clustering is regarded as a particular branch of it. However, existing data clustering methods do not adequately address the problem of processing large datasets with a limited amount of resources (e.g., memory and CPU cycles), so as the dataset size increases they do not scale up well in terms of memory requirement, running time, and result quality. In this paper, an efficient and scalable data clustering method is proposed, based on a new in-memory data structure called the CF-tree, which serves as an in-memory summary of the data distribution. We have implemented it in a system called BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) and studied its performance extensively in terms of memory requirements, running time, clustering quality, stability, and scalability; we also compare it with other available methods. Finally, BIRCH is applied to solve two real-life problems: building an iterative and interactive pixel classification tool, and generating the initial codebook for image compression.
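The CF-tree's leaf entries are cluster features (N, LS, SS) — point count, linear sum, and squared sum — which are additive, so a cluster can absorb points or merge with another cluster in constant time. A minimal sketch of a single cluster feature and its derived statistics (not the full CF-tree with its branching factor and threshold logic):

```python
import numpy as np

class ClusterFeature:
    """BIRCH cluster feature (N, LS, SS). CFs are additive, which is
    what lets the CF-tree keep an in-memory summary of the data
    distribution in bounded space."""

    def __init__(self, point):
        p = np.asarray(point, dtype=float)
        self.n, self.ls, self.ss = 1, p.copy(), p * p

    def add(self, point):
        """Absorb one more point into this cluster feature."""
        p = np.asarray(point, dtype=float)
        self.n += 1
        self.ls += p
        self.ss += p * p

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        """Root-mean-square distance of member points from the centroid."""
        c = self.centroid()
        return float(np.sqrt(max(np.sum(self.ss) / self.n - np.dot(c, c), 0.0)))
```

A new point descends the tree toward the leaf CF with the nearest centroid and is absorbed there if the resulting radius stays under the threshold; this is what gives BIRCH its single-scan, bounded-memory behaviour on large datasets.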

19.
A Clustering Algorithm Based on Longest Frequent Closed Itemsets
张泽洪  张伟 《计算机工程》2007,33(1):187-189
Since many algorithms are unsuitable for clustering categorical data, a clustering algorithm based on the longest frequent closed itemset (LFCI) is proposed. Using a modified frequent-pattern tree, the LFCI of each transaction is obtained; thanks to two important properties of LFCIs, the LFCI can serve as the description of its transaction, directly yielding the clustering result. Experiments demonstrate the effectiveness of the algorithm.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号