Similar Literature
18 similar documents found (search time: 156 ms).
1.
A basic task in data mining is to mine frequent itemsets from massive databases. This paper proposes a method that, instead of mining the complete collection of frequent itemsets, mines a condensed representation of it called the collection of frequent disjunction-free sets. The complete collection of frequent itemsets, together with their exact supports, can be recovered from the frequent disjunction-free sets without reading the database, and mining the frequent disjunction-free sets is efficient. We present an algorithm, HOPE-III, for mining the collection of frequent disjunction-free sets and compare it with the A-Close algorithm. Experimental results show that HOPE-III outperforms A-Close in every case.

2.
卢炎生  王莉  赵栋 《计算机工程》2005,31(5):99-101
An improved association rule algorithm based on disjunction-free sets, called IHPD, is proposed; disjunction-free sets are a condensed representation. The algorithm combines the strengths of the HLinEx, IHP and DHP algorithms, greatly improving performance, and its use is no longer restricted to mining long frequent itemsets. Experimental results show that IHPD outperforms HLinEx.

3.
张军  陈凯明 《计算机工程》2008,34(9):76-77,8
To reduce the storage space of association rules and make them easier to query, a minimal prediction-set algorithm whose rule antecedents are single items is proposed. A set-enumeration tree is used to find the maximal frequent itemsets, from which the minimal prediction set is mined, and the validity of rule expansion is proved. Experimental results show that the minimal prediction set produced by this algorithm is an order of magnitude smaller than that obtained with traditional methods.

4.
Updates to a database cause the association rules mined from it to change; once all frequent itemsets of the updated database are found, the updated association rules can be generated, so updating association rules reduces to updating frequent itemsets. The UWEP algorithm reuses previous mining results to lower the cost of mining the new frequent itemsets and applies several optimizations to cut the number of database scans and candidate itemsets, but it can only handle the insertion of new transactions. The UWEP2 algorithm proposed in this paper extends UWEP to handle insertions, deletions and modifications of transactions. We compare it with FUP2, another algorithm for updating frequent itemsets; experiments show that UWEP2 generates fewer candidate itemsets and performs better than FUP2.
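Neither UWEP nor UWEP2 is specified in this abstract; the minimal Python sketch below, on invented toy data, only illustrates the basic bookkeeping such incremental algorithms rely on: the support of an itemset in the updated database is its old support, plus its count in the inserted transactions, minus its count in the deleted ones. The function and variable names are hypothetical.

```python
from typing import FrozenSet, Iterable, List, Set

def count_support(itemset: FrozenSet[str], transactions: Iterable[Set[str]]) -> int:
    """Count how many transactions contain every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def updated_support(old_support: int,
                    itemset: FrozenSet[str],
                    added: List[Set[str]],
                    deleted: List[Set[str]]) -> int:
    """Support in the updated database = old support
    + occurrences in the inserted transactions
    - occurrences in the removed transactions."""
    return old_support + count_support(itemset, added) - count_support(itemset, deleted)

# Example: {a, b} occurred 10 times before; it appears in 2 of the inserted
# transactions and in 1 of the deleted ones, so it now occurs 11 times.
print(updated_support(10, frozenset({"a", "b"}),
                      added=[{"a", "b", "c"}, {"a", "b"}, {"c"}],
                      deleted=[{"a", "b", "d"}]))
```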

5.
杨君锐 《计算机工程》2004,30(14):116-118
Association rules are one of the main topics in current data mining research, and discovering frequent itemsets is the key problem in mining them. This paper proposes IDMFI (inverse discovery of maximum frequent itemsets), a top-down algorithm based on maximal frequent itemsets. Using properties of frequent itemsets as heuristic information and searching in reverse (top-down), it greatly reduces the number of candidate itemsets generated and thus significantly improves mining efficiency.

6.
The performance of association rule mining is determined mainly by the discovery of frequent itemsets. Since every frequent itemset is a subset of some maximal frequent itemset, finding all maximal frequent itemsets is the key problem. This paper proposes MMFI, an algorithm for mining maximal frequent itemsets that uses a bit-string array as its data structure and obtains the maximal frequent itemsets directly through bitwise AND operations on the bit strings.
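The abstract names the bit-string array only in passing; the sketch below, on invented toy data, illustrates the underlying idea rather than MMFI itself: encode each item as a bit string over the transactions, so the support of any itemset is the number of 1-bits in the bitwise AND of its items' strings. All names are hypothetical.

```python
from functools import reduce

# Hypothetical toy database: 6 transactions over items a, b, c.
transactions = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b", "c"}, {"a", "b"}, {"c"}]
items = sorted({i for t in transactions for i in t})

# One bit per transaction: bit j of an item's string is 1 if transaction j contains it.
bitstring = {i: sum(1 << j for j, t in enumerate(transactions) if i in t) for i in items}

def support(itemset):
    """Support = number of 1-bits in the AND of the items' bit strings."""
    return bin(reduce(lambda x, y: x & y, (bitstring[i] for i in itemset))).count("1")

print(support({"a", "b"}))  # {a, b} occurs in transactions 0, 1 and 4 -> 3
```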

7.
To address the large number of redundant rules and the low efficiency of association rule mining, a constrained association rule mining algorithm based on transaction-ID sets, ACARMT, is proposed. The algorithm combines the strengths of the Separate algorithm and of vertical-data-layout algorithms: it first generates base frequent itemsets according to the constraints and then stores itemset information in transaction-ID sets, avoiding repeated database scans and improving mining efficiency. Experiments applying the algorithm to real reproductive-health data show that it remains effective when the data volume exceeds what vertical-data-layout algorithms can handle, and that it is more efficient than the Separate algorithm.
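ACARMT itself is not described in enough detail to reproduce; the sketch below, on invented data, only shows the transaction-ID-set (vertical) representation the abstract refers to: the support of an itemset is the size of the intersection of its items' TID sets, so the original transactions never need to be rescanned. Names are hypothetical.

```python
from collections import defaultdict

# Hypothetical transactions identified by TIDs.
db = {1: {"a", "b"}, 2: {"a", "c"}, 3: {"a", "b", "c"}, 4: {"b"}}

# Vertical layout: for each item, the set of TIDs of the transactions containing it.
tidset = defaultdict(set)
for tid, items in db.items():
    for item in items:
        tidset[item].add(tid)

def support(itemset):
    """Support = size of the intersection of the items' TID sets;
    no rescan of the original transactions is needed."""
    return len(set.intersection(*(tidset[i] for i in itemset)))

print(support({"a", "b"}))  # {a, b} is contained in transactions 1 and 3 -> 2
```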

8.
General association rule mining algorithms proceed in two steps: the first discovers the frequent itemsets, and the second uses them to generate association rules. This paper discusses the characteristics and shortcomings of existing association rule mining algorithms and proposes a more efficient one. Unlike other algorithms, it focuses on exploiting domain knowledge and on preparing for the application of association rule systems.

9.
Research on an Efficient Association Rule Mining Algorithm   Cited by: 2 (self-citations: 1, by others: 2)
General association rule mining algorithms proceed in two steps: the first discovers the frequent itemsets, and the second uses them to generate association rules. This paper discusses the characteristics and shortcomings of existing association rule mining algorithms and proposes a more efficient one. Unlike other algorithms, it focuses on exploiting domain knowledge and on preparing for the application of association rule systems.

10.
Apriori is the classic algorithm for mining the frequent itemsets of association rules in data mining, but it generates a large number of candidate itemsets and requires many database scans. A new frequent itemset mining algorithm, CApriori, is therefore proposed: a decomposed transaction matrix is used to compress the relevant information of the database, and association rule mining is carried out on this matrix; the join step that generates the frequent k-itemsets from the frequent (k-1)-itemsets is optimized; and support counts are computed quickly with bitwise AND operations over row sets instead of scanning the database, so the improved algorithm mines all frequent itemsets with only two database scans. Experimental results show that the improved algorithm is more efficient than Apriori when the minimum support is small.
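CApriori's transaction-matrix decomposition is not specified here; as background, the sketch below shows the classic Apriori join-and-prune step that the abstract says is optimized, generating candidate k-itemsets from the frequent (k-1)-itemsets. The function name `apriori_gen` and the toy input are assumptions, not taken from the paper.

```python
from itertools import combinations

def apriori_gen(frequent_k_minus_1):
    """Classic Apriori candidate generation: join (k-1)-itemsets that share
    their first k-2 items, then drop candidates with an infrequent (k-1)-subset."""
    prev = {tuple(sorted(s)) for s in frequent_k_minus_1}
    candidates = set()
    for a in prev:
        for b in prev:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:        # join step
                cand = a + (b[-1],)
                if all(tuple(sorted(sub)) in prev          # prune step
                       for sub in combinations(cand, len(cand) - 1)):
                    candidates.add(cand)
    return candidates

# Frequent 2-itemsets -> candidate 3-itemsets.
print(apriori_gen([("a", "b"), ("a", "c"), ("b", "c"), ("b", "d")]))
# {('a', 'b', 'c')} -- ('b', 'c', 'd') is pruned because ('c', 'd') is not frequent.
```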

11.
Given a large set of data, a common data mining problem is to extract the frequent patterns occurring in this set. The idea presented in this paper is to extract a condensed representation of the frequent patterns called disjunction-bordered condensation (DBC), instead of extracting the whole frequent pattern collection. We show that this condensed representation can be used to regenerate all frequent patterns and their exact frequencies. Moreover, this regeneration can be performed without any access to the original data. Practical experiments show that the DBC can be extracted very efficiently even in difficult cases, and that this extraction and the regeneration of the frequent patterns are much more efficient than the direct extraction of the frequent patterns themselves. We compared the DBC with another representation of frequent patterns previously investigated in the literature, called frequent closed sets. In nearly all experiments we have run, the DBC has been extracted much more efficiently than frequent closed sets. In the other cases, the extraction times are very close.

12.
Generating a Condensed Representation for Association Rules   Cited by: 1 (self-citations: 0, by others: 1)
Association rule extraction from operational datasets often produces several tens of thousands, and even millions, of association rules. Moreover, many of these rules are redundant and thus useless. Using a semantic based on the closure of the Galois connection, we define a condensed representation for association rules. This representation is characterized by frequent closed itemsets and their generators. It contains the non-redundant association rules having minimal antecedent and maximal consequent, called min-max association rules. We think that these rules are the most relevant since they are the most general non-redundant association rules. Furthermore, this representation is a basis, i.e., a generating set for all association rules, their supports and their confidences, and all of them can be retrieved without accessing the data. We introduce algorithms for extracting this basis and for reconstructing all association rules. Results of experiments carried out on real datasets show the usefulness of this approach. In order to generate this basis when an algorithm for extracting frequent itemsets (such as Apriori, for instance) is used, we also present an algorithm for deriving frequent closed itemsets and their generators from frequent itemsets without using the dataset.
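The paper's extraction algorithms are not reproduced here; the sketch below only illustrates, on an invented toy database, the Galois closure on which the representation rests: the closure of an itemset is the intersection of all transactions containing it, a closed itemset equals its own closure, and a generator is a minimal itemset with that same closure. Data and names are hypothetical.

```python
# Hypothetical toy database.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c", "d"}]

def closure(itemset):
    """Galois closure: intersection of all transactions that contain `itemset`.
    An itemset is closed iff it equals its own closure."""
    covering = [t for t in transactions if itemset <= t]
    if not covering:
        return set(itemset)
    return set.intersection(*map(set, covering))

print(sorted(closure({"b"})))       # ['a', 'b'] -> {b} is a generator of the closed set {a, b}
print(sorted(closure({"a", "b"})))  # ['a', 'b'] -> {a, b} is closed
```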

13.
A core issue of the association rule extraction process in data mining is to find the frequent patterns in a database of operational transactions. Once these patterns are discovered, decision making and strategy setting in organizations can be carried out with greater precision. A frequent pattern is a pattern seen in a significant number of transactions. Because such data are produced continuously and at high speed, they cannot be stored in memory, so techniques are needed that process them online and find the repeated patterns. Several mining methods have been proposed in the literature that attempt to efficiently extract a complete or a closed set of different types of frequent patterns from a dataset. In this paper, a method based on Cellular Learning Automata (CLA) is presented for mining frequent itemsets. The proposed method is compared with the Apriori, FP-Growth and BitTable methods, and it is concluded that frequent itemset mining can be achieved in less running time. The experiments are conducted on several data sets with different values of minsup for all the algorithms as well as for the presented method. The results point to the effectiveness of the proposed method.

14.
15.
Non-derivable itemset mining   Cited by: 3 (self-citations: 2, by others: 3)
All frequent itemset mining algorithms rely heavily on the monotonicity principle for pruning. This principle allows for excluding candidate itemsets from the expensive counting phase. In this paper, we present sound and complete deduction rules to derive bounds on the support of an itemset. Based on these deduction rules, we construct a condensed representation of all frequent itemsets by removing those itemsets whose support can be derived, resulting in the so-called Non-Derivable Itemsets (NDI) representation. We also present connections between our proposal and other recent proposals for condensed representations of frequent itemsets. Experiments on real-life datasets show the effectiveness of the NDI representation, making the search for frequent non-derivable itemsets a useful and tractable alternative to mining all frequent itemsets.
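The full deduction rules are not restated in this abstract; as a minimal illustration, the sketch below shows their simplest instance, the inclusion-exclusion bounds on the support of a 2-itemset computed from the supports of its proper subsets. The function name and the figures are hypothetical.

```python
def bounds_for_pair(supp_empty, supp_a, supp_b):
    """Inclusion-exclusion bounds on supp({a, b}) from its proper subsets
    (the simplest instance of support deduction rules):
      supp(ab) <= min(supp(a), supp(b))
      supp(ab) >= supp(a) + supp(b) - supp({})
    If the two bounds coincide, {a, b} is derivable and can be omitted
    from a condensed representation."""
    upper = min(supp_a, supp_b)
    lower = max(0, supp_a + supp_b - supp_empty)
    return lower, upper

# 100 transactions, supp(a) = 90, supp(b) = 95: supp(ab) must lie in [85, 90].
print(bounds_for_pair(100, 90, 95))
```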

16.
A closed frequent itemset mining algorithm based on apposition assembly of iceberg concept lattices   Cited by: 2 (self-citations: 0, by others: 2)
Because of the completeness of concept lattices, the time and space complexity of lattice construction has been the main obstacle to applying concept lattices in data mining. Combining the semilattice property of iceberg concept lattices with the idea of lattice assembly, this paper first shows theoretically that the iceberg concept lattice obtained by apposition assembly is isomorphic to the iceberg lattice obtained by pruning the complete concept lattice; it then analyzes the mappings involved in the assembly process; finally, it proposes a novel closed frequent itemset mining algorithm based on apposition of iceberg concept lattices, Icegalamera. The algorithm avoids computing the complete concept lattice and applies assembly and pruning strategies during construction, which significantly improves mining efficiency. Experiments confirm the completeness of the closed frequent itemsets it produces. Performance tests on dense and sparse datasets in a single-site setting show that the advantage is most pronounced on sparse datasets.

17.
Mining closed frequent itemsets from data streams has attracted much interest recently. However, it is not easy for users to determine a proper minimum support threshold, so it is more reasonable to ask users to set a bound on the result size. Therefore, an interactive single-pass algorithm, called TKC-DS (top-K frequent closed itemsets of data streams), is proposed for efficiently mining the top-K closed itemsets from data streams. A novel data structure, called CIL (closed itemset lattice), is developed for maintaining the essential information about the closed itemsets generated so far. Experimental results show that the proposed TKC-DS algorithm is an efficient method for mining top-K frequent itemsets from data streams.
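TKC-DS and its CIL structure are not described in enough detail here to reproduce; the sketch below only shows the generic top-K bookkeeping such an algorithm needs, keeping the K itemsets with the highest support in a min-heap so the weakest one can be evicted cheaply. Names and data are invented.

```python
import heapq

def top_k_itemsets(itemsets_with_support, k):
    """Keep only the k itemsets with the highest support, using a min-heap
    so the currently weakest itemset can be evicted in O(log k)."""
    heap = []  # entries are (support, itemset) tuples
    for support, itemset in itemsets_with_support:
        if len(heap) < k:
            heapq.heappush(heap, (support, itemset))
        elif support > heap[0][0]:
            heapq.heapreplace(heap, (support, itemset))
    return sorted(heap, reverse=True)

stream = [(5, ("a",)), (9, ("a", "b")), (2, ("c",)), (7, ("b",)), (4, ("b", "c"))]
print(top_k_itemsets(stream, k=3))  # [(9, ('a', 'b')), (7, ('b',)), (5, ('a',))]
```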

18.
Given a large collection of transactions containing items, a basic common data mining problem is to extract the so-called frequent itemsets (i.e., sets of items appearing in at least a given number of transactions). In this paper, we propose a structure called free-sets, from which we can approximate any itemset support (i.e., the number of transactions containing the itemset), and we formalize this notion in the framework of ε-adequate representations (H. Mannila and H. Toivonen, 1996. In Proc. of the Second International Conference on Knowledge Discovery and Data Mining (KDD'96), pp. 189–194). We show that frequent free-sets can be efficiently extracted using pruning strategies developed for frequent itemset discovery, and that they can be used to approximate the support of any frequent itemset. Experiments on real dense data sets show a significant reduction of the size of the output when compared with standard frequent itemset extraction. Furthermore, the experiments show that the extraction of frequent free-sets is still possible when the extraction of frequent itemsets becomes intractable, and that the supports of the frequent free-sets can be used to approximate very closely the supports of the frequent itemsets. Finally, we consider the effect of this approximation on association rules (a popular kind of patterns that can be derived from frequent itemsets) and show that the corresponding errors remain very low in practice.
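The free-set extraction algorithm itself is not reproduced here; the sketch below, on invented toy data, only illustrates the freeness test that underlies the representation: an itemset X is free when removing any single item strictly increases the support, i.e. no item of X is exactly implied by the others. Names are hypothetical.

```python
# Hypothetical toy database.
transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"c"}]

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

def is_free(itemset):
    """X is a free-set when no item of X is exactly implied by the others,
    i.e. removing any single item strictly increases the support."""
    return all(support(itemset - {b}) > support(itemset) for b in itemset)

print(is_free({"a", "b"}))  # False: every transaction containing 'a' also contains 'b'
print(is_free({"b", "c"}))  # True:  neither 'b' nor 'c' is exactly implied by the other
```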
