Similar Articles
20 similar articles retrieved.
1.
This paper studies the problem of mining Top-k frequent closed itemsets from microarray data and designs the mining algorithm ZDtop. The algorithm compresses and stores the dataset in a ZBDD structure and uses a top-down depth-first search strategy to mine the Top-k frequent closed itemsets whose length is no less than a given value min_l, while effectively pruning the search space. Examples demonstrate that the algorithm is correct and effective.
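The abstract does not describe ZDtop's ZBDD internals, but the task it solves (top-k frequent closed itemsets of length at least min_l) can be illustrated with a brute-force sketch; the toy dataset, k, and min_l below are made up for illustration and a real miner would prune the search space instead of enumerating it.

```python
from itertools import combinations

def topk_closed(transactions, k, min_l):
    """Brute-force top-k frequent closed itemsets with length >= min_l.
    Illustration only; real miners such as ZDtop or TFP prune the search space."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    # support of every itemset with length >= min_l (exponential; toy data only)
    for size in range(min_l, len(items) + 1):
        for cand in combinations(items, size):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s > 0:
                support[cand] = s
    # closed = no proper superset with the same support
    closed = [(c, s) for c, s in support.items()
              if not any(set(c) < set(d) and s == support[d] for d in support)]
    return sorted(closed, key=lambda cs: -cs[1])[:k]

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(topk_closed(data, k=3, min_l=2))
```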

2.
王丹丹  蒋文娟 《计算机科学》2012,39(11):153-156
To improve the accuracy of frequent pattern mining in workflow environments, a new frequent closed pattern mining algorithm is proposed. First, the definition of the dependency matrix is extended: a dependency support matrix covering both direct dependency and overlap relations is built from the workflow log. Then the CHARM algorithm is extended to mine frequent closed activity sets automatically on top of the support matrix. Finally, the frequent closed itemsets are post-processed to form the final workflow frequent closed patterns. The algorithm handles parallel and choice relations better than comparable algorithms.
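A sketch of how a direct-dependency support matrix might be built from a workflow log. The log format (a list of traces, each an ordered list of activity names) and the normalization by trace count are assumptions, since the paper's exact definitions of the dependency and overlap relations are not given in the abstract; overlap handling is omitted.

```python
from collections import defaultdict

def dependency_support_matrix(log):
    """Count how often activity a is directly followed by activity b,
    normalized by the number of traces in the log."""
    direct = defaultdict(int)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            direct[(a, b)] += 1
    n = len(log)
    return {pair: count / n for pair, count in direct.items()}

log = [["register", "check", "pay"],
       ["register", "pay", "check"],
       ["register", "check", "pay"]]
print(dependency_support_matrix(log))
```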

3.
An accurate top-k frequent pattern mining method under differential privacy protection (cited: 1)
Frequent pattern mining is a common technique for analyzing transaction datasets. However, when a transaction dataset contains sensitive data (e.g., user behavior records or electronic medical records), directly publishing frequent patterns and their support counts poses a considerable risk to individual privacy. To address this, a top-k frequent pattern mining algorithm satisfying ε-differential privacy, DP-topkP (differentially private top-k pattern mining), is proposed. The algorithm uses the exponential mechanism to select, from the candidate frequent pattern set, the top-k patterns with their true support counts; it then perturbs the true support counts of the selected patterns with noise generated by the Laplace mechanism; to improve the utility of the output patterns, a post-processing technique refines the noisy support counts of the top-k patterns. It is proved theoretically that the algorithm satisfies ε-differential privacy and meets the (λ,δ)-usefulness requirement. Experimental results show that DP-topkP achieves good accuracy, utility, and scalability.
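A minimal sketch of the two standard differential privacy building blocks the abstract names: exponential-mechanism selection of k patterns followed by Laplace perturbation of their supports. The even budget split, the sensitivity value, and the omission of the post-processing step are simplifying assumptions, not the paper's exact DP-topkP design.

```python
import numpy as np

def dp_topk_patterns(candidates, k, epsilon, sensitivity=1.0):
    """candidates: dict mapping pattern -> true support count."""
    patterns = list(candidates)
    supports = np.array([candidates[p] for p in patterns], dtype=float)
    eps_select, eps_noise = epsilon / 2, epsilon / 2      # assumed even budget split
    chosen = []
    for _ in range(k):
        # exponential mechanism: Pr[p] proportional to exp(eps * support / (2k * sensitivity))
        scores = np.exp(eps_select * supports / (2 * k * sensitivity))
        for p, _ in chosen:                               # sample without replacement
            scores[patterns.index(p)] = 0.0
        probs = scores / scores.sum()
        idx = int(np.random.choice(len(patterns), p=probs))
        chosen.append((patterns[idx], supports[idx]))
    # Laplace mechanism on the k selected support counts
    scale = k * sensitivity / eps_noise
    return [(p, s + np.random.laplace(0.0, scale)) for p, s in chosen]

candidates = {("a",): 40, ("b",): 35, ("a", "b"): 30, ("c",): 5}
print(dp_topk_patterns(candidates, k=2, epsilon=1.0))
```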

4.
This paper studies the problem of mining Top-k frequent closed itemsets from microarray data and designs the mining algorithm ZDtop. The algorithm compresses and stores the dataset in a ZBDD structure and uses a top-down depth-first search strategy to mine the Top-k frequent closed itemsets whose length is no less than a given value min_l, while effectively pruning the search space. Examples demonstrate that the algorithm is correct and effective.

5.
刘芳 《计算机工程》2012,38(1):59-61
Graph-based association rule mining algorithms generate a large number of candidate itemsets. To address this problem, an improved algorithm combining a bidirectional search strategy is proposed. Frequent 1-itemsets are sorted by support, the longest supersets of frequent k-itemsets are verified, and Apriori-style pruning is applied. Experimental results show that, when the support threshold is small, the improved algorithm effectively reduces the number of candidate itemsets and improves mining efficiency.
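A sketch of two ingredients the abstract mentions: ordering frequent 1-itemsets by descending support and the Apriori pruning test (every (k-1)-subset of a candidate must itself be frequent). The graph construction and the longest-superset verification of the improved algorithm are not reproduced; the example data are illustrative.

```python
from itertools import combinations
from collections import Counter

def frequent_1_itemsets_sorted(transactions, min_sup):
    """Frequent single items, sorted by descending support."""
    counts = Counter(item for t in transactions for item in t)
    freq = [(item, c) for item, c in counts.items() if c >= min_sup]
    return sorted(freq, key=lambda ic: -ic[1])

def apriori_prune(candidate, frequent_prev):
    """Keep the candidate only if all of its (k-1)-subsets are frequent."""
    return all(frozenset(sub) in frequent_prev
               for sub in combinations(candidate, len(candidate) - 1))

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "d"}]
print(frequent_1_itemsets_sorted(data, min_sup=2))
freq_2 = {frozenset(p) for p in [("a", "b"), ("a", "c"), ("b", "c")]}
print(apriori_prune(("a", "b", "c"), freq_2))   # True: all 2-subsets frequent
```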

6.
Traditional association rule mining algorithms cannot successively mine frequent itemsets under multiple minimum supports over the same transaction database. To this end, a frequent itemset mining algorithm based on multiple minimum supports is proposed. Using set theory, the concept of a model base is defined; the transaction database is converted into a model base, and frequent itemsets are obtained by querying the model base, which reduces the time needed to mine frequent itemsets. Experimental results show that the algorithm mines more efficiently than Apriori.

7.
Frequent subgraph mining is a concrete form of frequent pattern mining, widely applied in social network analysis, biotechnology, recommender systems, and other areas. However, graph datasets may contain sensitive information, and privacy can be leaked during mining or when frequent subgraph information is published. To address this, a differentially private top-k subgraph mining algorithm, DP-TGM (Differentially Private Top-k subGraph Mining), is proposed. The algorithm first prunes the dataset based on the mined frequent vertices and edges, then extends the frequent edges one by one to obtain the final top-k frequent subgraphs. A priority queue stores the k most frequent subgraphs mined so far; its elements are updated continuously during the extension process, and the threshold is always updated to the minimum noisy support in the queue, which reduces the number of graph extensions. The Laplace mechanism is used to add noise to the true supports of subgraphs at three different stages, and the privacy budget is allocated reasonably with an equal-split method and a special series method to improve data utility. It is proved theoretically that the algorithm satisfies ε-differential privacy, and its utility is validated on datasets of different sizes.
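A sketch of the priority-queue bookkeeping described in the abstract: keep the k subgraphs with the highest noisy supports in a min-heap and use the smallest noisy support in the heap as the current pruning threshold. The subgraph extension step and the three-stage budget allocation are omitted; the Laplace scale and the toy data are assumptions.

```python
import heapq
import math
import random

def laplace(scale):
    """Inverse-CDF sample from Laplace(0, scale)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def update_topk(heap, k, subgraph, true_support, scale):
    """Min-heap of (noisy_support, subgraph); returns the current noisy threshold."""
    noisy = true_support + laplace(scale)
    if len(heap) < k:
        heapq.heappush(heap, (noisy, subgraph))
    elif noisy > heap[0][0]:
        heapq.heapreplace(heap, (noisy, subgraph))
    return heap[0][0]          # smallest noisy support kept = pruning threshold

heap = []
for sg, sup in [("A-B", 50), ("B-C", 42), ("A-C", 47), ("C-D", 12)]:
    threshold = update_topk(heap, k=2, subgraph=sg, true_support=sup, scale=2.0)
print(sorted(heap, reverse=True), "threshold:", threshold)
```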

8.
Research on an FP-Tree-based frequent closed itemset mining algorithm (cited: 1)
Many frequent closed itemset mining algorithms exist, for example CLOSET [1]. CLOSET is based on FP-Growth: it represents pattern support sets with an FP-Tree and mines frequent closed patterns by depth-first search. Its difficulty is that recursively constructing conditional FP-Trees incurs large CPU and storage overhead. To solve this problem, the paper proposes a frequent closed itemset mining algorithm based on the FP-Tree and the COFI-Tree. The COFI-Tree structure avoids recursive construction of conditional FP-Trees, and only the COFI-Tree of a single frequent item resides in memory at any moment, which greatly reduces memory consumption. Experiments show that when mining large databases, the algorithm is more efficient than other algorithms in terms of execution time.
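A compact sketch of ordinary FP-Tree construction, the structure that both CLOSET and the proposed COFI-Tree-based algorithm start from; the COFI-Tree itself is not reproduced here. The transactions and min_sup value are illustrative.

```python
from collections import Counter

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_sup):
    """Insert each transaction's frequent items, sorted by descending support."""
    counts = Counter(i for t in transactions for i in t)
    frequent = {i: c for i, c in counts.items() if c >= min_sup}
    order = sorted(frequent, key=lambda i: -frequent[i])
    root = FPNode(None, None)
    header = {i: [] for i in order}            # item -> list of tree nodes
    for t in transactions:
        node = root
        for item in [i for i in order if i in t]:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, node)
                node.children[item] = child
                header[item].append(child)
            child.count += 1
            node = child
    return root, header

tree, header = build_fp_tree(
    [{"a", "b"}, {"b", "c", "d"}, {"a", "b", "c"}, {"a", "c"}], min_sup=2)
print({item: sum(n.count for n in nodes) for item, nodes in header.items()})
```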

9.
In traditional pruning strategies, the search space of parent and child nodes that share the same transaction set is not fully pruned, which lowers efficiency. A parent-child equivalence pruning strategy is therefore proposed. The set-enumeration tree is traversed by depth-first search, and the search space of parent-child node pairs with identical transaction sets is pruned, effectively shrinking the search space and reducing the number of frequency computations. A maximal frequent itemset mining algorithm based on this pruning strategy is given. Experimental results show that the algorithm shortens the time needed to mine maximal frequent itemsets at the same support level.
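A sketch of the equivalence test behind this kind of pruning, using explicit tidsets: if extending an itemset P with an item i leaves the set of supporting transactions unchanged, the child node is equivalent to its parent and its subtree does not need a separate search. The vertical (tidset) representation below is an assumption for illustration, not the paper's data structure.

```python
def tidset(itemset, transactions):
    """IDs of the transactions containing every item of the itemset."""
    return {tid for tid, t in enumerate(transactions) if itemset <= t}

def child_equivalent_to_parent(parent, item, transactions):
    """True when parent and parent ∪ {item} occur in exactly the same transactions."""
    return tidset(parent, transactions) == tidset(parent | {item}, transactions)

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "c"}, {"c"}]
print(child_equivalent_to_parent({"a"}, "b", data))   # True: 'a' never occurs without 'b'
print(child_equivalent_to_parent({"a"}, "c", data))   # False
```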

10.
To address the problem of choosing the minimum support in multiple-minimum-support association rule mining, an algorithm based on a piecewise function is proposed. When mining frequent itemsets with multiple minimum supports, an itemset's minimum support is determined by the minimum and maximum of its items' minimum item supports together with a given reference value. This avoids both the high time complexity and the invalid rules that arise when the minimum value alone is used as the minimum support, and the over-pruning and missed rules that arise when the maximum value is used. Experimental results demonstrate the effectiveness of the algorithm.
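The abstract states only that the itemset's minimum support is a piecewise function of the minimum, the maximum, and a given reference value of its items' supports; the concrete branch conditions below are an invented example of such a function, not the paper's formula.

```python
def piecewise_min_support(item_supports, reference):
    """One plausible piecewise choice between the min and max item support (assumed)."""
    lo, hi = min(item_supports), max(item_supports)
    if reference <= lo:        # reference below every item: fall back to the minimum
        return lo
    if reference >= hi:        # reference above every item: cap at the maximum
        return hi
    return reference           # otherwise use the reference value directly

# minimum item supports of the items in a candidate itemset
print(piecewise_min_support([0.02, 0.05, 0.10], reference=0.04))  # 0.04
print(piecewise_min_support([0.02, 0.05, 0.10], reference=0.20))  # 0.10
```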

11.
TFP: an efficient algorithm for mining top-k frequent closed itemsets (cited: 5)
Frequent itemset mining has been studied extensively in the literature. Most previous studies require the specification of a min_support threshold and aim at mining a complete set of frequent itemsets satisfying min_support. However, in practice, it is difficult for users to provide an appropriate min_support threshold. In addition, a complete set of frequent itemsets is much less compact than a set of frequent closed itemsets. In this paper, we propose an alternative mining task: mining top-k frequent closed itemsets of length no less than min_l, where k is the desired number of frequent closed itemsets to be mined, and min_l is the minimal length of each itemset. An efficient algorithm, called TFP, is developed for mining such itemsets without min_support. Starting at min_support = 0 and by making use of the length constraint and the properties of top-k frequent closed itemsets, min_support can be raised effectively and the FP-Tree can be pruned dynamically both during and after the construction of the tree using our two proposed methods: the closed node count and descendant_sum. Moreover, mining is further sped up by employing a combined top-down and bottom-up FP-Tree traversal strategy, a set of search space pruning methods, a fast 2-level hash-indexed result tree, and a novel closed itemset verification scheme. Our extensive performance study shows that TFP has high performance and linear scalability in terms of the database size.
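A sketch of the min_support raising idea only: keep a min-heap of the supports of the qualifying closed itemsets found so far, and once k of them are known, the smallest support in the heap becomes the new min_support, below which candidates can be pruned. The FP-Tree pruning methods themselves (closed node count, descendant_sum) are not reproduced; the class and variable names are illustrative.

```python
import heapq

class SupportRaiser:
    """Tracks the k largest supports seen so far; min_support starts at 0 as in TFP."""
    def __init__(self, k):
        self.k = k
        self.heap = []            # min-heap of the k largest supports

    def observe(self, support):
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, support)
        elif support > self.heap[0]:
            heapq.heapreplace(self.heap, support)

    @property
    def min_support(self):
        return self.heap[0] if len(self.heap) == self.k else 0

raiser = SupportRaiser(k=3)
for s in [12, 40, 7, 33, 25]:     # supports of closed itemsets as they are found
    raiser.observe(s)
    print("current min_support:", raiser.min_support)
```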

12.
Existing one-phase Top-K high-utility itemset mining algorithms raise the threshold slowly during mining and generate large numbers of candidate itemsets per iteration, causing excessive memory usage. To address these problems, a Top-K high-utility mining algorithm based on a reusable list (R-list), RHUM, is proposed. A new data structure, the R-list, stores itemset information for fast access, so no second database scan is needed for itemset mining. The algorithm reuses memory to hold candidate information and preprocesses the data with an improved RSD threshold-raising strategy; during the recursive search it applies stricter pruning parameters and computes the utilities of multiple itemsets simultaneously to shrink the search space. Experimental results on different types of datasets show that RHUM outperforms other one-phase algorithms in memory efficiency and remains stable as K varies.

13.
The set of frequent closed itemsets uniquely determines the complete set of frequent itemsets and is much smaller, yet work on updating frequent closed itemsets remains scarce. A fast updating algorithm for frequent closed itemsets, FUAFCI (Fast Updating Algorithm of Frequent Closed Itemsets), is therefore proposed; it mainly addresses how frequent closed itemsets are updated when the minimum support changes. In the worst case, FUAFCI scans each local database only once, and by exploiting CLOSET+ optimization strategies such as itemset merging, sub-itemset pruning, and subset checking, together with previously mined results, it updates frequent closed itemsets efficiently. Experimental results show that FUAFCI is effective and feasible.

14.
李广璞  黄妙华 《计算机科学》2018,45(Z11):1-11, 26
As one of the main research areas of data mining, association analysis is mainly used to discover strong association features hidden in large datasets. Most association rule mining tasks can be divided into the generation of frequent patterns (frequent itemsets, frequent sequences, frequent subgraphs) and the generation of rules. The former discovers the itemsets, sequences, and subgraphs in the dataset that satisfy a minimum support threshold; the latter extracts high-confidence rules from the frequent patterns found in the previous step. Frequent itemset mining is a key problem in many data mining tasks and the core of association rule mining algorithms. For over a decade, researchers have worked to improve the efficiency of frequent itemset generation from different angles, and a large number of efficient and scalable algorithms have been proposed. This paper analyzes frequent itemset mining in depth, introduces and reviews representative algorithms for complete, closed, and maximal frequent itemsets, and finally briefly discusses research directions for frequent itemset mining algorithms.

15.
Mining closed frequent itemsets from data streams has attracted interest recently. However, it is not easy for users to determine a proper minimum support threshold, so it is more reasonable to ask users to set a bound on the result size. An interactive single-pass algorithm, called TKC-DS (top-K frequent closed itemsets of data streams), is therefore proposed for efficiently mining top-K closed itemsets from data streams. A novel data structure, called CIL (closed itemset lattice), is developed to maintain the essential information about the closed itemsets generated so far. Experimental results show that the proposed TKC-DS algorithm is an efficient method for mining top-K frequent itemsets from data streams.

16.
Mining the most frequent itemsets is a key and difficult problem in text association rule mining and determines the performance of text association rule mining algorithms. To address current shortcomings in this area, set theory is introduced into the inverted index to improve it; on this basis, several propositions and corollaries are derived, and combined with a dynamic adjustment strategy for the minimum support threshold, a most-frequent-itemset mining algorithm based on the improved inverted index and set theory is proposed and then validated. Experimental results show that the proposed algorithm outperforms two commonly used most-frequent-itemset mining algorithms, NApriori and IntvMatrix, in both rule effectiveness and runtime.
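A sketch of the inverted-index idea: map each term to the set of document (transaction) IDs containing it, so the support of an itemset is the size of the intersection of its items' posting sets. The propositions, corollaries, and the dynamic threshold adjustment of the paper are not reproduced; the documents below are illustrative.

```python
from collections import defaultdict
from functools import reduce

def build_inverted_index(documents):
    """Map each term to the set of document IDs that contain it."""
    index = defaultdict(set)
    for doc_id, terms in enumerate(documents):
        for term in terms:
            index[term].add(doc_id)
    return index

def support(itemset, index):
    """Support = size of the intersection of the items' posting sets."""
    postings = [index[i] for i in itemset]
    return len(reduce(set.intersection, postings)) if postings else 0

docs = [{"data", "mining"}, {"data", "rule"}, {"data", "mining", "rule"}]
idx = build_inverted_index(docs)
print(support({"data", "mining"}, idx))   # 2
```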

17.
Frequent pattern mining in data streams is an important research topic in the data mining community. In previous studies, a minimum support threshold was assumed to be available for mining frequent patterns. However, setting such a threshold is typically difficult. Hence, it is more reasonable to ask users to set a bound on the result size. The present study considers mining top-k frequent patterns from data streams using a sliding window technique. A single-pass algorithm, called MSWTP, is developed for the generation of top-k frequent patterns without a threshold. In the method, the content of the transactions in the sliding window is incrementally maintained in a summary data structure, named SWTP-tree, by scanning the stream only once. To make the mining operation efficient, insignificant patterns are distinguished from others by applying the Chernoff bound. Two kinds of obsolete pattern and one kind of insignificant pattern are periodically pruned from the pattern tree. Whenever necessary, the k most frequent patterns can be selected from SWTP-tree in order of their descending frequency. The performance of the proposed technique is evaluated via simulation experiments. The results show that the proposed method is both efficient and scalable, and that it outperforms comparable algorithms.
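A sketch of how a Chernoff/Hoeffding-style bound is typically used to separate insignificant patterns in stream mining: after n transactions, a pattern whose observed support ratio falls more than epsilon below the running threshold can be discarded with confidence 1-delta. The exact bound MSWTP uses is not given in the abstract, so the Hoeffding form below is an assumption.

```python
import math

def hoeffding_epsilon(n, delta):
    """Error bound after n observations with confidence 1 - delta (assumed form)."""
    return math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def is_insignificant(observed_ratio, threshold_ratio, n, delta=0.01):
    """Prune a pattern whose support ratio is below threshold - epsilon."""
    return observed_ratio < threshold_ratio - hoeffding_epsilon(n, delta)

print(hoeffding_epsilon(n=10_000, delta=0.01))            # about 0.015
print(is_insignificant(0.002, 0.02, n=10_000))            # True: safely prunable
```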

18.
Top-k frequent pattern mining finds interesting patterns from the highest support down to the k-th support. The approach can be applied effectively in numerous fields such as marketing, finance, and bio-data analysis, since it does not require a minimum support threshold. Top-k mining methods use the support of the k-th pattern rather than a user-specified minimum support, so they mine with very low supports until the k-th pattern is detected. When a low support is used in the mining process, single-paths with numerous items are generated, and the top-k mining algorithm extracts valid patterns by combining the items of each single-path. The larger the number of combinations, the more time and memory are consumed. In this paper, to mine top-k frequent patterns more efficiently, we convert patterns obtained from single-paths into composite patterns during the mining process and recover the original patterns when the top-k frequent patterns are extracted. To this end, we define a new concept, the composite pattern, and propose novel techniques for reducing pattern combinations in a single-path. Two algorithms are introduced: CRM (Combination Reducing Method), which applies our reduction scheme, and CRMN (Combination Reducing Method for N-itemsets), which also takes pattern length into account. A performance evaluation shows that CRM and CRMN reduce pattern combinations in single-paths more efficiently than state-of-the-art algorithms. The experimental results also show that our approaches perform outstandingly in terms of runtime, memory, and scalability.
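A small sketch of the combinatorial point the abstract makes: a single-path of n items supports 2^n - 1 sub-patterns, so storing the path once as a composite pattern and expanding it only when patterns are finally needed avoids materializing all combinations up front. The (path items, path support) representation below is an illustrative assumption, not CRM's exact encoding.

```python
from itertools import combinations

def combinations_in_single_path(n):
    """Number of itemsets a single-path of n items would otherwise generate."""
    return 2 ** n - 1

def expand_composite(path_items, path_support, max_length=None):
    """Recover the original patterns from a composite (path, support) pair on demand."""
    top = max_length or len(path_items)
    for size in range(1, top + 1):
        for combo in combinations(path_items, size):
            yield combo, path_support

print(combinations_in_single_path(20))                    # 1,048,575 patterns
composite = (("a", "b", "c"), 7)                           # path of 3 items, support 7
print(list(expand_composite(*composite, max_length=2)))
```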

19.
Frequent itemset mining over data streams has become a hot topic in data mining and knowledge discovery in recent years and has been applied in different areas. However, setting a minimum support threshold requires domain knowledge, and an unreasonable threshold places a heavy burden on users. It is therefore attractive for users to find top-K frequent itemsets over data streams. In this paper, a dynamic incremental approximate algorithm, TOPSIL-Miner, is presented to mine top-K significant itemsets in landmark windows. A new data structure, TOPSIL-Tree, is designed to store the potentially significant itemsets, and additional structures (a maximum support list, an ordered item list, TOPSET, and a minimum support list) are devised to maintain information about the mining results. Moreover, three optimization strategies are exploited to reduce the time and space cost of the algorithm: (1) pruning trivial nodes in the current data stream, (2) raising the mining support threshold adaptively and heuristically during mining, and (3) raising the pruning threshold dynamically. The accuracy of the algorithm is also analyzed. Extensive experiments evaluate the effectiveness, efficiency, and precision of the algorithm.

20.
When data are added to a database and the minimum support is adjusted at the same time, the association rules in the database change. To quickly obtain frequent itemsets from a database whose data volume and minimum support both change, and to discover the updated association rules, the FIM and AIUA algorithms are analyzed and an incremental association rule mining algorithm, My_FIM_AIUA, combining the advantages of both is proposed. The algorithm reduces the number of database scans and the number of candidate itemsets. Experiments show that My_FIM_AIUA can quickly find frequent itemsets when the data volume and minimum support change simultaneously, speeding up incremental association rule mining.
