Similar Documents
20 similar documents found.
1.
Conventional data mining methods for finding frequent itemsets require considerable computing time to produce their results from a large data set. For this reason, it is almost impossible to apply them to an analysis task over an online data stream, where new transactions are continuously generated at a rapid rate. An algorithm for finding frequent itemsets over an online data stream should support a flexible trade-off between processing time and mining accuracy. Furthermore, the most up-to-date set of frequent itemsets should be available quickly at any moment. To satisfy these requirements, this paper proposes a data mining method for finding frequent itemsets over an online data stream. The proposed method examines each transaction one by one without any candidate generation process. The count of an itemset that appears in each transaction is monitored by a lexicographic tree residing in main memory. The current set of monitored itemsets is minimized by two major operations: delayed insertion and pruning. The former delays the insertion of a new itemset appearing in recent transactions until the itemset becomes significant enough to be monitored. The latter prunes a monitored itemset when it turns out to be insignificant. The number of monitored itemsets can be flexibly controlled by the thresholds of these two operations. As the number of monitored itemsets decreases, frequent itemsets in the online data stream are traced more rapidly but less accurately. The performance of the proposed method is analyzed through a series of experiments in order to identify its various characteristics.
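The delayed-insertion and pruning idea can be illustrated without the paper's lexicographic tree. The following is a minimal sketch that monitors itemsets of size at most two in a flat dictionary instead; the function name `mine_stream` and the thresholds `insert_sig` and `prune_sig` are hypothetical stand-ins for the two operations' thresholds described above, not the paper's exact algorithm.

```python
from itertools import combinations

def mine_stream(transactions, insert_sig=0.02, prune_sig=0.01, min_sup=0.05):
    """Toy sketch of delayed insertion and pruning over a stream.

    Only itemsets of size <= 2 are monitored, and a flat dict replaces the
    paper's in-memory lexicographic tree.
    """
    counts = {}   # monitored itemset (frozenset) -> count observed since insertion
    n = 0         # transactions processed so far

    for tx in transactions:
        n += 1
        items = frozenset(tx)

        # Increment every currently monitored itemset contained in the transaction.
        for itemset in list(counts):
            if itemset <= items:
                counts[itemset] += 1

        # Singletons are monitored from their first appearance.
        for i in items:
            counts.setdefault(frozenset([i]), 1)

        # Delayed insertion: start monitoring a pair only once both of its
        # items already look significant on their own.
        for a, b in combinations(sorted(items), 2):
            pair = frozenset([a, b])
            if pair not in counts and \
               counts[frozenset([a])] >= insert_sig * n and \
               counts[frozenset([b])] >= insert_sig * n:
                counts[pair] = 1   # counting starts late, so earlier occurrences are lost

        # Pruning: drop monitored itemsets that have become insignificant.
        for itemset in [s for s, c in counts.items() if c < prune_sig * n]:
            del counts[itemset]

    return {s: c / n for s, c in counts.items() if c >= min_sup * n}


# Example: frequent itemsets in a tiny stream of transactions
stream = [["a", "b"], ["a", "b", "c"], ["a", "c"], ["a", "b"], ["b", "c"]]
print(mine_stream(stream, insert_sig=0.0, prune_sig=0.0, min_sup=0.4))
```

Raising `insert_sig` and `prune_sig` shrinks the monitored set, which is exactly the speed-versus-accuracy trade-off the abstract describes.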

2.
DSM-FI: an efficient algorithm for mining frequent itemsets in data streams
Online mining of data streams is an important data mining problem with broad applications. It is also a difficult problem, since streaming data are continuous, unbounded, and can usually be read only once. In this paper, we propose a new single-pass algorithm, called DSM-FI (data stream mining for frequent itemsets), for online incremental mining of frequent itemsets over a continuous stream of online transactions. In the proposed algorithm, each transaction of the stream is projected into a set of sub-transactions, and these sub-transactions are inserted into a new in-memory summary data structure, called the SFI-forest (summary frequent itemset forest), which maintains the set of all frequent itemsets embedded in the transaction data stream generated so far. Finally, the set of all frequent itemsets is determined from the current SFI-forest. Theoretical analysis and experimental studies show that DSM-FI uses stable memory, makes only one pass over an online transactional data stream, and outperforms existing one-pass algorithms for mining frequent itemsets.
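The projection step described above, each transaction being broken into sub-transactions before insertion into the summary structure, can be sketched as follows. The sketch assumes the sub-transactions are the suffixes of the transaction under a canonical item order, which is one plausible reading of the abstract rather than a quotation of DSM-FI's exact definition.

```python
def project(transaction):
    """Project a transaction into its suffix sub-transactions.

    Items are put into a canonical (here: lexicographic) order first; each
    sub-transaction is headed by one item and contains the remaining suffix.
    """
    items = sorted(set(transaction))
    return [items[i:] for i in range(len(items))]


# Each sub-transaction would then be inserted into the summary structure
# (the SFI-forest in DSM-FI), indexed by its head item.
print(project(["b", "a", "c"]))   # [['a', 'b', 'c'], ['b', 'c'], ['c']]
```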

3.
Mining top-k frequent patterns without minimum support threshold
Frequent patterns play an important role in mining association rules, sequences, episodes, Web logs, and many other interesting relationships among data. Frequent pattern mining methods often produce a huge number of frequent itemsets, which is not feasible for effective use. The number of highly correlated patterns is usually very small and may even be one. Most existing frequent pattern mining techniques require the setting of many input parameters and may involve multiple passes over the database. Minimum support is the most widely used parameter in frequent pattern mining for discovering statistically significant patterns. Specifying an appropriate minimum support is a challenging task for a data analyst, as the choice of the minimum support value is somewhat arbitrary: generally, an algorithm must be executed repeatedly, heuristically tuning the minimum support over a wide range until the desired result is obtained, which is a very time-consuming process. Setting an inappropriate minimum support may also cause an algorithm to fail to find the true patterns. We present a novel method to efficiently retrieve the top few maximal frequent patterns in order of significance without using the minimum support parameter. Instead, only a more human-understandable parameter has to be specified, namely the desired number of itemsets k. Our technique requires only a single pass over the database and the generation of length-two itemsets. The association ratio graph is proposed as a compact structure containing concise information, which is created in time quadratic in the size of the database. Algorithms are described for using this graph structure to discover the top-most and top-k maximal frequent itemsets without a minimum support threshold. To achieve this effectively, the method constructs an all-path source-to-destination tree to discover all maximal cycles in the graph. The results can be ranked in decreasing order of significance. Results are presented demonstrating the performance advantages gained from this approach.
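The single pass over the database and the generation of length-two itemsets can be sketched as below. The abstract does not define the association ratio itself, so the edge weight used here, count(x, y) / max(count(x), count(y)), is only a placeholder for illustration.

```python
from collections import Counter
from itertools import combinations

def build_association_graph(transactions):
    """Single pass: count items and item pairs, then weight edges by a pairwise ratio.

    The ratio used here is a placeholder for the paper's association ratio,
    which the abstract does not define.
    """
    item_counts, pair_counts = Counter(), Counter()
    for tx in transactions:
        items = sorted(set(tx))
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))

    # Edge list of the association graph: (x, y, weight)
    return [(x, y, c / max(item_counts[x], item_counts[y]))
            for (x, y), c in pair_counts.items()]


edges = build_association_graph([["a", "b"], ["a", "b", "c"], ["b", "c"]])
# Top-k mining would then rank structures found in this graph by such weights.
for x, y, w in sorted(edges, key=lambda e: -e[2]):
    print(x, y, round(w, 2))
```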

4.
Association rule mining over data streams of the rural social security system
Given the implicit information contained in the data streams of China's rural social security system, mining association rules from these streams has become a research hotspot. This paper therefore proposes the steps for generating association rules from rural social security data streams and the corresponding implementation methods, including a replacement method for data stream sampling and the MFI-TCQ method for frequent itemset generation; the construction of the association rule tree is described and illustrated with an example.

5.
In the past, many algorithms adopted fuzzy-set theory to discover fuzzy association rules from quantitative databases. The fuzzy frequent pattern (FFP)-tree and the compressed fuzzy frequent pattern (CFFP)-tree algorithms were proposed to mine incomplete fuzzy frequent itemsets from tree-based structures. The multiple fuzzy frequent pattern (MFFP)-tree algorithm was later proposed to keep more linguistic terms for mining fuzzy frequent itemsets. Since the MFFP-tree algorithm inherits the properties of the FFP-tree algorithm, numerous tree nodes are required to build the MFFP-tree structure for mining the desired multiple fuzzy frequent itemsets. In this paper, the compressed multiple fuzzy frequent pattern (CMFFP)-tree algorithm is designed to keep not only the linguistic term with the maximum membership value but also the other frequent linguistic terms, so that the complete set of fuzzy frequent itemsets can be mined. In the designed CMFFP-tree algorithm, the multiple frequent linguistic terms are sorted in descending order of their occurrence frequencies to build the CMFFP-tree structure. The construction process is the same as that of the CFFP-tree algorithm, except that more information is kept for the later mining process to discover the complete fuzzy frequent itemsets. Each node in the CMFFP-tree uses an additional array to keep the membership values of its prefix path, obtained by an intersection operation. A CMFFP-mine algorithm is also designed to efficiently mine the multiple fuzzy frequent itemsets from the developed CMFFP-tree structure. Experiments are conducted to show the performance of the proposed CMFFP-tree algorithm in terms of execution time and number of tree nodes, compared to the MFFP-tree and CFFP-tree algorithms.
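The difference between keeping only the term with maximum membership and keeping all frequent linguistic terms can be shown with a small fuzzification step. The triangular membership functions and the Low/Middle/High terms below are illustrative assumptions, not those used in the paper.

```python
def fuzzify(quantity, max_qty=10):
    """Map a purchased quantity to membership degrees of three linguistic terms.

    Triangular membership functions over [0, max_qty] are an illustrative
    choice; real fuzzy mining systems take them from a user-given membership
    function definition.
    """
    x = min(max(quantity, 0), max_qty) / max_qty          # normalise to [0, 1]
    low = max(0.0, 1 - 2 * x)                             # peaks at x = 0
    middle = max(0.0, 1 - abs(2 * x - 1))                 # peaks at x = 0.5
    high = max(0.0, 2 * x - 1)                            # peaks at x = 1
    return {"Low": low, "Middle": middle, "High": high}


terms = fuzzify(7)
# An MFFP/CMFFP-style algorithm keeps every term with a non-zero membership,
# not just the single maximum term kept by FFP/CFFP-style mining.
kept_all = {t: round(v, 2) for t, v in terms.items() if v > 0}
kept_max = max(terms, key=terms.get)
print(kept_all, kept_max)
```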

6.
The utility of an itemset is considered as the value of that itemset, and utility mining aims at identifying itemsets with high utilities. Temporal high utility itemsets are itemsets whose support is larger than a pre-specified threshold in the current time window of the data stream. Discovering temporal high utility itemsets is an important step in mining interesting patterns, such as association rules, from data streams. In this paper, we propose a novel method, namely THUI (Temporal High Utility Itemsets)-Mine, for mining temporal high utility itemsets from data streams efficiently and effectively. To the best of our knowledge, this is the first work on mining temporal high utility itemsets from data streams. The novel contribution of THUI-Mine is that it can effectively identify temporal high utility itemsets by generating fewer candidate itemsets, so that the execution time for mining all high utility itemsets in data streams can be reduced substantially. In this way, discovering all temporal high utility itemsets over all time windows of a data stream can be achieved with less memory space and execution time, which meets the critical time and space requirements of data stream mining. Experimental evaluation shows that THUI-Mine significantly outperforms existing methods such as the Two-Phase algorithm under various experimental conditions.
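Utility in this line of work is typically the product of an item's per-unit (external) utility and its purchased quantity (internal utility), summed over the items of the itemset. The sketch below shows that computation over one time window; the profit table, the candidate list, and the threshold are assumed inputs for illustration.

```python
def itemset_utility(itemset, transaction, profit):
    """Utility of an itemset in one transaction: sum of quantity * unit profit,
    counted only if every item of the itemset appears in the transaction."""
    if not all(i in transaction for i in itemset):
        return 0
    return sum(transaction[i] * profit[i] for i in itemset)


def high_utility_itemsets(window, candidates, profit, min_util):
    """Total utility of each candidate itemset over the transactions of one
    time window; keep those reaching the (assumed) utility threshold."""
    totals = {c: sum(itemset_utility(c, tx, profit) for tx in window) for c in candidates}
    return {c: u for c, u in totals.items() if u >= min_util}


profit = {"a": 3, "b": 1, "c": 5}                                 # external utility (assumed)
window = [{"a": 2, "b": 4}, {"a": 1, "c": 1}, {"b": 2, "c": 3}]   # item -> quantity
print(high_utility_itemsets(window, [("a",), ("c",), ("a", "c")], profit, min_util=8))
```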

7.
In some areas of image association rule mining, association rules with high confidence are required while the support requirement is relatively low. This paper proposes a method for mining high-confidence image association rules while still taking support into account. To facilitate the effective extraction of image association rules, a raster data format named bSQ (bit Sequential) is used. A level-wise search is then adopted to build a rule tree, avoiding the large numbers of frequent itemsets that traditional methods produce at low support thresholds. Finally, pixel-level association rules are extracted from multiple images using techniques such as multi-image rule-extraction priorities and an image data cube. Experiments show that the method can effectively extract high-confidence association rules from image data and is feasible.

8.
To address problems in current temporal association rule research, such as low mining efficiency, poor rule interpretability, and the neglect of temporal relationships among itemsets, this paper proposes a new temporal association rule mining algorithm based on a frequent itemset tree. The time series data are first discretized with dimensionality reduction, and frequent itemsets are generated with vector operations, which improves the efficiency of frequent itemset mining. Considering the temporal relationships among itemsets and the advantages of tree structures, a new frequent itemset tree structure is proposed for mining temporal association rules; frequent itemset mining and tree construction proceed simultaneously, no candidate itemsets are generated, and rule mining efficiency is improved. Experiments show that, compared with other algorithms, the proposed algorithm performs better in terms of mining efficiency and rule interpretability and has good application prospects.
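One standard way to generate frequent itemsets with vector operations is a vertical bitmap representation: each item gets a bit vector over the transactions containing it, and the support of an itemset is the number of set bits in the AND of its items' vectors. The sketch below shows that technique as an illustration; whether the paper uses exactly this representation is an assumption.

```python
from functools import reduce

def vertical_bitmaps(transactions):
    """One integer bitmap per item; bit i is set if transaction i contains the item."""
    bitmaps = {}
    for i, tx in enumerate(transactions):
        for item in tx:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << i)
    return bitmaps


def support(itemset, bitmaps):
    """Support of an itemset = number of set bits in the AND of its items' bitmaps."""
    combined = reduce(lambda a, b: a & b, (bitmaps.get(i, 0) for i in itemset))
    return bin(combined).count("1")


transactions = [["a", "b"], ["a", "b", "c"], ["a", "c"], ["b", "c"]]
bm = vertical_bitmaps(transactions)
print(support(["a", "b"], bm), support(["a", "c"], bm))   # 2 2
```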

9.
A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. For this reason, most algorithms for data streams sacrifice the correctness of their results for fast processing time. The processing time is greatly influenced by the amount of information that has to be maintained. This issue becomes more serious when finding frequent itemsets or counting frequencies over an online transactional data stream, since there can be a large number of itemsets to monitor. We have proposed a method called estDec for finding frequent itemsets over an online data stream. In order to reduce the number of monitored itemsets in this method, monitoring of an itemset's count is delayed until its support is large enough for the itemset to become frequent in the near future. For this purpose, the count of the itemset has to be estimated. Consequently, how to estimate the count of an itemset is a critical issue in minimizing memory usage as well as processing time. In this paper, the effects of various count estimation methods for finding frequent itemsets are analyzed in terms of mining accuracy, memory usage, and processing time.
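One natural way to estimate the count an itemset accumulated before it began to be monitored is to bound it by the counts of its subsets, since no itemset can occur more often than any of its subsets. The sketch below shows that upper-bound estimate; it is one of the estimation strategies such a delayed-insertion scheme can use, not necessarily the exact one evaluated in the paper.

```python
from itertools import combinations

def estimate_count(itemset, counts):
    """Upper-bound estimate for an itemset that is about to be monitored:
    the minimum count among its proper subsets that are already monitored
    (an itemset can never be more frequent than any of its subsets)."""
    items = tuple(sorted(itemset))
    subset_counts = [counts[frozenset(s)]
                     for r in range(1, len(items))
                     for s in combinations(items, r)
                     if frozenset(s) in counts]
    return min(subset_counts) if subset_counts else 0


counts = {frozenset("a"): 10, frozenset("b"): 7, frozenset(("a", "b")): 6}
# When {a, b, c} first looks significant, its missed count is estimated
# before exact counting starts from the current transaction onward.
print(estimate_count({"a", "b", "c"}, counts))   # 6
```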

10.
Mining frequent itemsets of network flows in a sliding time-decay window
Mining frequent itemsets from network flow data is an important basis for network traffic analysis. This paper proposes a novel frequent itemset mining algorithm, STFWFI, based on a lexicographically ordered prefix tree (LOP-Tree). The algorithm adopts a sliding time-decay window model that better matches the characteristics of network flows, effectively reducing the time and space complexity of frequent itemset mining. On this tree structure, a new node-weight calculation method based on statistical distributions, SDNW, is proposed to replace the traditional statistical calculation, improving the accuracy of node estimation for network flows. Experimental results show that the algorithm achieves good results in mining frequent itemsets from network flows.
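A time-decay window can be emulated by multiplying every maintained count by a decay factor at each time step, so that older transactions contribute less. The sketch below shows per-item decayed counts only, and the decay factor is an arbitrary example value rather than a parameter from the paper.

```python
def decayed_counts(stream, decay=0.9):
    """Maintain time-decayed item counts: before each new transaction arrives,
    all existing counts are multiplied by the decay factor, so a transaction
    that arrived k steps ago contributes decay**k."""
    counts = {}
    for tx in stream:
        for item in counts:
            counts[item] *= decay
        for item in set(tx):
            counts[item] = counts.get(item, 0.0) + 1.0
    return counts


stream = [["a", "b"], ["a"], ["b", "c"], ["a"]]
print({k: round(v, 3) for k, v in decayed_counts(stream).items()})
# "a" arrived at steps 0, 1, 3 -> 0.9**3 + 0.9**2 + 1 = 2.539
```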

11.
In the current environment of rapidly growing big data, frequent itemset mining over massive data faces the practical problem of incremental updates. Building on the frequent items ultrametric tree (FIUT) algorithm and the MapReduce parallel programming model, this paper proposes a parallel algorithm for incrementally updating frequent itemsets over big data. The algorithm determines frequent itemsets by checking the support of the leaf nodes of the frequent ultrametric tree, and adopts a quasi-frequent itemset strategy to optimize the parallel computation, thereby improving mining efficiency. Experimental results show that the algorithm can scan and update data quickly, scales well, and is suitable for association rule mining in dynamically growing big data environments.

12.
Mining frequent itemsets is a basic task in data stream mining. Many approximate algorithms can mine frequent itemsets over data streams, but they cannot effectively control memory consumption and mining time. To improve the efficiency of data stream mining, this paper reduces the number of result itemsets by mining frequent closed itemsets. Drawing on the Relim and Manku algorithms, it introduces a group of transaction linked lists as the synopsis data structure and proposes a new algorithm for mining frequent closed itemsets over data streams. Experiments demonstrate the effectiveness of the algorithm.
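The reduction obtained by reporting only closed itemsets, that is, itemsets with no proper superset of equal support, can be illustrated with a small post-filter. The quadratic filter below only demonstrates the definition; stream algorithms maintain closedness incrementally instead.

```python
def closed_only(frequent):
    """Keep an itemset only if no proper superset has the same support.

    `frequent` maps frozenset -> support; a quadratic check is enough to
    illustrate the definition.
    """
    return {s: c for s, c in frequent.items()
            if not any(s < t and c == frequent[t] for t in frequent)}


frequent = {
    frozenset("a"): 4, frozenset("b"): 4,
    frozenset(("a", "b")): 4, frozenset(("a", "c")): 2,
    frozenset("c"): 2,
}
# {a} and {b} are absorbed by {a, b}; {c} by {a, c}.
print(closed_only(frequent))
```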

13.
Research on mining global negative association rules in multiple databases
Mining global negative association rules is an important research topic in multi-database association mining, with a wide range of applications and practical value. Existing methods usually merge the negative association rules of each sub-database, but problems such as high data density, incomplete rule sets, and long computation time limit their efficiency. This paper presents a new algorithm for mining global negative association rules. Its steps are: (1) scan each sub-database and build a multi-database frequent pattern tree; (2) prune the tree according to the global consistency principle for frequent itemsets; (3) generate the global minimal infrequent itemsets; (4) generate the global infrequent itemsets according to the upward closure principle of maximal frequent itemsets; (5) extract global negative association rules based on rule correlation. Extensive comparative experiments show that the algorithm can discover global negative association rules quickly.
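Step (5), extracting negative rules based on correlation, typically uses a measure such as lift: if supp(X∪Y)/(supp(X)·supp(Y)) < 1, X and Y are negatively correlated and rules such as X ⇒ ¬Y become candidates. The sketch below uses this standard criterion as an illustration; it may differ from the exact correlation measure used in the paper.

```python
def negative_rules(supports, n, min_conf=0.6):
    """Generate X => not-Y candidates for negatively correlated item pairs.

    `supports` maps frozenset -> absolute support count over n transactions.
    A pair (X, Y) is negatively correlated when lift < 1; the confidence of
    X => not-Y is 1 - supp(X u Y) / supp(X).
    """
    rules = []
    singles = [s for s in supports if len(s) == 1]
    for x in singles:
        for y in singles:
            if x == y:
                continue
            xy = supports.get(x | y, 0)
            lift = (xy / n) / ((supports[x] / n) * (supports[y] / n))
            conf = 1 - xy / supports[x]
            if lift < 1 and conf >= min_conf:
                rules.append((set(x), set(y), round(conf, 2)))
    return rules


supports = {frozenset("a"): 60, frozenset("b"): 50, frozenset(("a", "b")): 10}
print(negative_rules(supports, n=100))
# includes a => not-b (confidence 0.83) and b => not-a (confidence 0.8)
```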

14.
The frequent pattern tree (FP-tree) is an efficient data structure for association-rule mining without generation of candidate itemsets. It is used to compress a database into a tree structure which stores only large items. It, however, needs to process all transactions in a batch way. In the past, we proposed a Fast Updated FP-tree (FUFP-tree) structure to efficiently handle new transactions and to make the tree update process easier. In this paper, we propose the structure of prelarge trees to incrementally mine association rules based on the concept of pre-large itemsets. Due to the properties of the pre-large concept, the proposed approach does not need to rescan the original database until a certain number of new transactions have been inserted. The proposed approach can thus achieve a good execution time for tree construction, especially when a small number of transactions are inserted each time. Experimental results also show that the proposed approach performs well in incrementally handling new transactions.
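The pre-large concept rests on two support thresholds, an upper one (the usual minimum support) and a lower one; itemsets whose support falls between them are kept as pre-large, so that newly inserted transactions cannot silently turn a small itemset into a large one. The sketch below shows the classification and the bound on how many inserted transactions can be absorbed before a rescan; the bound is derived in the comment as an illustration of the concept rather than quoted from the paper.

```python
import math

def classify(count, d, upper, lower):
    """Classify one itemset by its support count over d original transactions."""
    sup = count / d
    if sup >= upper:
        return "large"
    if sup >= lower:
        return "pre-large"
    return "small"


def safety_bound(d, upper, lower):
    """Number of inserted transactions that can be handled without a rescan.

    A small itemset has original count < lower * d and can gain at most t new
    occurrences from t inserted transactions, so it can only reach
    (lower*d + t) / (d + t) >= upper when t >= (upper - lower) * d / (1 - upper);
    staying below that bound means no small itemset can become large unnoticed.
    """
    return math.floor((upper - lower) * d / (1 - upper))


d, upper, lower = 10000, 0.04, 0.02
print(classify(450, d, upper, lower), classify(250, d, upper, lower),
      safety_bound(d, upper, lower))   # -> large pre-large 208
```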

15.
16.
Research on frequent itemset mining algorithms with multiple minimum supports
张慧哲  王坚 《计算机应用》2007,27(9):2290-2293
In some cases, association rule mining requires different minimum supports to be set according to the characteristics of individual items. To address this problem, frequent itemset mining algorithms under multiple minimum supports are studied. Based on FP-growth, a new multiple minimum support tree (MS-tree) is proposed, and an MS-growth algorithm is designed to mine frequent patterns from the MS-tree. The algorithm scans the database only once, overcoming the drawback of the MSapriori algorithm, which must rescan the database when generating association rules. Experiments show that the performance of the new algorithm is comparable to that of FP-growth, while it can also handle multiple minimum supports.
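With per-item minimum supports, the usual criterion (as in MSapriori-style mining) is that an itemset is frequent when its support reaches the smallest of its items' minimum item supports (MIS). Whether MS-growth uses exactly this criterion is an assumption here; the sketch only illustrates the rule.

```python
def is_frequent(itemset, support, mis):
    """Itemset is frequent if its support reaches the lowest minimum item
    support (MIS) among its members, the usual multiple-minimum-support rule."""
    return support >= min(mis[i] for i in itemset)


mis = {"bread": 0.05, "milk": 0.05, "caviar": 0.005}   # rare items get lower thresholds
print(is_frequent({"bread", "milk"}, 0.04, mis))       # False: needs 0.05
print(is_frequent({"bread", "caviar"}, 0.01, mis))     # True: needs only 0.005
```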

17.
Many algorithms have been proposed for updating frequent itemsets, but they usually need to scan the original database at least once and may lose some important rules. This paper proposes a new fast update algorithm, CUFIA (Classifying Update Frequent Itemsets Algorithm). By partitioning the newly added transactions and scanning them quickly one by one, the algorithm obtains the frequent itemsets and classifies them into three categories, so that frequent itemsets can be mined effectively without scanning the original database and without losing important rules. The study shows that the algorithm has good scalability.

18.
The huge number of frequent itemsets is one of the main problems that association rule visualization must address. This paper describes a visualization method for frequent itemsets and association rules based on parallel coordinates and an item classification tree. First, a display boundary is set among the frequent itemsets, and the closure property of frequent itemsets is exploited to prune large frequent itemsets. Then, combined with an overview+detail view-control technique, the user interactively selects the frequent itemsets of a node of interest and displays them in detail in the detail window, achieving interactive visualization of frequent itemsets and association rules.

19.
In a very large database, there exists sensitive information that must be protected against unauthorized access. Protecting the confidentiality of this information has been a long-term goal pursued by the database security research community and government statistical agencies. In this paper, we propose greedy methods for hiding sensitive rules. The experimental results show the effectiveness of our approaches in terms of the undesired side effects avoided in the rule hiding process. The results also reveal that in most cases all sensitive rules are hidden without generating spurious rules. Good scalability with respect to database size is achieved by using an efficient data structure, FCET, to store only maximal frequent itemsets instead of all frequent itemsets. Furthermore, we also propose a new framework for enforcing privacy in mining association rules. In the framework, we combine the techniques for efficiently hiding sensitive rules with a transaction retrieval engine based on the FCET index tree. For hiding sensitive rules, the proposed greedy approach includes a greedy approximation algorithm and a greedy exhaustive algorithm to sanitize the database. In particular, we present four strategies in the sanitizing procedure and four strategies in the exposed procedure, respectively, for hiding a group of association rules characterized as sensitive or artificial rules. In addition, the exposed procedure exposes missing rules during processing so that the number of missing rules can be kept as low as possible.
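A common sanitization strategy for hiding a sensitive rule X ⇒ Y is to remove an item of Y from selected transactions that support X ∪ Y until the rule's confidence drops below the threshold. The greedy sketch below illustrates that generic idea only; it is not the FCET-based procedure of the paper, and the victim-item choice is deliberately naive.

```python
def confidence(db, lhs, rhs):
    """Confidence of lhs => rhs over a list of transactions (sets of items)."""
    sup_lhs = sum(1 for tx in db if lhs <= tx)
    sup_both = sum(1 for tx in db if (lhs | rhs) <= tx)
    return sup_both / sup_lhs if sup_lhs else 0.0


def hide_rule(db, lhs, rhs, min_conf):
    """Greedily delete one rhs item from supporting transactions until the
    sensitive rule lhs => rhs falls below the confidence threshold."""
    victim = next(iter(rhs))                      # item to remove (naive choice)
    while confidence(db, lhs, rhs) >= min_conf:
        for tx in db:
            if (lhs | rhs) <= tx:
                tx.discard(victim)                # sanitize this transaction
                break
    return db


db = [set("abc"), set("ab"), set("abc"), set("bc")]
hide_rule(db, lhs={"a"}, rhs={"c"}, min_conf=0.5)
print(confidence(db, {"a"}, {"c"}))               # now below 0.5
```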

20.
This paper presents new algorithms to efficiently mine maximal frequent generalized itemsets (g-itemsets) and essential generalized association rules (g-rules). These are compact and general representations for all frequent patterns and all strong association rules in the generalized environment. Our results fill an important gap among algorithms for frequent patterns and association rules by combining two concepts. First, generalized itemsets employ a taxonomy of items rather than a flat list of items. This produces more natural frequent itemsets and associations, such as (meat, milk) instead of (beef, milk), (chicken, milk), etc. Second, compact representations of frequent itemsets and strong rules, whose result size is exponentially smaller, can resolve a standard dilemma in pattern mining: with small threshold values for support and confidence, the user is overwhelmed by the extraordinary number of identified patterns and associations, but with large threshold values some interesting patterns and associations fail to be identified. Our algorithms can also expand the max frequent g-itemsets and essential g-rules into the much larger sets of ordinary frequent g-itemsets and strong g-rules. While that expansion is not recommended in most practical cases, we do so in order to compare with existing algorithms that handle only ordinary frequent g-itemsets. In this case, the new algorithm is shown to be thousands, and in some cases millions, of times faster than previous algorithms. Further, the new algorithm succeeds in analyzing deeper taxonomies, with depths of seven or more; experimental results for previous algorithms limited themselves to taxonomies of depth at most three or four. For each of the two problems, a straightforward lattice-based approach is briefly discussed and then a classification-based algorithm is developed. In particular, the two classification-based algorithms are MFGI_class for mining max frequent g-itemsets and EGR_class for mining essential g-rules. The classification-based algorithms feature conceptual classification trees and dynamic generation and pruning algorithms.
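Generalized itemsets arise by letting each item also count toward its ancestors in the taxonomy. A common preprocessing step is to extend each transaction with those ancestors before ordinary mining, as sketched below with a made-up two-level taxonomy; this is only an illustration of the generalized setting, not the MFGI_class or EGR_class algorithm.

```python
def ancestors(item, parent):
    """All ancestors of an item in the taxonomy (parent maps child -> parent)."""
    result = []
    while item in parent:
        item = parent[item]
        result.append(item)
    return result


def extend(transaction, parent):
    """Extend a transaction with every ancestor of its items, so that
    generalized itemsets such as (meat, milk) can be counted directly."""
    extended = set(transaction)
    for item in transaction:
        extended.update(ancestors(item, parent))
    return extended


taxonomy = {"beef": "meat", "chicken": "meat", "whole milk": "milk"}   # toy taxonomy
print(extend({"beef", "whole milk"}, taxonomy))
# {'beef', 'whole milk', 'meat', 'milk'}
```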

