Similar Documents
20 similar documents retrieved.
1.
In recent years, with the emergence of new applications such as network traffic analysis, online transaction analysis, and network fraud detection, mining data streams has become an increasingly important topic. For frequent itemset mining over data streams, most existing research has focused on the traditional window models: the time-fading window model, the landmark window model, and the sliding window model. In 2009, Pauray S.M. Tsai proposed a new window model, the weighted sliding window model, and designed two frequent itemset mining algorithms based on it, WSW and WSW-Imp, where WSW-Imp is an improvement of WSW. Building on a study of the weighted sliding window model and the WSW-Imp algorithm, this paper further improves WSW-Imp and presents the algorithm WSW-Imp2. It is proved theoretically that WSW-Imp2 is more efficient than WSW-Imp, and the experimental results confirm this.
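In Tsai's weighted sliding window model, the window is divided into several basic windows, each assigned a user-defined weight, and an itemset is reported when its weighted support reaches the minimum support. The following is a minimal, brute-force sketch of that weighted-support test only; the function names, the normalization by weighted window size, and the enumeration loop are illustrative assumptions, not the WSW/WSW-Imp/WSW-Imp2 algorithms themselves.

from itertools import combinations

def weighted_support(itemset, basic_windows, weights):
    # basic_windows: list of basic windows, each a list of transactions (frozensets);
    # weights: one weight per basic window, newer windows usually weighted higher.
    itemset = frozenset(itemset)
    weighted_count = sum(
        w * sum(1 for t in window if itemset <= t)
        for window, w in zip(basic_windows, weights))
    weighted_size = sum(w * len(window) for window, w in zip(basic_windows, weights))
    return weighted_count / weighted_size if weighted_size else 0.0

def weighted_frequent_itemsets(basic_windows, weights, minsup, max_len=3):
    # Brute-force enumeration, only to make the weighted-support condition concrete.
    items = {i for window in basic_windows for t in window for i in t}
    result = {}
    for k in range(1, max_len + 1):
        for cand in combinations(sorted(items), k):
            s = weighted_support(cand, basic_windows, weights)
            if s >= minsup:
                result[frozenset(cand)] = s
    return result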

2.
As data have accumulated more quickly in recent years, the corresponding databases have also become much larger, and general frequent pattern mining methods face limitations in responding to such massive data. To overcome this problem, data mining researchers have studied methods that can conduct more efficient and immediate mining tasks by scanning databases only once. Thereafter, the sliding window model, which can perform mining operations focused on the most recently accumulated parts of a data stream, was proposed, and a variety of related mining approaches have been suggested. However, it is hard to mine all of the frequent patterns in the data stream environment, since the number of generated patterns increases remarkably as data streams are continuously extended; methods for efficiently compressing generated patterns are therefore needed. In addition, since weight constraints expressing item importance are, along with support conditions, an important factor in pattern mining, they need to be considered in the mining process. Motivated by these issues, we propose a novel algorithm, weighted maximal frequent pattern mining over data streams based on the sliding window model (WMFP-SW), to obtain weighted maximal frequent patterns reflecting recent information over data streams. Performance experiments report that WMFP-SW outperforms previous algorithms in terms of runtime, memory usage, and scalability.

3.
Sliding window-based frequent pattern mining over data streams
Finding frequent patterns in a continuous stream of transactions is critical for many applications such as retail market data analysis, network monitoring, web usage mining, and stock market prediction. Even though numerous frequent pattern mining algorithms have been developed over the past decade, new solutions for handling stream data are still required due to the continuous, unbounded, and ordered sequence of data elements generated at a rapid rate in a data stream. Therefore, extracting frequent patterns from more recent data can enhance the analysis of stream data. In this paper, we propose an efficient technique to discover the complete set of recent frequent patterns from a high-speed data stream over a sliding window. We develop a Compact Pattern Stream tree (CPS-tree) to capture the recent stream data content and efficiently remove the obsolete, old stream data content. We also introduce the concept of dynamic tree restructuring in our CPS-tree to produce a highly compact frequency-descending tree structure at runtime. The complete set of recent frequent patterns is obtained from the CPS-tree of the current window using an FP-growth mining technique. Extensive experimental analyses show that our CPS-tree is highly efficient in terms of memory and time complexity when finding recent frequent patterns from a high-speed data stream.
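The bookkeeping that any transaction-sensitive sliding window rests on — append the newest transaction, evict the obsolete one, keep item counts current so a frequency-descending order can be derived — can be sketched as below. This is only an illustrative toy, assuming a fixed window of whole transactions; the CPS-tree itself stores the transactions in a prefix tree and restructures it at runtime, which this sketch omits.

from collections import deque, Counter

class TransactionWindow:
    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.counts = Counter()

    def add(self, transaction):
        self.window.append(transaction)
        self.counts.update(transaction)
        if len(self.window) > self.size:
            old = self.window.popleft()        # remove the obsolete transaction
            self.counts.subtract(old)
            self.counts += Counter()           # drop items whose count fell to zero

    def frequency_descending_order(self):
        # The item order a frequency-descending tree (such as CPS-tree) would use.
        return [item for item, _ in self.counts.most_common()]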

4.
荣文亮  杨燕 《计算机应用》2008,28(6):1467-1470
Mining the set of frequent closed patterns instead of the set of all frequent patterns is an important strategy proposed in recent years. Based on the characteristics of data streams, a new sliding window-based method for mining frequent closed patterns, DSFC_Mine, is proposed. The algorithm takes the basic windows within the sliding window as its update unit, uses an improved CHARM algorithm to compute the potentially frequent closed itemsets of each basic window, and stores them in a new data structure from which all frequent closed itemsets in the sliding window can be mined quickly. Experiments verify the feasibility and effectiveness of the algorithm in terms of both time and space.

5.
Sliding window is a widely used model for data stream mining due to its emphasis on recent data and its bounded memory requirement. The main idea behind a transactional sliding window is to keep a fixed-size window over a data stream. The window size is kept constant by removing old transactions from the window when new transactions arrive. Older transactions are removed from the window irrespective of whether a significant change has occurred. Another challenge of the sliding window model is determining the window size. The classic approach is to obtain it from the user. In order to determine the precise size of the window, the user must have prior knowledge about the time and scale of changes within the data stream. However, due to the unpredictably changing nature of data streams, this prior knowledge cannot be easily determined. Moreover, using a fixed window size during data stream mining degrades the model's ability to reflect recent changes. Based on these observations, this study relaxes the notion of window size and proposes a new algorithm named VSW (Variable Size sliding Window frequent itemset mining), which is suitable for observing recent changes in the set of frequent itemsets over data streams. The window size is determined dynamically based on the amount of concept change that occurs within the arriving data stream. The window expands as the concept becomes stable and shrinks when a concept change occurs. In this study, it is shown that if stale transactions are removed from the window after a concept change, the updated frequent itemsets always belong to the most recent concept. Experimental evaluations on both synthetic and real data show that our algorithm effectively detects concept change, adjusts the window size, and adapts itself to the new concepts along the data stream.
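A hedged sketch of the window-resizing idea is given below: the window keeps growing while the item distribution of newly arrived blocks stays close to that of the window, and is cut back to the newest block when the distance exceeds a threshold. The L1-distance test, the block size, and the drift_threshold parameter are illustrative stand-ins; VSW's own concept-change detector is not reproduced here.

from collections import Counter

def _item_distribution(transactions):
    counts = Counter(item for t in transactions for item in t)
    total = sum(counts.values()) or 1
    return {item: n / total for item, n in counts.items()}

class VariableSizeWindow:
    def __init__(self, block_size=100, drift_threshold=0.4):
        self.block_size = block_size
        self.drift_threshold = drift_threshold
        self.window = []      # transactions currently in the (variable-size) window
        self.block = []       # transactions of the block being filled

    def add(self, transaction):
        self.block.append(frozenset(transaction))
        if len(self.block) == self.block_size:
            self._absorb_block()

    def _absorb_block(self):
        if self.window:
            old = _item_distribution(self.window)
            new = _item_distribution(self.block)
            l1 = sum(abs(old.get(i, 0.0) - new.get(i, 0.0)) for i in set(old) | set(new))
            if l1 > self.drift_threshold:
                self.window = []              # concept change: drop stale transactions
        self.window.extend(self.block)        # stable concept: the window keeps growing
        self.block = []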

6.
A data stream is a massive, open-ended sequence of data elements continuously generated at a rapid rate. Mining data streams is more difficult than mining static databases because of the huge, high-speed and continuous characteristics of streaming data. In this paper, we propose a new one-pass algorithm called DSM-MFI (Data Stream Mining for Maximal Frequent Itemsets), which mines the set of all maximal frequent itemsets in landmark windows over data streams. A new summary data structure called summary frequent itemset forest (abbreviated as SFI-forest) is developed for incrementally maintaining the essential information about maximal frequent itemsets embedded in the stream so far. Theoretical analysis and experimental studies show that the proposed algorithm is efficient and scalable for mining the set of all maximal frequent itemsets over the entire history of the data streams.

7.
High utility pattern (HUP) mining over data streams has become a challenging research issue in data mining. When a data stream flows through, the old information may not be interesting in the current time period. Therefore, incremental HUP mining is necessary over data streams. Even though some methods have been proposed to discover recent HUPs by using a sliding window, they suffer from the level-wise candidate generation-and-test problem. Hence, they need a large amount of execution time and memory. Moreover, their data structures are not suitable for interactive mining. To solve these problems of the existing algorithms, in this paper, we propose a novel tree structure, called HUS-tree (high utility stream tree) and a new algorithm, called HUPMS (high utility pattern mining over stream data) for incremental and interactive HUP mining over data streams with a sliding window. By capturing the important information of stream data into an HUS-tree, our HUPMS algorithm can mine all the HUPs in the current window with a pattern growth approach. Furthermore, HUS-tree is very efficient for interactive mining. Extensive performance analyses show that our algorithm is very efficient for incremental and interactive HUP mining over data streams and significantly outperforms the existing sliding window-based HUP mining algorithms.
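The utility computation that high-utility pattern mining rests on can be made concrete with a small sketch: the utility of an itemset in a transaction is the sum, over its items, of purchased quantity times unit profit, and its utility in the window is the sum over the transactions that contain it. The profit table and example transactions below are hypothetical, and the brute-force scan stands in for the pattern-growth search HUPMS performs on its HUS-tree.

def itemset_utility(itemset, window, profit):
    # window: list of dicts mapping item -> quantity; profit: dict mapping item -> unit profit.
    total = 0
    for t in window:
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset)
    return total

# Hypothetical usage: utility of {a, b} in the current sliding window.
window = [{'a': 2, 'b': 1}, {'a': 1, 'c': 3}, {'a': 1, 'b': 2, 'c': 1}]
profit = {'a': 5, 'b': 3, 'c': 1}
print(itemset_utility({'a', 'b'}, window, profit))   # (2*5 + 1*3) + (1*5 + 2*3) = 24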

8.
DSM-FI: an efficient algorithm for mining frequent itemsets in data streams
Online mining of data streams is an important data mining problem with broad applications. However, it is also a difficult problem since the streaming data possess some inherent characteristics. In this paper, we propose a new single-pass algorithm, called DSM-FI (data stream mining for frequent itemsets), for online incremental mining of frequent itemsets over a continuous stream of online transactions. According to the proposed algorithm, each transaction of the stream is projected into a set of sub-transactions, and these sub-transactions are inserted into a new in-memory summary data structure, called SFI-forest (summary frequent itemset forest) for maintaining the set of all frequent itemsets embedded in the transaction data stream generated so far. Finally, the set of all frequent itemsets is determined from the current SFI-forest. Theoretical analysis and experimental studies show that the proposed DSM-FI algorithm uses stable memory, makes only one pass over an online transactional data stream, and outperforms the existing algorithms of one-pass mining of frequent itemsets.

9.
Frequent itemset mining is a hot topic in data stream mining. A new algorithm for mining frequent closed itemsets over data streams, MFCI-SW, is proposed. First, two new data structures are designed: the frequent closed itemset list FCIL and the frequent closed pattern tree MFCI-SW-Tree. On this basis, the basic windows within the sliding window serve as the update unit: in each basic window the frequent closed itemsets are extracted, and their support F and window sequence number K are stored in the FCIL. Then, as new basic windows arrive, the FCIL is updated and the MFCI-SW-Tree is pruned by deleting the entries with the smallest K value from the frequent closed itemset list and inserting the new entries. Finally, the frequent closed itemsets that satisfy the user's requirements can be mined quickly from the MFCI-SW-Tree. Experimental results show that the algorithm clearly outperforms the DS-CFI algorithm in execution efficiency.

10.
With the arrival of the big data era, large volumes of unstructured text data streams are generated on the network; such streams are dynamic, high-dimensional, and sparse. To address these characteristics, the traditional AP (affinity propagation) algorithm is first combined with the characteristics of streaming text data, and a text data stream clustering algorithm, OAP-s, is proposed. The algorithm introduces a decay factor into AP to decay the cluster-center results, and carries the cluster centers of the current time window into the next time window for clustering. To remedy the shortcomings of OAP-s, the OWAP-s algorithm is further proposed. Building on the OAP-s model, it defines a weighted similarity and introduces an attraction factor so that historical cluster centers become more attractive, yielding more accurate clustering results. Both algorithms adopt the sliding time window model, so that they capture both the temporal and the distributional characteristics of the data stream. Experimental results show that both algorithms outperform the OSKM algorithm in clustering accuracy and stability, and exhibit good scalability and extensibility.

11.
The sliding window is an effective technique for mining data from the most recent period of time. This paper proposes a sliding window-based algorithm for mining frequent items over data streams. The algorithm adopts a linked-list queue strategy, which greatly simplifies it and improves mining efficiency. For a given threshold S, error ε, and window length n, the algorithm detects the frequent items whose frequency within the window exceeds Sn, with an error bounded by εn. The space complexity of the algorithm is O(ε⁻¹), and both the per-item processing time and the query time are O(1). On this basis, the algorithm is further extended so that different frequent-item mining algorithms can be obtained by varying its parameters, allowing a trade-off between time and space complexity. Extensive experiments demonstrate that the proposed algorithm achieves better accuracy and better time and space efficiency than similar algorithms.
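The guarantee stated above (frequency error within εn using O(ε⁻¹) space and constant-time updates) is the kind of guarantee classic ε-approximate frequent-item counters provide. The sketch below uses the well-known Space-Saving scheme purely as a stand-in, since the paper's own linked-list queue structure is not described in the abstract; a real implementation would also use a dedicated structure to make the minimum-counter lookup constant-time.

import math

class SpaceSaving:
    def __init__(self, epsilon):
        self.capacity = math.ceil(1.0 / epsilon)   # O(1/epsilon) counters
        self.counts = {}    # item -> estimated count
        self.errors = {}    # item -> overestimation bound

    def update(self, item):
        if item in self.counts:
            self.counts[item] += 1
        elif len(self.counts) < self.capacity:
            self.counts[item], self.errors[item] = 1, 0
        else:
            victim = min(self.counts, key=self.counts.get)
            floor = self.counts.pop(victim)
            self.errors.pop(victim)
            self.counts[item] = floor + 1   # new item inherits the evicted count...
            self.errors[item] = floor       # ...which bounds its overestimation

    def frequent(self, n, support):
        # Items whose estimated frequency exceeds support * n after n arrivals.
        return {i: c for i, c in self.counts.items() if c >= support * n}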

12.
Frequent pattern mining over data streams based on a time-decay model
吴枫  仲妍  吴泉源 《自动化学报》2010,36(5):674-684
Frequent pattern mining is an important research topic in data stream mining. Considering the time-sensitivity of data streams and the drift of the stream's concept center, a frequent pattern mining algorithm over data streams that combines the landmark window model with a time-decay model is proposed. The algorithm dynamically builds a global pattern tree and uses an exponential time-decay function to accumulate the support count of each pattern in the tree, thereby characterizing how frequent each pattern is within the landmark window. To effectively reduce the space overhead, a pruning threshold function is designed to promptly remove from the global tree those patterns that are unlikely to grow frequent. The important parameters and thresholds appearing in the algorithm are analyzed in depth. A series of experiments shows that, compared with the existing algorithm MSW of the same kind, the proposed algorithm achieves high mining accuracy (over 90% on average) and low memory overhead, is fast enough to handle high-speed data streams, and adapts to data streams with different numbers of transactions, different average transaction lengths, and different average lengths of maximal potentially frequent patterns.
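The core of such a damped model is that every pattern's support count is multiplied by an exponential decay factor as time passes, so old occurrences gradually lose weight. A hedged sketch of the usual lazy-update bookkeeping is given below; the decay base and the paper's exact decay and pruning-threshold functions are not reproduced from the abstract.

class DecayedCount:
    def __init__(self, decay=0.99):
        self.decay = decay          # decay base b, 0 < b < 1
        self.count = 0.0
        self.last_update = 0        # time (e.g. transaction id) of the last update

    def observe(self, t):
        # The pattern occurs in the transaction arriving at time t.
        self.count = self.count * (self.decay ** (t - self.last_update)) + 1.0
        self.last_update = t

    def value(self, t):
        # Decayed support as of time t, without recording a new occurrence.
        return self.count * (self.decay ** (t - self.last_update))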

13.
时兵 《计算机仿真》2020,37(4):330-334
To address the long mining time and low accuracy of traditional AI-based methods for mining frequent itemsets from complex network data streams, a timestamp-based AI mining method for frequent itemsets over complex network data streams is proposed. In the training phase, a Bayesian classification algorithm is used to find all frequent itemsets of the complex network data stream and to compute probability estimates for the different frequent itemsets. In the testing phase, different classifiers are constructed for different test samples and combined into an ensemble to obtain the classification result. Based on the classification result, a timestamp-based sliding window model is built, and itemsets are processed with a delay determined by the window size; when the change bound of an itemset's type exceeds a given threshold, the support is recomputed and the change bound is updated accordingly, completing the AI-based mining of frequent itemsets over complex network data streams. Experimental results show that the proposed method mines frequent itemsets over data streams quickly and accurately.

14.
Sequential patterns have important applications in gene analysis, financial forecasting, and other areas, and sequential pattern mining is a major branch of data mining. Given the growing number of data stream applications, this paper builds on traditional sequential pattern mining algorithms and proposes a sequential pattern mining algorithm for data streams based on an extensible sliding window and Bayesian probability filtering (the BMSP-DS algorithm). Its purpose is to simplify the intermediate results of sequential pattern discovery and improve mining efficiency, so that frequent sequential patterns in stream data can be found quickly with small storage space and low computation time. The algorithm also reduces the negative impact on pattern discovery caused by poorly chosen, subjective support thresholds. Experimental results show that the algorithm is feasible and performs well.

15.
李海峰  章宁 《计算机科学》2011,38(5):164-168
The high-speed, unbounded, and dynamic nature of data streams demands that frequent itemset mining over stream data be completed within limited memory and at the highest possible computation speed. This paper partitions the stream data into segments stored in a list of two-tuples, and proposes a sliding window-based approximate frequent itemset mining method, AFIoDS, which obtains a proper subset of the set of frequent itemsets in real time. A probability parameter is introduced, and the Chernoff bound is used to dynamically adjust the approximate support threshold, guaranteeing that the frequent itemsets in this subset stay within a given error range. Furthermore, to save additional memory, AFIoDS compresses the frequent itemsets obtained from each segment into the form of closed itemsets. Experiments on three real data sets show that, compared with existing algorithms, AFIoDS achieves faster processing speed with no loss of accuracy, while greatly reducing storage overhead.
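One way a Chernoff-style bound is typically used in this setting is to derive a support tolerance that shrinks as more transactions are observed, so the effective support threshold can be tightened dynamically. The particular bound below, used in several approximate stream mining papers, is an assumption for illustration; AFIoDS's exact probability parameter and update rule are not given in the abstract.

import math

def chernoff_tolerance(n, minsup, delta):
    # Error tolerance after n transactions, for minimum support minsup and
    # failure probability delta.
    return math.sqrt(2.0 * minsup * math.log(2.0 / delta) / n)

def effective_threshold(n, minsup, delta):
    # Keep itemsets whose observed support exceeds minsup minus the tolerance;
    # the tolerance decreases as the stream grows, tightening the threshold.
    return max(0.0, minsup - chernoff_tolerance(n, minsup, delta))

# Example: with minsup = 0.01 and delta = 0.1 the threshold tightens with n.
for n in (1000, 10000, 100000):
    print(n, round(effective_threshold(n, 0.01, 0.1), 5))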

16.
Online mining of path traversal patterns from continuous Web click streams is one of the challenging research problems of Web usage mining. Most previous works focus on mining path traversal patterns over the entire history of Web click streams. Mining the recent changes of Web click streams can provide valuable information for their analysis. In this paper, we propose a new online mining algorithm, called Top-DSW (top-k path traversal patterns over a stream Damped Sliding Window), to discover the set of top-k path traversal patterns from streaming maximal forward references, where k is the desired number of path traversal patterns to be mined. An effective summary data structure, called TKP-DSW-list (a list of top-k path traversal patterns over stream damped sliding windows), is developed to maintain the essential information about the top-k path traversal patterns from the maximal forward references within a stream damped sliding window. An effective space pruning mechanism, called TKR-list-maintain, is developed to control the memory requirement of the TKP-DSW-list. Experimental studies show that the proposed Top-DSW algorithm is an efficient, single-pass algorithm for online mining of the set of top-k path traversal patterns over stream damped sliding windows.

17.
As an important research problem in data stream mining, frequent pattern mining over data streams under the sliding window model has been widely applied and studied in recent years. Most existing algorithms have to process all of the data in the stream, whereas in practice users often care about only certain aspects of it. Drawing on the MFI-TransSW algorithm, this paper proposes BSW-Filter (Bit Sliding Window with Filter), an algorithm based on a transaction-sensitive sliding window. The algorithm uses bit sequences to implement the sliding window operations, and by adding a filter for frequent items it reduces the number of items that must be stored, thereby reducing memory usage and increasing processing speed. The space complexity of the algorithm is independent of the sliding window size and of the value range of the data stream, making it especially suitable for mining data with long periods and wide value ranges. Analysis and experiments verify the feasibility and effectiveness of the algorithm.
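The bit-sequence representation that MFI-TransSW introduced, and that BSW-Filter builds on, keeps one bit per window slot for every item: sliding the window is a shift, and the support of an itemset is the popcount of the bitwise AND of its items' sequences. The sketch below illustrates that representation, with BSW-Filter's frequent-item filter shown only as a simple per-item count threshold.

class BitSlidingWindow:
    def __init__(self, window_size):
        self.w = window_size
        self.mask = (1 << window_size) - 1
        self.bits = {}                    # item -> bit sequence over the window

    def add_transaction(self, transaction):
        # Slide: shift every tracked item's bit sequence by one position.
        for item in self.bits:
            self.bits[item] = (self.bits[item] << 1) & self.mask
        # Set the newest bit for items present in the incoming transaction.
        for item in transaction:
            self.bits[item] = self.bits.get(item, 0) | 1

    def support(self, itemset):
        # Support of an itemset = popcount of the AND of its items' bit sequences.
        acc = self.mask
        for item in itemset:
            acc &= self.bits.get(item, 0)
        return bin(acc).count("1")

    def frequent_items(self, min_count):
        # The filtering step: keep only items that are frequent on their own.
        return {i for i, b in self.bits.items() if bin(b).count("1") >= min_count}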

18.
Traditional frequent itemset mining algorithms over data streams suffer from a series of problems in support updating, window update mode, and frequent k-itemset mining, which lead to poor space and time efficiency. To address this, an efficient improved algorithm, AO, for mining frequent itemsets over data streams is proposed. The algorithm uses the sliding window idea to mine the data stream in blocks; when new data arrives at a full window, a residual-insertion strategy is used to update the data; an AND operation is used to compute the support of frequent k-itemsets; and superset detection is incorporated into the mining process, which greatly improves mining efficiency. Experimental results show that the algorithm performs well in both time and space efficiency.

19.
SWFPM: an effective algorithm for mining frequent items over data streams
The shortcomings of the frequent item mining algorithm EC over data streams are analyzed, such as its inability to accurately mine the frequent items of the most recent portion of the stream. A data structure of composite four-tuples of frequent-item sample features is proposed to store the sample set, and on this basis a sliding window-based frequent item mining algorithm over data streams, SWFPM, is proposed. The algorithm can accurately mine the frequent items within the sliding window. The experimental data consist of customer shopping data generated by the IBM synthetic data generator and the access logs of the official 1998 World Cup website. Experimental results show that the algorithm achieves high accuracy in frequent item mining and fast data processing.

20.
A survey of mining algorithms for concept-drifting data streams
丁剑  韩萌  李娟 《计算机科学》2016,43(12):24-29, 62
A data stream is a new kind of data model that is dynamic, unbounded, high-dimensional, ordered, high-speed, and changing. In real data stream environments, some data distributions change over time, i.e., they exhibit concept drift; such streams are called evolving data streams or concept-drifting data streams. Methods for processing data streams therefore need to handle space and time constraints and adaptively adjust to concept changes. This paper surveys the concept drift problem and the classification, clustering, and pattern mining of concept-drifting data streams. It first introduces the types of concept drift and commonly used concept-change detection methods. To deal with concept drift, data stream mining often uses the sliding window model to process recent transactions. Commonly used models for data stream classification include single-classifier and ensemble models, and commonly used methods include decision trees and classification association rules. Data stream clustering approaches are usually divided into k-means-based and non-k-means-based methods. Pattern mining can provide useful information for classification, clustering, and association rules. Patterns in concept-drifting data streams include frequent patterns, sequential patterns, episodes, pattern trees, pattern graphs, and high-utility patterns. Finally, the frequent pattern mining algorithms and high-utility pattern mining algorithms among these are introduced in detail.
