Similar Documents
20 similar documents found (search time: 15 ms)
1.
Sliding window-based frequent pattern mining over data streams (Total citations: 2; self-citations: 0; citations by others: 2)
Finding frequent patterns in a continuous stream of transactions is critical for many applications such as retail market data analysis, network monitoring, web usage mining, and stock market prediction. Even though numerous frequent pattern mining algorithms have been developed over the past decade, new solutions for handling stream data are still required due to the continuous, unbounded, and ordered sequence of data elements generated at a rapid rate in a data stream. Therefore, extracting frequent patterns from more recent data can enhance the analysis of stream data. In this paper, we propose an efficient technique to discover the complete set of recent frequent patterns from a high-speed data stream over a sliding window. We develop a Compact Pattern Stream tree (CPS-tree) to capture the recent stream data content and efficiently remove the obsolete, old stream data content. We also introduce the concept of dynamic tree restructuring in our CPS-tree to produce a highly compact frequency-descending tree structure at runtime. The complete set of recent frequent patterns is obtained from the CPS-tree of the current window using an FP-growth mining technique. Extensive experimental analyses show that our CPS-tree is highly efficient in terms of memory and time complexity when finding recent frequent patterns from a high-speed data stream.
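To make the windowing idea above concrete, here is a minimal Python sketch: a fixed-size window of recent transactions where appending a new transaction evicts the oldest, with a brute-force count standing in for the paper's CPS-tree/FP-growth mining. The names and the toy stream are illustrative assumptions, not the authors' implementation.

```python
from collections import deque
from itertools import combinations

def mine_window(window, min_sup):
    """Brute-force stand-in for FP-growth: return every itemset whose
    support in the current window is at least min_sup (absolute count)."""
    counts = {}
    for txn in window:
        items = sorted(txn)
        for size in range(1, len(items) + 1):
            for itemset in combinations(items, size):
                counts[itemset] = counts.get(itemset, 0) + 1
    return {iset: c for iset, c in counts.items() if c >= min_sup}

# A sliding window over the W most recent transactions: appending a new
# transaction automatically evicts the oldest (obsolete) one.
W = 4
window = deque(maxlen=W)
stream = [{'a', 'b'}, {'b', 'c'}, {'a', 'b', 'c'}, {'b'}, {'a', 'b'}]

recent_frequent = {}
for txn in stream:
    window.append(txn)                 # newest transaction enters, oldest leaves
    recent_frequent = mine_window(window, min_sup=2)

print(recent_frequent)                 # frequent itemsets of the current window
```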

2.
To address the problems that existing algorithms face when mining maximal frequent itemsets over data streams, a vector-based algorithm for mining maximal frequent itemsets in a sliding window over a data stream is proposed. The algorithm first uses vectors as its synopsis data structure and adopts a quantitative window-update strategy to handle the time-granularity problem. It then generates frequent itemsets through bitwise operations, stores auxiliary information in matrices and arrays, and applies pruning strategies during the depth-first search for maximal frequent itemsets to further reduce mining time. Finally, the mining results are stored in an indexed linked list to improve the efficiency of superset checking. Theoretical analysis and experimental results verify the effectiveness of the algorithm.
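A small sketch of the bit-vector idea described above, assuming each item keeps one bit per transaction in the current window, so the support of an itemset is the popcount of the bitwise AND of its items' vectors; the matrix/array bookkeeping and pruning of the actual algorithm are omitted.

```python
# Each item is represented by an integer bit vector: bit j is 1 if the
# j-th transaction in the window contains the item.
window = [{'a', 'b'}, {'b', 'c'}, {'a', 'b', 'c'}, {'b'}]

bitvec = {}
for j, txn in enumerate(window):
    for item in txn:
        bitvec[item] = bitvec.get(item, 0) | (1 << j)

def support(itemset):
    """Support = number of window transactions containing every item,
    i.e. the popcount of the bitwise AND of the item vectors."""
    v = (1 << len(window)) - 1          # all-ones mask for the window
    for item in itemset:
        v &= bitvec.get(item, 0)
    return bin(v).count('1')

print(support({'a', 'b'}))   # -> 2
print(support({'b', 'c'}))   # -> 2
```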

3.
A sliding window-based algorithm for mining maximal frequent itemsets over data streams (Total citations: 2; self-citations: 0; citations by others: 2)
Mining maximal frequent itemsets is an effective way to obtain information from a data stream and is one of the hot topics in data stream mining research. Considering the characteristics of data streams, a new sliding window-based algorithm for mining maximal frequent itemsets is proposed. The algorithm uses a bitmap to store the data flowing through the stream and updates the stored data by direct overwriting. During the depth-first search for maximal frequent itemsets, in addition to the classic pruning strategies, a child-equivalence pruning strategy corresponding to the parent-equivalence principle is proposed. Finally, the mining results are stored in an indexed linked list to improve the efficiency of superset checking and further reduce the time needed to mine maximal frequent itemsets. Theoretical analysis and experimental results confirm the time and space efficiency of the algorithm.
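The superset check mentioned above can be illustrated in a few lines: keep only the frequent itemsets that are not proper subsets of another frequent itemset. This is a generic sketch, not the paper's bitmap-based structure or its child-equivalence pruning.

```python
def maximal_itemsets(frequent):
    """Keep only itemsets that are not a proper subset of another
    frequent itemset (the classic superset check)."""
    frequent = [frozenset(s) for s in frequent]
    maximal = []
    for s in frequent:
        if not any(s < t for t in frequent):   # is s strictly contained in some t?
            maximal.append(s)
    return maximal

freq = [{'a'}, {'b'}, {'c'}, {'a', 'b'}, {'b', 'c'}, {'a', 'b', 'c'}]
print(maximal_itemsets(freq))   # -> [frozenset({'a', 'b', 'c'})]
```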

4.
韩萌  丁剑 《计算机应用》2019,39(3):719-727
Advanced applications such as fraud detection and trend learning have driven the development of frequent pattern mining over data streams. Unlike static data, data stream mining faces problems such as space-time constraints and the combinatorial explosion of itemsets. This paper surveys existing frequent pattern mining algorithms for data streams and analyzes both classic and state-of-the-art algorithms. Classified by the completeness of the pattern set, frequent patterns in data streams fall into full-set patterns and compressed patterns. Compressed patterns mainly include closed patterns, maximal patterns, top-k patterns, and combinations of the three; the difference is that closed patterns are a lossless compression while the others are lossy. To obtain interesting frequent patterns, patterns can be mined under user-defined constraints. To handle recent transactions in a data stream, algorithms are divided into window-model-based and decay-model-based methods. Pattern mining over data streams also commonly includes sequential pattern mining and high-utility pattern mining, and classic and recent algorithms for these are introduced. Finally, directions for future work in data stream pattern mining are given.

5.
A method for mining frequent itemsets over data streams in a sliding window (Total citations: 2; self-citations: 0; citations by others: 2)
Based on the flowing and continuous nature of data streams, a sliding-window frequent itemset mining algorithm called NSW is proposed to meet the need for quickly obtaining frequent itemsets from the most recently arrived data. The algorithm represents the transaction list in the sliding window as a binary matrix and controls the time and space overhead by directly deleting the oldest transaction and avoiding candidate itemset generation. Experiments show that the algorithm achieves good time and space efficiency.
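A toy sketch of the binary-matrix window with direct removal of the oldest transaction, under the assumption of a small fixed item universe; the candidate-free mining part of NSW is not reproduced here.

```python
# Transactions in the window as rows of a 0/1 matrix over a fixed item order.
ITEMS = ['a', 'b', 'c', 'd']

def to_row(txn):
    return [1 if item in txn else 0 for item in ITEMS]

window = []           # list of rows, oldest first
MAX_SIZE = 3

def slide(txn):
    """Append the new transaction; drop the oldest once the window is full."""
    if len(window) == MAX_SIZE:
        window.pop(0)               # direct removal of the oldest transaction
    window.append(to_row(txn))

for txn in [{'a', 'b'}, {'b', 'c'}, {'a'}, {'b', 'd'}]:
    slide(txn)

# Support of a single item is simply its column sum.
support_b = sum(row[ITEMS.index('b')] for row in window)
print(window, support_b)    # window holds the 3 most recent rows; support_b == 2
```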

6.
In recent years, data stream mining has become an important research topic. With the emergence of new applications, the data we process are no longer static but continuous, dynamic data streams. Examples include network traffic analysis, Web click stream mining, network intrusion detection, and on-line transaction analysis. In this paper, we propose a new framework for data stream mining, called the weighted sliding window model. The proposed model allows the user to specify the number of windows for mining, the size of a window, and the weight for each window. Thus users can assign a higher weight to a more significant data section, which makes the mining result closer to the user's requirements. Based on the weighted sliding window model, we propose a single-pass algorithm, called WSW, to efficiently discover all the frequent itemsets from data streams. By analyzing data characteristics, an improved algorithm, called WSW-Imp, is developed to further reduce the time needed to decide whether a candidate itemset is frequent. Empirical results show that WSW-Imp outperforms WSW under the weighted sliding window model.
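A minimal sketch of how a weighted support could be computed under the weighted sliding window model described above, assuming three windows with user-given weights and a 50% weighted minimum support; the window contents and threshold are illustrative, and this is not the WSW implementation.

```python
# Three windows of transactions, most recent last, with user-given weights.
windows = [
    [{'a', 'b'}, {'b'}],          # oldest window, lowest weight
    [{'a', 'b'}, {'a', 'c'}],
    [{'a', 'b'}, {'b', 'c'}],     # most recent window, highest weight
]
weights = [1, 2, 3]

def weighted_support(itemset):
    """Each occurrence of the itemset counts with the weight of its window."""
    s = set(itemset)
    return sum(w * sum(1 for txn in win if s <= txn)
               for win, w in zip(windows, weights))

total_weight = sum(w * len(win) for win, w in zip(windows, weights))
min_wsup = 0.5 * total_weight          # weighted minimum support of 50%

print(weighted_support({'a', 'b'}), weighted_support({'a', 'b'}) >= min_wsup)
```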

7.
In recent years, data stream mining has been a research hotspot both in China and abroad, and frequent itemset mining is an important problem within it. Based on the unbounded and flowing nature of data streams, an algorithm named FIM-SW is proposed for mining frequent itemsets in a sliding window. FIM-SW mainly adopts a vertical database representation, represents each data item as a binary vector, and uses the Apriori property to generate frequent itemsets. Experimental results show that the algorithm significantly improves mining efficiency.
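The Apriori property mentioned above can be illustrated as follows: a candidate k-itemset is kept only if all of its (k-1)-subsets are frequent. The candidate-join step shown is a generic sketch, not FIM-SW's vertical bit-vector machinery.

```python
from itertools import combinations

def apriori_gen(freq_km1):
    """Generate candidate k-itemsets from frequent (k-1)-itemsets and prune
    any candidate with an infrequent (k-1)-subset (the Apriori property)."""
    freq_km1 = {frozenset(s) for s in freq_km1}
    k = len(next(iter(freq_km1))) + 1
    candidates = set()
    for a in freq_km1:
        for b in freq_km1:
            union = a | b
            if len(union) == k:
                candidates.add(union)
    return [c for c in candidates
            if all(frozenset(sub) in freq_km1 for sub in combinations(c, k - 1))]

# Frequent 2-itemsets; {'a','c'} is missing, so {'a','b','c'} must be pruned.
freq2 = [{'a', 'b'}, {'b', 'c'}, {'b', 'd'}, {'c', 'd'}]
print(apriori_gen(freq2))   # -> [frozenset({'b', 'c', 'd'})]
```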

8.
Mining frequent itemsets over data streams is one of the important research topics in data mining and knowledge learning today. The high speed, continuity, unboundedness, and real-time nature of data streams place higher demands on mining algorithms in terms of both time and space. Traditional data mining algorithms require frequent maintenance of their storage structures, so their precision and speed are low and their space and time efficiency is poor. Building on granular computing and the ECLAT algorithm, an algorithm is proposed for mining the top-K frequent itemsets in a sliding window over a data stream: items are stored in binary form, incremental updates are implemented with shift operations, itemset supports are computed with bitwise AND operations, results are inserted into an ordered item list using binary search, and the top K frequent items are output. Experimental results show that the algorithm has good time and space efficiency when K is not too large.
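A small sketch of the ordered-insertion step: itemset supports are inserted into an ordered list with binary search, and the K itemsets with the highest support are returned. The bit-shift update and AND-based support counting of the actual algorithm are assumed to have already produced the support table.

```python
import bisect

def top_k(itemset_supports, k):
    """Keep itemsets ordered by support using binary-search insertion
    and return the K itemsets with the highest support."""
    ordered = []                      # list of (support, itemset), ascending
    for itemset, sup in itemset_supports.items():
        bisect.insort(ordered, (sup, tuple(sorted(itemset))))
    return [itemset for sup, itemset in ordered[-k:]][::-1]

supports = {frozenset({'a'}): 5, frozenset({'b'}): 7,
            frozenset({'a', 'b'}): 4, frozenset({'c'}): 6}
print(top_k(supports, k=2))   # -> [('b',), ('c',)]
```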

9.
Mining frequent itemsets over data streams is one of the important research topics in data mining and knowledge learning today. The high speed, continuity, unboundedness, and real-time nature of data streams place higher demands on mining algorithms in terms of both time and space. Traditional data mining algorithms require frequent maintenance of their storage structures, so their precision and speed are low and their space and time efficiency is poor. Building on granular computing and the ECLAT algorithm, an algorithm is proposed for mining the top-K frequent itemsets in a sliding window over a data stream: items are stored in binary form, incremental updates are implemented with shift operations, itemset supports are computed with bitwise AND operations, results are inserted into an ordered item list using binary search, and the top K frequent items are output. Experimental results show that the algorithm has good time and space efficiency when K is not too large.

10.
A data stream is a massive, open-ended sequence of data elements continuously generated at a rapid rate. Mining data streams is more difficult than mining static databases because of the huge volume, high speed, and continuous nature of streaming data. In this paper, we propose a new one-pass algorithm called DSM-MFI (Data Stream Mining for Maximal Frequent Itemsets), which mines the set of all maximal frequent itemsets in landmark windows over data streams. A new summary data structure called the summary frequent itemset forest (abbreviated SFI-forest) is developed for incrementally maintaining the essential information about the maximal frequent itemsets embedded in the stream so far. Theoretical analysis and experimental studies show that the proposed algorithm is efficient and scalable for mining the set of all maximal frequent itemsets over the entire history of the data streams.

11.
A frequent itemset mining algorithm for data streams based on a variable-size sliding window (Total citations: 2; self-citations: 0; citations by others: 2)
Frequent itemset mining algorithms for data streams based on the traditional sliding window mechanism focus mostly on speed and accuracy and pay less attention to the time-varying nature of data streams. This paper improves the traditional sliding window mechanism: taking both the massive volume and the time-varying nature of data streams into account, a frequent itemset mining algorithm called V-Stream based on a variable-size sliding window mechanism is proposed. The algorithm uses a group of transaction linked lists as its synopsis data structure and can adaptively adjust the window size according to changes in the data distribution of the stream. Simulation results in Eclipse show that, compared with Manku's algorithm, V-Stream improves the time and space efficiency of mining frequent itemsets over data streams.

12.
Sliding window is a widely used model for data stream mining due to its emphasis on recent data and its bounded memory requirement. The main idea behind a transactional sliding window is to keep a fixed-size window over a data stream. The window size is kept constant by removing old transactions from the window when new transactions arrive. Older transactions in the window are removed irrespective of whether a significant change has occurred. Another challenge of the sliding window model is determining the window size. The classic approach is to obtain the window size from the user; to determine a precise size, the user must have prior knowledge about the time and scale of changes within the data stream. However, due to the unpredictable, changing nature of data streams, this prior knowledge cannot be easily determined. Moreover, by using a fixed window size during data stream mining, the performance of this model is degraded in terms of reflecting recent changes. Based on these observations, this study relaxes the notion of window size and proposes a new algorithm named VSW (Variable Size sliding Window frequent itemset mining), which is suitable for observing recent changes in the set of frequent itemsets over data streams. The window size is determined dynamically based on the amount of concept change that occurs within the arriving data stream: the window expands as the concept becomes stable and shrinks when a concept change occurs. This study shows that if stale transactions are removed from the window after a concept change, the updated frequent itemsets always belong to the most recent concept. Experimental evaluations on both synthetic and real data show that our algorithm effectively detects concept change, adjusts the window size, and adapts itself to the new concepts along the data stream.
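A toy sketch of the shrink-on-change / grow-when-stable policy described above. The change test used here (the L1 distance between item-frequency distributions of the older and newer halves of the window) and the threshold are illustrative assumptions, not VSW's concept-change detector.

```python
from collections import Counter, deque

def freq_dist(txns):
    """Relative item frequencies over a list of transactions."""
    counts = Counter(item for txn in txns for item in txn)
    total = sum(counts.values()) or 1
    return {item: c / total for item, c in counts.items()}

def change_score(window):
    """L1 distance between the item distributions of the older and newer halves."""
    half = len(window) // 2
    old, new = freq_dist(list(window)[:half]), freq_dist(list(window)[half:])
    items = set(old) | set(new)
    return sum(abs(old.get(i, 0.0) - new.get(i, 0.0)) for i in items)

window, THRESHOLD = deque(), 0.8
for txn in [{'a', 'b'}, {'a', 'b'}, {'a'}, {'c', 'd'}, {'c', 'd'}, {'d'}]:
    window.append(txn)                      # window grows while the concept is stable
    if len(window) >= 4 and change_score(window) > THRESHOLD:
        # Concept change: drop stale transactions so the frequent itemsets
        # reflect only the most recent concept.
        for _ in range(len(window) // 2):
            window.popleft()

print(list(window))   # transactions belonging to the current concept
```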

13.
A TWCT tree structure combined with tilted-time windows is proposed, which can store the complete set of frequent patterns at different time granularities. Sequential update and deletion algorithms are designed for it so that it can be kept in external storage, effectively reducing the algorithm's main-memory requirement. Based on the characteristics of the TWCT structure, a frequent pattern mining algorithm over data streams named TWCT-Stream is proposed; its pattern-growth component, TWCT-Growth, generates frequent patterns in lexicographic order to match the sequential updates of the TWCT structure. Experiments confirm that the algorithm's memory requirement is lower than that of comparable algorithms such as FP-Stream.
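For intuition about tilted-time windows, the sketch below maintains a logarithmic tilted-time table of per-batch counts: each level spans twice as many batches as the one below it, and the two oldest entries of an over-full level are merged and promoted. This is one common variant of the structure, not the TWCT-Stream code.

```python
def add_batch(tilted, count):
    """Insert the newest batch count into a logarithmic tilted-time window:
    level i holds counts spanning 2**i batches, with at most two entries
    per level; the two oldest entries of a full level are merged upward."""
    tilted.setdefault(0, []).insert(0, count)   # newest entry goes to the front
    level = 0
    while len(tilted[level]) > 2:
        older_a = tilted[level].pop()           # two oldest entries at this level
        older_b = tilted[level].pop()
        tilted.setdefault(level + 1, []).insert(0, older_a + older_b)
        level += 1
    return tilted

window = {}
for batch_count in [3, 1, 4, 1, 5, 9, 2, 6]:
    add_batch(window, batch_count)

# Finer granularity for recent batches, coarser for older ones.
print(window)   # -> {0: [6, 2], 1: [14], 2: [9]}
```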

14.
Because data streams are unbounded, continuous, and fast, it is usually impossible to mine all frequent items of a data stream exactly. Space complexity and time complexity are the two main measures for evaluating frequent item mining algorithms. By introducing the principle of locality, an improved algorithm for mining approximate frequent items over data streams is obtained: its space complexity is O(1/ε), the worst-case processing time per data item is O(1/ε), the best-case processing time is O(1), and the frequency error of the output is Σ_{i=2}^{j} (1 − μ_i) × k_i.
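As a reference point for the O(1/ε) space bound discussed above, here is the classic Lossy Counting scheme for approximate frequent items; it is shown only for comparison and is not the locality-based algorithm of this paper.

```python
import math

def lossy_counting(stream, epsilon):
    """Classic Lossy Counting: space O(1/epsilon); every item with true
    frequency >= epsilon * N is reported, and no reported count is below
    the true count by more than epsilon * N."""
    bucket_width = math.ceil(1 / epsilon)
    counts, deltas = {}, {}
    n = 0
    for item in stream:
        n += 1
        current_bucket = math.ceil(n / bucket_width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = current_bucket - 1
        if n % bucket_width == 0:               # prune at bucket boundaries
            for key in list(counts):
                if counts[key] + deltas[key] <= current_bucket:
                    del counts[key], deltas[key]
    return counts, n

stream = list('abcabcaaab')
counts, n = lossy_counting(stream, epsilon=0.2)
# Report items that may reach a 50% support threshold.
print({item: c for item, c in counts.items() if c >= (0.5 - 0.2) * n})
```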

15.
As the application areas of data streams keep expanding, frequent pattern mining over data streams has gradually become a core research problem in data mining. By studying and improving the DSFPM algorithm, a landmark-window-based frequent pattern mining algorithm for data streams, DSMFP_LW, is proposed. The algorithm scans the data stream in a single pass and uses an extended prefix pattern tree to store the globally critical frequent patterns, enabling incremental updates. Comparative experiments show that DSMFP_LW has good time overhead and space utilization, outperforms the classic Lossy Counting algorithm, and is suitable for frequent pattern mining over data streams.

16.
In this paper, we study the incremental update of Frequent Closed Itemsets (FCIs) over a sliding window in a high-speed data stream. We propose the notion of semi-FCIs, in which the minimum support threshold for an itemset is progressively increased the longer it is retained in the window, thereby drastically reducing the number of itemsets that need to be maintained and processed. We explore the properties of semi-FCIs and observe that a majority of the subsets of a semi-FCI are not semi-FCIs and need not be updated. This finding allows us to devise an efficient algorithm, IncMine, that incrementally updates the set of semi-FCIs over a sliding window. We also develop an inverted index to facilitate the update process. Our empirical results show that IncMine achieves significantly higher throughput and consumes less memory than the state-of-the-art streaming algorithms for mining FCIs and FIs. IncMine also attains high accuracy, with 100% precision and over 93% recall.
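A small sketch of the progressive-threshold idea behind semi-FCIs, assuming a simple linear schedule in which an itemset that has stayed in the window for only part of its lifetime needs only a proportional fraction of the final minimum support; the exact schedule used by IncMine may differ.

```python
def required_support(min_sup, window_size, age):
    """Progressive minimum support: an itemset retained for `age` of the
    `window_size` batches must reach a proportionally larger fraction of
    the final threshold (a linear schedule is assumed here)."""
    age = min(age, window_size)
    return min_sup * age / window_size

# An itemset kept for only 2 of 10 batches needs 20% of the full threshold;
# once it has been in the window for all 10 batches, the full threshold applies.
for age in (1, 2, 5, 10):
    print(age, required_support(min_sup=100, window_size=10, age=age))
```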

17.
王鑫  刘方爱 《计算机应用》2016,36(7):1988-1992
To address the high memory usage and low efficiency in discovering frequent itemsets of existing algorithms for collaborative frequent itemset mining over multiple data streams, an improved multi-data-stream collaborative frequent itemset mining algorithm (MCMD-Stream) is proposed. First, the algorithm uses a single-pass byte-sequence sliding-window mining procedure to discover the potential frequent itemsets and frequent itemsets in the data streams. Second, it builds a compressed pattern tree (CP-Tree), similar to an FP-Tree, to store the discovered potential frequent itemsets and frequent itemsets, while updating the frequent item counts in the logarithmic tilted-time table generated at each node of the CP-Tree. Finally, through aggregate analysis it derives the valuable frequent itemsets that appear repeatedly across multiple data streams, i.e., the collaborative frequent itemsets. Compared with the A-Stream and H-Stream algorithms, MCMD-Stream not only improves the efficiency of collaborative frequent itemset mining over multiple data streams but also reduces memory usage. Experimental results show that MCMD-Stream can be effectively applied to collaborative frequent itemset mining over multiple data streams.

18.
Few algorithms allow constraints to be changed dynamically during frequent pattern mining. A constraint-based frequent pattern mining algorithm, MCFP, is proposed. MCFP first builds a frequent pattern tree according to the properties of the constraints, requiring only one scan of the database; it then builds a conditional tree for each item, mines the maximal frequent patterns prefixed by that item, and stores them in a maximal pattern tree; finally, it derives all frequent patterns with exact supports from the maximal patterns. MCFP allows users to change constraints dynamically during the mining process. Experiments show that the algorithm is highly effective compared with the iCFP algorithm.

19.
An FP-tree-based algorithm for mining maximal frequent itemsets (Total citations: 1; self-citations: 0; citations by others: 1)
Mining maximal frequent itemsets is one of the most important fundamental problems in data mining. Based on an analysis of existing algorithms, the FP-MMFI algorithm is proposed as an extension of FP-growth to maximal frequent itemset mining. The concept of a frequent path is introduced, which can effectively compress the FP-tree and narrow the search space; at the same time, a projection-based method is used to optimize superset checking and reduce the number of item matches. Experimental results show that the algorithm outperforms existing algorithms of the same kind.

20.
Mining top-K frequent itemsets from data streams (Total citations: 1; self-citations: 0; citations by others: 1)
Frequent pattern mining on data streams has attracted much interest recently. However, it is not easy for users to determine a proper frequency threshold; it is more reasonable to ask users to set a bound on the result size. We study the problem of mining the top K frequent itemsets in data streams. We introduce a method based on the Chernoff bound with a guarantee on the output quality and also a bound on the memory usage. We also propose an algorithm based on the Lossy Counting algorithm. In most of the experiments with the two proposed algorithms, we obtain perfect solutions, and the memory space occupied by our algorithms is very small. In addition, we propose adapted versions of these two algorithms to handle the case where we are interested in mining the data in a sliding window. The experiments show that the results are accurate.
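For intuition about the kind of guarantee a Chernoff-style bound gives, the sketch below computes a Hoeffding-form error margin (an assumption about the exact form the paper uses): with probability at least 1 − δ, an itemset's observed support after n transactions deviates from its true support by at most ε.

```python
import math

def support_error_bound(n, delta):
    """Hoeffding/Chernoff-style bound: with probability at least 1 - delta,
    |observed support - true support| <= epsilon after n transactions."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# The error margin on estimated support shrinks as more of the stream is seen.
for n in (1_000, 10_000, 100_000):
    print(n, round(support_error_bound(n, delta=0.01), 4))
```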
