Similar Documents
18 similar documents found (search time: 125 ms)
1.
Based on the Hadoop distributed computing platform, a parallel mining algorithm suited to large data sets is presented. The algorithm vertically partitions the unstructured raw data set and the intermediate result files so that the complete set of frequent itemsets can still be obtained, and assigns each vertical partition to a different Hadoop compute node. This reduces the amount of data each node must store and, in turn, the number of intersection operations each node performs, improving parallel mining efficiency. Experimental results show that the proposed parallel mining algorithm overcomes the heavy data communication, large intermediate files, and numerous intersection operations that arise when mining large data sets, and that it is efficient and scalable.
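A minimal sketch of the vertical (tidset) representation this kind of approach relies on, with made-up data and names; it is illustrative only and not the paper's Hadoop implementation. Support is counted by intersecting per-item transaction-ID sets, which is the intersection operation the abstract refers to.

```python
from itertools import combinations

# Toy horizontal database: transaction id -> set of items (assumed example data).
transactions = {
    1: {"a", "b", "c"},
    2: {"a", "c"},
    3: {"a", "d"},
    4: {"b", "c", "e"},
}

# Build the vertical layout: item -> set of transaction IDs containing it.
tidsets = {}
for tid, items in transactions.items():
    for item in items:
        tidsets.setdefault(item, set()).add(tid)

min_support = 2

# Frequent 1-itemsets.
frequent = {frozenset([i]): tids for i, tids in tidsets.items() if len(tids) >= min_support}

# Frequent 2-itemsets by pairwise tidset intersection; the intersection size is the support.
one_itemsets = list(frequent.items())
for (x, tx), (y, ty) in combinations(one_itemsets, 2):
    common = tx & ty
    if len(common) >= min_support:
        frequent[x | y] = common

print({tuple(sorted(k)): len(v) for k, v in frequent.items()})
# {('a',): 3, ('b',): 2, ('c',): 3, ('a', 'c'): 2, ('b', 'c'): 2}
```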

2.
In big data environments, the excessive number of candidate itemsets generated by high utility itemset mining algorithms severely degrades their time and space efficiency. To address this, a data stream high utility itemset mining algorithm that reduces candidate itemsets is proposed. First, a global tree is built in a single scan of the current window of the data stream, and the redundant utility values in the global tree's header-table entries and nodes are reduced. Next, candidate patterns are generated from the global tree, and a pattern-growth procedure lowers the candidate utilities in the local trees. Finally, the high utility patterns are selected from the candidates. Experiments on real data streams show that the algorithm outperforms other high utility pattern mining algorithms for data streams in both time/space efficiency and memory usage.
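A minimal sketch of the window-based bookkeeping such a stream algorithm builds on, with assumed profits, quantities, and window size; the tree construction and pruning of the paper are not reproduced here. It keeps the current window of transactions and computes each item's transaction-weighted utility (TWU), the usual basis for discarding unpromising candidates.

```python
from collections import deque, defaultdict

# External utility (unit profit) per item; internal utility is the purchased quantity.
profit = {"a": 5, "b": 2, "c": 1}

WINDOW_SIZE = 3                        # number of transactions kept in the current window
window = deque(maxlen=WINDOW_SIZE)     # the oldest transaction is dropped automatically

def add_transaction(tx):
    """tx maps item -> quantity; only the most recent WINDOW_SIZE transactions are kept."""
    window.append(tx)

def transaction_utility(tx):
    return sum(profit[item] * qty for item, qty in tx.items())

def twu_per_item():
    """Transaction-weighted utility of each item over the current window."""
    twu = defaultdict(int)
    for tx in window:
        tu = transaction_utility(tx)
        for item in tx:
            twu[item] += tu
    return dict(twu)

add_transaction({"a": 1, "b": 2})      # TU = 5 + 4 = 9
add_transaction({"b": 1, "c": 4})      # TU = 2 + 4 = 6
add_transaction({"a": 2})              # TU = 10
add_transaction({"c": 3})              # TU = 3; the first transaction slides out of the window
print(twu_per_item())                  # {'b': 6, 'c': 9, 'a': 10}
```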

3.
The existing FP-growth frequent itemset mining algorithm suffers from poor time and space efficiency on big data, and as data volumes grow the memory of a single machine can no longer hold the compressed data to be mined. To address this, a parallel frequent itemset mining algorithm based on the MapReduce model is proposed. The algorithm scans the values of key/value pairs directly to find conditional pattern bases, and builds a new conditional pattern tree, NFP-tree, by adding to each original FP-tree node an extra field holding the frequent-item prefix, so that the frequent itemsets of a frequent item can be obtained with one tree construction and one traversal of its conditional pattern base. The proposed algorithm was verified and analyzed on the Hadoop platform; experimental results show that it is on average 16.6% more efficient than the traditional FP-growth algorithm.
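The key/value scan described above can be pictured roughly as follows; this is a simplified sketch with hard-coded example paths, not the paper's NFP-tree or its MapReduce job. Each key is a frequent item and each value is a prefix path with a count, and the item counts of the conditional pattern base are read directly off the values.

```python
from collections import defaultdict

# key/value pairs: frequent item -> list of (prefix path, count) taken from tree paths.
# In a MapReduce setting these pairs would be emitted by mappers; here they are
# hard-coded (assumed example data) to show how a conditional pattern base is used.
paths = {
    "p": [(("f", "c", "a", "m"), 2), (("c", "b"), 1)],
    "m": [(("f", "c", "a"), 2), (("f", "c", "a", "b"), 1)],
}

def conditional_support(item, min_support):
    """Count item supports inside the conditional pattern base of `item`."""
    counts = defaultdict(int)
    for prefix, count in paths[item]:
        for x in prefix:
            counts[x] += count
    return {x: c for x, c in counts.items() if c >= min_support}

print(conditional_support("m", 3))     # {'f': 3, 'c': 3, 'a': 3}
```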

4.
Exploiting the self-organizing, highly parallel, and nonlinear-mapping properties of artificial neural networks, a cloud-based multi-pattern parallel classification algorithm on Hadoop is proposed. By combining a self-organizing map network with multiple parallel BP neural networks, it improves the learning efficiency and training accuracy of complex classification problems involving multiple semantic patterns. The MapReduce framework of the Hadoop platform is used to parallelize the algorithm, overcoming the large memory overhead and long communication times of training on large-scale data samples. Experimental results show that, compared with the traditional single-BP multi-output classification algorithm, the proposed algorithm trains faster and classifies more accurately, and it is real-time and efficient when handling large-scale data sets.

5.
Algorithms that obtain high utility patterns through a user-specified threshold are inefficient, and their mining results do not necessarily meet the user's needs. To address this, a top-k high utility pattern mining algorithm based on the EFIM algorithm is proposed. The user specifies the number of high utility patterns instead of setting a threshold manually. A dual pruning strategy based on extension utility and remaining utility effectively limits pattern growth. During database projection, transaction sorting and merging strategies reduce running time and memory consumption. Experimental results show that the algorithm has a clear advantage in running time and memory consumption, and is especially suited to high utility pattern mining on dense data sets.
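The top-k idea of letting the algorithm raise its own threshold can be sketched as below; this is an illustrative toy, with made-up patterns and utilities, not the EFIM-based algorithm itself. The k best utilities seen so far are kept in a min-heap, and the smallest of them acts as the current internal minimum-utility threshold.

```python
import heapq

K = 3
topk = []                  # min-heap of (utility, pattern) for the k best patterns so far

def current_threshold():
    """Internal minimum-utility threshold: 0 until k patterns have been found."""
    return topk[0][0] if len(topk) == K else 0

def offer(pattern, utility):
    """Record a candidate high-utility pattern and raise the threshold if possible."""
    if utility < current_threshold():
        return                                     # pruned by the raised threshold
    if len(topk) < K:
        heapq.heappush(topk, (utility, pattern))
    elif utility > topk[0][0]:
        heapq.heapreplace(topk, (utility, pattern))  # drop the current k-th best

for pat, util in [("ab", 40), ("bc", 15), ("ac", 60), ("abc", 25), ("cd", 10)]:
    offer(pat, util)
print(sorted(topk, reverse=True))                  # [(60, 'ac'), (40, 'ab'), (25, 'abc')]
```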

6.
To improve clustering efficiency, a fast parallel closed-tree clustering algorithm based on a dynamic cloud platform is proposed. To replace Hadoop's random task assignment strategy, a task assignment algorithm, CDA-GA, that minimizes consumption cost is given, and a dynamic cloud platform model is built on top of it. Traditional frequent closed-tree mining and clustering algorithms are parallelized and applied to the dynamic cloud platform, and a closed-tree clustering framework based on the dynamic cloud platform is designed. Experimental results show that the algorithm is effective and feasible, and is suitable for clustering analysis of large-scale data.

7.
k-modes is a representative clustering algorithm for categorical data. The implementation of the k-modes algorithm is first improved: by updating the frequency counts of each attribute value in a cluster whenever a data object is assigned to it, the new cluster centers (modes) can be computed after a single pass over all data objects. To enable k-modes to handle large-scale categorical data, the algorithm is implemented on the Hadoop platform with the MapReduce parallel computing model. Experiments show that, on large amounts of data, parallel k-modes greatly shortens clustering time compared with serial k-modes and achieves a good speedup.
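The single-pass mode update mentioned above can be illustrated roughly like this; it is a simplified serial sketch with made-up data, not the paper's MapReduce code. Assigning an object to a cluster updates per-attribute value counters, so the new mode of each cluster is available as soon as the pass over the data finishes.

```python
from collections import defaultdict

def assign(obj, modes):
    """Index of the cluster whose mode has the fewest mismatching attributes."""
    return min(range(len(modes)),
               key=lambda k: sum(a != b for a, b in zip(obj, modes[k])))

# Toy categorical data with 2 attributes (assumed example data) and 2 initial modes.
data = [("red", "S"), ("red", "S"), ("blue", "L"), ("blue", "M"), ("blue", "M")]
modes = [("red", "S"), ("blue", "L")]

# One counter per cluster and attribute position: counts[k][j][value] = frequency.
counts = [defaultdict(lambda: defaultdict(int)) for _ in modes]

for obj in data:                                   # single pass over all objects
    k = assign(obj, modes)
    for j, value in enumerate(obj):
        counts[k][j][value] += 1

# New modes: the most frequent value of every attribute in each cluster (2 attributes here).
new_modes = [tuple(max(counts[k][j], key=counts[k][j].get) for j in range(2))
             for k in range(len(modes))]
print(new_modes)                                   # [('red', 'S'), ('blue', 'M')]
```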

8.
陈敏  李徽翡 《计算机工程》2009,35(20):71-72
To address the poor time and space efficiency of the FP-Growth algorithm on large-scale databases, a parallel algorithm for computer clusters is proposed. It uses a projection method to find the conditional databases of frequent items directly, splits the work of mining the conditional databases into several independent subtasks that are distributed to the cluster nodes and executed in parallel, and has a central node aggregate and output the results. The results demonstrate that the algorithm not only improves computation speed and avoids memory overflow when the database is too large, but also scales well.

9.
High utility pattern mining is widely applied in the data mining field. To mine a specified number of high utility patterns, several top-k high utility mining algorithms based on tree structures or utility-list structures have been proposed, but the former generate a large number of candidate patterns during mining, while the latter require many comparisons when growing utility patterns. Meanwhile, data volumes in the information society are growing explosively, so mining high utility patterns from very large data sets comes at the cost of large storage space and heavy computation. To solve these two problems, a MapReduce-based top-k high utility pattern mining algorithm (TKHUP_MaR) is proposed. It scans the database twice and uses three MapReduce passes to mine top-k high utility patterns in parallel. Experiments show that TKHUP_MaR is effective at mining top-k high utility patterns in parallel.

10.
Frequent itemset mining (FIM) is an important component of association rule mining. The classic Apriori and FP-Growth algorithms, however, hit memory and computational bottlenecks on massive data. Based on the Hadoop cloud computing platform, the HBFP (High Balanced Parallel FP-growth) frequent itemset mining algorithm for big data processing is proposed. It designs a suffix-pattern-based data partitioning and balanced task grouping scheme so that each compute node locally holds the data its computation depends on, enabling parallel mining in which the nodes work independently of one another while keeping the algorithm globally load-balanced. Experimental data show that HBFP spreads the workload evenly across compute nodes, carries out the FP-Growth mining process in parallel and independently on each node, improves efficiency by about 12%, and improves overall stability.
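The balanced task grouping idea can be sketched roughly as follows; the group count, load estimates, and item names are made up, and this greedy heuristic is only an illustration, not HBFP's actual grouping scheme. Frequent items are sorted by estimated mining cost and each is assigned to the currently lightest group, so every compute node receives a roughly equal share of the FP-Growth work.

```python
import heapq

# Estimated mining cost of each frequent item's conditional database
# (here simply a made-up support value; a real scheme would use a better estimate).
item_load = {"a": 900, "b": 850, "c": 400, "d": 380, "e": 350, "f": 120}
NUM_GROUPS = 3

# Min-heap of (current total load, group id); give the heaviest remaining item
# to the lightest group ("longest processing time first" heuristic).
groups = [(0, g) for g in range(NUM_GROUPS)]
heapq.heapify(groups)
assignment = {g: [] for g in range(NUM_GROUPS)}

for item, load in sorted(item_load.items(), key=lambda kv: -kv[1]):
    total, g = heapq.heappop(groups)
    assignment[g].append(item)
    heapq.heappush(groups, (total + load, g))

print(assignment)    # {0: ['a'], 1: ['b', 'f'], 2: ['c', 'd', 'e']}
```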

11.
When the Apriori algorithm is run on a traditional platform to mine medication combination patterns from traditional Chinese medicine (TCM) clinical records, it suffers from large memory usage, low computational efficiency, and an inability to handle PB-scale data. To address these problems, a Hadoop-based association analysis method for TCM asthma medication combinations is proposed. The MapReduce distributed computing framework and the HBase distributed database are used to optimize Apriori's performance: on the one hand, MapReduce processes the data in parallel and HBase's fast reads and writes accelerate the generation of frequent itemsets; on the other hand, the self-join candidate generation of the traditional algorithm is abandoned, and candidates are generated on each node's data with a combination of iteration and recursion, improving the efficiency of candidate generation. Experimental results demonstrate that mining TCM medication combination patterns with this Hadoop-based method is more efficient and can guide clinical practice more effectively.

12.
Traditional frequent pattern mining methods consider an equal profit/weight for all items and only binary occurrences (0/1) of the items in transactions. High utility pattern mining becomes a very important research issue in data mining by considering the non-binary frequency values of items in transactions and different profit values for each item. However, most of the existing high utility pattern mining algorithms suffer from the level-wise candidate generation-and-test problem and generate too many candidate patterns. Moreover, they need several database scans, the number of which depends directly on the maximum candidate length. In this paper, we present a novel tree-based candidate pruning technique, called HUC-Prune (High Utility Candidates Prune), to solve these problems. Our technique uses a novel tree structure, called HUC-tree (High Utility Candidates tree), to capture important utility information of the candidate patterns. HUC-Prune avoids the level-wise candidate generation process by adopting a pattern growth approach. In contrast to the existing algorithms, its number of database scans is completely independent of the maximum candidate length. Extensive experimental results show that our algorithm is very efficient for high utility pattern mining and it outperforms the existing algorithms.
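The transaction-weighted utilization (TWU) idea that tree-based candidate pruning typically relies on can be sketched like this; it is an illustrative toy with assumed profits, quantities, and threshold, not the HUC-tree itself. An item whose TWU falls below the minimum utility threshold cannot belong to any high utility pattern and can be dropped before the tree is built.

```python
from collections import defaultdict

profit = {"a": 4, "b": 1, "c": 10}                 # external utility (unit profit) per item
transactions = [                                   # item -> purchased quantity
    {"a": 2, "b": 3},
    {"a": 1, "c": 1},
    {"b": 5},
]

def transaction_utility(tx):
    return sum(profit[i] * q for i, q in tx.items())

twu = defaultdict(int)                             # item -> transaction-weighted utilization
for tx in transactions:
    tu = transaction_utility(tx)
    for item in tx:
        twu[item] += tu

MIN_UTIL = 15
promising = {i for i, u in twu.items() if u >= MIN_UTIL}
print(dict(twu), promising)
# TWU: a=25, b=16, c=14; item 'c' is pruned because its TWU is below the threshold.
```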

13.
High utility itemset mining considers the importance of items such as profit and item quantities in transactions. Recently, mining high utility itemsets has emerged as one of the most significant research issues due to a huge range of real world applications such as retail market data analysis and stock market prediction. Although many relevant algorithms have been proposed in recent years, they incur the problem of generating a large number of candidate itemsets, which degrades mining performance. In this paper, we propose an algorithm named MU-Growth (Maximum Utility Growth) with two techniques for pruning candidates effectively in the mining process. Moreover, we suggest a tree structure, named MIQ-Tree (Maximum Item Quantity Tree), which captures database information with a single pass. The proposed data structure is restructured for reducing overestimated utilities. Performance evaluation shows that MU-Growth not only decreases the number of candidates but also outperforms state-of-the-art tree-based algorithms that use overestimation methods in terms of runtime, with similar memory usage.

14.
Incrementally mining high utility patterns based on pre-large concept
In traditional association rule mining, most algorithms are designed to discover frequent itemsets from a binary database. Utility mining was thus proposed to measure the utility values of purchased items for revealing high utility itemsets from a quantitative database. In the past, a two-phase algorithm was proposed for efficiently discovering high utility itemsets from a quantitative database. In dynamic data mining, transactions may be inserted, deleted, or modified in a database. In this case, a batch mining procedure must rescan the whole updated database to maintain the up-to-date information. Designing an efficient approach for handling dynamic databases is thus a critical research issue in utility mining. In this paper, an incremental mining algorithm is proposed for efficiently maintaining discovered high utility itemsets based on pre-large concepts. Itemsets are first partitioned into three parts according to whether they have large (high), pre-large, or small transaction-weighted utilization in the original database and in inserted transactions. Individual procedures are then executed for each part. Experimental results show that the proposed incremental high utility mining algorithm outperforms existing algorithms.
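The pre-large partitioning described above can be sketched as follows; the thresholds and itemset values are assumed for the example, and the full incremental maintenance procedure of the paper is not reproduced. Two thresholds split itemsets into large (high), pre-large, and small transaction-weighted utilization classes, and only itemsets near the boundary can change class after a small number of inserted transactions.

```python
def classify(twu_value, upper, lower):
    """Classify an itemset by its transaction-weighted utilization.

    upper: the real minimum-utility threshold; lower: the pre-large threshold.
    Only pre-large itemsets can cross the upper threshold after a few inserted
    transactions, so the other classes rarely need re-scanning.
    """
    if twu_value >= upper:
        return "large (high)"
    if twu_value >= lower:
        return "pre-large"
    return "small"

UPPER, LOWER = 100, 80                       # assumed thresholds for the example
for itemset, twu in [("ab", 130), ("bc", 85), ("cd", 40)]:
    print(itemset, classify(twu, UPPER, LOWER))
# ab -> large (high), bc -> pre-large, cd -> small
```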

15.
Mining high utility itemsets by dynamically pruning the tree structure
Mining high utility itemsets is one of the most important research issues in data mining owing to its ability to consider nonbinary frequency values of items in transactions and different profit values for each item. Mining such itemsets from a transaction database involves finding those itemsets with utility above a user-specified threshold. In this paper, we propose an efficient concurrent algorithm, called CHUI-Mine (Concurrent High Utility Itemsets Mine), for mining high utility itemsets by dynamically pruning the tree structure. A tree structure, called the CHUI-Tree, is introduced to capture the important utility information of the candidate itemsets. By recording changes in support counts of candidate high utility items during the tree construction process, we implement dynamic CHUI-Tree pruning, and discuss the rationality thereof. The CHUI-Mine algorithm makes use of a concurrent strategy, enabling the simultaneous construction of a CHUI-Tree and the discovery of high utility itemsets. Our algorithm reduces the problem of huge memory usage for tree construction and traversal in tree-based algorithms for mining high utility itemsets. Extensive experimental results show that the CHUI-Mine algorithm is both efficient and scalable.

16.
Recently, high utility pattern mining (HUPM) has been extensively studied. Many approaches for HUPM have been proposed in recent years, but most of them aim at mining HUPs without any consideration for their frequency. This has the major drawback that any combination of a low utility item with a very high utility pattern is regarded as a HUP, even if this combination has low affinity and contains items that rarely co-occur. Thus, frequency should be a key criterion to select HUPs. To address this issue, and derive high utility interesting patterns (HUIPs) with strong frequency affinity, the HUIPM algorithm was proposed. However, it recursively constructs a series of conditional trees to produce candidates and then derive the HUIPs. This procedure is time-consuming and may lead to a combinatorial explosion when the minimum utility threshold is set relatively low. In this paper, an efficient algorithm named fast algorithm for mining discriminative high utility patterns (DHUPs) with strong frequency affinity (FDHUP) is proposed to efficiently discover DHUPs by considering both the utility and frequency affinity constraints. Two compact structures named EI-table and FU-tree and three pruning strategies are introduced in the proposed algorithm to reduce the search space, and efficiently and effectively discover DHUPs. An extensive experimental study shows that the proposed FDHUP algorithm considerably outperforms the state-of-the-art HUIPM algorithm in terms of execution time, memory consumption, and scalability.

17.
Mining high utility itemsets (HUIs) containing negative items is one of the emerging data mining tasks. To mine a negative-item HUI result set that meets user needs, a top-k high utility itemset mining algorithm with negative items (THN) is proposed. To improve THN's time and space performance, a strategy for automatically raising the minimum utility threshold is proposed and a pattern-growth method performs depth-first search; redefined sub-tree utility and redefined local utility prune the search space; transaction merging and data set projection techniques avoid repeated database scans; and, to speed up utility counting, a utility-array counting technique computes the utilities of itemsets. Experimental results show that THN's memory consumption is about 1/60 of that of HUINIV-Mine and about 1/2 of that of FHN, its execution time is about 1/10 of FHN's, and its performance is even better on dense data sets.
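Computing an itemset's utility when some items carry negative unit profits can be sketched as below; the profits, quantities, and candidates are assumed for the example, and THN's utility-array and projection techniques are not reproduced. The utility of an itemset in a transaction is still the sum of quantity times unit profit over its items, and negative items are what force the redefined upper bounds mentioned above.

```python
profit = {"a": 3, "b": -2, "c": 5}                 # item "b" has a negative unit profit
transactions = [
    {"a": 2, "b": 1, "c": 1},                      # item -> purchased quantity
    {"a": 1, "b": 4},
    {"b": 2, "c": 3},
]

def itemset_utility(itemset, db):
    """Total utility of `itemset`: summed over transactions containing all of its items."""
    total = 0
    for tx in db:
        if all(i in tx for i in itemset):
            total += sum(profit[i] * tx[i] for i in itemset)
    return total

for cand in [("a",), ("a", "c"), ("b", "c"), ("a", "b", "c")]:
    print(cand, itemset_utility(cand, transactions))
# ('a',) 9, ('a', 'c') 11, ('b', 'c') 14, ('a', 'b', 'c') 9
```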

18.
赵月  任永功  刘洋 《计算机科学》2017,44(6):250-254
With the rapid development of mobile communication and Internet technologies, efficiently analyzing mobile users' needs and pushing useful information to them in time has become a hot topic in data mining. To address this, a distributed association-rule algorithm, MRS-Apriori, based on the Hadoop cloud computing platform is proposed. Building on the classic Apriori algorithm, it optimizes the database encoding rules and adds a flag, Judgemark, that marks whether a transaction item is frequent, improving the efficiency of scanning the database during the join step. On top of this encoding, the MapReduce programming framework of the Hadoop platform provides parallel processing, improving the efficiency of the join step in each iteration and reducing the time overhead of computing on large-scale data samples. Experimental results show that the improved MRS-Apriori algorithm effectively reduces computation time and achieves high accuracy on large-scale data sets.
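The parallel counting step of a MapReduce-based Apriori can be pictured with plain Python functions standing in for the map and reduce tasks; the data, candidate set, and threshold are assumed, and this is a simplified sketch rather than the MRS-Apriori implementation. Mappers emit (candidate, 1) for each candidate contained in a transaction, and the reducer sums the counts and keeps the candidates that meet the minimum support.

```python
from collections import defaultdict
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]
candidates = [frozenset(c) for c in combinations("abcd", 2)]    # candidate 2-itemsets
MIN_SUPPORT = 2

def map_phase(tx):
    """Emit (candidate, 1) for every candidate itemset contained in the transaction."""
    return [(c, 1) for c in candidates if c <= tx]

def reduce_phase(pairs):
    """Sum the emitted counts and keep the frequent candidates."""
    counts = defaultdict(int)
    for cand, one in pairs:
        counts[cand] += one
    return {cand: n for cand, n in counts.items() if n >= MIN_SUPPORT}

emitted = [pair for tx in transactions for pair in map_phase(tx)]   # "map" over all splits
frequent = reduce_phase(emitted)                                    # "reduce" by key
print({tuple(sorted(c)): n for c, n in frequent.items()})           # {('a','c'): 2, ('b','c'): 2}
```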
