20 similar documents found; search took 5 ms
1.
To address a weakness of the maximal frequent itemset mining algorithm DMFIA, which generates a large number of candidate itemsets when the candidates are of high dimensionality but the maximal frequent itemsets are of low dimensionality, an improved maximal frequent itemset mining algorithm based on the frequent pattern tree (FP-tree), named FP-MFIA, is proposed. Guided by the FP-tree header table, the algorithm mines maximal frequent itemsets layer by layer with a bottom-up search strategy, which speeds up each candidate-counting operation. During mining, it derives low-dimensional infrequent itemsets from the conditional pattern base of each layer, so that candidate itemsets can be pruned and reduced in dimensionality as early as possible, greatly cutting the number of candidates. The properties of maximal frequent itemsets are also fully exploited to shrink the search space. A comparison of mining times under different support thresholds shows that, when the minimum support is low, FP-MFIA is more than twice as fast as DMFIA and as BDRFI, a dimensionality-reduction-based maximal frequent pattern mining algorithm, indicating a clear advantage when the candidate itemsets are of high dimensionality.
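To make the FP-tree and header-table machinery that FP-MFIA builds on concrete, here is a minimal Python sketch of FP-tree construction with a per-item header table; all names are mine, and the maximal-itemset mining itself is not shown.

```python
# Minimal FP-tree construction sketch (illustrative only; names are my own,
# not from the FP-MFIA paper). The header table collects all nodes holding
# an item, which is what a bottom-up, per-item mining pass traverses.
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent, self.count = item, parent, 1
        self.children = {}                    # item -> child Node

def build_fp_tree(transactions, min_support):
    counts = defaultdict(int)
    for t in transactions:
        for i in t:
            counts[i] += 1
    frequent = {i for i, c in counts.items() if c >= min_support}
    root, header = Node(None, None), defaultdict(list)   # header: item -> its nodes
    for t in transactions:
        # keep frequent items only, ordered by descending global frequency
        items = sorted((i for i in t if i in frequent), key=lambda i: (-counts[i], i))
        node = root
        for i in items:
            if i in node.children:
                node.children[i].count += 1
            else:
                child = Node(i, node)
                node.children[i] = child
                header[i].append(child)       # header-table entry for item i
            node = node.children[i]
    return root, header

tree, header = build_fp_tree([["a", "b"], ["b", "c", "d"], ["a", "b", "d"]], min_support=2)
print({i: sum(n.count for n in nodes) for i, nodes in header.items()})  # {'b': 3, 'a': 2, 'd': 2}
```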
2.
The FP-growth algorithm using the FP-tree has been widely studied for frequent pattern mining because it can dramatically improve performance compared to the candidate generation-and-test paradigm of Apriori. However, it still requires two database scans, which are not consistent with efficient data stream processing. In this paper, we present a novel tree structure, called CP-tree (compact pattern tree), that captures database information with one scan (insertion phase) and provides the same mining performance as the FP-growth method (restructuring phase). The CP-tree introduces the concept of dynamic tree restructuring to produce a highly compact frequency-descending tree structure at runtime. An efficient tree restructuring method, called the branch sorting method, that restructures a prefix-tree branch-by-branch, is also proposed in this paper. Moreover, the CP-tree provides full functionality for interactive and incremental mining. Extensive experimental results show that the CP-tree is efficient for frequent pattern mining, and for interactive and incremental mining, with a single database scan.
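The branch-wise restructuring idea can be illustrated with a small hedged sketch: each branch (a root-to-leaf path with its count) is re-sorted into the current frequency-descending item order and re-inserted into a fresh prefix tree. Function and variable names below are my own, not the CP-tree authors'.

```python
# Hedged sketch of branch-by-branch restructuring into frequency-descending
# order; this mimics the general idea only, not the CP-tree data structures.
def restructure(paths, freq):
    """paths: list of (items, count); freq: item -> current frequency."""
    tree = {}                                   # nested-dict prefix tree
    for items, count in paths:
        ordered = sorted(items, key=lambda i: (-freq[i], i))
        node = tree
        for item in ordered:
            child = node.setdefault(item, {"_count": 0, "_children": {}})
            child["_count"] += count
            node = child["_children"]
    return tree

paths = [(["c", "a"], 2), (["a", "b", "c"], 1)]
freq = {"a": 3, "b": 1, "c": 3}
print(restructure(paths, freq))   # both branches now share the prefix a -> c
```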
3.
Most work on pattern mining focuses on simple data structures such as itemsets and sequences of itemsets. However, a lot of recent applications dealing with complex data like chemical compounds, protein structures, XML and Web log databases and social networks, require much more sophisticated data structures such as trees and graphs. In these contexts, interesting patterns involve not only frequent object values (labels) appearing in the graphs (or trees) but also frequent specific topologies found in these structures. Recently, several techniques for tree and graph mining have been proposed in the literature. In this paper, we focus on constraint-based tree pattern mining. We propose to use tree automata as a mechanism to specify user constraints over tree patterns. We present the algorithm CoBMiner which allows user constraints specified by a tree automaton to be incorporated in the mining process. An extensive set of experiments executed over synthetic and real data (XML documents and Web usage logs) allows us to conclude that incorporating constraints during the mining process is far more effective than filtering the interesting patterns after the mining process.
4.
Web mining involves the application of data mining techniques to large amounts of web-related data in order to improve web services. Web traversal pattern mining involves discovering users’ access patterns from web server access logs. This information can provide navigation suggestions for web users indicating appropriate actions that can be taken. However, web logs keep growing continuously, and some web logs may become out of date over time. The users’ behaviors may change as web logs are updated, or when the web site structure is changed. Additionally, it can be difficult to determine a perfect minimum support threshold during the data mining process to find interesting rules. Accordingly, we must constantly adjust the minimum support threshold until satisfactory data mining results can be found. The essence of incremental data mining and interactive data mining is the ability to use previous mining results in order to reduce unnecessary processes when web logs or web site structures are updated, or when the minimum support is changed. In this paper, we propose efficient incremental and interactive data mining algorithms to discover web traversal patterns that match users’ requirements. The experimental results show that our algorithms are more efficient than other comparable approaches.
5.
In this paper, we introduce item-centric mining, a new semantics for mining long-tailed datasets. Our algorithm, TopPI, finds for each item its top-k most frequent closed itemsets. While most mining algorithms focus on the globally most frequent itemsets, TopPI guarantees that each item is represented in the results, regardless of its frequency in the database. TopPI allows users to efficiently explore Web data, answering questions such as “what are the k most common sets of songs downloaded together with the ones of my favorite artist?”. When processing retail data consisting of 55 million supermarket receipts, TopPI finds the itemset “milk, puff pastry” that appears 10,315 times, but also “frangipane, puff pastry” and “nori seaweed, wasabi, sushi rice” that occur only 1120 and 163 times, respectively. Our experiments with analysts from the marketing department of our retail partner demonstrate that item-centric mining discovers valuable itemsets. We also show that TopPI can serve as a building-block to approximate complex itemset ranking measures such as the p-value. Thanks to efficient enumeration and pruning strategies, TopPI avoids the search space explosion induced by mining low support itemsets. We show how TopPI can be parallelized on multi-cores and distributed on Hadoop clusters. Our experiments on datasets with different characteristics show the superiority of TopPI when compared to standard top-k solutions, and to Parallel FP-Growth, its closest competitor.
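A hedged sketch of the item-centric output semantics described above: every item keeps its own bounded top-k list, so rare items stay represented even when they never reach the global top-k. This mimics only the result collection, not TopPI's enumeration or pruning.

```python
# Illustrative per-item top-k collector (names are mine, not TopPI's API).
import heapq
from collections import defaultdict

def collect_top_k(closed_itemsets, k):
    """closed_itemsets: iterable of (frozenset, support)."""
    top = defaultdict(list)                 # item -> min-heap of (support, itemset)
    for itemset, support in closed_itemsets:
        for item in itemset:
            heap = top[item]
            entry = (support, tuple(sorted(itemset)))
            if len(heap) < k:
                heapq.heappush(heap, entry)
            elif support > heap[0][0]:
                heapq.heapreplace(heap, entry)
    return {i: sorted(h, reverse=True) for i, h in top.items()}

results = collect_top_k([(frozenset({"milk", "puff pastry"}), 10315),
                         (frozenset({"frangipane", "puff pastry"}), 1120)], k=1)
print(results["frangipane"])   # the rare item still gets its own result
```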
6.
7.
Association-rule mining, which is based on frequency values of items, is the most common topic in data mining. In real-world applications, customers may, however, buy many copies of products and each product may have different factors, such as profits and prices. Only mining frequent itemsets in binary databases is thus not suitable for some applications. Utility mining is thus presented to consider additional measures, such as profits or costs according to user preference. In the past, a two-phase mining algorithm was designed for fast discovering high utility itemsets from databases. When data come intermittently, the approach needs to process all the transactions in a batch way. In this paper, an incremental mining algorithm for efficiently mining high utility itemsets is proposed to handle the above situation. It is based on the concept of the fast-update (FUP) approach, which was originally designed for association mining. The proposed approach first partitions itemsets into four parts according to whether they are high transaction-weighted utilization itemsets in the original database and in the newly inserted transactions. Each part is then executed by its own procedure. Experimental results also show that the proposed algorithm executes faster than the two-phase batch mining algorithm in the intermittent data environment.
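The four-way partition can be illustrated with a small hedged sketch (names and thresholds are mine): an itemset is classified by whether it is a high transaction-weighted utilization itemset in the original database and/or in the newly inserted transactions, and each case gets its own handling.

```python
# Hedged illustration of the FUP-style case split; not the paper's procedures.
def classify(twu_original, twu_new, threshold_original, threshold_new):
    high_old = twu_original >= threshold_original   # HTWU in original database?
    high_new = twu_new >= threshold_new             # HTWU in new transactions?
    if high_old and high_new:
        return "case 1: high in both - stays high, just update its utility count"
    if high_old and not high_new:
        return "case 2: high only in the original - may drop, recheck combined TWU"
    if not high_old and high_new:
        return "case 3: high only in new data - may emerge, rescan the original DB"
    return "case 4: low in both - remains low, discard"

print(classify(twu_original=120, twu_new=5, threshold_original=100, threshold_new=10))
```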
8.
Vineet Chaoji, Mohammad Al Hasan, Saeed Salem, Mohammed J. Zaki. Data Mining and Knowledge Discovery, 2008, 17(3): 457-495
Frequent pattern mining (FPM) is an important data mining paradigm to extract informative patterns like itemsets, sequences, trees, and graphs. However, no practical framework for integrating the FPM tasks has been attempted. In this paper, we describe the design and implementation of the Data Mining Template Library (DMTL) for FPM. DMTL utilizes a generic data mining approach, where all aspects of mining are controlled via a set of properties. It uses a novel pattern property hierarchy to define and mine different pattern types. This property hierarchy can be thought of as a systematic characterization of the pattern space, i.e., a meta-pattern specification that allows the analyst to specify new pattern types by extending this hierarchy. Furthermore, in DMTL all aspects of mining are controlled by a set of different mining properties. For example, the kind of mining approach to use, the kind of data types and formats to mine over, and the kind of back-end storage manager to use are all specified as a list of properties. This provides tremendous flexibility to customize the toolkit for various applications. Flexibility of the toolkit is exemplified by the ease with which support for a new pattern can be added. Experiments on synthetic and public datasets are conducted to demonstrate the scalability provided by the persistent back-end in the library. DMTL has been publicly released as open-source software and has been downloaded by numerous researchers from all over the world.
9.
10.
Chen Wang, Ming-Sheng Hong, Wei Wang, Bai-Le Shi. Journal of Computer Science and Technology, 2004, 19(3)
With the development of the Internet, frequent pattern mining has been extended to more complex patterns like tree mining and graph mining. Such applications arise in complex domains like bioinformatics, web mining, etc. In this paper we present a novel algorithm, named Chopper, to discover frequent subtrees from ordered labeled trees. An extensive performance study shows that the newly developed algorithm outperforms TreeMinerV, one of the fastest methods proposed previously, in mining large databases. At the end of this paper, the potential improvement of Chopper is mentioned.
11.
Incremental association rule mining mainly addresses the maintenance of association rules when the transaction records of a database are continually updated and the minimum support changes. To overcome the low efficiency, high computational cost, and difficult rule maintenance of many existing incremental association rule mining algorithms, an incremental association mining algorithm based on an inverted index tree is proposed. The algorithm effectively combines inverted indexing with a tree structure, so that as the data in the transaction database are continually updated and the minimum support changes with the application environment, frequent itemsets can be generated without rescanning the original transaction database and without generating candidate itemsets. Experimental results show that the algorithm requires only a small amount of storage, retrieves itemsets efficiently, and effectively solves the maintenance problem of incrementally updated association rules.
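As a rough illustration of the inverted-index idea (not the paper's tree layout), the sketch below maps each item to the IDs of transactions containing it, so the support of any itemset is the size of an intersection, new transactions are absorbed by appending IDs, and no rescan or candidate generation is needed.

```python
# Hedged sketch of an inverted index for itemset support counting.
from collections import defaultdict
from functools import reduce

class InvertedIndex:
    def __init__(self):
        self.tids = defaultdict(set)   # item -> IDs of transactions containing it
        self.n = 0

    def add(self, transaction):
        tid = self.n                   # incremental update: just record new IDs
        self.n += 1
        for item in transaction:
            self.tids[item].add(tid)

    def support(self, itemset):
        lists = [self.tids[i] for i in itemset]
        return len(reduce(set.intersection, lists)) if lists else 0

idx = InvertedIndex()
for t in [["a", "b"], ["b", "c"], ["a", "b", "c"]]:
    idx.add(t)
print(idx.support(["a", "b"]))   # 2; later transactions only append IDs
```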
12.
Xingquan Zhu, Bin Li, Xindong Wu, Dan He, Chengqi Zhang. Decision Support Systems, 2011, 52(1): 40-51
The purpose of data mining from distributed information systems is usually threefold: (1) identifying locally significant patterns in individual databases; (2) discovering emerging significant patterns after unifying distributed databases in a single view; and (3) finding patterns which follow special relationships across different data collections. While existing research has significantly advanced the techniques for mining local and global patterns (the first two goals), very little attempt has been made to discover patterns across distributed databases (the third goal). Moreover, no framework currently exists to support the mining of all three types of patterns. This paper proposes solutions to discover patterns from distributed databases. More specifically, we consider pattern mining as a query process where the purpose is to discover patterns from distributed databases with patterns' relationships satisfying user specified query constraints. We argue that existing self-contained mining frameworks are neither efficient nor feasible to fulfill the objective, mainly because their pattern pruning is single-database oriented. To solve the problem, we advocate a cross-database pruning concept and propose a collaborative pattern (CLAP) mining framework with cross-database pruning mechanisms for distributed pattern mining. In CLAP, distributed databases collaboratively exchange pattern information between sites so that each site can leverage information from other sites to gain cross-database pruning. Experimental results show that CLAP fits a niche position, and demonstrate that CLAP not only outperforms its other peers with significant runtime performance gains, but also helps find patterns incapable of being discovered by others.
13.
Anonymity preserving pattern discovery
Maurizio Atzori, Francesco Bonchi, Fosca Giannotti, Dino Pedreschi. The VLDB Journal: The International Journal on Very Large Data Bases, 2008, 17(4): 703-727
It is generally believed that data mining results do not violate the anonymity of the individuals recorded in the source database. In fact, data mining models and patterns, in order to ensure a required statistical significance, represent a large number of individuals and thus conceal individual identities: this is the case of the minimum support threshold in frequent pattern mining. In this paper we show that this belief is ill-founded. By shifting the concept of k-anonymity from the source data to the extracted patterns, we formally characterize the notion of a threat to anonymity in the context of pattern discovery, and provide a methodology to efficiently and effectively identify all such possible threats that arise from the disclosure of the set of extracted patterns. On this basis, we obtain a formal notion of privacy protection that allows the disclosure of the extracted knowledge while protecting the anonymity of the individuals in the source database. Moreover, in order to handle the cases where the threats to anonymity cannot be avoided, we study how to eliminate such threats by means of pattern (not data!) distortion performed in a controlled way.
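A hedged, deliberately simplified illustration of the kind of threat discussed above, with hypothetical numbers: from the disclosed supports of a pattern and of one of its supersets, anyone can derive how many individuals match the pattern but not the superset; if that derived count is positive yet below k, a very small group is exposed. This shows only the simplest such channel, not the paper's full treatment.

```python
# Simplified anonymity-threat check (hypothetical numbers, not from the paper).
def small_group_exposed(support_pattern, support_superset, k):
    derived = support_pattern - support_superset   # transactions matching the
    return 0 < derived < k                         # pattern but not the superset

# supp({age<30, city=Pisa}) = 1000, supp({age<30, city=Pisa, disease=X}) = 998:
# the published supports reveal a group of only 2 people without disease X.
print(small_group_exposed(1000, 998, k=10))   # True
```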
14.
Traditional data stream mining algorithms cannot mine the relationships among frequent items, and they suffer from high time and space complexity and limited accuracy. To address these problems, this paper proposes a structural binary tree mining algorithm for data streams (AMST). Exploiting the advantages of the binary tree structure, the algorithm converts the data stream of the processed transaction database into a structured binary tree and then mines that tree using a data stream matrix. The whole process scans the transaction database only once, which greatly improves mining efficiency. In addition, the algorithm finds frequent subtrees with hierarchical relationships. Experimental results show that AMST performs stably, has clear advantages in time and space complexity, and can mine data streams quickly and accurately.
15.
When computationally feasible, mining huge databases produces tremendously large numbers of frequent patterns. In many cases, it is impractical to mine those datasets due to their sheer size; not only the extent of the existing patterns, but mainly the magnitude of the search space. Many approaches have suggested the use of constraints to apply to the patterns or searching for frequent patterns in parallel. So far, those approaches are still not genuinely effective to mine extremely large datasets. We propose a method that combines both strategies efficiently, i.e. mining in parallel for the set of patterns while pushing constraints. Using this approach we could mine significantly large datasets, with sizes never reported in the literature before. We are able to effectively discover frequent patterns in a database made of a billion transactions using a 32-processor cluster in less than an hour and a half.
Recommended by: Ahmed Elmagarmid
16.
Two hash tree construction methods for frequent itemset mining
1 Introduction. Discovering frequent itemsets/patterns from large databases, as the core of mining problems such as association rules, sequential patterns, causal relationships, maximal patterns, and multidimensional patterns, has become a research focus of the data mining field in recent years, and many effective mining algorithms have been proposed. Most of these algorithms mine and update frequent itemsets in a way similar to the Apriori algorithm. The common characteristic of Apriori-like algorithms is that, to find all frequent k-itemsets (k > 1) in the database, they first generate …
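For context, a compact hash-tree sketch in the Apriori spirit is given below; it is a simplification of my own, not either of the two construction methods the paper describes. Candidate k-itemsets are inserted by hashing successive items, and leaves holding candidate lists are split into interior nodes once they overflow.

```python
# Hedged hash-tree sketch for storing Apriori candidate itemsets.
class HashTree:
    def __init__(self, branches=3, leaf_size=2, depth=0):
        self.branches, self.leaf_size, self.depth = branches, leaf_size, depth
        self.children = None          # None while this node is still a leaf
        self.candidates = []

    def insert(self, itemset):
        if self.children is None:
            self.candidates.append(itemset)
            # split an overflowing leaf into hash buckets on the next item
            if len(self.candidates) > self.leaf_size and self.depth < len(itemset):
                old, self.candidates, self.children = self.candidates, [], {}
                for c in old:
                    self._child(c).insert(c)
        else:
            self._child(itemset).insert(itemset)

    def _child(self, itemset):
        bucket = hash(itemset[self.depth]) % self.branches
        if bucket not in self.children:
            self.children[bucket] = HashTree(self.branches, self.leaf_size, self.depth + 1)
        return self.children[bucket]

tree = HashTree()
for cand in [("a", "b", "c"), ("a", "b", "d"), ("b", "c", "d"), ("c", "d", "e")]:
    tree.insert(cand)
print(tree.children is not None)   # True: the root leaf has split into buckets
```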
17.
Fernando Alonso, Juan P. Caraça-Valente, Ángel L. González, César Montes. Expert Systems with Applications, 2002, 23(4)
The medical diagnosis system described here uses underlying knowledge in the isokinetic domain, obtained by combining the expertise of a physician specialised in isokinetic techniques and data mining techniques applied to a set of existing data. An isokinetic machine is basically a physical support on which patients exercise one of their joints, in this case the knee, according to different ranges of movement and at a constant speed. The data on muscle strength supplied by the machine are processed by an expert system that has built-in knowledge elicited from an expert in isokinetics. It cleans and pre-processes the data and conducts an intelligent analysis of the parameters and morphology of the isokinetic curves. Data mining methods based on the discovery of sequential patterns in time series and the fast Fourier transform, which identifies similarities and differences among exercises, were applied to the processed information to characterise injuries and discover reference patterns specific to populations. The results obtained were applied in two environments: one for the blind and another for elite athletes.
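A hedged sketch of the FFT comparison idea mentioned above: two strength curves are compared through the magnitudes of their low-frequency components, which tolerates small timing shifts. The parameters and signals are invented for illustration; this is not the system's actual similarity measure.

```python
# Illustrative spectral comparison of two isokinetic strength curves.
import numpy as np

def spectral_distance(curve_a, curve_b, n_components=8):
    spec_a = np.abs(np.fft.rfft(curve_a))[:n_components]   # low-frequency magnitudes
    spec_b = np.abs(np.fft.rfft(curve_b))[:n_components]
    return float(np.linalg.norm(spec_a - spec_b))

t = np.linspace(0, 1, 200)
healthy = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(200)   # synthetic curves
weaker = 0.7 * np.sin(2 * np.pi * t) + 0.05 * np.random.randn(200)
print(spectral_distance(healthy, weaker))
```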
18.
Given a large set of data, a common data mining problem is to extract the frequent patterns occurring in this set. The idea presented in this paper is to extract a condensed representation of the frequent patterns called disjunction-bordered condensation (DBC), instead of extracting the whole frequent pattern collection. We show that this condensed representation can be used to regenerate all frequent patterns and their exact frequencies. Moreover, this regeneration can be performed without any access to the original data. Practical experiments show that the DBC can be extracted very efficiently even in difficult cases and that this extraction and the regeneration of the frequent patterns is much more efficient than the direct extraction of the frequent patterns themselves. We compared the DBC with another representation of frequent patterns previously investigated in the literature called frequent closed sets. In nearly all experiments we have run, the DBC has been extracted much more efficiently than frequent closed sets. In the other cases, the extraction times are very close.
19.
Very little research in knowledge discovery has studied how to incorporate statistical methods to automate linear correlation discovery (LCD). We present an automatic LCD methodology that adopts statistical measurement functions to discover correlations from databases’ attributes. Our methodology automatically pairs attribute groups having potential linear correlations, measures the linear correlation of each pair of attribute groups, and confirms the discovered correlation. The methodology is evaluated in two sets of experiments. The results demonstrate the methodology’s ability to facilitate linear correlation discovery for databases with a large amount of data.
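A minimal sketch of automated pairwise linear-correlation screening, for illustration only; the paper's measurement functions, attribute-group pairing, and confirmation step are not reproduced. Attribute pairs whose absolute Pearson coefficient exceeds a threshold are reported as candidates (requires Python 3.10+ for statistics.correlation).

```python
# Hedged sketch: screen attribute pairs for strong linear correlation.
from itertools import combinations
from statistics import correlation   # Pearson's r, Python 3.10+

def candidate_correlations(columns, threshold=0.8):
    """columns: dict of attribute name -> list of numeric values."""
    found = []
    for a, b in combinations(columns, 2):
        r = correlation(columns[a], columns[b])
        if abs(r) >= threshold:
            found.append((a, b, r))
    return found

data = {"height": [150, 160, 170, 180], "weight": [50, 58, 66, 74], "id": [4, 1, 3, 2]}
print(candidate_correlations(data))   # only (height, weight) passes the threshold
```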
20.
High utility pattern (HUP) mining is one of the most important research issues in data mining. Although HUP mining extracts important knowledge from databases, it requires long calculations and multiple database scans. Therefore, HUP mining is often unsuitable for real-time data processing schemes such as data streams. Furthermore, many HUPs may be unimportant due to the poor correlations among the items inside of them. Hence, the fast discovery of fewer but more important HUPs would be very useful in many practical domains. In this paper, we propose a novel framework to introduce a very useful measure, called frequency affinity, among the items in a HUP and the concept of interesting HUP with a strong frequency affinity for the fast discovery of more applicable knowledge. Moreover, we propose a new tree structure, utility tree based on frequency affinity (UTFA), and a novel algorithm, high utility interesting pattern mining (HUIPM), for single-pass mining of HUIPs from a database. Our approach mines fewer but more valuable HUPs, significantly reduces the overall runtime of existing HUP mining algorithms and is applicable to real-time data processing. Extensive performance analyses show that the proposed HUIPM algorithm is very efficient and scalable for interesting HUP mining with a strong frequency affinity.
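A hedged sketch of one way to read the frequency-affinity idea above (my simplified interpretation, not the UTFA/HUIPM definition): within the transactions containing a pattern, the minimum purchase quantity over its items is compared to the maximum; a ratio near 1 means the items co-occur in similar quantities, while a ratio near 0 means the pattern's utility is driven by a single item.

```python
# Simplified frequency-affinity score (illustrative interpretation only).
def frequency_affinity(pattern, transactions):
    """transactions: list of dicts mapping item -> purchase quantity."""
    ratios = []
    for t in transactions:
        if all(item in t for item in pattern):
            quantities = [t[item] for item in pattern]
            ratios.append(min(quantities) / max(quantities))
    return sum(ratios) / len(ratios) if ratios else 0.0

db = [{"bread": 2, "butter": 2}, {"bread": 6, "butter": 1}, {"milk": 3}]
print(frequency_affinity(("bread", "butter"), db))   # (1.0 + 1/6) / 2
```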