Similar Documents
20 similar documents found.
1.
Mining minimal distinguishing subsequence patterns with gap constraints (cited by: 1, self-citations: 4, other citations: 1)
Discovering contrasts between collections of data is an important task in data mining. In this paper, we introduce a new type of contrast pattern, called a Minimal Distinguishing Subsequence (MDS). An MDS is a minimal subsequence that occurs frequently in one class of sequences and infrequently in sequences of another class. It is a natural way of representing strong and succinct contrast information between two sequential datasets and can be useful in applications such as protein comparison, document comparison and building sequential classification models. Mining MDS patterns is a challenging task and is significantly different from mining contrasts between relational/transactional data. One particularly important type of constraint that can be integrated into the mining process is the gap constraint. We present an efficient algorithm called ConSGapMiner (Contrast Sequences with Gap Miner) to mine all MDSs satisfying a minimum and maximum gap constraint, plus a maximum length constraint. It employs highly efficient bitset and boolean operations for powerful gap-based pruning within a prefix growth framework. A performance evaluation with both sparse and dense datasets demonstrates the scalability of ConSGapMiner and shows its ability to mine patterns from high dimensional datasets at low supports.
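A minimal sketch of the core test such a miner must repeat, under a common reading of the gap constraint (the number of elements allowed between consecutive matched positions); the function names, toy data, and thresholds are illustrative and this is not ConSGapMiner's bitset implementation:

```python
def occurs(seq, pattern, min_gap, max_gap):
    """True if `pattern` occurs as a subsequence of `seq` with the number of
    skipped elements between consecutive matches within [min_gap, max_gap]."""
    ends = {j for j, s in enumerate(seq) if s == pattern[0]}
    for p in pattern[1:]:
        nxt = set()
        for e in ends:
            lo, hi = e + 1 + min_gap, e + 1 + max_gap
            nxt.update(j for j in range(lo, min(hi, len(seq) - 1) + 1) if seq[j] == p)
        ends = nxt
        if not ends:
            return False
    return True

def support(db, pattern, min_gap, max_gap):
    return sum(occurs(s, pattern, min_gap, max_gap) for s in db) / len(db)

# A candidate distinguishes the classes if it is frequent in one database and
# infrequent in the other; minimality additionally requires that no proper
# subsequence already does so.
pos_db, neg_db = ["ACCGT", "ACGGT", "AACGT"], ["GGTTA", "CTTAG", "TTGCA"]
print(support(pos_db, "AG", 0, 2), support(neg_db, "AG", 0, 2))  # 1.0 vs ~0.33
```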

2.
Incorporating constraints into frequent itemset mining not only improves data mining efficiency, but also leads to concise and meaningful results. In this paper, a framework for closed constrained gradient itemset mining in retail databases is proposed by introducing the concept of gradient constraint into closed itemset mining. A tailored version of CLOSET+, LCLOSET, is first briefly introduced, which is designed for efficient closed itemset mining from sparse databases. Then a weaker but antimonotone measure, the top-X average measure, is proposed, which can be adopted to prune the search space effectively. Experiments show that a combination of LCLOSET and the top-X average pruning provides an efficient approach to mining frequent closed gradient itemsets.

3.
冯玉才  刘玉葆  冯剑琳 《软件学报》2003,14(10):1706-1716
Constrained cube gradient mining is an important mining task whose main goal is to discover, from a data cube, gradient-probe cell pairs that satisfy a gradient constraint. However, existing studies are all based on ordinary data cubes. This paper studies the problem of mining constrained cube gradients in condensed data cubes. By extending the LiveSet-driven algorithm, an eLiveSet algorithm is proposed. Tests show that this algorithm outperforms existing algorithms in cube gradient mining efficiency.

4.
High Performance OLAP and Data Mining on Parallel Computers (cited by: 2, self-citations: 0, other citations: 2)
On-Line Analytical Processing (OLAP) techniques are increasingly being used in decision support systems to provide analysis of data. Queries posed on such systems are quite complex and require different views of data. Analytical models need to capture the multidimensionality of the underlying data, a task for which multidimensional databases are well suited. Multidimensional OLAP systems store data in multidimensional arrays on which analytical operations are performed. Knowledge discovery and data mining require complex operations on the underlying data which can be very expensive in terms of computation time. High performance parallel systems can reduce this analysis time. Precomputed aggregate calculations in a Data Cube can provide efficient query processing for OLAP applications. In this article, we present algorithms for construction of data cubes on distributed-memory parallel computers. Data is loaded from a relational database into a multidimensional array. We present two methods, sort-based and hash-based, for loading the base cube and compare their performances. Data cubes are used to perform consolidation queries used in roll-up operations using dimension hierarchies. Finally, we show how data cubes are used for data mining using Attribute Focusing techniques. We present results for these on the IBM-SP2 parallel machine. Results show that our algorithms and techniques for OLAP and data mining on parallel systems are scalable to a large number of processors, providing a high performance platform for such applications.
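As a small illustration of the hash-based loading and roll-up steps described above (a sequential sketch only; the paper's contribution is the parallel, distributed-memory construction), relational tuples can be hashed on their dimension values to build the base cuboid, which is then rolled up along a subset of dimensions. All names and data are illustrative:

```python
from collections import defaultdict

def load_base_cube(tuples, dim_idx, measure_idx):
    """Hash-based aggregation: group tuples by their dimension values
    and sum the measure, producing the base cuboid."""
    cube = defaultdict(float)
    for t in tuples:
        cube[tuple(t[i] for i in dim_idx)] += t[measure_idx]
    return dict(cube)

def roll_up(cube, keep):
    """Aggregate away the dimensions not listed in `keep` (a simple roll-up)."""
    out = defaultdict(float)
    for key, v in cube.items():
        out[tuple(key[i] for i in keep)] += v
    return dict(out)

# (product, store, month, sales)
rows = [("tv", "s1", "jan", 3.0), ("tv", "s1", "feb", 2.0), ("pc", "s2", "jan", 5.0)]
base = load_base_cube(rows, dim_idx=(0, 1, 2), measure_idx=3)
print(roll_up(base, keep=(0,)))  # sales by product: {('tv',): 5.0, ('pc',): 5.0}
```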

5.
MAFIA: a maximal frequent itemset algorithm (cited by: 4, self-citations: 0, other citations: 4)
We present a new algorithm for mining maximal frequent itemsets from a transactional database. The search strategy of the algorithm integrates a depth-first traversal of the itemset lattice with effective pruning mechanisms that significantly improve mining performance. Our implementation for support counting combines a vertical bitmap representation of the data with an efficient bitmap compression scheme. In a thorough experimental analysis, we isolate the effects of individual components of MAFIA including search space pruning techniques and adaptive compression. We also compare our performance with previous work by running tests on very different types of data sets. Our experiments show that MAFIA performs best when mining long itemsets and outperforms other algorithms on dense data by a factor of three to 30.
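The vertical bitmap representation used for support counting can be illustrated with plain Python integers as bitmaps, one bit per transaction; this is only a sketch of the idea and does not include MAFIA's search-space pruning or its adaptive compression:

```python
def vertical_bitmaps(transactions, items):
    """One bitmap per item; bit t is set iff the item occurs in transaction t."""
    bm = {i: 0 for i in items}
    for t, basket in enumerate(transactions):
        for i in basket:
            bm[i] |= 1 << t
    return bm

def support(itemset, bm, n_trans):
    acc = (1 << n_trans) - 1        # start with all transactions
    for i in itemset:
        acc &= bm[i]                # tidset intersection is a bitwise AND
    return bin(acc).count("1")      # popcount = support count

db = [{"a", "b", "c"}, {"a", "c"}, {"b", "c", "d"}, {"a", "b", "c", "d"}]
bm = vertical_bitmaps(db, items={"a", "b", "c", "d"})
print(support({"a", "c"}, bm, len(db)))   # 3
```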

6.
A new approach to online generation of association rules (cited by: 6, self-citations: 0, other citations: 6)
We discuss the problem of online mining of association rules in a large database of sales transactions. The online mining is performed by preprocessing the data effectively in order to make it suitable for repeated online queries. We store the preprocessed data in such a way that online processing may be done by applying a graph theoretic search algorithm whose complexity is proportional to the size of the output. The result is an online algorithm which is independent of the size of the transactional data and the size of the preprocessed data. The algorithm is almost instantaneous in the size of the output. The algorithm also supports techniques for quickly discovering association rules from large itemsets. The algorithm is capable of finding rules with specific items in the antecedent or consequent. These association rules are presented in a compact form, eliminating redundancy. The use of nonredundant association rules helps significantly in the reduction of irrelevant noise in the data mining process.
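A minimal sketch of the last step mentioned above: once large itemsets and their supports have been precomputed offline, rules with a specific item in the consequent can be generated online from a single itemset. The names and support values below are illustrative and this is not the paper's graph-theoretic search structure:

```python
from itertools import combinations

def rules_from_itemset(itemset, supports, min_conf, required_consequent=None):
    """Emit rules A -> B with A and B partitioning `itemset` and confidence
    support(itemset) / support(A) >= min_conf."""
    itemset = frozenset(itemset)
    rules = []
    for r in range(1, len(itemset)):
        for ante in map(frozenset, combinations(sorted(itemset), r)):
            cons = itemset - ante
            if required_consequent is not None and required_consequent not in cons:
                continue
            conf = supports[itemset] / supports[ante]
            if conf >= min_conf:
                rules.append((set(ante), set(cons), conf))
    return rules

supports = {frozenset("a"): 0.6, frozenset("b"): 0.5, frozenset("ab"): 0.4}
print(rules_from_itemset("ab", supports, min_conf=0.6))
# [({'a'}, {'b'}, 0.666...), ({'b'}, {'a'}, 0.8)]
```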

7.
Many existing data mining algorithms search interesting patterns from transactional databases of precise data. However, there are situations in which data are uncertain. Items in each transaction of these probabilistic databases of uncertain data are usually associated with existential probabilities, which express the likelihood of these items to be present in the transaction. When compared with mining from precise data, the search space for mining from uncertain data is much larger due to the presence of the existential probabilities. This problem is worsened as we are moving to the era of Big data. Furthermore, in many real-life applications, users may be interested in a tiny portion of this large search space for Big data mining. Without providing opportunities for users to express the interesting patterns to be mined, many existing data mining algorithms return numerous patterns, of which only some are interesting. In this article, we propose an algorithm that allows users to express their interest in terms of constraints, uses the MapReduce model to mine uncertain Big data for frequent patterns that satisfy the user-specified anti-monotone and monotone constraints, as well as balance the load.
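To make the uncertain-data setting concrete, here is a small sequential sketch (not the MapReduce algorithm of the article) of the expected support of an itemset under existential probabilities, together with one anti-monotone and one monotone constraint of the kind users might specify; all item names, prices, and thresholds are made up:

```python
def expected_support(itemset, uncertain_db):
    """uncertain_db: list of dicts {item: existential probability}."""
    total = 0.0
    for trans in uncertain_db:
        p = 1.0
        for item in itemset:
            p *= trans.get(item, 0.0)   # an absent item has probability 0
        total += p
    return total

def satisfies_constraints(itemset, prices, max_total_price, must_contain):
    """Example constraints: an anti-monotone budget constraint and a
    monotone 'contains at least one of these items' constraint."""
    anti_monotone = sum(prices[i] for i in itemset) <= max_total_price
    monotone = bool(set(itemset) & must_contain)
    return anti_monotone and monotone

db = [{"a": 0.9, "b": 0.4}, {"a": 0.5, "c": 0.8}, {"b": 0.7, "c": 0.2}]
print(expected_support({"a"}, db))        # 1.4
print(expected_support({"a", "b"}, db))   # 0.36
```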

8.
Nearest-neighbor search of high-dimensionality spaces is critical for many applications, such as content-based retrieval from multimedia databases, similarity search of patterns in data mining, and nearest-neighbor classification. Unfortunately, even with the aid of the commonly used indexing schemes, the performance of nearest-neighbor (NN) queries deteriorates rapidly with the number of dimensions. We propose a method, called Clustering with Singular Value Decomposition (CSVD), which supports efficient approximate processing of NN queries, while maintaining good precision-recall characteristics. CSVD groups homogeneous points into clusters and separately reduces the dimensionality of each cluster using SVD. Cluster selection for NN queries relies on a branch-and-bound algorithm and within-cluster searches can be performed with traditional or in-memory indexing methods. Experiments with texture vectors extracted from satellite images show that CSVD achieves significantly higher dimensionality reduction than plain SVD for the same normalized mean squared error (NMSE), which translates into a higher efficiency in processing approximate NN queries.
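A compact sketch of the CSVD pipeline described above, assuming k-means is an acceptable stand-in for the clustering step and using a simple energy-based rule to pick each cluster's rank from the target NMSE; the names and data are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def csvd_fit(X, n_clusters, target_nmse):
    """Cluster the points, then reduce each cluster separately with SVD,
    keeping the smallest rank whose reconstruction NMSE is within target."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    model = []
    for c in range(n_clusters):
        Xc = X[labels == c]
        mu = Xc.mean(axis=0)
        _, s, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)   # retained-variance fraction
        k = int(np.searchsorted(energy, 1.0 - target_nmse)) + 1
        model.append((mu, Vt[:k]))                    # per-cluster mean and basis
    return labels, model

def project(x, cluster_model):
    mu, basis = cluster_model
    return (x - mu) @ basis.T                         # reduced coordinates for NN search

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(5, 1, (100, 16))])
labels, model = csvd_fit(X, n_clusters=2, target_nmse=0.1)
print([basis.shape[0] for _, basis in model])         # ranks kept per cluster
```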

9.
Set-oriented data mining in relational databases (cited by: 2, self-citations: 0, other citations: 2)
Data mining is an important real-life application for businesses. It is critical to find efficient ways of mining large data sets. In order to benefit from the experience with relational databases, a set-oriented approach to mining data is needed. In such an approach, the data mining operations are expressed in terms of relational or set-oriented operations. Query optimization technology can then be used for efficient processing.

In this paper, we describe set-oriented algorithms for mining association rules. Such algorithms imply performing multiple joins and thus may appear to be inherently less efficient than special-purpose algorithms. We develop new algorithms that can be expressed as SQL queries, and discuss optimization of these algorithms. After analytical evaluation, an algorithm named SETM emerges as the algorithm of choice. Algorithm SETM uses only simple database primitives, viz., sorting and merge-scan join. Algorithm SETM is simple, fast, and stable over the range of parameter values. It is easily parallelized and we suggest several additional optimizations. The set-oriented nature of Algorithm SETM makes it possible to develop extensions easily and its performance makes it feasible to build interactive data mining tools for large databases.
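To illustrate the set-oriented flavour of this approach (a toy sketch only, not the SETM algorithm, which also materializes intermediate relations level by level), support counting for item pairs can be written with ordinary SQL primitives such as a self-join and GROUP BY; SQLite stands in here for the relational engine and the data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (tid INTEGER, item TEXT)")
rows = [(1, "a"), (1, "b"), (1, "c"), (2, "a"), (2, "c"),
        (3, "b"), (3, "c"), (4, "a"), (4, "c")]
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

min_support = 2
# Frequent pairs via a self-join, expressed entirely with set-oriented SQL.
frequent_pairs = conn.execute(
    """
    SELECT s1.item, s2.item, COUNT(*) AS supp
    FROM sales s1 JOIN sales s2
      ON s1.tid = s2.tid AND s1.item < s2.item
    GROUP BY s1.item, s2.item
    HAVING COUNT(*) >= ?
    """,
    (min_support,),
).fetchall()
print(frequent_pairs)   # e.g. [('a', 'c', 3), ('b', 'c', 2)]
```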


10.
Group nearest-neighbor queries are one of the important query types in moving-object databases. This paper proposes a pruning search strategy based on a grid index structure: the spatial region is partitioned into grid cells, and the grid-cell identifiers of object points are used to reduce the node-access cost of group nearest-neighbor queries. The centroid of the query object set is obtained by a step-length iteration method, and a moving-object group nearest-neighbor query algorithm, MOGNN, is proposed; it adopts a more precise criterion for pruning the search space and reduces the number of nodes that must be visited. Experimental results and analysis show that the grid-index-based MOGNN query algorithm has good query performance.

11.
Transactional data are ubiquitous. Several methods, including frequent itemset mining and co-clustering, have been proposed to analyze transactional databases. In this work, we propose a new research problem to succinctly summarize transactional databases. Solving this problem requires linking the high level structure of the database to a potentially huge number of frequent itemsets. We formulate this problem as a set covering problem using overlapped hyperrectangles (a concept generally referred to as a tile in some existing papers); we then prove that this problem and several of its variations are NP-hard, and we further reveal its relationship with the compact representation of a directed bipartite graph. We develop an approximation algorithm Hyper which can achieve a logarithmic approximation ratio in polynomial time. We propose a pruning strategy that can significantly speed up the processing of our algorithm, and we also propose an efficient algorithm Hyper+ to further summarize the set of hyperrectangles by allowing false positive conditions. Additionally, we show that hyperrectangles generated by our algorithms can be properly visualized. A detailed study using both real and synthetic datasets shows the effectiveness and efficiency of our approaches in summarizing transactional databases.
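A tiny greedy sketch of the set-cover formulation described above (illustrative only, not the Hyper algorithm or its pruning): each candidate hyperrectangle is a (transaction set, item set) pair whose cross product lies inside the database, its cost is taken here as |T| + |I|, and the rectangle covering the most still-uncovered cells per unit cost is chosen at each step. The candidate rectangles and data are invented:

```python
def greedy_cover(db, candidates):
    """db: list of item sets (one per transaction).
    candidates: list of (transaction_ids, items) hyperrectangles whose
    cross product contains only true cells of the database."""
    uncovered = {(t, i) for t, items in enumerate(db) for i in items}
    chosen = []
    while uncovered:
        def gain(rect):
            tids, items = rect
            new = sum((t, i) in uncovered for t in tids for i in items)
            return new / (len(tids) + len(items))   # new cells per unit cost
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break                                   # remaining cells not coverable
        chosen.append(best)
        tids, items = best
        uncovered -= {(t, i) for t in tids for i in items}
    return chosen

db = [{"a", "b"}, {"a", "b"}, {"a", "c"}]
candidates = [({0, 1}, {"a", "b"}), ({0, 1, 2}, {"a"}), ({2}, {"a", "c"})]
print(greedy_cover(db, candidates))
# [({0, 1}, {'a', 'b'}), ({2}, {'a', 'c'})] -- covers all cells at cost 7
```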

12.
Mining query-doc associations from user search behavior (cited by: 1, self-citations: 0, other citations: 1)
朱亮  陆静雅  左万利 《自动化学报》2014,40(8):1654-1666
The association between a query and a doc is a kind of valuable information that search engines want to obtain. Accurate analysis of query-doc associations not only helps rank search results, but also plays an important role in bridging queries and docs, so that information can be passed between related queries and docs; this supports deeper query understanding and doc understanding and enables further applications to be built on top of them. This paper proposes an algorithm for mining query-doc associations from user search behavior. The method first cleans and analyzes the data in user search click logs and builds a bipartite graph between queries and docs; it then models the bipartite graph with a Markov random-walk model, mining the click data and session data in the graph, and finally discovers docs in the click logs that users never clicked, thereby predicting latent associations between queries and docs. The same algorithm can also be used to obtain latent associations between queries. On this basis we implemented a complete log-mining system; extensive comparative experiments show that it performs well in all respects, improving the relevance of retrieval results by up to 71.23%. This demonstrates that the theory and algorithm proposed in this paper solve the problem of mining implicit query-doc associations well, and lay a solid foundation for improving the recall of search results, query recommendation, and clustering of retrieval results.
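A minimal sketch of the random-walk idea in this abstract (not the authors' full system or their session modelling): click counts define transition probabilities on the query-doc bipartite graph, and a few alternating propagation steps starting from a query surface docs that were never clicked for it. All queries, docs, and counts below are invented:

```python
from collections import defaultdict

def walk_scores(clicks, start_query, steps=3):
    """clicks: dict {(query, doc): count}.  Returns doc scores obtained by
    propagating probability mass query -> doc -> query -> doc ..."""
    q2d, d2q = defaultdict(dict), defaultdict(dict)
    for (q, d), c in clicks.items():
        q2d[q][d] = c
        d2q[d][q] = c
    q_mass, doc_scores = {start_query: 1.0}, defaultdict(float)
    for _ in range(steps):
        d_mass = defaultdict(float)
        for q, m in q_mass.items():
            total = sum(q2d[q].values())
            for d, c in q2d[q].items():
                d_mass[d] += m * c / total
        for d, m in d_mass.items():
            doc_scores[d] += m
        q_mass = defaultdict(float)
        for d, m in d_mass.items():
            total = sum(d2q[d].values())
            for q, c in d2q[d].items():
                q_mass[q] += m * c / total
    return dict(doc_scores)

clicks = {("jaguar", "wiki/Jaguar_car"): 8, ("jaguar", "wiki/Jaguar_animal"): 2,
          ("jaguar xf", "wiki/Jaguar_car"): 5, ("jaguar xf", "dealer.example/xf"): 5}
print(walk_scores(clicks, "jaguar"))
# "dealer.example/xf" gets a nonzero score although "jaguar" never clicked it.
```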

13.
In this article we present ConQueSt, a constraint-based querying system able to support the intrinsically exploratory (i.e., human-guided, interactive and iterative) nature of pattern discovery. Following the inductive database vision, our framework provides users with an expressive constraint-based query language, which allows the discovery process to be effectively driven toward potentially interesting patterns. Such constraints are also exploited to reduce the cost of pattern mining computation. ConQueSt is a comprehensive mining system that can access real-world relational databases from which to extract data. Through the interaction with a friendly graphical user interface (GUI), the user can define complex mining queries by means of a few clicks. After a pre-processing step, mining queries are answered by an efficient and robust pattern mining engine which incorporates state-of-the-art data and search-space reduction techniques. Resulting patterns are then presented to the user in a pattern browsing window, and possibly stored back in the underlying database as relations.

14.
Yao Liu  Hui Xiong 《Information Sciences》2006,176(9):1215-1240
A data warehouse stores current and historical records consolidated from multiple transactional systems. Securing data warehouses is of ever-increasing interest, especially considering areas where data are sold in pieces to third parties for data mining practices. In this case, existing data warehouse security techniques, such as data access control, may not be easy to enforce and can be ineffective. Instead, this paper proposes a data perturbation based approach, called the cubic-wise balance method, to provide privacy preserving range queries on data cubes in a data warehouse. This approach is motivated by the following observation: analysts are usually interested in summary data rather than individual data values. Indeed, our approach can provide a closely estimated summary data for range queries without providing access to actual individual data values. As demonstrated by our experimental results on APB benchmark data set from the OLAP council, the cubic-wise balance method can achieve both better privacy preservation and better range query accuracy than random data perturbation alternatives.
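A toy illustration of the general perturbation idea, not the cubic-wise balance method itself: if the noise added inside each block of cube cells is forced to sum to zero, individual values are masked while block-aligned range-sum queries stay exact. Block shape, noise scale, and data are arbitrary:

```python
import numpy as np

def zero_sum_perturb(cube, block, scale=1.0, seed=0):
    """Add noise to a 2-D data cube so that every `block`-sized tile keeps its sum."""
    rng = np.random.default_rng(seed)
    out = cube.astype(float)
    rows, cols = cube.shape
    br, bc = block
    for r in range(0, rows, br):
        for c in range(0, cols, bc):
            tile = out[r:r + br, c:c + bc]
            noise = rng.normal(0, scale, tile.shape)
            noise -= noise.mean()          # zero-sum noise inside the tile
            tile += noise
    return out

cube = np.arange(16, dtype=float).reshape(4, 4)
pert = zero_sum_perturb(cube, block=(2, 2))
print(np.allclose(cube[:2, :2].sum(), pert[:2, :2].sum()))   # True: block sums preserved
print(np.allclose(cube, pert))                               # False: cells are masked
```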

15.
This paper proposes a computation method for holistic multi-feature cube (MF-Cube) queries based on the characteristics of MF-Cubes. Three simple yet efficient strategies are designed to optimize the dependent complex aggregate at multiple granularities for a complex data-mining query within data cubes. One strategy is the computation of holistic MF-Cube queries using the PDAP (Part Distributive Aggregate Property). More efficiency is gained by another strategy, that of dynamic subset data selection (the iceberg query technique), which reduces the size of the materialized data cubes. To extend this efficiency further, a third strategy adopts the chunk-based caching technique that reuses the output of previous queries. By combining these three strategies, we design an algorithm called the PDIC (Part Distributive Iceberg Chunk). We experimentally evaluate this algorithm using synthetic and real-world datasets and demonstrate that our approach delivers up to approximately twice the performance efficiency of traditional computation methods.

16.
In recent years, mining high utility itemsets (HUIs) from transactional databases has become one of the most active research topics in data mining, owing to its wide range of applications in online e-commerce data analysis, identifying interesting patterns in biomedical data, and cross-marketing solutions in retail business. It aims to efficiently discover itemsets with high utility by considering item quantities in a transaction and the profit value of each item. However, it produces a tremendous number of HUIs, which imposes a further burden on the analysis of the extracted patterns and also degrades the performance of mining methods. Mining the set of closed + high utility itemsets (CHUIs) solves this issue, as it is a lossless and condensed representation of all HUIs. In this paper, we present a new algorithm for finding CHUIs from a transactional database, called CHUM (Closed + High Utility itemset Miner), which is scalable and efficient. The proposed mining algorithm adopts a specially designed vertical representation of the database in order to speed up the generation of itemset closures and to compute their utility information without accessing the database. The proposed method makes use of an item co-occurrence strategy to further reduce the number of intersections that need to be performed. Several experiments conducted on various sparse and dense datasets clearly show the scalability and superior performance of our algorithm compared to the existing state-of-the-art CHUD (Closed + High Utility itemset Discovery) algorithm.
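To make the utility notion concrete, a small sketch with illustrative names (not the CHUM data structures): the utility of an itemset in a transaction is the sum of quantity times unit profit over its items, accumulated over the transactions that contain the whole itemset:

```python
def itemset_utility(itemset, transactions, unit_profit):
    """transactions: list of dicts {item: purchased quantity}."""
    total = 0
    for trans in transactions:
        if all(i in trans for i in itemset):            # itemset must be contained
            total += sum(trans[i] * unit_profit[i] for i in itemset)
    return total

unit_profit = {"bread": 1, "wine": 9, "cheese": 4}
db = [{"bread": 2, "wine": 1}, {"bread": 1, "cheese": 3}, {"wine": 2, "cheese": 1}]
print(itemset_utility({"wine"}, db, unit_profit))            # 9 + 18 = 27
print(itemset_utility({"wine", "cheese"}, db, unit_profit))  # 18 + 4 = 22
```

An itemset is a high utility itemset when this value meets a user-given minimum utility threshold; note that, unlike support, utility is neither monotone nor anti-monotone, which is why HUI miners need dedicated pruning structures.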

17.
To overcome the limitations of high-utility itemset mining, more compact, lossless, and concise representations of high utility itemsets (HUIs) have been proposed in previous works, such as closed HUIs (CHUIs) or maximal HUIs (MHUIs). Focusing on MHUI mining, in this article we present efficient approaches to directly mine MHUIs from transactional databases without generating any candidates. The proposed algorithms, which all execute in one phase, utilize efficient data structures and pruning techniques such as EUCP combined with EUCS, CUIP combined with FUCS, and the P-set structure to significantly reduce the search space and remove nonpromising itemsets, thus increasing the performance of the MHUI mining process. Furthermore, while previous works assumed that the unit profit of items is fixed, which is not practical in many real-world applications, our work resolves this issue by applying a new utility calculation in the mining process to reflect the true nature of real-world databases, thus generating more accurate results.

18.
Inductive database languages: requirements and examples (cited by: 1, self-citations: 1, other citations: 0)
Inductive databases (IDBs) represent a database perspective on knowledge discovery in databases (KDD). In an IDB, the KDD application can express both queries capable of accessing and manipulating data and queries capable of generating, manipulating, and applying patterns, allowing the notion of a mining process to be formalized. The feature that makes them different from other data mining applications is exactly the idea of looking at the support for knowledge discovery as an extension of the query process. This paper draws up a list of desirable properties to be taken into account in the definition of an IDB framework. They involve several dimensions, such as the expressiveness of the language in representing data and models, the closure principle, and the capability to support efficient algorithm programming. These requirements are the basis for a comparative study that highlights strengths and weaknesses of existing IDB approaches. The paper focuses on the SQL-based ATLaS language/system, the logic-based LDL++ language/system, and the XML-based KDDML language/system.

19.
Fast mining of constrained association rules in distributed environments (cited by: 2, self-citations: 0, other citations: 2)
Researchers have proposed algorithms for mining constrained association rules in single-machine settings, but these are not suitable for distributed environments. This paper therefore discusses fast mining of constrained association rules in distributed environments and proposes a fast mining algorithm for this setting, DCAR, which includes a local constrained frequent itemset mining algorithm, MLFC, and a global constrained frequent itemset mining algorithm, MGFC. The algorithm derives a guiding set from the Boolean constraints and uses a new candidate itemset generation function, Reorder-gen, which exploits the guiding set to efficiently generate a small yet complete collection of candidate itemsets satisfying the constraints in the distributed environment; moreover, while computing the global constrained frequent itemsets, the communication volume for transmitting the support counts of local candidate itemsets is O(n), which improves mining efficiency. The proposed algorithm was implemented, and experimental results show that DCAR is efficient and practical, running roughly 2-3 times faster than the DMA-IC algorithm.

20.
The design of an OLAP system for supporting real-time queries is one of the major research issues. One approach is to use data cubes, which are materialized precomputed multidimensional views of data in a data warehouse. We can derive a set of data cubes to answer each frequently asked query directly. However, there are two practical problems: (1) the maintenance cost of the data cubes, and (2) the query cost to answer those queries. Maintaining a data cube requires disk storage and CPU computation, so the maintenance cost is related to the total size as well as the total number of data cubes materialized. In most cases, materializing all data cubes is impractical. The maintenance cost may be reduced by merging some data cubes. However, the resulting larger data cubes will increase the query cost of answering some queries. If the bounds on the maintenance cost and the query cost are too strict, we help the user decide which queries are to be sacrificed and not taken into consideration. We have defined an optimization problem in data cube system design. Given a maintenance-cost bound, a query-cost bound and a set of frequently asked queries, it is necessary to determine a set of data cubes such that the system can answer a largest subset of the queries without violating the two bounds. This is an NP-hard problem. We propose approximate Greedy algorithms GR, 2GM and 2GMM, which are shown to be both effective and efficient by experiments done on a census data set and a forest-cover-type data set.
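A small greedy sketch in the spirit of the problem statement above, not the GR/2GM/2GMM algorithms themselves: repeatedly add the cube that lets the system answer the most additional queries while the total maintenance cost stays within its bound, counting a query only if some chosen cube answers it within the query-cost bound. All cube names and costs are invented:

```python
def greedy_cube_selection(cubes, queries, maint_bound, query_bound):
    """cubes: dict {cube: maintenance_cost}.
    queries: dict {query: {cube: query_cost_if_answered_by_that_cube}}."""
    chosen, maint = set(), 0.0

    def answerable(chosen_set):
        return {q for q, options in queries.items()
                if any(c in chosen_set and cost <= query_bound
                       for c, cost in options.items())}

    while True:
        best, best_gain = None, 0
        for c, m_cost in cubes.items():
            if c in chosen or maint + m_cost > maint_bound:
                continue
            gain = len(answerable(chosen | {c})) - len(answerable(chosen))
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            return chosen, answerable(chosen)   # selected cubes, queries served
        chosen.add(best)
        maint += cubes[best]

cubes = {"c_pd": 4, "c_ps": 3, "c_all": 9}
queries = {"q1": {"c_pd": 1, "c_all": 5}, "q2": {"c_ps": 2, "c_all": 6},
           "q3": {"c_all": 3}}
print(greedy_cube_selection(cubes, queries, maint_bound=8, query_bound=5))
# ({'c_pd', 'c_ps'}, {'q1', 'q2'}) -- q3 is sacrificed under these bounds
```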
