Similar Documents
20 similar documents found (search time: 15 ms)
1.
A High-Performance Algorithm for Mining Closed Patterns
The set of frequent closed patterns uniquely determines the complete set of frequent patterns and is much smaller, yet mining frequent closed patterns remains expensive in both time and storage. A high-performance algorithm is proposed to address this problem. It organizes the frequent pattern set in a composite frequent pattern tree, which keeps storage overhead low. By integrating depth-first and breadth-first strategies, opportunistically choosing array-based or tree-based representations of the pattern support subsets, and heuristically applying unfiltered virtual projection or filtered projection, the composite frequent pattern tree is built quickly. Local and global pruning methods effectively shrink the search space, and balancing the costs of tree construction and pruning maximizes time efficiency and scalability. Experiments show that the algorithm is 5 times to 3 orders of magnitude faster than other algorithms and has the best space scalability. It can further be applied to many data mining problems such as non-redundant association rule discovery and sequence analysis.
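The abstract does not reproduce the algorithm itself. As a point of reference for the closed-pattern notion it relies on, the following minimal sketch (the function names and toy database are illustrative, not from the paper) enumerates frequent itemsets by brute force and keeps those that are closed, i.e. those with no proper superset of the same support; real miners avoid this power-set enumeration.

```python
from itertools import combinations

def support(itemset, transactions):
    """Count transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def closed_frequent_itemsets(transactions, min_support):
    """Brute-force enumeration of frequent closed itemsets (definition only)."""
    items = sorted(set().union(*transactions))
    frequent = {}
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = support(frozenset(combo), transactions)
            if s >= min_support:
                frequent[frozenset(combo)] = s
    # Closed: no frequent proper superset has the same support.
    return {iset: s for iset, s in frequent.items()
            if not any(iset < other and s == frequent[other] for other in frequent)}

# {a, c} is closed while {a} is not, because every transaction with a also has c.
db = [frozenset("acd"), frozenset("ac"), frozenset("bc")]
print(closed_frequent_itemsets(db, min_support=2))
```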

2.
Most algorithms for mining sequential rules focus on generating all sequential rules. These algorithms produce an enormous number of redundant rules, making mining inefficient in intelligent systems. In order to solve this problem, the mining of non-redundant sequential rules was recently introduced. Most algorithms for mining such rules depend on patterns obtained from existing frequent sequence mining algorithms. Several steps are required to organize the data structure of these sequences before rules can be generated. This process requires a great deal of time and memory. The present study proposes a technique for mining non-redundant sequential rules directly from sequence databases. The proposed method uses a dynamic bit vector data structure and adopts a prefix tree in the mining process. In addition, some pruning techniques are used to remove unpromising candidates early in the mining process. Experimental results show the efficiency of the algorithm in terms of runtime and memory usage.
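As a rough illustration of the bit-vector idea, the sketch below uses Python integers as per-item bit vectors over sequence IDs, so support counting becomes bit operations; the paper's dynamic bit vector additionally tracks positions and trims zero words, and all names here are illustrative.

```python
def item_bitvector(item, sequences):
    """Bit i is set if sequence i contains `item` (simplified: sequence level only)."""
    bv = 0
    for i, seq in enumerate(sequences):
        if any(item in itemset for itemset in seq):
            bv |= 1 << i
    return bv

def support_of(bv):
    """Support = number of set bits in the vector."""
    return bin(bv).count("1")

# Each sequence is a list of itemsets.
db = [[{"a"}, {"b"}], [{"a"}, {"c"}], [{"b"}, {"c"}]]
bv_a, bv_b = item_bitvector("a", db), item_bitvector("b", db)
# Sequences that could contain both a and b are found by ANDing the two
# vectors before any positional check is made.
print(support_of(bv_a & bv_b))  # -> 1
```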

3.
Decision Tree Algorithms and Their Application to Data Mining of Breast Disease Images
This paper introduces the basic principles of building decision trees with the ID3 algorithm, focusing on the pruning problem and two typical pruning algorithms: reduced-error pruning and minimal cost-complexity pruning. The described decision tree and pruning algorithms are then applied to data mining of breast disease images, yielding rules of practical reference value and a high classification accuracy, which demonstrates the broad application prospects of decision tree algorithms in medical image data mining.
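One of the two pruning methods mentioned, reduced-error pruning, can be sketched on a toy dictionary-encoded tree as follows. This is a minimal illustration of the idea, not the paper's implementation; the tree encoding, attribute names, and validation examples are assumptions.

```python
from collections import Counter

def classify(tree, x):
    """A node is either a class label (leaf) or (feature, {value: subtree})."""
    while not isinstance(tree, str):
        feature, branches = tree
        tree = branches[x[feature]]
    return tree

def errors(tree, data):
    return sum(1 for x, y in data if classify(tree, x) != y)

def reduced_error_prune(tree, prune_set):
    """Collapse a subtree to its majority leaf whenever that does not
    increase the error on a separate pruning set."""
    if isinstance(tree, str) or not prune_set:
        return tree
    feature, branches = tree
    pruned = {v: reduced_error_prune(sub, [(x, y) for x, y in prune_set if x[feature] == v])
              for v, sub in branches.items()}
    candidate = (feature, pruned)
    majority = Counter(y for _, y in prune_set).most_common(1)[0][0]
    # Keep the subtree only if it beats the majority-class leaf on the pruning set.
    return majority if errors(majority, prune_set) <= errors(candidate, prune_set) else candidate

# Toy tree over hypothetical image features; leaves are class labels.
tree = ("size", {"small": "benign",
                 "large": ("shape", {"round": "benign", "irregular": "malignant"})})
val = [({"size": "large", "shape": "round"}, "benign"),
       ({"size": "large", "shape": "irregular"}, "malignant")]
print(reduced_error_prune(tree, val))  # subtree is kept: it helps on the pruning set
```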

4.
A critical issue in classification tree design is addressed: obtaining right-sized trees, i.e. trees which neither underfit nor overfit the data. Instead of stopping rules to halt partitioning, the approach of growing a large tree with pure terminal nodes and selectively pruning it back is used. A new efficient iterative method is proposed to grow and prune classification trees. This method divides the data sample into two subsets and iteratively grows a tree with one subset and prunes it with the other subset, successively interchanging the roles of the two subsets. The convergence and other properties of the algorithm are established. Theoretical and practical considerations suggest that the iterative tree growing and pruning algorithm should perform better and require less computation than other widely used tree growing and pruning algorithms. Numerical results on a waveform recognition problem are presented to support this view.
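The exact procedure is not given in the abstract. The sketch below only approximates the alternating grow/prune idea using scikit-learn, with cost-complexity pruning standing in for the paper's pruning step; the data split, round count, and parameter choices are illustrative assumptions rather than the authors' method.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def iterative_grow_prune(X, y, rounds=4, random_state=0):
    """Alternately grow a tree on one half of the data and choose the pruning
    level on the other half, swapping the roles of the two halves each round."""
    rng = np.random.default_rng(random_state)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    grow_idx, prune_idx = idx[:half], idx[half:]
    best = None
    for _ in range(rounds):
        Xg, yg = X[grow_idx], y[grow_idx]
        Xp, yp = X[prune_idx], y[prune_idx]
        # Grow a large tree, then pick the ccp_alpha that scores best
        # on the held-out pruning subset.
        path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(Xg, yg)
        trees = [DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(Xg, yg)
                 for a in path.ccp_alphas]
        best = max(trees, key=lambda t: t.score(Xp, yp))
        grow_idx, prune_idx = prune_idx, grow_idx  # interchange the subsets
    return best
```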

5.
6.
A linear model tree is a decision tree with a linear functional model in each leaf. Previous model tree induction algorithms have been batch techniques that operate on the entire training set. However, there are many situations when an incremental learner is advantageous. In this article a new batch model tree learner is described with two alternative splitting rules and a stopping rule. An incremental algorithm is then developed that has many similarities with the batch version but is able to process examples one at a time. An online pruning rule is also developed. The incremental training time for an example is shown to depend only on the height of the tree induced so far, and not on the number of previous examples. The algorithms are evaluated empirically on a number of standard datasets, a simple test function and three dynamic domains ranging from a simple pendulum to a complex 13-dimensional flight simulator. The new batch algorithm is compared with the most recent batch model tree algorithms and is seen to perform favourably overall. The new incremental model tree learner compares well with an alternative online function approximator. In addition, it can sometimes perform almost as well as the batch model tree algorithms, highlighting the effectiveness of the incremental implementation.

7.
Feature selection is viewed as an important preprocessing step for pattern recognition, machine learning and data mining. Traditional hill-climbing search approaches to feature selection have difficulty finding optimal reducts, and current stochastic search strategies, such as GA, ACO and PSO, provide a more robust solution but at the expense of increased computational effort. It is therefore necessary to investigate fast and effective search algorithms. Rough set theory provides a mathematical tool to discover data dependencies and reduce the number of features contained in a dataset by purely structural methods. In this paper, we define a structure called the power set tree (PS-tree), an ordered tree representing the power set, in which each possible reduct is mapped to a node of the tree. We then present a rough set approach to feature selection based on the PS-tree. Two kinds of pruning rules for the PS-tree are given, and two novel feature selection algorithms based on the PS-tree are presented. Experiment results demonstrate that our algorithms are effective and efficient.
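The rough-set criterion that such reduct searches typically optimize is the dependency degree gamma(B, D), the fraction of objects whose B-indiscernibility class is consistent on the decision attribute. The sketch below illustrates that measure only (not the PS-tree search itself), on an illustrative toy table.

```python
from collections import defaultdict

def dependency_degree(data, features, decision):
    """gamma(B, D) = |POS_B(D)| / |U|: a feature subset B whose gamma equals
    that of the full feature set is a reduct candidate."""
    blocks = defaultdict(list)
    for row in data:
        blocks[tuple(row[f] for f in features)].append(row[decision])
    positive = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return positive / len(data)

# Illustrative data: does {"outlook"} alone determine "play"?
table = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "no"},
    {"outlook": "rain",  "windy": "no",  "play": "yes"},
]
print(dependency_degree(table, ["outlook"], "play"))            # 1/3
print(dependency_degree(table, ["outlook", "windy"], "play"))   # 1.0
```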

8.
In this paper, we consider the problem of finding good next moves in two-player games. Traditional search algorithms, such as minimax and alpha-beta pruning, suffer great temporal and spatial expansion when exploring deeply into search trees to find better next moves. The evolution of genetic algorithms with the ability to find global or near global optima in limited time seems promising, but they are inept at finding compound optima, such as the minimax in a game-search tree. We thus propose a new genetic algorithm-based approach that can find a good next move by reserving the board evaluation values of new offspring in a partial game-search tree. Experiments show that solution accuracy and search speed are greatly improved by our algorithm.
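For reference, the traditional alpha-beta pruning that the abstract contrasts its GA-based approach with can be sketched as follows; `children` and `evaluate` are caller-supplied stand-ins, not functions from the paper.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Plain alpha-beta pruning over a game tree.
    `children(state)` returns successor states, `evaluate(state)` a heuristic value."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off: the opponent will never allow this line
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cut-off
    return value
```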

9.
Knowledge discovery refers to identifying hidden and valid patterns in data, and it can be used to build knowledge inference systems. The decision tree is one such successful technique for supervised learning and for extracting knowledge or rules. This paper aims at developing a decision tree model to predict the occurrence of diabetes disease. Traditional decision tree algorithms have a problem with crisp boundaries; much better decision rules can be identified from these clinical data sets with the use of fuzzy decision boundaries. The key step in the construction of a decision tree is the identification of split points, and in this work the best split points are identified using the Gini index. The authors propose a method to minimize the calculation of Gini indices by identifying false split points, and use a Gaussian fuzzy function because the clinical data sets are not crisp. As the efficiency of the decision tree depends on many factors, such as the number of nodes and the length of the tree, pruning of the decision tree plays a key role. The modified Gini index-Gaussian fuzzy decision tree algorithm is proposed and is tested for accuracy on the Pima Indian Diabetes (PID) clinical data set. This algorithm outperforms other decision tree algorithms.
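A minimal sketch of the Gini-index split-point selection the abstract builds on is shown below; the Gaussian fuzzy membership and the filtering of false split points are not reproduced, and the glucose example is illustrative.

```python
def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_of_split(values, labels, threshold):
    """Weighted Gini index of splitting a numeric attribute at `threshold`;
    the split point with the lowest value is preferred."""
    left = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Toy glucose readings versus a diabetes label.
glucose = [85, 90, 140, 160, 110]
label = ["neg", "neg", "pos", "pos", "neg"]
best = min(sorted(set(glucose))[:-1], key=lambda t: gini_of_split(glucose, label, t))
print(best, gini_of_split(glucose, label, best))  # 110 splits the classes perfectly
```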

10.
Research and Progress on Frequent Itemset Mining
Mining frequent itemsets is a key problem in many data mining tasks and the core of association rule mining algorithms, so improving the efficiency of frequent itemset generation has been one of the research hotspots in data mining in recent years, with researchers improving the algorithms from many different angles. This paper studies the basic strategies of frequent itemset mining from several aspects: the type of solution space used during itemset generation, search methods and pruning strategies, database representations, and data compression techniques. Typical and especially recent algorithms for mining complete frequent itemsets, frequent closed itemsets, and maximal frequent itemsets are reviewed, their performance characteristics are analyzed, and the types of datasets each suits are pointed out. Finally, directions for the future development of frequent itemset mining algorithms are briefly discussed.

11.
刘晓平 《计算机仿真》2005,22(12):76-79
Most data mining tools for knowledge discovery use rule discovery and decision tree classification to find data patterns and rules. By adopting a simulation-attribute-based discretization method, probability/statistics-based handling of unknown attributes and noisy data, and an error-based pruning algorithm, this paper implements a generic algorithm template for the automatic generation of decision trees. With this template, a decision tree algorithm designer can quickly validate new algorithms designed for a particular decision problem. The basic mechanism for constructing a decision tree is that the designer initializes the generic template with formulas of their own definition, then tests the algorithm on different decision problems in the interactive graphical environment provided by the system, so as to find the algorithm best suited to a specific problem.

12.
q-gram matching is used for approximate substring matching problems in a wide range of application areas, including intrusion detection. In this paper, we present a tree-based model to perform fast linear-time q-gram matching. All q-grams present in the text are stored in a tree structure similar to a trie. We use a tree redundancy pruning algorithm to reduce the size of the tree without losing any information. We also use suffix links for fast q-gram search during query matching. We compare our work with the Rabin-Karp-based hash-table technique, commonly used for multiple q-gram search. We present results of experiments on system call sequence data used for intrusion detection.
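A minimal sketch of storing q-grams in a trie so that each query gram is matched in O(q) time is given below; the paper's structure additionally prunes redundant branches and adds suffix links, and the example sequence is illustrative.

```python
def qgrams(text, q):
    """All overlapping substrings of length q."""
    return [text[i:i + q] for i in range(len(text) - q + 1)]

def build_qgram_trie(text, q):
    """Store every q-gram of `text` in a nested-dict trie."""
    root = {}
    for gram in qgrams(text, q):
        node = root
        for ch in gram:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-gram marker
    return root

def contains(trie, gram):
    """O(q) membership test for a single query q-gram."""
    node = trie
    for ch in gram:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

# A system-call-like sequence matched via its shared q-grams.
trie = build_qgram_trie("openreadreadwriteclose", 4)
print(contains(trie, "read"), contains(trie, "fork"))  # True False
```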

13.
《Knowledge》2002,15(1-2):37-43
The decision tree is a divide-and-conquer classification method used in machine learning. Most pruning methods for decision trees minimize a classification error rate. In uncertain domains, some sub-trees that do not decrease the error rate can be relevant in pointing out populations of specific interest or in giving a representation of a large data file. A new pruning method (called DI pruning) is presented here. It takes into account the complexity of sub-trees and is able to keep sub-trees whose leaves yield relevant decision rules, even though they do not increase classification accuracy. DI pruning also makes it possible to assess the quality of the data used for the knowledge discovery task. In practice, this method is implemented in the UnDeT software.

14.
雷捷维  王嘉旸  任航  闫天伟  黄伟 《计算机工程》2021,47(3):304-310,320
Mahjong, a typical game of imperfect information, has mainly been played by computers using the traditional Expectimax search algorithm, whose pruning strategy and evaluation function are designed from hand-crafted prior knowledge and rest on questionable assumptions. An imperfect-information game-playing algorithm combining Expectimax search with the Double DQN reinforcement learning algorithm is proposed. During expansion of the Expectimax search tree, the evaluation function is built from the value estimates output by Double DQN and branch values are obtained within a bounded search depth, while a pruning strategy ranks and partially expands discard actions to prune the search tree. During training of the Double DQN model, Mahjong information is encoded as feature data and fed to the neural network to obtain value estimates, and Expectimax search is used to obtain the best action so as to improve the exploration strategy. Experimental results show that, compared with the Expectimax search algorithm, the Double DQN algorithm and other baselines, the proposed algorithm achieves a higher win rate and score in Mahjong and exhibits better game-playing performance.
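A minimal sketch of the Expectimax recursion over decision and chance nodes (the unseen tile draws) that the paper combines with a Double DQN value estimate is given below; all callables and names are illustrative stand-ins, with `evaluate` playing the role of the value network.

```python
def expectimax(state, depth, is_chance, actions, outcomes, evaluate):
    """Expectimax over a game tree with chance nodes.
    `actions(state)` -> successor states of a decision node,
    `outcomes(state)` -> (probability, successor) pairs of a chance node,
    `evaluate(state)` -> leaf value (here, a learned estimate)."""
    if depth == 0:
        return evaluate(state)
    if is_chance:
        # Chance node: probability-weighted average over possible draws.
        return sum(p * expectimax(s, depth - 1, False, actions, outcomes, evaluate)
                   for p, s in outcomes(state))
    children = actions(state)
    if not children:
        return evaluate(state)
    # Decision node: pick the action with the best expected value.
    return max(expectimax(s, depth - 1, True, actions, outcomes, evaluate)
               for s in children)
```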

15.
Many studies have shown that rule-based classifiers perform well in classifying categorical and sparse high-dimensional databases. However, a fundamental limitation with many rule-based classifiers is that they find the rules by employing various heuristic methods to prune the search space and select the rules based on the sequential database covering paradigm. As a result, the final set of rules that they use may not be the globally best rules for some instances in the training database. To make matters worse, these algorithms fail to fully exploit some more effective search space pruning methods in order to scale to large databases. In this paper, we present a new classifier, HARMONY, which directly mines the final set of classification rules. HARMONY uses an instance-centric rule-generation approach and it can assure that, for each training instance, one of the highest-confidence rules covering this instance is included in the final rule set, which helps in improving the overall accuracy of the classifier. By introducing several novel search strategies and pruning methods into the rule discovery process, HARMONY also has high efficiency and good scalability. Our thorough performance study with some large text and categorical databases has shown that HARMONY outperforms many well-known classifiers in terms of both accuracy and computational efficiency and scales well with regard to the database size.

16.
Building a high-accuracy classifier is a problem in real applications. One high-accuracy classifier used for this purpose is based on association rules. In the past, some research showed that classification based on association rules (or class-association rules – CARs) has higher accuracy than that of other rule-based methods such as ILA and C4.5. However, mining CARs consumes more time because it mines a complete rule set. Therefore, improving the execution time for mining CARs is one of the main problems with this method that needs to be solved. In this paper, we propose a new method for mining class-association rules. Firstly, we design a tree structure for storing the frequent itemsets of the dataset. We then develop some theorems for pruning nodes and computing information in the tree and, based on these theorems, propose an efficient algorithm for mining CARs. Experimental results show that our approach is more efficient than those used previously.

17.
Data mining is a data-intensive computation activity. Parallel processing has often been used in data mining algorithms. However, when data do not fit in memory, some solutions do not apply and a database system may be required rather than flat files. Most implementations use the database system loosely coupled with the data mining techniques; hence, the database system only issues queries to be processed on the client machine. In this work, we address the data-consuming activities through parallel processing on a database server, providing a tight integration with data mining techniques. Experimental results showing the potential benefits of this integration were obtained. Despite the difficulties of processing a complex application, we extracted rules and obtained high performance on all the data-intensive activities, such as the construction of the decision tree, pruning and rule extraction.

18.
This paper makes full use of the concept lattice theory and equivalence class partitioning of the Eclat algorithm, and incorporates constraints into an association rule mining algorithm based on the vertical data layout. Two new algorithms for mining association rules under anti-monotone and monotone constraints, EclatA and EclatM respectively, are proposed. The algorithms use a bottom-up search and check the constraints while discovering frequent itemsets. They scan the database fewer times, need no pruning of candidate itemsets, and use little memory. Experiments show that their execution efficiency is significantly higher than that of existing algorithms.
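A minimal sketch of the vertical, tidset-intersection search that Eclat-style algorithms perform is given below; constraint checks, as in the EclatA/EclatM variants, would sit next to the support test. The toy vertical database and names are illustrative.

```python
def eclat(prefix, items, min_support, results):
    """Depth-first Eclat: each item carries its tidset (set of transaction IDs);
    extending an itemset is a tidset intersection, and support is the tidset size."""
    while items:
        item, tids = items.pop()
        if len(tids) >= min_support:  # anti-monotone constraints would be checked here too
            itemset = prefix + [item]
            results[frozenset(itemset)] = len(tids)
            # Build the conditional (equivalence-class) item list for this prefix.
            suffix = [(other, tids & other_tids)
                      for other, other_tids in items
                      if len(tids & other_tids) >= min_support]
            eclat(itemset, suffix, min_support, results)
    return results

# Vertical layout: item -> set of transaction IDs containing it.
vertical = [("a", {0, 1, 2}), ("b", {0, 2}), ("c", {1, 2})]
print(eclat([], list(vertical), 2, {}))
```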

19.
周秀梅  黄名选 《计算机应用》2014,34(10):2820-2826
To address the inability of existing weighted association rule mining algorithms to handle matrix-weighted data, a new pruning strategy for matrix-weighted itemsets is given, an evaluation framework SRCCCI for matrix-weighted positive and negative association patterns is constructed, and a new matrix-weighted positive and negative association rule mining algorithm based on this framework, MWARM-SRCCCI, is proposed. The algorithm overcomes the shortcomings of existing mining techniques and, with the new pruning technique and pattern evaluation method, mines valid matrix-weighted positive and negative association rules while avoiding the generation of invalid and uninteresting patterns. Using the Chinese Web test collection CWT200g as experimental data and compared with an existing unweighted positive/negative association rule mining algorithm, MWARM-SRCCCI reduces mining time by up to 74.74%. Theoretical analysis and experimental results show that MWARM-SRCCCI prunes effectively, markedly reduces the number of candidate itemsets and the mining time, and greatly improves mining efficiency; its association patterns can provide a reliable source of query expansion terms for information retrieval.

20.
To address the long mining time and large memory consumption of existing maximal frequent itemset mining algorithms, a maximal frequent itemset mining algorithm BMFI based on the constructed linked-list structure B-list is proposed. The algorithm mines frequent itemsets with the B-list data structure and uses a total-order search tree as the search space, applies parent-equivalence pruning to shrink the search space, and finally combines an MFI-tree-based projection strategy for superset checking to improve efficiency. Experimental results show that BMFI outperforms the FPMAX and MFIN algorithms in both time and space efficiency, and performs well when mining maximal frequent itemsets on both dense and sparse datasets.
