Similar Documents
19 similar documents found (search time: 109 ms)
1.
Decision tree algorithms are built recursively, which makes training inefficient, and an over-grown tree can overfit. This paper therefore proposes a model decision tree algorithm. An incomplete decision tree is first grown recursively on the training set using the Gini index; a simple classification model is then fitted at each impure pseudo-leaf node (a non-leaf node whose samples do not all belong to the same class) to produce the final tree. Compared with the original decision tree algorithm, the resulting model decision tree improves training efficiency with little or no loss of accuracy. Experiments on standard datasets show that the proposed model decision tree is clearly faster than the plain decision tree algorithm and offers some resistance to overfitting.
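The two computable ingredients named in this abstract are the Gini index used to grow the incomplete tree and the simple classifier fitted at each impure pseudo-leaf. A minimal Python sketch of both, assuming logistic regression as the "simple classification model" (the abstract does not fix one):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gini(labels):
    # Gini impurity 1 - sum_k p_k^2: the splitting criterion named above.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_gini(y_left, y_right):
    # Weighted Gini impurity of a candidate binary split (lower is better).
    n_l, n_r = len(y_left), len(y_right)
    return (n_l * gini(y_left) + n_r * gini(y_right)) / (n_l + n_r)

def fit_pseudo_leaf(X, y):
    # An impure pseudo-leaf (mixed classes, recursion stopped early) gets a
    # simple model instead of further splits; logistic regression here is an
    # assumption, not the paper's prescribed choice.
    return LogisticRegression(max_iter=1000).fit(X, y)
```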

2.
To address the long classification response times and low accuracy caused by pruning schemes that rely on pre-pruning or post-pruning alone, this paper builds on REP post-pruning and proposes an optimized decision tree pruning algorithm that combines CDC with REP. While the tree is being grown, the CDC algorithm uses the difference ratio between a node's left and right subtrees to eliminate some non-leaf nodes; once the tree is built, the REP algorithm prunes it further. Experimental results show that the algorithm avoids the overfitting that comes from an overly fine-grained growing phase and, compared with other algorithms, reduces splitting time and improves splitting accuracy.
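The CDC half of the method (the left/right-subtree difference ratio) is not specified in the abstract, but REP is a standard algorithm. A self-contained sketch of reduced error pruning on a hypothetical binary-tree node structure:

```python
class Node:
    # Hypothetical node layout: a leaf carries `label`; an internal node
    # tests x[feature] < threshold and remembers the majority class of the
    # training samples that reached it.
    def __init__(self, label=None, feature=None, threshold=None,
                 left=None, right=None, majority=None):
        self.label, self.feature, self.threshold = label, feature, threshold
        self.left, self.right, self.majority = left, right, majority

    def predict(self, x):
        if self.label is not None:
            return self.label
        child = self.left if x[self.feature] < self.threshold else self.right
        return child.predict(x)

def errors(tree, X, y):
    return sum(tree.predict(x) != t for x, t in zip(X, y))

def rep_prune(node, X, y):
    # Bottom-up: prune the children first, then replace this subtree with a
    # majority-class leaf if that is no worse on the held-out pruning set.
    if node.label is not None:
        return node
    goes_left = [x[node.feature] < node.threshold for x in X]
    Xl = [x for x, g in zip(X, goes_left) if g]
    yl = [t for t, g in zip(y, goes_left) if g]
    Xr = [x for x, g in zip(X, goes_left) if not g]
    yr = [t for t, g in zip(y, goes_left) if not g]
    node.left = rep_prune(node.left, Xl, yl)
    node.right = rep_prune(node.right, Xr, yr)
    leaf = Node(label=node.majority)
    return leaf if errors(leaf, X, y) <= errors(node, X, y) else node
```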

3.
An Improved Post-Pruning Algorithm for Decision Trees
Once depth and node count pass a certain scale, a decision tree's accuracy on unseen instances gradually drops as the tree grows larger, so a pruning algorithm is needed to shrink the tree while preserving classification accuracy. After analyzing the strengths and weaknesses of existing pruning algorithms, this paper proposes an improved post-pruning algorithm that jointly considers classification accuracy, classification stability, and tree size. Experiments show that, while maintaining the model's accuracy and stability, the algorithm effectively reduces tree size and yields a more compact final classification model.
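The abstract does not give the formula that combines accuracy, stability, and size. One plausible reading, offered purely as an assumption, scores each candidate pruned tree and keeps the maximizer:

```python
def pruning_score(acc_mean, acc_std, n_leaves,
                  alpha=1.0, beta=0.5, gamma=0.01):
    # Hypothetical combined criterion: reward mean validation accuracy,
    # penalize instability (std of accuracy across folds) and tree size.
    # The weights and the stability measure are assumptions, not the paper's.
    return alpha * acc_mean - beta * acc_std - gamma * n_leaves
```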

4.
Guo Huaping, Fan Ming. Computer Science (《计算机科学》), 2013, 40(11): 236-241
An ensemble of decision tree classifiers can be viewed as a forest. This paper proposes a forest pruning algorithm that simplifies the ensemble's structure and improves its classification accuracy. Traditional decision tree pruning considers only the effect of pruning on a single tree; forest pruning instead treats all the trees as a whole and focuses on how pruning affects the ensemble's performance. To decide which branches of the forest can be pruned, a measure called contribution gain is proposed. A subtree's contribution gain depends not only on the accuracy of the tree it belongs to but also on the diversity among the trees, so it measures well how much expanding a node into a subtree improves the ensemble's accuracy. Based on this measure, a forest pruning algorithm, FTCG, is designed. Experiments show that whether the forest is built by an algorithm such as bagging or selected by an ensemble selection algorithm such as EPIC [1], and whether the individual trees are unpruned or already pruned, FTCG further reduces the size of every tree and significantly improves the pruned ensemble's accuracy on most datasets.

5.
Assessing taxpayer credit ratings requires analyzing and judging large volumes of tax data. Decision trees are a common tool for data mining and classification, and C4.5 is the most popular decision tree algorithm. Using data mining to replace today's manual assessment of taxpayer credit ratings is one of the difficulties in tax informatization. This paper discusses how to build a taxpayer credit rating decision tree with the C4.5 algorithm: the taxpayer's tax-related data are collected and preprocessed, attributes are selected, and the tree is generated and pruned, producing a decision tree that assigns each taxpayer a credit rating.
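C4.5's attribute-selection criterion, the gain ratio, is standard and easy to state in code. A sketch for a discrete attribute:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(labels, attr_values):
    # C4.5's criterion: information gain divided by the attribute's split
    # information, which penalizes many-valued attributes.
    n = len(labels)
    groups = {}
    for v, y in zip(attr_values, labels):
        groups.setdefault(v, []).append(y)
    gain = entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = -sum(len(g) / n * math.log2(len(g) / n) for g in groups.values())
    return gain / split_info if split_info > 0 else 0.0
```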

6.
This paper discusses the theory and implementation of the decision tree classification algorithm in data-warehouse-based data mining, analyzes and improves the algorithm, and, for pre-pruning, uses the ratio of the parent node's information entropy to the current node's information entropy as the criterion for deciding whether to stop expanding the tree.
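A sketch of the stopping test as described: expansion halts when the parent-to-current entropy ratio indicates too little purity gain. The threshold value below is an assumption; the paper does not state one.

```python
def should_stop(parent_entropy, node_entropy, threshold=1.05):
    # A pure node is always a leaf.
    if node_entropy == 0.0:
        return True
    # Entropy normally falls down the tree, so the ratio exceeds 1 when the
    # split helps; stop when the improvement is below the (assumed) threshold.
    return parent_entropy / node_entropy < threshold
```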

7.
Qiao Mei, Han Wenxiu. Journal of Computer Applications (《计算机应用》), 2005, 25(5): 989-991
Noisy data are a major factor degrading decision tree training efficiency and the quality of the resulting model, and existing tree-pruning methods cannot remove the influence of noise on the choice of test attributes. To change this, a new pre-pruning method for handling noisy data in decision tree algorithms is proposed, based on the variable precision rough set (VPRS) model: before the attribute-selection computation, classification patterns corrected over the variable-precision positive region are derived, eliminating the influence of noise on attribute selection and leaf generation. The method is used to improve the basic ID3 algorithm. Analysis and experiments show that, compared with existing pre-pruning methods, it further reduces tree size and training time.
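The core VPRS construct used here, the variable-precision positive region, admits a compact sketch: an equivalence class counts as positively classified when at least a fraction β of its objects agree on the decision class, so objects in inconsistent (presumed noisy) blocks are set aside before attribute selection. β = 0.8 is an assumed precision level.

```python
from collections import Counter

def beta_positive_region(blocks, labels, beta=0.8):
    # blocks: equivalence classes (lists of object indices) under the
    # condition attributes; labels[i]: decision class of object i.
    positive = []
    for block in blocks:
        cls, count = Counter(labels[i] for i in block).most_common(1)[0]
        if count / len(block) >= beta:   # block is beta-consistent
            positive.extend(block)
    return positive                      # objects kept; the rest treated as noise
```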

8.
A Discussion of Rough-Set-Based Decision Tree Construction
Yang Baohua. Microcomputer Development (《微机发展》), 2006, 16(8): 83-84
A decision tree is a method for classifying and predicting unknown data, and the key to top-down decision tree generation is the choice of attribute at each node. Approximation accuracy is the rough set (RS) parameter that describes the vagueness of an information system and characterizes a rough set precisely. Building on the classic ID3 algorithm, this paper proposes an RS-based algorithm that selects the attribute with the largest approximation accuracy for the root node, with branches produced by the classification. The algorithm is computationally simple, and its classification makes both the decision tree and the rough sets easier to understand.
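Approximation accuracy has a standard rough set definition (lower-approximation size over upper-approximation size). A sketch of computing it for the partition induced by one attribute; per the abstract, the attribute maximizing this value would be picked for the root:

```python
def approximation_accuracy(blocks, target):
    # blocks: equivalence classes induced by the attribute; target: the set
    # of objects in the decision class being approximated.
    target = set(target)
    lower = sum(len(b) for b in blocks if set(b) <= target)   # surely inside
    upper = sum(len(b) for b in blocks if set(b) & target)    # possibly inside
    return lower / upper if upper else 1.0
```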

9.
Research on decision tree pruning optimization has so far focused on pre-pruning and post-pruning algorithms. These pruning algorithms, however, are usually applied to conventional decision tree classifiers, and little work has combined cost-sensitive learning with pruning optimization. Drawing on cost-benefit analysis from economics, this paper introduces a cost-benefit matrix and the notion of unit cost benefit, assigns class labels to leaf nodes by maximizing unit cost benefit, and, combined with a pre-pruning strategy, designs a new decision tree pruning algorithm. Pruning the generated tree by unit cost benefit makes it cost-sensitive and well suited to practical problems. Experimental results show that the algorithm produces smaller trees and classifies better than the REP and EBP algorithms.
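The abstract defines leaf labeling by maximum unit cost benefit but not the matrix layout. A sketch under the assumption of separate benefit and cost matrices indexed by (true class, assigned label):

```python
import numpy as np

def assign_leaf_label(class_counts, benefit, cost):
    # Hypothetical reading of "unit cost benefit" labeling: for each candidate
    # label j, expected benefit is sum_i n_i * benefit[i][j] and expected cost
    # is sum_i n_i * cost[i][j]; pick the label with the highest benefit per
    # unit cost. The exact matrix definitions are the paper's, not shown here.
    n = np.asarray(class_counts, dtype=float)
    b = n @ np.asarray(benefit)   # expected benefit of each label choice
    c = n @ np.asarray(cost)      # expected cost of each label choice
    return int(np.argmax(b / np.maximum(c, 1e-12)))
```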

10.
To improve the recognition accuracy and generalization of intelligent models, class-label anomalies in the datasets used for modeling must be detected and corrected. After formally describing datasets and decision trees, the Gini index gain rate is used as the criterion for the optimal binary split of continuous condition attributes, and a recursive algorithm grows a binary decision tree whose leaves each contain objects of a single class. Information entropy is then used to evaluate the class distribution of objects in leaves as they are pruned away, and class anomalies in the dataset are corrected accordingly. In essence, growing and pruning the tree performs a Gini-index- and entropy-driven splitting and merging of the continuous condition-attribute space for class correction. Experiments and practical applications confirm that this growing-and-pruning procedure is an effective way to optimize dataset class labels.
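The optimal binary split of a continuous condition attribute is found by scanning midpoints between sorted values. This sketch minimizes weighted Gini impurity, which is equivalent to maximizing Gini gain (the paper's gain *rate* adds a normalization not reproduced here):

```python
import numpy as np

def best_binary_split(values, labels):
    # Returns (threshold, weighted impurity) of the best x < t split, or
    # (None, inf) if the attribute is constant.
    def gini(y):
        _, c = np.unique(y, return_counts=True)
        p = c / c.sum()
        return 1.0 - np.sum(p ** 2)
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    best_t, best_g = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue                      # no boundary between equal values
        t = (v[i] + v[i - 1]) / 2.0       # midpoint candidate threshold
        g = (i * gini(y[:i]) + (len(v) - i) * gini(y[i:])) / len(v)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g
```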

11.
Zhang Xiaolong, Luo Mingjian. Journal of Computer Applications (《计算机应用》), 2005, 25(9): 1986-1988
Decision trees are a fundamental learning method in machine learning and data mining. This paper analyzes the C4.5 algorithm and its shortcomings and proposes a decision tree pruning algorithm that uses the information content of rules as its criterion. Experimental results show that the method improves the predictive accuracy of the final model and copes well with noise in the data.

12.
A Decision Tree Pruning Algorithm Based on Conditional Misclassification
Xu Jing, Liu Xumin, Guan Yong, Dong Rui. Computer Engineering (《计算机工程》), 2010, 36(23): 50-52
When building a decision tree classification model, the pruning method directly affects the classifier's performance. By studying error-rate-based pruning, this paper introduces the concept of conditional error, improves how the pruning criterion is evaluated, optimizes the decision tree model, and proposes a conditional-error pruning method, which is applied to the C4.5 algorithm. Experimental results show that conditional-error pruning effectively handles both insufficient pruning and over-pruning, improving accuracy to a degree.

13.
Probability trees are decision trees that predict class probabilities rather than the most likely class. The pruning criterion used to learn a probability tree strongly influences the size of the tree and thereby also the quality of its probability estimates. While the effect of pruning criteria on classification accuracy is well-studied, only recently has there been more interest in the effect on probability estimates. Hence, it is currently unclear which pruning criteria for probability trees are preferable under which circumstances. In this paper we survey six of the most important pruning criteria for probability trees, and discuss their theoretical advantages and disadvantages. We also perform an extensive experimental study of the relative performance of these pruning criteria. The main conclusion is that overall a pruning criterion based on randomization tests performs best because it is most robust to extreme data characteristics (such as class skew or a high number of classes). We also identify and explain several shortcomings of the other pruning criteria.
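The probability estimates these pruning criteria are judged on are typically Laplace-smoothed leaf frequencies, the usual choice for probability trees. A one-function sketch:

```python
def leaf_probabilities(class_counts, n_classes):
    # Laplace correction (n_k + 1) / (n + K): avoids the extreme 0/1
    # estimates that raw frequencies give at small leaves.
    n = sum(class_counts)
    return [(c + 1) / (n + n_classes) for c in class_counts]
```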

14.
Induction of decision trees is one of the most successful approaches to supervised machine learning. Branching programs are a generalization of decision trees and, by the boosting analysis, exponentially more efficiently learnable than decision trees. However, this advantage has not been seen to materialize in experiments. Decision trees are easy to simplify using pruning. Reduced error pruning is one of the simplest decision tree pruning algorithms. For branching programs no pruning algorithms are known. In this paper we prove that reduced error pruning of branching programs is infeasible. Finding the optimal pruning of a branching program with respect to a set of pruning examples that is separate from the set of training examples is NP-complete. Because of this intractability result, we have to consider approximating reduced error pruning. Unfortunately, it turns out that even finding an approximate solution of arbitrary accuracy is computationally infeasible. In particular, reduced error pruning of branching programs is APX-hard. Our experiments show that, despite the negative theoretical results, heuristic pruning of branching programs can reduce their size without significantly altering the accuracy.

15.
Pruning is an effective technique for improving the generalization performance of decision trees. However, most existing methods are time-consuming or unsuitable for small datasets. In this paper, a new pruning algorithm based on the structural risk of leaf nodes is proposed. The structural risk is measured by the product of the accuracy and the volume (PAV) in a leaf node. Comparison experiments with the Cost-Complexity Pruning using cross-validation (CCP-CV) algorithm on some benchmark datasets show that PAV pruning largely reduces the time cost of CCP-CV, while the test accuracy of PAV pruning is close to that of CCP-CV.
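A sketch of the PAV measure as the abstract states it, taking a leaf's "volume" to be the product of its per-feature value ranges; that volume definition is an assumption, as the paper fixes its own:

```python
import numpy as np

def pav(X_leaf, y_leaf, leaf_label):
    # Accuracy of the leaf's assigned label on the samples it holds...
    acc = float(np.mean(np.asarray(y_leaf) == leaf_label))
    X = np.asarray(X_leaf, dtype=float)
    # ...times the leaf's volume (assumed: product of per-feature ranges,
    # floored so degenerate ranges don't zero everything out).
    volume = float(np.prod(np.maximum(X.max(axis=0) - X.min(axis=0), 1e-12)))
    return acc * volume
```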

16.
A novel pruning approach using expert knowledge for data-specific pruning
Classification is an important data mining task that discovers hidden knowledge in labeled datasets. Most approaches to pruning assume that all datasets are equally uniform and equally important, so they apply equal pruning to all of them. However, in real-world classification problems, datasets are not all equal, and applying an equal pruning rate tends to generate decision trees with large size and high misclassification rates. We approach the problem by first investigating the properties of each dataset and then deriving a data-specific pruning value using expert knowledge, which is used to design pruning techniques that prune decision trees close to perfection. An efficient pruning algorithm dubbed EKBP is proposed; it is very general, as we are free to use any learning algorithm as the base classifier. We have implemented our proposed solution and experimentally verified its effectiveness on forty real-world benchmark datasets from the UCI machine learning repository. In all these experiments, the proposed approach shows that it can dramatically reduce tree size while enhancing or retaining the level of accuracy.

17.
An Analysis of Decision Tree Methods in Data Mining
Because decision tree methods are simple, intuitive, and accurate, they are widely used in data mining and data analysis. After introducing the basics of decision tree methods, this paper analyzes decision tree generation algorithms and models in depth and discusses the decision tree pruning process.

18.
Trading Accuracy for Simplicity in Decision Trees
Bohanec, Marko; Bratko, Ivan. Machine Learning, 1994, 15(3): 223-250
When communicating concepts, it is often convenient or even necessary to define a concept approximately. A simple, although only approximately accurate concept definition may be more useful than a completely accurate definition which involves a lot of detail. This paper addresses the problem: given a completely accurate, but complex, definition of a concept, simplify the definition, possibly at the expense of accuracy, so that the simplified definition still corresponds to the concept sufficiently well. Concepts are represented by decision trees, and the method of simplification is tree pruning. Given a decision tree that accurately specifies a concept, the problem is to find a smallest pruned tree that still represents the concept within some specified accuracy. A pruning algorithm is presented that finds an optimal solution by generating a dense sequence of pruned trees, decreasing in size, such that each tree has the highest accuracy among all the possible pruned trees of the same size. An efficient implementation of the algorithm, based on dynamic programming, is presented and empirically compared with three progressive pruning algorithms using both artificial and real-world data. An interesting empirical finding is that the real-world data generally allow significantly greater simplification at equal loss of accuracy.

19.
Liu Li, Wang Yongheng, Wei Hang. Journal of Computer Applications (《计算机应用》), 2015, 35(12): 3481-3486
Traditional coarse-grained sentiment analysis ignores the specific object being evaluated, while existing fine-grained methods fail to filter out irrelevant evaluation elements. This paper proposes fine-grained sentiment analysis of product reviews that combines conditional random fields (CRF) with syntax tree pruning. A MapReduce-parallelized co-training (tri-training) method semi-automatically labels the corpus, and a CRF model incorporating multiple linguistic features extracts the evaluation objects and the positive and negative opinion words from reviews. Syntax tree pruning, implemented with a domain ontology and a library of syntactic paths, removes the interference of irrelevant evaluation objects in text containing multiple objects and opinion words and extracts the correct evaluation units; finally, a visual product report is produced. Experiments on datasets from two different domains show an overall accuracy of about 89% for recognizing sentiment elements and about 89% for sentiment evaluation units. Compared with traditional methods, the combined CRF and syntax-tree-pruning approach recognizes more accurately and performs better.
