Similar Documents
1.
A Learning Algorithm for Decision Trees with Interval-Valued Attributes   Cited by: 8 (self-citations: 0, others: 8)
王熙照  洪家荣 《软件学报》1998,9(8):637-640
This paper proposes a learning algorithm for decision trees with interval-valued attributes. The domain of an interval-valued attribute is neither the unordered set of the discrete case nor the totally ordered set of the continuous case, but a partially ordered set. As a generalization of ID3 to the interval-valued setting, the algorithm selects the expansion attribute by minimizing a partition information entropy. By analyzing non-stationary points, the number of partition-entropy computations is reduced, which improves the algorithm's efficiency.
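As a worked illustration of the selection step described above, here is a minimal Python sketch that picks the attribute whose induced partition minimizes the weighted information entropy. The data layout and helper names (`partition_entropy`, `best_attribute`) are assumptions for illustration, not the authors' implementation; in particular, the non-stationary-point optimization is omitted.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a non-empty label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def partition_entropy(subsets):
    """Weighted entropy of a partition of the training labels."""
    total = sum(len(s) for s in subsets)
    return sum(len(s) / total * entropy(s) for s in subsets if s)

def best_attribute(labels, candidate_partitions):
    """Pick the attribute whose partition minimizes entropy.

    candidate_partitions maps attribute -> list of index subsets (one per branch).
    """
    def score(attr):
        return partition_entropy(
            [[labels[i] for i in subset] for subset in candidate_partitions[attr]]
        )
    return min(candidate_partitions, key=score)
```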

2.
While many constructive induction algorithms focus on generating new binary attributes, this paper explores novel methods of constructing nominal and numeric attributes. We propose a new constructive operator, X-of-N. An X-of-N representation is a set containing one or more attribute-value pairs. For a given instance, the value of an X-of-N representation corresponds to the number of its attribute-value pairs that are true of the instance. A single X-of-N representation can directly and simply represent any concept that can be represented by a single conjunctive, a single disjunctive, or a single M-of-N representation commonly used for constructive induction; the reverse is not true. In this paper, we describe a constructive decision tree learning algorithm, called XofN. When building decision trees, this algorithm creates one X-of-N representation, either as a nominal attribute or as a numeric attribute, at each decision node. The construction of X-of-N representations is carried out by greedily searching the space defined by all the attribute-value pairs of a domain. Experimental results reveal that constructing X-of-N attributes can significantly improve the performance of decision tree learning in both artificial and natural domains, in terms of higher prediction accuracy and lower theory complexity. The results also show the performance advantages of constructing X-of-N attributes over constructing conjunctive, disjunctive, or M-of-N representations for decision tree learning.
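As an illustration of the X-of-N definition above: the value of an X-of-N on an instance is simply the count of its attribute-value pairs that hold for that instance. A minimal sketch, with made-up attribute names:

```python
def x_of_n_value(instance, av_pairs):
    """instance: dict attribute -> value; av_pairs: set of (attribute, value) pairs."""
    return sum(1 for attr, val in av_pairs if instance.get(attr) == val)

# Example X-of-N over three (invented) attribute-value pairs.
xofn = {("outlook", "sunny"), ("humidity", "high"), ("windy", "true")}
print(x_of_n_value({"outlook": "sunny", "humidity": "high", "windy": "false"}, xofn))  # 2
```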

3.
The start of a bull market draws retail investors in droves. To protect investors' rights and help them invest rationally, this paper analyzes stock trading data from a technical perspective. A binary decision tree is used to mine the large volume of trading data, and the classification rules obtained from the tree can largely predict the trend of an individual stock over a period of time, effectively helping investors make rational investment decisions.
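Purely as an illustration of this kind of pipeline (the paper's exact features and data are not specified here), a binary tree could be fit to engineered trading features and its rules printed with scikit-learn; the feature names and toy data below are invented:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy feature rows: [5-day return, volume ratio, normalized RSI] (hypothetical).
X = [[0.02, 1.3, 0.6], [-0.01, 0.8, 0.4], [0.03, 1.5, 0.7], [-0.02, 0.7, 0.3]]
y = [1, 0, 1, 0]  # 1 = price rose over the following period, 0 = fell

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(clf, feature_names=["return_5d", "volume_ratio", "rsi_norm"]))
```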

4.
An Optimization Algorithm for Decision Trees   Cited by: 78 (self-citations: 1, others: 78)
刘小虎  李生 《软件学报》1998,9(10):797-800
Optimization of decision trees is a very important branch of decision tree learning. Building on ID3, an improved optimization algorithm is proposed. Whenever a new attribute is selected, the algorithm considers not only the information gain brought by that attribute itself, but also the information gain of the attributes selected after it; that is, it considers two levels of tree nodes at once. The improved algorithm has the same time complexity as ID3, and for the induction of logical expressions it is clearly superior to ID3.
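A minimal sketch of the two-level idea: score a candidate attribute by its own information gain plus the best weighted gain achievable one level deeper. All helper names and the row-as-dict layout are assumptions, not the paper's code:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def split(rows, labels, attr):
    """Group (rows, labels) by the value of attr; rows are dicts."""
    groups = {}
    for row, lab in zip(rows, labels):
        sub = groups.setdefault(row[attr], ([], []))
        sub[0].append(row)
        sub[1].append(lab)
    return groups.values()

def gain(rows, labels, attr):
    n = len(labels)
    remainder = sum(len(l) / n * entropy(l) for _, l in split(rows, labels, attr))
    return entropy(labels) - remainder

def two_level_gain(rows, labels, attr, attrs):
    """Gain of attr plus the best achievable weighted gain one level deeper."""
    n = len(labels)
    deeper = 0.0
    for sub_rows, sub_labels in split(rows, labels, attr):
        rest = [a for a in attrs if a != attr]
        if rest and len(set(sub_labels)) > 1:
            deeper += len(sub_labels) / n * max(gain(sub_rows, sub_labels, a) for a in rest)
    return gain(rows, labels, attr) + deeper
```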

5.
ReCA: A Reduction Method for Continuous-Valued Attributes   Cited by: 1 (self-citations: 1, others: 0)
Attribute reduction is one of the main applications and research topics of rough set theory. Most existing attribute reduction methods are suited only to discrete-valued attributes; for continuous-valued data, the usual practice is to discretize first. This preprocessing discards information and easily leads to erroneous reducts. For continuous-valued information systems, this paper proposes a new attribute reduction method, ReCA, which integrates the discretization of continuous-valued attributes with the reduction process. Using an information-entropy-based uncertainty measure as the fitness function, evolutionary computation yields the reduced attribute set and the set of discretization cut points simultaneously. Experiments show that the method not only performs attribute reduction effectively, but also, compared with the rough set and C4.5 approaches, yields fewer attributes and higher test accuracy.
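A hedged sketch of what an entropy-based fitness for ReCA-style candidates might look like: each candidate encodes an attribute subset plus per-attribute cut points, and lower conditional entropy of the decision given the discretized attributes means a fitter candidate. The encoding below is an assumption; the paper's exact measure and evolutionary operators are not reproduced:

```python
import math
from collections import Counter

def discretize(value, cuts):
    """Map a continuous value to the index of its interval among sorted cuts."""
    return sum(value > c for c in cuts)

def fitness(rows, decisions, attrs, cutmap):
    """Conditional entropy of the decision given the discretized attribute
    subset; an evolutionary search would minimize this (plus, say, a penalty
    on the number of attributes)."""
    n = len(decisions)
    buckets = {}
    for row, d in zip(rows, decisions):
        key = tuple(discretize(row[a], cutmap[a]) for a in attrs)
        buckets.setdefault(key, []).append(d)
    cond = 0.0
    for labs in buckets.values():
        m = len(labs)
        counts = Counter(labs)
        cond += m / n * -sum(c / m * math.log2(c / m) for c in counts.values())
    return cond
```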

6.
A Study of the ID3 Decision Tree Learning Algorithm   Cited by: 28 (self-citations: 0, others: 28)
ID3 is the core algorithm of decision tree learning. This paper describes the decision tree representation and the ID3 learning algorithm in detail, with particular attention to the rule for selecting the decision attribute. A worked example shows the full process by which the algorithm selects the first decision attribute, and the algorithm is then discussed; in ordinary cases, the ID3 algorithm can find the optimal decision tree.
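A minimal sketch of ID3's selection rule, picking the attribute with maximum information gain at the root; the tiny weather-style dataset is invented for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    n = len(labels)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())

rows = [{"outlook": "sunny", "windy": "no"}, {"outlook": "rain", "windy": "yes"},
        {"outlook": "sunny", "windy": "yes"}, {"outlook": "overcast", "windy": "no"}]
labels = ["no", "yes", "no", "yes"]
root = max(["outlook", "windy"], key=lambda a: info_gain(rows, labels, a))
print(root)  # "outlook": it separates the labels perfectly here
```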

7.
A Decision Tree Rule Extraction Method Based on Decision Entropy   Cited by: 2 (self-citations: 0, others: 2)
In a decision table, the confidence and object coverage of decision rules are important measures of decision ability. Building on knowledge rough entropy, the concept of decision entropy is proposed and a corresponding attribute significance is defined. The decision entropy of a condition-attribute subset is then used to measure its importance for decision classification, and a decision tree is constructed recursively, top-down. Finally, the tree is traversed and the extracted decision rules are simplified. The advantage of this method is that no attribute reduction is performed before the tree is built and rules are extracted, the computation is straightforward, and the time complexity is relatively low. A worked example shows that the method yields simpler and more effective decision rules.
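The paper defines decision entropy via knowledge rough entropy; its exact formula is not reproduced here. The stand-in below scores a condition-attribute subset by the weighted entropy of the decision within each equivalence class the subset induces, just to illustrate the greedy top-down construction loop:

```python
import math
from collections import Counter

def decision_entropy(rows, decisions, attrs):
    """Weighted decision entropy over the equivalence classes induced by attrs.
    Simplified stand-in, not the paper's rough-entropy-based definition."""
    n = len(rows)
    classes = {}
    for row, d in zip(rows, decisions):
        classes.setdefault(tuple(row[a] for a in attrs), []).append(d)
    h = 0.0
    for labs in classes.values():
        m = len(labs)
        h += m / n * -sum(c / m * math.log2(c / m) for c in Counter(labs).values())
    return h

def pick_attribute(rows, decisions, chosen, remaining):
    """Greedy top-down step: add the attribute that lowers decision entropy most."""
    return min(remaining, key=lambda a: decision_entropy(rows, decisions, chosen + [a]))
```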

8.
Discretization of continuous attributes in a decision table means partitioning a continuous attribute into a number of intervals and assigning a discrete value to each interval. This paper proposes a new algorithm for discretizing the continuous attributes of a decision table. First, decision strength is used to measure the significance of each condition attribute, and the condition attributes are sorted by significance in ascending order. Then, in the sorted order, all cut points of each condition attribute are examined and redundant cut points are removed, thereby discretizing the condition attributes. The algorithm is easy to understand and computationally simple, with time complexity O(3kn²).
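A hedged sketch of the cut-point pruning step: walk the candidate cuts of one attribute and drop any cut whose removal keeps the discretization consistent with the decisions. The single-attribute consistency test is a simplified stand-in for the paper's decision-strength criterion:

```python
def discretize(value, cuts):
    return sum(value > c for c in cuts)

def consistent(rows, decisions, attr, cuts):
    """No two objects agree on the discretized attribute but disagree on the decision."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = discretize(row[attr], cuts)
        if seen.setdefault(key, d) != d:
            return False
    return True

def prune_cuts(rows, decisions, attr, cuts):
    """Greedily drop each cut whose removal preserves consistency."""
    kept = list(cuts)
    for c in list(cuts):
        trial = [x for x in kept if x != c]
        if consistent(rows, decisions, attr, trial):
            kept = trial
    return kept
```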

9.
Predicting Nearly As Well As the Best Pruning of a Decision Tree   Cited by: 2 (self-citations: 2, others: 0)
Many algorithms for inferring a decision tree from data involve a two-phase process: first, a very large decision tree is grown, which typically ends up over-fitting the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be much worse (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s (1993) results on predicting using expert advice (where we view each pruning as an expert) to obtain an algorithm that has provably low prediction loss, but that is computationally infeasible. Next, we generalize and apply a method developed by Buntine (1990, 1992) and Willems, Shtarkov and Tjalkens (1993, 1995) to derive a very efficient implementation of this procedure.
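A minimal sketch of the first ingredient, prediction with expert advice: treat each pruning as an expert and combine their predictions with multiplicative weight updates. This naive version enumerates the experts explicitly; the paper's contribution is precisely an efficient implementation that avoids such enumeration:

```python
import math

def hedge_predict(experts, stream, eta=0.5):
    """experts: list of functions x -> prediction in {0, 1};
    stream: iterable of (x, true_label). Yields one prediction per example.
    Usage: predictions = list(hedge_predict(experts, stream))."""
    weights = [1.0] * len(experts)
    for x, y in stream:
        votes = [e(x) for e in experts]
        total = sum(weights)
        # Weighted vote over the experts' predictions.
        p1 = sum(w for w, v in zip(weights, votes) if v == 1) / total
        yield 1 if p1 >= 0.5 else 0
        # Multiplicative update: shrink the weight of every mistaken expert.
        weights = [w * math.exp(-eta * (v != y)) for w, v in zip(weights, votes)]
```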
