Similar Documents
Found 20 similar documents (search took 15 ms)
1.
While many constructive induction algorithms focus on generating new binary attributes, this paper explores novel methods of constructing nominal and numeric attributes. We propose a new constructive operator, X-of-N. An X-of-N representation is a set containing one or more attribute-value pairs. For a given instance, the value of an X-of-N representation is the number of its attribute-value pairs that are true of the instance. A single X-of-N representation can directly and simply represent any concept representable by a single conjunctive, a single disjunctive, or a single M-of-N representation commonly used for constructive induction, but the reverse is not true. In this paper, we describe a constructive decision tree learning algorithm, called XofN. When building decision trees, this algorithm creates one X-of-N representation, either as a nominal attribute or as a numeric attribute, at each decision node. X-of-N representations are constructed by greedily searching the space defined by all the attribute-value pairs of a domain. Experimental results reveal that constructing X-of-N attributes can significantly improve the performance of decision tree learning in both artificial and natural domains, in terms of higher prediction accuracy and lower theory complexity. The results also show the performance advantages of constructing X-of-N attributes over constructing conjunctive, disjunctive, or M-of-N representations for decision tree learning.
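As a concrete illustration of the operator, the sketch below computes the value of an X-of-N representation for an instance. The attribute names and the dictionary encoding are assumptions made for illustration, not code from the paper.

```python
# Minimal sketch (assumed encoding): an X-of-N is a list of (attribute, value)
# pairs, an instance is a dict mapping attributes to values. The X-of-N value
# is the count of pairs that are true of the instance.

def x_of_n_value(x_of_n, instance):
    return sum(1 for attr, val in x_of_n if instance.get(attr) == val)

# Example: X-of-N = {color=red, size=small, shape=round}
x = [("color", "red"), ("size", "small"), ("shape", "round")]
print(x_of_n_value(x, {"color": "red", "size": "large", "shape": "round"}))  # -> 2
```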

2.
An Optimization Algorithm for Decision Trees   Cited: 78 (self: 1, others: 78)
刘小虎  李生 《软件学报》1998,9(10):797-800
Optimizing decision trees is an important branch of decision tree learning. Building on ID3, this paper proposes an improved optimization algorithm. Whenever a new attribute is to be selected, the algorithm considers not only the information gain brought by that attribute itself, but also the information gain of the attributes that can be selected after it, i.e., it considers two levels of tree nodes at once. The improved algorithm has the same time complexity as ID3, and for the induction of logical expressions it clearly outperforms ID3.
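The two-level idea can be pictured as a one-step lookahead on information gain. The following is a hedged illustration under assumed data structures (lists of dicts with a "class" target attribute); it is not the authors' implementation.

```python
import math
from collections import Counter

def entropy(rows, target="class"):
    n = len(rows)
    return -sum(c / n * math.log2(c / n)
                for c in Counter(r[target] for r in rows).values())

def gain(rows, attr, target="class"):
    # Standard ID3 information gain of splitting rows on attr.
    n = len(rows)
    rem = sum(cnt / n * entropy([r for r in rows if r[attr] == v], target)
              for v, cnt in Counter(r[attr] for r in rows).items())
    return entropy(rows, target) - rem

def two_level_score(rows, attr, attrs, target="class"):
    # Gain of attr plus, for each branch it creates, the best gain a second
    # split could add (weighted by branch size): two node levels considered.
    score, n = gain(rows, attr, target), len(rows)
    rest = [a for a in attrs if a != attr]
    for v in set(r[attr] for r in rows):
        subset = [r for r in rows if r[attr] == v]
        if rest:
            score += len(subset) / n * max(gain(subset, a, target) for a in rest)
    return score
```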

3.
This paper presents a novel host-based combinatorial method, based on k-means clustering and the ID3 decision tree learning algorithm, for unsupervised classification of anomalous and normal activities in computer network ARP traffic. The k-means clustering method is first applied to the normal training instances to partition them into k clusters using Euclidean distance similarity, and an ID3 decision tree is constructed on each cluster. Anomaly scores from the k-means clustering algorithm and decisions of the ID3 decision trees are extracted, a dedicated algorithm combines the results of the two methods into final anomaly score values, and a threshold rule decides whether a test instance is normal. Experiments are performed on captured network ARP traffic; anomaly criteria were defined and applied to the captured traffic to generate the normal training instances. Performance of the proposed approach is evaluated using five defined measures and compared empirically with the performance of the individual k-means clustering and ID3 decision tree algorithms and with other proposed approaches based on Markov chains and stochastic learning automata. Experimental results show that the proposed approach achieves specificity and positive predictive value as high as 96% and 98%, respectively.
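A hedged sketch of the cascade's overall shape: cluster the normal instances, then fit one tree per cluster. sklearn components stand in for the paper's (an entropy-criterion tree approximates ID3 but is not identical), and the score-combination step, which the abstract describes only abstractly, is omitted here.

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def fit_kmeans_id3(X_normal, X_train, y_train, k=3):
    # Step 1: partition the normal instances into k clusters (Euclidean distance).
    km = KMeans(n_clusters=k, n_init=10).fit(X_normal)
    # Step 2: build one decision tree per cluster on the training data that
    # falls into it (entropy criterion approximates ID3's information gain).
    trees = {}
    assignments = km.predict(X_train)
    for c in range(k):
        mask = assignments == c
        if mask.any():
            trees[c] = DecisionTreeClassifier(criterion="entropy").fit(
                X_train[mask], y_train[mask])
    return km, trees
```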

4.
A Study of the ID3 Decision Tree Learning Algorithm   Cited: 28 (self: 0, others: 28)
ID3 is the core algorithm of decision tree learning. This paper describes the decision tree representation and the ID3 decision tree learning algorithm in detail, paying particular attention to the rule for selecting the decision attribute. A worked example traces in detail how the algorithm selects its first decision attribute, and the algorithm is then discussed; in general, the ID3 algorithm can find the optimal decision tree.
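For reference, a compact sketch of ID3's selection rule and recursion over nominal attributes (the standard algorithm, under an assumed list-of-dicts encoding; entropy() and gain() exactly as defined in the sketch under item 2 above).

```python
from collections import Counter

def id3(rows, attrs, target):
    # Uses entropy() and gain() from the sketch under item 2.
    labels = [r[target] for r in rows]
    if len(set(labels)) == 1 or not attrs:            # pure node, or no attributes left
        return Counter(labels).most_common(1)[0][0]   # leaf: (majority) class
    best = max(attrs, key=lambda a: gain(rows, a, target))  # the selection rule
    return {best: {v: id3([r for r in rows if r[best] == v],
                          [a for a in attrs if a != best], target)
                   for v in set(r[best] for r in rows)}}
```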

5.
Learning in the real world is interactive, incremental, and dynamic in multiple dimensions: new data can appear at any time, from anywhere, and of any type. Incremental learning is therefore increasingly important in real-world data mining scenarios, and decision trees, owing to their characteristics, have been widely used for it. In this paper, we propose a novel incremental decision tree algorithm based on rough set theory. To improve computational efficiency, when a new instance arrives, the algorithm follows the given tree adaptation strategies and only modifies an existing leaf node of the currently active decision tree or adds a new leaf node to it, avoiding the high time cost that traditional incremental methods pay for repeatedly rebuilding the tree. Moreover, rough-set-based attribute reduction filters redundant attributes out of the original attribute set, and two basic rough set notions, the significance of attributes and the dependency of attributes, serve as the heuristic information for selecting splitting attributes. Finally, we apply the proposed algorithm to intrusion detection. The experimental results demonstrate that our algorithm provides competitive solutions for incremental learning.
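The two rough set notions the abstract names can be sketched directly. Below is a minimal illustration of the dependency degree and the derived attribute significance under an assumed list-of-dicts encoding; it is not the paper's code.

```python
from collections import defaultdict

def dependency(rows, cond_attrs, dec_attr):
    # gamma(C, D): the fraction of instances whose C-equivalence class is
    # consistent, i.e. all members of the class share one decision value.
    blocks = defaultdict(list)
    for r in rows:
        blocks[tuple(r[a] for a in cond_attrs)].append(r[dec_attr])
    pos = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return pos / len(rows)

def significance(rows, cond_attrs, dec_attr, a):
    # Significance of attribute a: the drop in dependency when a is removed.
    rest = [c for c in cond_attrs if c != a]
    return dependency(rows, cond_attrs, dec_attr) - dependency(rows, rest, dec_attr)
```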

6.
Most methods that generate a decision tree for a specific problem use examples of data instances in the tree-generation process. This article proposes a method called RBDT-1 (rule-based decision tree) for learning a decision tree from a set of decision rules that cover the data instances, rather than from the data instances themselves. The goal is to create, on demand, a short and accurate decision tree from a stable or dynamically changing set of rules. The rules could be produced by an expert, induced from examples of decision instances by an inductive rule learning program such as an AQ-type rule induction system, or extracted from a tree generated by another method such as ID3 or C4.5. In terms of tree complexity (the number of nodes and leaves in the decision tree), RBDT-1 compares favorably with AQDT-1 and AQDT-2, two methods that create decision trees from rules. RBDT-1 also compares favorably with ID3 and is as effective as C4.5, both of which are well-known methods that generate decision trees from data examples. Experiments show that the classification accuracies of the decision trees produced by all the compared methods are indistinguishable.

7.
An Inductive Reasoning Retrieval Method Based on Rough Sets   Cited: 4 (self: 0, others: 0)
张光前  邓贵仕  吕文颜 《计算机工程》2003,29(16):23-24,105
Inductive retrieval, a retrieval method based on the ID3 algorithm, is one of the retrieval methods commonly used in case-based reasoning (CBR). In the context of CBR, this paper analyzes the relationship between the importance and the redundancy of case attributes, and on that basis constructs a heuristic function from the significance of each attribute relative to the other attributes. Compared with the ID3 algorithm, the proposed algorithm not only reduces computational complexity but also, to some extent, removes noise from the samples, improving the efficiency of inductive retrieval.

8.
An Improvement of the ID3 Algorithm Based on Attribute Values   Cited: 6 (self: 1, others: 5)
The ID3 algorithm is a classic decision tree classification algorithm in data mining. This paper proposes improvements targeting two weaknesses of ID3: its bias toward attributes with many values, and its effectiveness only on small data sets. When the numbers of values of the training attributes differ widely, the number of attribute values N is introduced into the splitting criterion, which to some extent overcomes ID3's bias toward many-valued attributes and yields more compact, better-shaped decision trees. The implementation uses pre-pruning: a threshold keeps the tree from growing to full depth, greatly speeding up the algorithm while preserving classification accuracy. Experimental results show that the improved algorithm (AVID3) is more effective than traditional ID3 on many data sets.
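The abstract does not give the exact modified criterion, so the sketch below assumes the simplest form, dividing the gain by the number of distinct values N, plus a threshold-based pre-pruning test. Both are labeled assumptions, not the published AVID3 formulas; gain() is the standard ID3 information gain from the sketch under item 2.

```python
# Assumes gain() as defined in the sketch under item 2.

def avid3_score(rows, attr, target):
    # Assumed form of the modified criterion: damp the information gain by the
    # number of distinct values N, countering the many-valued-attribute bias.
    return gain(rows, attr, target) / len(set(r[attr] for r in rows))

def should_stop(rows, attrs, target, threshold=0.05):
    # Pre-pruning: stop growing this branch once no attribute clears the
    # threshold, trading a fully grown tree for speed at comparable accuracy.
    return not attrs or max(avid3_score(rows, a, target) for a in attrs) < threshold
```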

9.
This paper shows that decision tree classifiers remain important among data mining techniques for classification, discusses the decision-tree induction algorithms ID3 and C4.5, and explains their rules for selecting the decision attribute. Worked examples trace the execution of ID3 and C4.5; the results demonstrate the advantages of C4.5 over ID3, in particular its more reasonable and accurate handling of data with many-valued attributes.
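C4.5's key difference from ID3 on many-valued attributes is the gain ratio criterion. A minimal sketch, under the same assumed list-of-dicts encoding as the earlier sketches:

```python
import math
from collections import Counter

def _entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def gain_ratio(rows, attr, target):
    # C4.5 normalizes ID3's information gain by the split information of the
    # attribute, so an attribute with many values is no longer favored merely
    # for fragmenting the data.
    n = len(rows)
    value_counts = Counter(r[attr] for r in rows)
    remainder = sum(
        cnt / n * _entropy(list(Counter(r[target] for r in rows
                                        if r[attr] == v).values()))
        for v, cnt in value_counts.items())
    info_gain = _entropy(list(Counter(r[target] for r in rows).values())) - remainder
    split_info = _entropy(list(value_counts.values()))
    return info_gain / split_info if split_info else 0.0
```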

10.
Decision Tree Induction Based on Rough Set Techniques   Cited: 3 (self: 0, others: 3)
The ID3 algorithm is a typical decision tree induction algorithm: it uses information gain as the criterion for selecting the expansion attribute at the root of each subtree and generates the decision tree recursively. However, ID3 tends to choose attributes with many values as root nodes, and it assumes that the class proportions in the training set match the class proportions in the actual problem domain. This paper proposes a new decision tree induction algorithm based on rough set techniques; it is fully data-driven and overcomes the above shortcomings of ID3.

11.
To address alarm flooding and the complexity of fault handling in networks, this paper proposes a new alarm correlation clustering algorithm that combines cellular learning automata (CLA) with the ID3 decision tree. The CLA algorithm uses learning automata to cluster alarm signals; whenever subgroups or overlaps appear within a cluster, the ID3 decision tree learning algorithm refines the decision boundary by training on data samples split from that cluster, greatly reducing the number of clustered alarms and locating the root-cause alarms. Simulations show that the algorithm can effectively analyze large volumes of alarm signals and identify root-cause alarms fairly accurately.

12.
Research on Incremental Decision Tree Algorithms   Cited: 2 (self: 1, others: 2)
This paper addresses the problem that the traditional ID3 algorithm cannot build decision trees from incrementally growing data sets. Building on the traditional ID3 decision tree algorithm and an existing incremental algorithm, and exploiting the behavior of entropy change from information theory, three theorems related to the incremental decision tree algorithm are correspondingly improved, and the validity and reliability of the improved incremental algorithm are proven theoretically. The complexity of the incremental algorithm is also compared with that of ID3, leading to the conclusion that both the instance cost and the information entropy cost of the incremental algorithm are higher than those of ID3. Finally, an experiment demonstrates that the improved incremental algorithm constructs decision trees essentially identical in shape to those produced by ID3.

13.
In this paper, we present "k-means+ID3", a method that cascades k-means clustering and ID3 decision tree learning to classify anomalous and normal activities in a computer network, an active electronic circuit, and a mechanical mass-beam system. The k-means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, representing a density region of normal or anomalous instances, we build an ID3 decision tree; the tree on each cluster refines the decision boundaries by learning the subgroups within the cluster. To obtain a final classification decision, the decisions of the k-means and ID3 methods are combined using two rules: 1) the nearest-neighbor rule and 2) the nearest-consensus rule. We perform experiments on three data sets: 1) network anomaly data (NAD), 2) Duffing equation data (DED), and 3) mechanical system data (MSD), which contain measurements from three distinct application domains: computer networks, an electronic circuit implementing a forced Duffing equation, and a mechanical system, respectively. Results show that the detection accuracy of the k-means+ID3 method is as high as 96.24 percent at a false-positive rate of 0.03 percent on NAD; the total accuracy is as high as 80.01 percent on MSD and 79.9 percent on DED.
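A hedged reading of the two combination rules. The paper defines them precisely; the versions below are only one plausible interpretation, in which nearest-neighbor trusts the tree of the closest cluster and nearest-consensus scans the f closest clusters for one whose tree decision agrees with the cluster's own k-means label. The names, parameters, and fallback are assumptions.

```python
import numpy as np

def nearest_neighbor_rule(x, km, trees):
    # Classify with the ID3 tree attached to the closest k-means centroid.
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    return trees[int(np.argmin(d))].predict(x.reshape(1, -1))[0]

def nearest_consensus_rule(x, km, trees, cluster_labels, f=3):
    # Scan the f nearest clusters; return the first tree decision that agrees
    # with that cluster's k-means label (consensus of the two methods).
    order = np.argsort(np.linalg.norm(km.cluster_centers_ - x, axis=1))
    for c in order[:f]:
        vote = trees[int(c)].predict(x.reshape(1, -1))[0]
        if vote == cluster_labels[int(c)]:
            return vote
    return trees[int(order[0])].predict(x.reshape(1, -1))[0]  # fallback: nearest
```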

14.
A New Decision Tree Induction Algorithm Based on Attribute-Value Pairs   Cited: 6 (self: 1, others: 5)
The decision tree induction algorithm ID3 is a representative method in instance-based learning. To remedy ID3's bias toward attributes with many values, this paper proposes a new decision tree induction algorithm based on attribute-value pairs, AVPI, which produces smaller trees and faster classification than ID3. Applied to a color matching system, the algorithm achieved good results.

15.
An Attribute-Weighted Decision Tree Algorithm   Cited: 1 (self: 0, others: 1)
ID3 and C4.5 are simple and effective decision tree classification algorithms, but their accuracy suffers on complex decision problems. This paper proposes a new attribute-weighted decision tree algorithm: drawing on rough set theory, attributes are weighted according to how strongly they influence the decision, and the tree is built with these weights. The weights mark the importance of each attribute and can be learned from the training data. Experimental results show that the algorithm clearly improves the accuracy of the decision results.

16.
The decision tree algorithm is one of the classic classification mining algorithms and has wide practical value. The classic ID3 decision tree algorithm is memory-resident and can only handle small data sets; it is powerless in the face of massive data sets. This paper analyzes the parallelizability of the classic ID3 tree-construction algorithm in depth and, using the MapReduce programming model of cloud computing, proposes and implements a parallel ID3 decision tree classification algorithm for massive data. Experimental results show that the algorithm is effective and feasible.
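A hedged sketch of the parallelizable core of ID3: computing, in one pass, the (attribute, value, class) counts that information gain needs. The function and key names are illustrative, and the toy driver merely stands in for a MapReduce runtime; the paper's actual job design is not reproduced here.

```python
from collections import Counter

def map_phase(record, attrs, target):
    # One map call per record: emit a 1 for each (attribute, value, class) key.
    for a in attrs:
        yield (a, record[a], record[target]), 1

def reduce_phase(emitted):
    # Shuffle/reduce: sum counts per key. Information gain for every attribute
    # can then be computed from these totals without rescanning the records.
    totals = Counter()
    for key, count in emitted:
        totals[key] += count
    return totals

# Toy driver standing in for the MapReduce runtime:
records = [{"outlook": "sunny", "windy": "no",  "play": "yes"},
           {"outlook": "rain",  "windy": "yes", "play": "no"}]
emitted = (kv for r in records for kv in map_phase(r, ["outlook", "windy"], "play"))
print(reduce_phase(emitted))
```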

17.
Hybrid decision tree   Cited: 6 (self: 0, others: 6)

18.
Linear tree     
In this paper we present the system Ltree for propositional supervised learning. Ltree can define decision surfaces both orthogonal and oblique to the axes defined by the attributes of the input space. It does so by combining a decision tree with a linear discriminant by means of constructive induction. At each decision node, Ltree defines a new instance space by inserting new attributes that are projections of the examples falling at that node onto the hyperplanes given by a linear discriminant function; this new instance space is propagated down through the tree. Tests based on these new attributes are oblique with respect to the original input space. Ltree is a probabilistic tree in the sense that it outputs a class probability distribution for each query example. The class probability distribution is computed at learning time, taking into account the different class distributions on the path from the root to the current node. We have carried out experiments on twenty-one benchmark datasets and compared our system with other well-known decision tree systems (orthogonal and oblique) such as C4.5, OC1, LMDT, and CART. On these datasets we observed that our system has advantages in accuracy and learning time at statistically significant confidence levels.
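A hedged sketch of the constructive step described above: at a node, fit a linear discriminant and append the examples' projections onto its hyperplanes as new attributes, so that an axis-parallel test on an appended column is oblique in the original space. sklearn's LDA stands in for the paper's discriminant; the function name is illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def augment_with_discriminant(X, y):
    lda = LinearDiscriminantAnalysis().fit(X, y)
    # Signed distances to the discriminant hyperplanes become new attributes
    # (the binary case returns a 1-D array, hence the reshape).
    proj = np.asarray(lda.decision_function(X)).reshape(len(X), -1)
    return np.hstack([X, proj]), lda
```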

19.
Construction and Study of a Multivariate Decision Tree   Cited: 3 (self: 0, others: 3)
Univariate decision tree algorithms yield large trees with complex, hard-to-understand rules, whereas multivariate decision trees are an effective data mining method for classification; the key to constructing one is to select, according to the correlations among attributes, a suitable combination of attributes to form a new attribute for each node. Combining the knowledge-dependency measure from rough set theory with the notion of dispersion of the condition attribute set in an information system, this paper proposes a construction algorithm for multivariate decision trees (RD). Experimental results on several UCI data sets show that the proposed multivariate decision tree algorithm classifies somewhat better than both the traditional ID3 algorithm and kernel-based multivariate decision trees.

20.
《Knowledge》1999,12(5-6):269-275
An algorithm for decision-tree induction is presented in which attribute selection is based on the evidence-gathering strategies used by doctors in sequential diagnosis. Since the attribute the algorithm selects at a given node is often the best attribute according to Quinlan's information gain criterion, the induced decision tree is often identical to the ID3 tree when the number of attributes is small. In problem-solving applications of the induced decision tree, an advantage of the approach is that the relevance of a selected attribute or test can be explained in strategic terms. An implementation of the algorithm in an environment providing integrated support for incremental learning, problem solving, and explanation is presented.
