Similar Documents
20 similar documents found (search time: 51 ms)
1.
Research on an Incremental Decision Tree Algorithm   (Total citations: 2, self-citations: 1, other citations: 2)
This paper addresses the problem that the traditional ID3 algorithm cannot construct a decision tree over an incrementally growing data set. Building on the traditional ID3 algorithm and an existing incremental algorithm, and using properties of the entropy-change principle from information theory, three theorems underlying the incremental decision tree algorithm are improved, and the effectiveness and reliability of the improved algorithm are proven theoretically. The complexities of the incremental algorithm and ID3 are also compared, with the conclusion that both the instance cost and the information-entropy cost of the incremental algorithm are higher than those of ID3. Finally, an experiment shows that the improved incremental algorithm constructs decision trees essentially identical in shape to those produced by ID3.
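The entropy-based attribute selection that ID3 and its incremental variants share can be sketched as follows (a minimal illustration of the information-gain criterion only, not the paper's incremental construction):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Reduction in entropy from splitting rows on the attribute at attr_index."""
    n = len(rows)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))  # → 1.0 (splitting on weather separates classes perfectly)
```

ID3 picks the attribute with the highest gain at each node; the incremental variants update these entropy statistics as new instances arrive instead of recomputing from scratch.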

2.
Instance-Based Learning Algorithms   (Total citations: 46, self-citations: 1, other citations: 45)
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
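The storage-reduction idea, keeping an instance only when the instances stored so far would misclassify it, can be sketched roughly like this (an IB2-flavored sketch under simplified assumptions; it omits the authors' significance test for noisy instances):

```python
def nearest_label(memory, point):
    """1-NN prediction from stored (point, label) pairs using squared Euclidean distance."""
    return min(memory, key=lambda m: sum((a - b) ** 2 for a, b in zip(m[0], point)))[1]

def reduce_storage(training):
    """Store an instance only if the instances kept so far misclassify it."""
    memory = [training[0]]
    for point, label in training[1:]:
        if nearest_label(memory, point) != label:
            memory.append((point, label))
    return memory

training = [((0.0, 0.0), "a"), ((0.1, 0.0), "a"), ((1.0, 1.0), "b"), ((0.9, 1.0), "b")]
memory = reduce_storage(training)
print(len(memory))  # → 2: only one representative per region is kept
```

Correctly classified instances are discarded, which is where the large storage savings over plain nearest neighbor come from.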

3.
This paper presents a novel host-based combinatorial method based on the k-Means clustering and ID3 decision tree learning algorithms for unsupervised classification of anomalous and normal activities in computer network ARP traffic. The k-Means clustering method is first applied to the normal training instances to partition them into k clusters using Euclidean distance similarity. An ID3 decision tree is then constructed on each cluster. Anomaly scores from the k-Means clustering algorithm and decisions of the ID3 decision trees are extracted, and a dedicated algorithm combines the results of the two to obtain final anomaly score values. A threshold rule is applied to decide whether a test instance is normal. Experiments are performed on captured network ARP traffic; anomaly criteria were defined and applied to the captured traffic to generate normal training instances. Performance of the proposed approach is evaluated using five defined measures and compared empirically with the performance of the individual k-Means clustering and ID3 decision tree classification algorithms, as well as other proposed approaches based on Markov chains and stochastic learning automata. Experimental results show that the proposed approach achieves specificity and positive predictive value as high as 96% and 98%, respectively.
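The k-Means half of the combination, scoring a test instance by its distance to the nearest centroid of the normal-traffic clusters and thresholding, can be sketched as below (a toy sketch with made-up 2-D feature vectors and a hypothetical threshold; the paper additionally builds an ID3 tree per cluster and fuses both outputs):

```python
import math

def centroid(points):
    """Componentwise mean of a cluster of points."""
    dims = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dims))

def anomaly_score(clusters, point):
    """Distance from the point to the nearest cluster centroid."""
    return min(math.dist(centroid(c), point) for c in clusters)

# Clusters learned from normal training traffic (toy 2-D feature vectors).
clusters = [[(0.0, 0.0), (0.2, 0.0)], [(5.0, 5.0), (5.2, 5.0)]]
threshold = 1.0  # hypothetical threshold for the decision rule
print(anomaly_score(clusters, (0.1, 0.1)) > threshold)  # → False (normal)
print(anomaly_score(clusters, (9.0, 9.0)) > threshold)  # → True (anomalous)
```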

4.
Most methods that generate decision trees for a specific problem use examples of data instances in the tree-generation process. This article proposes a method called RBDT-1 (rule-based decision tree) for learning a decision tree from a set of decision rules that cover the data instances, rather than from the data instances themselves. The goal is to create, on demand, a short and accurate decision tree from a stable or dynamically changing set of rules. The rules can be generated by an expert, induced from examples of decision instances by an inductive rule-learning program such as the AQ-type rule induction programs, or extracted from a tree generated by another method, such as ID3 or C4.5. In terms of tree complexity (number of nodes and leaves in the decision tree), RBDT-1 compares favorably with AQDT-1 and AQDT-2, which also create decision trees from rules. RBDT-1 also compares favorably with ID3 and is as effective as C4.5, both of which are well-known methods that generate decision trees from data examples. Experiments show that the classification accuracies of the decision trees produced by all the compared methods are indistinguishable.

5.
Decision tree algorithms in machine learning provide important data-classification capabilities, but the classification performance of the information-gain-based ID3 algorithm and the Gini-index-based CART algorithm still leaves room for improvement. This work constructs an adaptive fused measure of information gain and Gini index and designs an effective decision tree algorithm to improve on both basic algorithms. After analyzing the heterogeneity between the information-theoretic representation of information gain and the algebraic representation of the Gini index, a knowledge-based weighted linear combination is used to fuse the two measures, yielding the heuristic decision tree construction algorithm IGGI. IGGI effectively improves on ID3 and CART, and experiments on relevant data sets show that it usually achieves better classification accuracy.
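The weighted linear fusion of the two impurity measures can be sketched as follows (a minimal sketch; `alpha` here is a hypothetical fixed weight, whereas the paper derives its weighting from knowledge of the data):

```python
import math
from collections import Counter

def entropy(labels):
    """Information-theoretic impurity used by ID3."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Algebraic impurity used by CART."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def fused_impurity(labels, alpha=0.5):
    """Weighted linear combination of entropy and Gini impurity."""
    return alpha * entropy(labels) + (1 - alpha) * gini(labels)

labels = ["yes", "yes", "no"]
print(round(fused_impurity(labels), 4))  # → 0.6814
```

Replacing entropy (or Gini) with the fused measure in the usual split-selection loop gives a heuristic of the IGGI flavor.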

6.
Research on a Decision Tree Optimization Algorithm Based on Variable-Precision Rough Sets   (Total citations: 4, self-citations: 2, other citations: 4)
Applying variable-precision rough set theory, this paper proposes a method for constructing decision trees with a new heuristic function. The method uses the classification-quality measure of variable-precision rough sets as its information function for selecting condition attributes. Compared with the ID3 algorithm, the method fully accounts for dependency and redundancy among attributes and, in particular, for noise in the training data: it allows a degree of inconsistency in the class labels of instances assigned to the positive region during tree construction, which simplifies the resulting tree and improves its generalization ability.

7.
Decision tree algorithms have played a major role in artificial intelligence, but the traditional ID3 algorithm cannot meet the requirements of incremental learning, and incremental decision tree algorithms have become a current research focus. This paper introduces the concept of attribute-value class counters, then focuses on the ID4 and ID5R algorithms, and concludes with a comparison of the two.

8.
A scalable, incremental learning algorithm for classification problems   (Total citations: 5, self-citations: 0, other citations: 5)
In this paper a novel data mining algorithm, Clustering and Classification Algorithm-Supervised (CCA-S), is introduced. CCA-S enables the scalable, incremental learning of a non-hierarchical cluster structure from training data. This cluster structure serves as a function that maps the attribute values of new data to their target class, that is, classifies new data. CCA-S uses both the distance and the target class of the training data points to derive the cluster structure. We first present the problems that many existing data mining algorithms for classification, such as decision trees and artificial neural networks, have with scalable and incremental learning. We then describe CCA-S and discuss its advantages in scalable, incremental learning. Results of applying CCA-S to several common classification data sets are presented; they show that the classification performance of CCA-S is comparable to that of other data mining algorithms such as decision trees, artificial neural networks, and discriminant analysis.

9.
By analyzing the basic principle of the ID3 algorithm and its bias toward many-valued attributes, an optimized decision tree algorithm based on the correlation coefficient is proposed. First, the correlation coefficient is introduced to improve ID3 and thereby overcome its multi-value bias; then, properties of the Taylor and Maclaurin formulas are used to approximate and simplify the information-gain formula. A worked example on concrete data shows that the optimized ID3 algorithm resolves the multi-value bias. Experimental results on standard UCI data sets show that, during decision tree construction, the optimized algorithm both raises the average classification accuracy and lowers the construction complexity, thus shortening the tree-generation time; when the number of samples in the data set is large, the efficiency of the optimized ID3 algorithm improves markedly.
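One common way such a Maclaurin-style simplification works is to replace the logarithm in each entropy term with its first-order expansion around p = 1, avoiding log evaluations during gain computation. Whether this matches the paper's exact formula is an assumption; the sketch below only illustrates the idea:

```python
import math

def entropy_term(p):
    """Exact contribution of a class with probability p to the entropy."""
    return -p * math.log2(p) if p > 0 else 0.0

def entropy_term_approx(p):
    """First-order simplification: ln p ≈ p - 1 near p = 1,
    so -p * log2(p) ≈ p * (1 - p) / ln 2."""
    return p * (1 - p) / math.log(2)

# The approximation tightens as p approaches 1.
for p in (0.9, 0.95, 0.99):
    print(round(entropy_term(p), 4), round(entropy_term_approx(p), 4))
```

Because split selection only compares gains, a monotonicity-preserving approximation like this can rank attributes almost identically while being much cheaper to evaluate.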

10.
The relation between the decision trees generated by a machine learning algorithm and the hidden layers of a neural network is described. A continuous ID3 algorithm is proposed that converts decision trees into hidden layers. The algorithm allows self-generation of a feedforward neural network architecture. In addition, it allows interpretation of the knowledge embedded in the generated connections and weights. A fast simulated annealing strategy, known as Cauchy training, is incorporated into the algorithm to escape from local minima. The performance of the algorithm is analyzed on spiral data.

11.
This paper first describes the basic idea of decision trees in data mining, then briefly introduces the classic decision tree algorithm (ID3). Based on ID3, it discusses four factors that affect a decision tree and analyzes them in detail using real data. Experiments show that whenever any one of the four factors changes, the decision tree must be rebuilt.

12.
A Decision Tree Classification Algorithm Based on an Association-Degree Function   (Total citations: 10, self-citations: 0, other citations: 10)
Han Songlai, Zhang Hui, Zhou Huaping. Journal of Computer Applications, 2005, 25(11): 2655-2657
To overcome the multi-value bias common in decision tree algorithms, a new decision tree algorithm based on an association-degree function, the AF algorithm, is proposed, and the principle by which it avoids the bias is analyzed theoretically. Experiments show that, compared with the ID3 algorithm, the AF algorithm not only overcomes the multi-value bias but also maintains high classification accuracy.

13.
Mining streaming data is a hot topic in data mining. When performing classification on data streams, traditional classification algorithms based on decision trees, such as ID3 and C4.5, are relatively inefficient in both time and space due to the characteristics of streaming data. Random decision trees offer advantages in both respects. This paper proposes SRMTDS (Semi-Random Multiple decision Trees for Data Streams), an incremental algorithm for mining data streams based on random decision trees. SRMTDS uses the Hoeffding bound inequality to choose the minimum number of split examples, a heuristic method to compute the information gain used to obtain the split thresholds of numerical attributes, and a naive Bayes classifier to estimate the class labels at the tree leaves. An extensive experimental study shows that SRMTDS improves on VFDTc, a state-of-the-art decision tree algorithm for classifying data streams, in time, space, accuracy, and anti-noise capability.
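The Hoeffding-bound split test used by such stream learners can be sketched as follows (a generic sketch of the standard bound, not SRMTDS's full procedure): split only once enough examples have been seen that the observed gain difference between the two best attributes exceeds the bound ε.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding epsilon: with probability 1 - delta, the true mean of a variable
    with the given range is within epsilon of the mean observed over n examples."""
    return math.sqrt(value_range ** 2 * math.log(1 / delta) / (2 * n))

def enough_examples(gain_best, gain_second, value_range, delta, n):
    """Split when the observed gain difference exceeds the Hoeffding epsilon."""
    return gain_best - gain_second > hoeffding_bound(value_range, delta, n)

print(enough_examples(0.30, 0.20, 1.0, 1e-6, 500))   # → False: epsilon ≈ 0.118, keep waiting
print(enough_examples(0.30, 0.20, 1.0, 1e-6, 2000))  # → True: epsilon ≈ 0.059, split now
```

Since ε shrinks as 1/√n, the tree commits to a split using the minimum number of examples that makes the choice statistically safe.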

14.
The data-driven characteristic of the Version Space rule-learning method lets it work efficiently in memory even when the training set is enormous. However, the concept hierarchy of each attribute, used to generalize/specialize the hypotheses of the specific/general (S/G) sets, is processed sequentially and instance by instance, which degrades performance. ID3, by contrast, generates a decision tree by ordering attributes according to their entropies so as to reduce the number of attributes on some tree paths; unlike Version Space, however, ID3 generates an extremely complex decision tree when the training set is enormous. We therefore propose a method called AGE (ARCH + OGL + ASE, where ARCH = "Automatic geneRation of Concept Hierarchies", OGL = "Optimal Generalization Level", and ASE = "Attribute Selection by Entropy") that combines the advantages of Version Space and ID3 to learn rules from object-oriented databases (OODBs) using the smallest number of learning features as selected by entropy. Simulations show that the performance of our learning algorithm is better than that of both Version Space and ID3. Furthermore, AGE's time and space complexities are both linear in the number of training instances.

15.
A New Incremental Decision Tree Algorithm   (Total citations: 1, self-citations: 0, other citations: 1)
For online classification systems with rapidly growing data, such as customer behavior analysis, Web log analysis, and network intrusion detection, adapting quickly to newly added samples is key to correct classification and sustained operation. This paper proposes a new incremental decision tree algorithm that, combined with a Bayesian method, rapidly trains a new decision tree from newly added samples on top of the existing tree. Experimental results show that the proposed algorithm solves this problem well: compared with rebuilding the tree from scratch, it costs less time and achieves higher classification accuracy, making it better suited to online classification systems.

16.
Machine Learning for Intelligent Processing of Printed Documents   (Total citations: 1, self-citations: 0, other citations: 1)
A paper document processing system is an information system component which transforms information on printed or handwritten documents into a computer-revisable form. In intelligent systems for paper document processing this information capture process is based on knowledge of the specific layout and logical structures of the documents. This article proposes the application of machine learning techniques to acquire the specific knowledge required by an intelligent document processing system, named WISDOM++, that manages printed documents, such as letters and journals. Knowledge is represented by means of decision trees and first-order rules automatically generated from a set of training documents. In particular, an incremental decision tree learning system is applied for the acquisition of decision trees used for the classification of segmented blocks, while a first-order learning system is applied for the induction of rules used for the layout-based classification and understanding of documents. Issues concerning the incremental induction of decision trees and the handling of both numeric and symbolic data in first-order rule learning are discussed, and the validity of the proposed solutions is empirically evaluated by processing a set of real printed documents.

17.
To address the difficulty of hospital information management, the complexity of its data types, and the low utilization of hospital management data, a hospital information management system is designed; the system software adopts a client/server (C/S) architecture. For hospital data mining, an improved Apriori algorithm and an incremental decision tree algorithm are used to process the data and raise information utilization. A simulated experimental scheme is designed to validate the algorithms: the improved Apriori algorithm processes data 10 times faster than the original Apriori algorithm, and the incremental decision tree algorithm's classification accuracy is more than 5% higher than that of the C4.5 and ID3 algorithms, while its incremental-learning time is less than 40% of theirs.

18.
Shen Siqian, Mao Yuguang, Jiang Guanru. Computer Science, 2017, 44(6): 139-143, 149
This work studies how to incorporate differential privacy protection into decision tree analysis of incomplete data sets. It first briefly introduces the differentially private ID3 algorithm and the differentially private random-forest decision tree algorithm, then addresses their shortcomings by proposing a differentially private random-forest decision tree algorithm based on the exponential mechanism. Finally, for incomplete data sets, a new WP (Weight Partition) method for handling missing values is proposed, which lets decision tree analysis satisfy differential privacy while achieving higher prediction accuracy and adaptability, without requiring imputation. Experiments show that the proposed method works with both the Laplace and exponential mechanisms, and with both the ID3 and the random-forest decision tree algorithms.
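The exponential mechanism referenced above selects a split attribute with probability proportional to exp(ε·score / 2Δ), so high-gain attributes are favored without deterministically revealing them. A minimal sketch (the attribute names and gain scores are made up for illustration):

```python
import math
import random

def exponential_mechanism(candidates, scores, epsilon, sensitivity=1.0):
    """Pick a candidate with probability proportional to exp(epsilon * score / (2 * sensitivity))."""
    weights = [math.exp(epsilon * s / (2 * sensitivity)) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]

random.seed(0)
attrs = ["outlook", "humidity", "record_id"]
gains = [0.25, 0.15, 0.90]  # toy information-gain scores
picks = [exponential_mechanism(attrs, gains, epsilon=2.0) for _ in range(1000)]
print(picks.count("record_id") > picks.count("humidity"))  # → True: higher score wins more often
```

Smaller ε flattens the weights toward uniform random choice (more privacy, less utility); larger ε approaches the greedy, non-private selection.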

19.
An ID3 Algorithm Based on Corrected Information Gain   (Total citations: 2, self-citations: 0, other citations: 2)
ID3 is one of the most influential decision tree algorithms; it selects the tree's test attributes by information gain. The algorithm has a shortcoming: when choosing a test attribute, it favors attributes with many values, which in practice are not necessarily the important ones. To address this, this paper proposes an ID3 algorithm with a corrected gain, offering an effective way to mitigate ID3's multi-value bias. Theoretical analysis and experiments show that the algorithm handles the multi-value bias well.
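The paper's specific correction formula is not reproduced here; the best-known correction of the same flavor is C4.5's gain ratio, which divides the gain by the split information so that many-valued attributes are penalized. Sketched below for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    """Information gain divided by split information penalizes many-valued attributes."""
    n = len(rows)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    gain = entropy(labels) - sum(len(s) / n * entropy(s) for s in subsets.values())
    split_info = -sum(len(s) / n * math.log2(len(s) / n) for s in subsets.values())
    return gain / split_info if split_info else 0.0

# An "ID"-like attribute (index 0) has maximal raw gain but is penalized by split info.
rows = [(1, "sunny"), (2, "sunny"), (3, "rain"), (4, "rain")]
labels = ["no", "no", "yes", "yes"]
print(gain_ratio(rows, labels, 0) < gain_ratio(rows, labels, 1))  # → True
```

Raw information gain would score both attributes identically here (1.0), illustrating exactly the bias the corrected-gain approach targets.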

20.
Research on the Decision Tree Learning Algorithm ID3   (Total citations: 28, self-citations: 0, other citations: 28)
ID3 is the core algorithm of decision tree learning. This paper describes the decision tree representation and the ID3 learning algorithm in detail, with particular attention to the rule for selecting the decision attribute. A worked learning example shows step by step how the algorithm selects the first decision attribute, followed by a discussion of the algorithm; in general, ID3 can find an optimal decision tree.
