Similar Articles
Found 19 similar articles (search time: 125 ms)
1.
孙娟  王熙照 《计算机工程》2006,32(12):210-211,231
Decision tree induction is one of the most effective tools in machine learning for solving classification problems. Because the algorithm has inherent shortcomings, appropriate simplification is needed to improve predictive accuracy. The fuzzy decision tree improves on the crisp decision tree and comes closer to human reasoning. Through experiments, this paper compares fuzzy decision trees with rule simplification and fuzzy rule simplification, and fuzzy decision trees with fuzzy pre-pruning, measuring tree size, training accuracy, and test accuracy; the analysis of fuzzy decision tree performance offers useful leads for improving the algorithm.

2.
Data mining is an important method of data analysis, and the decision tree is one of its principal techniques; how to construct an optimal decision tree is a question that concerns many researchers. This paper uses rough set methods to reduce the attributes and attribute values of a decision table, removing redundant information that is irrelevant to the decision. An approximately optimal decision tree is then constructed from the simplified table; the paper gives the generation algorithm for this tree and illustrates it with an example.
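The abstract does not reproduce the paper's reduction algorithm; as a rough, minimal sketch of the idea, the following code computes the rough-set dependency of a decision table on a set of condition attributes and greedily drops attributes whose removal does not lower it (all names and the toy table are illustrative):

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices by their values on the given attributes."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, cond_attrs, decision):
    """Fraction of rows whose condition class is decision-consistent."""
    consistent = 0
    for block in partition(rows, cond_attrs):
        if len({rows[i][decision] for i in block}) == 1:
            consistent += len(block)
    return consistent / len(rows)

def greedy_reduct(rows, cond_attrs, decision):
    """Drop each attribute whose removal keeps dependency unchanged."""
    full = dependency(rows, cond_attrs, decision)
    reduct = list(cond_attrs)
    for a in cond_attrs:
        trial = [x for x in reduct if x != a]
        if trial and dependency(rows, trial, decision) == full:
            reduct = trial
    return reduct

# Toy decision table: condition attributes 0-2, decision attribute 3.
table = [(0, 1, 0, 'yes'), (0, 1, 1, 'yes'), (1, 0, 0, 'no'), (1, 1, 1, 'no')]
print(greedy_reduct(table, [0, 1, 2], 3))  # -> [0]: attribute 0 alone decides the class
```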

3.
Research and Application of Decision Tree Algorithms   (Total cited: 2, self-cited: 0, cited by others: 2)
Information theory is one of the key guiding theories of data mining and the theoretical basis of decision tree algorithms. A decision tree algorithm approximates a discrete-valued target function; in essence, it learns classification rules from data. This paper briefly introduces the fundamentals of information theory, focuses on information-theory-based decision tree algorithms, analyzes their main representative methods and remaining problems, and verifies them with a concrete example.
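For reference, the information-theoretic quantities that ID3-style algorithms optimize are the following (standard definitions, not quoted from the paper itself):

```latex
% Entropy of a sample S over classes c_1..c_k, and the
% information gain of splitting S on attribute A:
H(S) = -\sum_{i=1}^{k} p_i \log_2 p_i,
\qquad
\mathrm{Gain}(S, A) = H(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\, H(S_v)
```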

4.
An Analysis of Decision Tree Methods for Data Mining   (Total cited: 1, self-cited: 0, cited by others: 1)
Because it is simple, intuitive, and highly accurate, the decision tree method is widely used in data mining and data analysis. After introducing the general background of decision trees, this paper analyzes tree-generation algorithms and models in depth and discusses the pruning process.
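The paper's specific pruning procedure is not given in this abstract; as one concrete illustration, here is a minimal sketch of reduced-error pruning, a common post-pruning scheme that collapses a subtree into a majority-class leaf whenever a held-out pruning set does not object (the tree encoding and names are assumptions):

```python
def classify(node, features):
    """Route a feature vector down the tree to a leaf label."""
    while 'leaf' not in node:
        node = node['children'].get(features[node['attr']], {'leaf': None})
    return node['leaf']

def prune(node, rows):
    """Reduced-error pruning on a dict-based tree.

    node: {'attr': i, 'children': {value: subtree}} or {'leaf': label}
    rows: held-out (features, label) pairs that reach this node
    """
    if 'leaf' in node or not rows:
        return node
    # First prune each child bottom-up, using the rows routed to it.
    for value, child in node['children'].items():
        subset = [(f, y) for f, y in rows if f[node['attr']] == value]
        node['children'][value] = prune(child, subset)
    # Replace the subtree with a majority leaf unless that costs accuracy.
    labels = [y for _, y in rows]
    majority = max(set(labels), key=labels.count)
    leaf_hits = labels.count(majority)
    tree_hits = sum(classify(node, f) == y for f, y in rows)
    return {'leaf': majority} if leaf_hits >= tree_hits else node
```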

5.
Research on Decision Tree Classification Techniques   (Total cited: 28, self-cited: 1, cited by others: 28)
栾丽华  吉根林 《计算机工程》2004,30(9):94-96,105
Decision tree classification is an important data classification technique. ID3, C4.5, and EC4.5 are common algorithms for building decision trees, but newer decision tree classification algorithms have so far received little attention in the domestic literature. Drawing on an extensive survey of the literature, this paper studies the newer algorithms CART, SLIQ, SPRINT, and PUBLIC, explains the basic idea behind each, and compares their main characteristics, providing a reference for researchers in data classification.
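Unlike the entropy-based ID3/C4.5 family, CART, SLIQ, and SPRINT typically select splits with the Gini index (standard definition, given here for context):

```latex
% Gini impurity of a sample S, and the Gini of a binary split
% of S into S_1 and S_2:
\mathrm{Gini}(S) = 1 - \sum_{i=1}^{k} p_i^2,
\qquad
\mathrm{Gini}_{\mathrm{split}}(S) = \frac{|S_1|}{|S|}\,\mathrm{Gini}(S_1) + \frac{|S_2|}{|S|}\,\mathrm{Gini}(S_2)
```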

6.
A Decision Tree That Determines Condition Attributes by Correlation   (Total cited: 5, self-cited: 1, cited by others: 5)
韩家新  王家华 《微机发展》2003,13(5):38-39,42
The decision tree is an important classifier in data mining. After introducing several typical decision tree classification algorithms, this paper studies a decision tree classifier based on a correlation measure. Its main idea is to use a measure of attribute correlation during tree construction to determine the order in which condition attributes are used for splitting; threshold setting and handling simplify the pruning and optimization of the tree and avoid the poor splits that information entropy can produce. The paper describes the algorithm's execution in detail, proves its correctness, and analyzes its time complexity.
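The abstract does not define the paper's correlation measure; as one plausible stand-in, this sketch ranks condition attributes by a chi-square association with the class label, which realizes the same "order attributes by relevance before splitting" idea:

```python
from collections import Counter

def chi2_with_class(rows, attr, label_idx=-1):
    """Chi-square statistic between one attribute and the class label."""
    n = len(rows)
    joint = Counter((r[attr], r[label_idx]) for r in rows)
    a_marg = Counter(r[attr] for r in rows)
    c_marg = Counter(r[label_idx] for r in rows)
    stat = 0.0
    for (a, c), observed in joint.items():
        expected = a_marg[a] * c_marg[c] / n
        stat += (observed - expected) ** 2 / expected
    return stat

def rank_attributes(rows, attrs):
    """Order split candidates by decreasing association with the class."""
    return sorted(attrs, key=lambda a: chi2_with_class(rows, a), reverse=True)
```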

7.
Using the ID3 decision tree algorithm from data mining to analyze the factors that affect student performance can uncover hidden, previously unknown influences. The resulting decision tree, however, is usually large and complex, so pruning is necessary: it simplifies the tree effectively while maintaining the quality of the mining, and yields the important factors behind student performance.

8.
Data mining is an important method of data analysis, and the decision tree is one of its principal techniques; how to construct an optimal decision tree is a question that concerns many researchers. This paper uses rough set methods to reduce the attributes and attribute values of a decision table, removing redundant information that is irrelevant to the decision. An approximately optimal decision tree is then constructed from the simplified table; the paper gives the generation algorithm for this tree and illustrates it with an example.

9.
The decision tree is an important data mining method, usually used to build classifiers and predictive models. ID3, the core decision tree algorithm, is widely applied because of its simplicity and efficiency; however, it tends to choose attributes with many values as branch attributes and may therefore miss attributes with stronger classification power. This paper improves ID3's branching strategy by adding a measure of each attribute's class-discrimination degree. Experimental comparison shows that the new method improves the accuracy of the decision tree and simplifies it.
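The paper's class-discrimination measure is not spelled out in this abstract; the best-known correction of the same many-valued bias is C4.5's gain ratio, shown here for comparison:

```latex
% C4.5 normalizes information gain by the split information of A:
\mathrm{SplitInfo}(S, A) = -\sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|} \log_2 \frac{|S_v|}{|S|},
\qquad
\mathrm{GainRatio}(S, A) = \frac{\mathrm{Gain}(S, A)}{\mathrm{SplitInfo}(S, A)}
```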

10.
Research on Parallel Decision Tree Classification Based on the SPRINT Method   (Total cited: 9, self-cited: 0, cited by others: 9)
One of the biggest problems with decision tree techniques is that their computational complexity grows in proportion to the size of the training data, so building a decision tree on a large dataset takes too long. Constructing the tree in parallel is an effective way to solve this problem. Based on the idea of synchronous tree construction, this paper analyzes the parallelism of the SPRINT method in detail and proposes directions for further research.

11.
Most of the methods that generate decision trees for a specific problem use examples of data instances in the tree-generation process. This article proposes a method called RBDT-1 (rule-based decision tree) for learning a decision tree from a set of decision rules that cover the data instances, rather than from the data instances themselves. The goal is to create, on demand, a short and accurate decision tree from a stable or dynamically changing set of rules. The rules may be supplied by an expert, induced from examples of decision instances by an inductive rule-learning program such as the AQ-type rule induction programs, or extracted from a tree generated by another method such as ID3 or C4.5. In terms of tree complexity (number of nodes and leaves), RBDT-1 compares favorably with AQDT-1 and AQDT-2, two methods that create decision trees from rules. RBDT-1 also compares favorably with ID3 and is as effective as C4.5, both of which are well-known methods that generate decision trees from data examples. Experiments show that the classification accuracies of the decision trees produced by all methods under comparison are indistinguishable.

12.
Constructing Decision Trees with a Genetic Algorithm   (Total cited: 20, self-cited: 1, cited by others: 20)
C4.5 is an inductive learning algorithm: from a set of examples it learns rules in the form of a decision tree. Because C4.5 follows a local search strategy, the decision tree it obtains is not necessarily optimal. The genetic algorithm is a general-purpose global search algorithm that simulates natural evolution. This paper discusses a method of constructing decision trees with a genetic algorithm.
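The abstract does not specify how trees are encoded as chromosomes; one simple, entirely assumed scheme is to evolve the order in which attributes are tried as splits and to score each ordering by the training accuracy of the resulting fixed-order tree:

```python
import random

def correct(order, rows, depth):
    """Training rows classified correctly by a tree that splits on
    attributes in the given fixed order, up to `depth` levels (a
    deliberately crude stand-in for a full tree builder)."""
    if not rows:
        return 0
    labels = [y for _, y in rows]
    if depth == 0 or not order:
        return labels.count(max(set(labels), key=labels.count))
    groups = {}
    for f, y in rows:
        groups.setdefault(f[order[0]], []).append((f, y))
    return sum(correct(order[1:], g, depth - 1) for g in groups.values())

def crossover(p1, p2):
    """Order crossover: keep a prefix of p1, fill the rest from p2."""
    if len(p1) < 2:
        return p1[:]
    head = p1[:random.randrange(1, len(p1))]
    return head + [a for a in p2 if a not in head]

def mutate(order, rate=0.1):
    """Swap two positions with small probability."""
    order = order[:]
    if random.random() < rate and len(order) > 1:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def evolve(attrs, rows, pop_size=20, generations=50, depth=2):
    """Basic generational GA over attribute orderings."""
    fitness = lambda o: correct(o, rows, depth) / len(rows)
    pop = [random.sample(attrs, len(attrs)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```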

13.
Induction of multiple fuzzy decision trees based on rough set technique   (Total cited: 5, self-cited: 0, cited by others: 5)
The integration of fuzzy sets and rough sets can lead to a hybrid soft-computing technique which has been applied successfully to many fields such as machine learning, pattern recognition, and image processing. The key to this soft-computing technique is how to set up and make use of the fuzzy attribute reduct in fuzzy rough set theory. Given a fuzzy information system, we may find many fuzzy attribute reducts, each of which can contribute differently to decision-making. If only one of the fuzzy attribute reducts, perhaps the most important one, is selected to induce decision rules, useful information hidden in the other reducts will unavoidably be lost. To make full use of the information provided by every individual fuzzy attribute reduct in a fuzzy information system, this paper presents a novel induction of multiple fuzzy decision trees based on rough set technique. The induction consists of three stages. First, several fuzzy attribute reducts are found by a similarity-based approach; then a fuzzy decision tree is generated for each fuzzy attribute reduct according to the fuzzy ID3 algorithm. The fuzzy integral is finally used as a fusion tool to integrate the generated trees, combining the outputs of the multiple fuzzy decision trees into the final decision result. An illustration of the proposed fusion scheme is given. A numerical experiment on real data indicates that the proposed multiple-tree induction is superior to single-tree induction based on an individual reduct, or on the entire feature set, for learning problems with many attributes.
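The fuzzy ID3 step mentioned above generalizes classical entropy to membership degrees; a standard formulation (not quoted from the paper) is:

```latex
% Fuzzy entropy at a node holding a fuzzy set S of examples, where
% the relative frequency of class C_i is weighted by memberships mu:
p_i = \frac{\sum_{x \in S} \mu_S(x)\, \mu_{C_i}(x)}{\sum_{x \in S} \mu_S(x)},
\qquad
H_f(S) = -\sum_{i=1}^{k} p_i \log_2 p_i
```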

14.
Decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume that the training data are present at one central location. Given the growth of distributed databases at geographically dispersed locations, methods for decision tree induction in distributed settings are gaining importance. This paper describes one such method, which generates compact trees using multifeature splits in place of the single-feature splits used by most existing methods for distributed data. Our method is based on Fisher's linear discriminant function and can deal with multiple classes in the data. For homogeneously distributed data, the decision trees produced by our method are identical to those generated using Fisher's linear discriminant function with centrally stored data. For heterogeneously distributed data, a certain approximation is involved, with a small change in performance relative to the tree generated from centrally stored data. Experimental results for several well-known datasets are presented and compared with decision trees generated using Fisher's linear discriminant function with centrally stored data.
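For context, the two-class Fisher discriminant underlying the method chooses the projection w = S_w^{-1}(m_1 - m_2); here is a minimal centralized sketch (standard formula, not the paper's distributed protocol):

```python
import numpy as np

def fisher_split(X1, X2):
    """Fisher's linear discriminant for two classes.

    Returns a weight vector w and threshold t: an instance x is
    assigned to class 1 if w @ x > t, else to class 2.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: summed scatter matrices of both classes.
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
       + np.cov(X2, rowvar=False) * (len(X2) - 1)
    w = np.linalg.solve(Sw, m1 - m2)
    t = w @ (m1 + m2) / 2  # midpoint of the projected class means
    return w, t

# Toy use: two Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 0.5, size=(50, 2))
X2 = rng.normal([2, 2], 0.5, size=(50, 2))
w, t = fisher_split(X1, X2)
print((X1 @ w > t).mean(), (X2 @ w <= t).mean())  # both close to 1.0
```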

15.
This research proposes a new model for constructing decision trees using interval-valued fuzzy membership values. Most existing fuzzy decision trees do not consider the uncertainty associated with their membership values; however, precise fuzzy membership values are not always obtainable. In this paper, we represent fuzzy membership values as intervals to model this uncertainty and employ the look-ahead based fuzzy decision tree induction method to construct the trees. We also investigate the significance of different neighbourhood values and define a new parameter, insensitive to the specific data set, using fuzzy sets. Examples are provided to demonstrate the effectiveness of the approach.

16.
Credit scoring is the term used to describe methods for classifying credit applicants into classes of risk. This paper evaluates two induction approaches, rough sets and decision trees, as techniques for classifying (business) credit applicants. Inductive learning methods such as rough sets and decision trees have a better knowledge-representation structure than neural networks or statistical procedures because they can be used to derive production rules. While decision trees have already been used for credit granting, the rough set approach is rarely applied in this domain. In this paper, we use production rules obtained from a sample of 1102 business loans to compare the classification abilities of the two techniques. We show that decision trees obtain better results, with 87.5% correct classifications for a pruned tree against 76.7% for rough sets. However, decision trees make more type II errors than rough sets, but fewer type I errors.

17.
The two important stages of decision tree induction are simplifying the data representation space and generating the tree. With the training set's inconsistency rate held below a chosen threshold, reducing the number of attributes per instance and the number of values per attribute keeps the decision tree method feasible and effective. Building on the Chi2 algorithm, this paper applies a variant of it for attribute-value discretization and attribute selection, and then uses arithmetic operators to merge adjacent attributes that take two or three values. The decision tree generated on this basis achieves good accuracy. The experimental data come from a dataset donated by an insurance company.
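The Chi2 algorithm merges adjacent discretization intervals whenever their class distributions are statistically indistinguishable; the test statistic for two adjacent intervals (the standard ChiMerge/Chi2 form) is:

```latex
% Chi2 merges two adjacent intervals when this statistic falls below
% a (progressively tightened) significance threshold.
% A_{ij}: count of class j in interval i (i = 1, 2);
% R_i: total in interval i; C_j: total of class j; N = R_1 + R_2.
\chi^2 = \sum_{i=1}^{2}\sum_{j=1}^{k} \frac{(A_{ij} - E_{ij})^2}{E_{ij}},
\qquad
E_{ij} = \frac{R_i\, C_j}{N}
```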

18.
Overfitting Avoidance as Bias   (Total cited: 2, self-cited: 0, cited by others: 2)
Schaffer  Cullen 《Machine Learning》1993,10(2):153-178
Strategies for increasing predictive accuracy through selective pruning have been widely adopted by researchers in decision tree induction. It is easy to get the impression from research reports that there are statistical reasons for believing that these overfitting avoidance strategies do increase accuracy and that, as a research community, we are making progress toward developing powerful, general methods for guarding against overfitting in inducing decision trees. In fact, any overfitting avoidance strategy amounts to a form of bias and, as such, may degrade performance instead of improving it. If pruning methods have often proven successful in empirical tests, this is due, not to the methods, but to the choice of test problems. As examples in this article illustrate, overfitting avoidance strategies are not better or worse, but only more or less appropriate to specific application domains. We are not, and cannot be, making progress toward methods both powerful and general.

19.
While many constructive induction algorithms focus on generating new binary attributes, this paper explores novel methods of constructing nominal and numeric attributes. We propose a new constructive operator, X-of-N. An X-of-N representation is a set containing one or more attribute-value pairs. For a given instance, the value of an X-of-N representation is the number of its attribute-value pairs that are true of the instance. A single X-of-N representation can directly and simply represent any concept representable by a single conjunction, a single disjunction, or a single M-of-N representation as commonly used in constructive induction; the reverse is not true. In this paper, we describe a constructive decision tree learning algorithm, called XofN. When building decision trees, this algorithm creates one X-of-N representation, either as a nominal attribute or as a numeric attribute, at each decision node. The construction of X-of-N representations is carried out by greedily searching the space defined by all the attribute-value pairs of a domain. Experimental results reveal that constructing X-of-N attributes can significantly improve the performance of decision tree learning in both artificial and natural domains, in terms of higher prediction accuracy and lower theory complexity. The results also show the performance advantages of constructing X-of-N attributes over constructing conjunctive, disjunctive, or M-of-N representations for decision tree learning.
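The abstract's definition of an X-of-N value translates directly into code; here is a tiny sketch with made-up attribute names:

```python
def x_of_n_value(instance, attr_value_pairs):
    """Value of an X-of-N representation: the number of its
    attribute-value pairs that hold for the instance."""
    return sum(instance.get(a) == v for a, v in attr_value_pairs)

# X-of-N = {outlook=sunny, wind=weak, humidity=normal}
xofn = [('outlook', 'sunny'), ('wind', 'weak'), ('humidity', 'normal')]
day = {'outlook': 'sunny', 'wind': 'strong', 'humidity': 'normal'}
print(x_of_n_value(day, xofn))  # 2: two of the three pairs are true
```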
