Similar Documents
1.
A standard approach to determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive approach to resolving this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable in a given situation. An additional advantage of such an approach is that it facilitates building compact decision trees, which can be much simpler than the logically equivalent conventional decision trees (by compact trees are meant decision trees that may contain branches assigned a set of values, and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The paper describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17), and builds from them a decision tree optimizing a given optimality criterion. The method can work in two modes: the standard mode, which produces conventional decision trees, and the compact mode, which produces compact decision trees. Preliminary experiments with AQDT-1 have shown that the decision trees it generates from decision rules (conventional and compact) outperform those generated from examples by the well-known C4.5 program, both in simplicity and in predictive accuracy.
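A minimal sketch of the rule-to-tree idea, under assumptions of my own: rules are hypothetical {attribute: value-set} -> class pairs (real AQ15/AQ17 rules are richer), and the "optimality criterion" is simply how many rules an attribute appears in. In compact mode, branches carry value sets rather than single values:

```python
from collections import defaultdict

# Hypothetical mini-format for a decision rule:
# {attribute: set_of_allowed_values} -> class label.
rules = [
    ({"outlook": {"sunny", "overcast"}, "wind": {"weak"}}, "play"),
    ({"outlook": {"rain"}}, "stay"),
    ({"wind": {"strong"}}, "stay"),
]

def pick_test_attribute(rules):
    """Toy optimality criterion: test the attribute that appears in the
    most rules, so one node disambiguates as many rules as possible."""
    counts = defaultdict(int)
    for conds, _ in rules:
        for attr in conds:
            counts[attr] += 1
    return max(counts, key=counts.get)

attr = pick_test_attribute(rules)
# In "compact mode", each branch is labelled with the *set* of values a
# rule allows for the attribute, not with a single value.
branches = {frozenset(c[attr]) for c, _ in rules if attr in c}
print(attr, branches)
```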

2.
Data mining is an important method of data analysis, and the decision tree is one of its principal techniques; how to construct an optimal decision tree is a question of interest to many researchers. This paper applies rough set methods to perform attribute reduction and attribute-value reduction on the decision table, removing redundant information that is irrelevant to the decision. A near-optimal decision tree is then constructed on the simplified decision table; the paper presents an algorithm for generating near-optimal decision trees and illustrates it with an example.
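A sketch of the rough-set attribute reduction step, assuming the textbook positive-region formulation (the paper also reduces attribute values, which is omitted here):

```python
def partition(table, attrs):
    """Indiscernibility classes: group row indices by their values on attrs."""
    blocks = {}
    for i, row in enumerate(table):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(blocks.values())

def positive_region(table, conds, decision):
    """Rows whose condition-class lies entirely inside one decision class."""
    pos = set()
    for block in partition(table, conds):
        if len({table[i][decision] for i in block}) == 1:
            pos |= block
    return pos

def reduct(table, conds, decision):
    """Greedily drop attributes whose removal preserves the positive region."""
    full = positive_region(table, conds, decision)
    kept = list(conds)
    for a in list(conds):
        trial = [x for x in kept if x != a]
        if trial and positive_region(table, trial, decision) == full:
            kept = trial
    return kept

# Toy decision table: rows are dicts, "d" is the decision attribute.
table = [
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 0, "b": 1, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "yes"},
    {"a": 1, "b": 0, "c": 0, "d": "no"},
]
print(reduct(table, ["a", "b", "c"], "d"))  # ['b'] — b alone decides d
```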

3.
A New Method for Decision Tree Construction
Decision trees are an important data mining tool, but constructing an optimal decision tree is an NP-complete problem. This paper proposes a decision tree construction method based on association rule mining. It first defines high-confidence approximately exact rules and gives an algorithm for mining them; new attributes are then generated from these rules, and methods for evaluating the generated attributes are discussed; finally, decision trees are built from the generated attributes together with the original attributes of the data. Experimental results show that the new construction method achieves higher accuracy.
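A toy illustration of mining high-confidence rules; the approx_exact_rules helper, its thresholds, and the restriction to single-item antecedents are assumptions for brevity, not the paper's algorithm:

```python
from collections import Counter
from itertools import combinations

def approx_exact_rules(rows, min_conf=0.95, min_sup=2):
    """Mine 1-item -> 1-item rules over (attr, value) pairs whose confidence
    is at least min_conf: a stand-in for 'approximately exact' rules."""
    item_n, pair_n = Counter(), Counter()
    for row in rows:
        items = sorted(row.items())
        item_n.update(items)
        pair_n.update(combinations(items, 2))
    rules = []
    for (x, y), n in pair_n.items():
        if n >= min_sup:
            if n / item_n[x] >= min_conf:
                rules.append((x, y, n / item_n[x]))
            if n / item_n[y] >= min_conf:
                rules.append((y, x, n / item_n[y]))
    return rules

rows = [
    {"color": "red", "shape": "round"},
    {"color": "red", "shape": "round"},
    {"color": "green", "shape": "long"},
]
for lhs, rhs, conf in approx_exact_rules(rows):
    print(f"{lhs} -> {rhs}  (conf={conf:.2f})")
```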

4.
Research on a Highly Comprehensible Binary Decision Tree Induction Algorithm
蒋艳凰, 杨学军, 赵强利. 《软件学报》, 2003, 14(12): 1996-2005
Binary discretization is the most common way of handling continuous attributes in decision tree induction, but for problems with many continuous attributes the resulting trees are large and the knowledge representation is hard to understand. For two-class problems, this paper proposes RCAT, a multi-interval discretization method based on attribute transformation. The method first transforms a continuous attribute into the probability of one class; a binary split on this probability attribute corresponds to a multi-interval partition of the original continuous attribute. The interval boundaries are then optimized to obtain the information entropy gain of the original attribute, and finally pessimistic pruning and lossless-merge pruning are applied to simplify the RCAT tree. Experiments on data sets from several domains show that, compared with binary discretization, RCAT runs efficiently and produces smaller, more comprehensible trees while maintaining classification accuracy.
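A hedged sketch of the RCAT transformation for two classes; the sliding-window probability estimate below is my own assumption (the paper derives the probability attribute more carefully), but it shows why one binary split on the new attribute corresponds to a multi-interval split on the original:

```python
def to_probability_attribute(xs, ys, k=3):
    """Replace each continuous value by the local probability of class 1,
    estimated from its k neighbours on each side along the sorted axis."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    probs = [0.0] * len(xs)
    for rank, i in enumerate(order):
        lo, hi = max(0, rank - k), min(len(xs), rank + k + 1)
        window = [ys[order[j]] for j in range(lo, hi)]
        probs[i] = sum(window) / len(window)
    return probs

xs = [1.0, 1.2, 1.4, 5.0, 5.1, 9.0, 9.2, 9.4]
ys = [1,   1,   1,   0,   0,   1,   1,   1]   # class 1 at both ends
probs = to_probability_attribute(xs, ys)
# One binary split on probs (e.g. prob >= 0.5) now corresponds to a
# multi-interval split on xs: class 1 is selected at both ends of the axis.
print([round(p, 2) for p in probs])
```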

5.
To address the high time complexity of the C4.5 decision tree algorithm when handling continuous-valued attributes, a new tree construction method is proposed: the Pearson correlation coefficient between attributes is used to reduce the attribute set; combined with the attributes' information gain ratio, an optimal attribute subset is retained, ensuring the subset contains no redundant attributes; and boundary-point detection is used to improve threshold selection in the discretization of continuous attributes, correcting the computation of the gain ratio. A series of comparative experiments on UCI data sets, run on the PyCharm platform, shows that the improved C4.5 algorithm speeds up tree construction by about 50% and raises accuracy by about 2%, largely resolving the original C4.5 algorithm's bias toward continuous-valued attributes during attribute selection.
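A sketch of the Pearson-based reduction step; the drop_redundant helper and its 0.9 threshold are assumptions, and the paper additionally uses the gain ratio to decide which member of a correlated pair to keep:

```python
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def drop_redundant(columns, threshold=0.9):
    """Keep one attribute from each highly correlated pair; here we simply
    keep the first seen, where the paper breaks ties by gain ratio."""
    kept = []
    for name in columns:
        if all(abs(pearson(columns[name], columns[k])) < threshold for k in kept):
            kept.append(name)
    return kept

columns = {
    "height_cm": [150, 160, 170, 180],
    "height_in": [59.1, 63.0, 66.9, 70.9],   # ~ perfectly correlated
    "weight_kg": [55, 70, 65, 80],
}
print(drop_redundant(columns))  # ['height_cm', 'weight_kg']
```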

6.
In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples.
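A minimal sketch of the AVT idea, with a hypothetical two-node taxonomy and hand-made counts: the count for an abstract value is the sum over its descendant leaves, so class-conditional probabilities can be estimated at any level of abstraction, including for partially specified records that only report an abstract value:

```python
# Hypothetical taxonomy: internal value -> list of child values.
taxonomy = {
    "student": ["undergrad", "grad"],
    "employed": ["full-time", "part-time"],
}
leaf_counts = {"undergrad": 30, "grad": 10, "full-time": 50, "part-time": 10}

def count(value):
    """Count for a value: its own leaf count, or the sum over its children."""
    if value in taxonomy:
        return sum(count(child) for child in taxonomy[value])
    return leaf_counts.get(value, 0)

total = count("student") + count("employed")
# Probability estimate at the abstract level, as a naive Bayes learner
# generalized over the taxonomy would use it:
print("P(student) =", count("student") / total)   # 40 / 100
```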

7.
Attribute Generation Based on Association Rules
A decision tree is considered to be appropriate (1) if the tree can classify unseen data accurately, and (2) if the size of the tree is small. One approach to inducing such a good decision tree is to add new attributes and their values to enhance the expressiveness of the training data at the data pre-processing stage. Many methods exist for attribute extraction and construction, but constructing new attributes is still an art: these methods are very time consuming, some require a priori knowledge of the data domain, and they are not suitable for data mining over large volumes of data. We propose a novel approach in which knowledge about attributes relevant to the class is extracted as association rules from the training data. The new attributes and their values are generated from the association rules among the originally given attributes. We elaborate on the method and investigate its features. The effectiveness of our approach is demonstrated through experiments.
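A sketch of the attribute generation step, assuming rules are given as hypothetical antecedent dictionaries; each mined rule becomes one boolean attribute appended to the data at pre-processing time:

```python
def add_rule_attributes(rows, antecedents):
    """Append one boolean attribute per association rule: the new attribute
    is true when the row satisfies every item in the rule's antecedent."""
    out = []
    for row in rows:
        new_row = dict(row)
        for i, antecedent in enumerate(antecedents):
            new_row["rule_%d" % i] = all(
                row.get(a) == v for a, v in antecedent.items())
        out.append(new_row)
    return out

rows = [
    {"outlook": "sunny", "humidity": "high"},
    {"outlook": "rain", "humidity": "normal"},
]
# Hypothetical antecedent mined from the training data:
antecedents = [{"outlook": "sunny", "humidity": "high"}]
for r in add_rule_attributes(rows, antecedents):
    print(r)
```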

8.

Learning from patient records may aid medical knowledge acquisition and decision making. Decision tree induction, based on ID3, is a well-known approach to learning from examples. In this article we introduce a new data representation formalism that extends the original ID3 algorithm. We propose a new algorithm, ID+, which adopts this representation scheme. ID+ provides the capability of modeling dependencies between attributes or attribute values and of handling multiple values per attribute. We demonstrate our work via a series of medical knowledge acquisition experiments based on a real-world application of acute abdominal pain in children. In the context of these experiments, we compare ID+ with C4.5, NewId, and a naive Bayesian classifier. Results demonstrate that the rules acquired via ID+ improve the clinical comprehensibility of decision trees and complement the explanations supported by the naive Bayesian classifier, while the decrease in classification accuracy is marginal.
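For reference, a compact version of the ID3 attribute-selection measure that ID+ extends; the toy abdominal-pain rows are invented for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target):
    """ID3's selection measure: entropy reduction from splitting on attr.
    ID+ layers attribute/value dependencies on top of this core."""
    base = entropy([r[target] for r in rows])
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        base -= len(subset) / len(rows) * entropy(subset)
    return base

rows = [
    {"pain": "rlq", "fever": "yes", "dx": "appendicitis"},
    {"pain": "rlq", "fever": "no",  "dx": "appendicitis"},
    {"pain": "diffuse", "fever": "no",  "dx": "other"},
    {"pain": "diffuse", "fever": "yes", "dx": "other"},
]
print(information_gain(rows, "pain", "dx"))   # 1.0: pain decides dx here
print(information_gain(rows, "fever", "dx"))  # 0.0: fever is uninformative
```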

9.
周亮, 晏立. 《计算机应用研究》, 2010, 27(8): 2899-2901
To overcome the limited effectiveness and scalability of existing decision tree classification algorithms on large data sets, a new decision tree algorithm based on rough set theory is proposed. First, a prototype abstraction method based on representative instances is introduced: representative instances are extracted from the original data set to form an abstract prototype, reducing both the number of instances and the number of irrelevant attributes so that the algorithm can handle large data sets. Second, the notion of an attribute's classification value is proposed and used as a heuristic measure for attribute selection; it quantifies how much an attribute contributes to classification, with emphasis on the relationships among attributes and between instances and classes. Experiments show that the new algorithm produces smaller trees and significantly higher accuracy than other algorithms, especially on large data sets.
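A hedged sketch of prototype abstraction; keeping the most frequent instances per class is my assumed stand-in for the paper's representative-instance criterion:

```python
from collections import Counter

def abstract_prototype(rows, decision, per_class=2):
    """Keep the most frequent instances of each decision class as its
    representatives; duplicates and rare outliers are dropped, so the
    learner sees far fewer instances."""
    by_class = {}
    for row in rows:
        label = row[decision]
        items = tuple(sorted((k, v) for k, v in row.items() if k != decision))
        by_class.setdefault(label, Counter())[items] += 1
    prototype = []
    for label, counter in by_class.items():
        for items, _ in counter.most_common(per_class):
            prototype.append(dict(items, **{decision: label}))
    return prototype

rows = [
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 0, "b": 1, "d": "yes"},   # duplicate — collapses into one rep
    {"a": 1, "b": 0, "d": "no"},
    {"a": 1, "b": 1, "d": "no"},
]
print(abstract_prototype(rows, "d"))
```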

10.
We present a general technique for dynamizing a class of problems whose underlying structure is a computation graph embedded in a tree. We introduce three fully dynamic data structures, called path attribute systems, tree attribute systems, and linear attribute grammars, which extend and generalize the dynamic trees of Sleator and Tarjan. More specifically, we associate values, called attributes, with the nodes and paths of a rooted tree. Path attributes form a path attribute system if they can be maintained in constant time under path concatenation. Node attributes form a tree attribute system if the tree attributes of the tail of a path Π can be determined in constant time from the path attributes of Π. A linear attribute grammar is a tree-based linear expression such that the values of a node μ are calculated from the values at the parent, siblings, and/or children of μ. We provide a framework for maintaining path attribute systems, tree attribute systems, and linear attribute grammars in a fully dynamic environment using linear space and logarithmic time per operation. Also, we demonstrate the applicability of our techniques by showing examples of graph and geometric problems that can be efficiently dynamized, including biconnectivity and triconnectivity queries, planarity testing, drawing trees and series-parallel digraphs, slicing floorplan compaction, point location, and many optimization problems on bounded tree-width graphs.
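A tiny illustration of the defining requirement for a path attribute system; the chosen attributes (path length and minimum edge weight, my own example) combine in constant time under path concatenation:

```python
from dataclasses import dataclass

@dataclass
class PathAttr:
    """A path attribute maintainable in O(1) under concatenation: here, the
    length and the minimum edge weight of a path. Any summary with such a
    constant-time combine qualifies as a path attribute in the framework."""
    length: int
    min_weight: float

def concat(p: PathAttr, q: PathAttr) -> PathAttr:
    # Constant-time combination — the defining requirement for a
    # path attribute system.
    return PathAttr(p.length + q.length, min(p.min_weight, q.min_weight))

a = PathAttr(length=3, min_weight=2.5)
b = PathAttr(length=2, min_weight=1.0)
print(concat(a, b))  # PathAttr(length=5, min_weight=1.0)
```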

11.
Costs are often an important part of the classification process. Cost factors have been taken into consideration in many previous studies regarding decision tree models. In this study, we also consider a cost-sensitive decision tree construction problem. We assume that there are test costs that must be paid to obtain the values of the decision attribute and that a record must be classified without exceeding the spending cost threshold. Unlike previous studies, however, in which records were classified with only a single condition attribute, in this study, we are able to simultaneously classify records with multiple condition attributes. An algorithm is developed to build a cost-constrained decision tree, which allows us to simultaneously classify multiple condition attributes. The experimental results show that our algorithm satisfactorily handles data with multiple condition attributes under different cost constraints.
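A sketch of cost-bounded attribute selection; gain-per-cost with a hard budget is an assumed criterion for illustration, not necessarily the paper's exact rule:

```python
def best_affordable_attribute(gains, costs, budget):
    """Pick the attribute with the best information gain per unit test
    cost among those we can still afford; return None when every
    remaining test would exceed the spending threshold."""
    affordable = [a for a in gains if costs[a] <= budget]
    if not affordable:
        return None  # classify at this node with the majority class
    return max(affordable, key=lambda a: gains[a] / costs[a])

gains = {"blood_test": 0.40, "mri": 0.55, "temperature": 0.20}
costs = {"blood_test": 20.0, "mri": 800.0, "temperature": 1.0}
print(best_affordable_attribute(gains, costs, budget=50.0))  # 'temperature'
```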

12.
Univariate decision trees are classifiers currently used in many data mining applications. These classifiers discover partitions of the input space via hyperplanes that are orthogonal to the attribute axes, producing models that human experts can understand. One disadvantage of univariate decision trees is that they produce complex and inaccurate models when decision boundaries are not orthogonal to the axes. In this paper we introduce Fisher's Tree, a classifier that exploits the dimensionality reduction of Fisher's linear discriminant and the decomposition strategy of decision trees to produce an oblique decision tree. Our proposal generates an artificial attribute that is used to split the data recursively. Fisher's decision tree induces oblique trees whose accuracy, size, number of leaves, and training time are competitive with other decision trees reported in the literature. We use more than ten publicly available data sets to demonstrate the effectiveness of our method.
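A sketch of the core step, assuming the standard two-class Fisher discriminant: the projection X·w is the artificial attribute, and a threshold on it is an oblique split in the original space:

```python
import numpy as np

def fisher_direction(X, y):
    """Fisher's linear discriminant for two classes: w = Sw^-1 (m1 - m0).
    The projection X @ w is the artificial attribute the tree splits on."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)   # within-class scatter
    return np.linalg.solve(Sw, m1 - m0)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([3, 3], 1, (50, 2))])
y = np.repeat([0, 1], 50)
w = fisher_direction(X, y)
z = X @ w                     # one oblique, artificial attribute
threshold = (z[y == 0].mean() + z[y == 1].mean()) / 2
accuracy = ((z > threshold).astype(int) == y).mean()
print(f"split on w·x > {threshold:.2f}: accuracy {accuracy:.2f}")
```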

13.
Construction and Study of a Multivariate Decision Tree
Univariate decision tree algorithms produce large trees with complex, hard-to-understand rules, whereas multivariate decision trees are an effective data mining method for classification; the key to their construction is selecting, according to the correlations among attributes, a suitable combination of attributes to form a new attribute at each node. Combining the knowledge dependency measure from rough set theory with the notion of dispersion of the condition attribute set in an information system, a multivariate decision tree construction algorithm (RD) is proposed. Experimental results on several UCI data sets show that the proposed algorithm improves classification performance over the traditional ID3 algorithm and over kernel-based multivariate decision trees.
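A sketch of the rough-set dependency measure on an XOR-like table, which shows why testing a combination of attributes at one node can succeed where every single attribute fails (the paper combines this with a dispersion measure, omitted here):

```python
def dependency_degree(table, conds, decision):
    """Rough-set dependency gamma(conds -> decision): the fraction of rows
    whose conds-indiscernibility class is pure in the decision."""
    blocks = {}
    for i, row in enumerate(table):
        blocks.setdefault(tuple(row[a] for a in conds), []).append(i)
    pure = sum(len(b) for b in blocks.values()
               if len({table[i][decision] for i in b}) == 1)
    return pure / len(table)

table = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "no"},
]
# Neither attribute alone determines d (an XOR-like table), but the pair
# does — so a multivariate node testing (a, b) together is preferable.
print(dependency_degree(table, ["a"], "d"))       # 0.0
print(dependency_degree(table, ["a", "b"], "d"))  # 1.0
```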

14.
华文立, 胡学刚. 《微机发展》, 2007, 17(3): 116-118
Building on an analysis of the C4.5 algorithm, this paper discusses its shortcomings in controlling tree size, selecting attributes, filtering noise, and removing irrelevant attributes, and argues for attribute reduction of the training data in decision tree mining. From a practical standpoint, a decision tree construction model based on attribute reduction is proposed, in which a genetic algorithm searches for the optimal attribute subset, and a fitness function is designed for this model. The model is adaptive: by adjusting the parameters of the fitness function, the search direction of the genetic algorithm can be constrained so as to optimize the resulting tree. Experiments show that after optimization the number of training attributes decreases while classification accuracy improves somewhat and the rule set shrinks, so the model has practical value.
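A hedged sketch of such a fitness function; the alpha blend of accuracy and subset compactness is an assumption standing in for the paper's actual design:

```python
def fitness(mask, evaluate_accuracy, alpha=0.75):
    """Hypothetical fitness for a GA individual encoding an attribute subset
    as a bit mask: trade classification accuracy against subset size.
    Raising alpha steers the search toward accuracy; lowering it toward
    smaller attribute sets (and hence smaller trees)."""
    n_selected = sum(mask)
    if n_selected == 0:
        return 0.0
    accuracy = evaluate_accuracy(mask)   # e.g. cross-validated tree accuracy
    compactness = 1.0 - n_selected / len(mask)
    return alpha * accuracy + (1.0 - alpha) * compactness

# Toy stand-in for "train a tree on the masked attributes and score it":
demo = lambda mask: 0.9 if mask[0] else 0.6
print(fitness([1, 0, 0, 0], demo))  # higher: accurate and compact
print(fitness([1, 1, 1, 1], demo))  # lower: same accuracy, more attributes
```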

15.
A Decision Tree Algorithm Based on Attribute Weighting
ID3 and C4.5 are simple and effective decision tree classification algorithms, but their accuracy suffers on complex decision problems. This paper proposes a new attribute-weighted decision tree algorithm: based on rough set theory, attributes are weighted according to how strongly they influence the decision, and the tree is built with these weights, improving the accuracy of the decision results. The weights mark the importance of each attribute and can be learned from the training data. Experimental results show that the algorithm clearly improves the accuracy of the decision results.
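A minimal sketch of weighted attribute selection; the numbers and the multiplicative use of the weights are assumptions, with the weights standing in for learned rough-set significance:

```python
def select_attribute(attrs, gain, weights):
    """Attribute-weighted selection: scale each attribute's information
    gain by a learned weight and take the maximum. gain and weights map
    attribute name -> float."""
    return max(attrs, key=lambda a: weights[a] * gain[a])

# Hypothetical numbers: 'age' has the higher raw gain, but 'symptom'
# carries more weight (stronger influence on the decision), so it wins.
gain = {"age": 0.50, "symptom": 0.40}
weights = {"age": 0.6, "symptom": 0.9}
print(select_attribute(["age", "symptom"], gain, weights))  # 'symptom'
```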

16.
Decision trees are a widely used data mining method, and the selection of the expanded (splitting) attribute is the core problem of decision tree induction. When continuous-valued decision tree induction based on discretization selects the expanded attribute, it must measure the classification uncertainty of every cut point of every condition attribute and choose the attribute from these uncertainties, which incurs high time complexity. To address this, a continuous-valued decision tree induction method based on tolerance rough set techniques is proposed: the expanded attribute is first selected using tolerance rough sets, then the optimal cut point of that attribute is found, the example set is partitioned, and the tree is built recursively. The paper analyzes the algorithm's time complexity theoretically and reports experiments on several data sets; both the results and their statistical analysis show that the proposed method outperforms related methods in computational complexity and classification accuracy.
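A sketch of the tolerance-relation idea, assuming a simple eps-tolerance and an invented purity score; note that no cut points are enumerated when scoring the attribute:

```python
def tolerance_class(xs, i, eps):
    """Tolerance rough sets replace equality with a tolerance relation:
    two examples are indistinguishable on a continuous attribute when
    their values differ by at most eps."""
    return {j for j, x in enumerate(xs) if abs(x - xs[i]) <= eps}

def tolerance_purity(xs, ys, eps):
    """Fraction of examples whose whole tolerance class shares their label —
    a cheap per-attribute score, computed without enumerating cut points."""
    pure = 0
    for i in range(len(xs)):
        cls = tolerance_class(xs, i, eps)
        pure += all(ys[j] == ys[i] for j in cls)
    return pure / len(xs)

xs = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2]
ys = ["a", "a", "a", "b", "b", "b"]
print(tolerance_purity(xs, ys, eps=0.5))  # 1.0: attribute separates classes
```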

17.
Shuyu, Zhongying. 《Knowledge》, 2006, 19(8): 675-680
This paper proposes an improved decision tree method for web information retrieval with self-map attributes. Our self-map tree stores the value of a self-map attribute in each internal node, together with information based on the dissimilarity between a pair of map sequences. Our method selects the self-map that exists between data by exhaustive search based on relation and attribute information. Experimental results confirm that our improved method constructs comprehensible and accurate decision trees. Moreover, an example shows that our self-map decision tree is promising for data mining and knowledge discovery.

18.
A Decision Tree Algorithm Based on Attribute Frequency Partitioning and Information Entropy Discretization
Decision trees are a common classification method in data mining tasks, and the measure used to select the splitting attribute at each node directly affects classification performance. This paper measures attribute importance with a rough-set-based attribute frequency function and uses it both for selecting splitting attributes and for pre-pruning, yielding a new decision tree learning algorithm. To handle numeric attributes, an improved information entropy discretization algorithm is also proposed, using statistical properties of the data set as heuristic knowledge. Experimental results show that the new discretization method is markedly more efficient, and that compared with entropy-based decision tree algorithms the new algorithm yields simpler trees and effectively improves classification.
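A sketch of entropy-based discretization of a numeric attribute; scanning midpoints of adjacent distinct values is the standard formulation, without the paper's statistical speed-ups:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_cut(xs, ys):
    """Try midpoints between adjacent sorted values and keep the cut that
    minimizes the size-weighted class entropy of the two sides."""
    pairs = sorted(zip(xs, ys))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= cut]
        right = [y for x, y in pairs if x > cut]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        best = min(best, (e, cut))
    return best[1]

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = ["lo", "lo", "lo", "hi", "hi", "hi"]
print(best_cut(xs, ys))  # 6.5 — the boundary between the two classes
```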

