Similar Literature
19 similar records found.
1.
Based on the variable precision rough set model, this paper studies a new decision tree construction method that produces rules with confidence values. The new algorithm improves on rough-set-based decision tree generation: it uses the variable precision weighted average roughness as the attribute selection criterion, accounts for noise in the training data, and tolerates the inconsistency that arises during tree construction. A confidence threshold is introduced during tree growth to control how the tree grows, yielding decision rules with explicit confidence values.
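The abstract does not spell out the construction details; the sketch below (illustrative names, not the authors' code) only shows the variable precision idea the method builds on: an equivalence class is admitted to a decision class's beta-lower approximation when the majority fraction of its objects reaches the precision threshold beta, and that fraction can serve as the confidence attached to the induced rule.

```python
from collections import defaultdict

def equivalence_classes(rows, attrs):
    """Group rows by their values on the given condition attributes."""
    blocks = defaultdict(list)
    for row in rows:
        blocks[tuple(row[a] for a in attrs)].append(row)
    return blocks

def vprs_rules(rows, attrs, decision, beta=0.8):
    """For each equivalence class, emit a rule if the majority decision
    reaches the precision threshold beta (variable precision lower approx.)."""
    rules = []
    for key, block in equivalence_classes(rows, attrs).items():
        counts = defaultdict(int)
        for row in block:
            counts[row[decision]] += 1
        label, hits = max(counts.items(), key=lambda kv: kv[1])
        confidence = hits / len(block)
        if confidence >= beta:               # class enters the beta-positive region
            rules.append((dict(zip(attrs, key)), label, confidence))
    return rules

# toy usage
data = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "no",  "play": "no"},   # noisy record tolerated when beta < 1
    {"outlook": "rainy", "windy": "yes", "play": "no"},
]
print(vprs_rules(data, ["outlook", "windy"], "play", beta=0.6))
```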

2.
To address the difficulty of selecting the optimal attribute and the poor noise tolerance in decision tree construction, a new decision tree construction algorithm based on the variable precision rough set model is proposed. The algorithm uses approximate classification accuracy as the heuristic function for selecting the splitting attribute at each node. Compared with traditional rough-set-based decision tree algorithms, the trees it constructs are structurally simpler, generalize better, and show some robustness to noise.

3.
A decision tree construction algorithm based on rough sets   (Cited: 7 total, 2 self-citations, 5 by others)
To address the complexity of trees built by the ID3 algorithm and its limited classification efficiency, a decision tree construction algorithm based on rough set theory is proposed. The algorithm uses weighted classification roughness as the heuristic function for selecting the splitting attribute at each node; compared with information gain, it characterizes an attribute's overall contribution to classification more comprehensively and is simpler to compute. To eliminate the influence of noise on attribute selection and leaf node generation, the algorithm is further optimized with the variable precision rough set model. Experimental results show that the trees constructed by this algorithm outperform ID3 in both size and classification efficiency.
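The exact weighting scheme is not given in the abstract; one plausible reading of "weighted classification roughness" is the roughness of each decision class under the partition induced by a candidate attribute, weighted by class frequency, with the lowest-scoring attribute chosen as the split. The following sketch (all names illustrative) implements that reading.

```python
from collections import defaultdict

def partition(rows, attr):
    """Blocks of the partition induced by a single condition attribute."""
    blocks = defaultdict(list)
    for row in rows:
        blocks[row[attr]].append(row)
    return list(blocks.values())

def weighted_roughness(rows, attr, decision):
    """Class-frequency-weighted roughness of the decision classes with respect
    to the partition induced by `attr` (one reading of the paper's criterion)."""
    blocks = partition(rows, attr)
    classes = set(r[decision] for r in rows)
    total, n = 0.0, len(rows)
    for c in classes:
        weight = sum(1 for r in rows if r[decision] == c) / n
        lower = sum(len(b) for b in blocks if all(r[decision] == c for r in b))
        upper = sum(len(b) for b in blocks if any(r[decision] == c for r in b))
        roughness = 1.0 - (lower / upper if upper else 0.0)
        total += weight * roughness
    return total

def best_attribute(rows, attrs, decision):
    # lower weighted roughness means a sharper split, so take the minimum
    return min(attrs, key=lambda a: weighted_roughness(rows, a, decision))
```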

4.
Decision trees are a widely used classification method in data mining. To handle inconsistent data caused by noise in university graduate employment records, this paper proposes a decision tree model based on variable precision rough sets and applies it to the analysis of student employment data. The method uses the classification quality measure of variable precision rough sets as the information function to select condition attributes as tree nodes, splitting the data set top-down until a termination condition is met. It fully accounts for dependency and redundancy among attributes, and allows a degree of class inconsistency among instances assigned to the positive region during tree construction. Experiments show that the algorithm handles inconsistent data sets effectively, classifies the employment data correctly and reasonably, and yields several valuable conclusions for decision analysis. The algorithm greatly improves the generalization ability of the decision rules and simplifies the tree structure.

5.
Application of the variable precision rough set model to decision tree construction   (Cited: 1 total, 0 self-citations, 1 by others)
To address the complexity and limited classification efficiency of decision trees built by ID3, this paper proposes a new decision tree construction algorithm based on the variable precision rough set model. The algorithm uses weighted classification roughness as the heuristic function for selecting the splitting attribute at each node; compared with information gain, this criterion characterizes an attribute's overall contribution to classification more comprehensively, is simpler to compute, and removes the influence of noisy data on attribute selection and leaf node generation. Experimental results show that the trees constructed by this algorithm outperform ID3 in both size and classification efficiency.

6.
陈家俊  苏守宝  徐华丽 《计算机应用》2011,31(12):3243-3246
To overcome the limitations of classical decision tree algorithms, such as complex tree structure and poor adaptability to noisy data, a new decision tree construction algorithm based on a multi-scale rough set model is proposed. The algorithm introduces the concepts of scale variable and scale function, selects test attributes using the approximate classification accuracy at different scales, and prunes the tree with a suppression factor, effectively removing noisy rules. Results show that the trees constructed by this algorithm are simple and effective, exhibit some robustness to noisy data, and can meet different users' requirements for decision precision.

7.
To address the construction complexity and limited classification accuracy of C4.5 decision trees, an improved decision tree construction algorithm based on variable precision rough sets is proposed. The algorithm uses approximate classification quality as the heuristic function for selecting the splitting attribute at each node; compared with the information gain ratio, this criterion characterizes an attribute's overall contribution to classification more accurately and provides some robustness to noise. In addition, for the special case where two or more attributes have equal approximate classification quality, the paper gives a way to choose the optimal splitting attr...

8.
Food safety decision making is an important topic in food safety research. To analyze food safety conditions, a new decision tree construction method that attaches confidence values to rules is proposed, based on the variable precision rough set model. The method improves on traditional weighted decision tree generation algorithms: it uses the weighted average variable precision roughness as the attribute selection criterion and replaces approximation accuracy with its variable precision counterpart, which removes noisy and redundant data from the database, tolerates some contradictory data, and ensures that partially conflicting decision rules can coexist during tree construction. The algorithm simplifies the tree generation process, broadens its applicability, and makes the generated rules easier to interpret. Validation results show that the algorithm is effective and feasible.

9.
朱一飞  武琳琳 《福建电脑》2012,28(7):111-112
This paper applies rough set theory to the decision tree generation process. Using the attribute reduction property of variable precision rough set theory, the number of branches is reduced during tree generation while the classification ability is preserved, and the influence of noisy data in practical problems is taken into account.

10.
Application of the variable precision rough set model to decision tree construction   (Cited: 1 total, 0 self-citations, 1 by others)
Building on previous work that applies the variable precision rough set model to decision tree construction, this paper proposes a construction method for decision trees whose rules carry confidence values. The method improves on existing tree generation methods, and the resulting trees are more practical and easier to understand. The paper also addresses the special case where two or more attributes have equal classification quality, giving a way to choose the better attribute as the node. Compared with the traditional ID3 algorithm, the trees constructed by this method are not only structurally simpler but also more practical and easier to interpret.

11.
A decision tree is a predictive model that recursively partitions the covariate space into subspaces such that each subspace constitutes a basis for a different prediction function. Decision trees can be used for various learning tasks including classification, regression and survival analysis. Due to their unique benefits, decision trees have become one of the most powerful and popular approaches in data science. A decision forest aims to improve the predictive performance of a single decision tree by training multiple trees and combining their predictions. This paper provides an introduction to the subject by explaining how a decision forest can be created and when it is most valuable. In addition, we review popular methods for generating the forest, fusing the individual trees' outputs, and thinning large decision forests.
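As a concrete illustration of the two steps the survey covers, generating the trees and fusing their outputs, here is a minimal bagging ensemble with majority-vote fusion; it assumes scikit-learn and NumPy are available and is not taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def grow_forest(X, y, n_trees=25, seed=0):
    """Generate the forest: each tree is trained on a bootstrap sample."""
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))     # sample with replacement
        forest.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return forest

def fuse(forest, X):
    """Fuse the individual trees' outputs by unweighted majority vote."""
    votes = np.stack([t.predict(X) for t in forest])   # shape (n_trees, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

X, y = load_iris(return_X_y=True)
forest = grow_forest(X, y)
print((fuse(forest, X) == y).mean())                   # ensemble accuracy on the training data
```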

12.
The decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume that the training data are present at one central location. Given the growth in distributed databases at geographically dispersed locations, methods for decision tree induction in distributed settings are gaining importance. This paper describes one such method that generates compact trees using multifeature splits in place of the single-feature-split decision trees generated by most existing methods for distributed data. Our method is based on Fisher's linear discriminant function, and is capable of dealing with multiple classes in the data. For homogeneously distributed data, the decision trees produced by our method are identical to decision trees generated using Fisher's linear discriminant function with centrally stored data. For heterogeneously distributed data, a certain approximation is involved, with a small change in performance with respect to the tree generated with centrally stored data. Experimental results for several well-known datasets are presented and compared with decision trees generated using Fisher's linear discriminant function with centrally stored data.
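The distributed protocol itself is not reproduced here; the sketch below only shows how a two-class Fisher direction yields a single multifeature (oblique) split at a node when the data are centrally stored, which is the building block the method generalizes. All names and the threshold choice are illustrative.

```python
import numpy as np

def fisher_split(X, y):
    """Two-class Fisher direction w = Sw^{-1}(mu1 - mu0) plus a threshold on w.x,
    i.e. one multifeature (oblique) split instead of a single-feature test."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix (unnormalized sum of the two class scatters)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.pinv(Sw) @ (mu1 - mu0)
    threshold = ((X0 @ w).mean() + (X1 @ w).mean()) / 2   # midpoint of the projected class means
    return w, threshold

# a node then routes a sample x to the right child when x @ w > threshold
```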

13.
A mass assignment based ID3 algorithm for learning probabilistic fuzzy decision trees is introduced. Fuzzy partitions are used to discretize continuous feature universes and to reduce complexity when universes are discrete but with large cardinalities. Furthermore, the fuzzy partitioning of classification universes facilitates the use of these decision trees in function approximation problems. Generally the incorporation of fuzzy sets into this paradigm overcomes many of the problems associated with the application of decision trees to real-world problems. The probabilities required for the trees are calculated according to mass assignment theory applied to fuzzy labels. The latter concept is introduced to overcome computational complexity problems associated with higher dimensional mass assignment evaluations on databases. ©1997 John Wiley & Sons, Inc.
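The mass assignment machinery is beyond a short example; the sketch below (assumed parameterization, not from the paper) illustrates only the first step, covering a continuous feature universe with evenly spaced triangular fuzzy sets.

```python
def triangular_partition(lo, hi, n_sets):
    """Evenly spaced triangular fuzzy sets covering [lo, hi]; returns the set
    centers and a membership function used to discretize a continuous feature."""
    centers = [lo + i * (hi - lo) / (n_sets - 1) for i in range(n_sets)]
    width = (hi - lo) / (n_sets - 1)

    def membership(x):
        # degree of membership of x in each fuzzy set, zero outside its support
        return [max(0.0, 1.0 - abs(x - c) / width) for c in centers]

    return centers, membership

centers, mu = triangular_partition(0.0, 10.0, 3)   # e.g. fuzzy labels "low", "medium", "high"
print(centers, mu(3.5))                            # a value belongs partially to neighbouring labels
```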

14.
The parity decision tree model extends the decision tree model by allowing the computation of a parity function in one step. We prove that the deterministic parity decision tree complexity of any Boolean function is polynomially related to the non-deterministic complexity of the function or its complement. We also show that they are polynomially related to an analogue of the block sensitivity. We further study parity decision trees in their relations with an intermediate variant of the decision trees, as well as with communication complexity.
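A minimal illustration of the model itself, not of the complexity results: each internal node of a parity decision tree may query the parity (XOR) of any subset of input bits in one step.

```python
from functools import reduce

def parity_query(x, subset):
    """One step of a parity decision tree: XOR of the selected input bits."""
    return reduce(lambda a, b: a ^ b, (x[i] for i in subset), 0)

# an ordinary decision tree needs n single-bit queries to compute the XOR of all
# n bits, while a parity decision tree answers it with a single query:
x = [1, 0, 1, 1]
print(parity_query(x, range(len(x))))   # -> 1
```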

15.
Cut selection based on heuristic information is one of the most fundamental issues in the induction of decision trees with continuous valued attributes. This paper connects the selection of optimal cuts with a class of heuristic information functions. It shows statistically that both training and testing accuracies in decision tree learning depend strongly on the choice of heuristic. A clear relationship between the second-order derivative of the heuristic information function and the locations of optimal cuts is derived mathematically and confirmed experimentally. Incorporating this relationship into the tree-building process significantly reduces the number of detected cuts and improves the generalization of the decision tree.
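The derived relationship itself is in the paper and is not reproduced here; the sketch below only computes the ingredients it relates: the heuristic value (weighted child entropy, one common choice) at each candidate cut and its discrete second differences along the cut axis.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)

def heuristic_curve(values, labels):
    """Weighted child entropy at each candidate cut (midpoints between sorted values)."""
    pairs = sorted(zip(values, labels), key=lambda p: p[0])
    cuts, scores = [], []
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                                   # no cut between equal values
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        cuts.append((pairs[i - 1][0] + pairs[i][0]) / 2)
        scores.append((len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs))
    # discrete second differences of the heuristic along the cut axis
    second_diff = [scores[i - 1] - 2 * scores[i] + scores[i + 1]
                   for i in range(1, len(scores) - 1)]
    return cuts, scores, second_diff
```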

16.
Classifiability-based omnivariate decision trees   (Cited: 1 total, 0 self-citations, 1 by others)
Top-down induction of decision trees is a simple and powerful method of pattern classification. In a decision tree, each node partitions the available patterns into two or more sets. New nodes are created to handle each of the resulting partitions and the process continues. A node is considered terminal if it satisfies some stopping criteria (for example, purity, i.e., all patterns at the node are from a single class). Decision trees may be univariate, linear multivariate, or nonlinear multivariate depending on whether a single attribute, a linear function of all the attributes, or a nonlinear function of all the attributes is used for the partitioning at each node of the decision tree. Though nonlinear multivariate decision trees are the most powerful, they are more susceptible to the risks of overfitting. In this paper, we propose to perform model selection at each decision node to build omnivariate decision trees. The model selection is done using a novel classifiability measure that captures the possible sources of misclassification with relative ease and is able to accurately reflect the complexity of the subproblem at each node. The proposed approach is fast and does not suffer from as high a computational burden as that incurred by typical model selection algorithms. Empirical results over 26 data sets indicate that our approach is faster and achieves better classification accuracy compared to statistical model selection algorithms.
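The classifiability measure is specific to the paper and is not reproduced here; in the sketch below, plain cross-validated accuracy stands in for it when choosing among a univariate, a linear multivariate, and a nonlinear multivariate split at a node (scikit-learn assumed, all names illustrative).

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def choose_node_model(X, y):
    """Pick the split type for one node of an omnivariate tree. The paper's
    classifiability measure is replaced by cross-validated accuracy here."""
    candidates = {
        "univariate": DecisionTreeClassifier(max_depth=1),          # axis-aligned stump
        "linear multivariate": LogisticRegression(max_iter=1000),   # linear in all attributes
        "nonlinear multivariate": SVC(kernel="rbf"),                # nonlinear in all attributes
    }
    scores = {name: cross_val_score(m, X, y, cv=3).mean() for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    return best, candidates[best].fit(X, y)
```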

17.
A standard approach to determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive approach to resolving this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable in a given situation. An additional advantage of such an approach is that it facilitates building compact decision trees, which can be much simpler than the logically equivalent conventional decision trees (by compact trees are meant decision trees that may contain branches assigned a set of values, and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The paper describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17), and builds from them a decision tree optimizing a given optimality criterion. The method can work in two modes: the standard mode, which produces conventional decision trees, and compact mode, which produces compact decision trees. The preliminary experiments with AQDT-1 have shown that the decision trees generated by it from decision rules (conventional and compact) have outperformed those generated from examples by the well-known C4.5 program both in terms of their simplicity and their predictive accuracy.

18.
This paper introduces orthogonal decision trees that offer an effective way to construct a redundancy-free, accurate, and meaningful representation of large decision-tree-ensembles often created by popular techniques such as bagging, boosting, random forests, and many distributed and data stream mining algorithms. Orthogonal decision trees are functionally orthogonal to each other and they correspond to the principal components of the underlying function space. This paper offers a technique to construct such trees based on the Fourier transformation of decision trees and eigen-analysis of the ensemble in the Fourier representation. It offers experimental results to document the performance of orthogonal trees on the grounds of accuracy and model complexity.

19.
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for one problem domain, specifically, that of software resource data analysis. The purpose of the decision trees is to identify classes of objects (software modules) that had high development effort, i.e. in the uppermost quartile relative to past data. Sixteen software systems ranging from 3000 to 112000 source lines have been selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4700 objects, capture a multitude of information about the objects: development effort, faults, changes, design style, and implementation style. A total of 9600 decision trees are automatically generated and evaluated. The analysis focuses on the characterization and evaluation of decision tree accuracy, complexity, and composition. The decision trees correctly identified 79.3% of the software modules that had high development effort or faults, on the average across all 9600 trees. The decision trees generated from the best parameter combinations correctly identified 88.4% of the modules on the average. Visualization of the results is emphasized, and sample decision trees are included.

