Similar Literature
20 similar documents found.
1.
Constrained cascade generalization of decision trees (cited: 1, self: 0, others: 1)
While decision tree techniques have been widely used in classification applications, a shortcoming of many decision tree inducers is that they do not learn intermediate concepts, i.e., at each node, only one of the original features is involved in the branching decision. Combining other classification methods, which learn intermediate concepts, with decision tree inducers can produce more flexible decision boundaries between classes, potentially improving classification accuracy. We propose a generic algorithm for cascade generalization of decision tree inducers, with the maximum cascading depth as a parameter that constrains the degree of cascading. Cascading methods proposed in the past, i.e., loose coupling and tight coupling, are strictly special cases of this new algorithm. We have empirically evaluated the proposed algorithm using logistic regression and C4.5 as base inducers on 32 UCI data sets and found that neither loose coupling nor tight coupling is always the best cascading strategy, and that the maximum cascading depth in the proposed algorithm can be tuned for better classification accuracy. We have also empirically compared the proposed algorithm with ensemble methods such as bagging and boosting and found that it performs marginally better than bagging and boosting on average.
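A minimal sketch of the cascading idea, assuming scikit-learn: the base inducer's class-probability outputs are appended to the feature set as intermediate concepts before tree induction. C4.5 is approximated here by scikit-learn's CART-style `DecisionTreeClassifier`; the class name `CascadeTree` and the `max_cascade_depth` parameter are hypothetical, and cascading is applied globally rather than per node as the constrained algorithm allows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

class CascadeTree:
    """Cascade-generalization sketch: logistic-regression probabilities are
    appended as intermediate-concept features before tree induction."""
    def __init__(self, max_cascade_depth=1):
        self.max_cascade_depth = max_cascade_depth
        self.bases = []

    def fit(self, X, y):
        X_aug = X
        for _ in range(self.max_cascade_depth):
            base = LogisticRegression(max_iter=1000).fit(X_aug, y)
            self.bases.append(base)
            # Class-probability outputs become new intermediate-concept features.
            X_aug = np.hstack([X_aug, base.predict_proba(X_aug)])
        self.tree = DecisionTreeClassifier().fit(X_aug, y)
        return self

    def predict(self, X):
        X_aug = X
        for base in self.bases:
            X_aug = np.hstack([X_aug, base.predict_proba(X_aug)])
        return self.tree.predict(X_aug)
```

With `max_cascade_depth=0` this reduces to a plain decision tree; one round of cascading before induction resembles loose coupling.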

2.
This paper presents a novel algorithm named ID6NB for extending the decision trees induced by Quinlan's non-incremental ID3 algorithm. The approach suggests solutions for a few exceptions left unhandled by decision tree induction algorithms, namely (i) situations in which majority voting makes an incorrect decision (generating two different rules for the same data), and (ii) during dimensionality reduction by decision tree induction, the choice of attribute at a node where two or more attributes tie for the highest information gain. Exceptions due to majority voting are handled with the help of the Naive Bayes algorithm, and novel solutions are given for dimensionality reduction. As a result, classification accuracy improves markedly. An extensive experimental evaluation on a number of real and synthetic databases shows that ID6NB is a state-of-the-art classification algorithm that outperforms other methods of decision tree learning.

3.
A knowledge-based strategy for automatic model selection (cited: 1, self: 0, others: 1)
Dai Chaofan, Feng Yanghe. Computer Engineering, 2010, 36(11): 170-172
Automatic model selection is a natural requirement as decision support systems grow more intelligent. Since few practical algorithms exist at present, this paper proposes a strategy for automatic model selection. Models are described within a knowledge-based framework; rules extracted from the fact base and the knowledge base are used to generate an inference tree, and automatic model selection is achieved by combining experience with domain knowledge. Experimental results show that the strategy achieves a high hit rate.

4.
A rough-set-based decision tree construction algorithm (cited: 7, self: 2, others: 5)
To address the complexity of tree construction and the low classification efficiency of the ID3 algorithm, a decision tree construction algorithm based on rough set theory is proposed. The algorithm uses weighted classification roughness as the heuristic function for selecting the splitting attribute at each node; compared with information gain, this criterion characterizes an attribute's overall contribution to classification more comprehensively and is simpler to compute. To remove the influence of noise on attribute selection and leaf generation, the algorithm is further optimized with a variable precision rough set model. Experimental results show that the resulting decision trees outperform those built by ID3 in both size and classification efficiency.
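A hedged sketch of a weighted-classification-roughness heuristic, assuming the standard rough-set lower and upper approximations; the paper's exact weighting scheme may differ. Rows are plain dicts, and the function name `weighted_roughness` is hypothetical.

```python
from collections import defaultdict

def weighted_roughness(rows, attr, decision):
    """Partition rows by `attr`, then measure how roughly that partition
    approximates each decision class, weighted by class size."""
    blocks = defaultdict(list)
    for r in rows:
        blocks[r[attr]].append(r)
    n = len(rows)
    classes = {r[decision] for r in rows}
    total = 0.0
    for c in classes:
        lower = sum(len(b) for b in blocks.values()
                    if all(r[decision] == c for r in b))   # lower approximation
        upper = sum(len(b) for b in blocks.values()
                    if any(r[decision] == c for r in b))   # upper approximation
        size_c = sum(1 for r in rows if r[decision] == c)
        roughness = 1.0 - (lower / upper if upper else 0.0)
        total += (size_c / n) * roughness                  # class-size weight
    return total

# Attribute selection: pick the attribute with the smallest weighted roughness,
# e.g. min(candidate_attrs, key=lambda a: weighted_roughness(rows, a, "class")).
```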

5.
6.
Application of the variable precision rough set model to decision tree construction (cited: 1, self: 0, others: 1)
To address the complexity of tree construction and the low classification efficiency of the ID3 algorithm, this paper proposes a new decision tree construction algorithm based on the variable precision rough set model. The algorithm uses weighted classification roughness as the heuristic function for selecting the splitting attribute at each node; compared with information gain, this criterion characterizes an attribute's overall contribution to classification more comprehensively, is simple to compute, and eliminates the influence of noisy data on attribute selection and leaf generation. Experimental results show that the decision trees constructed by this algorithm outperform those of ID3 in both size and classification efficiency.

7.
In this paper, we present a new algorithm for learning oblique decision trees. Most current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning a decision tree in top-down fashion, but these impurity measures do not properly capture the geometric structure in the data. Motivated by this, our algorithm assesses hyperplanes in a way that takes the geometric structure of the data into account. At each node of the decision tree, we find the clustering hyperplanes for both classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present analysis showing that the angle bisectors of clustering hyperplanes used as split rules are solutions of an interesting optimization problem, and hence argue that this is a principled method of learning a decision tree.
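A hedged sketch of the angle-bisector split. For illustration, each class's "clustering hyperplane" is approximated by the best-fit plane through that class (normal = least-variance principal direction); the paper derives its hyperplanes from a different optimization problem, so only the bisector construction below is faithful.

```python
import numpy as np

def fit_class_hyperplane(X):
    """Return (w, b) of the plane w.x + b = 0 best fitting the points X."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    w = vt[-1]                      # direction of least variance
    return w, -w @ mu               # plane passes through the class mean

def angle_bisectors(w1, b1, w2, b2):
    """Both angle bisectors of two hyperplanes, as candidate split rules."""
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    u1, c1 = w1 / n1, b1 / n1       # normalize so w.x + b is a signed distance
    u2, c2 = w2 / n2, b2 / n2
    return (u1 + u2, c1 + c2), (u1 - u2, c1 - c2)

# At a node: fit one hyperplane per class, form both bisectors, and keep the
# bisector whose split "w.x + b >= 0" yields the lower impurity.
```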

8.
Most existing work on data stream classification assumes the streaming data is precise and definite. This assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. Building on very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at the tree leaves (uVFDTc). The uVFDTc algorithm exploits uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, it uses uncertain naive Bayes (UNB) classifiers at the tree leaves to improve classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves improves the performance of uVFDTc, especially its any-time property, the benefit it draws from uncertain information, and its robustness against uncertainty.
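A minimal sketch of the Hoeffding-bound test used by VFDT-style learners: split only when the observed gain advantage of the best attribute over the runner-up exceeds the bound. uVFDTc additionally propagates tuple uncertainty into the sufficient statistics, which is omitted here.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Deviation bound for the mean of n observations in [0, value_range],
    holding with probability at least 1 - delta."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, n_classes, delta, n):
    r = math.log2(n_classes)          # range of information gain
    return (best_gain - second_gain) > hoeffding_bound(r, delta, n)
```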

9.
Accuracy is a critical factor in predictive modeling. A predictive model such as a decision tree must be accurate to support conclusions about the system being modeled. This research analyzes and improves the performance of classification and regression trees (CART) by deriving a new methodology evaluated on real-world data sets. The paper introduces a new approach to tree induction that improves the efficiency of the CART algorithm by combining its existing functionality with artificial neural networks (ANNs). Trained ANNs are used by the tree induction algorithm to generate new, synthetic data, which has been shown to improve the overall accuracy of the decision tree model when actual training samples are limited. Traditional decision trees developed by the standard CART methodology are compared with the enhanced decision trees that use the ANN's synthetic data generation, or CART+. This research demonstrates the improved accuracies obtainable with CART+, which can ultimately deepen the knowledge researchers extract about the system being modeled.
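A hedged sketch of the CART+ idea, assuming scikit-learn: when training samples are scarce, a trained network serves as a labeling oracle for synthetic points, and the tree is grown on the enlarged sample. The perturbation scheme below (Gaussian jitter around real samples) is an assumption, not the paper's exact generation method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def cart_plus(X, y, n_synthetic=500, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
    # Jitter randomly chosen real samples and let the trained ANN label them.
    idx = rng.integers(0, len(X), size=n_synthetic)
    jitter = noise * X.std(axis=0) * rng.standard_normal((n_synthetic, X.shape[1]))
    X_syn = X[idx] + jitter
    y_syn = ann.predict(X_syn)
    # Grow the tree on real plus synthetic data.
    X_all = np.vstack([X, X_syn])
    y_all = np.concatenate([y, y_syn])
    return DecisionTreeClassifier().fit(X_all, y_all)
```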

10.
Look-ahead based fuzzy decision tree induction (cited: 2, self: 0, others: 2)
Decision tree induction is typically based on a top-down greedy algorithm that makes locally optimal decisions at each node. Because of the greedy, local nature of these decisions, instances at a node may be split along branches such that instances along some or all of the branches require a large number of additional nodes for classification. In this paper, we present a computationally efficient way of incorporating look-ahead into fuzzy decision tree induction. Our algorithm establishes the decision at each internal node by jointly optimizing the node splitting criterion (information gain or gain ratio) and the classifiability of instances along each branch of the node. Simulation results confirm that the proposed look-ahead method leads to smaller decision trees and, consequently, better test performance.
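A hedged sketch of the joint criterion: combine the usual splitting measure with an estimate of how classifiable each branch's instances remain. The leave-one-out 1-NN purity used here as the classifiability proxy is an assumption; the paper defines its own fuzzy classifiability measure.

```python
import numpy as np

def branch_classifiability(Xb, yb):
    """Leave-one-out 1-NN accuracy on numpy arrays (Xb, yb): a cheap look-ahead
    proxy for how easily a branch's instances can still be separated later."""
    if len(Xb) < 2:
        return 1.0
    d = np.linalg.norm(Xb[:, None, :] - Xb[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # exclude each point as its own neighbor
    return float(np.mean(yb[d.argmin(axis=1)] == yb))

def lookahead_score(gain, branches):
    """branches: list of (Xb, yb) pairs produced by a candidate split."""
    cls = np.mean([branch_classifiability(Xb, yb) for Xb, yb in branches])
    return gain * cls   # jointly reward high gain and classifiable branches
```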

11.
This paper presents a new architecture of a fuzzy decision tree based on fuzzy rules – the fuzzy rule based decision tree (FRDT) – and provides a learning algorithm. In contrast with "traditional" axis-parallel decision trees, in which only a single feature (variable) is considered at each node, each node of the proposed tree involves a fuzzy rule defined over multiple features. Fuzzy rules are employed to produce leaves of high purity, and using multiple features per node helps minimize the size of the tree. The FRDT grows by expanding an additional node composed of a mixture of data from different classes, which is the only non-leaf node of each layer. This gives rise to a new geometric structure endowed with linguistic terms, quite different from "traditional" oblique decision trees endowed with hyperplanes as decision functions. A series of numeric studies is reported using UCI machine learning data sets. The comparison is carried out against "traditional" decision trees such as C4.5, LADtree, BFTree, SimpleCart, and NBTree. Statistical tests show that the proposed FRDT exhibits the best performance in terms of both accuracy and the size of the produced trees.

12.
Neural networks and decision tree methods are two common approaches to pattern classification. While neural networks can achieve high predictive accuracy, the decision boundaries they form are highly nonlinear and generally difficult to comprehend. Decision trees, on the other hand, can be readily translated into a set of rules. In this paper, we present a novel algorithm for generating oblique decision trees that capitalizes on the strengths of both approaches. Oblique decision trees classify patterns by testing linear combinations of the input attributes; as a result, an oblique decision tree is usually much smaller than the univariate tree generated for the same domain. Our algorithm consists of two components: connectionist and symbolic. A three-layer feedforward neural network is constructed and pruned; a decision tree is then built from the hidden unit activation values of the pruned network. An oblique decision tree is obtained by expressing the activation values in terms of the original input attributes. We test our algorithm on a wide range of problems. The oblique decision trees generated by the algorithm preserve the high accuracy of the neural networks while keeping the explicitness of decision trees. Moreover, they outperform univariate decision trees generated by the symbolic approach, as well as oblique decision trees built by other approaches, in accuracy and tree size.
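A hedged sketch of the connectionist-to-symbolic conversion: grow a tree on the hidden activations of a trained network, then rewrite each split "activation_j < t" as the oblique rule "w_j . x + b_j < logit(t)", which is exact because the logistic activation is monotonic. Network construction, pruning, and the paper's exact procedure are omitted; the function name `oblique_rules` is hypothetical.

```python
import numpy as np
from scipy.special import logit, expit
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def oblique_rules(X, y):
    net = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                        max_iter=2000).fit(X, y)
    W, b = net.coefs_[0], net.intercepts_[0]    # input-to-hidden weights
    H = expit(X @ W + b)                        # hidden activations
    tree = DecisionTreeClassifier(max_depth=3).fit(H, y)
    t = tree.tree_
    rules = []
    for node in range(t.node_count):
        j = t.feature[node]
        if j >= 0:                              # internal node (leaves are -2)
            # h_j < thr  <=>  w_j . x + b_j < logit(thr), since expit is monotone
            rules.append((W[:, j], b[j], logit(t.threshold[node])))
    return tree, rules
```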

13.
Learning from data streams is a challenging task that demands a learning algorithm with several high-quality properties. In addition to the space and speed requirements needed for processing the huge volume of data arriving at high speed, the learning algorithm must maintain a good balance between stability and plasticity. This paper presents a new approach to inducing incremental decision trees on streaming data, in which the internal nodes contain trainable split tests. In contrast with traditional decision trees, where a single attribute is selected as the split test, each internal node of the proposed approach contains a trainable function over multiple attributes, which not only provides the flexibility needed in the stream context but also improves stability. Based on this approach, we propose the evolving fuzzy min–max decision tree (EFMMDT) learning algorithm, in which each internal node of the decision tree contains an evolving fuzzy min–max neural network. EFMMDT splits the instance space nonlinearly over multiple attributes, resulting in much smaller and shallower decision trees. Extensive experiments reveal that the proposed algorithm achieves much better precision than state-of-the-art decision tree learning algorithms on benchmark data streams, especially in the presence of concept drift.

14.
A new decision tree method for application in data mining, machine learning, pattern recognition, and other areas is proposed in this paper. The new method incorporates a classical multivariate statistical method, the linear discriminant function, into the recursive partitioning process of decision trees. The proposed method considers not only linear combinations of all variables but also combinations of fewer variables, and uses a tabu search technique to find appropriate variable combinations within a reasonable length of time. For problems with more than two classes, the tabu search technique is also used to group the data into two superclasses before each split. The results of our experimental study indicate that the proposed algorithm outperforms some of the major classification algorithms in classification accuracy, generates decision trees of relatively small size, and runs faster than most multivariate decision tree learners, with computing time that increases linearly with data size, indicating that the algorithm scales to large datasets.
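A hedged sketch of a tabu search over variable subsets for a linear discriminant split: flip one feature in or out per move, forbid recently flipped features for a fixed tenure, and keep the best subset seen. The move, tenure, and scoring scheme here are illustrative assumptions; the paper's neighborhood, aspiration rules, and superclass grouping are omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def tabu_feature_search(X, y, n_iter=50, tenure=5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    current = rng.random(d) < 0.5              # random starting subset
    if not current.any():
        current[0] = True

    def score(mask):
        if not mask.any():
            return 0.0
        lda = LinearDiscriminantAnalysis().fit(X[:, mask], y)
        return lda.score(X[:, mask], y)        # resubstitution split quality

    tabu = {}                                  # feature -> iteration it is freed
    best, best_score = current.copy(), score(current)
    for it in range(n_iter):
        moves = [j for j in range(d) if tabu.get(j, 0) <= it]
        if not moves:
            continue
        def flipped(j):
            cand = current.copy()
            cand[j] = not cand[j]              # add or drop one variable
            return cand
        s, j = max((score(flipped(j)), j) for j in moves)
        current[j] = not current[j]
        tabu[j] = it + tenure                  # forbid flipping j back for a while
        if s > best_score:
            best, best_score = current.copy(), s
    return best
```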

15.
Learning in the real world is interactive, incremental, and dynamic in multiple dimensions: new data can appear at any time, from anywhere, and of any type. Incremental learning is therefore increasingly important in real-world data mining scenarios, and decision trees, owing to their characteristics, have been widely used for it. In this paper, we propose a novel incremental decision tree algorithm based on rough set theory. To improve computational efficiency, when a new instance arrives the algorithm, following the given tree adaptation strategies, only modifies an existing leaf node in the currently active decision tree or adds a new leaf node, avoiding the high time cost that traditional incremental methods incur by repeatedly rebuilding the tree. Moreover, a rough-set-based attribute reduction method filters redundant attributes out of the original attribute set, and we adopt two basic notions of rough sets, the significance of attributes and the dependency of attributes, as the heuristic information for selecting splitting attributes. Finally, we apply the proposed algorithm to intrusion detection. The experimental results demonstrate that our algorithm provides competitive solutions for incremental learning.
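A minimal sketch of the two rough-set notions named above, assuming their standard definitions: the dependency of the decision on an attribute set B is the fraction of the universe in B's positive region, and the significance of an attribute is the dependency gained by adding it to B.

```python
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Fraction of rows whose attrs-block is pure in the decision attribute,
    i.e. |POS_B(D)| / |U| for B = attrs and decision attribute D."""
    blocks = defaultdict(list)
    for r in rows:
        blocks[tuple(r[a] for a in attrs)].append(r)
    pos = sum(len(b) for b in blocks.values()
              if len({r[decision] for r in b}) == 1)   # positive region
    return pos / len(rows)

def significance(rows, attrs, a, decision):
    """Dependency gained by adding attribute a to the set attrs."""
    return (dependency(rows, list(attrs) + [a], decision)
            - dependency(rows, attrs, decision))

# Splitting-attribute selection: choose the candidate of highest significance,
# mirroring the heuristic information described above.
```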

16.
Trading Accuracy for Simplicity in Decision Trees (cited: 4, self: 0, others: 4)
Bohanec, Marko; Bratko, Ivan. Machine Learning, 1994, 15(3): 223-250
When communicating concepts, it is often convenient or even necessary to define a concept approximately. A simple, although only approximately accurate, concept definition may be more useful than a completely accurate definition that involves a lot of detail. This paper addresses the problem: given a completely accurate but complex definition of a concept, simplify the definition, possibly at the expense of accuracy, so that the simplified definition still corresponds to the concept sufficiently well. Concepts are represented by decision trees, and the method of simplification is tree pruning. Given a decision tree that accurately specifies a concept, the problem is to find the smallest pruned tree that still represents the concept within some specified accuracy. A pruning algorithm is presented that finds an optimal solution by generating a dense sequence of pruned trees, decreasing in size, such that each tree has the highest accuracy among all possible pruned trees of the same size. An efficient implementation of the algorithm, based on dynamic programming, is presented and empirically compared with three progressive pruning algorithms using both artificial and real-world data. An interesting empirical finding is that real-world data generally allow significantly greater simplification at equal loss of accuracy.
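A hedged sketch of the dynamic-programming idea behind such optimal pruning: for every subtree, keep the minimum achievable error for each pruned size (number of leaves), combining children's tables bottom-up. The `(error_if_leaf, children)` node layout is a hypothetical stand-in, not the paper's data structure.

```python
def opt_tables(node):
    """Return {n_leaves: min_errors} over all prunings of `node`, where
    `node` is an (error_if_pruned_to_leaf, children) tuple."""
    error_if_leaf, children = node
    table = {1: error_if_leaf}             # prune the whole subtree to a leaf
    if children:
        combined = {0: 0}                  # leaves kept so far -> total errors
        for child in children:
            child_table = opt_tables(child)
            merged = {}
            for k1, e1 in combined.items():
                for k2, e2 in child_table.items():
                    k, e = k1 + k2, e1 + e2
                    if k not in merged or e < merged[k]:
                        merged[k] = e      # keep the cheapest way to reach size k
            combined = merged
        for k, e in combined.items():
            if k not in table or e < table[k]:
                table[k] = e
    return table

# Example: a root misclassifying 10 as a leaf, with two leaf children:
# opt_tables((10, [(2, []), (3, [])])) -> {1: 10, 2: 5}
# Reading off, for each size, the entry at the root yields the dense sequence
# of size-optimal pruned trees.
```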

17.
Decision trees are an effective data mining method for classifying instances. When handling missing values in incomplete information systems, most existing decision tree algorithms resort to guessing techniques. Without altering the missing values, this paper uses the notion of maximal consistent blocks to define, in an incomplete decision table, the decision support degree of the condition attributes with respect to the decision attribute, and adopts it as the heuristic information for attribute selection. On this basis, a decision tree induction algorithm for incomplete information systems, IDTBDS, is proposed; it derives rule sets quickly and achieves high accuracy.

18.
We consider the problem of exploiting a taxonomy of propositionalized attributes in order to learn compact and robust classifiers. We introduce the propositionalized attribute taxonomy guided decision tree learner (PAT-DTL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact decision trees. Since taxonomies are unavailable in most domains, we also introduce the propositionalized attribute taxonomy learner (PAT-Learner), which automatically constructs a taxonomy from data. PAT-DTL uses top-down and bottom-up search to find a locally optimal cut that corresponds to the literals of decision rules from the data and the propositionalized attribute taxonomy. PAT-Learner propositionalizes attributes and hierarchically clusters the propositionalized attributes, based on the distribution of class labels that co-occur with them, to generate a taxonomy. Our experimental results on UCI repository data sets show that the proposed algorithms can generate a decision tree that is generally more compact than, and sometimes comparable in accuracy to, those produced by standard decision tree learners.

19.
Omnivariate decision trees (cited: 3, self: 0, others: 3)
Univariate decision trees consider the value of only one feature at each decision node, leading to axis-aligned splits. In a linear multivariate decision tree, each decision node divides the input space in two with a hyperplane. In a nonlinear multivariate tree, a multilayer perceptron at each node divides the input space arbitrarily, at the expense of increased complexity and a higher risk of overfitting. We propose omnivariate trees, in which a decision node may be univariate, linear, or nonlinear depending on the outcome of comparative statistical tests on accuracy, thereby automatically matching the complexity of the node to the subproblem defined by the data reaching it. Such an architecture frees the designer from choosing the appropriate node type, performing model selection automatically at each node. Our simulation results indicate that this decision tree induction method generalizes better than trees with the same type of node everywhere, and induces small trees.
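A hedged sketch of omnivariate node-type selection, assuming scikit-learn and SciPy: compare a univariate stump, a linear model, and a small MLP by cross-validation on the data reaching the node, keeping the simplest model unless a more complex one is statistically better. The paired t-test here stands in for the paper's statistical testing procedure, and `choose_node_model` is a hypothetical name.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def choose_node_model(X, y, alpha=0.05, cv=5):
    # Candidates ordered from simplest to most complex node type.
    models = [("univariate", DecisionTreeClassifier(max_depth=1)),
              ("linear", LogisticRegression(max_iter=1000)),
              ("nonlinear", MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))]
    scores = [cross_val_score(m, X, y, cv=cv) for _, m in models]
    best_name, best = models[0][0], scores[0]
    for (name, _), s in zip(models[1:], scores[1:]):
        # Move to the more complex node type only if significantly better.
        t, p = ttest_rel(s, best)
        if s.mean() > best.mean() and p < alpha:
            best_name, best = name, s
    return best_name
```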

20.
Probability trees are decision trees that predict class probabilities rather than the most likely class. The pruning criterion used to learn a probability tree strongly influences the size of the tree and thereby also the quality of its probability estimates. While the effect of pruning criteria on classification accuracy is well-studied, only recently has there been more interest in the effect on probability estimates. Hence, it is currently unclear which pruning criteria for probability trees are preferable under which circumstances. In this paper we survey six of the most important pruning criteria for probability trees, and discuss their theoretical advantages and disadvantages. We also perform an extensive experimental study of the relative performance of these pruning criteria. The main conclusion is that overall a pruning criterion based on randomization tests performs best because it is most robust to extreme data characteristics (such as class skew or a high number of classes).
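A minimal sketch of a randomization-test pruning decision: compare a split's observed information gain against its distribution under random label permutations, and prune when the observed gain is unexceptional. The gain-based score and the pruning threshold are illustrative assumptions, not the surveyed criterion's exact form.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def split_gain(y, branch_idx):
    """Information gain of a fixed partition (branch_idx: list of index arrays)."""
    n = len(y)
    return entropy(y) - sum(len(i) / n * entropy(y[i]) for i in branch_idx)

def randomization_p_value(y, branch_idx, n_perm=1000, seed=0):
    """P-value of the observed gain under random permutations of the labels."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    observed = split_gain(y, branch_idx)
    hits = sum(split_gain(rng.permutation(y), branch_idx) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Prune the split whenever the p-value exceeds a chosen alpha, i.e. whenever
# its apparent gain could plausibly arise by chance.
```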
