Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
She  Yanhong  Zhao  Zhuojun  Hu  Mengting  Zheng  Wenli  He  Xiaoli 《Artificial Intelligence Review》2021,54(8):6125-6148

In this paper, a novel optimal scale selection method for complete multi-scale decision tables is proposed. Unlike existing approaches in the literature, we employ the tools of granularity trees and cuts for each attribute. Each granularity tree admits many different local cuts, which represent various scale selections under a specific attribute. Different local cuts collectively form a global cut of a multi-scale information table, which in turn induces an information table with a mixed scale. One distinct feature of such tables is that, for the same attribute, the attribute values of different objects may be obtained at different scales. By keeping maximal consistency of the derived mixed-scale decision table, we introduce the notion of optimal cuts in multi-scale decision tables. Then, a comparative study of different types of optimal scale selection methods is performed. Finally, an algorithm is designed to verify the validity of the proposed approach.


2.
Fuzzy decision tree induction is an important way of learning from examples with fuzzy representation. Since constructing an optimal fuzzy decision tree is NP-hard, research on heuristic algorithms is necessary. In this paper, three heuristic algorithms for generating fuzzy decision trees, one of them proposed by the authors, are analyzed and compared. The comparison is twofold: an analytic comparison based on expanded-attribute selection and the reasoning mechanism, and an experimental comparison based on the size of the generated trees and learning accuracy. The purpose of this study is to explore the comparative strengths and weaknesses of the three heuristics and to offer useful guidelines on how to choose an appropriate heuristic for a particular problem.

3.
The entire construction of a fuzzy decision tree under the Min-Ambiguity heuristic is carried out with respect to a given significance-level parameter, whose value has an important influence on the performance of the resulting tree. Our experimental study shows that, within a certain interval, gradually increasing this parameter steadily shrinks the tree while maintaining or improving test accuracy, until the parameter reaches its optimal value, at which the tree attains its best test accuracy with minimal size. Increasing the parameter further, beyond this interval, over-prunes the tree and lowers its test accuracy. Finally, comparison experiments on the same data with a crisp decision tree system (the C4.5 system), before and after post-pruning, further confirm that within this interval the effect of gradually increasing the parameter on a fuzzy decision tree is equivalent to post-pruning a crisp decision tree.

4.
Application of the variable precision rough set model to decision tree construction
To address the complexity and low classification efficiency of decision trees built by the ID3 algorithm, this paper proposes a new decision tree construction algorithm based on the variable precision rough set model. The algorithm uses weighted classification roughness as the heuristic for selecting the splitting attribute at each node. Compared with information gain, this criterion more fully characterizes an attribute's overall contribution to classification, is simple to compute, and eliminates the influence of noisy data on attribute selection and leaf generation. Experimental results show that the decision trees constructed by this algorithm outperform those of ID3 in both size and classification efficiency.

5.
The decision tree algorithms of machine learning provide important data classification functionality, but the classification performance of the information-gain-based ID3 algorithm and the Gini-index-based CART algorithm still leaves room for improvement. This work constructs an adaptive ensemble measure of information gain and the Gini index and designs an effective decision tree algorithm to improve on the two basic algorithms, ID3 and CART. It analyzes the heterogeneous independence between the information-theoretic representation of information gain and the algebraic representation of the Gini index, and adopts a knowledge-based weighted linear combination to build the information…
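The abstract's combination of the two impurity measures can be sketched as a weighted linear blend of entropy and Gini impurity; the fixed weight `w` below is an illustrative knob, not the paper's knowledge-based adaptive weighting:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a sequence of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def combined_impurity(labels, w=0.5):
    """Hypothetical weighted linear combination of the two measures."""
    return w * entropy(labels) + (1 - w) * gini(labels)
```

For a perfectly mixed two-class node, entropy is 1 bit and Gini impurity is 0.5, so an equal-weight blend scores 0.75; either measure alone would rank candidate splits by only one of the two notions of impurity.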

6.
Decision trees are a common classification method in data mining. To handle the inconsistent data caused by noise in university graduate employment records, this paper proposes a decision tree model based on variable precision rough sets and applies it to the analysis of student employment data. The method uses the classification quality measure of variable precision rough sets as the information function for selecting condition attributes as tree nodes, splitting the data set top-down until a termination condition is met. It fully accounts for dependency and redundancy among attributes and allows a certain degree of inconsistency among the instances assigned to the positive region during tree construction. Experiments show that the algorithm handles inconsistent data sets effectively and classifies the employment data correctly and reasonably, yielding several valuable conclusions for decision analysis. The algorithm greatly improves the generalization ability of the decision rules and simplifies the tree structure.

7.
Learning in the real world is interactive, incremental, and dynamic in multiple dimensions: new data can appear at any time, from anywhere, and of any type. Incremental learning is therefore increasingly important in real-world data mining scenarios. Decision trees, due to their characteristics, have been widely used for incremental learning. In this paper, we propose a novel incremental decision tree algorithm based on rough set theory. To improve computational efficiency, when a new instance arrives the algorithm, following the given decision tree adaptation strategies, only modifies an existing leaf node of the currently active decision tree or adds a new leaf node to it, avoiding the high time complexity of traditional incremental methods that rebuild the tree too many times. Moreover, rough-set-based attribute reduction is used to filter redundant attributes out of the original attribute set, and we adopt two basic notions of rough sets, significance of attributes and dependency of attributes, as the heuristic information for selecting splitting attributes. Finally, we apply the proposed algorithm to intrusion detection. The experimental results demonstrate that our algorithm provides competitive solutions to incremental learning.
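The rough-set notion of attribute dependency used as heuristic information above can be sketched as follows: the dependency degree is the fraction of objects whose equivalence class under the condition attributes is pure with respect to the decision attribute. The toy table in the usage note is hypothetical, not the paper's data:

```python
from collections import defaultdict

def partition(universe, table, attrs):
    """Group objects into equivalence classes by their values on attrs."""
    blocks = defaultdict(list)
    for obj in universe:
        blocks[tuple(table[obj][a] for a in attrs)].append(obj)
    return list(blocks.values())

def dependency(universe, table, cond_attrs, dec_attr):
    """gamma(C, d): fraction of objects in the positive region, i.e. objects
    whose C-equivalence class carries a single decision value."""
    pos = 0
    for block in partition(universe, table, cond_attrs):
        if len({table[obj][dec_attr] for obj in block}) == 1:
            pos += len(block)
    return pos / len(universe)
```

For example, with four objects where attribute `a` determines the decision `d` for objects 1 and 2 but not for 3 and 4, the dependency of `d` on `{a}` is 0.5.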

8.
A decision tree generation algorithm for incomplete information systems
Decision trees are an effective data mining method for instance classification. When handling missing values in incomplete information systems, most existing decision tree algorithms rely on guessing techniques. Without altering the missing values, this paper uses the concept of maximal consistent blocks to define the decision support degree of condition attributes for the decision attribute in an incomplete decision table, and takes it as the heuristic information for attribute selection. On this basis, it proposes IDTBDS, a decision tree generation algorithm for incomplete information systems, which obtains rule sets quickly and achieves high accuracy.

9.
Operating-state evaluation judges how well a production process is running, given that the process is operating normally. For complex industrial processes in which quantitative and qualitative information coexist, this paper proposes a random-forest-based method for evaluating process operating states. To address the information redundancy among the decision trees of a random forest, the trees of a traditional random forest are grouped using mutual information, and the best tree in each group is selected to form a new forest. To strengthen the influence of trees with high evaluation accuracy and weaken that of trees with low accuracy on the final result, a weighted voting mechanism replaces traditional majority voting, yielding a mutual-information-based weighted random forest algorithm (MIWRF). For online evaluation, the method computes the probability that online data belong to each grade and, combined with the proposed online evaluation strategy, determines the operating-state grade of the current sample. To verify the effectiveness of the algorithm, the method was applied to a hydrometallurgical leaching process. The experimental results show that, compared with the traditional random forest algorithm, MIWRF reduces model complexity while improving the accuracy of operating-state evaluation.
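The weighted voting step that replaces majority voting can be sketched as below; the weights stand in for MIWRF's evaluation-accuracy-based weights and are purely illustrative:

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Weighted ensemble vote: each tree's prediction counts in proportion to
    its weight, so accurate trees dominate inaccurate ones."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)
```

With equal weights this reduces to plain majority voting; with accuracy-based weights, a single highly trusted tree can outvote several weak ones.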

10.
Mining streaming data is a hot topic in data mining. When performing classification on data streams, traditional classification algorithms based on decision trees, such as ID3 and C4.5, are relatively inefficient in both time and space due to the characteristics of streaming data, whereas random decision trees offer advantages in both. This paper proposes SRMTDS (Semi-Random Multiple decision Trees for Data Streams), an incremental algorithm for mining data streams based on random decision trees. SRMTDS uses the Hoeffding-bound inequality to choose the minimum number of split examples, a heuristic method that computes information gain to obtain the split thresholds of numerical attributes, and a naive Bayes classifier to estimate the class labels of tree leaves. Our extensive experimental study shows that SRMTDS improves on VFDTc, a state-of-the-art decision tree algorithm for classifying data streams, in time, space, accuracy, and noise tolerance.
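The Hoeffding bound used to choose the minimum number of split examples can be sketched as follows; the split test shown is the generic Hoeffding-tree criterion, not SRMTDS's exact procedure:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability 1 - delta,
    the true mean of a variable with range R lies within epsilon of the mean
    observed over n examples."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def enough_examples(gain_gap, value_range, delta, n):
    """Splitting is statistically safe once the observed gap between the two
    best attributes' information gains exceeds the bound."""
    return gain_gap > hoeffding_bound(value_range, delta, n)
```

The bound shrinks as `n` grows, so a stream learner keeps accumulating examples at a leaf until even a small observed gain gap becomes decisive.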

11.
The problem of generating the sequence of tests required to reach a diagnostic conclusion with minimum average cost, also known as the test-sequencing problem, is considered. The traditional test-sequencing problem is generalized here to include asymmetrical tests. In general, the next test to execute depends on the results of previous tests; hence, the test-sequencing problem can naturally be formulated as the construction of an optimal binary AND/OR decision tree, a problem known to be NP-hard. Our approach integrates concepts from one-step look-ahead heuristic algorithms with the basic ideas of Huffman coding to construct an AND/OR decision tree bottom-up, in contrast to the heuristics proposed in the literature, which construct AND/OR trees top-down. The performance of the algorithm is demonstrated on numerous test cases with various properties.

12.
Induction of decision trees is one of the most successful approaches to supervised machine learning. Branching programs are a generalization of decision trees and, by the boosting analysis, exponentially more efficiently learnable than decision trees. However, this advantage has not been seen to materialize in experiments. Decision trees are easy to simplify using pruning, and reduced error pruning is one of the simplest decision tree pruning algorithms; for branching programs, no pruning algorithms are known. In this paper we prove that reduced error pruning of branching programs is infeasible: finding the optimal pruning of a branching program with respect to a set of pruning examples separate from the training examples is NP-complete. Because of this intractability result, we have to consider approximating reduced error pruning. Unfortunately, it turns out that even finding an approximate solution of arbitrary accuracy is computationally infeasible; in particular, reduced error pruning of branching programs is APX-hard. Our experiments show that, despite these negative theoretical results, heuristic pruning of branching programs can reduce their size without significantly altering the accuracy.
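For decision trees (as opposed to the branching programs studied above), reduced error pruning is simple. A minimal sketch, assuming categorical attributes and, for brevity, a single global majority label rather than a per-subtree majority:

```python
class Node:
    """A decision tree node: internal if children is non-empty, else a leaf."""
    def __init__(self, attr=None, children=None, label=None):
        self.attr, self.children, self.label = attr, children or {}, label

def classify(node, x):
    while node.children:
        child = node.children.get(x[node.attr])
        if child is None:  # unseen attribute value: stop at this node's label
            break
        node = child
    return node.label

def errors(node, data):
    return sum(classify(node, x) != y for x, y in data)

def reduced_error_prune(node, prune_data, majority):
    """Bottom-up: replace a subtree by a majority-label leaf whenever that
    does not increase error on the held-out pruning set."""
    if not node.children:
        return node
    for v, child in node.children.items():
        subset = [(x, y) for x, y in prune_data if x[node.attr] == v]
        node.children[v] = reduced_error_prune(child, subset, majority)
    leaf = Node(label=majority)
    if errors(leaf, prune_data) <= errors(node, prune_data):
        return leaf
    return node
```

A split whose branches all predict the same label as the majority is collapsed to a leaf; a split that actually separates the pruning examples is kept. The NP-completeness result above says precisely that no analogously simple procedure exists once nodes may share subtrees.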

13.
Model trees are a particular kind of decision tree used to solve regression problems. They have the advantage of producing interpretable output, which helps end-users gain confidence in the prediction and provides a basis for new insight about the data, confirming or rejecting previously formed hypotheses. Moreover, model trees achieve an acceptable level of predictive performance compared with most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms use a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm that uses the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance on public UCI data sets and compare the results to traditional greedy regression/model tree induction algorithms as well as to other evolutionary approaches. The results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications.

14.
A decision tree rule extraction method based on a new conditional entropy
This paper analyzes the limitations of existing information entropy measures in reflecting the "decision ability" of a decision table during knowledge reduction, and defines a new conditional entropy to remedy them. It then improves the attribute selection criterion of traditional heuristic methods, giving a new definition of attribute significance, and designs a decision tree rule extraction method that uses this new significance as heuristic information. The advantage of the method is that no attribute reduction is performed before building the tree and extracting decision rules, the computation is straightforward, and the time complexity is low. The results of an applied example show that the method extracts more concise and effective decision rules.

15.
A knowledge-based strategy for automatic model selection
戴超凡  冯旸赫 《计算机工程》2010,36(11):170-172
Automatic model selection is an inevitable requirement of the intelligent development of decision support systems. Given the current scarcity of practical algorithms, this paper proposes a strategy for automatic model selection. Models are described with a knowledge frame; rules extracted from the fact base and knowledge base generate an inference tree, and automatic model selection is achieved by combining experience with domain knowledge. Experimental results show that the strategy achieves a high hit rate.

16.
Cybernetics studies information processes in the context of interaction with physical systems. Because such information is sometimes vague and exhibits complex interactions, it can only be discerned using approximate representations. Machine learning provides solutions that create approximate models of information, and decision trees are one of its main components. However, decision trees are susceptible to information overload and can become overly complex when a large amount of data is fed into them. Granulating a decision tree remedies this problem by retaining its essential structure, though it can decrease its utility. To evaluate the relationships between granulation and decision tree complexity, data uncertainty, and prediction accuracy, the deficiencies recorded for nursing homes during annual inspections were taken as a case study. Using rough sets, three forms of granulation were performed: (1) grouping attributes, (2) removing insignificant attributes, and (3) removing uncertain records. Attribute grouping significantly reduces tree complexity without any strong effect on data consistency and accuracy. Removing insignificant features, on the other hand, decreases data consistency and tree complexity while increasing prediction error. Finally, decreasing the uncertainty of the dataset increases accuracy and has no impact on tree complexity.

17.
A rough-set-based decision tree construction algorithm
To address the complexity and low classification efficiency of decision trees built by the ID3 algorithm, this paper proposes a decision tree construction algorithm based on rough set theory. The algorithm uses weighted classification roughness as the heuristic for selecting the splitting attribute at each node; compared with information gain, it characterizes an attribute's overall contribution to classification more fully and is simpler to compute. To eliminate the influence of noise on attribute selection and leaf generation, the algorithm is further optimized with the variable precision rough set model. Experimental results show that the resulting decision trees outperform those of ID3 in both size and classification efficiency.

18.
Many methods for building decision trees already exist; most are based on information entropy, for example the ID3 algorithm, the Min-Ambiguity algorithm, and their variants. This paper presents a new heuristic algorithm based on the importance of attributes for classification. When choosing the expanded attribute there are two options, sensitive attributes and insensitive attributes; people usually choose sensitive attributes and overlook the insensitive ones. The paper applies the method mainly to several databases with symbolic attributes and clearly separated classes. Based on experiments on these databases, the two approaches are compared in several respects and their respective advantages and disadvantages are pointed out.

19.
An ID3 algorithm based on interaction information between attributes
Heuristics are at the core of decision tree research. This paper analyzes the shortcomings of ID3, the most common heuristic for decision tree induction, and presents an improved version: when selecting a test attribute, it requires not only that the attribute bring as large an information gain as possible, but also that its interaction information with the attributes already used on the same branch be as small as possible, thereby avoiding the selection of redundant attributes and achieving a genuine reduction in information entropy. Analysis and experimental results show that, compared with ID3, the algorithm constructs better decision trees.
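Since information gain on categorical data equals the mutual information between an attribute and the class, the improved selection rule above can be sketched as gain minus a penalty for interaction with already-used attributes. The trade-off weight `lam` is an illustrative assumption, not taken from the paper:

```python
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def mutual_info(rows, a, b):
    """I(a; b) = H(a) + H(b) - H(a, b), on a list of attribute-value dicts."""
    return (entropy([r[a] for r in rows]) + entropy([r[b] for r in rows])
            - entropy([(r[a], r[b]) for r in rows]))

def select_attribute(rows, candidates, used, target, lam=1.0):
    """Pick the candidate maximizing info gain (= I(attr; target)) minus the
    worst-case interaction with attributes already used on the branch."""
    def score(a):
        gain = mutual_info(rows, a, target)
        penalty = max((mutual_info(rows, a, u) for u in used), default=0.0)
        return gain - lam * penalty
    return max(candidates, key=score)
```

With no used attributes the rule reduces to plain ID3; once an attribute has been used, a candidate that merely duplicates it is penalized even if its raw gain is high.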

20.
An improved attribute selection criterion for decision trees
Decision tree algorithms are a research focus in data mining and are widely used in practice to build classifiers and predictive models. This paper reviews the classic ID3 decision tree algorithm, analyzes its strengths and weaknesses, and proposes a new attribute selection criterion derived from the Taylor and Maclaurin formulas. By simplifying the computation of information entropy, the improved algorithm raises classification accuracy, shortens decision tree construction time, and reduces computational cost. Experiments confirm the effectiveness and correctness of the improved algorithm.
