Similar Documents
20 similar documents retrieved; search time: 265 ms
1.
A Global Attribute Discretization Algorithm Based on Hierarchical Clustering   (Cited by: 1; self-citations: 0; others: 1)
This paper abandons the conventional approach of discretizing via cut-point sets and proposes a new global discretization algorithm based on rough sets and divisive hierarchical clustering. Building on hierarchical clustering, the algorithm accounts for the complementarity and correlation among the discretization results of different continuous attributes, and performs global discretization by adding and removing clusters without altering the indiscernibility relation of the original information system. Experiments show that the algorithm can delete unnecessary attributes, improves discretization accuracy, and facilitates attribute reduction.

2.
Existing rough-set-based discretization algorithms rarely achieve both high efficiency and high recognition accuracy. This paper presents an information-entropy discretization algorithm for rough sets based on level-wise mean clustering. First, an improved level-wise mean clustering algorithm groups the candidate cut points of each attribute by their information-entropy values, producing a new, smaller candidate cut-point set; an entropy-based discretization algorithm then selects the final cut points and discretizes the continuous attributes. Experimental results show that, at comparable recognition accuracy, the method has a lower time cost than traditional discretization methods.

3.
Discretization of continuous attributes is an important problem in machine learning and data mining: whether the discretization is reasonable determines how accurately the relevant information can be represented and extracted. After studying the Chi2 family of algorithms, this paper proposes a new discretization method based on attribute significance, the Imp-Chi2 algorithm, which reorders the attributes for discretization according to their significance and thereby discretizes continuous attributes more accurately. The discretized results are evaluated with C4.5 and support vector machines. During the experiments, a class-proportional sampling method for building the training set is proposed, which avoids the unevenness of purely random sampling. The experimental results demonstrate the effectiveness of the proposed algorithm.

4.
The entropy-based binary-split algorithm for discretizing continuous attributes performs poorly when analyzing and predicting on datasets with many continuous attributes and large data volumes. Experiments show that using an improved k-means algorithm as the discretization step within a decision-tree algorithm produces better decision trees on instances with many continuous attributes.

5.
Li Xiaofei. Computer Applications and Software (《计算机应用与软件》), 2009, 26(10): 262-264, 272
Discretization of continuous attributes is an important aspect of machine learning and one of the problems of data preprocessing. The discretization algorithm based on dynamic hierarchical clustering presented here is an improvement on hierarchical clustering. The algorithm is analyzed qualitatively: randomly sampled data are clustered by similarity, yielding a partition of the universe. Experiments show that the dynamic-hierarchical-clustering discretization algorithm partitions continuous attributes more reasonably and effectively.

6.
Discretization of continuous attributes is very important in data preprocessing for data analysis. This paper proposes a supervised discretization method based on class information entropy, which uses the concept of the consistency level of a decision table from rough set theory. The algorithm has two parts: first, the number of clusters is dynamically adjusted according to the consistency level of the decision table, and hierarchical clustering forms the initial clusters; then adjacent intervals are merged based on class information entropy to reduce the number of intervals. Practice shows the method is feasible.
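The second phase described in this abstract, merging adjacent intervals under a class-entropy criterion, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the greedy lowest-entropy merge rule and the `max_intervals` stopping condition are assumptions for the example.

```python
import math
from collections import Counter

def class_entropy(labels):
    # Shannon entropy of the class distribution in one interval
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def merge_adjacent(intervals, max_intervals):
    """Greedily merge the adjacent pair of intervals whose union has the
    lowest class entropy, until only max_intervals remain. Each interval
    is a list of class labels of the examples falling in it."""
    intervals = [list(iv) for iv in intervals]
    while len(intervals) > max_intervals:
        best = min(range(len(intervals) - 1),
                   key=lambda i: class_entropy(intervals[i] + intervals[i + 1]))
        intervals[best:best + 2] = [intervals[best] + intervals[best + 1]]
    return intervals
```

Merging the purest adjacent pair first keeps class-homogeneous regions together while shrinking the interval count.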

7.
A New Method for Discretization of Continuous Attributes   (Cited by: 6; self-citations: 0; others: 6)
This paper proposes a discretization method for continuous attributes that combines clustering with rough set theory. Rough set theory provides an important concept, attribute significance, which is commonly used as the heuristic evaluation function for generating good reducts. Inspired by this, the method uses attribute significance for attribute selection during discretization: the attribute with the highest significance is selected from the already-discretized attributes and clustered together with the continuous attribute to be discretized, yielding the discrete intervals for that attribute. The paper gives a description of the algorithm and compares it experimentally with other algorithms. Because the method incorporates rough set theory and accounts for the interactions among attributes, it produces more reasonable cut points and improves the classification accuracy of the resulting rules.

8.
To address the poor clustering quality caused by the mismatch between similarity measures for numeric and categorical attributes in mixed-attribute data, this paper analyzes the similarity-measurement problem in mixed-attribute clustering and proposes an entropy-based clustering algorithm for mixed attributes. Entropy-based discretization converts the numeric attributes to categorical ones, so that a single binary (matching) distance measures the similarity between objects. The clustering proceeds by randomly selecting k initial cluster centers, assigning every other object to the cluster with the nearest center, forming new centers from the most frequent value of each attribute within each cluster, and iterating until the stopping condition is met, yielding the final clustering. Experimental results on UCI datasets verify the effectiveness of the algorithm.
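The clustering loop described here resembles a k-modes procedure over fully categorical data (numeric attributes having already been entropy-discretized). A minimal sketch under that reading; the matching-distance definition, tie-breaking, and iteration cap are assumptions, not taken from the paper:

```python
import random
from collections import Counter

def matching_distance(a, b):
    # Binary dissimilarity: number of attributes whose values differ
    return sum(x != y for x, y in zip(a, b))

def k_modes(objects, k, max_iter=20, seed=0):
    """Assign each object (a tuple of categorical values) to the nearest
    of k centers, then rebuild each center from the most frequent value
    of each attribute in its cluster; repeat until centers stabilize."""
    rng = random.Random(seed)
    centers = rng.sample(objects, k)
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for obj in objects:
            idx = min(range(k), key=lambda i: matching_distance(obj, centers[i]))
            clusters[idx].append(obj)
        new_centers = []
        for i, members in enumerate(clusters):
            if not members:            # keep the old center for empty clusters
                new_centers.append(centers[i])
                continue
            # Mode of each attribute column becomes the new center value
            mode = tuple(Counter(col).most_common(1)[0][0] for col in zip(*members))
            new_centers.append(mode)
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters
```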

9.
Discretization of continuous attributes is an important preprocessing step in data analysis, and rough-set-based data analysis requires the discretization result to preserve the discernibility relation of the original information system as much as possible. This paper proposes a new discretization algorithm that uses the dependency degree of the decision attribute on the condition attribute set as an evaluation function to dynamically adjust the parameters of the DBSCAN clustering algorithm, until the dependency degree after discretization reaches a pre-specified threshold. Analysis and experiments show that the algorithm is practical and feasible.
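The evaluation function named here, the rough-set dependency degree of the decision attribute on the condition attributes, can be computed as below. The `(condition_tuple, decision)` table representation is an assumption for illustration:

```python
from collections import defaultdict

def dependency_degree(table):
    """gamma(C, D): fraction of objects whose condition-attribute
    equivalence class is pure with respect to the decision attribute.
    `table` is a list of (condition_tuple, decision) pairs."""
    decisions_per_class = defaultdict(set)
    for cond, dec in table:
        decisions_per_class[cond].add(dec)
    # An object is in the positive region if its equivalence class
    # maps to exactly one decision value.
    pos = sum(1 for cond, _ in table if len(decisions_per_class[cond]) == 1)
    return pos / len(table)
```

A fully consistent discretization yields gamma = 1.0, which is how a threshold on this value can steer the clustering parameters.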

10.
Discretization of data attributes is an important part of preprocessing combat-simulation data and a key, difficult topic in combat-simulation data research. This paper discusses the necessity of attribute discretization and proposes a discretization algorithm for combat-simulation data based on improved attribute significance and information entropy (Discretization by Improved Attribute Significance and Information Entropy, DIAFIE). The algorithm defines attribute significance and uses it as the clustering criterion to partition the value domain into multiple discrete intervals; adjacent intervals are then merged under an information-entropy optimization to preserve the precision of the result. Experiments show that the algorithm handles the discretization of combat-simulation data effectively, producing few cut points and high classification accuracy.

11.
A New Discretization Algorithm for Continuous Attributes Based on Rough Set Theory   (Cited by: 3; self-citations: 0; others: 3)
Rough set theory requires discretization to preserve the indiscernibility relation of the original decision system, but some previous algorithms only keep the approximation precision within an acceptable range during discretization, i.e., they tolerate a degree of misclassification. To remedy this, a new interval-splitting method is proposed that discretizes continuous attributes more reasonably and effectively while guaranteeing that the decision attribute remains strictly unchanged. The discretized data are evaluated with C4.5 and support vector machines for recognition and classification prediction; the experimental results demonstrate the algorithm's effectiveness.

12.
This work experimentally analyzes discretization algorithms for handling continuous attributes in evolutionary learning. We consider a learning system that induces a set of rules in a fragment of first-order logic (evolutionary inductive logic programming), and introduce a method where a given discretization algorithm is used to generate initial inequalities, which describe subranges of attributes' values. Mutation operators exploiting information on the class label of the examples (supervised discretization) are used during the learning process for refining inequalities. The evolutionary learning system is used as a platform for testing experimentally four algorithms: two variants of the proposed method, a popular supervised discretization algorithm applied prior to induction, and a discretization method which does not use information on the class labels of the examples (unsupervised discretization). Results of experiments conducted on artificial and real life datasets suggest that the proposed method provides an effective and robust technique for handling continuous attributes by means of inequalities.

13.
A discretization algorithm based on Class-Attribute Contingency Coefficient   (Cited by: 1; self-citations: 0; others: 1)
Discretization algorithms have played an important role in data mining and knowledge discovery. They not only produce a concise summarization of continuous attributes to help the experts understand the data more easily, but also make learning more accurate and faster. In this paper, we propose a static, global, incremental, supervised and top-down discretization algorithm based on Class-Attribute Contingency Coefficient. Empirical evaluation of seven discretization algorithms on 13 real datasets and four artificial datasets showed that the proposed algorithm could generate a better discretization scheme that improved the accuracy of classification. As to the execution time of discretization, the number of generated rules, and the training time of C5.0, our approach also achieved promising results.

14.
Relief is a measure of attribute quality which is often used for feature subset selection. Its use in induction of classification trees and rules, discretization, and other methods has however been hindered by its inability to suggest subsets of values of discrete attributes and thresholds for splitting continuous attributes into intervals. We present efficient algorithms for both tasks.

15.
CAIM discretization algorithm   (Cited by: 8; self-citations: 0; others: 8)
The task of extracting knowledge from databases is quite often performed by machine learning algorithms. The majority of these algorithms can be applied only to data described by discrete numerical or nominal attributes (features). In the case of continuous attributes, there is a need for a discretization algorithm that transforms continuous attributes into discrete ones. We describe such an algorithm, called CAIM (class-attribute interdependence maximization), which is designed to work with supervised data. The goal of the CAIM algorithm is to maximize the class-attribute interdependence and to generate a (possibly) minimal number of discrete intervals. The algorithm does not require the user to predefine the number of intervals, as opposed to some other discretization algorithms. The tests performed using CAIM and six other state-of-the-art discretization algorithms show that discrete attributes generated by the CAIM algorithm almost always have the lowest number of intervals and the highest class-attribute interdependency. Two machine learning algorithms, the CLIP4 rule algorithm and the decision tree algorithm, are used to generate classification rules from data discretized by CAIM. For both the CLIP4 and decision tree algorithms, the accuracy of the generated rules is higher and the number of the rules is lower for data discretized using the CAIM algorithm when compared to data discretized using six other discretization algorithms. The highest classification accuracy was achieved for data sets discretized with the CAIM algorithm, as compared with the other six algorithms.
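The interdependence criterion CAIM maximizes is commonly stated as the average, over the n intervals, of (largest class count in the interval)² divided by the interval's total count; that formula is taken from the published CAIM paper, not from this abstract, so treat this as a hedged sketch of the scoring step only (a full implementation would also search greedily over candidate cut points):

```python
from collections import Counter

def caim_value(labels, values, boundaries):
    """Score a discretization scheme: for each interval r, compute
    (max class count in r)**2 / (total count in r), then average over
    the n intervals. Higher values indicate stronger class-attribute
    interdependence. `boundaries` are the inner cut points."""
    cuts = sorted(boundaries)
    n = len(cuts) + 1
    counts = [Counter() for _ in range(n)]
    for y, v in zip(labels, values):
        idx = sum(v > c for c in cuts)   # index of the interval containing v
        counts[idx][y] += 1
    total = 0.0
    for c in counts:
        m = sum(c.values())
        if m:
            total += max(c.values()) ** 2 / m
    return total / n
```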

16.
Discretization Problems in Rough Set Theory   (Cited by: 74; self-citations: 1; others: 73)
1. Introduction. Data analysis and data mining constitute an important and rapidly developing research topic. The rough set theory proposed by the Polish scientist Z. Pawlak in 1982 is a new theory for addressing this problem: it can handle uncertain knowledge in decision information tables and express it in the form of rules, making it an effective tool for knowledge acquisition. In rough sets, data reduction is very important, including both attribute reduction and value reduction. When rough set theory is applied to a decision table, the values in the decision table are required to…

17.
《Knowledge》2007,20(4):419-425
Many classification algorithms require that training examples contain only discrete values. In order to use these algorithms when some attributes have continuous numeric values, the numeric attributes must be converted into discrete ones. This paper describes a new way of discretizing numeric values using information theory. Our method is context-sensitive in the sense that it takes into account the value of the target attribute. The amount of information each interval gives to the target attribute is measured using Hellinger divergence, and the interval boundaries are decided so that each interval contains as equal an amount of information as possible. In order to compare our discretization method with some current discretization methods, several popular classification data sets are selected for discretization. We use the naive Bayesian classifier and C4.5 as classification tools to compare the accuracy of our discretization method with that of other methods.
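One natural way to realize "information an interval gives to the target attribute" with a Hellinger measure is to compare the interval's class distribution against the overall class distribution. This is a sketch under that assumption; the abstract does not specify which two distributions are compared, so the `interval_information` pairing is illustrative:

```python
import math
from collections import Counter

def hellinger(p, q):
    """Hellinger distance between two discrete distributions given as
    dicts mapping class -> probability; ranges from 0 to 1."""
    keys = set(p) | set(q)
    return math.sqrt(sum((math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2
                         for k in keys)) / math.sqrt(2)

def interval_information(labels_in_interval, global_labels):
    """How far an interval's class distribution diverges from the overall
    class distribution: a proxy for the information it carries about the
    target attribute."""
    def dist(labels):
        n = len(labels)
        return {k: v / n for k, v in Counter(labels).items()}
    return hellinger(dist(labels_in_interval), dist(global_labels))
```

Boundaries would then be placed so that consecutive intervals score roughly equally under this measure.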

18.
We present a data mining method which integrates discretization, generalization and rough set feature selection. Our method reduces the data horizontally and vertically. In the first phase, discretization and generalization are integrated. Numeric attributes are discretized into a few intervals. The primitive values of symbolic attributes are replaced by high-level concepts and some obviously superfluous or irrelevant symbolic attributes are also eliminated. The horizontal reduction is done by merging identical tuples after substituting an attribute value by its higher-level value in a pre-defined concept hierarchy for symbolic attributes, or after the discretization of continuous (or numeric) attributes. This phase greatly decreases the number of tuples we consider further in the database(s). In the second phase, a novel context-sensitive feature merit measure is used to rank features, and a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing those attributes which are not in the relevant attribute subset, and the data set is further reduced vertically without changing the interdependence relationships between the classes and the attributes. Finally, the tuples in the reduced relation are transformed into different knowledge rules based on different knowledge discovery algorithms. Based on these principles, a prototype knowledge discovery system, DBROUGH-II, has been constructed by integrating discretization, generalization, rough set feature selection and a variety of data mining algorithms. Tests on a telecommunication customer data warehouse demonstrate that different kinds of knowledge rules, such as characteristic rules, discriminant rules, maximal generalized classification rules, and data evolution regularities, can be discovered efficiently and effectively.

19.
A Discretization Algorithm Based on Maximizing Class-Attribute Interdependence   (Cited by: 2; self-citations: 0; others: 2)
To address the difficulty that existing discretization algorithms struggle to balance computation speed and solution quality, this paper proposes a new supervised discretization algorithm that maximizes the interdependence between classes and attributes. The algorithm considers the spatial distribution of classes and attribute values, and constructs the discretization scheme from the intrinsic relationship between classes and attributes, so that the class-attribute interdependence is maximal after discretization. Experimental results show that the algorithm effectively improves classification accuracy and reduces the number of classification rules while maintaining computation speed.

20.
Discretization Methods for Continuous Attributes in Rough Set Theory   (Cited by: 95; self-citations: 0; others: 95)
Miao Duoqian. Acta Automatica Sinica (《自动化学报》), 2001, 27(3): 296-302
Rough set (RS) theory is a new mathematical tool for handling imprecise, incomplete, and inconsistent knowledge. Traditional RS theory can only process discrete attributes in a database, whereas the vast majority of real databases contain both discrete and continuous attributes. To address this deficiency of traditional RS theory, this paper uses feedback from the consistency of the decision table to propose a domain-independent discretization algorithm for continuous attributes based on dynamic hierarchical clustering. The method provides a unified framework within which RS theory can handle both discrete and continuous attributes, greatly broadening the range of application of RS theory. The algorithm is compared with existing methods on several examples, with encouraging results.
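The consistency feedback used by this algorithm reduces to a simple check on the decision table: no two objects may agree on all condition attributes yet disagree on the decision. A minimal sketch; the `(condition_tuple, decision)` representation is an assumption for illustration:

```python
def is_consistent(table):
    """A decision table is consistent if no two objects share all
    condition-attribute values but differ in decision value.
    `table` is a list of (condition_tuple, decision) pairs."""
    seen = {}
    for cond, dec in table:
        if cond in seen and seen[cond] != dec:
            return False   # same condition values, conflicting decisions
        seen[cond] = dec
    return True
```

A discretization step that makes this check fail is too coarse, and the clustering would be refined accordingly.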
