Similar Documents
20 similar documents found (search time: 78 ms)
1.
A discretization algorithm based on maximizing class-attribute association (Cited by 2: 0 self-citations, 2 by others)
Existing discretization algorithms struggle to balance computation speed and solution quality. To address this, a new supervised discretization algorithm based on maximizing class-attribute association is proposed. The algorithm takes the spatial distribution of class and attribute values into account and builds the discretization scheme from the intrinsic relationship between classes and attributes, so that the class-attribute association after discretization is maximal. Experimental results show that the algorithm effectively improves classification accuracy and reduces the number of classification rules while preserving computation speed.
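The abstract does not reproduce the criterion itself; as a hedged illustration, the Python sketch below implements a greedy top-down discretizer of the same family, scoring candidate schemes with the published CAIM-style interdependence formula (treating this as the paper's exact objective is an assumption; `x` is one continuous attribute, `y` an integer-coded class vector):

```python
import numpy as np

def caim_score(x, y, cuts):
    """CAIM-style class-attribute association: the average over
    intervals of max_r**2 / M_r, where max_r is the largest class
    count in interval r and M_r the interval's total count."""
    bins = np.digitize(x, cuts)
    n_intervals = len(cuts) + 1
    score = 0.0
    for r in range(n_intervals):
        counts = np.bincount(y[bins == r])
        if counts.sum() > 0:
            score += counts.max() ** 2 / counts.sum()
    return score / n_intervals

def greedy_discretize(x, y, max_intervals=6):
    """Greedily add the cut point that most improves the score."""
    candidates = np.unique(x)[1:]          # interior boundaries only
    cuts, best = [], caim_score(x, y, [])
    while len(cuts) + 1 < max_intervals:
        scored = [(caim_score(x, y, sorted(cuts + [c])), c)
                  for c in candidates if c not in cuts]
        if not scored:
            break
        top, c = max(scored)
        if top <= best:                    # stop when nothing improves
            break
        best, cuts = top, sorted(cuts + [c])
    return cuts
```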

2.
A Cramer's V based discretization algorithm for continuous attributes (Cited by 1: 0 self-citations, 1 by others)
郭启铭, 樊玮. 《计算机工程》, 2008, 34(4): 111-112
Building on class-attribute interdependence discretization methods, a Cramer's V based discretization algorithm for continuous attributes, CVM, is proposed. It uses the statistical measure Cramer's V to quantify class-attribute interdependence, ensuring that the interdependence after discretization is maximal. Experimental comparisons with the CADD and CAIM algorithms, together with C4.5 classification tests on the discretized data, show that CVM performs well and that its discretized data clearly improves the classifier's predictive accuracy.
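For reference, Cramer's V for a discretization scheme can be computed from the interval-by-class contingency table; a minimal scipy sketch (the CVM search loop around it is not reproduced; both inputs are assumed to be 0-based integer codes, with every interval and class occurring at least once):

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(intervals, classes):
    """Cramer's V between interval ids and class labels."""
    table = np.zeros((intervals.max() + 1, classes.max() + 1))
    np.add.at(table, (intervals, classes), 1)   # contingency counts
    chi2 = chi2_contingency(table)[0]
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (table.sum() * k))) if k > 0 else 0.0
```

A candidate cut scheme would be kept when it raises this value.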

3.
An improved CAIM algorithm (Cited by 1: 0 self-citations, 1 by others)
In the CAIM algorithm, the discretization criterion considers only the dependence between an interval's majority class and the attribute, which over-discretizes the data and makes the results imprecise. An improved CAIM algorithm is therefore proposed: attributes are discretized in ascending order of attribute significance, and a condition-attribute discernibility rate, derived from rough set theory, is introduced to control the final degree of discretization of the information table jointly with the approximation accuracy, effectively resolving the over-discretization problem. In experiments, C4.5 and support vector machines were used for recognition and classification prediction on the discretized data, and the results confirm the algorithm's effectiveness.

4.
Inductive learning systems can be effectively used to acquire classification knowledge from examples. Many existing symbolic learning algorithms can be applied in domains with continuous attributes when integrated with a discretization algorithm to transform the continuous attributes into ordered discrete ones. In this paper, a new information theoretic discretization method optimized for supervised learning is proposed and described. This approach seeks to maximize the mutual dependence as measured by the interdependence redundancy between the discrete intervals and the class labels, and can automatically determine the most preferred number of intervals for an inductive learning application. The method has been tested in a number of inductive learning examples to show that the class-dependent discretizer can significantly improve the classification performance of many existing learning algorithms in domains containing numeric attributes.
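The interdependence redundancy named here is commonly defined as mutual information normalized by joint entropy, R(C;D) = I(C;D)/H(C,D); a minimal numpy sketch under that reading (an assumption, since the abstract does not spell out the formula), taking an intervals-by-classes count table:

```python
import numpy as np

def interdependence_redundancy(joint_counts):
    """R(C;D) = I(C;D) / H(C,D): mutual information between the class
    and the discretized attribute, normalized by their joint entropy."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=1, keepdims=True)   # interval marginals
    py = p.sum(axis=0, keepdims=True)   # class marginals
    nz = p > 0                          # skip empty cells in the logs
    mi = float((p[nz] * np.log(p[nz] / (px * py)[nz])).sum())
    h = float(-(p[nz] * np.log(p[nz])).sum())
    return mi / h if h > 0 else 0.0
```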

5.
Class-attribute interdependence maximization (CAIM) is one of the state-of-the-art algorithms for discretizing data for which classes are known. However, it may take a long time when run on high-dimensional large-scale data, with a large number of attributes and/or instances. This paper presents a solution to this problem by introducing a graphic processing unit (GPU)-based implementation of the CAIM algorithm that significantly speeds up the discretization process on big complex data sets. The GPU-based implementation is scalable to multiple GPU devices and enables the use of the concurrent kernel execution capabilities of modern GPUs. The CAIM GPU-based model is evaluated and compared with the original CAIM using single and multi-threaded parallel configurations on 40 data sets with different characteristics. The results show great speedup, up to 139 times faster using four GPUs, which makes discretization of big data efficient and manageable. For example, the discretization time of one big data set is reduced from 2 h to less than 2 min.
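The GPU kernels are not shown in the abstract, but the data-parallel core is that candidate boundaries can be scored independently. A hedged CPU analogue with `multiprocessing`, reusing the `caim_score` sketch from item 1 above (`score_candidate` is a hypothetical stand-in for one kernel launch, not the paper's code):

```python
from functools import partial
from multiprocessing import Pool

def score_candidate(c, x, y, cuts):
    # hypothetical stand-in for one GPU kernel: score a single cut
    return caim_score(x, y, sorted(cuts + [c])), c

def best_candidate_parallel(x, y, cuts, candidates, workers=4):
    """Score all candidate boundaries concurrently, one plausible
    reading of the paper's concurrent-kernel parallelization."""
    with Pool(workers) as pool:  # call under `if __name__ == "__main__":`
        scored = pool.map(partial(score_candidate, x=x, y=y, cuts=cuts),
                          candidates)
    return max(scored)
```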

6.
We present a data mining method which integrates discretization, generalization and rough set feature selection. Our method reduces the data horizontally and vertically. In the first phase, discretization and generalization are integrated. Numeric attributes are discretized into a few intervals. The primitive values of symbolic attributes are replaced by high level concepts and some obvious superfluous or irrelevant symbolic attributes are also eliminated. The horizontal reduction is done by merging identical tuples after substituting an attribute value by its higher level value in a pre-defined concept hierarchy for symbolic attributes, or the discretization of continuous (or numeric) attributes. This phase greatly decreases the number of tuples we consider further in the database(s). In the second phase, a novel context-sensitive feature merit measure is used to rank features, a subset of relevant attributes is chosen, based on rough set theory and the merit values of the features. A reduced table is obtained by removing those attributes which are not in the relevant attributes subset and the data set is further reduced vertically without changing the interdependence relationships between the classes and the attributes. Finally, the tuples in the reduced relation are transformed into different knowledge rules based on different knowledge discovery algorithms. Based on these principles, a prototype knowledge discovery system DBROUGH-II has been constructed by integrating discretization, generalization, rough set feature selection and a variety of data mining algorithms. Tests on a telecommunication customer data warehouse demonstrate that different kinds of knowledge rules, such as characteristic rules, discriminant rules, maximal generalized classification rules, and data evolution regularities, can be discovered efficiently and effectively.
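As a hedged illustration of the first phase, the pandas sketch below generalizes one symbolic attribute up a toy concept hierarchy, bins one numeric attribute, and merges the now-identical tuples (all column names and hierarchy values are assumptions, loosely echoing the telecom setting):

```python
import pandas as pd

# toy concept hierarchy for one symbolic attribute (values are assumed)
CITY_TO_REGION = {"beijing": "north", "tianjin": "north",
                  "guangzhou": "south", "shenzhen": "south"}

def horizontal_reduce(df, symbolic="city", numeric="monthly_usage"):
    """Climb the concept hierarchy, bin the numeric attribute, then
    merge the now-identical tuples, keeping a support count."""
    out = df.copy()
    out[symbolic] = out[symbolic].map(CITY_TO_REGION)
    out[numeric] = pd.cut(out[numeric], bins=3)
    return (out.groupby(list(out.columns), observed=True)
               .size().reset_index(name="support"))
```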

7.
周世昊, 倪衍森. 《控制与决策》, 2011, 26(10): 1504-1510
Discretization of continuous attributes plays an important role in data mining, machine learning, and artificial intelligence. A heuristic discretization technique based on class-attribute interdependence is therefore proposed. It defines a new discretization criterion that selects the best cut points according to the characteristics of the data itself, overcoming deficiencies of current state-of-the-art top-down discretization methods. Based on the variable precision rough set model from rough set theory, a new inconsistency measure is proposed that effectively controls the information loss caused by discretization while tolerating an appropriate degree of classification error. Experimental results and statistical analysis show that the proposed technique significantly improves the learning accuracy of J4.8 decision trees and SVM classifiers.
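The abstract names a variable-precision inconsistency measure without giving its formula; a plausible sketch, assuming the usual β-positive-region reading of the variable precision rough set model:

```python
from collections import Counter, defaultdict

def vprs_inconsistency(discrete_rows, labels, beta=0.9):
    """Fraction of records outside the beta-positive region: a group
    of identical discretized rows counts as consistent when its
    majority class covers at least beta of the group."""
    groups = defaultdict(list)
    for row, c in zip(map(tuple, discrete_rows), labels):
        groups[row].append(c)
    ok = sum(len(cs) for cs in groups.values()
             if max(Counter(cs).values()) / len(cs) >= beta)
    return 1.0 - ok / len(labels)
```

With beta = 1.0 this reduces to the classical (strict) inconsistency rate; lowering beta tolerates the "appropriate degree of classification error" the abstract mentions.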

8.
A fuzzy approach to partitioning continuous attributes for classification (Cited by 1: 0 self-citations, 1 by others)
Classification is an important topic in data mining research. To better handle continuous data, fuzzy sets are used to represent interval events in the domains of continuous attributes, allowing continuous data lying on the interval boundaries to partially belong to multiple intervals. Since the membership functions of fuzzy sets can profoundly affect the performance of the models or rules discovered, the determination of membership functions or fuzzy partitioning is crucial. In this paper, we present a new method to determine the membership functions of fuzzy sets directly from data to maximize the class-attribute interdependence and, hence, improve the classification results. In other words, it forms a fuzzy partition of the input space automatically, using an information-theoretic measure to evaluate the interdependence between the class membership and an attribute as the objective function for fuzzy partitioning. To find the optimum of the measure, it employs fractional programming. To evaluate the effectiveness of the proposed method, several real-world data sets are used in our experiments. The experimental results show that this method outperforms other well-known discretization and fuzzy partitioning approaches.
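The paper determines the membership functions by fractional programming; that optimizer is not sketched here. What follows is only the representation being optimized, a triangular fuzzy partition in which a boundary value partially belongs to two intervals (the triangular shape is an assumption for illustration):

```python
import numpy as np

def triangular_memberships(x, centers):
    """Membership of each value of x in fuzzy sets centered at
    `centers`; a value between two centers belongs partly to both,
    and memberships of adjacent sets sum to 1 between their centers."""
    x = np.asarray(x, dtype=float)
    c = np.sort(np.asarray(centers, dtype=float))
    mu = np.zeros((len(x), len(c)))
    for j in range(len(c)):
        left = c[j - 1] if j > 0 else c[j] - 1.0      # simplified edges
        right = c[j + 1] if j + 1 < len(c) else c[j] + 1.0
        mu[:, j] = np.clip(np.minimum((x - left) / (c[j] - left),
                                      (right - x) / (right - c[j])),
                           0.0, 1.0)
    return mu
```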

9.
Discretization techniques have played an important role in machine learning and data mining as most methods in such areas require that the training data set contains only discrete attributes. Data discretization unification (DDU), one of the state-of-the-art discretization techniques, trades off classification errors and the number of discretized intervals, and unifies existing discretization criteria. However, it suffers from two deficiencies. First, the efficiency of DDU is very low, as it searches over a large number of parameters to find good results, which still does not guarantee an optimal solution. Second, DDU does not take into account the number of inconsistent records produced by discretization, which leads to unnecessary information loss. To overcome these deficiencies, this paper presents a Universal Discretization technique, namely UniDis. We first develop a non-parametric normalized discretization criterion which avoids the effect of the relatively large difference between classification errors and the number of discretized intervals on discretization results. In addition, we define a new entropy-based measure of inconsistency for multi-dimensional variables to effectively control information loss while producing a concise summarization of continuous variables. Finally, we propose a heuristic algorithm to guarantee better discretization based on the non-parametric normalized discretization criterion and the entropy-based inconsistency. Besides theoretical analysis, experimental results demonstrate that our approach is statistically comparable to DDU as evaluated by a popular statistical test, and that, by running the J4.8 decision tree and Naive Bayes classifiers, it yields a better discretization scheme which improves classification accuracy significantly over all previously known discretization methods except DDU.

10.
C4.5 is a highly influential decision tree induction algorithm, but the trees it generates suffer from limited classification accuracy, many branches, and large size. To address these problems, this paper proposes an improved C4.5 algorithm based on rough set theory and the CAIM criterion. Continuous attributes are discretized with a CAIM-based method, reducing the information loss incurred during discretization and improving classification accuracy. The discretized samples are then reduced with a rough set based attribute reduction method, removing redundant attributes and shrinking the resulting decision tree. Experiments confirm that the algorithm effectively improves the classification accuracy of C4.5 decision trees while reducing their size.

11.
We present a method to learn maximal generalized decision rules from databases by integrating discretization, generalization and rough set feature selection. Our method reduces the data horizontally and vertically. In the first phase, discretization and generalization are integrated and the numeric attributes are discretized into a few intervals. The primitive values of symbolic attributes are replaced by high level concepts and some obvious superfluous or irrelevant symbolic attributes are also eliminated. Horizontal reduction is accomplished by merging identical tuples after the substitution of an attribute value by its higher level value in a pre-defined concept hierarchy for symbolic attributes, or the discretization of continuous (or numeric) attributes. This phase greatly decreases the number of tuples in the database. In the second phase, a novel context-sensitive feature merit measure is used to rank the features, a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing those attributes which are not in the relevant attributes subset and the data set is further reduced vertically without destroying the interdependence relationships between classes and the attributes. Then rough set-based value reduction is further performed on the reduced table and all redundant condition values are dropped. Finally, tuples in the reduced table are transformed into a set of maximal generalized decision rules. The experimental results on UCI data sets and a real market database demonstrate that our method can dramatically reduce the feature space and improve learning accuracy.

12.
An unsupervised discretization algorithm based on a mixture probability model (Cited by 10: 0 self-citations, 10 by others)
李刚. 《计算机学报》, 2002, 25(2): 158-164
Real-world applications often involve many continuous numeric attributes, yet many current machine learning algorithms require the attributes they process to take discrete values. Depending on whether the values of the associated class attribute are considered when discretizing a numeric attribute, discretization algorithms fall into two categories: supervised and unsupervised. Based on a mixture probability model, this paper proposes a theoretically rigorous unsupervised discretization algorithm that, without prior knowledge and without a class attribute, partitions the domain of a numeric attribute into subintervals and then uses the Bayesian information criterion to automatically find the best number of subintervals and the best partition.
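A minimal sklearn sketch of this recipe, assuming Gaussian mixture components (the abstract says only "mixture probability model", so the component family is an assumption): fit mixtures of increasing size, keep the BIC-best one, and cut where the most probable component changes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_discretize(values, max_k=8, seed=0):
    """BIC-selected mixture model -> cut points between components."""
    X = np.asarray(values, dtype=float).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(X)
            for k in range(1, max_k + 1)]
    best = min(fits, key=lambda m: m.bic(X))       # BIC model selection
    grid = np.linspace(X.min(), X.max(), 1000).reshape(-1, 1)
    comp = best.predict(grid)
    return grid[1:][comp[1:] != comp[:-1]].ravel() # component switches
```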

13.
Relief is a measure of attribute quality which is often used for feature subset selection. Its use in induction of classification trees and rules, discretization, and other methods has, however, been hindered by its inability to suggest subsets of values of discrete attributes and thresholds for splitting continuous attributes into intervals. We present efficient algorithms for both tasks.
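For context, the sketch below is the basic two-class Relief weight estimate that such extensions build on; the paper's value-subset and threshold algorithms themselves are not reproduced:

```python
import numpy as np

def relief_weights(X, y, n_iter=200, seed=0):
    """Basic two-class Relief: reward attributes that differ on the
    nearest miss and agree on the nearest hit of a sampled instance."""
    rng = np.random.default_rng(seed)
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)  # scale to [0,1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)
        d[i] = np.inf                      # never pick the instance itself
        hit = np.where(y == y[i], d, np.inf).argmin()
        miss = np.where(y != y[i], d, np.inf).argmin()
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```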

14.
In this paper we propose a new static, global, supervised, incremental and bottom-up discretization algorithm based on the coefficient of dispersion and the skewness of the data range. It automates the discretization process by determining the number of intervals and the stopping criterion. The results show that the discretization schemes generated by the algorithm almost always have the minimum number of intervals and require the smallest discretization time. A feedforward neural network with a conjugate gradient training algorithm is used to compute the classification accuracy on the data discretized by this algorithm. The efficiency of the proposed algorithm is demonstrated, in terms of better discretization schemes and better classification accuracy, by implementing it on six different real data sets.

15.
To address continuous-attribute discretization in data mining and machine learning, an improved adaptive discrete particle swarm optimization algorithm is proposed. The set of cut points of a continuous attribute is encoded as a discrete particle swarm, and interactions among particles minimize the cut-point subset; simulated annealing is introduced as a local search strategy, improving the swarm's diversity and its ability to find the global optimum. The dependence of decision attributes on condition attributes, from rough set theory, is used to measure the consistency of the decision table, thereby accomplishing the discretization of continuous attributes. Finally, the algorithm's performance is tested on multiple data sets and compared with other algorithms; the results show that it is effective.

16.
徐盈盈, 钟才明. 《计算机应用》, 2014, 34(8): 2184-2187
Some pattern recognition and machine learning algorithms can only handle discrete attribute values, while much real-world data has continuous attributes; an unsupervised method is proposed for this discretization problem. First, K-means partitions the data set to obtain class information; then a supervised discretization method is applied to the partitioned data. This process is repeated to obtain multiple discretization results, which are then ensembled. Finally, the minimal subintervals produced by the ensemble are merged, with the dimension and adjacent intervals to merge first chosen according to the neighbor relationships among the data. The number of subintervals is determined automatically from these nearest-neighbor relationships, preserving the intrinsic structure of the data as much as possible. The discretized data is fed to clustering algorithms such as spectral clustering, and the clustering quality is evaluated. Experimental results show that the algorithm improves clustering accuracy by about 33% on average over four other methods, demonstrating its feasibility and effectiveness. The discretized data it produces can also be used in data mining algorithms such as the ID3 decision tree.
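A hedged sketch of the generate-and-pool phase (the neighbor-based merging step is omitted, and depth-limited decision-tree cut points stand in for "a supervised discretization method", which is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def ensemble_cuts(X, col, n_runs=5, k=3, max_leaf_nodes=6):
    """Each run: K-means pseudo-labels, then a small tree on one
    attribute yields supervised cut points; pool cuts across runs."""
    cuts = []
    for seed in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(X)
        tree = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes,
                                      random_state=seed)
        tree.fit(X[:, [col]], labels)
        t = tree.tree_
        cuts.extend(t.threshold[t.children_left != -1])  # internal nodes
    return np.unique(np.round(cuts, 6))
```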

17.
A discretization algorithm based on a heterogeneity criterion (Cited by 5: 0 self-citations, 5 by others)
Discretization, as a preprocessing step for data mining, is a process of converting the continuous attributes of a data set into discrete ones so that they can be treated as nominal features by machine learning algorithms. The various discretization methods that use entropy-based criteria form a large class of algorithms. However, as a measure of class homogeneity, entropy cannot always accurately reflect the degree of class homogeneity of an interval. Therefore, in this paper, we propose a new measure of the class heterogeneity of intervals from the viewpoint of class probability itself. Based on this definition of heterogeneity, we present a new criterion to evaluate a discretization scheme and analyze its properties theoretically. A heuristic method is also proposed to find an approximately optimal discretization scheme. Finally, our method is compared, in terms of predictive error rate and tree size, with Ent-MDLC, a representative entropy-based discretization method well known for its good performance. Our method produces better results than Ent-MDLC, although the improvement is not significant; it can be a good alternative to entropy-based discretization methods.
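The paper's heterogeneity measure is not given in the abstract; as a stand-in only, Gini impurity illustrates the kind of per-interval, class-probability-based score such a search would minimize:

```python
import numpy as np

def interval_heterogeneity(class_counts):
    """Probability-based heterogeneity of one interval (Gini-style
    stand-in): 0 for a pure interval, larger as classes mix evenly."""
    p = np.asarray(class_counts, dtype=float)
    p /= p.sum()
    return 1.0 - float((p ** 2).sum())
```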

18.
Toward unsupervised correlation preserving discretization (Cited by 2: 0 self-citations, 2 by others)
Discretization is a crucial preprocessing technique used for a variety of data warehousing and mining tasks. In this paper, we present a novel PCA-based unsupervised algorithm for the discretization of continuous attributes in multivariate data sets. The algorithm leverages the underlying correlation structure in the data set to obtain the discrete intervals and ensures that the inherent correlations are preserved. Previous efforts on this problem are largely supervised and consider only piecewise correlation among attributes. We consider the correlation among continuous attributes and, at the same time, also take into account the interactions between continuous and categorical attributes. Our approach also extends easily to data sets containing missing values. We demonstrate the efficacy of the approach on real data sets and as a preprocessing step for both classification and frequent itemset mining tasks. We show that the intervals are meaningful and can uncover hidden patterns in data. We also show that large compression factors can be obtained on the discretized data sets. The approach is task independent, i.e., the same discretized data set can be used for different data mining tasks. Thus, the data sets can be discretized, compressed, and stored once and can be used again and again.
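A minimal sketch of the correlation-preserving core under simplifying assumptions (categorical interactions and missing values are not handled, and equal-frequency binning in the decorrelated PC space is an assumed choice, not the paper's exact procedure):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_discretize(X, n_bins=4):
    """Decorrelate with PCA, then equal-frequency bin each component;
    the bin ids serve as the discrete codes for each record."""
    Z = PCA().fit_transform(np.asarray(X, dtype=float))
    codes = np.empty(Z.shape, dtype=int)
    for j in range(Z.shape[1]):
        edges = np.quantile(Z[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        codes[:, j] = np.digitize(Z[:, j], edges)
    return codes
```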

19.
A new method for discretization of continuous attributes (Cited by 6: 0 self-citations, 6 by others)
A discretization method for continuous attributes is proposed that is based on clustering and incorporates rough set theory. Rough set theory includes an important concept, attribute significance, which is commonly used as the heuristic evaluation function for generating good reducts. Inspired by this, attribute significance can be used for attribute selection in discretization: the attribute with the highest significance is selected from the already-discretized attributes, and clustering is then performed on it together with the continuous attribute to be discretized, yielding the discrete intervals of that continuous attribute. The paper gives an algorithmic description of the method and compares it experimentally with other algorithms. Because the method incorporates rough set ideas and accounts for interactions between attributes during discretization, it produces more reasonable cut points and improves the classification accuracy of the resulting rules.
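Attribute significance in rough set theory is commonly the gain in the dependency degree γ when an attribute is added to a set; a small sketch under that standard reading (which the abstract appears to use, though it does not state the formula):

```python
from collections import defaultdict

def gamma(rows, labels):
    """Rough-set dependency degree: the fraction of objects whose
    equivalence class (identical rows) is pure in the decision."""
    seen, sizes = defaultdict(set), defaultdict(int)
    for row, c in zip(map(tuple, rows), labels):
        seen[row].add(c)
        sizes[row] += 1
    pure = sum(sizes[g] for g, cs in seen.items() if len(cs) == 1)
    return pure / len(labels)

def significance(table, base_cols, cand_col, labels):
    """How much adding `cand_col` raises the dependency degree."""
    base = [[r[i] for i in base_cols] for r in table]
    ext = [[r[i] for i in base_cols + [cand_col]] for r in table]
    return gamma(ext, labels) - gamma(base, labels)
```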

20.
With the rapid development of data mining and knowledge discovery techniques, many discretization algorithms have appeared. Most existing methods, however, target continuous attribute values at fixed points, whereas interval-valued continuous attributes are common in practice. A new method for discretizing interval-valued continuous attributes is therefore proposed. The similarity of interval numbers is used to describe the similarity between objects, and a similarity threshold defines the discretization relation, realizing the discretization of interval data. After analyzing the role of similarity in the algorithm, a new variable, the degree of association, is introduced to improve it. The algorithm's performance is tested on multiple data sets and compared with other algorithms; the results show that it is effective.
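The abstract does not give the similarity formula; overlap length over union length is one plausible instantiation, used below purely as an assumption (the "degree of association" refinement is not modeled):

```python
def interval_similarity(a, b):
    """Similarity of interval numbers [a1,a2] and [b1,b2] as overlap
    length over union length (0 when disjoint, 1 when identical)."""
    (a1, a2), (b1, b2) = a, b
    overlap = max(0.0, min(a2, b2) - max(a1, b1))
    union = max(a2, b2) - min(a1, b1)
    return overlap / union if union > 0 else 1.0   # identical points

# objects whose pairwise similarity clears a chosen threshold would
# be assigned the same discrete value
print(interval_similarity((1.0, 3.0), (2.0, 4.0)))  # 0.333...
```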
