Similar Literature
20 similar documents found
1.
An information-theoretic approach to discretizing continuous attributes is presented. The Hellinger divergence (HD) is introduced to measure the amount of decision-relevant information carried by each interval, and the information entropy of candidate cut points is defined accordingly; the final discretization aims to spread the information as evenly as possible across the intervals. The role of the HD measure in two families of discretization methods is analyzed, showing that it works well in splitting (top-down) algorithms but has limitations in merging (bottom-up) algorithms.
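A minimal sketch of the cut-point scoring idea described above. Taking the smaller of the two sides' Hellinger divergences from the overall class distribution (so that information is spread evenly across intervals) is our illustrative reading, not the paper's exact formula:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete class distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def class_dist(labels, classes):
    """Normalized class-frequency vector for a set of labels."""
    counts = np.array([np.sum(labels == c) for c in classes], dtype=float)
    return counts / max(counts.sum(), 1.0)

def best_cut(values, labels):
    """Pick the cut whose two sides both diverge strongly (and evenly)
    from the overall class distribution, scored by Hellinger divergence."""
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    classes = np.unique(labels)
    overall = class_dist(labels, classes)
    best_score, best_c = -1.0, None
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue  # no valid cut between equal values
        c = (values[i] + values[i - 1]) / 2.0
        left = hellinger(class_dist(labels[:i], classes), overall)
        right = hellinger(class_dist(labels[i:], classes), overall)
        score = min(left, right)  # favour evenly informative sides
        if score > best_score:
            best_score, best_c = score, c
    return best_c
```

On perfectly separable data the scoring picks the boundary between the two class regions.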

2.
In data mining, many datasets are described by both discrete and numeric attributes. Most Ant Colony Optimization based classifiers can handle only discrete attributes and require a pre-processing discretization step for numeric ones. We propose an adaptation of AntMiner+ for rule mining that handles numeric attributes intrinsically. We describe the new approach and compare it to existing algorithms. The proposed method achieves results comparable to existing methods on UCI datasets, but has advantages on datasets with strong interactions between numeric attributes. We analyse the effect of parameters on classification accuracy and propose sensible defaults. We also describe an application of the new method to a real-world medical domain, where it achieves results comparable to the existing method.

3.
Relief is a measure of attribute quality which is often used for feature subset selection. Its use in induction of classification trees and rules, discretization, and other methods has however been hindered by its inability to suggest subsets of values of discrete attributes and thresholds for splitting continuous attributes into intervals. We present efficient algorithms for both tasks.

4.
Inductive learning systems can be effectively used to acquire classification knowledge from examples. Many existing symbolic learning algorithms can be applied in domains with continuous attributes when integrated with a discretization algorithm to transform the continuous attributes into ordered discrete ones. In this paper, a new information theoretic discretization method optimized for supervised learning is proposed and described. This approach seeks to maximize the mutual dependence, as measured by the interdependence redundancy between the discrete intervals and the class labels, and can automatically determine the most preferred number of intervals for an inductive learning application. The method has been tested in a number of inductive learning examples to show that the class-dependent discretizer can significantly improve the classification performance of many existing learning algorithms in domains containing numeric attributes.
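The interval-count selection described above can be sketched as follows. Normalizing mutual information by the joint entropy and generating candidates with equal-frequency binning are simplifying assumptions for illustration, not necessarily the authors' exact procedure:

```python
import numpy as np
from collections import Counter

def mutual_information(bins, y):
    """I(A;C) in nats from empirical frequencies."""
    mi = 0.0
    for a in np.unique(bins):
        for c in np.unique(y):
            pxy = np.mean((bins == a) & (y == c))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(bins == a) * np.mean(y == c)))
    return mi

def joint_entropy(bins, y):
    """H(A,C) in nats."""
    p = np.array(list(Counter(zip(bins.tolist(), y.tolist())).values())) / len(y)
    return float(-np.sum(p * np.log(p)))

def interdependence_redundancy(x, y, k):
    """R(A;C) = I(A;C) / H(A,C) after equal-frequency binning into k intervals."""
    cuts = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    bins = np.digitize(x, cuts)
    return mutual_information(bins, y) / joint_entropy(bins, y)

def best_interval_count(x, y, k_max=6):
    """Most preferred number of intervals: the k maximizing R(A;C)."""
    return max(range(2, k_max + 1),
               key=lambda k: interdependence_redundancy(x, y, k))
```

When the class is determined by a single threshold, two intervals already capture the full dependence, so the measure favours k = 2 over finer binnings.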

5.
6.
An unsupervised discretization algorithm based on mixture probability models (cited by: 10; self-citations: 0; cited by others: 10)
李刚 《计算机学报》2002,25(2):158-164
Real-world applications often involve many continuous numeric attributes, yet many current machine learning algorithms require the attributes they process to take discrete values. Depending on whether the values of an associated class attribute are considered during the discretization of numeric attributes, discretization algorithms fall into two categories: supervised and unsupervised. Based on a mixture probability model, this paper proposes a theoretically rigorous unsupervised discretization algorithm that, with no prior knowledge and no class attribute, partitions the value range of a numeric attribute into subintervals and then uses the Bayesian information criterion to automatically find the best number of subintervals and the best partition.
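A toy version of the approach: fit 1-D Gaussian mixtures with EM for several component counts and keep the count with the lowest BIC. Quantile-based initialization and the simple parameter count are our choices for a deterministic sketch, not details from the paper:

```python
import numpy as np

def fit_gmm_1d(x, k, iters=300):
    """EM for a 1-D Gaussian mixture; returns (weights, means, variances, loglik).
    Means are initialized at evenly spaced quantiles for determinism."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var() / k + 1e-4)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        num = w * dens
        r = num / (num.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    ll = np.log((w * dens).sum(axis=1) + 1e-300).sum()
    return w, mu, var, ll

def bic_score(ll, k, n):
    p = 3 * k - 1  # free parameters: (k-1) weights + k means + k variances
    return p * np.log(n) - 2 * ll

def best_component_count(x, k_max=4):
    """Pick the number of mixture components (candidate intervals) by BIC."""
    return min(range(1, k_max + 1),
               key=lambda k: bic_score(fit_gmm_1d(x, k)[3], k, len(x)))
```

Interval boundaries can then be placed at midpoints between adjacent component means.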

7.

The effective extraction of continuous features from ocean optical remote sensing images is the key to automatic detection and identification of marine vessel targets. Since many existing data mining algorithms can only deal with discrete attributes, continuous features must be transformed into discrete ones to suit these intelligent algorithms. However, most current discretization methods do not consider the mutual exclusion within the attribute set when selecting breakpoints and cannot guarantee that the indiscernibility relation of the information system is preserved, so they are not suitable for processing ocean optical remote sensing data with multiple features. To address this problem, a multivariable optical remote sensing image feature discretization method for marine vessel target recognition is presented in this paper. Firstly, an information-equivalent model of the remote sensing image is established based on information entropy and rough set theory. Secondly, the extent to which discretization changes the indiscernibility relation in the model is evaluated. Thirdly, each band is scanned repeatedly until a termination condition is satisfied, generating the optimal number of intervals. Finally, we carry out a simulation analysis of high-resolution remote sensing image data collected near the coast of the South China Sea, and compare the proposed method with current mainstream discretization algorithms. Experiments confirm that the proposed method has better overall performance in terms of interval number, data consistency, running time, prediction accuracy and recognition rate.


8.
Classification is one of the most important tasks in machine learning, with a huge number of real-life applications. In many practical classification problems the available information is partial or incomplete, because attribute values can be missing for various reasons, and these missing values can significantly degrade the classification model. It is therefore crucial to develop effective techniques for imputing them. A number of methods have been introduced for classification with missing values, but each has its problems: existing correlation-based imputation methods work well only for categorical or only for numeric data, or are designed for one particular application, and they fail when every record has at least one missing attribute. We introduce an effective method, Model-based Missing value Imputation using Correlation (MMIC), which exploits the correlation among attributes and can impute both categorical and numeric data. It fills the missing values attribute-wise using a model-based technique and reuses the fitted models effectively. Extensive performance analyses show that the proposed approach achieves high imputation performance and thus increases the efficacy of the classifier; the experimental results also show that it outperforms various existing methods for handling missing data in classification.
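A much simplified stand-in for MMIC's idea of exploiting attribute correlation: each incomplete numeric column is predicted by simple linear regression on its most correlated complete column (the real method also handles categorical data and reuses fitted models in further ways):

```python
import numpy as np

def impute_by_correlation(X):
    """Impute NaNs in a numeric matrix: each incomplete column is predicted
    by simple linear regression on its most-correlated complete column."""
    X = X.astype(float).copy()
    n, m = X.shape
    for j in range(m):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        best, best_r = None, 0.0
        for k in range(m):
            if k == j or np.isnan(X[:, k]).any():
                continue
            r = np.corrcoef(X[~miss, j], X[~miss, k])[0, 1]
            if abs(r) > abs(best_r):
                best, best_r = k, r
        if best is None:
            X[miss, j] = np.nanmean(X[:, j])  # fallback: column mean
            continue
        # least-squares fit y = a*x + b on the observed rows
        a, b = np.polyfit(X[~miss, best], X[~miss, j], 1)
        X[miss, j] = a * X[miss, best] + b
    return X
```

When one column is (nearly) a linear function of another, the regression recovers the missing entry from the correlated column rather than from a global mean.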

9.
董跃华  刘力 《计算机应用》2016,36(1):188-193
To address the problem that classical rough set theory can handle only discrete attributes, a discretization algorithm based on adaptive hybrid particle swarm optimization (AHPSO) is proposed. First, an adaptive adjustment strategy is introduced to overcome the particle swarm's tendency to fall into local optima, improving its global search ability. Then, tabu search (TS) is applied to the global best particle of each generation to obtain that generation's best global optimum, strengthening the swarm's local search ability. Finally, while keeping the classification ability of the decision table unchanged, the candidate discretization cut points are initialized as the particle population, and the best cut points are obtained through the interaction of the particles. Using the J48 decision-tree classifier on the WEKA platform, the algorithm improves classification accuracy by 10%-20% over discretization algorithms based on attribute significance and on information entropy, and by 2%-5% over discretization algorithms based on niche discrete particle swarm optimization (NDPSO) and on PSO with linearly decreasing parameters. The experimental results show that the algorithm markedly improves the classification accuracy of J48 and also performs well at discretizing data.

10.
As an important element of intelligent data processing in aquatic-product safety information systems, discretization of continuous attributes has become a hot and difficult topic in that research field. This paper applies a discretization method based on relative entropy from rough set theory. The method uses the class information entropy of candidate intervals as the discretization threshold boundary, examines the classification ability of each attribute value, merges discrete intervals, removes redundant cut points, and determines the key discrete attribute values, finally achieving discretization of continuous attributes in the aquatic-product safety information system. A worked example shows the algorithm to be effective and feasible.

11.
Combining the rough set notions of attribute significance and dependency with a hierarchical clustering discretization algorithm, a dynamic discretization algorithm for taxpayers' continuous attributes is proposed. Each continuous attribute of the tax data objects is first split into two classes; the significance of each condition attribute with respect to the decision attribute is then computed with rough set theory, and classes are incrementally added in order of decreasing significance; finally, the classification that keeps the dependency consistent with that of the original data set is output. The algorithm partitions data objects into classes dynamically, discretizing taxpayers' continuous attributes. Experiments using expert analysis and association analysis confirm that the algorithm discretizes taxpayers' continuous attributes with high accuracy and good performance.

12.
We present a data mining method which integrates discretization, generalization and rough set feature selection. Our method reduces the data both horizontally and vertically. In the first phase, discretization and generalization are integrated: numeric attributes are discretized into a few intervals, the primitive values of symbolic attributes are replaced by high-level concepts, and some obviously superfluous or irrelevant symbolic attributes are eliminated. The horizontal reduction is done by merging identical tuples after substituting an attribute value by its higher-level value in a pre-defined concept hierarchy for symbolic attributes, or after the discretization of continuous (or numeric) attributes. This phase greatly decreases the number of tuples we consider further in the database. In the second phase, a novel context-sensitive feature merit measure is used to rank features, and a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing those attributes which are not in the relevant subset, so the data set is further reduced vertically without changing the interdependence relationships between the classes and the attributes. Finally, the tuples in the reduced relation are transformed into different knowledge rules by different knowledge discovery algorithms. Based on these principles, a prototype knowledge discovery system, DBROUGH-II, has been constructed by integrating discretization, generalization, rough set feature selection and a variety of data mining algorithms. Tests on a telecommunication customer data warehouse demonstrate that different kinds of knowledge rules, such as characteristic rules, discriminant rules, maximal generalized classification rules, and data evolution regularities, can be discovered efficiently and effectively.
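The horizontal reduction step can be illustrated with a toy table; the `city`/`income` attributes and the concept hierarchy here are hypothetical examples, not data from DBROUGH-II:

```python
from collections import Counter

def generalize(row, hierarchy, cuts):
    """Replace a symbolic value by its higher-level concept and bin the numeric one."""
    city, income = row
    region = hierarchy.get(city, city)         # concept hierarchy lookup
    bracket = sum(income >= c for c in cuts)   # discretized interval index
    return (region, bracket)

def horizontal_reduction(rows, hierarchy, cuts):
    """After generalization many tuples become identical; merging them
    (with a support count) reduces the table horizontally."""
    return Counter(generalize(r, hierarchy, cuts) for r in rows)
```

Three raw tuples can collapse into two generalized tuples with counts, which is exactly the tuple-merging effect described above.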

13.
Instance-based attribute identification in database integration (cited by: 3; self-citations: 0; cited by others: 3)
Most research on attribute identification in database integration has focused on integrating attributes using schema and summary information derived from the attribute values. No research has attempted to fully explore the use of attribute values to perform attribute identification. We propose an attribute identification method that employs schema and summary instance information as well as properties of attributes derived from their instances. Unlike other attribute identification methods that match only single attributes, our method matches attribute groups for integration. Because our attribute identification method fully explores data instances, it can identify corresponding attributes to be integrated even when schema information is misleading. Three experiments were performed to validate our attribute identification method. In the first experiment, the heuristic rules derived for attribute classification were evaluated on 119 attributes from nine public domain data sets. The second was a controlled experiment validating the robustness of the proposed attribute identification method by introducing erroneous data. The third experiment evaluated the proposed attribute identification method on five data sets extracted from online music stores. The results demonstrated the viability of the proposed method. Received: 30 August 2001; Accepted: 31 August 2002; Published online: 31 July 2003. Edited by L. Raschid.

14.
Discretization of continuous attributes is an important step in knowledge systems: a good discretization method simplifies the description of knowledge and makes the knowledge system easier to process, yet finding the optimal set of cut points for a continuous attribute is an NP-hard problem. A fuzzy discretization method for continuous attributes, Norm-FD, is proposed. A normal-distribution-based discretization algorithm (Norm-D) exploits the characteristics of the normal distribution to split the values into the required number of discrete intervals, and the F-Inter algorithm then converts each concrete attribute value, according to the value and its relation to neighbouring intervals, into three parameters: a membership degree, an interval number, and a bias coefficient.

15.
16.
Attribute discretization for decision systems based on binary particle swarm optimization (cited by: 1; self-citations: 0; cited by others: 1)
To solve the problem that continuous attributes cannot be used directly in rough set theory, and following the fundamental requirements of rough set discretization, an attribute discretization method based on Binary Particle Swarm Optimization (BinaryPSO) is proposed. The method treats each binary particle as a cut-point subset, takes minimizing the number of cut points as the optimization objective, and uses the rough set classification accuracy as the constraint. The fitness function is defined so that a simplified decision system is obtained while losing as little of the decision system's information as possible. Simulation results show that the discretization obtained by this method contains few cut points while retaining high classification ability.
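A compact sketch of the scheme: each binary particle encodes a cut-point subset, and the fitness combines the cut count with a heavy penalty for destroying the decision table's consistency (the paper formulates classification accuracy as a constraint; folding it into a penalty term is our simplification):

```python
import numpy as np

def discretize(values, cuts):
    """Interval index of each value under a sorted set of cut points."""
    return np.digitize(values, cuts)

def inconsistency(bins, labels):
    """Number of objects lying in a bin that contains more than one class."""
    bad = 0
    for b in np.unique(bins):
        if len(np.unique(labels[bins == b])) > 1:
            bad += int(np.sum(bins == b))
    return bad

def bpso_cut_selection(values, labels, candidates, n_particles=20, iters=60, seed=0):
    """Binary PSO: each bit says whether a candidate cut point is kept."""
    rng = np.random.default_rng(seed)
    d = len(candidates)
    pos = rng.integers(0, 2, (n_particles, d))
    vel = rng.normal(0, 0.1, (n_particles, d))

    def fitness(bits):
        cuts = candidates[bits.astype(bool)]
        return bits.sum() + 1000 * inconsistency(discretize(values, cuts), labels)

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        # sigmoid of velocity gives the probability of each bit being 1
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(int)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return candidates[g.astype(bool)]
```

On a toy table whose class flips once along the attribute, any consistent cut set must contain the boundary cut, and the swarm prunes the rest.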

17.
Discretization is the process of converting numerical values into categorical values. Many discretization techniques exist, but they have various limitations, such as requiring user input on the number of categories or the number of records in each category. We therefore propose a new discretization technique called the low frequency discretizer (LFD) that does not require any user input. Some existing techniques also require no user input, but they rely on assumptions that are often difficult to justify, such as that every interval holds the same number of records, or that the number of intervals equals the number of records in each interval; LFD requires no such assumptions. In LFD the number of categories and the frequency of each category are not pre-defined but data driven. Further contributions of LFD are as follows: it uses low frequency values as cut points and thus reduces the information loss due to discretization; it uses all other categorical attributes and any numerical attribute that has already been categorized; and it considers that the influence of one attribute on the discretization of another depends on the strength of their relationship. We evaluate LFD against six existing techniques on eight datasets using three types of evaluation: classification accuracy, imputation accuracy and noise detection accuracy. Our experimental results indicate a significant improvement based on the sign test analysis.
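LFD's core intuition, that rarely occurring values make natural boundaries, can be sketched as follows; using the below-average-frequency values directly as cut points is a deliberately crude reading of the technique, not its full algorithm:

```python
import numpy as np
from collections import Counter

def low_frequency_cuts(values):
    """Use values whose frequency is below the average frequency as cut
    points, so dense runs of repeated values stay inside one interval."""
    freq = Counter(values.tolist())
    avg = np.mean(list(freq.values()))
    return sorted(v for v, c in freq.items() if c < avg)

def lfd_bins(values):
    """Interval index of each value under the low-frequency cut points."""
    return np.digitize(values, low_frequency_cuts(values))
```

For data where the values 1 and 3 each repeat three times and 2 appears once, 2 is the only below-average-frequency value and becomes the single cut point.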

18.
Because monitoring data from new-energy intelligent vehicles contain too many continuous attributes, a supervised discretization algorithm based on a discernibility matrix and the information gain ratio is proposed to reduce the value precision of continuous attributes, so that subsequent classification models for new-energy intelligent vehicles generalize better. While preserving classification performance, the algorithm selects as few result cut points as possible, improving the traditional discretization algorithm in three respects. First, a candidate cut-point discernibility matrix is built from the condition and decision attributes of the decision table, and the matrix is used to judge whether a cut point may lie between adjacent attribute values. Second, the information gain ratio is used to optimize the selection of result cut points. Third, a stopping threshold resolves the traditional algorithms' problem of selecting too many result cut points, and hence mediocre discretization, caused by an overly strict stopping condition. Experimental results show that the improved algorithm effectively reduces the number of cut points, greatly improves computational efficiency, and obtains discretization results close to those of classical algorithms.

19.
We present a method to learn maximal generalized decision rules from databases by integrating discretization, generalization and rough set feature selection. Our method reduces the data horizontally and vertically. In the first phase, discretization and generalization are integrated and the numeric attributes are discretized into a few intervals. The primitive values of symbolic attributes are replaced by high level concepts and some obvious superfluous or irrelevant symbolic attributes are also eliminated. Horizontal reduction is accomplished by merging identical tuples after the substitution of an attribute value by its higher level value in a pre-defined concept hierarchy for symbolic attributes, or the discretization of continuous (or numeric) attributes. This phase greatly decreases the number of tuples in the database. In the second phase, a novel context-sensitive feature merit measure is used to rank the features, a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing those attributes which are not in the relevant attributes subset and the data set is further reduced vertically without destroying the interdependence relationships between classes and the attributes. Then rough set-based value reduction is further performed on the reduced table and all redundant condition values are dropped. Finally, tuples in the reduced table are transformed into a set of maximal generalized decision rules. The experimental results on UCI data sets and a real market database demonstrate that our method can dramatically reduce the feature space and improve learning accuracy.

20.
With the rapid development of data mining and knowledge discovery, many data discretization algorithms have appeared. However, most existing discretization methods target continuous attribute values at fixed points, whereas in practice attributes frequently take continuous interval values. To address this problem, a new discretization method for continuous interval attribute values is proposed. The similarity between interval numbers describes the similarity relation between objects, and a similarity threshold determines the discretization relation, thereby discretizing the interval data. After analysing the role of similarity in the algorithm, a new variable, the association degree, is introduced to improve it. The algorithm's performance is tested on several data sets and compared with other algorithms; the results show that the algorithm is effective.
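A sketch of interval-number similarity and threshold-based grouping. Overlap-over-union is a common interval similarity measure, though the paper's exact measure may differ, and the greedy grouping below is our simplification of its threshold-driven discretization:

```python
def interval_similarity(i1, i2):
    """Similarity of two interval numbers as overlap length / union length."""
    (a1, b1), (a2, b2) = i1, i2
    overlap = max(0.0, min(b1, b2) - max(a1, a2))
    union = max(b1, b2) - min(a1, a2)
    return overlap / union if union > 0 else 1.0

def discretize_intervals(intervals, threshold):
    """Greedy grouping: an interval joins the first existing group whose
    representative it resembles above the threshold, else starts a new group."""
    groups = []   # representative interval of each group
    labels = []
    for iv in intervals:
        for gi, rep in enumerate(groups):
            if interval_similarity(iv, rep) >= threshold:
                labels.append(gi)
                break
        else:
            groups.append(iv)
            labels.append(len(groups) - 1)
    return labels
```

Two heavily overlapping intervals land in the same discrete group, while disjoint ones start new groups.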
