Similar Literature
16 similar documents were found (search time: 203 ms).
1.
《计算机科学与探索》2016,(10):1451-1458
代价敏感属性选择是数据挖掘的一个重要研究领域,其目的在于通过权衡测试代价和误分类代价,获得总代价最小的属性子集。针对经典回溯算法运行时间较长的缺点,结合分治思想,提出了一种改进的回溯算法。改进算法引入了两个相关参数,根据数据集规模自适应调整参数,并按参数大小拆分数据集,降低问题规模,以提高经典回溯算法的执行效率。针对较大规模数据集的实验结果表明,与经典的回溯算法相比,改进算法在保证效果的同时至少提高20%的运算效率;与启发式算法相比,改进算法在保证效率的同时取得了具有更小总代价的属性集合,可应用于实际问题。  相似文献   

2.
Combining decision-theoretic rough sets with cost-sensitive learning, a cost-sensitive classification method based on decision-theoretic rough sets is proposed. Using decision-theoretic rough set theory and attribute reduction, an optimal test attribute set is computed for each sample to be predicted, such that the classification result computed on that attribute set has the minimum misclassification cost and test cost; the minimum-total-cost classification result for the sample is then given. To deal with the high computational complexity of finding the globally optimal test attribute set, a heuristic search algorithm for a locally optimal test attribute set is proposed. The algorithm uses the contribution rate of a single attribute to reducing the total classification cost as the heuristic function, searches for the locally optimal test attribute set of each sample, and outputs the cost-sensitive classification result of the sample on that set. Experiments on UCI data show that the proposed algorithm effectively reduces both the total cost of the classification results and the number of test attributes, so that the classification results have both a small misclassification cost and a small test cost.
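
The heuristic described here lends itself to a short greedy sketch: repeatedly add the single attribute that most reduces the total cost (misclassification cost plus test cost) for the sample at hand, and stop when no attribute helps. The `total_cost(attrs, sample)` evaluator is assumed to be supplied (in the paper it would come from the decision-theoretic rough set model); this illustrates the search scheme, not the paper's exact heuristic function.

```python
def local_optimal_test_set(sample, candidate_attrs, total_cost):
    """Greedy search for a locally optimal test attribute set for one sample.
    total_cost(attrs, sample) = misclassification cost + test cost of
    classifying `sample` using the attribute set `attrs` (assumed given)."""
    selected = []
    remaining = list(candidate_attrs)
    current = total_cost(selected, sample)
    while remaining:
        # Contribution of each remaining attribute = reduction in total cost.
        gains = [(current - total_cost(selected + [a], sample), a) for a in remaining]
        best_gain, best_attr = max(gains, key=lambda g: g[0])
        if best_gain <= 0:        # no attribute lowers the total cost any further
            break
        selected.append(best_attr)
        remaining.remove(best_attr)
        current -= best_gain
    return selected, current
```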

3.
Feature selection is an important preprocessing step in machine learning and data mining, and feature selection for class-imbalanced data is an active research topic in machine learning and pattern recognition. Most traditional feature-selection classification algorithms pursue high accuracy and assume that there is no misclassification cost or that all costs are equal. In real applications, however, different misclassifications usually incur different costs. To obtain the feature subset with the minimum misclassification cost, this paper proposes a cost-sensitive feature selection algorithm based on sample-neighborhood preservation. Its core idea is to introduce sample neighborhoods into the existing cost-sensitive feature selection framework. Experimental results on eight real datasets demonstrate the advantages of the algorithm.

4.
Research on cost-sensitive attribute reduction for fuzzy decision-theoretic rough sets (Cited: 1; self-citations: 1; by others: 0)
刘偲  秦亮曦 《计算机科学》2016,43(Z11):67-72
To address the cost issue that is ubiquitous in decision making, cost-sensitive attribute reduction is studied on the basis of fuzzy theory and decision-theoretic rough sets. A total cost that includes misclassification cost and test cost is introduced into attribute reduction for fuzzy decision-theoretic rough sets. The goal of reduction is therefore no longer only the size of the positive region, but finding the optimal attribute subset that minimizes the total cost. A cost-sensitive attribute reduction algorithm for fuzzy decision-theoretic rough sets (COSAR) is proposed; it searches for the optimal attribute subset with a heuristic method. The steps of the algorithm are given, and its performance is compared with the existing quick attribute reduction algorithm for fuzzy decision-theoretic rough sets (QuickReduct). Experimental results show that COSAR has stronger attribute reduction ability, a lower total classification cost, and a shorter running time than QuickReduct, and the gap in total classification cost grows as the number of test samples increases.

5.
Test cost and misclassification cost are often considered in cost-sensitive learning. In practical applications, the test cost of an attribute is usually related to the granularity of its values, and the misclassification cost of an object with multiple attributes is often affected by the total test cost of its attributes. Based on this observation, the problem of attribute and granularity selection under a constraint on total test cost is studied. A method is proposed that minimizes the average total cost of data processing and selects the optimal attribute subset and data granularity simultaneously. A theoretical model of the method is established first, and then an efficient algorithm is designed. Experimental results show that the proposed algorithm can effectively perform attribute and granularity selection under test cost constraints of different sizes.
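
The joint selection problem can be made concrete with a brute-force sketch: each attribute may be tested at one of several granularities (or skipped), every combination whose total test cost stays within the budget is scored by test cost plus average misclassification cost, and the cheapest combination wins. All names here (`options`, `avg_misclass_cost`) are hypothetical, and the paper's algorithm is considerably more efficient than this enumeration.

```python
from itertools import product

def select_attrs_and_granularity(options, budget, avg_misclass_cost):
    """Exhaustive sketch of joint attribute/granularity selection under a
    test-cost budget.  options[a] is a list of (granularity, test_cost) pairs
    for attribute a, plus the pair (None, 0) meaning "do not test a".
    avg_misclass_cost(choice) returns the average misclassification cost of
    classifying with the chosen granularities (assumed to be supplied)."""
    attrs = list(options)
    best_choice, best_total = None, float("inf")
    for combo in product(*(options[a] for a in attrs)):
        test_cost = sum(c for _, c in combo)
        if test_cost > budget:               # respect the test-cost constraint
            continue
        choice = dict(zip(attrs, (g for g, _ in combo)))
        total = test_cost + avg_misclass_cost(choice)
        if total < best_total:
            best_choice, best_total = choice, total
    return best_choice, best_total
```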

6.
A cost-sensitive probabilistic neural network and its application to fault diagnosis (Cited: 3; self-citations: 1; by others: 2)
Most traditional classification algorithms aim to minimize the misclassification rate, ignoring both the differences between types of misclassification and the imbalance of the dataset. To address this, a cost-sensitive probabilistic neural network algorithm is proposed. The algorithm introduces a cost-sensitive mechanism into the probabilistic neural network, replaces the misclassification rate with the expected cost, takes minimization of the expected cost as the objective, and predicts the class of a new sample using the Bayes decision rule of minimum expected cost. The effectiveness of the algorithm is verified on industrial field data and the German Credit dataset. Experimental results show that the algorithm has a high fault recognition rate, strong generalization ability, and short modeling time.
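
The minimum-expected-cost Bayes decision rule mentioned here can be written down directly: given class posteriors (produced by the probabilistic neural network in the paper, but by any probability estimator in this sketch) and a cost matrix, predict the class whose expected cost is smallest. The cost values in the example are illustrative.

```python
import numpy as np

def min_expected_cost_decision(posteriors, cost_matrix):
    """Bayes decision rule with minimum expected cost: for each candidate
    prediction k, the expected cost is sum_j P(j | x) * cost[j, k]; the class
    with the smallest expected cost is returned.  posteriors holds P(j | x)
    for every class j; cost_matrix[j, k] is the cost of predicting k when the
    true class is j."""
    posteriors = np.asarray(posteriors)
    cost_matrix = np.asarray(cost_matrix)
    expected = posteriors @ cost_matrix      # expected cost of each candidate class
    return int(np.argmin(expected))

# Illustrative example: a missed fault is 5x more costly than a false alarm.
# posteriors = [P(normal|x), P(fault|x)]
print(min_expected_cost_decision([0.7, 0.3], [[0, 1], [5, 0]]))  # -> 1 (predict "fault")
```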

7.
Most research on imbalanced datasets focuses on either pure dataset reconstruction (resampling) or pure cost-sensitive learning. Since imbalanced class distributions and unequal misclassification costs often occur together, this paper proposes a cost-sensitive learning algorithm based on hybrid resampling that aims to minimize misclassification cost. The algorithm combines the two types of solutions: it first uses sample-space reconstruction to bring the two classes of the original dataset to rough balance, and then applies a cost-sensitive learning algorithm for classification. This improves the classification accuracy of the minority class while effectively reducing the total misclassification cost. Experimental results confirm that the algorithm outperforms traditional algorithms on class-imbalance problems.
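
A minimal sketch of the two-stage flow described above, under stated assumptions: plain random oversampling stands in for the paper's hybrid resampling scheme, and scikit-learn's `class_weight` stands in for its cost-sensitive learner; `cost_fn` and `cost_fp` are the assumed costs of missing a positive and of a false alarm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def hybrid_resample_then_cost_sensitive(X, y, cost_fn, cost_fp, rng=None):
    """Two-stage sketch: (1) reconstruct the sample space so the two classes
    are roughly balanced (random oversampling of the minority class here, as a
    stand-in for the paper's hybrid resampling); (2) train a classifier whose
    class weights encode the unequal misclassification costs."""
    rng = rng or np.random.default_rng(0)
    X, y = np.asarray(X), np.asarray(y)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    clf = DecisionTreeClassifier(class_weight={1: cost_fn, 0: cost_fp})
    return clf.fit(X[idx], y[idx])
```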

8.
The dissimilarity-based ensemble of extreme learning machines (D-ELM) achieves fairly stable classification results on gene expression data, but it is accuracy-oriented: when the misclassification costs of the given samples are unequal, it cannot directly satisfy the requirement of minimum average misclassification cost in cost-sensitive classification. By introducing probability estimates into the classification process and reconstructing the classification results with misclassification cost and rejection cost, a cost-sensitive algorithm based on the dissimilarity-based ensemble of extreme learning machines (CS-D-ELM) is proposed. Applied to gene expression datasets, the algorithm achieves good classification results.
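
One way to picture how a rejection cost enters the decision, as a hedged sketch rather than the exact CS-D-ELM construction: compute the expected misclassification cost of every class from the probability estimates and output "reject" whenever even the cheapest class costs more in expectation than the fixed rejection cost.

```python
import numpy as np

def classify_with_reject(posteriors, cost_matrix, reject_cost):
    """Cost-sensitive decision with a rejection option.  The probability
    estimates would come from the ensemble (D-ELM in the abstract); here they
    are simply an input vector.  cost_matrix[j, k] is the cost of predicting k
    when the true class is j; reject_cost is the fixed cost of abstaining."""
    expected = np.asarray(posteriors) @ np.asarray(cost_matrix)
    best = int(np.argmin(expected))
    return "reject" if expected[best] > reject_cost else best

# Illustrative values: the classes are nearly equally likely, so rejecting is cheaper.
print(classify_with_reject([0.55, 0.45], [[0, 1], [1, 0]], reject_cost=0.3))  # -> 'reject'
```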

9.
To address the low classification accuracy of decision-theoretic rough set attribute reduction once costs are introduced, the balance between cost sensitivity and classification accuracy is studied. Taking the total classification cost and the approximate classification quality as constraints in the attribute reduction process, and combining them with simulated annealing, an attribute reduction algorithm for decision-theoretic rough sets based on cost sensitivity and approximate classification quality (ARACOQ) is proposed. Simulation experiments on UCI datasets verify the effectiveness of ARACOQ: the algorithm can find, within an acceptable cost range, the attribute reduct with the highest classification accuracy.
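
A compact sketch of what a simulated-annealing search over attribute subsets with cost and quality constraints can look like; the `quality` and `total_cost` evaluators, the bit-flip neighborhood, and the cooling schedule are all illustrative assumptions rather than the settings of ARACOQ.

```python
import math, random

def anneal_reduct(n_attrs, quality, total_cost, cost_budget, quality_floor,
                  t0=1.0, cooling=0.95, steps=500, seed=0):
    """Simulated-annealing sketch of constrained attribute reduction: states
    are attribute subsets, neighbors flip one attribute, and a candidate is
    kept only if total_cost(subset) <= cost_budget and
    quality(subset) >= quality_floor; among feasible subsets, higher quality
    (classification accuracy) is preferred."""
    rng = random.Random(seed)

    def feasible(s):
        return total_cost(s) <= cost_budget and quality(s) >= quality_floor

    current = frozenset(range(n_attrs))          # start from the full attribute set
    best, t = current, t0
    for _ in range(steps):
        a = rng.randrange(n_attrs)
        candidate = current - {a} if a in current else current | {a}
        if not feasible(candidate):
            continue
        delta = quality(candidate) - quality(current)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
            if quality(current) > quality(best):
                best = current
        t *= cooling
    return best
```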

10.
A cost-sensitive decision tree is a decision tree built to minimize misclassification cost and test cost. As data volumes grow rapidly, dirty (low-quality) data appears more and more frequently. When building a cost-sensitive decision tree, dirty data in the training set affects both the selection of splitting attributes and the partitioning of tree nodes, so the data must be cleaned before the classification task. In practice, however, the time and money required for data cleaning are often high; many users specify the maximum cleaning cost they can accept and require the cleaning cost to stay within this threshold. Therefore, besides misclassification cost and test cost, the cleaning cost of dirty data is also an important factor in building cost-sensitive decision trees. Existing research on building cost-sensitive decision trees does not consider data quality. To fill this gap, this paper studies the construction of cost-sensitive decision trees on dirty data, proposes three construction methods that integrate data-cleaning algorithms, and demonstrates their effectiveness experimentally.

11.
An ensemble learning algorithm for multi-label cost-sensitive classification (Cited: 12; self-citations: 2; by others: 10)
付忠良 《自动化学报》2014,40(6):1075-1085
Although a multi-label classification problem can be converted into an ordinary multi-class classification problem, a multi-label cost-sensitive classification problem is hard to convert into a multi-class cost-sensitive classification problem. By analyzing the difficulties encountered when extending multi-class cost-sensitive learning algorithms to multi-label cost-sensitive learning, an ensemble learning algorithm for multi-label cost-sensitive classification is proposed. The algorithm's average misclassification cost is the sum of the cost of falsely detected labels and the cost of missed labels. Its procedure is similar to AdaBoost (Adaptive Boosting): it automatically learns multiple weak classifiers and combines them into a strong classifier, and the average misclassification cost of the strong classifier decreases as weak classifiers are added. The differences between the proposed algorithm and multi-class cost-sensitive AdaBoost are analyzed in detail, including the basis for output labels and the meaning of misclassification cost. Unlike ordinary multi-class cost-sensitive classification, the misclassification costs in multi-label cost-sensitive classification must satisfy certain constraints, which are analyzed and stated explicitly. Simplifying the algorithm yields a multi-label AdaBoost algorithm and a multi-class cost-sensitive AdaBoost algorithm. Theoretical analysis and experimental results show that the proposed ensemble learning algorithm for multi-label cost-sensitive classification is effective and minimizes the average misclassification cost; in particular, for multi-class problems in which misclassification costs differ greatly between classes, it clearly outperforms existing multi-class cost-sensitive AdaBoost algorithms.
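
The per-example cost that the algorithm averages is defined explicitly in the abstract (missed-label cost plus falsely-detected-label cost) and is easy to state as code; the boosting procedure itself is not sketched here. The per-label cost dictionaries are assumed inputs.

```python
def multilabel_misclassification_cost(true_labels, predicted_labels,
                                      miss_cost, false_cost):
    """Cost of one multi-label prediction: cost of missed labels (in the true
    set but not predicted) plus cost of falsely detected labels (predicted but
    not true).  miss_cost[l] / false_cost[l] are per-label costs (assumed
    given); averaging this over a dataset gives the algorithm's average cost."""
    true_labels, predicted_labels = set(true_labels), set(predicted_labels)
    missed = sum(miss_cost[l] for l in true_labels - predicted_labels)
    false = sum(false_cost[l] for l in predicted_labels - true_labels)
    return missed + false

# Example with three labels and unit costs: one label missed, one falsely added.
costs = {0: 1.0, 1: 1.0, 2: 1.0}
print(multilabel_misclassification_cost({0, 2}, {0, 1}, costs, costs))  # -> 2.0
```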

12.
Cost-sensitive learning is a crucial problem in machine learning research. The traditional classification problem assumes that every misclassification has the same cost, and the goal of the learning algorithm is to minimize the expected error rate. In cost-sensitive learning, the misclassification costs of samples from different categories are not the same, and the goal of the algorithm is to minimize the total misclassification cost. Cost-sensitive learning meets the practical demands of real-life classification problems such as medical diagnosis and financial forecasting. Owing to its fast learning speed and good performance, the extreme learning machine (ELM) has become one of the most effective classification algorithms, and voting based on extreme learning machines (V-ELM) makes the classification results more accurate and stable. However, V-ELM and other variants of ELM all assume that all misclassifications have the same cost, so they cannot handle cost-sensitive problems well. To overcome this drawback, an algorithm called cost-sensitive ELM (CS-ELM) is proposed by introducing the misclassification cost of each sample into V-ELM. Experimental results on gene expression data show that CS-ELM is effective in reducing misclassification cost.
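
As a hedged illustration only (the actual CS-ELM construction may embed costs differently), one simple way to make an ensemble vote cost-sensitive is to treat the vote fractions as rough class-probability estimates and output the class with the smallest expected misclassification cost:

```python
import numpy as np

def cost_sensitive_vote(votes, cost_matrix, n_classes):
    """Turn an ensemble vote (as in V-ELM) into a cost-sensitive decision:
    vote fractions serve as rough class-probability estimates, and the output
    is the class with the smallest expected misclassification cost.  `votes`
    is the list of class indices predicted by the individual learners;
    cost_matrix[j, k] is the cost of predicting k when the true class is j."""
    counts = np.bincount(np.asarray(votes), minlength=n_classes)
    probs = counts / counts.sum()
    expected = probs @ np.asarray(cost_matrix)
    return int(np.argmin(expected))

# 10 learners vote 7:3 for class 0, but mistaking class 1 for class 0 costs 4x more:
print(cost_sensitive_vote([0] * 7 + [1] * 3, [[0, 1], [4, 0]], n_classes=2))
# -> 1 (the minority vote wins because missing class 1 is far costlier)
```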

13.
Cost-sensitive learning involves nine major types of cost that in general have heterogeneous units, referred to as heterogeneous costs. Existing cost-sensitive learning (CSL) algorithms are based on the assumption that all the costs involved can be transformed into a unified unit, called the homogeneity assumption of costs. Since it is challenging to construct suitable transformation functions for costs with diverse units, this paper designs a heterogeneous-cost-sensitive learning (HCSL) algorithm to make split-attribute selection more effective. The paper first proposes an efficient method of reducing the heterogeneity caused by both cost mechanisms and attribute information. Then, all heterogeneous costs together with the attribute information are incorporated into the process of split-attribute selection, called HCAI-based split-attribute selection. Third, overfitting is tackled by designing a simple and effective smoothing strategy, so that cost-sensitive decision tree classifiers can be built with the HCSL algorithm. The proposed HCSL algorithm is evaluated on six UCI datasets. Experimental results show that the proposed approach outperforms existing methods in handling the heterogeneity caused by cost mechanisms and attribute information.

14.
Cultural modeling aims at developing behavioral models of groups and analyzing the impact of cultural factors on group behavior using computational methods. Machine learning methods, and in particular classification, play a central role in such applications. In modeling cultural data, standard classifiers are expected to yield good performance under the assumption that different classification errors have uniform costs. However, this assumption is often violated in practice, and the performance of standard classifiers is severely hindered. To handle this problem, this paper empirically studies cost-sensitive learning in cultural modeling. We consider the cost factor when building the classifiers, with the aim of minimizing total misclassification costs. We conduct experiments to investigate four typical cost-sensitive learning methods, combine them with six standard classifiers, and evaluate their performance under various conditions. Our empirical study verifies the effectiveness of cost-sensitive learning in cultural modeling. Based on the experimental results, we gain a thorough insight into the problem of non-uniform misclassification costs, as well as the selection of cost-sensitive methods, base classifiers, and method-classifier pairs for this domain. Furthermore, we propose an improved algorithm which outperforms the best method-classifier pair on the benchmark cultural datasets.

15.
Cost-sensitive mining based on support vector machines (Cited: 4; self-citations: 0; by others: 4)
For data mining applications in which negative and positive samples have different misclassification costs, a cost-sensitive support vector machine algorithm, CS-SVM, is proposed. CS-SVM consists of three steps: first, a Sigmoid function is introduced to estimate the posterior probability of each sample from its distance to the separating hyperplane; second, the class labels of the training samples are reconstructed according to the principle of minimum misclassification cost; finally, a standard SVM is trained on the relabeled training set, yielding an optimal separating hyperplane that embeds the misclassification costs. Based on the idea of CS-SVM, a general cost-sensitive classification algorithm embedding misclassification costs, G-CSC, is also proposed. Experimental results show that, compared with SVM, CS-SVM greatly reduces the average misclassification cost on the test set.
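
A minimal sketch of the three CS-SVM steps listed above, using scikit-learn's `SVC` as the base SVM. The linear kernel, the fixed sigmoid parameters, and the two cost values are illustrative assumptions; the paper fits the sigmoid from the data.

```python
import numpy as np
from sklearn.svm import SVC

def cs_svm(X, y, cost_fn, cost_fp, sigmoid_a=-1.0, sigmoid_b=0.0):
    """Sketch of CS-SVM: estimate posteriors from distances to the hyperplane,
    relabel by minimum expected cost, then retrain a standard SVM.
    cost_fn = cost of predicting negative for a true positive,
    cost_fp = cost of predicting positive for a true negative."""
    X, y = np.asarray(X), np.asarray(y)
    base = SVC(kernel="linear").fit(X, y)            # initial SVM on the raw labels
    d = base.decision_function(X)                    # signed distance to the hyperplane
    p_pos = 1.0 / (1.0 + np.exp(sigmoid_a * d + sigmoid_b))   # step 1: sigmoid posterior P(y=1|x)
    # Step 2: relabel each training sample with the class of minimum expected cost.
    y_new = np.where(p_pos * cost_fn > (1.0 - p_pos) * cost_fp, 1, 0)
    # Step 3: retrain a standard SVM on the relabeled data.
    return SVC(kernel="linear").fit(X, y_new)
```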

16.
Test strategies for cost-sensitive decision trees (Cited: 2; self-citations: 0; by others: 2)
In medical diagnosis, doctors must often determine what medical tests (e.g., X-ray and blood tests) should be ordered for a patient to minimize the total cost of medical tests and misdiagnosis. In this paper, we design cost-sensitive machine learning algorithms to model this learning and diagnosis process. Medical tests are like attributes in machine learning whose values may be obtained at a cost (attribute cost), and misdiagnoses are like misclassifications which may also incur a cost (misclassification cost). We first propose a lazy decision tree learning algorithm that minimizes the sum of attribute costs and misclassification costs. Then, we design several novel "test strategies" that can request to obtain values of unknown attributes at a cost (similar to doctors' ordering of medical tests at a cost) in order to minimize the total cost for test examples (new patients). These test strategies correspond to different situations in real-world diagnoses. We empirically evaluate these test strategies, and show that they are effective and outperform previous methods. Our results can be readily applied to real-world diagnosis tasks. A case study on heart disease is given throughout the paper.
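
The flavor of such a test strategy can be sketched as a simple greedy loop: keep ordering the test whose expected reduction in misclassification cost most exceeds its price, and stop when no remaining test pays for itself. The three callbacks (`exp_cost`, `exp_cost_if_tested`, `observe`) are assumed to be supplied, e.g. derived from the learned decision tree; this illustrates the spirit of the paper's strategies rather than reproducing any particular one.

```python
def sequential_test_strategy(known, untested, test_cost,
                             exp_cost, exp_cost_if_tested, observe):
    """Greedy sequential test strategy.  exp_cost(known) is the expected
    misclassification cost given the attributes known so far;
    exp_cost_if_tested(known, a) is the expected cost after additionally
    testing attribute a (averaged over its possible outcomes); observe(a)
    actually performs the test and returns its value."""
    known, untested = dict(known), set(untested)
    spent = 0.0
    while untested:
        current = exp_cost(known)
        # Value of a test = expected cost reduction minus the test's own cost.
        gains = {a: current - exp_cost_if_tested(known, a) - test_cost[a]
                 for a in untested}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:          # no remaining test pays for itself
            break
        known[best] = observe(best)   # pay the test cost and read the value
        spent += test_cost[best]
        untested.discard(best)
    return known, spent
```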
