Similar Documents
20 similar documents retrieved (search time: 43 ms)
1.
Most research on imbalanced data sets focuses either purely on restructuring the data set or purely on cost-sensitive learning. Observing that skewed class distributions and unequal misclassification costs often occur together, this paper proposes a cost-sensitive learning algorithm based on hybrid resampling whose objective is to minimize the misclassification cost. The algorithm combines the two types of solution organically: it first rebalances the two classes of the original data set by restructuring the sample space, and then applies a cost-sensitive learning algorithm for classification. This improves the classification accuracy of the minority class while effectively reducing the total misclassification cost. Experimental results show that the algorithm outperforms traditional algorithms on imbalanced-class problems.
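A minimal sketch of the two-stage idea described above, assuming a binary problem with class 1 as the minority class; the resampling scheme, cost values, and function names are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.utils import resample
from sklearn.tree import DecisionTreeClassifier

def hybrid_resample_cost_sensitive(X, y, cost_fn=5.0, cost_fp=1.0, seed=0):
    """y: binary labels, 1 = minority class. cost_fn: cost of missing a
    minority example; cost_fp: cost of a false alarm."""
    X_maj, X_min = X[y == 0], X[y == 1]
    # Stage 1: oversample the minority class until the two classes are
    # roughly balanced (one of several possible resampling schemes).
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj),
                        random_state=seed)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.hstack([np.zeros(len(X_maj)), np.ones(len(X_min_up))])
    # Stage 2: cost-sensitive learning via class weights proportional
    # to the misclassification costs.
    clf = DecisionTreeClassifier(class_weight={0: cost_fp, 1: cost_fn},
                                 random_state=seed)
    return clf.fit(X_bal, y_bal)
```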

2.
Cost-sensitive attribute selection aims to find an attribute subset with minimum total cost by trading off test costs against misclassification costs. Most existing cost-sensitive attribute selection methods assume a fixed misclassification cost and therefore handle problems such as imbalanced class distributions poorly; on large data sets, poor efficiency is another major issue. To address these problems, this paper designs a new dynamic misclassification-cost mechanism targeting minimum total cost. Following a divide-and-conquer strategy, each data set is adaptively split by columns according to its size. The minimum-cost attribute selection problem is redefined under dynamic misclassification costs, and a divide-and-conquer algorithm for cost-sensitive attribute selection under dynamic misclassification costs is proposed. Experiments show that the algorithm improves efficiency while achieving the optimal misclassification cost, guaranteeing that the selected attribute subset has the minimum total cost.
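For orientation only, a greedy sketch of minimum-total-cost attribute selection (test cost plus cross-validated misclassification cost); the paper's dynamic-cost mechanism and column-wise splitting are not reproduced here, and all names are illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

def total_cost(X, y, attrs, test_costs, cost_matrix):
    """Total cost of using the attribute subset `attrs`:
    misclassification cost (estimated by 5-fold CV) + test costs."""
    pred = cross_val_predict(DecisionTreeClassifier(random_state=0),
                             X[:, attrs], y, cv=5)
    mis = sum(cost_matrix[t][p] for t, p in zip(y, pred))
    return mis + len(y) * sum(test_costs[a] for a in attrs)

def greedy_select(X, y, test_costs, cost_matrix):
    remaining, chosen, best = set(range(X.shape[1])), [], float("inf")
    while remaining:
        # Pick the attribute whose addition lowers total cost the most.
        a, c = min(((a, total_cost(X, y, chosen + [a], test_costs,
                                   cost_matrix)) for a in remaining),
                   key=lambda t: t[1])
        if c >= best:          # no attribute reduces the total cost
            break
        best, chosen = c, chosen + [a]
        remaining.remove(a)
    return chosen, best
```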

3.
An ensemble learning algorithm for multi-label cost-sensitive classification   Cited by: 12 (self-citations: 2, others: 10)
付忠良. Acta Automatica Sinica, 2014, 40(6): 1075-1085
Although a multi-label classification problem can be converted into an ordinary multi-class classification problem, a multi-label cost-sensitive classification problem is hard to convert into a multi-class cost-sensitive one. After analyzing the difficulties that arise when extending multi-class cost-sensitive learning algorithms to the multi-label setting, this paper proposes an ensemble learning algorithm for multi-label cost-sensitive classification. The algorithm's average misclassification cost is the sum of the costs of falsely detected labels and missed labels, and its procedure resembles Adaptive Boosting (AdaBoost): it automatically learns multiple weak classifiers and combines them into a strong classifier whose average misclassification cost decreases as weak classifiers are added. The differences between the proposed algorithm and multi-class cost-sensitive AdaBoost are analyzed in detail, including the basis for the output labels and the meaning of the misclassification costs. Unlike ordinary multi-class cost-sensitive classification, the misclassification costs in multi-label cost-sensitive classification must satisfy certain constraints, which are analyzed and stated explicitly. Simplifying the algorithm yields a multi-label AdaBoost algorithm and a multi-class cost-sensitive AdaBoost algorithm. Both theoretical analysis and experimental results show that the proposed ensemble learning algorithm is effective and minimizes the average misclassification cost; in particular, for multi-class problems where the misclassification costs of different classes differ greatly, it clearly outperforms existing multi-class cost-sensitive AdaBoost algorithms.
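A short sketch of the cost notion described above, where the average misclassification cost sums per-label costs for falsely detected and missed labels; the cost vectors and names are illustrative assumptions:

```python
import numpy as np

def avg_misclassification_cost(Y_true, Y_pred, c_false, c_miss):
    """Y_true, Y_pred: (n_samples, n_labels) 0/1 arrays.
    c_false[j]: cost of wrongly predicting label j present.
    c_miss[j]:  cost of failing to predict label j."""
    false_det = ((Y_pred == 1) & (Y_true == 0)).astype(float)  # spurious
    missed    = ((Y_pred == 0) & (Y_true == 1)).astype(float)  # missed
    cost = false_det @ c_false + missed @ c_miss   # cost per sample
    return cost.mean()
```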

4.
Cost-sensitive mining based on support vector machines   Cited by: 4 (self-citations: 0, others: 4)
For data mining applications in which negative and positive examples carry different misclassification costs, this paper proposes a cost-sensitive support vector machine algorithm, CS-SVM. CS-SVM consists of three steps. First, a sigmoid function is introduced to estimate each sample's posterior probability from its distance to the separating hyperplane. Second, the class labels of the training samples are reconstructed according to the minimum-misclassification-cost principle. Finally, a standard SVM is trained on the reconstructed training set, yielding an optimal separating hyperplane with the misclassification costs embedded. Based on the idea behind CS-SVM, a general cost-embedding cost-sensitive classification algorithm, G-CSC, is also proposed. Experimental results show that, compared with a standard SVM, CS-SVM greatly reduces the average misclassification cost on the test set.
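A hedged sketch of the three steps just described: a sigmoid posterior from the SVM margin, relabeling by minimum expected cost, and retraining a standard SVM. The sigmoid parameters here are fixed rather than fitted (Platt-style fitting is one common choice), and the kernel and names are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def cs_svm(X, y, c01, c10, A=-1.0, B=0.0):
    """y in {0, 1}; c01: cost of predicting 0 when the truth is 1;
    c10: cost of predicting 1 when the truth is 0."""
    base = SVC(kernel="rbf").fit(X, y)
    margin = base.decision_function(X)          # signed distance proxy
    p1 = 1.0 / (1.0 + np.exp(A * margin + B))   # sigmoid posterior P(y=1|x)
    # Relabel each point with the class of minimum expected cost:
    # predicting 1 costs (1 - p1) * c10; predicting 0 costs p1 * c01.
    y_new = ((1 - p1) * c10 < p1 * c01).astype(int)
    # Retrain a standard SVM on the cost-reconstructed labels.
    return SVC(kernel="rbf").fit(X, y_new)
```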

5.
Decision tree induction has been widely studied and applied. In safety applications, such as determining whether a chemical process is safe or whether a person has a medical condition, the cost of misclassification in one of the classes is significantly higher than in the other. Several authors have tackled this problem by developing cost-sensitive decision tree learning algorithms or by changing the distribution of training examples to bias the learning process so as to take account of costs. A prerequisite for applying such algorithms is the availability of misclassification costs. Although this may be possible for some applications, obtaining reasonable estimates of misclassification costs is not easy in the area of safety.
This paper presents a new algorithm for applications where the cost of misclassification cannot be quantified, although the cost of misclassification in one class is known to be significantly higher than in another. The algorithm utilizes linear discriminant analysis to identify oblique relationships between continuous attributes and then carries out an appropriate modification to ensure that the resulting tree errs on the side of safety. The algorithm is evaluated against one of the best-known cost-sensitive algorithms (ICET), a well-known oblique decision tree algorithm (OC1), and an algorithm that utilizes robust linear programming.
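An illustrative simplification of the safety-bias idea, not the paper's full tree algorithm: find an oblique direction with linear discriminant analysis, then shift the threshold so that no known unsafe training case lands on the "safe" side. All names are assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def safe_oblique_split(X, y_unsafe):
    """y_unsafe: 1 for the high-cost (unsafe) class, 0 for safe."""
    lda = LinearDiscriminantAnalysis().fit(X, y_unsafe)
    score = lda.decision_function(X)   # oblique projection of each point
    # Shift the cut so every unsafe training example is classified unsafe,
    # erring on the side of safety.
    thr = score[y_unsafe == 1].min()
    return lambda Xq: lda.decision_function(Xq) >= thr  # True = unsafe
```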

6.
Cost-sensitive learning is a crucial problem in machine learning research. Traditional classification assumes that misclassifying any category carries the same cost, and the learning algorithm's target is to minimize the expected error rate. In cost-sensitive learning, the misclassification costs for samples of different categories are not the same, and the target is to minimize the total misclassification cost. Cost-sensitive learning meets the practical demands of real-life classification problems, such as medical diagnosis and financial forecasting. Owing to its fast learning speed and strong performance, the extreme learning machine (ELM) has become one of the best classification algorithms, and voting based on extreme learning machine (V-ELM) makes classification results more accurate and stable. However, V-ELM and other versions of ELM all assume that all misclassifications have the same cost, so they cannot solve cost-sensitive problems well. To overcome this drawback, an algorithm called cost-sensitive ELM (CS-ELM) is proposed by introducing each sample's misclassification cost into V-ELM. Experimental results on gene expression data show that CS-ELM is effective in reducing misclassification cost.
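A minimal sketch of one way to make an ELM cost-sensitive: a random hidden layer followed by a weighted ridge solve in which each sample is weighted by its misclassification cost. The voting step and the cost-assignment scheme of CS-ELM are omitted; this is an assumption-laden illustration, not the paper's algorithm:

```python
import numpy as np

def cost_weighted_elm(X, y_onehot, sample_cost, n_hidden=100,
                      ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # random hidden features
    Hc = H * sample_cost[:, None]          # per-sample cost weights (C @ H)
    # Weighted regularized least squares: beta = (H'CH + rI)^-1 H'CY.
    beta = np.linalg.solve(H.T @ Hc + ridge * np.eye(n_hidden),
                           Hc.T @ y_onehot)
    return lambda Xq: (np.tanh(Xq @ W + b) @ beta).argmax(axis=1)
```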

7.
The dissimilarity-based ensemble of extreme learning machines (D-ELM) achieves stable classification results on gene expression data, but it is accuracy-oriented: when the misclassification costs of the given samples are unequal, it cannot directly meet the minimum-average-misclassification-cost requirement of cost-sensitive classification. By introducing probability estimation into the classification process and reconstructing the classification results with misclassification and rejection costs, a cost-sensitive dissimilarity-based ensemble ELM (CS-D-ELM) is proposed. Applied to gene expression data sets, the algorithm achieves good classification results.
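The decision rule implied by misclassification and rejection costs can be sketched compactly: given class-probability estimates, pick the class with minimum expected cost, or reject when rejection is cheaper. Cost values and names below are illustrative:

```python
import numpy as np

def min_cost_decision(proba, cost_matrix, reject_cost):
    """proba: (n, k) class probabilities; cost_matrix[i, j]: cost of
    predicting j when the truth is i. Returns -1 for rejection."""
    exp_cost = proba @ cost_matrix       # expected cost of each prediction
    best = exp_cost.argmin(axis=1)
    best_cost = exp_cost.min(axis=1)
    return np.where(best_cost <= reject_cost, best, -1)
```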

8.
郭冰楠, 吴广潮. Journal of Computer Applications, 2019, 39(10): 2888-2892
In network-loan user data sets, the numbers of users with successful and failed loans are severely imbalanced, and traditional machine learning algorithms emphasize overall classification accuracy when solving such problems, leading to low prediction precision for successful-loan users. To address this, the class distribution is incorporated into the computation of the cost-sensitive decision tree's sensitivity function, weakening the influence of positive and negative sample counts on the misclassification cost and yielding an improved cost-sensitive decision tree. Using this tree as the base classifier and classification accuracy as the criterion, the better-performing base classifiers are selected and combined with the classifier generated in the final stage to obtain the final classifier. Experimental results show that, compared with algorithms commonly used for this kind of problem (such as MetaCost, cost-sensitive decision trees, and AdaCost), the improved cost-sensitive decision tree reduces the overall misclassification rate for network-loan users and generalizes better.

9.
Traditional intrusion detection models based on genetic neural networks do not take misclassification costs into account. By integrating cost-sensitive features into a genetic-neural-network-based network intrusion detection model, this work overcomes the traditional model's shortcoming of potentially incurring excessive costs through misclassification. Experimental results show that, with the cost-sensitive features added, the genetic neural network can better control the costs incurred by false alarms and missed attacks in a network intrusion detection system.

10.
In customer segmentation, differences in customer value and the large disparity in the numbers of customers of different values lead to unequal misclassification costs and imbalanced data samples. This work studies how misclassification costs arise in value-based customer segmentation, builds a dynamic cost function based on customer value, and on that basis designs a cost-sensitive support vector machine classifier. Experimental results show that the method controls cost sensitivity more precisely, lowers the overall misclassification cost, makes the model reflect classification costs more accurately, and identifies customer value effectively.

11.
A cost-sensitive decision tree is a decision tree that aims to minimize misclassification and test costs. With data volumes growing rapidly, dirty data appears ever more frequently. When a cost-sensitive decision tree is built, dirty data in the training set affects both the choice of splitting attributes and the partitioning of tree nodes, so the data needs to be cleaned before the classification task. In practice, however, data cleaning often costs considerable time and money, so many users specify a maximum acceptable cleaning cost and require that cleaning stay within this budget. Hence, besides misclassification and test costs, the cost of cleaning dirty data is another important factor in building cost-sensitive decision trees, yet existing research on cost-sensitive decision trees does not consider data quality. To fill this gap, this work studies the construction of cost-sensitive decision trees on dirty data, proposes three construction methods that integrate data cleaning algorithms, and demonstrates their effectiveness experimentally.

12.
Real-time transient stability status prediction (RTSSP) is very important for maintaining the safety and stability of electrical power systems, where any unstable contingency is likely to cause a large-scale blackout. Most machine learning methods used for RTSSP attempt to attain a low classification error, which implies that the misclassification costs of different categories are the same. However, misclassifying an unstable case as stable usually incurs much higher costs than misclassifying a stable case as unstable. In this paper, a new RTSSP method based on a cost-sensitive extreme learning machine (CELM) is proposed, which treats RTSSP as a cost-sensitive classification problem. The CELM is constructed to pursue minimum misclassification costs, and its detailed implementation procedures for RTSSP are also described. The proposed method is implemented on the New England 39-bus power system. Compared with three cost-blind methods (ELM, SVM and DT) and two cost-sensitive methods (cost-sensitive DT and cost-sensitive SVM), the simulation results show that the proposed method achieves lower total misclassification costs and a lower false dismissal rate with low computational complexity, meeting the demands of RTSSP for computation speed and reliability.

13.
Feature selection is an important preprocessing step in machine learning and data mining, and feature selection for class-imbalanced data is a hot research topic in machine learning and pattern recognition. Most traditional feature selection and classification algorithms pursue high accuracy and assume that there are no misclassification costs or that all costs are equal. In real applications, however, different misclassifications often incur different costs. To obtain the feature subset with minimum misclassification cost, this paper proposes a cost-sensitive feature selection algorithm based on sample neighborhood preservation, whose core idea is to introduce sample neighborhoods into the existing cost-sensitive feature selection framework. Experimental results on eight real data sets demonstrate the algorithm's superiority.

14.
The last decade has seen increased attention paid to the development of cost-sensitive learning algorithms that aim to minimize misclassification costs while still maintaining accuracy. Most of this attention has focused on cost-sensitive decision tree learning, whereas relatively little has been paid to assessing whether better cost-sensitive classifiers can be developed based on Bayesian networks. Hence, this paper presents EBNO, an algorithm that utilizes genetic algorithms to learn cost-sensitive Bayesian networks, where genes represent the links between the nodes in Bayesian networks and the expected cost is used as a fitness function. An empirical comparison of the new algorithm has been carried out with respect to (a) an algorithm that induces cost-insensitive Bayesian networks, to provide a baseline, (b) ICET, a well-known algorithm that uses genetic algorithms to induce cost-sensitive decision trees, (c) use of MetaCost to induce cost-sensitive Bayesian networks via bagging, (d) use of AdaBoost to induce cost-sensitive Bayesian networks, and (e) use of XGBoost, a gradient boosting algorithm, to induce cost-sensitive decision trees. An empirical evaluation on 28 data sets reveals that EBNO performs well in comparison with the algorithms that produce single interpretable models and performs just as well as algorithms that use bagging and boosting methods.

15.
Cost-sensitive support vector machines   Cited by: 12 (self-citations: 1, others: 11)
Traditional accuracy-oriented classification algorithms usually assume that every sample's misclassification carries the same cost and that the classes are of roughly equal size. When these assumptions do not hold in real-world data mining, directly applying such algorithms does not yield satisfactory classification and prediction. To fill this gap, a cost-sensitive support vector machine (CS-SVM) is designed, based on the standard SVM, by integrating the samples' unequal misclassification costs into the SVM's design. Experimental results show that CS-SVM is effective.
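One common realization of embedding unequal misclassification costs into the SVM objective is a per-class penalty C; the sketch below uses scikit-learn's class_weight for that purpose, as an assumption rather than the paper's exact formulation:

```python
from sklearn.svm import SVC

# Penalize errors on class 1 five times more heavily than on class 0,
# which scales the SVM's slack penalty C per class.
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 5.0})
# Usage: clf.fit(X_train, y_train); clf.predict(X_test)
```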

16.
Cultural modeling aims at developing behavioral models of groups and analyzing the impact of cultural factors on group behavior using computational methods. Machine learning methods, and in particular classification, play a central role in such applications. In modeling cultural data, standard classifiers are expected to yield good performance under the assumption that different classification errors have uniform costs. However, this assumption is often violated in practice, and the performance of standard classifiers is severely hindered. To handle this problem, this paper empirically studies cost-sensitive learning in cultural modeling. We consider the cost factor when building classifiers, with the aim of minimizing total misclassification costs. We conduct experiments to investigate four typical cost-sensitive learning methods, combine them with six standard classifiers, and evaluate their performance under various conditions. Our empirical study verifies the effectiveness of cost-sensitive learning in cultural modeling. Based on the experimental results, we gain a thorough insight into the problem of non-uniform misclassification costs, as well as the selection of cost-sensitive methods, base classifiers, and method-classifier pairs for this domain. Furthermore, we propose an improved algorithm which outperforms the best method-classifier pair on the benchmark cultural datasets.

17.
Cost-sensitive learning with conditional Markov networks   Cited by: 1 (self-citations: 0, others: 1)
There has been a recent, growing interest in classification and link prediction in structured domains. Methods such as conditional random fields and relational Markov networks support flexible mechanisms for modeling correlations due to the link structure. In addition, in many structured domains, there is an interesting structure in the risk or cost function associated with different misclassifications. There is a rich tradition of cost-sensitive learning applied to unstructured (IID) data. Here we propose a general framework which can capture correlations in the link structure and handle structured cost functions. We present two new cost-sensitive structured classifiers based on maximum entropy principles. The first determines the cost-sensitive classification by minimizing the expected cost of misclassification. The second directly determines the cost-sensitive classification without going through a probability estimation step. We contrast these approaches with an approach which employs a standard 0/1-loss structured classifier to estimate class conditional probabilities followed by minimization of the expected cost of misclassification, and with a cost-sensitive IID classifier that does not utilize the correlations present in the link structure. We demonstrate the utility of our cost-sensitive structured classifiers with experiments on both synthetic and real-world data.
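The first classifier's decision rule is the standard minimum-expected-cost rule; in notation assumed here (with C(y, y') the cost of predicting y when the truth is y', and P the estimated conditional distribution over labelings), it reads:

```latex
y^{*} = \arg\min_{y} \; \mathbb{E}_{y' \sim P(\cdot \mid \mathbf{x})}\!\left[ C(y, y') \right]
      = \arg\min_{y} \sum_{y'} P(y' \mid \mathbf{x}) \, C(y, y')
```

The second classifier described above reaches its cost-sensitive decision directly, without this intermediate probability-estimation step.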

18.
"Missing is useful": missing values in cost-sensitive decision trees   总被引:3,自引:0,他引:3  
Many real-world data sets for machine learning and data mining contain missing values and much previous research regards it as a problem and attempts to impute missing values before training and testing. In this paper, we study this issue in cost-sensitive learning that considers both test costs and misclassification costs. If some attributes (tests) are too expensive in obtaining their values, it would be more cost-effective to miss out their values, similar to skipping expensive and risky tests (missing values) in patient diagnosis (classification). That is, "missing is useful" as missing values actually reduces the total cost of tests and misclassifications and, therefore, it is not meaningful to impute their values. We discuss and compare several strategies that utilize only known values and that "missing is useful" for cost reduction in cost-sensitive decision tree learning.  相似文献   
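A toy illustration of the trade-off the paper exploits, with hypothetical numbers; `worth_testing` and its arguments are invented for illustration, not taken from the paper:

```python
def worth_testing(test_cost, exp_miscls_cost_without, exp_miscls_cost_with):
    """Perform a test only if its cost is less than the expected
    reduction in misclassification cost it buys."""
    saving = exp_miscls_cost_without - exp_miscls_cost_with
    return test_cost < saving

# E.g. a 200-unit test that only cuts expected misclassification cost
# from 300 to 250 should be skipped: the value stays missing.
assert not worth_testing(200.0, 300.0, 250.0)
```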

19.
Automatically countering imbalance and its empirical relationship to cost   Cited by: 4 (self-citations: 1, others: 3)
Learning from imbalanced data sets presents a convoluted problem from both the modeling and cost standpoints. In particular, when a class is of great interest but occurs relatively rarely, as in cases of fraud, instances of disease, and regions of interest in large-scale simulations, there is a correspondingly high cost for the misclassification of rare events. Under such circumstances, the data set is often re-sampled to generate models with high minority class accuracy. However, the sampling methods face a common but important criticism: how can the proper amount and type of sampling be discovered automatically? To address this problem, we propose a wrapper paradigm that discovers the amount of re-sampling for a data set based on optimizing evaluation functions such as the f-measure, Area Under the ROC Curve (AUROC), cost, cost curves, and the cost-dependent f-measure. Our analysis of the wrapper is twofold. First, we report the interaction between different evaluation and wrapper optimization functions. Second, we present a set of results in a cost-sensitive environment, including scenarios of unknown or changing cost matrices. We also compared the performance of the wrapper approach with cost-sensitive learning methods (MetaCost and the Cost-Sensitive Classifiers) and found the wrapper to outperform the cost-sensitive classifiers in a cost-sensitive environment. Lastly, we obtained the lowest cost per test example compared to any result we are aware of for the KDD-99 Cup intrusion detection data set.
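A condensed sketch of the wrapper idea: search over candidate resampling amounts and keep the one that minimizes a chosen evaluation function (here, total misclassification cost on a held-out split). The candidate rates, classifier, and names are illustrative assumptions:

```python
import numpy as np
from sklearn.utils import resample
from sklearn.tree import DecisionTreeClassifier

def wrapper_resampling(X_tr, y_tr, X_val, y_val, cost_fn=10.0, cost_fp=1.0):
    best_rate, best_cost = None, float("inf")
    for rate in [1, 2, 4, 8, 16]:           # candidate oversampling rates
        X_min, X_maj = X_tr[y_tr == 1], X_tr[y_tr == 0]
        X_up = resample(X_min, replace=True,
                        n_samples=rate * len(X_min), random_state=0)
        Xb = np.vstack([X_maj, X_up])
        yb = np.hstack([np.zeros(len(X_maj)), np.ones(len(X_up))])
        pred = DecisionTreeClassifier(random_state=0).fit(Xb, yb).predict(X_val)
        # Evaluation function: total misclassification cost on validation.
        cost = cost_fn * ((pred == 0) & (y_val == 1)).sum() \
             + cost_fp * ((pred == 1) & (y_val == 0)).sum()
        if cost < best_cost:
            best_rate, best_cost = rate, cost
    return best_rate, best_cost
```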

20.
In real-world classification problems, different types of misclassification errors often have asymmetric costs, thus demanding cost-sensitive learning methods that attempt to minimize average misclassification cost rather than plain error rate. Instance weighting and post hoc threshold adjusting are two major approaches to cost-sensitive classifier learning. This paper compares the effects of these two approaches on several standard, off-the-shelf classification methods. The comparison indicates that the two approaches lead to similar results for some classification methods, such as Naïve Bayes, logistic regression, and backpropagation neural networks, but very different results for other methods, such as decision tree, decision table, and decision rule learners. The findings from this research have important implications for the selection of the cost-sensitive classifier learning approach as well as for the interpretation of a recently published finding about the relative performance of Naïve Bayes and decision trees.
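For the binary case, the post hoc threshold-adjusting approach has a well-known closed form (assuming zero cost for correct classifications): with false-positive cost c_fp and false-negative cost c_fn, the cost-minimizing threshold on P(y=1|x) is c_fp / (c_fp + c_fn) instead of 0.5. Instance weighting would instead reweight the training data and keep the 0.5 threshold. A small sketch:

```python
def cost_threshold(c_fp, c_fn):
    """Predict positive whenever P(y=1|x) exceeds this threshold:
    expected cost of predicting positive, (1 - p) * c_fp, is then
    below the cost of predicting negative, p * c_fn."""
    return c_fp / (c_fp + c_fn)

# E.g. if missing a positive costs 9x a false alarm, predict positive
# whenever P(y=1|x) > 0.1:
assert cost_threshold(1.0, 9.0) == 0.1
```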

