Similar Documents
18 similar documents found.
1.
An Ensemble Learning Algorithm for Multi-label Cost-sensitive Classification   (Cited: 12; self-citations: 2, others: 10)
付忠良 《自动化学报》2014,40(6):1075-1085
Although multi-label classification problems can be converted into ordinary multi-class classification problems, multi-label cost-sensitive classification problems are hard to convert into multi-class cost-sensitive ones. By analyzing the difficulties encountered when extending multi-class cost-sensitive learning algorithms to the multi-label setting, an ensemble learning algorithm for multi-label cost-sensitive classification is proposed. Its average misclassification cost is the sum of the cost of falsely detected labels and the cost of missed labels, and its procedure resembles Adaptive Boosting (AdaBoost): multiple weak classifiers are learned automatically and combined into a strong classifier whose average misclassification cost decreases as weak classifiers are added. The differences between the proposed algorithm and multi-class cost-sensitive AdaBoost are analyzed in detail, including the basis for the output labels and the meaning of the misclassification costs. Unlike ordinary multi-class cost-sensitive classification, the misclassification costs in the multi-label setting must satisfy certain constraints, which are analyzed and stated explicitly. Simplifying the algorithm yields a multi-label AdaBoost algorithm and a multi-class cost-sensitive AdaBoost algorithm. Both theoretical analysis and experiments show that the proposed algorithm is effective and minimizes the average misclassification cost. In particular, for multi-class problems in which misclassification costs differ greatly across classes, the algorithm clearly outperforms existing multi-class cost-sensitive AdaBoost algorithms.
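The abstract's average misclassification cost, the sum of a false-detection cost for wrongly predicted labels and a miss cost for absent ones, can be sketched in a few lines. The per-label costs `c_fp` and `c_fn` below are illustrative placeholders, not values from the paper:

```python
def multilabel_cost(true_labels, pred_labels, c_fp=1.0, c_fn=2.0):
    """Misclassification cost of one multi-label prediction, taken (as in
    the abstract) as the cost of falsely detected labels plus the cost of
    missed labels. c_fp and c_fn are hypothetical per-label costs."""
    true_set, pred_set = set(true_labels), set(pred_labels)
    false_alarms = pred_set - true_set   # labels predicted but not present
    misses = true_set - pred_set         # labels present but not predicted
    return c_fp * len(false_alarms) + c_fn * len(misses)

print(multilabel_cost({"cat", "dog"}, {"dog", "bird"}))  # 1 false alarm + 1 miss -> 3.0
```

Averaging this quantity over a dataset gives the average misclassification cost that the boosting procedure drives down as weak classifiers are added.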

2.
郭冰楠  吴广潮 《计算机应用》2019,39(10):2888-2892
In datasets of online loan users, the numbers of successful and failed borrowers are severely imbalanced. Traditional machine learning algorithms focus on overall classification accuracy when handling such problems, which leads to low prediction precision for successful borrowers. To address this, class distribution is incorporated into the computation of the cost-sensitive decision tree's sensitivity function, weakening the influence of positive and negative sample counts on the misclassification cost and yielding an improved cost-sensitive decision tree. Using this tree as the base classifier and classification accuracy as the criterion, better-performing base classifiers are selected and combined with the classifier generated in the final stage to obtain the final ensemble. Experimental results show that, compared with algorithms commonly used for this problem (such as MetaCost, cost-sensitive decision trees, and AdaCost), the improved cost-sensitive decision tree reduces the overall misclassification rate on online loan user classification and generalizes better.

3.
A Cost-sensitive AdaBoost Algorithm for Multi-class Classification   (Cited: 8; self-citations: 2, others: 6)
付忠良 《自动化学报》2011,37(8):973-983
To address the cost-merging problem that arises when converting multi-class cost-sensitive classification into binary cost-sensitive classification, a cost-sensitive AdaBoost algorithm that applies directly to multi-class problems is constructed. The algorithm has a procedure and error estimates similar to those of continuous (real-valued) AdaBoost. When all costs are equal, it reduces to a new multi-class continuous AdaBoost algorithm that guarantees the training error decreases as classifiers are added, without directly requiring the individual classifiers to be mutually independent; the independence condition can instead be ensured by the algorithm's own rules, whereas the derivation of existing multi-class continuous AdaBoost requires mutual independence. Experimental results show that the algorithm genuinely biases classification toward classes with lower misclassification cost. In particular, when the costs of misclassifying each class into the others are unbalanced but the average costs are equal, existing multi-class cost-sensitive learning algorithms fail, while the new method still achieves the minimum misclassification cost. This line of work provides a new way to study ensemble learning algorithms and yields an easy-to-implement AdaBoost algorithm for multi-label classification that approximately minimizes the classification error.
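A common first step in making AdaBoost cost-sensitive (shown here only as a rough illustration; the paper derives its own update rule, which this does not reproduce) is to give samples of costly classes proportionally larger initial weights:

```python
import numpy as np

def cost_init_weights(y, class_cost):
    """Cost-proportional initial boosting weights: samples whose class is
    expensive to misclassify start heavier, biasing every subsequent weak
    learner toward getting them right. class_cost values are hypothetical."""
    w = np.array([class_cost[c] for c in y], dtype=float)
    return w / w.sum()

y = np.array([0, 0, 0, 1])
w = cost_init_weights(y, {0: 1.0, 1: 3.0})
print(w)  # the single class-1 sample carries half the total weight
```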

4.
Cost-sensitive Probabilistic Neural Network and Its Application in Fault Diagnosis   (Cited: 3; self-citations: 1, others: 2)
Traditional classification algorithms mostly aim to minimize the misclassification rate, ignoring both the differences among error types and the imbalance of the dataset. To address this, a cost-sensitive probabilistic neural network algorithm is proposed: a cost-sensitive mechanism is introduced into the probabilistic neural network, the expected cost replaces the misclassification rate as the objective, and new samples are classified by the Bayes decision rule of minimum expected cost. The algorithm is validated on industrial field data and on the German Credit dataset. Experimental results show that it achieves high fault recognition rates, strong generalization, and short modeling time.
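The Bayes decision rule of minimum expected cost that the abstract builds on is straightforward to state in code; the 3-class cost matrix below is hypothetical, with class 2 standing in for an expensive-to-miss fault:

```python
import numpy as np

# Hypothetical cost matrix: COST[i][j] is the cost of predicting class j
# when the true class is i (the diagonal is zero: correct answers are free).
COST = np.array([
    [0.0,  1.0,  5.0],
    [1.0,  0.0,  5.0],
    [20.0, 20.0, 0.0],   # missing the fault class (2) is very expensive
])

def min_expected_cost_class(posterior, cost=COST):
    """Minimum-expected-cost Bayes rule: the expected cost of predicting j
    is sum_i P(i|x) * cost[i][j]; return the class that minimizes it."""
    return int(np.argmin(posterior @ cost))

# The posterior favors class 0, yet the cheapest decision is the fault class.
p = np.array([0.45, 0.35, 0.20])
print(min_expected_cost_class(p))  # -> 2
```

With a uniform cost matrix the rule reduces to picking the most probable class, which is exactly the misclassification-rate objective the abstract replaces.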

5.
The relevance vector machine (RVM) is a sparse model formulated in the sparse Bayesian framework. Thanks to its strong sparsity and generalization ability, it has been widely studied and applied in machine learning in recent years. However, like traditional decision trees, neural networks, and support vector machines, the RVM is not cost-sensitive and cannot be used directly for cost-sensitive learning. To address the cost of misclassification in supervised learning, a cost-sensitive relevance vector classification (CS-RVC) algorithm is proposed. Building on the RVM, it assigns a different misclassification cost to each class of samples, placing more emphasis on the classification accuracy of high-cost samples so that the overall misclassification cost is reduced, thereby achieving cost-sensitive mining. Experimental results show that the algorithm retains good sparsity and effectively solves cost-sensitive classification problems.

6.
Most research on imbalanced datasets focuses either purely on resampling the dataset or purely on cost-sensitive learning. Given that imbalanced class distributions and unequal misclassification costs often occur together, this paper proposes a cost-sensitive learning algorithm based on hybrid resampling that aims to minimize the misclassification cost. The algorithm combines the two types of solutions: it first uses sample-space reconstruction (resampling) to roughly balance the two classes of the original dataset, and then applies a cost-sensitive learning algorithm for classification. This improves the classification accuracy of the minority class while effectively reducing the total misclassification cost. Experimental results verify that the algorithm outperforms traditional algorithms on imbalanced-class problems.
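The resampling half of the scheme can be sketched as follows; this minimal version only oversamples the minority class with replacement, whereas the paper's sample-space reconstruction may combine over- and under-sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_resample(X, y, minority=1):
    """Roughly balance a binary dataset by drawing extra minority samples
    with replacement until both classes reach the majority-class count."""
    idx_min = np.flatnonzero(y == minority)
    idx_maj = np.flatnonzero(y != minority)
    extra = rng.choice(idx_min, size=len(idx_maj) - len(idx_min), replace=True)
    idx = np.concatenate([idx_maj, idx_min, extra])
    return X[idx], y[idx]

X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)            # 8-to-2 imbalance
Xb, yb = hybrid_resample(X, y)
print(np.bincount(yb))                      # -> [8 8]
```

A cost-sensitive learner is then trained on `(Xb, yb)`, matching the abstract's two-stage design.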

7.
Since the classifier that minimizes the classification error does not necessarily minimize the misclassification cost, a cost-sensitive criterion is proposed: a dual criterion of minimizing both the misclassification cost and the misclassification rate. Bayesian network structure learning under this criterion is studied: a candidate structure must minimize the misclassification cost while achieving a misclassification rate no worse than the current best model. Cost-sensitive Bayesian networks are learned on UCI datasets and compared with the corresponding generative and discriminative Bayesian networks; the results demonstrate the effectiveness of cost-sensitive Bayesian networks.

8.
Standard classifier design mostly minimizes the overall error rate. In domains such as intrusion detection and medical diagnosis, misclassifications of different classes usually incur unequal losses. This paper builds models with support vector machines and, following the idea of ensemble algorithms, introduces an ensemble cost-sensitive support vector machine to compensate for the uncontrollable classification accuracy of traditional cost-sensitive SVMs. A more practical comparison scheme is introduced for model selection, so that models reducing the total misclassification cost can be chosen. Taking the different misclassification costs of each class into account, suitable SVM models are built and successfully applied to personal credit classification.

9.
周尔昊  高尚 《计算机与数字工程》2021,49(9):1763-1766,1883
Classifier ensembles boost weak learners into strong learners, improving classification accuracy. On imbalanced data, however, although ensembles outperform single classifiers, they still fall short of expectations. A cost-sensitive rotation forest algorithm (CROF) is therefore proposed: rotation forest is used for data preprocessing, and a cost function is introduced into the construction of the base classifiers, yielding a new ensemble classifier for imbalanced data. Experiments show that CROF effectively improves the classification of the minority class and handles class imbalance well.

10.
A Survey of Ensemble Classification Algorithms for Imbalanced Data   (Cited: 1; self-citations: 0, others: 1)
Ensemble learning is a machine learning technique in which multiple base classifiers jointly make decisions; by training diverse base classifiers on different sample sets, the resulting ensemble can effectively improve learning performance. During base classifier training, imbalanced data can be handled via cost-sensitive techniques and data sampling. Because of these advantages, ensemble classification algorithms for imbalanced data have been widely studied. This paper analyzes the state of the art in detail, compares the differences, strengths, and weaknesses of existing algorithms, and identifies open problems for further research.

11.
Cost-sensitive learning is a crucial problem in machine learning research. Traditional classification problems assume that misclassifying any category has the same cost, and the goal of the learning algorithm is to minimize the expected error rate. In cost-sensitive learning, the costs of misclassifying samples of different categories are not the same, and the goal is to minimize the total misclassification cost. Cost-sensitive learning meets the actual demands of real-life classification problems such as medical diagnosis and financial forecasting. Owing to its fast learning speed and strong performance, the extreme learning machine (ELM) has become one of the best classification algorithms, and voting based on extreme learning machine (V-ELM) makes classification results more accurate and stable. However, V-ELM and other versions of ELM all assume that every misclassification has the same cost, so they cannot solve cost-sensitive problems well. To overcome this drawback, an algorithm called cost-sensitive ELM (CS-ELM) is proposed by introducing the misclassification cost of each sample into V-ELM. Experimental results on gene expression data show that CS-ELM is effective in reducing misclassification cost.
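As a sketch of the idea behind CS-ELM (not the paper's exact construction), a voting ensemble such as V-ELM can be made cost-sensitive by treating the normalized vote counts as a rough posterior and deciding by minimum expected cost rather than plain majority:

```python
import numpy as np

def cost_sensitive_vote(votes, cost):
    """Cost-sensitive voting: turn base-learner votes into a crude class
    posterior and return the class with minimum expected cost."""
    counts = np.bincount(votes, minlength=cost.shape[0]).astype(float)
    posterior = counts / counts.sum()
    return int(np.argmin(posterior @ cost))

# Hypothetical costs: misclassifying a true class-1 sample is 5x worse.
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])
votes = np.array([0, 0, 0, 1, 1])    # five base learners; plain majority says 0
print(cost_sensitive_vote(votes, cost))  # -> 1
```

With uniform costs the same function reduces to ordinary majority voting.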

12.
Decision tree induction has been widely studied and applied. In safety applications, such as determining whether a chemical process is safe or whether a person has a medical condition, the cost of misclassification in one of the classes is significantly higher than in the other class. Several authors have tackled this problem by developing cost-sensitive decision tree learning algorithms or by suggesting ways of changing the distribution of training examples to bias the decision tree learning process so as to take account of costs. A prerequisite for applying such algorithms is the availability of misclassification costs. Although this may be possible for some applications, obtaining reasonable estimates of misclassification costs is not easy in the area of safety.
This paper presents a new algorithm for applications where the cost of misclassification cannot be quantified, although the cost of misclassification in one class is known to be significantly higher than in another. The algorithm utilizes linear discriminant analysis to identify oblique relationships between continuous attributes and then carries out an appropriate modification to ensure that the resulting tree errs on the side of safety. The algorithm is evaluated against one of the best-known cost-sensitive algorithms (ICET), a well-known oblique decision tree algorithm (OC1), and an algorithm that utilizes robust linear programming.

13.
Feature selection is an important preprocessing step in machine learning and data mining, and feature selection for class-imbalanced data is a hot research topic in machine learning and pattern recognition. Most traditional feature-selection-based classification algorithms pursue high accuracy and assume that there are no misclassification costs or that all costs are equal. In real applications, however, different misclassifications often incur different costs. To obtain the feature subset with the minimal misclassification cost, this paper proposes a cost-sensitive feature selection algorithm based on sample neighborhood preservation; its core idea is to introduce sample neighborhoods into the existing cost-sensitive feature selection framework. Experimental results on eight real datasets demonstrate the superiority of the algorithm.

14.
Cost-sensitive learning algorithms are typically designed to minimize the total cost when multiple costs are taken into account. Like other learning algorithms, cost-sensitive learning algorithms face a significant challenge: over-fitting. Specifically, they can produce good results on training data but normally do not produce an optimal model when applied to unseen data in real-world applications, a phenomenon called data over-fitting. This paper addresses over-fitting by applying three simple and efficient strategies (feature selection, smoothing, and threshold pruning) to the TCSDT (test cost-sensitive decision tree) method. Feature selection is used to pre-process the dataset before applying the TCSDT algorithm; smoothing and threshold pruning are applied within TCSDT before calculating the class probability estimate for each decision tree leaf. To evaluate these approaches, we conduct extensive experiments on selected UCI datasets across different cost ratios, and on a real-world dataset, KDD-98, with real misclassification costs. The experimental results show that our algorithms outperform both the original TCSDT and other competing algorithms in reducing data over-fitting.
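Of the three strategies, smoothing is the simplest to illustrate. The Laplace correction below is a standard way to keep leaf probability estimates away from 0 and 1 (the paper may use a different smoothing formula):

```python
def laplace_estimate(n_class, n_leaf, n_classes=2):
    """Laplace-smoothed class probability at a decision tree leaf:
    (k + 1) / (n + C) instead of the raw frequency k / n, so small,
    pure leaves no longer claim certainty."""
    return (n_class + 1) / (n_leaf + n_classes)

# A pure leaf with only 3 examples: 0.8 rather than an overconfident 1.0.
print(laplace_estimate(3, 3))  # -> 0.8
```

Less extreme leaf probabilities matter here because the expected-cost decision at each leaf is driven directly by those estimates.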

15.
Cost-sensitive attribute selection aims to obtain an attribute subset with the minimal total cost by trading off test costs and misclassification costs. Most existing cost-sensitive attribute selection methods consider only fixed misclassification costs and thus handle problems such as imbalanced class distributions poorly; on large datasets, poor efficiency is another major issue. To address these problems, a new dynamic misclassification-cost mechanism is designed with the goal of minimizing the total cost. Following a divide-and-conquer strategy, each dataset is adaptively split by columns according to its size. The minimal-cost attribute selection problem is redefined under dynamic misclassification costs, and a divide-and-conquer cost-sensitive attribute selection algorithm under dynamic misclassification costs is proposed. Experiments show that the algorithm improves efficiency while achieving the optimal misclassification cost, ensuring the minimal total cost of the selected attribute subset.
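The objective being minimized is simply test cost plus misclassification cost; a sketch under assumed costs (the attribute names and numbers below are hypothetical):

```python
def total_cost(subset, test_costs, expected_misclassification_cost):
    """Total cost of an attribute subset: the sum of per-attribute test
    costs plus the expected misclassification cost of a classifier built
    on that subset (supplied here as a number; the paper estimates it
    from data, with a dynamic rather than fixed misclassification cost)."""
    return sum(test_costs[a] for a in subset) + expected_misclassification_cost

test_costs = {"blood_test": 5.0, "xray": 20.0, "ecg": 10.0}  # hypothetical
print(total_cost({"blood_test", "ecg"}, test_costs, 12.5))   # -> 27.5
```

Attribute selection then searches for the subset minimizing this quantity, trading cheaper tests against a higher expected misclassification cost.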

16.
The last decade has seen increased attention paid to the development of cost-sensitive learning algorithms that aim to minimize misclassification costs while still maintaining accuracy. Most of this attention has focused on cost-sensitive decision tree learning, whereas relatively little has been paid to assessing whether better cost-sensitive classifiers can be developed based on Bayesian networks. Hence, this paper presents EBNO, an algorithm that utilizes genetic algorithms to learn cost-sensitive Bayesian networks, where genes represent the links between the nodes in a Bayesian network and the expected cost is used as the fitness function. An empirical comparison of the new algorithm has been carried out with respect to (a) an algorithm that induces cost-insensitive Bayesian networks, to provide a baseline, (b) ICET, a well-known algorithm that uses genetic algorithms to induce cost-sensitive decision trees, (c) use of MetaCost to induce cost-sensitive Bayesian networks via bagging, (d) use of AdaBoost to induce cost-sensitive Bayesian networks, and (e) use of XGBoost, a gradient boosting algorithm, to induce cost-sensitive decision trees. An empirical evaluation on 28 datasets reveals that EBNO performs well in comparison with the algorithms that produce single interpretable models and performs just as well as algorithms that use bagging and boosting methods.

17.
Cultural modeling aims at developing behavioral models of groups and analyzing the impact of cultural factors on group behavior using computational methods. Machine learning methods, and in particular classification, play a central role in such applications. In modeling cultural data, standard classifiers are expected to yield good performance under the assumption that different classification errors have uniform costs. However, this assumption is often violated in practice, which severely hinders the performance of standard classifiers. To handle this problem, this paper empirically studies cost-sensitive learning in cultural modeling. We take the cost factor into account when building classifiers, with the aim of minimizing total misclassification costs. We conduct experiments to investigate four typical cost-sensitive learning methods, combine them with six standard classifiers, and evaluate their performance under various conditions. Our empirical study verifies the effectiveness of cost-sensitive learning in cultural modeling. Based on the experimental results, we gain thorough insight into the problem of non-uniform misclassification costs, as well as into the selection of cost-sensitive methods, base classifiers, and method-classifier pairs for this domain. Furthermore, we propose an improved algorithm that outperforms the best method-classifier pair on the benchmark cultural datasets.
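One classic wrapper method of the kind studied here is MetaCost, which relabels the training data to minimum-expected-cost classes (using bagged probability estimates) and then retrains any standard classifier. A simplified sketch with a hypothetical cost matrix:

```python
import numpy as np

def metacost_relabel(posteriors, cost):
    """MetaCost-style relabeling (simplified): give every training sample
    the label of its minimum-expected-cost class, then retrain any
    standard classifier on the relabeled data."""
    return np.argmin(posteriors @ cost, axis=1)

cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])           # missing class 1 is 5x worse
posteriors = np.array([[0.9, 0.1],     # confident class 0: label kept
                       [0.7, 0.3],     # borderline: flipped to class 1
                       [0.2, 0.8]])
print(metacost_relabel(posteriors, cost))  # -> [0 1 1]
```

Because the wrapper only changes the labels, it pairs with any of the six base classifiers the abstract mentions, which is what makes method-classifier comparisons of this kind possible.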

18.
Test strategies for cost-sensitive decision trees   (Cited: 2; self-citations: 0, others: 2)
In medical diagnosis, doctors must often determine what medical tests (e.g., X-ray and blood tests) should be ordered for a patient to minimize the total cost of medical tests and misdiagnosis. In this paper, we design cost-sensitive machine learning algorithms to model this learning and diagnosis process. Medical tests are like attributes in machine learning whose values may be obtained at a cost (attribute cost), and misdiagnoses are like misclassifications which may also incur a cost (misclassification cost). We first propose a lazy decision tree learning algorithm that minimizes the sum of attribute costs and misclassification costs. Then, we design several novel "test strategies" that can request to obtain values of unknown attributes at a cost (similar to doctors' ordering of medical tests at a cost) in order to minimize the total cost for test examples (new patients). These test strategies correspond to different situations in real-world diagnoses. We empirically evaluate these test strategies, and show that they are effective and outperform previous methods. Our results can be readily applied to real-world diagnosis tasks. A case study on heart disease is given throughout the paper.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号