Similar Literature
1.
Learning cost-sensitive models from datasets with few labeled and plentiful unlabeled examples is a practical and challenging problem, because labeled data are often difficult, time-consuming and/or expensive to obtain. To solve this problem, this paper proposes two strategies, based on Expectation Maximization (EM), for learning cost-sensitive classifiers from training datasets containing both labeled and unlabeled data. The first method, Direct-EM, uses EM to build a semi-supervised classifier and then directly computes the optimal class label for each test example from the class probabilities produced by the learned model. The second method, CS-EM, modifies EM by incorporating misclassification cost into the probability estimation process. Extensive experiments evaluating effectiveness show that, when only a small number of labeled training examples is available, CS-EM outperforms the competing methods on the majority of the selected UCI datasets across different cost ratios, especially when the cost ratio is high.
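The decision step shared by both methods can be sketched as a minimum-expected-cost rule: given class probabilities from the (semi-supervised) model and a cost matrix, predict the label with the lowest expected cost. The function name and cost values below are illustrative, not taken from the paper.

```python
def min_cost_label(class_probs, cost):
    """class_probs: dict label -> P(label|x);
    cost[i][j]: cost of predicting i when the true label is j."""
    best_label, best_cost = None, float("inf")
    for i in cost:  # each candidate prediction
        expected = sum(class_probs[j] * cost[i][j] for j in class_probs)
        if expected < best_cost:
            best_label, best_cost = i, expected
    return best_label

# With a 10:1 cost ratio, a 30% positive probability already tips the
# decision toward the expensive-to-miss class:
probs = {"pos": 0.3, "neg": 0.7}
cost = {"pos": {"pos": 0.0, "neg": 1.0},    # false positive costs 1
        "neg": {"pos": 10.0, "neg": 0.0}}   # missed positive costs 10
print(min_cost_label(probs, cost))  # -> pos
```

With equal costs the same rule degenerates to picking the most probable class, which is why cost-sensitivity only changes decisions near the boundary.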

2.
Class imbalance is among the most persistent complications confronting traditional supervised learning in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number in the other. This situation is a handicap when trying to identify the minority class, as learning algorithms are not usually adapted to such characteristics. Approaches to imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions, which incorporate both the data-level and algorithm-level approaches, assume higher misclassification costs for samples in the minority class and seek to minimize high-cost errors. Nevertheless, there is no fully exhaustive comparison of these models that could determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data-level proposals against algorithm-level proposals, focusing on cost-sensitive models, and against a hybrid procedure that combines the two approaches. We show, by means of a statistical comparative analysis, that no single approach can be singled out over the rest. This leads to a discussion of the intrinsic data characteristics of the imbalanced classification problem, mainly class overlap and dataset shift, which can point to new paths for improving current models.

3.
Cost-Sensitive AdaBoost Algorithms for Multi-class Classification   Cited by 8 (self-citations 2, others 6)
付忠良, 《自动化学报》 (Acta Automatica Sinica), 2011, 37(8): 973-983
To address the cost-aggregation problem that arises when a multi-class cost-sensitive classification problem is converted into binary cost-sensitive sub-problems, this paper constructs cost-sensitive AdaBoost algorithms that apply directly to multi-class problems. The algorithms have a workflow and error estimate similar to those of real (continuous) AdaBoost. When all costs are equal, the method reduces to a new multi-class real AdaBoost algorithm that guarantees the training error rate decreases as classifiers are added, without directly requiring the classifiers to be mutually independent; the independence condition can instead be enforced through the algorithm's rules, whereas the derivation of existing multi-class real AdaBoost algorithms requires mutual independence of the classifiers. Experimental data show that the algorithm genuinely biases predictions toward classes with lower misclassification cost. In particular, when the per-class costs of being misclassified into other classes are unbalanced but the average costs are equal, existing multi-class cost-sensitive learning algorithms fail, while the new method still achieves the minimum misclassification cost. The approach offers a new line of attack for studying ensemble learning and yields an easy-to-implement AdaBoost algorithm for multi-label classification that approximately minimizes the classification error rate.
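The general flavour of cost-sensitive boosting reweighting can be illustrated with an AdaC2-style update for binary labels in {-1, +1} (a well-known variant, not the multi-class rule derived in this paper): each example's weight is scaled by its cost before the usual exponential factor, so costly mistakes gain weight fastest across rounds.

```python
import math

def cost_sensitive_reweight(weights, y_true, y_pred, costs, alpha):
    """One AdaC2-style boosting round; labels and predictions in {-1, +1}."""
    # Scale each weight by its cost, then apply the exponential factor;
    # misclassified examples (yt * yp == -1) are up-weighted.
    new_w = [w * c * math.exp(-alpha * yt * yp)
             for w, c, yt, yp in zip(weights, costs, y_true, y_pred)]
    z = sum(new_w)               # renormalize to a distribution
    return [w / z for w in new_w]

# A costly example that was misclassified ends up dominating the distribution:
w = cost_sensitive_reweight([0.25] * 4, [1, 1, -1, -1], [1, -1, -1, -1],
                            [2.0, 2.0, 1.0, 1.0], 0.5)
```

Setting all costs to 1 recovers the standard AdaBoost update, mirroring the paper's observation that the cost-sensitive algorithm degenerates to plain multi-class AdaBoost under equal costs.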

4.
Real-life datasets are often imbalanced: significantly more training samples are available for some classes than for others, so the conventional aim of maximizing overall classification accuracy is not appropriate when dealing with such problems. Various approaches introduced in the literature to deal with imbalanced datasets are typically based on oversampling, undersampling or cost-sensitive classification. In this paper, we introduce an effective ensemble of cost-sensitive decision trees for imbalanced classification. Base classifiers are constructed according to a given cost matrix but are trained on random feature subspaces to ensure sufficient diversity among the ensemble members. We employ an evolutionary algorithm for simultaneous classifier selection and assignment of committee member weights for the fusion process. The proposed algorithm is evaluated on a variety of benchmark datasets and is confirmed to improve recognition of the minority class, to be capable of outperforming other state-of-the-art algorithms, and hence to represent a useful and effective approach for dealing with imbalanced datasets.
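The random-feature-subspace construction and weighted fusion can be sketched as follows; the cost-sensitive base learner and the evolutionary search for committee weights are abstracted away, and the function names are illustrative.

```python
import random

def random_subspaces(n_features, n_members, subspace_size, seed=0):
    """Draw one random feature subset per committee member."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_members)]

def weighted_vote(predictions, weights):
    """Fuse member predictions by a weighted majority vote."""
    score = {}
    for p, w in zip(predictions, weights):
        score[p] = score.get(p, 0.0) + w
    return max(score, key=score.get)

# Three members, each trained (elsewhere) on its own feature subset;
# the evolutionary algorithm would tune the vote weights:
subspaces = random_subspaces(n_features=10, n_members=3, subspace_size=4)
label = weighted_vote(["a", "b", "a"], [1.0, 3.0, 0.5])  # -> "b"
```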

5.
郑燕, 王杨, 郝青峰, 甘振韬, 《计算机应用》 (Journal of Computer Applications), 2014, 34(5): 1336-1340
Traditional hypernetwork models are strongly biased when classifying imbalanced data: the recognition rate for the positive class is far higher than for the negative class. To address this, a cost-sensitive hypernetwork Boosting ensemble algorithm is proposed. First, cost-sensitive learning is introduced into the hypernetwork model, yielding a cost-sensitive hypernetwork; then, so that the algorithm can adapt to the misclassification cost of the positive class automatically, Boosting is used to ensemble the cost-sensitive hypernetworks. The cost-sensitive hypernetwork corrects the traditional hypernetwork's excessive bias toward the positive class on imbalanced data and improves classification accuracy on the negative class. Experimental results show that the cost-sensitive hypernetwork Boosting ensemble algorithm is well suited to imbalanced data classification.

6.
An Improved AdaBoost Algorithm for Imbalanced Data Classification   Cited by 3 (self-citations 1, others 3)
Class-imbalanced classification problems abound in the real world. Traditional machine learning algorithms such as AdaBoost focus on the overall performance of the classifier and give no special attention to the minority class, so research on class-imbalance learning algorithms is an important direction in machine learning. AsymBoost, an improved variant of AdaBoost for imbalanced learning, sacrifices recognition accuracy on majority-class samples to improve classification performance on the minority class, but it can still suffer from over-fitting caused by excessively large sample weights. This paper therefore proposes a new AdaBoost variant that processes the weights and labels of hard-to-classify majority-class samples so that the classifier achieves both good precision and good recall. Experimental results show that the method effectively improves classification performance on imbalanced datasets.
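The weight-control idea can be sketched as a simple cap-and-renormalize step on the boosting weight distribution; this is a minimal illustration only, and the paper's exact treatment of the weights and labels of hard majority-class samples is not reproduced here.

```python
def cap_weights(weights, cap):
    """Clip each boosting weight at `cap`, then renormalize to a distribution,
    so no single hard example can dominate the next training round."""
    capped = [min(w, cap) for w in weights]
    z = sum(capped)
    return [w / z for w in capped]

# One runaway weight (0.7) is pulled back to the cap before renormalization:
w = cap_weights([0.7, 0.1, 0.1, 0.1], cap=0.25)
```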

7.
Cost-Sensitive Probabilistic Neural Network and Its Application to Fault Diagnosis   Cited by 3 (self-citations 1, others 2)
Traditional classification algorithms mostly aim to minimize the misclassification rate, ignoring both the differences among types of misclassification and the imbalance of the dataset. To address this, a cost-sensitive probabilistic neural network (PNN) algorithm is proposed. The algorithm introduces a cost-sensitive mechanism into the PNN, replaces the misclassification rate with the expected cost as the objective, and predicts the class of a new sample using the Bayes decision rule of minimum expected cost. The algorithm is validated on industrial field data and on the German Credit dataset. Experimental results show that it achieves a high fault recognition rate, strong generalization ability, and short modeling time.

8.
We introduce a multi-class generalization of AdaBoost with binary weak learners. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of punishments for failures and rewards for successes. Moreover, the stage-wise optimization of this model introduces an asymmetric boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the boosting algorithm takes class imbalances into account when building the ensemble. The experiments performed compare this new approach favorably to AdaBoost.MH, GentleBoost and SAMME.

9.
More than two decades ago, the imbalanced data problem emerged as one of the most important and challenging problems: missing information about the minority class leads to a significant degradation in classifier performance. Moreover, comprehensive research has shown that certain factors increase the problem's complexity; these additional difficulties are closely related to the data distribution over the decision classes. Despite the numerous methods that have been proposed, the flexibility of existing solutions needs further improvement. We therefore offer a novel rough–granular computing approach (RGA, for short) to address these issues. New synthetic examples are generated only in specific regions of the feature space; this selective oversampling reduces the number of misclassified minority-class examples. A strategy suited to a given problem is obtained by forming information granules and analyzing their degrees of inclusion in the minority class. Potential inconsistencies are eliminated by an editing phase based on a similarity relation. The most significant algorithm parameters, including the number of nearest neighbours, the complexity threshold, the distance threshold and the cardinality redundancy, are tuned in an iterative process, with each data model built using different parameter values. Results of an experimental study on datasets from the UCI repository show that the proposed method of inducing example neighbourhoods is crucial to the proper creation of synthetic positive instances, and that the algorithm outperforms related methods on most of the tested datasets. A set of valid parameters for the RGA technique is also established.
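At its core, selective oversampling generates a synthetic example by interpolating between a minority example and one of its minority neighbours, as SMOTE does; a minimal sketch of that interpolation, with the rough–granular region selection and the editing phase omitted:

```python
import random

def synthesize(x, neighbour, rng):
    """Return a synthetic point on the segment between x and a minority
    neighbour, at a random position along the segment."""
    gap = rng.random()  # in [0, 1): how far toward the neighbour to move
    return [a + gap * (b - a) for a, b in zip(x, neighbour)]

rng = random.Random(42)
new_point = synthesize([0.0, 0.0], [1.0, 2.0], rng)  # lies on the segment
```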

10.
This work combines pruning with under-sampling to select suitable data, improving minority-class accuracy, and studies the effect of under-sampling in the imbalanced-dataset setting. Results show that, compared with direct under-sampling, the proposed algorithm not only improves the accuracy value but, more importantly, greatly improves the g-means value, and the benefit is larger on datasets with a higher imbalance ratio.
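The g-means value emphasized here is the geometric mean of the two per-class recalls, which collapses to zero whenever either class is ignored entirely; a minimal implementation from confusion-matrix counts:

```python
import math

def g_mean(tp, fn, tn, fp):
    """Geometric mean of minority-class and majority-class recall."""
    sensitivity = tp / (tp + fn)   # recall on the minority (positive) class
    specificity = tn / (tn + fp)   # recall on the majority (negative) class
    return math.sqrt(sensitivity * specificity)

# A classifier that predicts the majority class for everything scores 0,
# even though its plain accuracy on a 90:10 dataset would be 0.9:
print(g_mean(tp=0, fn=10, tn=90, fp=0))  # -> 0.0
```

This is why g-means is preferred over accuracy in the comparison above: accuracy rewards majority-only predictors, g-means does not.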

11.
Recently, the problem of imbalanced data classification has drawn significant interest from academia, industry and government funding agencies. The fundamental issue is that imbalanced data significantly degrade the performance of most standard learning algorithms, which assume or expect a balanced class distribution or equal misclassification costs. Boosting is a meta-technique applicable to most learning algorithms. This paper reviews boosting methods for imbalanced data classification, denoted IDBoosting (Imbalanced-data-boosting), into which conventional learning algorithms can be integrated without further modification. The main focus is on intrinsic mechanisms rather than implementation detail. Existing methods are catalogued, and each class of method is described in detail in terms of design criteria, typical algorithms and performance analysis. The essence of two IDBoosting methods is identified and supported by experimental evidence, and useful reference points for future research are given.

12.
In the class-imbalanced learning scenario, traditional machine learning algorithms that optimize overall accuracy tend to achieve poor classification performance, especially on the minority class in which we are most interested. Many effective approaches have been proposed to solve this problem. Among them, bagging ensemble methods integrated with under-sampling techniques have demonstrated better performance than several alternatives, including bagging ensembles integrated with over-sampling techniques and cost-sensitive methods. Although these under-sampling techniques promote diversity among the generated base classifiers through random partitioning or sampling of the majority class, they take no measure to ensure individual classification performance, which limits the achievable ensemble performance. On the other hand, evolutionary under-sampling (EUS), a novel under-sampling technique, has been successfully applied to search for the best majority-class subset for training a well-performing nearest neighbour classifier. Inspired by EUS, this paper introduces it into the under-sampling bagging framework and proposes an EUS-based bagging ensemble method (EUS-Bag), designing a new fitness function that considers three factors to make EUS better suited to the framework. With this fitness function, EUS-Bag generates a set of accurate and diverse base classifiers. To verify the effectiveness of EUS-Bag, a series of comparison experiments is conducted on 22 two-class imbalanced classification problems. Experimental results measured by recall, geometric mean and AUC all demonstrate its superior performance.
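The under-sampling bagging framework that EUS-Bag builds on can be sketched as follows: each bag keeps all minority examples and a random majority subset of equal size. The evolutionary search over majority subsets (and the fitness function) is abstracted away; function names are illustrative.

```python
import random

def undersample_bag(majority, minority, rng):
    """Build one balanced training bag: the full minority class plus a
    random majority subset of the same size."""
    sampled = rng.sample(majority, len(minority))
    return sampled + list(minority)

rng = random.Random(0)
majority = list(range(100))     # indices of majority-class examples
minority = [-1, -2, -3]         # indices of minority-class examples
bag = undersample_bag(majority, minority, rng)  # 6 balanced examples
```

EUS replaces the `rng.sample` call with an evolutionary search so the chosen majority subset also yields an accurate base classifier, which is the gap the paper targets.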

13.
To improve classification performance on imbalanced data, a Boosting algorithm optimized via imbalanced-data performance metrics is proposed. The algorithm combines imbalanced-classification performance measures with Boosting, replacing the original misclassification-rate criterion: weighted positive-class and negative-class recall, F-measure, and G-means are each used to optimize Boosting. The Alpha value is computed from the chosen metric at each iteration, yielding a weighted combination of weak learners that Boosting then optimizes. Experiments comparing against weighted Boosting show that, on the tested datasets, the algorithm improves the AUC measure and reduces the error rate, with additional gains in F-measure and G-mean, indicating that it emphasizes positive-class performance and improves the overall classification of imbalanced data.

14.
The dissimilarity-based ensemble of extreme learning machines (D-ELM) achieves relatively stable classification of gene expression data. However, the algorithm is driven by classification accuracy, so when the misclassification costs of samples are unequal it cannot directly satisfy the minimum-average-misclassification-cost requirement of cost-sensitive classification. By introducing probability estimation into the classification process and reconstructing the decision from misclassification and rejection costs, a cost-sensitive dissimilarity-based ensemble of extreme learning machines (CS-D-ELM) is proposed. Applied to gene expression datasets, the algorithm achieves good classification results.
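The reconstructed decision with a rejection cost can be illustrated under simplifying assumptions (two classes, one flat misclassification cost): abstain whenever rejection is cheaper than the best label's expected cost. Function and argument names are illustrative, not from the paper.

```python
def decide_with_reject(probs, mis_cost, reject_cost):
    """probs: {label: P(label|x)} over two classes; mis_cost is the flat
    cost of any misclassification; abstain if rejection is cheaper."""
    expected = {lbl: (1.0 - p) * mis_cost for lbl, p in probs.items()}
    best = min(expected, key=expected.get)
    return "reject" if reject_cost < expected[best] else best

# A near-tie is rejected when abstaining is cheap, decided when it is not:
print(decide_with_reject({"a": 0.55, "b": 0.45}, 1.0, 0.3))  # -> reject
print(decide_with_reject({"a": 0.55, "b": 0.45}, 1.0, 0.5))  # -> a
```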

15.
To address the poor performance of traditional network traffic classification models on small classes and the difficulty of frequent, timely model updates, an ensemble-learning-based traffic classification model (ELTCM) is proposed. First, feature metrics biased toward small classes are defined from the class distribution information, and weighted symmetric uncertainty together with the approximate Markov blanket (AMB) is used to reduce the dimensionality of the traffic features, mitigating the impact of class imbalance. Then, early concept-drift detection is introduced to strengthen the model's ability to cope with traffic features that change as the network changes, and incremental learning is used to make update training more flexible. Experiments on real traffic datasets show that, compared with a C4.5 decision-tree-based classification model (DTITC) and an error-rate-based concept-drift detection classification model (ERCDD), ELTCM improves average overall precision by 1.13% and 0.26% respectively, and outperforms both models on every small class. ELTCM generalizes well and effectively improves small-class performance without sacrificing overall classification accuracy.

16.
师彦文, 王宏杰, 《计算机科学》 (Computer Science), 2017, 44(Z11): 98-101
For effective classification of imbalanced datasets, a classifier combining cost-sensitive learning with the random forest algorithm is proposed. First, a new impurity measure is defined that accounts not only for the total cost of a decision tree but also for the cost differences among samples at the same node. Second, the random forest procedure draws K samples of the dataset to build K base classifiers. Then, based on the proposed impurity measure, decision trees are grown with the classification and regression tree (CART) algorithm to form the forest. Finally, the random forest classifies by a voting mechanism. Experiments on the UCI repository show that, compared with a traditional random forest and an existing cost-sensitive random forest classifier, the proposed classifier performs well on all three performance measures: classification accuracy, AUC, and the Kappa coefficient.
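A cost-weighted node impurity of the general kind described can be sketched by rescaling class counts by their misclassification costs before applying the Gini formula, so splits that isolate costly classes look purer; this is a generic construction for illustration, not the paper's exact measure.

```python
def cost_weighted_gini(counts, costs):
    """Gini impurity computed over cost-rescaled class proportions.
    counts: {class: n samples at node}; costs: {class: misclassification cost}."""
    weighted = {c: counts[c] * costs[c] for c in counts}
    total = sum(weighted.values())
    return 1.0 - sum((w / total) ** 2 for w in weighted.values())

# Raising the minority class's cost makes a majority-heavy node look impure,
# pushing the tree to keep splitting until the costly class is isolated:
plain = cost_weighted_gini({0: 90, 1: 10}, {0: 1.0, 1: 1.0})   # ~0.18
costly = cost_weighted_gini({0: 90, 1: 10}, {0: 1.0, 1: 9.0})  # 0.5
```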

17.
A Cost-Sensitive Classification Strategy for Imbalanced Intrusion Detection Data   Cited by 1 (self-citations 0, others 1)
A new preprocessing algorithm, AdaP, is proposed that effectively avoids over-fitting of the data and can also be used independently. For imbalanced intrusion detection datasets, a cost-sensitive mechanism is introduced: guided by a weight matrix that minimizes misclassification cost, dense regions of the training data are thinned and sparse regions are expanded, after which noise is filtered, finally yielding a strategy that combines the AdaP algorithm with the AdaCost algorithm. Experiments show that this strategy fully exploits both boosting's ability to raise the classification accuracy of the front-end weak classifier and the preprocessing algorithm's ability to balance rare-class data, effectively improving classification performance on imbalanced intrusion detection data.

18.
Data produced in many real-world domains are both multi-class and imbalanced. In multi-class imbalanced classification, problems such as class overlap, noise, and multiple minority classes degrade classifier performance, and handling multi-class imbalance effectively has become an important research topic in machine learning and data mining. Based on recent literature on multi-class imbalanced classification methods, this paper analyzes and summarizes the field from two angles, data preprocessing and algorithm-level classification methods, and examines all algorithms in detail in terms of their strengths, weaknesses, and the datasets used. Among the data preprocessing methods, over-sampling, under-sampling, hybrid sampling, and feature selection are introduced, and the performance of algorithms using the same datasets is compared. Algorithm-level classification methods are introduced and analyzed from three perspectives: base-classifier optimization, ensemble learning, and multi-class decomposition techniques. Finally, future research directions for multi-class imbalanced data classification are summarized.

19.
A Survey of Classification Methods for Imbalanced Datasets   Cited by 2 (self-citations 0, others 2)
Traditional classification algorithms are biased toward the majority class when handling imbalanced data, resulting in low classification accuracy on the minority class. Focusing on imbalanced data classification, this paper first introduces existing performance evaluation measures for imbalanced classification, then surveys the common data-sampling-based methods and existing classification methods, and finally introduces hybrid approaches that combine data sampling with classification methods.

20.
Cost-sensitive learning with conditional Markov networks   Cited by 1 (self-citations 0, others 1)
There has been a recent, growing interest in classification and link prediction in structured domains. Methods such as conditional random fields and relational Markov networks support flexible mechanisms for modeling correlations due to the link structure. In addition, in many structured domains there is interesting structure in the risk or cost function associated with different misclassifications, and there is a rich tradition of cost-sensitive learning applied to unstructured (IID) data. Here we propose a general framework that can capture correlations in the link structure and handle structured cost functions. We present two new cost-sensitive structured classifiers based on maximum entropy principles. The first determines the cost-sensitive classification by minimizing the expected cost of misclassification. The second determines the cost-sensitive classification directly, without going through a probability estimation step. We contrast these approaches with an approach that employs a standard 0/1-loss structured classifier to estimate class conditional probabilities followed by minimization of the expected misclassification cost, and with a cost-sensitive IID classifier that does not utilize the correlations present in the link structure. We demonstrate the utility of our cost-sensitive structured classifiers with experiments on both synthetic and real-world data.
