Similar Documents
20 similar documents found (search time: 31 ms)
1.
Conventional learning algorithms minimize the classification error rate by minimizing a classification loss, whereas cost-sensitive learning aims to minimize the classification cost and therefore requires a cost-sensitive loss. This paper investigates design criteria for cost-sensitive losses. It first reviews cost-sensitive learning methods based on cost-sensitive risk optimization, then proposes two design criteria for cost-sensitive losses within the framework of Bayes-optimal classification theory. Using two common generation methods, cost-sensitive extensions of classic loss functions (the squared, exponential, logistic, and SVM hinge losses) are constructed, and the performance of these cost-sensitive losses is analyzed theoretically against the proposed criteria. Finally, experiments show that cost-sensitive losses satisfying both design criteria effectively reduce the classification cost, confirming the soundness of the proposed criteria.
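A minimal sketch of the kind of cost-sensitive extension the abstract describes, using the exponential loss as the base loss; the per-class costs `c_pos`/`c_neg` and the simple cost-weighting scheme are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def cost_sensitive_exp_loss(y, f, c_pos, c_neg):
    """Cost-weighted exponential loss: scale each example's exponential
    loss exp(-y*f) by the cost of misclassifying its true class.
    Labels y are in {-1, +1}; f are real-valued classifier scores."""
    cost = np.where(y == 1, c_pos, c_neg)
    return cost * np.exp(-y * f)

y = np.array([1, -1, 1])
f = np.array([0.5, -1.0, -2.0])
# With equal costs this reduces to the plain (cost-insensitive) exponential loss.
equal = cost_sensitive_exp_loss(y, f, 1.0, 1.0)
skewed = cost_sensitive_exp_loss(y, f, 5.0, 1.0)  # positives 5x costlier to miss
```

When the costs are equal the extension must recover the original loss, which is one natural sanity check for any such construction.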

2.
An asymmetric AdaBoost algorithm and its application to object detection    (Cited by: 1; self-citations: 0; by others: 1)
GE Jun-Feng, LUO Yu-Pin. Acta Automatica Sinica, 2009, 35(11): 1403-1409
For the asymmetric classification problem in object detection, building on an analysis of existing cost-sensitive (i.e., asymmetric) learning algorithms derived from discrete AdaBoost, a unified framework is proposed for deriving asymmetric AdaBoost algorithms, centered on three different upper bounds of the asymmetric error rate. Within this framework, the relationships among existing discrete asymmetric AdaBoost algorithms become clear, and the parts of them that do not follow from the theoretical derivation can easily be corrected. By minimizing the three upper bounds with different optimization methods, asymmetric extensions of continuous AdaBoost (denoted Asym-Real AdaBoost and Asym-Gentle AdaBoost) are derived. The new algorithms compute the weak-classifier combination coefficients more conveniently than existing discrete algorithms, and experiments on both face detection and pedestrian detection show better performance than conventional symmetric AdaBoost and discrete asymmetric AdaBoost.

3.
The last decade has seen an increase in the attention paid to the development of cost-sensitive learning algorithms that aim to minimize misclassification costs while still maintaining accuracy. Most of this attention has been on cost-sensitive decision tree learning, whereas relatively little attention has been paid to assessing whether it is possible to develop better cost-sensitive classifiers based on Bayesian networks. Hence, this paper presents EBNO, an algorithm that utilizes genetic algorithms to learn cost-sensitive Bayesian networks, where genes are utilized to represent the links between the nodes in Bayesian networks and the expected cost is used as a fitness function. An empirical comparison of the new algorithm has been carried out with respect to (a) an algorithm that induces cost-insensitive Bayesian networks to provide a baseline, (b) ICET, a well-known algorithm that uses genetic algorithms to induce cost-sensitive decision trees, (c) use of MetaCost to induce cost-sensitive Bayesian networks via bagging, (d) use of AdaBoost to induce cost-sensitive Bayesian networks, and (e) use of XGBoost, a gradient boosting algorithm, to induce cost-sensitive decision trees. An empirical evaluation on 28 data sets reveals that EBNO performs well in comparison with the algorithms that produce single interpretable models and performs just as well as algorithms that use bagging and boosting methods.
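The expected-cost fitness function the abstract mentions can be illustrated with a small sketch; the cost-matrix convention below is an assumption for illustration, not EBNO's actual implementation:

```python
import numpy as np

def expected_cost(y_true, y_pred, cost_matrix):
    """Average misclassification cost over a sample: cost_matrix[i, j]
    is the cost of predicting class j when the true class is i."""
    return cost_matrix[y_true, y_pred].mean()

cost_matrix = np.array([[0.0, 1.0],
                        [5.0, 0.0]])     # missing class 1 is five times worse
y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 1])
fitness = expected_cost(y_true, y_pred, cost_matrix)  # (0 + 0 + 5 + 1) / 4
```

A genetic algorithm would minimize this quantity over candidate network structures; a perfect classifier scores zero regardless of the cost matrix.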

4.
An ensemble learning algorithm for multi-label cost-sensitive classification    (Cited by: 12; self-citations: 2; by others: 10)
FU Zhong-Liang. Acta Automatica Sinica, 2014, 40(6): 1075-1085
Although multi-label classification can be converted into ordinary multi-class classification, multi-label cost-sensitive classification is difficult to convert into multi-class cost-sensitive classification. After analyzing the problems encountered when extending multi-class cost-sensitive learning algorithms to the multi-label setting, a multi-label cost-sensitive ensemble learning algorithm is proposed. Its average misclassification cost is the sum of the costs of falsely detected labels and missed labels, and its procedure resembles AdaBoost (adaptive boosting): it automatically learns multiple weak classifiers and combines them into a strong classifier whose average misclassification cost decreases as weak classifiers are added. The differences between the proposed algorithm and multi-class cost-sensitive AdaBoost are analyzed in detail, including the basis for outputting labels and the meaning of the misclassification cost. Unlike ordinary multi-class cost-sensitive classification, the misclassification costs in the multi-label setting must satisfy certain constraints, which are analyzed and stated explicitly. Simplifying the algorithm yields both a multi-label AdaBoost algorithm and a multi-class cost-sensitive AdaBoost algorithm. Theoretical analysis and experimental results show that the proposed algorithm is effective and minimizes the average misclassification cost; in particular, for multi-class problems in which the misclassification costs of different classes differ greatly, it clearly outperforms existing multi-class cost-sensitive AdaBoost algorithms.
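A hedged sketch of the average misclassification cost the abstract defines (cost of falsely detected labels plus cost of missed labels); the 0/1 matrix encoding and the two cost parameters are illustrative assumptions:

```python
import numpy as np

def multilabel_cost(Y_true, Y_pred, c_fp, c_fn):
    """Average per-example misclassification cost: c_fp for each falsely
    detected label plus c_fn for each missed label. Y_true and Y_pred
    are 0/1 matrices of shape (n_examples, n_labels)."""
    fp = (Y_pred == 1) & (Y_true == 0)   # falsely detected labels
    fn = (Y_pred == 0) & (Y_true == 1)   # missed labels
    return (c_fp * fp.sum(axis=1) + c_fn * fn.sum(axis=1)).mean()

Y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
Y_pred = np.array([[1, 1, 0],
                   [0, 1, 0]])
# Example 1 has one false detection and one miss; example 2 is perfect.
avg = multilabel_cost(Y_true, Y_pred, c_fp=1.0, c_fn=2.0)  # (1*1 + 2*1 + 0) / 2
```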

5.
Various forms of additive modeling techniques have been successfully used in many data mining and machine learning–related applications. In spite of their great success, boosting algorithms still suffer from a few open-ended problems that require closer investigation. The efficiency of any additive modeling technique relies significantly on the choice of the weak learners and the form of the loss function. In this paper, we propose a novel multi-resolution approach for choosing the weak learners during additive modeling. Our method applies insights from multi-resolution analysis and chooses the optimal learners at multiple resolutions during different iterations of the boosting algorithms, which are simple yet powerful additive modeling methods. We demonstrate the advantages of this novel framework in both classification and regression problems and show results on both synthetic and real-world datasets taken from the UCI machine learning repository. Though demonstrated specifically in the context of boosting algorithms, our framework can be easily accommodated in general additive modeling techniques. Similarities and distinctions of the proposed algorithm with the popularly used methods like radial basis function networks are also discussed.

6.
Label embedding (LE) is an important family of multi-label classification algorithms that digest the label information jointly for better performance. Different real-world applications evaluate performance by different cost functions of interest. Current LE algorithms often aim to optimize one specific cost function, but they can suffer from bad performance with respect to other cost functions. In this paper, we resolve the performance issue by proposing a novel cost-sensitive LE algorithm that takes the cost function of interest into account. The proposed algorithm, cost-sensitive label embedding with multidimensional scaling (CLEMS), approximates the cost information with the distances of the embedded vectors by using the classic multidimensional scaling approach for manifold learning. CLEMS is able to deal with both symmetric and asymmetric cost functions, and effectively makes cost-sensitive decisions by nearest-neighbor decoding within the embedded vectors. We derive theoretical results that justify how CLEMS achieves the desired cost-sensitivity. Furthermore, extensive experimental results demonstrate that CLEMS is significantly better than a wide spectrum of existing LE algorithms and state-of-the-art cost-sensitive algorithms across different cost functions.
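The embedding-plus-decoding idea behind CLEMS can be sketched with a numpy-only classical multidimensional scaling and nearest-neighbor decoding; the toy cost matrix and the use of sqrt-of-cost as the target distance are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def classical_mds(D, dim):
    """Classical multidimensional scaling: embed n points so that their
    pairwise Euclidean distances approximate the entries of D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]             # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def nn_decode(z, embedded):
    """Nearest-neighbor decoding: index of the embedded vector closest to z."""
    return int(np.argmin(np.linalg.norm(embedded - z, axis=1)))

# Toy symmetric cost matrix over three candidate label vectors; embedding
# the square roots of the costs makes squared distances track the costs.
C = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 1.0],
              [4.0, 1.0, 0.0]])
E = classical_mds(np.sqrt(C), dim=2)
```

For this toy matrix the target distances are exactly realizable on a line, so the embedded distances reproduce them.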

7.
Classification of data with imbalanced class distribution has posed a significant drawback of the performance attainable by most standard classifier learning algorithms, which assume a relatively balanced class distribution and equal misclassification costs. The significant difficulty and frequent occurrence of the class imbalance problem indicate the need for extra research efforts. The objective of this paper is to investigate meta-techniques applicable to most classifier learning algorithms, with the aim to advance the classification of imbalanced data. The AdaBoost algorithm is reported as a successful meta-technique for improving classification accuracy. The insight gained from a comprehensive analysis of the AdaBoost algorithm in terms of its advantages and shortcomings in tackling the class imbalance problem leads to the exploration of three cost-sensitive boosting algorithms, which are developed by introducing cost items into the learning framework of AdaBoost. Further analysis shows that one of the proposed algorithms tallies with the stagewise additive modelling in statistics to minimize the cost exponential loss. These boosting algorithms are also studied with respect to their weighting strategies towards different types of samples, and their effectiveness in identifying rare cases through experiments on several real world medical data sets, where the class imbalance problem prevails.
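One common way to introduce a cost item into AdaBoost's weight update (the AdaC2 form, in which the cost multiplies the weight) can be sketched as follows; treat this as an illustration of the general idea rather than the specific algorithms proposed in the paper:

```python
import numpy as np

def adac2_weight_update(w, y, h, alpha, cost):
    """AdaC2-style update: each example's misclassification cost
    multiplies its weight before the usual exponential factor, so costly
    examples gain weight faster when misclassified (y, h in {-1, +1})."""
    w_new = cost * w * np.exp(-alpha * y * h)
    return w_new / w_new.sum()          # renormalize to a distribution

w0 = np.full(3, 1.0 / 3.0)              # uniform initial weights
y = np.array([1.0, 1.0, -1.0])          # true labels
h = np.array([1.0, -1.0, -1.0])         # weak outputs: second example is wrong
cost = np.array([1.0, 3.0, 1.0])        # second example is three times costlier
w1 = adac2_weight_update(w0, y, h, alpha=0.5, cost=cost)
```

After one round the costly misclassified example dominates the distribution, which is exactly the behavior cost items are meant to induce.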

8.
To move beyond the traditional way of assigning fixed costs when applying cost-sensitive ideas to class-imbalance problems, a factor-quantification method is proposed in which classification performance requirements guide cost optimization. The performance requirement is expressed as a function, in the cost factor c, of the positive- and negative-class performance metrics, and serves as the criterion for cost selection. A genetic algorithm searches a specified range for the optimal cost factor under this criterion, and the factor is then plugged into a cost-sensitive boosting method to produce a classification model meeting the given performance requirement. Taking the geometric mean of positive- and negative-class recall as the selection criterion, four algorithm configurations (base learners C4.5 and ZeroR) were used to build classifiers on three sample sets. Compared with plugging traditionally assigned costs into the algorithm, plugging the optimized cost factor into AdaCost changed TP and TN by 33.3% to 200% and -49% to -15.6% for the C4.5-based classifiers, and by -44.4% to -16.7% and 25% to 400% for the ZeroR-based classifiers. The former improves positive-class misclassification without seriously worsening negative-class misclassification; the latter improves severe negative-class misclassification while keeping positive-class recall above 0.5, reaching a fairly balanced classification performance.
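The cost-factor search can be sketched with the G-mean criterion the abstract uses; for brevity the sketch replaces the genetic algorithm with a grid search and uses a toy decision rule in which the factor rescales a score against a fixed threshold, both illustrative assumptions:

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Geometric mean of positive-class and negative-class recall."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    recall_pos = tp / max(np.sum(y_true == 1), 1)
    recall_neg = tn / max(np.sum(y_true == 0), 1)
    return np.sqrt(recall_pos * recall_neg)

def best_cost_factor(scores, y_true, candidates):
    """Grid search (standing in for the GA) for the cost factor whose
    induced decision rule maximizes the G-mean."""
    return max(candidates,
               key=lambda c: g_mean(y_true, (scores * c >= 0.5).astype(int)))

scores = np.array([0.3, 0.4, 0.6, 0.1])   # toy classifier scores
y_true = np.array([1, 1, 0, 0])
best = best_cost_factor(scores, y_true, [1.0, 1.5, 2.0])
```

On this toy data the default factor of 1.0 yields zero positive-class recall (G-mean 0), so the search prefers a larger factor.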

9.
Robust streaming of video over 802.11 wireless LANs (WLANs) poses many challenges, including coping with packet losses caused by network buffer overflow or link erasures. In this paper, we propose a novel error protection method that can provide adaptive quality-of-service (QoS) to layered coded video by utilizing priority queueing at the network layer and retry-limit adaptation at the link layer. The design of our method is motivated by the observation that the retry limit settings of the MAC layer can be optimized in such a way that the overall packet losses that are caused by either link erasure or buffer overflow are minimized. We developed a real-time retry limit adaptation algorithm to trace the optimal retry limit for both the single-queue (or single-layer) and multiqueue (or multilayer) cases. The video layers are unequally protected over the wireless link by the MAC with different retry limits. In our proposed transmission framework, these retry limits are dynamically adapted depending on the wireless channel conditions and traffic characteristics. Furthermore, the proposed priority queueing discipline is enhanced with packet filtering and purging functionalities that can significantly save bandwidth by discarding obsolete or un-decodable packets from the buffer. Simulations show that the proposed cross-layer protection mechanism can significantly improve the received video quality.

10.
A cost-sensitive listwise ranking algorithm    (Cited by: 1; self-citations: 0; by others: 1)
Learning to rank is one of the research focuses in information retrieval and machine learning. In information retrieval, predicting the top of the ranked list correctly is especially important; however, listwise ranking algorithms, a classic family of learning-to-rank methods, cannot emphasize the top of the predicted list. To address this, the idea of cost-sensitive learning is incorporated into listwise ranking, and a cost-sensitive listwise ranking framework is proposed. The framework introduces document weights into the loss function of listwise algorithms, computing the weights from the evaluation measure NDCG. On this basis, it is further proved that the loss function of cost-sensitive listwise ranking algorithms upper-bounds the NDCG loss. To validate the framework, a cost-sensitive ListMLE algorithm is proposed within it and studied theoretically with respect to order preservation and generalization, proving that the algorithm is order-preserving. Experimental results on benchmark data sets show that, at the top of the predicted list, cost-sensitive ListMLE achieves better performance than conventional learning-to-rank algorithms.
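A sketch of a position-weighted ListMLE loss in the spirit of the abstract; the NDCG-style discount weights and the toy scores are illustrative assumptions, not the paper's exact weighting:

```python
import numpy as np

def listmle_loss(scores, weights=None):
    """(Weighted) ListMLE loss. `scores` lists model scores in the
    ground-truth order, best document first; position i contributes
    weights[i] * (log-sum-exp of the suffix - scores[i]). Top-heavy
    weights make the loss emphasize the head of the ranked list."""
    n = len(scores)
    if weights is None:
        weights = np.ones(n)
    loss = 0.0
    for i in range(n):
        suffix = scores[i:]
        m = suffix.max()
        log_z = m + np.log(np.sum(np.exp(suffix - m)))  # stable log-sum-exp
        loss += weights[i] * (log_z - scores[i])
    return loss

ndcg_weights = 1.0 / np.log2(np.arange(2, 5))   # NDCG-style discounts, positions 1..3
good = listmle_loss(np.array([2.0, 1.0, 0.5]), ndcg_weights)  # ideal order
bad = listmle_loss(np.array([0.5, 1.0, 2.0]), ndcg_weights)   # reversed order
```

Scores that respect the ground-truth order incur a smaller weighted loss than reversed scores, as expected of a listwise surrogate.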

11.
Lin HT, Li L. Neural Computation, 2012, 24(5): 1329-1367
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
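The third step of the reduction (constructing a ranker from the binary classifier) can be sketched as follows; the 1-D threshold classifiers are hypothetical stand-ins for classifiers learned on the extended examples:

```python
def rank_from_binary(x, classifiers):
    """Reduction step 3: classifier k answers 'is the rank greater
    than k?', and the predicted rank is one plus the number of
    positive answers."""
    return 1 + sum(1 for h in classifiers if h(x) > 0)

# Hypothetical 'learned' 1-D classifiers with thresholds at 1.0 and 2.0.
classifiers = [lambda x, t=t: x - t for t in (1.0, 2.0)]
```

With consistent (monotone) binary answers, this construction recovers a well-defined ordinal rank in {1, ..., K}.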

12.
η-one-class and η-outlier problems and an LP learning algorithm for them    (Cited by: 1; self-citations: 0; by others: 1)
TAO Qing, QI Hong-Wei, WU Gao-Wei, ZHANG Xian. Chinese Journal of Computers, 2004, 27(8): 1102-1108
The one-class and outlier problems are studied with SVM methods. Interpreting the one-class problem as a function estimation problem, the authors first define the generalization error of the η-one-class and η-outlier problems, then define linear separability and margin, and obtain maximum-margin, soft-margin, and ν-soft-margin algorithms for solving the one-class problem. These learning algorithms are grounded in statistical learning theory and reduce to linear programming problems; their implementation follows an approach similar to boosting. Experimental results show that the algorithms are of practical value.

13.
WAN Jian-Wu, YANG Ming. Journal of Software, 2020, 31(1): 113-136
Classification is one of the important tasks in machine learning. Traditional classification algorithms pursue the lowest classification error rate, assuming that different types of misclassification incur equal loss. However, in application domains such as face-recognition access control, software defect prediction, and multi-label learning, the losses caused by different types of misclassification differ greatly. This requires learning algorithms to pay particular attention to samples that may incur high misclassification loss, so that the overall misclassification loss of the learned model is minimized. To solve this problem, cost-sensitive learning methods have attracted great attention from researchers. Starting from the theoretical foundations of cost-sensitive learning, this survey systematically describes its main models and methods as well as its representative application domains, and concludes by discussing and anticipating possible future research directions.
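The theoretical foundation such surveys start from, the Bayes-optimal cost-sensitive decision rule, can be sketched in a few lines; the posterior and cost matrix below are a toy illustration:

```python
import numpy as np

def bayes_cost_decision(posterior, cost_matrix):
    """Bayes-optimal cost-sensitive decision: pick the class j that
    minimizes the expected cost sum_i posterior[i] * cost_matrix[i, j]."""
    return int(np.argmin(posterior @ cost_matrix))

posterior = np.array([0.7, 0.3])            # class 0 is more probable
costs = np.array([[0.0, 1.0],
                  [10.0, 0.0]])             # missing class 1 costs 10x more
decision = bayes_cost_decision(posterior, costs)
```

With these costs the rule picks class 1 even though class 0 is more probable, which is precisely how cost-sensitive decisions depart from error-rate-minimizing ones.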

14.
Cost-sensitive attribute selection aims to obtain an attribute subset with the minimum total cost by trading off test costs against misclassification costs. Most existing cost-sensitive attribute selection methods consider only the case of a fixed misclassification cost and therefore handle problems such as class imbalance poorly; on large data sets, unsatisfactory efficiency is also a major issue. To address these problems, a new dynamic misclassification-cost mechanism is designed with minimum total cost as the objective. Following a divide-and-conquer strategy, each data set is adaptively split by columns according to its size. The minimum-cost attribute selection problem is redefined under dynamic misclassification costs, and a divide-and-conquer algorithm for cost-sensitive attribute selection under dynamic misclassification costs is proposed. Experiments show that the algorithm improves efficiency while achieving the optimal misclassification cost, thereby ensuring that the selected attribute subset has the minimum total cost.

15.
Cost-sensitive learning algorithms are typically designed for minimizing the total cost when multiple costs are taken into account. Like other learning algorithms, cost-sensitive learning algorithms must face a significant challenge, over-fitting, in an applied context of cost-sensitive learning. Specifically, they can generate good results on training data but normally do not produce an optimal model when applied to unseen data in real-world applications; this is called data over-fitting. This paper deals with the issue of data over-fitting by designing three simple and efficient strategies, feature selection, smoothing, and threshold pruning, for the TCSDT (test cost-sensitive decision tree) method. The feature selection approach is used to pre-process the data set before applying the TCSDT algorithm. The smoothing and threshold pruning are used in a TCSDT algorithm before calculating the class probability estimate for each decision tree leaf. To evaluate our approaches, we conduct extensive experiments on the selected UCI data sets across different cost ratios, and on a real world data set, KDD-98, with real misclassification costs. The experimental results show that our algorithms outperform both the original TCSDT and other competing algorithms on reducing data over-fitting.
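The smoothing strategy can be sketched with Laplace (add-m) smoothing of a leaf's class-probability estimate, together with the standard cost-derived decision threshold; this is an illustrative sketch, not the paper's exact TCSDT modification:

```python
def smoothed_leaf_probability(pos, neg, m=1.0):
    """Laplace (add-m) smoothing of a leaf's positive-class probability,
    pulling estimates from small leaves toward 1/2 to curb over-fitting."""
    return (pos + m) / (pos + neg + 2.0 * m)

def cost_threshold(c_fp, c_fn):
    """Probability threshold that minimizes expected cost for two
    classes: predict positive when p >= c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)
```

An empty leaf is estimated at exactly 1/2 instead of an undefined or extreme value, and a leaf with 3 positives and 0 negatives is estimated at 0.8 rather than an over-confident 1.0.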

16.
This paper examines the ability of a multivariable PID controller to reject measurement noise without the use of any external filter. The work first provides a framework for the design of the PID gains comprising necessary and sufficient conditions for boundedness of trajectories and zero-error convergence in the presence of measurement noise. It turns out that such convergence requires time-varying gains. Subsequently, novel recursive algorithms providing optimal and sub-optimal time-varying PID gains are proposed for discrete-time varying linear multiple-input multiple-output (MIMO) systems. The development of the proposed optimal algorithm is based on minimising a stochastic performance index in the presence of erroneous initial conditions, white measurement noise, and white process noise. The proposed algorithms are shown to reject measurement noise provided that the system is asymptotically stable and the product of the input–output coupling matrices is full-column rank. In addition, convergence results are presented for discretised continuous-time plants. Simulation results are included to illustrate the performance capabilities of the proposed algorithms.

17.
Over the last 10 years, the growing popularity of solar panels for capturing solar energy has reduced their development and manufacturing costs. Nevertheless, costs per watt are still high when compared to other less-clean energy sources such as wind energy. Therefore, the goal of the sun tracker is to maximize the energy generation of solar cells, thus giving a competitive advantage to solar energy. However, finding the optimal position is a very complex task, and different algorithms such as genetic algorithms or swarm-based optimization algorithms have been used to improve the results. This article shows the design and implementation of two optimal sun tracker algorithms. The first method presented is genetic algorithms, which allow finding the position of the sun tracker based on an offline solution. When genetic algorithms find the solution offline, the results can be programmed in a simple lookup table. This approximation decreases the computational cost, and it is effective for geographical climes where conditions are constant. However, there are places with nonconstant climate conditions that need online optimization algorithms. In this case, a newly developed intelligent water drop algorithm is proposed for running an online solution. Both methods were designed for the sun tracker problem and were implemented. The power and energy analytics show that the algorithms increase the efficiency of the sun tracker, compared to a static solar cell, by at least 40% in some cases. The sun tracker presented gives an excellent solution for obtaining energy from the sun during diverse weather conditions. This work also introduces a novel derivation of the intelligent water drop algorithm for sun trackers based on a nonconventional trajectory, and conventional genetic algorithms adjusted for sun tracker needs. The experimental results are shown in order to validate the methodologies proposed.

18.
A cost-sensitive AdaBoost algorithm for multi-class problems    (Cited by: 8; self-citations: 2; by others: 6)
FU Zhong-Liang. Acta Automatica Sinica, 2011, 37(8): 973-983
To address the cost-merging problem that arises when multi-class cost-sensitive classification is converted into two-class cost-sensitive classification, a cost-sensitive AdaBoost algorithm directly applicable to multi-class problems is studied and constructed. The algorithm has a procedure and error estimates similar to continuous AdaBoost. When all costs are equal, it becomes a new multi-class continuous AdaBoost algorithm that guarantees the training error rate decreases as classifiers are added, without directly requiring the individual classifiers to be mutually independent; the independence condition can instead be ensured by the algorithm's rules, whereas the derivation of existing multi-class continuous AdaBoost algorithms must require mutual independence. Experimental results show that the algorithm genuinely biases classification results toward classes with smaller misclassification costs; in particular, when the costs of misclassifying each class into the other classes are unbalanced but equal on average, existing multi-class cost-sensitive learning algorithms fail, while the new method still achieves the minimum misclassification cost. The approach offers a new line of research on ensemble learning and yields an easy-to-implement AdaBoost algorithm for multi-label classification that approximately minimizes the classification error rate.

19.
We tackle the structured output classification problem using the Conditional Random Fields (CRFs). Unlike the standard 0/1 loss case, we consider a cost-sensitive learning setting where we are given a non-0/1 misclassification cost matrix at the individual output level. Although the task of cost-sensitive classification has many interesting practical applications that retain domain-specific scales in the output space (e.g., hierarchical or ordinal scale), most CRF learning algorithms are unable to effectively deal with the cost-sensitive scenarios as they merely assume a nominal scale (hence 0/1 loss) in the output space. In this paper, we incorporate the cost-sensitive loss into the large margin learning framework. By large margin learning, the proposed algorithm inherits most benefits from the SVM-like margin-based classifiers, such as the provable generalization error bounds. Moreover, the soft-max approximation employed in our approach yields a convex optimization similar to the standard CRF learning with only slight modification in the potential functions. We also provide the theoretical cost-sensitive generalization error bound. We demonstrate the improved prediction performance of the proposed method over the existing approaches in a diverse set of sequence/image structured prediction problems that often arise in pattern recognition and computer vision domains.
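The soft-max approximation the abstract mentions resembles the softmax-margin (cost-augmented log) loss; the sketch below illustrates that loss for a single flat output, which is a simplifying assumption (CRFs score structured outputs, not a single label):

```python
import numpy as np

def softmax_margin_loss(scores, y, cost_row):
    """Softmax-margin (cost-augmented log) loss: log-sum-exp over all
    outputs of score + misclassification cost, minus the true score.
    With all-zero costs it reduces to the ordinary log loss."""
    aug = scores + cost_row
    m = aug.max()
    return m + np.log(np.sum(np.exp(aug - m))) - scores[y]

scores = np.array([1.0, 0.0, -1.0])
zero_cost = softmax_margin_loss(scores, 0, np.zeros(3))          # plain log loss
with_cost = softmax_margin_loss(scores, 0, np.array([0.0, 2.0, 2.0]))
```

Adding positive costs to the wrong outputs inflates the loss, pushing the learner to separate the true output from costly mistakes by a larger margin.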

20.
Cost-sensitive learning is an important strategy for classifying imbalanced data, and nonlinearity in data features adds further difficulty. To address this, a cost-sensitive Stacking ensemble algorithm, KPCA-Stacking, is proposed by combining the idea of cost-sensitive learning with kernel principal component analysis (KPCA). First, the original data set is oversampled with the adaptive synthetic sampling method (ADASYN) and reduced in dimensionality with KPCA; second, KNN, LD...


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号