Similar Documents
Found 20 similar documents (search time: 140 ms)
1.
To improve the classification performance of SVM on imbalanced data, a split-and-ensemble classification algorithm for imbalanced data is proposed. The algorithm uses clustering to partition the majority-class samples into multiple subsets according to the ratio between the classes; each subset is merged with the minority class to form a training subset. A classifier is learned from each training subset, and the resulting classifiers are combined into a final classifier with the WE ensemble method, thereby improving classification performance on imbalanced data. Experimental results on UCI datasets demonstrate the effectiveness of the algorithm, in particular its classification performance on minority-class samples.
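The split-and-ensemble idea above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact method: the number of subsets is taken as the imbalance ratio, RBF SVMs stand in for the unspecified base learner, and since the abstract does not detail the WE weighting scheme, a plain majority vote is used instead; the function names are invented for this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def split_ensemble_fit(X, y, minority_label=1, random_state=0):
    """Cluster the majority class into roughly balanced subsets, pair each
    subset with the full minority class, and train one SVM per pair."""
    X_min, X_maj = X[y == minority_label], X[y != minority_label]
    # number of subsets chosen from the imbalance ratio
    k = max(1, len(X_maj) // max(1, len(X_min)))
    labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X_maj)
    clfs = []
    for c in range(k):
        Xc = np.vstack([X_maj[labels == c], X_min])
        yc = np.hstack([np.zeros(np.sum(labels == c)), np.ones(len(X_min))]).astype(int)
        clfs.append(SVC(kernel="rbf", gamma="scale").fit(Xc, yc))
    return clfs

def split_ensemble_predict(clfs, X):
    # plain majority vote (the paper's WE weighting is not detailed here)
    votes = np.mean([clf.predict(X) for clf in clfs], axis=0)
    return (votes >= 0.5).astype(int)
```

Because every subset is near-balanced, no single SVM is dominated by the majority class, which is the point of the decomposition.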

2.
高锋  黄海燕 《计算机科学》2017,44(8):225-229
Imbalanced data severely degrades the performance of traditional classification algorithms, lowering the recognition rate of the minority class. A hybrid sampling technique based on neighborhood features is proposed: sampling weights are determined from the class distribution within each sample's neighborhood, and hybrid sampling is then applied to obtain a balanced dataset. Next, a dynamic ensemble method based on local confidence is used: base classifiers are generated through classification learning, and for each test sample the best base classifiers are dynamically selected and combined according to local classification accuracy. Experiments on UCI benchmark datasets show that the method improves the classification accuracy of both the minority and the majority class in imbalanced data.
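One plausible reading of the neighborhood-based sampling weight is sketched below: each sample is weighted by the fraction of opposite-class points among its k nearest neighbors, so boundary samples get higher weight. This is an assumption for illustration; the abstract does not give the exact weighting formula, and the function name is invented here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_sampling_weights(X, y, k=5):
    """Weight each sample by the fraction of opposite-class points among
    its k nearest neighbors; samples near the class boundary score higher."""
    y = np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    nbrs = nn.kneighbors(X, return_distance=False)[:, 1:]  # drop self
    return (y[nbrs] != y[:, None]).mean(axis=1)
```

Samples deep inside their own class get weight 0 and isolated points surrounded by the other class get weight 1, which a hybrid sampler could use to decide where to oversample or undersample.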

3.
An Ensemble Classification Algorithm for Imbalanced Datasets   Total citations: 1 (self: 0, other: 1)
吴广潮  陈奇刚 《计算机工程与设计》2007,28(23):5687-5689,5761
To improve minority-class classification performance, an ensemble classifier algorithm based on data preprocessing is studied. Tomek links are used to preprocess the dataset; the majority-class samples of the cleaned dataset are split into multiple subsets according to the imbalance ratio, and each subset is merged with the minority-class samples to form a new subset. A least-squares support vector machine (LS-SVM) is trained on each new subset, and the trained sub-classifiers are combined into a classification system that decides the class of a new test sample by voting. Experimental results show that the algorithm outperforms LS-SVM with oversampling and with undersampling in classification performance on both the majority and the minority class.
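The Tomek-link preprocessing step can be sketched as follows: a Tomek link is a pair of mutual 1-nearest neighbors with different labels, and the cleaning step removes only the majority-class member of each link. This is a minimal sketch of that one step (the split and LS-SVM ensemble stages are omitted), with an invented function name.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def remove_tomek_links(X, y, majority_label=0):
    """Drop majority-class samples that form Tomek links, i.e. pairs of
    mutual 1-nearest neighbors carrying different labels."""
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    # column 1 = nearest neighbor other than the point itself
    nbr = nn.kneighbors(X, return_distance=False)[:, 1]
    drop = np.zeros(len(X), dtype=bool)
    for i, j in enumerate(nbr):
        if y[i] != y[j] and nbr[j] == i:  # mutual neighbors, different classes
            # remove only the majority-class member of the link
            drop[i if y[i] == majority_label else j] = True
    return X[~drop], y[~drop]
```

Removing the majority half of each link thins out majority samples that sit right on the class boundary, which is why it helps before the subsequent balanced split.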

4.
张枭山  罗强 《计算机科学》2015,42(Z11):63-66
When facing the imbalanced-data classification problems that are widespread in practice, most traditional classification algorithms assume a balanced class distribution, so their results are biased toward the majority class and are unsatisfactory. An improved AdaBoost classification algorithm based on cluster-ensemble undersampling is therefore proposed. The algorithm first performs cluster ensembling, then draws from each cluster, according to sample weights, a proportion of the majority-class samples together with all minority-class samples to form a balanced dataset. Within the AdaBoost framework, misclassifications of the majority and minority classes receive different weight adjustments, and the better-performing base classifiers are selectively ensembled. Experimental results show that the algorithm offers clear advantages in imbalanced data classification.

5.
董元方  李雄飞  李军 《计算机工程》2010,36(24):161-163
For the problem of learning from imbalanced data, a classification algorithm with an incremental learning scheme is proposed. Guided by the distribution of attribute values, synthetic minority-class samples are added step by step, and whenever a stage classifier misclassifies a synthetic sample, that sample is removed immediately. Once the data reach the desired degree of balance, the learning algorithm is trained on the original and synthetic data together to obtain the final classifier. Experimental results show that the algorithm outperforms C4.5 and, on most datasets, outperforms SMOTEBoost and DataBoost-IM.

6.
To improve the classification accuracy of the minority class in imbalanced datasets, ensemble classification is studied and a new ensemble algorithm, WDB, is proposed. The algorithm uses two different base classifiers, a C4.5 decision tree and naive Bayes, takes precision as the weight, and automatically adjusts each base classifier's weight for a given training set through "weight learning". The predictions of the base classifiers are then combined algebraically by weighted averaging, yielding the new classification algorithm WDB. Finally, experiments are run on open imbalanced datasets using common performance metrics. The results show that introducing "weight learning" into the ensemble lets each base classifier exploit its advantage on particular data types and improves prediction accuracy. WDB outperforms C4.5, naive Bayes, and random forest on imbalanced datasets and effectively raises the classification accuracy of the minority class.

7.
Imbalanced data classification is an active research topic in machine learning. To address the low minority-class recognition rate of traditional classifiers on imbalanced data, an improved AdaBoost algorithm based on clustering is proposed. The algorithm first performs clustering-based undersampling: K-means clustering is run on the majority-class samples, the cluster centroids are extracted, and a number of centroids equal to the minority-class size is combined with all minority samples to form a new balanced training set. To avoid a drop in accuracy when the minority class is so small that the training set becomes too small, minority-class oversampling is combined with the clustering-based undersampling. Then, following the idea of cost-sensitive learning, the classification error function of the AdaBoost base classifiers is modified to assign asymmetric misclassification costs to the two classes. Experimental results show that the algorithm gives the model highly representative training samples and improves minority-class accuracy while preserving overall classification performance.
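The centroid-undersampling-plus-boosting idea can be sketched as below. This is an approximation, not the paper's algorithm: scikit-learn's AdaBoostClassifier does not expose the base-classifier error function, so the asymmetric misclassification cost is imitated with per-sample weights, and the oversampling step is omitted; the function name and `cost_ratio` parameter are invented here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostClassifier

def centroid_undersample_adaboost(X, y, minority_label=1, cost_ratio=2.0, random_state=0):
    """K-means the majority class down to as many centroids as there are
    minority samples, then boost on the balanced set with higher initial
    weights on the minority class (a stand-in for the asymmetric loss)."""
    X_min = X[y == minority_label]
    X_maj = X[y != minority_label]
    km = KMeans(n_clusters=len(X_min), n_init=10, random_state=random_state).fit(X_maj)
    Xb = np.vstack([km.cluster_centers_, X_min])
    yb = np.hstack([np.zeros(len(X_min)), np.ones(len(X_min))]).astype(int)
    w = np.where(yb == 1, cost_ratio, 1.0)  # penalize minority errors more
    clf = AdaBoostClassifier(n_estimators=50, random_state=random_state)
    clf.fit(Xb, yb, sample_weight=w)
    return clf
```

Using centroids rather than randomly drawn majority samples keeps the undersampled set representative of the majority-class structure, which is the motivation the abstract gives.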

8.
In practical classification with traditional models, imbalanced classification is a common challenge. Traditional classifiers struggle to learn the intrinsic structure of the minority class and therefore lean toward the majority class, so minority samples are misclassified as majority samples; redundant and noisy samples in the dataset cause further trouble. To handle these problems, a new imbalanced-classification framework, SSIC, is proposed. The framework takes the statistical properties of the data into account, adaptively selects valuable samples from both classes, and builds the imbalanced-data classifier with cost-sensitive learning. First, SSIC constructs several balanced data subsets by combining a portion of the majority-class instances with all minority-class instances. On each subset, SSIC exploits the data's characteristics to extract discriminative high-level features and adaptively select important samples, thereby removing redundant and noisy data. Second, SSIC introduces a cost-sensitive support vector machine (SVM) that automatically assigns an appropriate weight to each sample, so that the minority class is treated as equal to the majority class.

9.
To address the low prediction accuracy of minority-class samples under traditional classifiers on imbalanced datasets, an imbalanced-data classification algorithm based on undersampling and cost sensitivity, USCBoost, is proposed. Before each AdaBoost iteration trains its base classifier, the majority-class samples are sorted by weight in descending order, and majority samples equal in number to the minority class are selected according to their weights; the weights of the sampled majority samples are then normalized, and together with the minority samples they form...

10.
To address the noise introduced when the synthetic minority over-sampling technique (SMOTE) synthesizes new minority samples, an improved imbalanced-data classification algorithm based on denoising autoencoder networks (SMOTE-SDAE) is proposed. The algorithm first synthesizes new minority samples with SMOTE to balance the original dataset. To counter the noise produced during synthesis, the layer-wise unsupervised denoising learning and supervised fine-tuning of the denoising autoencoder network are used to denoise and classify the oversampled dataset. Experimental results on imbalanced UCI datasets show that, compared with a traditional SVM, the algorithm significantly improves the classification accuracy of the minority class.
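The SMOTE step itself is standard and can be sketched in a few lines: each synthetic sample is a random interpolation between a minority seed and one of its k nearest minority neighbors. Only the oversampling step is shown here; the denoising autoencoder part of SMOTE-SDAE is omitted, and the function name is our own.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, random_state=0):
    """Generate n_new synthetic minority samples by interpolating each
    randomly chosen seed toward one of its k nearest minority neighbors."""
    rng = np.random.RandomState(random_state)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    nbrs = nn.kneighbors(X_min, return_distance=False)[:, 1:]  # drop self
    seeds = rng.randint(0, len(X_min), size=n_new)
    out = np.empty((n_new, X_min.shape[1]))
    for t, i in enumerate(seeds):
        j = nbrs[i, rng.randint(0, k)]
        out[t] = X_min[i] + rng.rand() * (X_min[j] - X_min[i])
    return out
```

Because each synthetic point lies on a segment between two real minority points, noise arises exactly when a neighbor pair straddles a majority region, which is the problem the denoising autoencoder stage is meant to absorb.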

11.
A new method, PCARules, for building ensemble classifiers from rule-based base classifiers is proposed. Although the new method also decides the class of a test sample by weighted voting over the base classifiers' predictions, its way of creating training sets for the base classifiers is entirely different from bagging and boosting. Instead of sampling, the method randomly partitions the features into K subsets, applies PCA to obtain the principal components of each subset to form a new feature space, and maps all training data into that space as the base classifier's training set. Experiments on 30 randomly chosen datasets from the UCI machine learning repository show that the algorithm not only significantly improves the performance of rule-based classification, but also achieves higher accuracy than traditional ensemble methods such as bagging and boosting on most datasets.
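The training-set construction above can be sketched as follows. This is an illustrative approximation: a decision tree stands in for the unspecified rule learner, each ensemble member gets its own random feature partition, and an unweighted majority vote replaces the paper's weighted vote; the function names are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fit_pca_rotation_ensemble(X, y, n_members=10, k=2, random_state=0):
    """Per member: randomly split the features into k subsets, run PCA on
    each subset, concatenate the components into a rotated feature space,
    and train a tree there (a stand-in for a rule-based learner)."""
    rng = np.random.RandomState(random_state)
    members = []
    for _ in range(n_members):
        groups = np.array_split(rng.permutation(X.shape[1]), k)
        pcas = [PCA().fit(X[:, g]) for g in groups]
        Xr = np.hstack([p.transform(X[:, g]) for p, g in zip(pcas, groups)])
        members.append((groups, pcas, DecisionTreeClassifier(random_state=0).fit(Xr, y)))
    return members

def predict_pca_rotation_ensemble(members, X):
    votes = []
    for groups, pcas, tree in members:
        Xr = np.hstack([p.transform(X[:, g]) for p, g in zip(pcas, groups)])
        votes.append(tree.predict(Xr))
    # unweighted majority vote (the paper weights the votes)
    return (np.asarray(votes).mean(axis=0) >= 0.5).astype(int)
```

Diversity comes from the random feature partition rather than from resampling, which is the key difference from bagging and boosting that the abstract emphasizes.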

12.
A new approach to the design of a fuzzy-rule-based classifier capable of selecting informative features is discussed. Three basic stages of classifier construction are identified: feature selection, generation of the fuzzy rule base, and optimization of the parameters of the rule antecedents. At the first stage, several feature subsets are generated by discrete harmonic search using the wrapper scheme. The classifier structure is then formed by a rule-base generation algorithm that uses extreme feature values, and the optimal parameters of the fuzzy classifier are extracted from the training data using continuous harmonic search. The Akaike information criterion is used to identify the best-performing classifiers. The performance of the classifier was tested on real-world KEEL and KDD Cup 1999 datasets, and the proposed algorithms were compared with other fuzzy classifiers tested on the same datasets. Experimental results show the efficiency of the proposed approach and demonstrate that highly accurate classifiers can be constructed from relatively few features.

13.
In this work, we propose two novel classifiers for multi-class classification problems using mathematical programming optimisation techniques. A hyper-box-based classifier (Xu & Papageorgiou, 2009) that iteratively constructs hyper boxes to enclose samples of different classes has been adopted. We first propose a new solution procedure that updates the sample weights during each iteration, tweaking the model to favour the difficult samples in the next iteration and therefore achieving a better final solution. Through a number of real-world data classification problems, we demonstrate that the proposed refined classifier delivers consistently good classification performance, outperforming the original hyper box classifier and a number of other state-of-the-art classifiers. Furthermore, we introduce a simple data space partition method to reduce the computational cost of the proposed sample re-weighting hyper box classifier. The method partitions the original dataset into two disjoint regions and then trains a sample re-weighting hyper box classifier for each region. On several real-world datasets, we demonstrate that the data space partition considerably reduces the computational cost while maintaining the level of prediction accuracy.

14.
Applications like identifying different customers from their unique buying behaviours, or determining the ratings of a product given by users based on different sets of features, require classification using class-specific subsets of features. Most existing state-of-the-art classifiers for multivariate data use the complete feature set for classification regardless of class label. A decision tree classifier can produce class-wise subsets of features; however, none of these classifiers model the relationships between features, which may enhance classification accuracy. We call the class-specific subsets of features together with the features' interrelationships class signatures. In this work, we propose to map the original input space of multivariate data to a feature space characterized by connected graphs, as graphs can easily model entities, their attributes, and the relationships among attributes. Graphs are mostly used to model entities where they occur naturally, for example chemical compounds; they do not occur naturally in multivariate data, so extracting class signatures from multivariate data is a challenging task. We propose feature selection heuristics to obtain class-specific prominent subgraph signatures, along with two variants of a class-signature-based classifier: 1) maximum matching signature (gMM), and 2) score and size of matched signatures (gSM). The effectiveness of the proposed approach on real-world and synthetic datasets has been studied and compared with other established classifiers. Experimental results confirm the superiority of the proposed class-signature-based classifier on most of the datasets.

15.
To improve the performance of classifier ensembles, an ensemble method combining clustering with ordering-based pruning is proposed. First, the confusion matrix is used as a tool to quantify the diversity between base classifiers, and clustering partitions the classifiers into subsets. An ordering-based pruning algorithm is then proposed: starting from the classifier closest to the cluster center, the diversity matrix is dynamically weighted according to classifier distance, and the classifiers in each subset are pruned proportionally with the weighted diversity as the ordering criterion. Finally, the selected base classifiers are ensembled by voting. Comparisons with several ensemble methods on 10 datasets from the UCI repository show that the clustering-and-pruning selection method effectively improves the classification ability of the ensemble system.

16.
丁要军 《计算机应用》2015,35(12):3348-3351
To address the low accuracy of imbalanced network traffic classification, an improved rotation forest algorithm is proposed that augments rotation forest with the Bootstrap sampling of Bagging and a base-classifier selection step ranked by classification accuracy. First, the original training set is partitioned into feature subsets, each subset is sampled with Bagging, and the principal component coefficient matrices are generated by principal component analysis (PCA). Feature transformation is then applied to the original training set using the coefficient matrices to produce new training subsets, which are sampled again with Bagging to increase training-set diversity, and C4.5 base classifiers are trained on them. Finally, the base classifiers are evaluated on a test set, ranked and filtered by overall classification accuracy, and the more accurate classifiers are retained to produce a consensus result. Experiments on an imbalanced network traffic dataset evaluate C4.5, Bagging, rotation forest, and the improved rotation forest on precision and recall, and compare their time efficiency by model training and test time. The results show that the improved rotation forest reaches above 99.5% classification accuracy on WWW, Mail, Attack, and P2P protocol traffic, with higher recall than rotation forest, Bagging, and C4.5, and can be applied to network intrusion forensics, network security maintenance, and improving network service quality.

17.
For imbalanced classification on heterogeneous datasets, a classification method, the HVDM-Adaboost-KNN algorithm (heterogeneous value difference metric-AdaBoost-KNN), is proposed from three angles: dataset resampling, the ensemble learning algorithm, and weak classifier construction. The algorithm first balances the dataset with clustering, obtaining multiple balanced data subsets and building multiple sub-classifiers; the heterogeneous distance (HVDM) is used to compute the distance between two samples in a heterogeneous dataset, improving the classification performance of KNN; AdaBoost iterations then produce the final classifier. Eight UCI datasets are used to evaluate the algorithm's classification performance on imbalanced data. Experimental results show that, compared with AdaBoost and other algorithms, the F1, AUC, and G-mean metrics are all improved on heterogeneous imbalanced datasets.

18.
An Ensemble Classification Method for Microarray Data*   Total citations: 1 (self: 0, other: 1)
To address the low classification accuracy of existing ensemble methods on microarray data, a Bagging-PCA-SVM method is proposed. The method first applies Bootstrap resampling to the training set to build a large number of training subsets; feature selection and principal component analysis are then performed on each subset to remove noisy and redundant genes; finally, support vector machines serve as the classifiers and the class of a sample is predicted by majority voting. Tests on three datasets demonstrate the method's effectiveness and feasibility.
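The Bagging-PCA-SVM pipeline can be sketched as below. This is a simplified illustration: the per-subset feature (gene) selection step is omitted, leaving only bootstrap, PCA, SVM, and majority vote, and the function names and parameter defaults are our own choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fit_bagging_pca_svm(X, y, n_members=7, n_components=2, random_state=0):
    """Bootstrap the training set, reduce each replicate with PCA,
    and train one SVM per replicate."""
    rng = np.random.RandomState(random_state)
    members = []
    for _ in range(n_members):
        idx = rng.randint(0, len(X), size=len(X))  # bootstrap sample
        pca = PCA(n_components=n_components).fit(X[idx])
        clf = SVC(kernel="rbf", gamma="scale").fit(pca.transform(X[idx]), y[idx])
        members.append((pca, clf))
    return members

def predict_bagging_pca_svm(members, X):
    votes = np.asarray([clf.predict(pca.transform(X)) for pca, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote
```

Fitting a separate PCA per bootstrap replicate is what injects diversity into the SVMs; with microarray data, the preceding gene-selection step would keep the PCA tractable on thousands of features.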

19.
In financial time series forecasting, a problem we often encounter is how to maximize prediction accuracy given noisy financial data. In this study, we discuss the use of supervised neural networks as a meta-learning technique to design a financial time series forecasting system that addresses this problem. In this system, data sampling techniques are first used to generate different training subsets from the original datasets. On these subsets, different neural networks with different initial conditions or training algorithms are trained to form different prediction models, i.e., base models. Subsequently, to improve the efficiency of metamodel prediction, principal component analysis (PCA) is used as a pruning tool to generate an optimal set of base models. Finally, a neural-network-based nonlinear metamodel is produced by learning from the selected base models, so as to improve prediction accuracy. For illustration and verification, the proposed metamodel is evaluated on four typical financial time series. Empirical results reveal that the proposed neural-network-based nonlinear metamodeling technique is a very promising approach to financial time series forecasting.

20.
This paper proposes a new approach based on missing-value pattern discovery for classifying incomplete data. The approach is particularly designed for classifying datasets with a small number of samples and a high percentage of missing values, where available missing-value treatment approaches do not usually work well. Based on the pattern of the missing values, the proposed approach finds subsets of samples for which most of the features are available and trains a classifier for each subset, then combines the outputs of the classifiers. Subset selection is translated into a clustering problem, allowing the derivation of a mathematical framework for it. A trade-off is established between the computational complexity (number of subsets) and the accuracy of the overall classifier; to manage this trade-off, a numerical criterion is proposed for predicting overall performance. The proposed method is applied to seven datasets from the popular University of California, Irvine data mining archive and an epilepsy dataset from Henry Ford Hospital, Detroit, Michigan (eight datasets in total). Experimental results show that the classification accuracy of the proposed method is superior to that of the widely used multiple imputation method and four other methods, and that the level of superiority depends on the pattern and percentage of missing values.

