Similar Documents
20 similar documents retrieved.
1.
To effectively improve the generalization performance of support vector machines, two ensemble algorithms are proposed for training them. First, the effects of perturbing the input feature space and perturbing the model parameters on increasing the diversity among member classifiers are analyzed; then two ensemble training algorithms based on a double-perturbation mechanism are proposed. Their common feature is that the input feature space and the model parameters are perturbed simultaneously to generate member classifiers, which are then combined by majority voting. Experimental results show that, because both the bias and the variance components of the error are reduced, both algorithms significantly improve the generalization performance of support vector machines.
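A minimal sketch of the double-perturbation idea, assuming scikit-learn; the ensemble size, feature-subset size, and parameter ranges are illustrative choices, not taken from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = []
for _ in range(15):
    # Perturb the input feature space: each member sees a random feature subset ...
    feats = rng.choice(X.shape[1], size=12, replace=False)
    # ... and perturb the model parameters: random C and gamma per member.
    clf = SVC(C=10 ** rng.uniform(-1, 2), gamma=10 ** rng.uniform(-3, 0))
    clf.fit(X_tr[:, feats], y_tr)
    members.append((feats, clf))

# Combine member classifiers by majority voting.
votes = np.stack([clf.predict(X_te[:, feats]) for feats, clf in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy:", (pred == y_te).mean())
```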


3.
To address the degradation of classifier generalization caused by decomposing the training set in large-scale data classification, an ensemble learning algorithm based on parallel partitioning of the training set is proposed. It partitions the training set multiple times with clusters of parallel hyperplanes, and on each partition trains base classifiers with a modular support vector machine network algorithm. At test time, the outputs of the base classifiers are combined by majority voting. Experiments on three large-scale problems show that, without increasing training or testing time, the ensemble keeps the classifier's bias essentially unchanged while effectively reducing its variance, thereby effectively mitigating the loss of generalization caused by partitioning the training set.

4.
In recent years, ensemble learning methods have attracted much attention for their good generalization performance in multi-classifier systems. However, the base classifiers generated by traditional sampling methods are highly similar, and the resulting ensembles generalize poorly. To address this, a supervised-learning-based adaptive classifier fusion method, AEC_SL, is proposed. The method first partitions the training set into supervised sample clusters using a Gaussian mixture model clustering algorithm, then applies the random forest algorithm to each cluster to obtain diversified classifiers, ...

5.
Most existing steganalysis methods have weak generalization ability and cannot effectively detect unknown steganographic algorithms, so their classification accuracy drops sharply in practice. To address this problem, an image steganalysis method based on grouped convolution and snapshot ensembling (snapshot ensembling steganalysis network, SENet) is proposed. First, residual convolution blocks and grouped convolution blocks extract and exploit image features; second, the best-performing model in each training cycle is taken as a snapshot model; finally, the selected snapshot models are ensembled to classify images. By applying grouped convolution and snapshot ensembling, the method avoids both the high training cost of traditional ensembling and the limited generalization of a single classifier. Experimental results show that the method improves the accuracy of the steganalysis model and classifies effectively even when the training and test sets are mismatched, demonstrating strong model generalization.

6.
艾成豪, 高建华, 黄子杰. 《计算机工程》 (Computer Engineering), 2022, 48(7): 168-176+198
Code smells are software characteristics that violate basic design principles or coding conventions; their presence in source code increases the cost and difficulty of maintenance. Among code smell detection methods, machine learning achieves better performance than other approaches. To address the "curse of dimensionality" that training on a large number of features may cause, and the poor generalization of a single model, a code smell detection method driven by hybrid feature selection and ensemble learning is proposed. The weights of all features are computed with ReliefF, XGBoost feature importance, and the Pearson correlation coefficient, then fused; irrelevant features with low fused weights are removed to obtain a feature subset. A two-layer Stacking ensemble model is built: the first layer's base classifiers are three different tree models, and the second layer uses logistic regression as the meta-classifier; the two-layer ensemble combines the strengths of diverse models to enhance generalization. The feature subset is fed into the Stacking ensemble model to perform code smell classification and detection. Experimental results show that the method reduces feature dimensionality and, compared with the best base classifier in the first layer of the Stacking model, improves F-measure and G-mean by up to 1.46% and 0.87%, respectively.
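The two-layer structure described above maps directly onto scikit-learn's StackingClassifier. A minimal sketch with three tree models in the first layer and logistic regression as the meta-classifier; the toy data stands in for the paper's code smell features, and the hybrid feature-selection step is omitted:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: three different tree models; layer 2: logistic regression meta-classifier.
stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("stacking accuracy:", stack.score(X_te, y_te))
```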

7.
To balance the trade-off between diversity and accuracy in ensemble learning and improve the generalization of the learning system, a selective ensemble algorithm based on AdaBoost and matching pursuit is proposed. The basic idea is to integrate matching pursuit theory into the AdaBoost training process: the greedy iteration of matching pursuit is used to minimize the residual error between the target function and a linear combination of base classifiers, the weights of the base classifiers already trained by AdaBoost are updated according to this residual, and ensemble members are then selected by weight. Experimental results on public datasets show that the algorithm achieves high classification accuracy.


9.
Building on an analysis of heterogeneous classifier ensembling under the Stacking framework, the idea from homogeneous ensembles of varying the training samples to increase diversity among member classifiers is introduced, and a heterogeneous ensemble algorithm fusing DECORATE, called SDE, is proposed. At level-1 generalization, the DECORATE algorithm is used to add a certain proportion of artificial data to the level-1 training set, so that the generated level-1 member classifiers are diverse. Experiments show that the method outperforms traditional Stacking in classification accuracy.

10.
Diversity is an important factor in improving the generalization performance of classifier ensembles. Base classifiers are trained using an entropy-based diversity measure and the data-subset method, and ensemble learning is studied with hill-climbing selection, forward ensemble selection, backward ensemble selection, and clustering-based selection strategies for choosing individual models. Experimental results show that ensembles of more diverse individual models chosen by the selection strategies perform better; overall, the hill-climbing strategy outperforms forward and backward ensemble selection. Moreover, for models selected by clustering, the diversity among models changes little once the ensemble accuracy stabilizes, and the number of clusters also has some influence on both the ensemble performance and the diversity among ensemble models.
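A minimal sketch of hill-climbing (greedy forward) ensemble selection over a pool of models trained on data subsets, assuming scikit-learn. For simplicity, the selection criterion here is validation accuracy of the growing majority-vote ensemble, not the paper's entropy diversity measure:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Pool of individual models, each trained on a bootstrap data subset.
pool = []
for i in range(20):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=5, random_state=i).fit(X_tr[idx], y_tr[idx]))

preds = np.stack([m.predict(X_val) for m in pool])  # shape: (n_models, n_val)

def vote_acc(selected):
    """Validation accuracy of the majority vote over the selected members."""
    return ((preds[selected].mean(axis=0) > 0.5).astype(int) == y_val).mean()

# Hill climbing: start from the best single model, greedily add the model
# that improves validation accuracy, stop when no addition helps.
selected = [int(np.argmax([(p == y_val).mean() for p in preds]))]
improved = True
while improved:
    improved = False
    best = vote_acc(selected)
    for j in range(len(pool)):
        if j not in selected and vote_acc(selected + [j]) > best:
            selected, improved = selected + [j], True
            break
print("selected members:", selected, "val accuracy:", vote_acc(selected))
```

Because the search starts from the best individual model and only accepts strictly improving additions, the selected ensemble is never worse on the validation set than the best single model.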

11.
Currently, web spamming is a serious problem for search engines. It not only degrades the quality of search results by intentionally boosting undesirable web pages to users, but also causes the search engine to waste significant computational and storage resources on manipulating useless information. In this paper, we present a novel ensemble classifier for web spam detection, called USCS, which combines the clonal selection algorithm for feature selection with under-sampling for data balancing. The USCS ensemble classifier can automatically sample data and select sub-classifiers. First, the system converts the imbalanced training dataset into several balanced datasets using the under-sampling method. Second, it automatically selects an optimal feature subset for each sub-classifier using a customized clonal selection algorithm. Third, it builds several C4.5 decision tree sub-classifiers from these balanced datasets based on the selected features. Finally, these sub-classifiers are used to construct an ensemble decision tree classifier that classifies the examples in the testing data. Experiments on the WEBSPAM-UK2006 web spam dataset show that the proposed USCS ensemble web spam classifier achieves significantly better classification performance than several baseline systems and state-of-the-art approaches.
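The under-sampling and voting steps can be sketched as follows, assuming scikit-learn. CART trees stand in for C4.5, and random feature subsets stand in for the clonal-selection feature search; the ensemble size and subset sizes are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Imbalanced toy data: roughly 90% negatives, 10% positives.
X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]

members = []
for _ in range(9):
    # Under-sampling: all minority instances plus an equal-size majority sample.
    idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
    # Stand-in for clonal-selection feature search: a random feature subset.
    feats = rng.choice(X.shape[1], size=10, replace=False)
    members.append((feats, DecisionTreeClassifier(random_state=0).fit(X[idx][:, feats], y[idx])))

# Voting determines the final class of each sample.
votes = np.stack([m.predict(X[:, f]) for f, m in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("minority recall:", (pred[pos] == 1).mean())
```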

12.
To address imbalanced classification on heterogeneous datasets, a classification method for heterogeneous imbalanced data, the HVDM-Adaboost-KNN algorithm (heterogeneous value difference metric-Adaboost-KNN), is proposed from three angles: dataset resampling, the ensemble learning algorithm, and the construction of weak classifiers. The algorithm first balances the dataset with a clustering algorithm, obtaining multiple balanced data subsets and building multiple sub-classifiers; it uses a heterogeneous distance to measure the distance between two samples in a heterogeneous dataset, improving the classification performance of KNN; it then iterates with Adaboost to obtain the final classifier. Eight UCI datasets are used to evaluate the algorithm's classification performance on imbalanced data. Experimental results show that, compared with Adaboost and other algorithms, metrics such as F1, AUC, and G-mean all improve correspondingly on heterogeneous imbalanced datasets.

13.
A classification ensemble algorithm based on local random subspaces
Classifier ensemble learning is one of the hot topics in current machine learning research. However, for high-dimensional data, the classic fully random approach can hardly guarantee the performance of the sub-classifiers. This paper therefore proposes a classification ensemble algorithm based on local random subspaces. The algorithm first obtains an effective feature ranking with a feature selection method, then divides the ranking into several segments and samples randomly within each segment according to its sampling ratio, thereby improving both the performance and the diversity of the sub-classifiers. Experiments on 5 UCI datasets and 5 gene datasets show that the method outperforms single classifiers and, in most cases, classic classification ensemble methods.

14.
Hu Li, Ye Wang, Hua Wang, Bin Zhou. World Wide Web, 2017, 20(6): 1507-1525
Imbalanced streaming data is commonly encountered in real-world data mining and machine learning applications and has attracted much attention in recent years. In practice, imbalanced data and streaming data usually occur together; however, little research has studied the two types of data jointly. In this paper, we propose a multi-window based ensemble learning method for the classification of imbalanced streaming data. Three types of windows are defined to store the current batch of instances, the latest minority instances, and the ensemble classifier. The ensemble classifier consists of a set of the latest sub-classifiers together with the instances used to train each of them. All sub-classifiers are weighted before predicting the class labels of newly arriving instances, and new sub-classifiers are trained only when the precision falls below a predefined threshold. Extensive experiments on synthetic and real-world datasets demonstrate that the new approach can efficiently and effectively classify imbalanced streaming data, and generally outperforms existing approaches.
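A compact sketch of the multi-window scheme, assuming scikit-learn. Window sizes, the weighting rule, and the threshold are illustrative, and batch accuracy stands in for the paper's precision check:

```python
import numpy as np
from collections import deque
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)

batch = 200
minority_win = deque(maxlen=100)   # window of the latest minority instances
ensemble = deque(maxlen=5)         # window of the latest (weight, sub-classifier) pairs
THRESHOLD = 0.9

for start in range(0, len(X), batch):
    Xb, yb = X[start:start + batch], y[start:start + batch]
    if ensemble:
        # Weighted vote of the current sub-classifiers on the newly arriving batch.
        votes = sum(w * m.predict(Xb) for w, m in ensemble) / sum(w for w, _ in ensemble)
        acc = ((votes > 0.5).astype(int) == yb).mean()
    else:
        acc = 0.0
    if acc < THRESHOLD:
        # Train a new sub-classifier on the batch plus the stored minority instances.
        extra = np.array(minority_win) if minority_win else np.empty((0, X.shape[1]))
        Xt = np.vstack([Xb, extra])
        yt = np.concatenate([yb, np.ones(len(extra), dtype=int)])
        m = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xt, yt)
        ensemble.append(((m.predict(Xb) == yb).mean(), m))  # weight = accuracy on the batch
    minority_win.extend(Xb[yb == 1])

print("sub-classifiers kept:", len(ensemble))
```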

15.
Bagging, boosting, rotation forest, and random subspace methods are well-known resampling ensemble methods that generate and combine diverse learners using the same learning algorithm for the base classifiers. Boosting and rotation forest are considered stronger than bagging and random subspace methods on noise-free data. However, there are strong empirical indications that bagging and random subspace methods are much more robust than boosting and rotation forest in noisy settings. For this reason, in this work we build an ensemble of bagging, boosting, rotation forest, and random subspace ensembles, with 6 sub-classifiers in each, and use a voting methodology for the final prediction. We compared this technique with plain bagging, boosting, rotation forest, and random subspace ensembles of 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets; the proposed technique had better accuracy in most cases.
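A reduced sketch of the idea, assuming scikit-learn. Rotation forest has no scikit-learn implementation, so only the bagging, boosting, and random-subspace ensembles are combined here, each with 6 sub-classifiers as in the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

vote = VotingClassifier([
    ("bagging", BaggingClassifier(DecisionTreeClassifier(), n_estimators=6, random_state=0)),
    ("boosting", AdaBoostClassifier(n_estimators=6, random_state=0)),
    # Random subspace method: resample features instead of samples.
    ("subspace", BaggingClassifier(DecisionTreeClassifier(), n_estimators=6,
                                   max_features=0.5, bootstrap=False, random_state=0)),
])
vote.fit(X_tr, y_tr)
print("voting accuracy:", vote.score(X_te, y_te))
```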

16.
Clustering mixed data is an important problem in cluster analysis. Existing mixed-data clustering algorithms mostly cluster on similarity measures computed over all samples, so they are inefficient when clustering large-scale data. Accordingly, a new sampling strategy is designed, and on that basis a sampling-based clustering ensemble algorithm for large-scale mixed data is proposed. The algorithm clusters each of the multiple sample subsets obtained with the new sampling strategy and integrates the results into a final clustering. Experiments show that, compared with an improved K-prototypes algorithm, the algorithm is significantly more efficient while its cluster validity indices remain essentially the same.

17.
The aim of this paper is to propose a new hybrid data mining model, based on a combination of various feature selection and ensemble learning classification algorithms, to support the decision-making process. The model is built in several stages. In the first stage, the initial dataset is preprocessed; apart from applying different preprocessing techniques, we paid great attention to feature selection. Five different feature selection algorithms were applied, and their results, scored by the ROC and accuracy measures of a logistic regression algorithm, were combined using different voting types. We also propose a new voting method, called if_any, that outperformed all other voting methods as well as every single feature selection algorithm's results. In the next stage, four different classification algorithms, namely a generalized linear model, support vector machine, naive Bayes, and decision tree, were run on the dataset obtained from the feature selection process. These classifiers were combined into eight different ensemble models using soft voting. Experimental results on a real dataset show that the hybrid model based on features selected by if_any voting, together with the ensemble GLM + DT model, achieves the highest performance and outperforms all other ensemble and single-classifier models.
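The if_any voting rule itself is simple to state: a feature survives if at least one selector votes for it, i.e. the union of the selections, in contrast to majority voting. A sketch over hypothetical selector outputs (the five sets below are invented for illustration, not from the paper):

```python
# Hypothetical outputs of five feature-selection algorithms: indices of kept features.
selections = [
    {0, 2, 5},
    {2, 5, 7},
    {1, 2, 5},
    {5, 9},
    {2, 5, 11},
]

# Majority voting keeps a feature chosen by more than half of the selectors ...
majority = {f for s in selections for f in s
            if sum(f in sel for sel in selections) > len(selections) / 2}
# ... while if_any keeps any feature chosen by at least one selector (the union).
if_any = set().union(*selections)

print("majority:", sorted(majority))  # [2, 5]
print("if_any:", sorted(if_any))      # [0, 1, 2, 5, 7, 9, 11]
```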

18.
To address imbalanced classification and the "curse of dimensionality" in web spam detection, a binary classifier algorithm based on random forest (RF) and under-sampling ensembling is proposed. First, under-sampling splits the majority class of the training set into several subsets, each of which is merged with the minority class to form multiple balanced training subsets; then a random forest classifier is trained on each balanced subset; finally, the multiple random forest classifiers classify the test set, and the final class of each test sample is determined by voting. Experiments on the WEBSPAM UK-2006 dataset show that for web spam detection this ensemble classifier outperforms the random forest algorithm and its Bagging and Adaboost ensembles, improving accuracy, F1-measure, and the area under the ROC curve (AUC) by at least 14%, 13%, and 11%, respectively. Compared with the results of the winning team of the Web Spam Challenge 2007, the ensemble classifier improves F1-measure by at least 1% and achieves the best AUC.

19.
For remote sensing images that are rapidly growing in volume and complex in data type, accurate and generally applicable classification is an urgent problem. A rotation radial basis function neural network model is proposed for processing remote sensing imagery. Through feature transformation of the input data, the full feature set is divided into several feature subsets; these new subsets are processed with a PCA (principal component analysis) transform, and the resulting coefficients are used to alter the training samples, increasing the diversity among base classifiers and improving classification accuracy. Taking the Zhalong wetland as the study area, the algorithm is compared with other methods; the results show that the proposed method produces more accurate classifications, with higher generalization accuracy and less overfitting.

20.
A new optimized classification algorithm, assembled from neural networks based on Ordinary Least Squares (OLS), is established here. When recognizing complex high-dimensional data with neural networks, network design is a challenge, and a single network model can hardly achieve satisfactory recognition accuracy. First, feature dimension reduction is carried out so that network design is more convenient. An Elman neural network based on PCA is taken as sub-classifier I; its recognition precision is relatively high, but its convergence rate is unsatisfactory. An RBF neural network based on factor analysis is taken as sub-classifier II; it converges fast, but its recognition precision is relatively low. To compensate for these deficiencies, ensemble learning is carried out over the two sub-classifiers, and the optimal weight of each sub-classifier is determined by the OLS principle, yielding the assembled optimized classification algorithm; to some extent, this makes up for the information loss caused by dimensionality reduction. Finally, the model is validated by case analysis.
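The OLS weighting step can be sketched in a few lines of numpy: stack the two sub-classifiers' outputs as columns and solve least squares against the targets. The toy outputs below are synthetic stand-ins for the Elman and RBF network scores, not data from the paper's case study:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)     # true labels
# Synthetic sub-classifier scores: classifier I is more accurate, II is noisier.
out1 = y + rng.normal(0, 0.2, 200)            # sub-classifier I (Elman + PCA stand-in)
out2 = y + rng.normal(0, 0.6, 200)            # sub-classifier II (RBF + factor analysis stand-in)

# OLS principle: find weights w minimizing ||y - [out1 out2] @ w||^2.
A = np.column_stack([out1, out2])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
combined = A @ w
print("OLS weights:", w)

acc = ((combined > 0.5) == (y > 0.5)).mean()
print("combined accuracy:", acc)
```

The least-squares solution naturally assigns a larger weight to the less noisy sub-classifier, which is the compensation effect the abstract describes.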
