Similar Literature
Found 20 similar documents.
1.
To address the "curse of dimensionality" and the imbalanced-classification problem in web spam detection, a binary-classifier ensemble algorithm based on immune-clonal feature selection and under-sampling (US) is proposed. First, under-sampling splits the majority class of the training set into several subsets whose sizes are close to that of the minority class, and each subset is merged with the minority-class samples to form multiple balanced training subsets. Next, an immune clonal algorithm is designed to select several optimal feature subsets. The balanced subsets are then projected onto these optimal feature subsets, producing multiple views of the balanced data. Finally, random forest (RF) classifiers label the test samples, and simple majority voting determines each test sample's final class. Experiments on the WEBSPAM UK-2006 dataset show that, applied to web spam detection, the ensemble improves accuracy, F1-measure and AUC by more than 11% compared with the random forest algorithm and its Bagging and AdaBoost ensembles; compared with the best previously reported results, it improves the F1-measure by 2% and achieves the best AUC.
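A minimal Python sketch of the under-sampling ensemble idea described above (not the authors' code): the majority class is split into minority-sized chunks, one random forest is trained per balanced subset, and predictions are majority-voted. The immune-clonal feature selection step is approximated by a simple univariate selector, which is purely an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=50, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

maj, mino = X_tr[y_tr == 0], X_tr[y_tr == 1]
rng = np.random.default_rng(0)
order = rng.permutation(len(maj))
n_chunks = len(maj) // len(mino)

models = []
for chunk in np.array_split(order, n_chunks):
    Xb = np.vstack([maj[chunk], mino])              # one balanced training subset
    yb = np.r_[np.zeros(len(chunk)), np.ones(len(mino))]
    sel = SelectKBest(f_classif, k=20).fit(Xb, yb)  # stand-in for immune-clonal FS
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    models.append((sel, rf.fit(sel.transform(Xb), yb)))

# simple majority vote over the per-subset random forests
votes = np.mean([rf.predict(sel.transform(X_te)) for sel, rf in models], axis=0)
y_pred = (votes >= 0.5).astype(int)
```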

2.
Web spam detection is one of the major challenges for search engines. This paper proposes an ensemble learning method based on genetic programming (GPENL) to detect web spam. The method first draws t different training sets from the original training set by under-sampling; then c different classification algorithms are trained on the t training sets to obtain t*c base classifiers; finally, genetic programming is used to derive the combination of the t*c base classifiers. The new method not only fuses under-sampling with ensemble learning to improve classification performance on imbalanced datasets, but also conveniently integrates base classifiers of different types. Experiments on the WEBSPAM-UK2006 dataset show that GPENL improves classification performance for both homogeneous and heterogeneous ensembles, with heterogeneous ensembles being more effective; GPENL achieves a higher F-measure than AdaBoost, Bagging, RandomForest, majority-voting ensembles, the EDKC algorithm and the method based on Prediction Spamicity.

3.
To address the mildly imbalanced classification problem in web spam detection, three ensemble classifier algorithms based on random under-sampling are proposed: single random under-sampling without replacement (RUS-once), multiple random under-sampling without replacement (RUS-multiple) and random under-sampling with replacement (RUS-replacement). First, one of the random under-sampling techniques converts the training set into balanced sample sets; then a classification and regression tree (CART) classifier is trained on each balanced set; finally, simple majority voting builds the ensemble classifier that labels the test samples. Experiments show that all three random under-sampling ensembles achieve good classification results, with RUS-multiple and RUS-replacement outperforming RUS-once. Compared with CART and its Bagging and AdaBoost ensembles, RUS-multiple and RUS-replacement improve the AUC by about 10% on the WEBSPAM UK-2006 dataset and by about 25% on the WEBSPAM UK-2007 dataset; compared with the best previously reported results, RUS-multiple and RUS-replacement achieve the best AUC.
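A rough sketch (my own illustration, not the paper's code) of the two better-performing variants: draw several minority-sized samples of the majority class either without replacement (RUS-multiple) or with replacement (RUS-replacement), train a CART-style decision tree on each balanced set, and majority-vote the trees.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def rus_ensemble(X, y, n_rounds=11, with_replacement=False, seed=0):
    """Train one tree per randomly under-sampled balanced set (label 1 = minority)."""
    maj, mino = X[y == 0], X[y == 1]
    trees = []
    for i in range(n_rounds):
        maj_sample = resample(maj, replace=with_replacement,
                              n_samples=len(mino), random_state=seed + i)
        Xb = np.vstack([maj_sample, mino])
        yb = np.r_[np.zeros(len(mino)), np.ones(len(mino))]
        trees.append(DecisionTreeClassifier(random_state=i).fit(Xb, yb))
    return trees

def vote(trees, X):
    # simple majority vote over the per-round trees
    return (np.mean([t.predict(X) for t in trees], axis=0) >= 0.5).astype(int)
```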

4.
To address the imbalanced classification and "curse of dimensionality" problems in web spam detection, a binary-classifier ensemble algorithm based on random forest (RF) and under-sampling is proposed. First, under-sampling splits the majority class of the training set into several subsets, each of which is merged with the minority-class samples to form multiple balanced training subsets; then a random forest classifier is trained on each balanced subset; finally, the random forest classifiers label the test set, and voting determines each test sample's final class. Experiments on the WEBSPAM UK-2006 dataset show that, for web spam detection, the ensemble outperforms the random forest algorithm and its Bagging and AdaBoost ensembles, improving accuracy, F1-measure and area under the ROC curve (AUC) by at least 14%, 13% and 11%, respectively. Compared with the winning result of the Web Spam Challenge 2007, the ensemble improves the F1-measure by at least 1% and achieves the best AUC.

5.
Feature selection is the process of choosing the relevant subset of features from a high-dimensional dataset to enhance the performance of the classifier. Much research has been carried out on feature selection. Algorithms such as Naïve Bayes (NB), decision trees, and genetic algorithms are applied to high-dimensional datasets to select the relevant features and to increase computational speed. The proposed model presents a solution for feature selection using ensemble classifier algorithms. The proposed algorithm combines minimum redundancy maximum relevance (mRMR) and the forest optimization algorithm (FOA). Ensemble-based algorithms such as support vector machine (SVM), K-nearest neighbor (KNN), and NB are further used to enhance the performance of the classifier. The mRMR-FOA is used to select the relevant features from the various datasets, and a 21% to 24% improvement is recorded in the feature selection. The ensemble classifier algorithms further improve the performance of the algorithm and provide an accuracy of 96%.
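A hedged sketch of the mRMR idea (greedy relevance-minus-redundancy selection, with redundancy approximated by mean absolute correlation) followed by an SVM/KNN/NB voting ensemble. The forest optimization algorithm (FOA) step of the paper is not reproduced here; the selector and ensemble choices below are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def mrmr(X, y, k):
    """Greedy mRMR-style selection: maximize relevance, penalize redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    remaining = set(range(X.shape[1])) - set(selected)
    while len(selected) < k and remaining:
        def score(j):
            # redundancy: mean |correlation| with the features already chosen
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# feats = mrmr(X_train, y_train, k=15)
ensemble = VotingClassifier([('svm', SVC()), ('knn', KNeighborsClassifier()),
                             ('nb', GaussianNB())], voting='hard')
# ensemble.fit(X_train[:, feats], y_train); ensemble.predict(X_test[:, feats])
```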

6.
A Classification Ensemble Algorithm Based on Local Random Subspace
Classifier ensemble learning is one of the hot topics in current machine learning research. However, for high-dimensional data, the classical fully random subspace method can hardly guarantee the performance of the sub-classifiers. This paper therefore proposes a classification ensemble algorithm based on local random subspaces. The algorithm first applies feature selection to obtain an effective feature ranking, then divides the ranked features into several segments and samples features randomly from each segment according to per-segment sampling ratios, thereby improving both the performance and the diversity of the sub-classifiers. Experiments on 5 UCI datasets and 5 gene datasets show that the method outperforms a single classifier and, in most cases, outperforms classical classification ensemble methods.
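A sketch of the "local random subspace" idea under my own assumptions (mutual information as the ranking score, decision trees as sub-classifiers, illustrative segment ratios): rank the features, split the ranking into segments, and build each sub-classifier on features sampled from every segment according to per-segment ratios, so strong features are favoured while diversity between sub-classifiers is preserved.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

def local_random_subspaces(X, y, n_classifiers=20, n_segments=4,
                           ratios=(0.6, 0.3, 0.2, 0.1), seed=0):
    rng = np.random.default_rng(seed)
    ranking = np.argsort(-mutual_info_classif(X, y, random_state=seed))
    segments = np.array_split(ranking, n_segments)   # best features in segment 0
    members = []
    for _ in range(n_classifiers):
        feats = np.concatenate([
            rng.choice(seg, size=max(1, int(r * len(seg))), replace=False)
            for seg, r in zip(segments, ratios)])     # sample each segment locally
        clf = DecisionTreeClassifier(random_state=seed).fit(X[:, feats], y)
        members.append((feats, clf))
    return members

def predict_vote(members, X):
    votes = np.array([clf.predict(X[:, feats]) for feats, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```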

7.
Imbalanced sample distributions are common in chemical process fault diagnosis. When fault-diagnosis classifiers are trained on imbalanced samples, recognition tends to be biased toward the majority class, so the normal state is easy to recognize while the fault states of greater concern are hard to diagnose. To address this problem, this paper proposes a fault diagnosis algorithm based on the Easy Ensemble idea, EEPS (Easy Ensemble based principal component analysis–support vector machine). Under-sampling draws subsets of the majority class to build several new balanced sample sets; on each set, principal component analysis (PCA) is used for feature extraction and a support vector machine (SVM) is trained, yielding multiple SVM-based fault-diagnosis classifiers; the AdaBoost algorithm then combines them into the final classifier, improving diagnosis accuracy. The method is applied to the Tennessee Eastman (TE) chemical process, and the experimental results show that EEPS effectively improves the diagnosis performance and prediction ability of the classifier on imbalanced datasets.
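An illustrative sketch only: EasyEnsemble-style under-sampling with a PCA+SVM pipeline per balanced subset. The paper combines the sub-classifiers with AdaBoost; here they are simply averaged to keep the sketch short, and the scaler and parameter choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.utils import resample

def eeps_like(X, y, n_subsets=10, n_components=10, seed=0):
    """One PCA+SVM pipeline per under-sampled balanced set (label 1 = fault/minority)."""
    maj, mino = X[y == 0], X[y == 1]
    models = []
    for i in range(n_subsets):
        maj_s = resample(maj, replace=False, n_samples=len(mino), random_state=seed + i)
        Xb = np.vstack([maj_s, mino])
        yb = np.r_[np.zeros(len(mino)), np.ones(len(mino))]
        pipe = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                             SVC(kernel='rbf', probability=True))
        models.append(pipe.fit(Xb, yb))
    return models

def predict(models, X):
    # average the sub-classifier probabilities instead of the paper's AdaBoost step
    proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (proba >= 0.5).astype(int)
```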

8.
Li Yong (李勇), Journal of Computer Applications (《计算机应用》), 2014, 34(8): 2291-2294.
Software defect prediction is an important way to improve testing efficiency and ensure software reliability. To improve the accuracy of software defect prediction, a prediction model combining under-sampling with an ensemble of decision-tree classifiers is proposed. Considering the class imbalance of software defect data, the sampling rate is first determined from the imbalance ratio of the data and under-sampling is performed to rebalance it; then several decision-tree sub-classifiers are trained using the Bagging random-sampling principle; finally, the prediction model is produced by majority voting. Simulation experiments were carried out on the public NASA software defect prediction datasets. The results show that, compared with three baseline methods, the proposed model reduces the probability of false alarm (PF) by more than 10% while maintaining the detection rate, and the overall evaluation metrics improve significantly. The model therefore offers a low false-alarm rate together with high prediction accuracy and stability.

9.
For the imbalanced classification problem on heterogeneous datasets, this paper proposes a classification method, HVDM-Adaboost-KNN (heterogeneous value difference metric-Adaboost-KNN), that works from three angles: dataset resampling, the ensemble learning algorithm and the construction of weak classifiers. The algorithm first balances the dataset with a clustering algorithm, obtaining several balanced data subsets and building several sub-classifiers; the heterogeneous value difference metric is used to compute the distance between two samples in a heterogeneous dataset, improving the classification performance of the KNN algorithm; AdaBoost then iterates to obtain the final classifier. Eight UCI datasets are used to evaluate the algorithm's performance on imbalanced data. The experimental results show that, compared with AdaBoost and other algorithms, metrics such as F1, AUC and G-mean all improve on heterogeneous imbalanced datasets.

10.
Software defect prediction identifies potentially defective program modules in the project under test in advance, which helps allocate testing resources rationally and ultimately improves the quality of the software product. However, because defect prediction datasets are collected with a large number of metrics related to code complexity or the development process, they suffer from the curse of dimensionality. Drawing on search-based software engineering, this paper proposes a novel search-based wrapper feature selection framework, SBFS. In its implementation, the SMOTE method is first used to alleviate the class imbalance in the dataset, and then a genetic-algorithm-based feature selection method chooses the optimal feature subset on the training set. In the empirical study, the NASA datasets are used as the benchmark, and the baselines are the forward-selection wrapper method FW, the backward-selection wrapper method BW, and Origin, which performs no feature selection. The results show that SBFS is no worse than Origin in 90% of cases, no worse than BW in 82.3% of cases, and no worse than FW in 69.3% of cases. In addition, we find that with a decision-tree classifier, applying SMOTE improves model performance in 71% of cases, whereas with naive Bayes and logistic regression classifiers, applying SMOTE improves predictive performance in only 47% and 43% of cases, respectively.
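A sketch of a GA-style wrapper search over feature masks, with my own simplified operators rather than the paper's SBFS implementation: fitness is the cross-validated accuracy of a decision tree on the masked features. The SMOTE rebalancing step is assumed to have been applied beforehand and is not shown.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def ga_feature_search(X, y, pop_size=20, generations=15, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5                  # random boolean feature masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = DecisionTreeClassifier(random_state=seed)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(-scores)[:pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < 0.02                     # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in pop])
    return pop[int(np.argmax(scores))]                      # best feature mask found
```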

11.
Anomaly detection systems play a vital role in cyberspace security, providing effective protection for networks. For complex network traffic, a traditional single classifier often cannot achieve both high detection precision and strong generalization. Moreover, anomaly detection models built on the full feature set are easily disturbed by redundant features, which harms detection efficiency and accuracy. To address these problems, this paper proposes a model that combines feature selection based on average feature importance with ensemble learning. Decision tree (DT), random forest (RF) and extra trees (ET) are chosen as base classifiers to build a voting ensemble, and the average feature importance of the base classifiers, computed from the Gini index, is used for feature selection. Experimental evaluation on several datasets shows that the proposed ensemble model outperforms classical ensemble learning models and other well-known anomaly detection ensembles, and that the proposed feature selection method based on average feature importance further raises the ensemble's accuracy by about 0.13% on average while cutting training time by about 30% on average.
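A compact sketch, under my own naming and parameter choices, of the idea described above: average the Gini-based feature_importances_ of a decision tree, a random forest and an extra-trees model, keep the highest-ranked features, then build a hard-voting ensemble of the same three model types on the reduced feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier

def select_by_mean_importance(X, y, keep=20):
    models = [DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0),
              ExtraTreesClassifier(n_estimators=100, random_state=0)]
    # average the Gini-based importances of the three fitted models
    importances = np.mean([m.fit(X, y).feature_importances_ for m in models], axis=0)
    return np.argsort(-importances)[:keep]

# feats = select_by_mean_importance(X_train, y_train, keep=20)
voter = VotingClassifier([('dt', DecisionTreeClassifier(random_state=0)),
                          ('rf', RandomForestClassifier(n_estimators=100, random_state=0)),
                          ('et', ExtraTreesClassifier(n_estimators=100, random_state=0))],
                         voting='hard')
# voter.fit(X_train[:, feats], y_train)
```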

12.
Credit scoring focuses on the development of empirical models to support the financial decision‐making processes of financial institutions and credit industries. It makes use of applicants' historical data and statistical or machine learning techniques to assess the risk associated with an applicant. However, the historical data may consist of redundant and noisy features that affect the performance of credit scoring models. The main focus of this paper is to develop a hybrid model, combining feature selection and a multilayer ensemble classifier framework, to improve the predictive performance of credit scoring. The proposed hybrid credit scoring model is modeled in three phases. The initial phase constitutes preprocessing and assigns ranks and weights to classifiers. In the next phase, the ensemble feature selection approach is applied to the preprocessed dataset. Finally, in the last phase, the dataset with the selected features is used in a multilayer ensemble classifier framework. In addition, a classifier placement algorithm based on the Choquet integral value is designed, as the classifier placement affects the predictive performance of the ensemble framework. The proposed hybrid credit scoring model is validated on real‐world credit scoring datasets, namely, Australian, Japanese, German‐categorical, and German‐numerical datasets.

13.
The aim of this paper is to propose a new hybrid data mining model based on the combination of various feature selection and ensemble learning classification algorithms, in order to support the decision-making process. The model is built in several stages. In the first stage, the initial dataset is preprocessed; apart from applying different preprocessing techniques, we paid great attention to feature selection. Five different feature selection algorithms were applied, and their results, based on the ROC and accuracy measures of a logistic regression algorithm, were combined using different voting types. We also propose a new voting method, called if_any, that outperformed all other voting methods as well as the results of any single feature selection algorithm. In the next stage, four different classification algorithms, including generalized linear model, support vector machine, naive Bayes and decision tree, were trained on the dataset obtained from the feature selection process. These classifiers were combined into eight different ensemble models using soft voting. Using a real dataset, the experimental results show that the hybrid model based on features selected by the if_any voting method and the ensemble GLM + DT model achieves the highest performance and outperforms all other ensemble and single classifier models.
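A sketch of the two ideas in this abstract, with my own selector choices standing in for the paper's five: combine the boolean masks of several feature selectors with an "if_any" rule (keep a feature if any selector keeps it), then soft-vote a GLM-style model (logistic regression here) with a decision tree on the selected features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

def if_any_mask(X, y, k=15):
    selectors = [SelectKBest(f_classif, k=k),
                 SelectKBest(mutual_info_classif, k=k)]
    masks = [s.fit(X, y).get_support() for s in selectors]
    return np.any(masks, axis=0)          # keep a feature if ANY selector keeps it

# mask = if_any_mask(X_train, y_train)
glm_dt = VotingClassifier([('glm', LogisticRegression(max_iter=1000)),
                           ('dt', DecisionTreeClassifier(random_state=0))],
                          voting='soft')   # soft voting over predicted probabilities
# glm_dt.fit(X_train[:, mask], y_train)
```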

14.
In this paper, we propose a new feature selection method called class-dependency-based feature selection for dimensionality reduction of the macular disease dataset derived from pattern electroretinography (PERG) signals. To diagnose macular disease, we use class-dependency-based feature selection as the feature selection process, fuzzy weighted pre-processing as the weighting process, and a decision tree classifier for decision making. The proposed system consists of three parts. First, we reduce the macular disease dataset from 63 features to 9 using class-dependency-based feature selection, which we developed. Second, the 9-feature macular disease dataset is weighted using fuzzy weighted pre-processing. Finally, the decision tree classifier is applied to the PERG signals to distinguish between healthy eyes and diseased eyes (macular diseases). The class-dependency-based feature selection, fuzzy weighted pre-processing and decision tree classifier reach classification accuracies of 96.22%, 96.27% and 96.30% using 5-, 10- and 15-fold cross-validation, respectively. The results confirm that a medical decision-making system based on class-dependency-based feature selection, fuzzy weighted pre-processing and a decision tree classifier has potential for detecting macular disease, and they point to the feasibility of designing a new intelligent diagnosis assistance system.

15.
Imbalanced data is ubiquitous in real life, and most traditional classification algorithms assume a balanced class distribution or equal misclassification costs, so minority-class samples tend to be misclassified when such data is processed. To address this problem, a new imbalanced-data classification algorithm based on cost-sensitive ensemble learning, NIBoost (New Imbalanced Boost), is proposed on the basis of cost-sensitive theory. First, in each iteration an oversampling algorithm adds a number of new minority-class samples to balance the dataset, and a classifier is trained on this new dataset; next, the classifier labels the dataset, yielding each sample's predicted label and the classifier's error rate; finally, the classifier's weight coefficient and the new weight of each sample are computed from the error rate and the predicted labels. Using decision trees and naive Bayes as weak classifiers, experiments on UCI datasets show that, with a decision tree as the base classifier, the F-value improves by up to 5.91 percentage points, the G-mean by up to 7.44 percentage points and the AUC by up to 4.38 percentage points compared with the RareBoost algorithm, so the new algorithm has clear advantages for imbalanced-data classification.
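A rough sketch of an NIBoost-style loop based on my reading of the abstract (the exact weight-update formulas are not given there, so AdaBoost-style updates are assumed): each round, duplicate minority samples to rebalance the training set, fit a weak learner with the current sample weights, compute its weighted error, derive a classifier weight, and re-weight the samples before the next round.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def niboost_like(X, y, n_rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(len(y), 1.0 / len(y))
    learners, alphas = [], []
    for _ in range(n_rounds):
        # oversample the minority class (label 1 assumed minority) for this round
        mino_idx = np.flatnonzero(y == 1)
        extra = rng.choice(mino_idx, size=max(0, np.sum(y == 0) - len(mino_idx)),
                           replace=True)
        idx = np.concatenate([np.arange(len(y)), extra])
        clf = DecisionTreeClassifier(max_depth=3, random_state=seed)
        clf.fit(X[idx], y[idx], sample_weight=w[idx])
        pred = clf.predict(X)
        err = np.clip(np.sum(w[pred != y]) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)      # classifier weight coefficient
        w *= np.exp(alpha * (pred != y))           # upweight misclassified samples
        w /= w.sum()
        learners.append(clf); alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X):
    score = sum(a * (2 * clf.predict(X) - 1) for clf, a in zip(learners, alphas))
    return (score >= 0).astype(int)
```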

16.
To further improve the generalization ability of maritime radar target recognition in complex interference environments, a dynamic-selection ensemble algorithm based on k-medoids clustering and the random reference classifier (RRC), KMRRC, is proposed. Resampling is used to generate multiple base classifiers; the base classifiers are then grouped into clusters according to a pairwise diversity measure, and an RRC model is built for each base classifier on a validation set; finally, the RRC dynamically selects the most competent base classifiers from each cluster for the ensemble decision. The parameters of KMRRC are determined by tuning experiments, after which Weka is called through its Java API to compare KMRRC with nine commonly used ensemble algorithms and the base classification algorithms on a self-built full-polarization high-resolution range profile (HRRP) target sample library and 17 UCI datasets; the influence of the choice of diversity measure on KMRRC is also studied. The experiments verify the effectiveness of the proposed algorithm for maritime radar target recognition.

17.
This paper presents a hybrid approach to medical decision support systems based on feature selection, fuzzy weighted pre-processing and the artificial immune recognition system (AIRS). We use the heart disease and hepatitis disease datasets taken from the UCI machine learning database as the medical datasets. The artificial immune recognition system has shown effective performance on several problems, such as machine learning benchmark problems and medical classification problems like breast cancer, diabetes, and liver disorder classification. The proposed approach consists of three stages. In the first stage, the dimensions of the heart disease and hepatitis disease datasets are reduced from 13 and 19 to 9 in the feature selection (FS) sub-program by means of the C4.5 decision tree algorithm (CBA program), respectively. In the second stage, the heart disease and hepatitis disease datasets are normalized to the range [0,1] and weighted via fuzzy weighted pre-processing. In the third stage, the weighted input values obtained from fuzzy weighted pre-processing are classified using the AIRS classifier. The obtained classification accuracies of our system are 92.59% and 81.82% using a 50-50% training-test split for the heart disease and hepatitis disease datasets, respectively. With these results, the proposed method can be used in medical decision support systems.

18.

Twitter has become a trending microblogging and social media platform for news and discussions, and its dramatic growth has also set off a dramatic increase in spam on the platform. For supervised machine learning, a labeled Twitter dataset is always needed, so it is desirable to design a semi-supervised labeling technique for newly prepared, recent datasets. Preparing labeled datasets requires a great deal of human effort; this motivated us to propose an efficient approach for preparing labeled datasets so that time can be saved and human errors avoided. Our proposed approach relies on features readily available in real time for better performance and wider applicability. This work collects a user's most recent tweets using Twitter streaming to prepare a recent Twitter dataset. A semi-supervised machine learning algorithm based on the self-training technique was then designed to label the tweets, with a semi-supervised support vector machine and a semi-supervised decision tree as the base classifiers in the self-training technique. Further, the authors apply the K-means clustering algorithm to the tweets based on tweet content. The principled novel approach is an ensemble of semi-supervised and unsupervised learning, and it was found that the semi-supervised algorithms are more accurate in prediction than the unsupervised ones. To assign labels to the tweets effectively, the authors implement voting in this approach: the label predicted by the majority voting classifier is the label assigned to the tweet dataset. A maximum accuracy of 99.0% is reported in this paper using the majority voting classifier for spam labeling.
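A sketch of the labeling pipeline as I read the abstract (feature extraction from tweet text is assumed to have been done already, and the cluster-to-label mapping is my own choice): self-training with an SVM and with a decision tree, where unlabeled tweets carry the label -1, plus a K-means clustering whose clusters are mapped to the majority known label they contain; the final tweet label is the majority vote of the three predictions.

```python
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

def label_tweets(X, y_partial, seed=0):          # y_partial: -1 marks unlabeled tweets
    st_svm = SelfTrainingClassifier(SVC(probability=True, random_state=seed))
    st_dt = SelfTrainingClassifier(DecisionTreeClassifier(random_state=seed))
    preds = [m.fit(X, y_partial).predict(X) for m in (st_svm, st_dt)]

    km = KMeans(n_clusters=2, random_state=seed, n_init=10).fit(X)
    labeled = y_partial != -1
    # map each cluster to the majority label among its already-labeled tweets
    cluster_label = {c: np.bincount(y_partial[labeled & (km.labels_ == c)]).argmax()
                     for c in range(2)}
    preds.append(np.array([cluster_label[c] for c in km.labels_]))

    votes = np.vstack(preds)
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority of the three voters
```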


19.
Huang Xiaojuan (黄晓娟), Zhang Li (张莉), Journal of Computer Applications (《计算机应用》), 2015, 35(10): 2798-2802.
Multi-class support vector machine recursive feature elimination (MSVM-RFE) has been proposed for multi-class cancer classification, but it only fuses the weights of all sub-classifiers and ignores each sub-classifier's own ability to select features. To improve the recognition rate on multi-class problems, an improved multi-class support vector machine recursive feature elimination method (MMSVM-RFE) is proposed. The method uses the one-versus-rest strategy to decompose the multi-class problem into several binary problems; for each binary problem, SVM recursive feature elimination gradually removes redundant features to obtain a feature subset; the resulting feature subsets are then merged into the final feature subset, on which an SVM classifier is trained. Experimental results on three gene datasets show that the improved algorithm raises the overall recognition rate by about 2%, and the accuracy of individual classes improves substantially, even reaching 100%. Comparisons with random forest, the k-nearest-neighbor classifier and principal component analysis (PCA) dimensionality reduction all confirm the advantage of the proposed algorithm.
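A minimal sketch of the MMSVM-RFE idea as described: run SVM-RFE separately on each one-versus-rest binary problem, take the union of the per-class feature subsets, and train the final SVM on the merged subset. The linear kernel and the 50-features-per-class setting are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC, SVC

def mmsvm_rfe(X, y, n_features_per_class=50):
    union = np.zeros(X.shape[1], dtype=bool)
    for c in np.unique(y):
        y_bin = (y == c).astype(int)                       # one-vs-rest binary problem
        rfe = RFE(LinearSVC(max_iter=10000), n_features_to_select=n_features_per_class)
        union |= rfe.fit(X, y_bin).support_                # merge the per-class subsets
    return union

# mask = mmsvm_rfe(X_train, y_train)
# final_clf = SVC().fit(X_train[:, mask], y_train)
```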

20.
Hu Li, Ye Wang, Hua Wang, Bin Zhou, World Wide Web, 2017, 20(6): 1507-1525.
Imbalanced streaming data is commonly encountered in real-world data mining and machine learning applications and has attracted much attention in recent years. In practice, imbalanced data and streaming data are normally encountered together; however, little research has studied the two types of data together. In this paper, we propose a multi-window based ensemble learning method for the classification of imbalanced streaming data. Three types of windows are defined to store the current batch of instances, the latest minority instances, and the ensemble classifier. The ensemble classifier consists of a set of the latest sub-classifiers and the instances employed to train each sub-classifier. All sub-classifiers are weighted prior to predicting the class labels of newly arriving instances, and new sub-classifiers are trained only when the precision falls below a predefined threshold. Extensive experiments on synthetic and real-world datasets demonstrate that the new approach can efficiently and effectively classify imbalanced streaming data and generally outperforms existing approaches.
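A sketch of the multi-window idea (window sizes, thresholds and the equal sub-classifier weights are my assumptions): keep the current batch, a window of the latest minority instances, and a window of weighted sub-classifiers; predict by weighted voting, and train a new sub-classifier on the batch plus the minority window only when precision on the new batch falls below a threshold.

```python
import numpy as np
from collections import deque
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score

class MultiWindowEnsemble:
    def __init__(self, max_models=10, minority_window=500, precision_threshold=0.8):
        self.models = deque(maxlen=max_models)          # (classifier, weight) pairs
        self.minority = deque(maxlen=minority_window)   # latest minority instances
        self.threshold = precision_threshold

    def predict(self, X):
        if not self.models:
            return np.zeros(len(X), dtype=int)
        score = sum(w * clf.predict(X) for clf, w in self.models)
        return (score >= 0.5 * sum(w for _, w in self.models)).astype(int)

    def update(self, X_batch, y_batch):
        # check precision of the current ensemble on the newly arrived batch
        ok = bool(self.models) and precision_score(
            y_batch, self.predict(X_batch), zero_division=0) >= self.threshold
        self.minority.extend(X_batch[y_batch == 1])      # refresh the minority window
        if not ok:                                       # precision too low: add a model
            if len(self.minority):
                X_tr = np.vstack([X_batch, np.array(self.minority)])
                y_tr = np.r_[y_batch, np.ones(len(self.minority), dtype=int)]
            else:
                X_tr, y_tr = X_batch, y_batch
            clf = DecisionTreeClassifier(random_state=len(self.models)).fit(X_tr, y_tr)
            self.models.append((clf, 1.0))               # weight could reflect recency
```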
