Similar Documents
 20 similar documents found (search time: 515 ms)
1.
Most widely used pattern classification algorithms, such as Support Vector Machines (SVM), are sensitive to the presence of irrelevant or redundant features in the training data. Automatic feature selection algorithms aim to select a subset of the features present in a given dataset so that the accuracy of the subsequent classifier is maximized. Feature selection algorithms are generally grouped into two broad categories: algorithms that do not take the subsequent classifier into account (filter approaches) and algorithms that evaluate the subsequent classifier for each considered feature subset (wrapper approaches). Filter approaches are typically faster, but wrapper approaches deliver higher accuracy. In this paper, we present Predictive Forward Selection, an algorithm based on the widely used wrapper approach of forward selection. Using ideas from meta-learning, the number of required evaluations of the target classifier is reduced by exploiting experience gained during past feature selection runs on other datasets. We evaluated our approach on 59 real-world datasets with a focus on SVM as the target classifier, and we compare it with state-of-the-art wrapper and filter approaches as well as one embedded method for SVM in terms of accuracy and run-time. The results show that the presented method reaches the accuracy of traditional wrapper approaches while requiring significantly fewer evaluations of the target algorithm. Moreover, our method achieves statistically significantly better results than the filter approaches and the embedded method.
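The baseline this paper accelerates is a plain forward-selection wrapper around an SVM. A minimal sketch of that baseline using scikit-learn follows; the dataset, subset size, and SVM parameters are placeholders, and the meta-learning speed-up described in the abstract is not reproduced.

```python
# Minimal forward-selection wrapper around an SVM (baseline only; the paper's
# meta-learning acceleration is not reproduced here).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)            # placeholder dataset
svm = SVC(kernel="rbf", C=1.0, gamma="scale")

# Greedy forward selection: at each step, add the feature whose inclusion
# maximizes the cross-validated accuracy of the SVM.
sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                direction="forward", cv=5, scoring="accuracy")
sfs.fit(X, y)

selected = sfs.get_support(indices=True)
print("selected features:", selected)
print("CV accuracy on subset:",
      cross_val_score(svm, X[:, selected], y, cv=5).mean())
```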

2.
In network fault diagnosis, large numbers of irrelevant or redundant features degrade diagnostic accuracy, so the initial feature set must be reduced. Wrapper-style feature selection is computationally expensive because of the repeated classifier training it requires. To lower this cost, this paper proposes a support-vector-based binary particle swarm optimization (SVB-BPSO) method for fault feature selection. The algorithm uses SVM as the classifier: it first trains an SVM on all samples to extract the support vector (SV) set and uses only this SV set in the subsequent wrapper training; it then uses the average distance between support vectors of different classes to set the SVM parameter; finally, guided by the classification results, BPSO performs a global search over the feature space to select the optimal feature subset. Experiments on the DARPA dataset show that the proposed method reduces the computational cost of wrapper feature selection while achieving high classification accuracy and a clear dimensionality-reduction effect.
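One way to read the cost-cutting idea here is to train the SVM once on all samples, keep only the support vectors, and evaluate candidate feature subsets on that reduced set. The rough sketch below follows that interpretation; the BPSO search and the distance-based parameter setting are omitted, and all names and parameters are illustrative.

```python
# Sketch: evaluate candidate feature subsets only on the support-vector set,
# which is typically much smaller than the full training set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def support_vector_set(X, y):
    """Train once on all samples and return only the support vectors."""
    svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
    idx = svm.support_
    return X[idx], y[idx]

def subset_fitness(mask, X_sv, y_sv):
    """Cross-validated accuracy of an SVM on the SV set, restricted to `mask`."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf", gamma="scale"),
                           X_sv[:, cols], y_sv, cv=3).mean()

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    X_sv, y_sv = support_vector_set(X, y)
    mask = np.random.default_rng(0).integers(0, 2, X.shape[1])
    # A BPSO loop would call subset_fitness() for every candidate bit mask.
    print("SV set size:", len(y_sv), "fitness:", subset_fitness(mask, X_sv, y_sv))
```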

3.
To address the shortcomings of the traditional support vector machine (SVM) in wrapper feature selection, namely low classification accuracy, redundant feature subsets, and poor computational efficiency, a metaheuristic optimization algorithm is used to optimize the SVM and the feature selection simultaneously. To improve the SVM's classification performance and its ability to select feature subsets, the spotted hyena optimizer (SHO) is first improved with an adaptive differential evolution (DE) algorithm, chaotic initialization, and a tournament selection strategy, which strengthens its local search ability and improves its convergence efficiency and solution accuracy. The improved algorithm is then applied to the simultaneous optimization of feature selection and SVM parameter tuning. Finally, feature selection experiments on UCI datasets evaluate the optimization performance of the proposed algorithm in terms of classification accuracy, number of selected features, fitness value, and running time. The results show that the synchronous optimization mechanism of the improved algorithm reduces the number of selected features while maintaining high classification accuracy; the algorithm is better suited to wrapper feature selection than traditional algorithms and has good practical value.
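The synchronous optimization described here can be pictured as a single candidate vector that carries both the SVM hyperparameters and a feature mask, scored by one fitness function. The schematic below shows only that encoding and fitness; the improved spotted hyena optimizer itself is not implemented, and all names, weights, and data are illustrative.

```python
# Sketch: one candidate = [log2(C), log2(gamma), bit mask over features].
# A metaheuristic (left abstract here) would evolve such candidates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=15, random_state=1)

def decode(candidate, n_features):
    C = 2.0 ** candidate[0]
    gamma = 2.0 ** candidate[1]
    mask = candidate[2:2 + n_features] > 0.5
    return C, gamma, mask

def fitness(candidate, alpha=0.99):
    """Weighted combination of CV accuracy and subset compactness."""
    C, gamma, mask = decode(candidate, X.shape[1])
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / X.shape[1])

# Example candidate: C = 2^3, gamma = 2^-4, random mask.
cand = np.concatenate([[3.0, -4.0], np.random.default_rng(2).random(15)])
print("fitness:", fitness(cand))
```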

4.
Feature selection for high-dimensional data based on correlation analysis and a genetic algorithm (cited 4 times: 0 self-citations, 4 by others)
Feature selection is one of the important problems in pattern recognition and data mining. For high-dimensional data, feature selection can both improve classification accuracy and efficiency and identify informative feature subsets. This paper proposes a feature selection method that combines the filter and wrapper models. Features are first screened by analyzing their correlation with the class label, keeping only those strongly correlated with the label; a genetic algorithm then performs a stochastic search over the reduced feature subset, using the classification error rate of a perceptron model as the evaluation criterion. Experimental results show that the algorithm effectively finds feature subsets with good linear separability, thereby reducing dimensionality and improving classification accuracy.
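A minimal sketch of the two stages just described follows, using Pearson correlation with the label for the filter step and a perceptron's cross-validated accuracy as the wrapper fitness; the GA loop itself is omitted, and the correlation threshold and dataset are placeholders.

```python
# Stage 1: keep features strongly correlated with the class label.
# Stage 2: score a candidate subset by a perceptron's classification accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)

# Filter: absolute Pearson correlation between each feature and the label.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
kept = np.flatnonzero(corr > 0.1)            # placeholder threshold

def ga_fitness(mask):
    """Fitness of a 0/1 mask over the kept features; a GA would maximize this."""
    cols = kept[np.flatnonzero(mask)]
    if cols.size == 0:
        return 0.0
    return cross_val_score(Perceptron(), X[:, cols], y, cv=3).mean()

print("features kept by filter:", kept.size,
      "fitness of all-ones mask:", ga_fitness(np.ones(kept.size, dtype=int)))
```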

5.
A feature selection algorithm based on information gain and a genetic algorithm (cited 8 times: 0 self-citations, 8 by others)
Feature selection is one of the important problems in pattern recognition and data mining. For high-dimensional data, feature selection can both improve classification accuracy and efficiency and identify informative feature subsets. This paper proposes a feature selection method that combines the filter and wrapper models: features are first grouped and screened according to the information gain between them; a genetic algorithm then performs a stochastic search over the reduced feature subset, using the classification error rate of a perceptron model as the evaluation criterion. Experimental results show that the algorithm effectively finds feature subsets with good linear separability, thereby reducing dimensionality and improving classification accuracy.

6.
Feature selection methods mainly fall into filter and wrapper approaches. To combine the computational simplicity of filters with the high accuracy of wrappers, a new feature selection method that chains the two is proposed. The method first uses a mutual-information-based filter to obtain a subset that meets a given accuracy requirement, and then applies a wrapper to find the final optimized feature subset. Because genetic algorithms have been applied successfully to combinatorial optimization, a genetic algorithm is used to search for the feature subset. In numerical simulations and in bearing fault feature selection, the new method saves a large amount of selection time while preserving diagnostic accuracy. The combined method has good ability to find optimal feature subsets and saves selection time, offering both high efficiency and high accuracy.
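One way to realize "filter first until a required accuracy is met, then hand the reduced set to a wrapper" is to grow a mutual-information ranking until cross-validated accuracy crosses a threshold. The sketch below shows only that filter stage under placeholder thresholds and data; the GA wrapper stage that would refine the reduced set is not shown.

```python
# Grow the top-MI feature list until a required CV accuracy is reached;
# the resulting reduced set would then be refined by a GA wrapper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           random_state=0)
required_acc = 0.90                           # placeholder accuracy requirement

ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
reduced = []
for j in ranking:
    reduced.append(j)
    acc = cross_val_score(SVC(), X[:, reduced], y, cv=3).mean()
    if acc >= required_acc:
        break

print("reduced set handed to the wrapper:", reduced, "accuracy:", round(acc, 3))
```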

7.
A feature gene selection algorithm based on binary particle swarm optimization and support vector machines (cited once: 0 self-citations, 1 by others)
Gene-chip expression profiles provide a new way to identify disease-related genes and to support the subtyping, diagnosis, and pathological study of diseases such as cancer. Selecting feature genes from gene expression data improves the accuracy of disease diagnosis and classification and reduces classifier complexity. This paper studies a BPSO-SVM feature gene selection method that wraps binary particle swarm optimization (BPSO) around a support vector machine (SVM): several populations (feature subsets) are generated at random, BPSO then optimizes the randomly generated feature genes with the SVM classification results guiding the search, and the feature gene subset with the best fitness is finally selected to train the SVM. The results show that BPSO-SVM feature gene selection is indeed an effective approach.
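The core BPSO step that such a wrapper relies on flips feature bits with a probability given by a sigmoid of the particle velocity. The compact sketch below shows that standard update only; swarm size, inertia and acceleration constants are placeholders, and the SVM fitness call that would guide the search is omitted.

```python
# One binary-PSO velocity/position update; the fitness guiding pbest/gbest
# would be the SVM's cross-validated accuracy on the selected genes.
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """position, pbest, gbest: 0/1 arrays over features; velocity: real-valued."""
    r1, r2 = rng.random(position.shape), rng.random(position.shape)
    velocity = (w * velocity
                + c1 * r1 * (pbest - position)
                + c2 * r2 * (gbest - position))
    prob = 1.0 / (1.0 + np.exp(-velocity))     # sigmoid transfer function
    position = (rng.random(position.shape) < prob).astype(int)
    return position, velocity

# Example: 10 features, one particle.
pos = rng.integers(0, 2, 10)
vel = np.zeros(10)
pos, vel = bpso_step(pos, vel, pbest=pos, gbest=pos)
print("new bit mask:", pos)
```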

8.
This correspondence presents a novel hybrid wrapper and filter feature selection algorithm for classification problems within a memetic framework. It incorporates a filter ranking method into the traditional genetic algorithm to improve classification performance and accelerate the search for core feature subsets. In particular, the method adds or deletes a feature from a candidate feature subset based on univariate feature ranking information. An empirical study on commonly used data sets from the University of California, Irvine repository and on microarray data sets shows that the proposed method outperforms existing methods in terms of classification accuracy, number of selected features, and computational efficiency. Furthermore, we investigate several major issues of the memetic algorithm (MA) to identify a good balance between local search and genetic search, so as to maximize search quality and efficiency in the hybrid filter and wrapper MA.
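The local-search operator described here, adding or deleting one feature guided by a univariate ranking, can be sketched as a single move that the memetic loop would apply repeatedly. The function names, the toy scores, and the fake evaluator below are illustrative; the full GA and the paper's exact acceptance rule are not reproduced.

```python
# Memetic-style local move: try adding the best-ranked excluded feature and
# deleting the worst-ranked included one, keeping whichever helps the wrapper.
import numpy as np

def local_search_move(mask, univariate_scores, evaluate):
    """mask: 0/1 array; univariate_scores: filter ranking; evaluate: wrapper score."""
    best_mask, best_fit = mask.copy(), evaluate(mask)
    excluded = np.flatnonzero(mask == 0)
    included = np.flatnonzero(mask == 1)
    if excluded.size:                          # add the strongest excluded feature
        cand = mask.copy()
        cand[excluded[np.argmax(univariate_scores[excluded])]] = 1
        if (f := evaluate(cand)) > best_fit:
            best_mask, best_fit = cand, f
    if included.size > 1:                      # delete the weakest included feature
        cand = mask.copy()
        cand[included[np.argmin(univariate_scores[included])]] = 0
        if (f := evaluate(cand)) > best_fit:
            best_mask, best_fit = cand, f
    return best_mask, best_fit

# Toy usage with a fake evaluator that rewards small masks containing feature 0.
scores = np.array([0.9, 0.1, 0.5, 0.3])
fake_eval = lambda m: m[0] - 0.05 * m.sum()
print(local_search_move(np.array([0, 1, 1, 0]), scores, fake_eval))
```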

10.
Feature selection is an important filtering method for data analysis, pattern classification, data mining, and so on. Feature selection reduces the number of features by removing irrelevant and redundant data. In this paper, we propose a hybrid filter–wrapper feature subset selection algorithm called the maximum Spearman minimum covariance cuckoo search (MSMCCS). First, based on Spearman and covariance, a filter algorithm is proposed called maximum Spearman minimum covariance (MSMC). Second, three parameters are proposed in MSMC to adjust the weights of the correlation and redundancy, improve the relevance of feature subsets, and reduce the redundancy. Third, in the improved cuckoo search algorithm, a weighted combination strategy is used to select candidate feature subsets, a crossover mutation concept is used to adjust the candidate feature subsets, and finally, the filtered features are selected into optimal feature subsets. Therefore, the MSMCCS combines the efficiency of filters with the greater accuracy of wrappers. Experimental results on eight common data sets from the University of California at Irvine Machine Learning Repository showed that the MSMCCS algorithm had better classification accuracy than the seven wrapper methods, the one filter method, and the two hybrid methods. Furthermore, the proposed algorithm achieved preferable performance on the Wilcoxon signed-rank test and the sensitivity–specificity test.
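The filter criterion named here trades off Spearman correlation with the label against covariance-based redundancy with already-selected features. The rough scoring sketch below follows that general idea only; the paper's exact MSMC formulation and its three weighting parameters are not reproduced, and the weights and data are placeholders.

```python
# Simplified relevance/redundancy score per feature: Spearman correlation with
# the label minus average absolute covariance with already-selected features.
import numpy as np
from scipy.stats import spearmanr

def msmc_like_score(X, y, j, selected, w_rel=1.0, w_red=1.0):
    rho, _ = spearmanr(X[:, j], y)
    relevance = abs(rho)
    if not selected:
        redundancy = 0.0
    else:
        redundancy = np.mean([abs(np.cov(X[:, j], X[:, s])[0, 1]) for s in selected])
    return w_rel * relevance - w_red * redundancy

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + rng.normal(size=100) > 0).astype(int)
print(msmc_like_score(X, y, j=1, selected=[0]))
```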

11.
To address the low detection accuracy and high false-positive rates of most current phishing-website detection methods, a detection method based on feature selection and ensemble learning is proposed. The method first performs feature selection with the FSIGR algorithm, which combines the advantages of the filter and wrapper models: it measures each feature from the two perspectives of information relevance and discriminative ability, selects features with a forward-inclusion, backward-elimination strategy, and evaluates candidate subsets by classification accuracy to obtain the optimal feature subset. The optimal subset is then used to train a random forest classifier. Experiments on UCI datasets show that the proposed method effectively improves the accuracy of phishing-website detection and reduces the false-positive rate, and therefore has practical value.

12.
This paper deals with the problem of supervised wrapper-based feature subset selection in datasets with a very large number of attributes. Recently the literature has contained numerous references to the use of hybrid selection algorithms: based on a filter ranking, they perform an incremental wrapper selection over that ranking. Though working fine, these methods still have their problems: (1) depending on the complexity of the wrapper search method, the number of wrapper evaluations can still be too large; and (2) they rely on a univariate ranking that does not take into account interaction between the variables already included in the selected subset and the remaining ones. Here we propose a new approach whose main goal is to drastically reduce the number of wrapper evaluations while maintaining good performance (e.g. accuracy and size of the obtained subset). To do this we propose an algorithm that iteratively alternates between filter ranking construction and wrapper feature subset selection (FSS). Thus, the FSS only uses the first block of ranked attributes, and the ranking method uses the currently selected subset in order to build a new ranking in which this knowledge is considered. The algorithm terminates when no new attribute is selected in the last call to the FSS algorithm. The main advantage of this approach is that only a few blocks of variables are analyzed, and so the number of wrapper evaluations decreases drastically. The proposed method is tested over eleven high-dimensional datasets (2,400-46,000 variables) using different classifiers. The results show an impressive reduction in the number of wrapper evaluations without degrading the quality of the obtained subset.
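A bare-bones version of the alternation described here: rank the remaining attributes, run an incremental wrapper over only the first block of the ranking, then re-rank and repeat, stopping when a pass adds nothing. The re-ranking below is a simple mutual-information stand-in that does not actually condition on the selected subset as the paper's ranking does, and the block size, classifier, and data are placeholders.

```python
# Alternate between (re)ranking and incremental wrapper selection over the
# first block of the ranking only, so few wrapper evaluations are needed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           random_state=0)
BLOCK = 10                                    # size of the ranked block examined

def cv_acc(cols):
    return cross_val_score(GaussianNB(), X[:, cols], y, cv=3).mean()

selected, changed = [], True
while changed:
    changed = False
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    # Stand-in ranking: MI of each remaining feature with the label.
    rank = sorted(remaining,
                  key=lambda j: mutual_info_classif(X[:, [j]], y, random_state=0)[0],
                  reverse=True)
    best = cv_acc(selected) if selected else 0.0
    for j in rank[:BLOCK]:                    # incremental wrapper over first block
        if (acc := cv_acc(selected + [j])) > best:
            selected, best, changed = selected + [j], acc, True

print("selected:", selected, "accuracy:", round(best, 3))
```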

13.
To address the shortcomings of the traditional support vector machine in wrapper feature selection, namely poor classification performance, redundant subset selection, and computational performance that is easily affected by the kernel parameters, a metaheuristic optimization algorithm is used to optimize them simultaneously. First, the local search ability of the bald eagle search algorithm and its capacity to explore and exploit the solution space are improved with a Lévy flight strategy and a simulated annealing mechanism, and the effectiveness of the improvements is verified on standard benchmark functions; second, the SVM kernel parameters are taken as the optimization target, and the improved...

14.
This paper presents a hybrid filter-wrapper feature subset selection algorithm based on particle swarm optimization (PSO) for support vector machine (SVM) classification. The filter model is based on the mutual information and is a composite measure of feature relevance and redundancy with respect to the feature subset selected. The wrapper model is a modified discrete PSO algorithm. This hybrid algorithm, called maximum relevance minimum redundancy PSO (mr2PSO), is novel in the sense that it uses the mutual information available from the filter model to weigh the bit selection probabilities in the discrete PSO. Hence, mr2PSO uniquely brings together the efficiency of filters and the greater accuracy of wrappers. The proposed algorithm is tested over several well-known benchmarking datasets. The performance of the proposed algorithm is also compared with a recent hybrid filter-wrapper algorithm based on a genetic algorithm and a wrapper algorithm based on PSO. The results show that the mr2PSO algorithm is competitive in terms of both classification accuracy and computational performance.
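The distinctive step in mr2PSO is biasing each bit's selection probability with a mutual-information-based relevance weight. The small sketch below shows one such biased bit update; the mixing formula, the beta weight, and the data are illustrative choices, not the paper's exact scheme.

```python
# Bias the discrete-PSO bit-selection probability with a normalized
# mutual-information weight, so relevant features are more likely to be kept on.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=200, n_features=12, random_state=0)
mi = mutual_info_classif(X, y, random_state=0)
mi_weight = mi / (mi.max() + 1e-12)           # relevance weights in [0, 1]

rng = np.random.default_rng(1)

def biased_bit_update(velocity, beta=0.5):
    """Mix the usual sigmoid(velocity) with the MI weight (illustrative mixing)."""
    sigmoid = 1.0 / (1.0 + np.exp(-velocity))
    prob = (1 - beta) * sigmoid + beta * mi_weight
    return (rng.random(velocity.shape) < prob).astype(int)

print(biased_bit_update(np.zeros(X.shape[1])))
```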

15.
Combined SNP feature selection based on Relief and SVM-RFE (cited once: 0 self-citations, 1 by others)
Genome-wide association analysis of SNPs faces two major difficulties: the high-dimensional, small-sample nature of SNP data and the complexity of the pathology of genetic diseases. This paper introduces feature selection into SNP genome-wide association analysis and proposes a combined SNP feature selection method based on Relief and SVM-RFE. The method has two stages: a filter stage, in which the Relief algorithm removes irrelevant SNPs, and a wrapper stage, in which SVM-based recursive feature elimination (SVM-RFE) selects the key SNPs associated with the genetic disease. Experiments show that the method clearly outperforms SVM-RFE alone and achieves better classification accuracy than Relief-SVM alone, providing an effective approach for SNP genome-wide association analysis.
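The two-stage pipeline (filter out irrelevant SNPs, then run SVM-RFE on the survivors) can be sketched with scikit-learn. A univariate F-test stands in for Relief below, since Relief itself is not part of scikit-learn, and the dataset and subset sizes are placeholders.

```python
# Stage 1: filter (stand-in for Relief).  Stage 2: SVM-based recursive
# feature elimination (SVM-RFE) on the surviving features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)    # placeholder for SNP data

pipe = Pipeline([
    ("filter", SelectKBest(f_classif, k=100)),            # drop clearly irrelevant SNPs
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=20, step=5)),
    ("svm", SVC(kernel="linear")),
])
pipe.fit(X, y)
print("training accuracy:", pipe.score(X, y))
```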

16.
For the feature selection problem in supervised classification, a wrapper feature selection method based on a quantum evolutionary algorithm is proposed. The paper first analyzes a weakness of existing subset evaluation criteria, namely their excessive preference for classification accuracy, and then proposes two subset evaluation criteria based on a fixed threshold and on statistical tests. It then improves the evolutionary strategy of the quantum evolutionary algorithm by dividing the evolutionary process into two phases that use the individual best and the global best, respectively, as the population's evolution target. On this basis, a feature selection algorithm is designed following the general wrapper framework. Finally, 15 UCI datasets are used to verify the effectiveness of the subset evaluation criteria and the evolutionary strategy, as well as the superiority of the new method over six other feature selection methods. The results show that the new method achieves similar or better classification accuracy on more than 80% of the datasets and selects smaller feature subsets on 86.67% of them.
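The fixed-threshold evaluation idea, preferring smaller subsets once accuracy is "good enough" instead of always chasing accuracy, can be written as a small fitness function. The threshold and the scoring scheme below are placeholders, not the paper's exact criterion.

```python
# Subset evaluation with a fixed accuracy threshold: above the threshold,
# candidates are compared mainly by subset size; below it, by accuracy alone.
def threshold_fitness(accuracy, n_selected, n_total, acc_threshold=0.95):
    compactness = 1.0 - n_selected / n_total
    if accuracy >= acc_threshold:
        return 1.0 + compactness           # all "good enough" subsets beat the rest
    return accuracy                        # otherwise rank by accuracy alone

print(threshold_fitness(0.96, 5, 30, acc_threshold=0.95))    # small, accurate subset
print(threshold_fitness(0.96, 20, 30, acc_threshold=0.95))   # accurate but larger
print(threshold_fitness(0.90, 3, 30, acc_threshold=0.95))    # below threshold
```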

17.
任越美  李垒  张艳宁  魏巍  李映 《计算机科学》2014,41(12):283-287
To address the slow processing and the Hughes phenomenon caused by the large number of spectral bands and the high information redundancy in hyperspectral image classification, a method for automatic band selection and classification of hyperspectral images based on a multi-swarm cooperative particle swarm optimization algorithm is proposed. A multi-swarm cooperative PSO searches for feature subsets; the PSO algorithm is improved by defining new position and velocity update strategies; an SVM serves as the classifier, and the feature subset and the SVM kernel parameters are optimized jointly. During the cooperative search, a genetic algorithm is introduced to alleviate the premature convergence of PSO, yielding a new MPSO-SVM (multiple particle swarm optimization-SVM) classification model. Experimental results on hyperspectral remote sensing images show that MPSO-SVM not only effectively compresses the spectral feature dimensionality and finds the best band combination, but also obtains the optimal SVM parameters, achieving good classification results and improved classification accuracy.

18.
The selection of informative and non-redundant features has become a prominent step in pattern classification. However, despite intensive research, identifying valuable feature subsets remains an open issue, especially in highly dimensional feature spaces. This paper proposes a wrapper feature selection method in the context of support vector machines (SVMs), named Wr-SVM-FuzCoC. Our method effectively combines the advantages of the wrapper and filter approaches, achieving three goals simultaneously: classification performance, dimensionality reduction, and computational efficiency. In the filter part, a forward feature search methodology is developed, driven by a fuzzy complementary criterion, whereby at each iteration a feature is selected that exhibits the maximum additional contribution with regard to the previously selected subset. The quality of single features or feature subsets is assessed via a fuzzy local evaluation criterion with respect to patterns. This is achieved by the so-called fuzzy partition vector (FPV), comprising the fuzzy membership grades of every pattern in its target class. Derivation of the feature FPVs is accomplished by incorporating a fuzzy-output kernel-based support vector machine. The proposed method compares favorably with existing SVM-based wrapper methods in terms of performance capability and computational speed. The experimental investigation is carried out using a diverse pool of real datasets, including moderate- and high-dimensional feature spaces.

19.
Feature subset selection is a substantial problem in data classification tasks. Its purpose is to find an efficient subset of the original dataset that increases both efficiency and accuracy and reduces the cost of classification. Working on high-dimensional datasets with a very large number of predictive attributes but only a small number of instances requires techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. The symmetrical uncertainty (SU) criterion is exploited to weight features in the filter phase for discriminating between the classes. In the wrapper phase, both FICA (fuzzy imperialist competitive algorithm) and IWSSr (Incremental Wrapper Subset Selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, especially from the fields of biosciences and medicine, where the number of features is large compared to the number of samples, inducing a severe curse-of-dimensionality problem. The comparison between the results of our method and other algorithms confirms that our method achieves the highest accuracy rate and is also able to obtain an efficient, compact subset.
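The symmetrical uncertainty weight used in the filter phase is the standard definition SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). A small sketch for discrete features follows; continuous attributes would need discretization first, and the toy vectors are illustrative.

```python
# Symmetrical uncertainty between a discrete feature and the class label:
# SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), ranging from 0 to 1.
import numpy as np
from collections import Counter

def entropy(values):
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def symmetrical_uncertainty(x, y):
    h_x, h_y = entropy(x), entropy(y)
    h_xy = entropy(list(zip(x, y)))            # joint entropy H(X, Y)
    mi = h_x + h_y - h_xy                      # mutual information I(X; Y)
    return 2.0 * mi / (h_x + h_y) if (h_x + h_y) > 0 else 0.0

x = [0, 0, 1, 1, 2, 2, 0, 1]
y = [0, 0, 1, 1, 1, 1, 0, 1]
print(round(symmetrical_uncertainty(x, y), 3))
```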

20.
杨柳  李云 《计算机应用》2021,41(12):3521-3526
K-anonymity algorithms generalize and suppress data until the K-anonymity condition is met. Because they consider both data privacy and classification performance while hiding features, they can be viewed as a special kind of feature selection, namely K-anonymity feature selection. K-anonymity feature selection methods combine the characteristics of K-anonymity and feature selection and use multiple evaluation criteria to select a K-anonymous feature subset. Filter-style K-anonymity feature selection has difficulty searching all candidate subsets that satisfy the K-anonymity condition and cannot guarantee that the selected subset has optimal classification performance, while wrapper-style feature selection is computationally expensive. Therefore, combining filter-style feature ranking with wrapper-style feature selection and improving the forward search strategy of existing methods, a hybrid K-anonymity feature selection algorithm is designed that uses classification performance as the evaluation criterion to select the K-anonymous feature subset with the best classification performance. Experiments on several public datasets show that the proposed algorithm can exceed existing algorithms in classification performance with less information loss.
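The K-anonymity condition on a candidate feature subset, namely that every combination of values over the retained (quasi-identifier) features occurs at least K times, can be checked directly. The minimal sketch below uses an illustrative toy table; a hybrid selector would combine such a check with the classification-accuracy evaluation described above.

```python
# Check whether a feature subset satisfies K-anonymity: every value combination
# over the retained columns must appear at least K times in the data.
from collections import Counter

def is_k_anonymous(rows, subset, k):
    """rows: list of tuples; subset: indices of the retained features."""
    groups = Counter(tuple(r[j] for j in subset) for r in rows)
    return min(groups.values()) >= k

table = [
    ("30-40", "F", "A"), ("30-40", "F", "B"),
    ("30-40", "M", "A"), ("30-40", "M", "B"),
    ("40-50", "F", "A"), ("40-50", "F", "B"),
]
print(is_k_anonymous(table, subset=[0], k=2))      # age band only -> True
print(is_k_anonymous(table, subset=[0, 1], k=2))   # age band + sex -> True
print(is_k_anonymous(table, subset=[0, 1], k=3))   # stricter K -> False
```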

