Similar Documents
20 similar documents found (search time: 46 ms)
1.
Support vector machine recursive feature elimination (SVM-RFE) is one of the most widely used gene selection methods. It was designed for binary classification and must be extended to handle multiclass problems. Starting from the concept of Pareto optimality, this paper explains the limitations of common gene selection methods in multiclass settings, proposes a class-wise gene selection procedure, and on that basis presents a new SVM-RFE design. Experimental results on eight cancer and tumor gene expression datasets show that the new method outperforms two other recursive feature elimination methods: by searching for optimal genes separately for each class, it achieves higher classification accuracy.
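The recursive elimination loop at the heart of SVM-RFE can be sketched as follows. For brevity, a ridge-regularized least-squares linear model stands in for the linear SVM; the ranking logic is the same (drop the feature with the smallest squared weight each round). This is an illustrative sketch, not the implementation from any of the papers listed here:

```python
import numpy as np

def rfe_rank(X, y, fit_weights):
    """Recursive feature elimination: repeatedly fit a linear model and
    drop the feature with the smallest squared weight."""
    remaining = list(range(X.shape[1]))
    ranking = []  # filled from least to most important
    while remaining:
        w = fit_weights(X[:, remaining], y)
        worst = int(np.argmin(w ** 2))
        ranking.append(remaining.pop(worst))
    return ranking[::-1]  # most important feature first

def lstsq_weights(X, y):
    # Stand-in for a linear SVM: ridge-regularized least squares on +/-1 labels.
    return np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(2.0 * X[:, 2] + 0.1 * rng.normal(size=100))  # only feature 2 matters
order = rfe_rank(X, y, lstsq_weights)
print(order)  # feature 2 ranked first
```

The multiclass extensions the abstract discusses differ mainly in how this loop is applied per class and how the per-class results are combined.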

2.
黄晓娟  张莉 《计算机应用》2015,35(10):2798-2802
To handle multiclass cancer classification, the multiclass support vector machine recursive feature elimination (MSVM-RFE) method has been proposed, but it fuses the weights of all sub-classifiers and ignores each sub-classifier's own ability to select features. To improve recognition accuracy on multiclass problems, an improved multiclass support vector machine recursive feature elimination method (MMSVM-RFE) is proposed. The method uses a one-vs-rest strategy to decompose the multiclass problem into several binary problems; each binary problem applies SVM-RFE to gradually eliminate redundant features and obtain a feature subset, the resulting subsets are merged into a final feature subset, and an SVM classifier is then trained on it. Experimental results on three gene datasets show that the improved algorithm raises overall recognition accuracy by about 2%, with large gains for individual classes, in some cases reaching 100%. Comparisons with random forest, the k-nearest-neighbor classifier, and principal component analysis (PCA) dimensionality reduction all confirm the advantage of the proposed algorithm.

3.
Feature selection improves the performance of learning algorithms by removing irrelevant and redundant features; it is in essence a combinatorial optimization problem. The black widow optimization algorithm is a metaheuristic that simulates the life cycle of black widow spiders and has advantages in convergence speed and fitness optimization. Since the original algorithm cannot perform feature selection, five strategies are designed: a binarization strategy, an "OR-gate" strategy, a population-restriction strategy, a fast-procreation strategy, and a fitness-first strategy. Based on these, the black widow optimization feature selection algorithm (BWOFS) and the procreation controlled black widow optimization feature selection algorithm (PCBWOFS) are proposed to search the feature space for effective feature subsets. The new methods are validated on several public classification and regression datasets. Experimental results show that, compared with other methods (the full feature set, AMB, SFS, SFFS, and FSFOA), BWOFS and PCBWOFS find feature subsets with higher predictive accuracy and deliver competitive, promising results; moreover, PCBWOFS requires less computation and performs better than BWOFS.

4.
Imbalanced data classification is an important research topic in machine learning, but existing imbalanced classification algorithms usually target binary problems, and research on imbalanced multiclass classification is relatively scarce. Real-world datasets, however, often have many classes with imbalanced distributions, and the multiplicity of classes further increases the difficulty of classifying imbalanced data, making imbalanced multiclass classification a pressing research problem. This paper surveys imbalanced multiclass algorithms proposed in recent years. Depending on whether a decomposition strategy is used, they are divided into decomposition methods and ad hoc methods; decomposition methods are further split into one-vs-one (OVO) and one-vs-all (OVA) architectures, while ad hoc methods are grouped by technique into data-level methods, algorithm-level methods, cost-sensitive methods, ensemble methods, and deep-network-based methods. The strengths and weaknesses of each category and its representative algorithms are described, evaluation metrics for imbalanced multiclass methods are summarized, the performance of representative methods is analyzed experimentally, and future research directions are discussed.

5.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used. In this paper, we introduce a greedy feature selection method using mutual information, which combines feature-feature and feature-class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected feature subsets is evaluated with multiple classifiers on multiple datasets. In terms of both classification accuracy and execution time, our method was found to perform significantly better than several competing feature selection techniques on twelve real-life datasets of varied dimensionality and number of instances.
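A minimal sketch of this kind of greedy relevance-minus-redundancy selection (in the spirit of the method described, not the paper's exact criterion) on discrete data:

```python
import numpy as np

def mutual_info(a, b):
    """Empirical mutual information (nats) between two discrete arrays."""
    mi = 0.0
    for u in np.unique(a):
        for v in np.unique(b):
            p_uv = np.mean((a == u) & (b == v))
            if p_uv > 0:
                mi += p_uv * np.log(p_uv / (np.mean(a == u) * np.mean(b == v)))
    return mi

def greedy_mi_select(X, y, k):
    """Greedily pick k features, maximizing feature-class MI minus the mean
    feature-feature MI with the already-selected set."""
    selected, candidates = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = {j: mutual_info(X[:, j], y)
                     - (np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
                        if selected else 0.0)
                  for j in candidates}
        best = max(candidates, key=scores.get)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=400)              # 2-bit class label
X = np.column_stack([y & 1,                   # feature 0: low bit of the class
                     y >> 1,                  # feature 1: high bit of the class
                     y & 1,                   # feature 2: redundant copy of feature 0
                     rng.integers(0, 2, 400)])  # feature 3: pure noise
sel = greedy_mi_select(X, y, k=2)
print(sorted(sel))  # the two complementary bits, not the redundant copy
```

The redundancy term is what steers the second pick away from feature 2, even though its relevance to the class is as high as feature 0's.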

6.
Multi-class classification problems can be addressed with a decomposition strategy. One of the most popular decomposition techniques is One-vs-One (OVO), which divides a multi-class problem into all possible pairs of easier-to-solve binary sub-problems. To account for classes with different costs, this paper examines the behavior of an ensemble of Cost-Sensitive Back-Propagation Neural Networks (CSBPNN) combined with OVO binarization for multi-class problems. The original multi-class cost-sensitive problem is decomposed into one sub-problem per pair of classes, each learnt independently with a CSBPNN, and a combination method then aggregates the binary cost-sensitive classifiers. To verify the synergy of the binarization technique and CSBPNN for multi-class cost-sensitive problems, a thorough experimental study is carried out: first, the effectiveness of the OVO strategy for multi-class cost-sensitive learning is checked; then, several well-known aggregation strategies are compared in this scenario; finally, it is explored whether further improvement can be achieved by managing non-competent classifiers. The study uses three types of cost matrices, and proper statistical analysis is employed to extract the meaningful findings.
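The OVO decompose-train-aggregate pipeline can be sketched as follows, with a nearest-centroid classifier standing in for CSBPNN and simple majority voting as the aggregation (both substitutions are for brevity only):

```python
import numpy as np
from itertools import combinations

def nearest_centroid_fit(X, y):
    """Stand-in binary base learner: a nearest-centroid classifier."""
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    def predict(Z):
        dists = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in classes])
        return classes[np.argmin(dists, axis=0)]
    return predict

def ovo_predict(X, y, Z):
    """One-vs-One: train one binary classifier per pair of classes, then
    aggregate by majority vote over the pairwise predictions."""
    classes = np.unique(y)
    votes = np.zeros((len(Z), len(classes)), dtype=int)
    for a, b in combinations(range(len(classes)), 2):
        mask = (y == classes[a]) | (y == classes[b])
        pred = nearest_centroid_fit(X[mask], y[mask])(Z)
        for ci in (a, b):
            votes[:, ci] += (pred == classes[ci])
    return classes[np.argmax(votes, axis=1)]

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
y = np.repeat([0, 1, 2], 40)
X = centers[y] + rng.normal(size=(120, 2))
preds = ovo_predict(X, y, centers)
print(preds)  # each cluster center is assigned its own class
```

A cost-sensitive variant would replace the base learner and weight the votes; the decomposition and aggregation skeleton stays the same.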

7.
8.
Feature selection can directly reveal the causes of faults by selecting the features useful for fault diagnosis, which simplifies the diagnostic procedure. As an efficient feature selection method, linear-kernel support vector machine recursive feature elimination (SVM-RFE) has been successfully applied to fault diagnosis. However, fault diagnosis is not a linear problem, so this paper introduces Gaussian-kernel SVM-RFE to extract nonlinear features for fault diagnosis. The key issue is the selection of the kernel parameter for the Gaussian-kernel SVM-RFE; three classical, simple kernel parameter selection methods are introduced and compared in experiments. The proposed fault diagnosis framework combines Gaussian-kernel SVM-RFE with an SVM classifier, which improves diagnostic performance. Experimental results on the Tennessee Eastman process indicate that the proposed framework is an effective technique for fault diagnosis.
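The abstract does not name the three kernel parameter selection methods, so as an assumed illustration, one classical and simple choice of this kind is the median heuristic:

```python
import numpy as np

def median_heuristic_gamma(X):
    """Set the Gaussian kernel width from the median pairwise distance:
    sigma = median distance, gamma = 1 / (2 * sigma**2)."""
    d = np.linalg.norm(X[:, None] - X[None], axis=2)
    sigma = np.median(d[np.triu_indices(len(X), k=1)])
    return 1.0 / (2.0 * sigma ** 2)

X = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
print(median_heuristic_gamma(X))  # distances 5, 10, 5 -> sigma 5, gamma 0.02
```

Whether this is one of the paper's three candidates is an assumption; it merely shows the shape of such a selection rule.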

9.
In a DNA microarray dataset, gene expression data often has a huge number of features (referred to as genes) but a small number of samples. With the development of DNA microarray technology, the dimensionality is growing faster than ever, which can lead to the curse of dimensionality. To obtain good classification performance, it is necessary to preprocess the gene expression data. Support vector machine recursive feature elimination (SVM-RFE) is a classical gene selection method, but it suffers from high computational complexity. To remedy this, this paper enhances SVM-RFE with feature clustering, yielding feature clustering SVM-RFE (FCSVM-RFE). The proposed method first performs a rough gene selection and then ranks the selected genes: a clustering algorithm groups genes into clusters, in each of which genes have similar expression profiles; a representative gene is chosen for each cluster, producing a representative gene set; and SVM-RFE is then applied to rank these representative genes. FCSVM-RFE reduces both computational complexity and redundancy among genes. Experiments on seven public gene expression datasets show that FCSVM-RFE achieves better classification performance and lower computational complexity than state-of-the-art methods such as SVM-RFE.
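The rough-selection stage (cluster the genes, keep one representative per cluster) can be sketched with plain k-means on feature profiles. The clustering algorithm, initialization, and representative rule below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def cluster_representatives(X, n_clusters, n_iter=20):
    """Cluster the features (columns of X) by k-means on their profiles and
    keep one representative per cluster: the feature closest to its centroid."""
    profiles = X.T.astype(float)                 # one row per feature
    # Deterministic farthest-point initialization of the centroids.
    centroids = [profiles[0]]
    for _ in range(1, n_clusters):
        d = np.min([np.linalg.norm(profiles - c, axis=1) for c in centroids], axis=0)
        centroids.append(profiles[int(d.argmax())])
    centroids = np.array(centroids)
    for _ in range(n_iter):                      # standard Lloyd iterations
        dist = np.linalg.norm(profiles[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = profiles[labels == k].mean(axis=0)
    dist = np.linalg.norm(profiles[:, None] - centroids[None], axis=2)
    labels = dist.argmin(axis=1)
    reps = [int(np.flatnonzero(labels == k)[dist[labels == k, k].argmin()])
            for k in range(n_clusters) if np.any(labels == k)]
    return sorted(reps)

rng = np.random.default_rng(1)
base_a, base_b = rng.normal(size=50), rng.normal(size=50) + 10.0
X = np.column_stack([base_a + 0.01 * rng.normal(size=50) for _ in range(3)] +
                    [base_b + 0.01 * rng.normal(size=50) for _ in range(3)])
reps = cluster_representatives(X, n_clusters=2)
print(reps)  # one representative from each group of near-duplicate features
```

Running SVM-RFE on the representatives instead of all genes is what cuts the cost: the expensive ranking loop sees one feature per cluster rather than every near-duplicate.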

10.
Feature selection is widely discussed as an important preprocessing step in machine learning and data mining. In this paper, a new feature selection evaluation criterion based on low-loss learning vector quantization (LVQ) classification is proposed. Based on this criterion, a feature selection algorithm is presented that optimizes the hypothesis margin of LVQ classification by minimizing its loss function. Experiments comparing it with the well-known SVM-RFE and Relief are carried out on 4 UCI datasets using Naive Bayes and RBF Network classifiers. The results show that the new algorithm achieves similar or even higher performance than Relief on all training data and performs better than or comparably to SVM-RFE.

11.
Feature selection and classifier parameter optimization are two important ways to improve classifier performance; traditionally, the two problems have been solved separately. In recent years, with the wide application of evolutionary optimization techniques in pattern recognition, encoding flexibility has made the simultaneous optimization of features and parameters both possible and a clear trend. To address this, this paper applies a binary PSO algorithm to simultaneously select features and optimize the parameters of a kernel K-nearest-neighbor classifier. Experiments show that the method can effectively find a suitable feature subset and kernel parameters and achieves good classification results.

12.
Feature Subset Selection Based on Particle Swarm Optimization and Correlation Analysis
Feature selection is one of the important problems in pattern recognition and data mining. To address it, a feature subset selection algorithm based on discrete particle swarm optimization and correlation analysis is proposed. The algorithm uses a filter-style feature selection approach: by analyzing the correlations among all features of network intrusion data, a discrete PSO algorithm searches the full feature space and automatically selects an effective feature subset to reduce the data dimensionality. Experimental results on the IDS dataset from the 1999 KDD Cup data demonstrate the effectiveness of the proposed algorithm.

13.
As a highly effective general-purpose pattern recognition method, the support vector machine (SVM) was proposed for dichotomous classification problems. It exhibits a remarkable resistance to overfitting, a feature explained by the fact that it directly implements the principle of structural risk minimization. In the real world, however, most classification problems involve multiple categories. In an attempt to extend the binary SVM classifier to multiclass classification, decision-tree-based multiclass SVM was proposed recently, in which the structure of the decision tree plays an important role in minimizing the classification error. The present study aims at developing a systematic way to design the decision tree for multiclass SVM. A kernel-induced distance function between datasets is discussed, and kernelized hierarchical clustering is developed and used to determine the structure of the decision tree. Simulation results on satellite image interpretation show the superiority of the proposed classification strategy over conventional multiclass SVM algorithms.

14.
Simultaneous Feature Selection and SVM Parameter Optimization Based on Binary PSO
Feature selection and classifier parameter optimization are two important ways to improve classifier performance; traditionally, the two problems have been solved separately. In recent years, with the wide application of evolutionary optimization techniques in pattern recognition, encoding flexibility has made the simultaneous optimization of features and parameters both possible and a clear trend. To address this, this paper uses a binary PSO algorithm for simultaneous feature selection and SVM parameter optimization, yielding a PSO-SVM algorithm. Experiments show that the method can effectively find a suitable feature subset and SVM parameters and achieves good classification results; compared with the GA-SVM algorithm proposed in [4], it prunes the feature set more aggressively and runs more efficiently.
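A minimal binary PSO of the kind these methods build on (real-valued velocities turned into bit-sampling probabilities through a sigmoid, after Kennedy and Eberhart's binary PSO) can be sketched as follows. The fitness function is a hypothetical stand-in for "classification accuracy minus feature cost", and the target mask is an assumption for the demo:

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=20, n_iter=60, seed=0):
    """Minimal binary PSO: velocities are real-valued; bit positions are
    resampled each step with probability sigmoid(velocity)."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (n_particles, n_bits))
    vel = np.zeros((n_particles, n_bits))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos], dtype=float)
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_bits))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, n_bits)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p) for p in pos], dtype=float)
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest

# Hypothetical fitness: reward matching an (assumed) ideal mask that
# selects features 1 and 3, mimicking "accuracy minus cost".
target = np.array([0, 1, 0, 1, 0, 0])
fitness = lambda bits: -float(np.sum(bits != target))
best = binary_pso(fitness, n_bits=6)
print(best.tolist())
```

In the simultaneous-optimization setting, extra bits (or real-valued particle dimensions) encode the SVM parameters alongside the feature mask, and the fitness evaluates a classifier trained with both.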

15.
This paper proposes two psychophysiological-data-driven classification frameworks with stable generalization ability for operator functional state (OFS) assessment in safety-critical human-machine systems. Recursive feature elimination (RFE) and the least squares support vector machine (LSSVM) are combined for binary and multiclass feature selection. Besides typical binary LSSVM classifiers for two-class OFS assessment, two multiclass classifiers based on multiclass LSSVM-RFE and a decision directed acyclic graph (DDAG) scheme are developed: one recognizes high-mental-workload and fatigued states, while the other differentiates overloaded and baseline states from normal states. Feature selection results reveal that different dimensions of OFS can be characterized by specific sets of psychophysiological features. Performance comparisons show that both classification frameworks achieve reasonably high and stable classification accuracy if the RFE procedure is properly implemented and utilized.

16.
Linear-kernel Support Vector Machine Recursive Feature Elimination (SVM-RFE) is known as an excellent feature selection algorithm. A nonlinear SVM, however, is a black-box classifier for which the mapping function Φ is not known explicitly, so the weight vector w cannot be computed directly. This paper proposes a feature selection algorithm, SVM-RBF-RFE, that applies recursive feature elimination to an SVM with an RBF kernel: the nonlinear RBF kernel is expanded into its Maclaurin series, and the weight vector w is computed from the series according to each feature's contribution to the classification hyperplane. Using w_i^2 as the ranking criterion, SVM-RBF-RFE starts with all features and eliminates the feature with the smallest squared weight at each step until all features are ranked. SVM and KNN classifiers are used to evaluate the nested feature subsets selected by SVM-RBF-RFE. Experimental results on 3 UCI and 3 microarray datasets show that SVM-RBF-RFE generally performs better than information gain and SVM-RFE.
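The series expansion the abstract alludes to is the standard Maclaurin expansion of the RBF kernel's inner-product factor (a textbook identity; the paper's exact truncation and weighting scheme are not reproduced here):

```latex
K(x, z) = e^{-\gamma \|x - z\|^2}
        = e^{-\gamma \|x\|^2} \, e^{-\gamma \|z\|^2} \, e^{2\gamma \langle x, z \rangle},
\qquad
e^{2\gamma \langle x, z \rangle} = \sum_{k=0}^{\infty} \frac{(2\gamma)^k}{k!} \, \langle x, z \rangle^{k}.
```

Expanding each power ⟨x, z⟩^k into monomials of feature products is what lets per-feature contributions, and hence the ranking weights w_i, be collected from an otherwise implicit feature map.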

17.
The discernibility of feature subsets (DFS) criterion does not account for the effect of feature measurement scales on a subset's discriminative power. To remedy this, the coefficient of variation is introduced and a generalized discernibility of feature subsets (GDFS) criterion is proposed. Combined with four search strategies (sequential forward, sequential backward, sequential forward floating, and sequential backward floating) and with an extreme learning machine as the classifier, four hybrid feature selection algorithms are obtained. Experiments on UCI and gene datasets, comparisons with DFS, Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR, and FSSC-SD, and statistical significance tests show that the proposed GDFS outperforms DFS and selects feature subsets with better classification ability.

18.
In classification problems, a large number of features are typically used to describe the problem's instances, but not all of them are useful for classification. Feature selection is usually an important pre-processing step for overcoming the "curse of dimensionality"; it aims to choose a small number of features that achieve similar or better classification performance than the full feature set. This paper presents a particle swarm optimization (PSO)-based multi-objective feature selection approach that evolves a set of non-dominated feature subsets with high classification performance. The proposed algorithm uses local search techniques to improve the Pareto front and is compared with a pure multi-objective PSO algorithm, three well-known evolutionary multi-objective algorithms, and a state-of-the-art PSO-based multi-objective feature selection approach on 12 benchmark datasets. The experimental results show that in most cases the proposed multi-objective algorithm generates better Pareto fronts than all the other methods.
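The non-dominated filtering that underlies any Pareto front of feature subsets can be sketched as follows; the two objectives used in the demo (subset size and error rate, both minimized) are illustrative:

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when minimizing every
    objective: a point is dropped iff some other point is <= in all
    objectives and strictly < in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Candidate subsets scored as (number of features, classification error).
candidates = [(2, 0.10), (3, 0.08), (4, 0.08), (2, 0.15), (5, 0.05)]
front = pareto_front(candidates)
print(front)  # (4, 0.08) is dominated by (3, 0.08); (2, 0.15) by (2, 0.10)
```

Multi-objective PSO maintains exactly such a set of mutually non-dominated subsets and uses it, rather than a single best solution, to guide the swarm.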

19.
Feature selection is the basic pre-processing task of eliminating irrelevant or redundant features by investigating the complicated interactions among features in a feature set. Because of its critical role in classification and its computational cost, it has attracted researchers' attention for five decades, yet it remains a challenge. This paper proposes a binary artificial bee colony (ABC) algorithm for feature selection, developed by integrating evolution-based similarity search mechanisms into an existing binary ABC variant. The performance of the proposed algorithm is analyzed by comparing it with well-known variants of particle swarm optimization (PSO) and ABC, including standard binary PSO, new-velocity-based binary PSO, quantum-inspired binary PSO, discrete ABC, modification-rate-based ABC, angle-modulated ABC, and genetic algorithms, on 10 benchmark datasets. The results show that the proposed algorithm obtains higher classification performance on both training and test sets and eliminates irrelevant and redundant features more effectively than the other approaches. Note that all the algorithms used in this paper, except standard binary PSO and GA, are applied to feature selection for the first time.

20.
The high dimensionality of microarray datasets makes multiclass tissue classification difficult; the main challenge is selecting features deemed relevant and non-redundant to form the predictor set for classifier training. It is also important to vary the emphasis on relevance versus redundancy during the search for the predictor set, through the degree of differential prioritization (DDP). Furthermore, there are several ways to decompose the feature selection (FS) problem (all-classes-at-once, one-vs-all (OVA), or pairwise (PW)), and in multiclass problems the type of classifier aggregation, non-aggregated (a single machine) or aggregated (OVA or PW), must also be considered. We first propose a systematic approach to combining the distinct problems of FS and classification. Then, using eight well-known multiclass microarray datasets, we empirically demonstrate the effectiveness of the DDP across combinations of FS decomposition types and classifier aggregation methods. Aided by the variable DDP, feature selection leads to classification performance better than that of rank-based or equal-priorities scoring methods, and to accuracies higher than previously reported for benchmark datasets with a large number of classes. Finally, based on several criteria, we make general recommendations on the optimal combination of FS decomposition type and classifier aggregation method for multiclass microarray datasets.
