Similar Documents
20 similar documents retrieved.
1.
2.
An Ensemble Method for Feature Gene Selection Based on Recursive Partition Trees   Cited in total: 14 (self-citations: 1, citations by others: 14)
李霞  张田文  郭政 《计算机学报》2004,27(5):675-682
Identifying disease-related genes from DNA microarray gene expression profiles is of great practical significance for the subtyping, diagnosis, and pathological study of cancers and other diseases. This paper proposes EFST (Ensemble Feature Selection based on Recursive Partition-Tree), an ensemble method for selecting feature (informative) genes based on recursive partition trees. EFST selects multiple groups of feature genes under different sample distribution structures and, drawing on multi-classifier ensemble decision techniques from supervised machine learning, combines these groups into a final feature gene set using the proposed measures of feature-gene stability and significance. Experiments on a colon cancer expression data set with 2000 genes show that EFST not only can identify disease-related genes and strongly compress the data dimensionality, but also, as confirmed with four pattern classification methods including the support vector machine (SVM), clearly improves the accuracy of disease classification.
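For readers who want a concrete picture, the sketch below is a minimal, hypothetical illustration of the general idea (it is not the authors' EFST implementation): decision trees, i.e. recursive partition trees, are grown on bootstrap resamples and genes are ranked by how often they are chosen as split variables; the paper's stability and significance measures are replaced here by a plain selection frequency. Data set, tree depth, and the number of trees are illustrative assumptions.

```python
# Hypothetical sketch of an EFST-like ensemble feature (gene) selection:
# grow decision trees on bootstrap resamples and rank genes by how
# frequently they are used as split variables across the ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=62, n_features=2000, n_informative=20,
                           random_state=0)       # stand-in for the colon-cancer data

n_trees, n_samples = 100, X.shape[0]
selection_counts = np.zeros(X.shape[1])

for t in range(n_trees):
    idx = rng.integers(0, n_samples, n_samples)  # bootstrap resample
    tree = DecisionTreeClassifier(max_depth=3, random_state=t)
    tree.fit(X[idx], y[idx])
    used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
    selection_counts[used] += 1                  # genes actually used for splits

# simple stability-style score: fraction of trees in which a gene was selected
scores = selection_counts / n_trees
top_genes = np.argsort(scores)[::-1][:50]
print("most frequently selected genes:", top_genes[:10], "...")
```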

3.
Over the last few years, the dimensionality of datasets involved in data mining applications has increased dramatically. In this situation, feature selection becomes indispensable as it allows for dimensionality reduction and relevance detection. The research proposed in this paper broadens the scope of feature selection by taking into consideration not only the relevance of the features but also their associated costs. A new general framework is proposed, which consists of adding a new term to the evaluation function of a filter feature selection method so that the cost is taken into account. Although the proposed methodology could be applied to any feature selection filter, in this paper the approach is applied to two representative filter methods: Correlation-based Feature Selection (CFS) and Minimal-Redundancy-Maximal-Relevance (mRMR), as an example of use. The behavior of the proposed framework is tested on 17 heterogeneous classification datasets, employing a Support Vector Machine (SVM) as a classifier. The results of the experimental study show that the approach is sound and that it allows the user to reduce the cost without compromising the classification error.
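A minimal sketch of the framework's central idea — penalizing a filter's relevance score with a per-feature cost weighted by a trade-off parameter — is given below. It does not reproduce the paper's CFS or mRMR evaluation functions; the mutual-information relevance, the cost vector, and the parameter `lam` are illustrative assumptions.

```python
# Hypothetical cost-aware filter: score(f) = relevance(f) - lam * cost(f).
# Relevance is mutual information with the class; costs are user supplied.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)
costs = np.random.default_rng(0).uniform(0.0, 1.0, X.shape[1])  # e.g. acquisition cost

def cost_aware_ranking(X, y, costs, lam=0.5):
    relevance = mutual_info_classif(X, y, random_state=0)
    score = relevance - lam * costs      # lam trades relevance against cost
    return np.argsort(score)[::-1]

ranking = cost_aware_ranking(X, y, costs, lam=0.5)
selected = ranking[:10]
print("selected features:", selected)
print("total cost of subset: %.2f" % costs[selected].sum())
```

Increasing `lam` pushes the selection toward cheaper features at the price of some relevance, which is the trade-off the framework is designed to expose.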

4.
5.
Attribute (feature) reduction is an effective technique for coping with the "curse of dimensionality". Fractal Dimensionality Reduction (FDR) is an unsupervised feature selection technique that has appeared in recent years; unfortunately, it requires multiple scans of the data set and therefore copes poorly with high-dimensional data. Genetic-algorithm-based attribute reduction outperforms traditional feature selection on high-dimensional data, but cannot be applied to unsupervised learning. Combining the inherently random, parallel search mechanism of genetic algorithms with the unsupervised nature of fractal feature selection, this paper designs and implements GABUFSS (Genetic Algorithm Based Unsupervised Feature Subset Selection), a genetic-algorithm-based unsupervised fractal feature subset selection algorithm. Comparative experiments on synthetic and real data sets analyze the performance of GABUFSS against FDR; the results show that GABUFSS generally outperforms FDR and can discover feature subsets that yield equivalent results.
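The sketch below is a rough, hypothetical illustration of the unsupervised fitness behind such methods, not the GABUFSS algorithm itself: the correlation (fractal) dimension of a candidate feature subset is estimated with a simple Grassberger-Procaccia-style fit, and a fitness rewards small subsets that preserve the full data set's intrinsic dimension; a trivial random search stands in for the genetic algorithm.

```python
# Hypothetical sketch: fractal (correlation) dimension of a feature subset used
# as an unsupervised fitness, with random search standing in for the GA.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
X[:, 10:] = X[:, :10] + 0.01 * rng.normal(size=(200, 10))  # redundant copies

def correlation_dimension(X):
    d = pdist(X)
    radii = np.logspace(np.log10(np.percentile(d, 5)),
                        np.log10(np.percentile(d, 50)), 10)
    cr = np.array([np.mean(d < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(cr + 1e-12), 1)
    return slope                      # Grassberger-Procaccia-style estimate

full_dim = correlation_dimension(X)

def fitness(mask):
    if mask.sum() < 2:
        return -np.inf
    # reward subsets that preserve the intrinsic dimension with few features
    return -abs(correlation_dimension(X[:, mask]) - full_dim) - 0.01 * mask.sum()

best_mask, best_fit = None, -np.inf
for _ in range(200):                  # random search as a stand-in for the GA
    mask = rng.random(X.shape[1]) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f

print("selected features:", np.flatnonzero(best_mask))
```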

6.
7.
Most of the widely used pattern classification algorithms, such as Support Vector Machines (SVM), are sensitive to the presence of irrelevant or redundant features in the training data. Automatic feature selection algorithms aim at selecting a subset of features present in a given dataset so that the achieved accuracy of the following classifier can be maximized. Feature selection algorithms are generally categorized into two broad categories: algorithms that do not take the following classifier into account (the filter approaches), and algorithms that evaluate the following classifier for each considered feature subset (the wrapper approaches). Filter approaches are typically faster, but wrapper approaches deliver a higher performance. In this paper, we present the algorithm Predictive Forward Selection, based on the widely used forward-selection wrapper approach. Using ideas from meta-learning, the number of required evaluations of the target classifier is reduced by using experience knowledge gained during past feature selection runs on other datasets. We have evaluated our approach on 59 real-world datasets with a focus on SVM as the target classifier. We present comparisons with state-of-the-art wrapper and filter approaches as well as one embedded method for SVM according to accuracy and run-time. The results show that the presented method reaches the accuracy of traditional wrapper approaches while requiring significantly fewer evaluations of the target algorithm. Moreover, our method achieves statistically significantly better results than the filter approaches as well as the embedded method.
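The meta-learning component of Predictive Forward Selection is not shown here; the sketch below only illustrates the plain forward-selection wrapper it builds on, with cross-validated SVM accuracy as the evaluation function. The helper `forward_selection`, the data set, and the stopping rule are illustrative assumptions.

```python
# Plain forward-selection wrapper around an SVM (the baseline that Predictive
# Forward Selection accelerates); each candidate subset is scored by CV accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=15, n_informative=5,
                           random_state=0)

def forward_selection(X, y, max_features=5):
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(SVC(kernel="rbf"), X[:, selected + [f]], y,
                                     cv=5).mean() for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:   # stop when no candidate improves CV accuracy
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best_score

subset, acc = forward_selection(X, y)
print("selected:", subset, "CV accuracy: %.3f" % acc)
```

Every added feature costs one full sweep of classifier evaluations over the remaining candidates, which is exactly the expense the paper's meta-learning step is meant to cut down.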

8.
葛倩  张光斌  张小凤 《计算机应用》2022,42(10):3046-3053
To address the poor stability of the ReliefF feature selection algorithm when choosing nearest-neighbor samples by Euclidean distance, and the low classification accuracy of the feature subsets it selects, an MICReliefF algorithm is proposed that uses the Maximal Information Coefficient (MIC) as the criterion for selecting neighbor samples. In addition, the classification accuracy of a Support Vector Machine (SVM) model is used as the evaluation index, and repeated optimization automatically determines the optimal feature subset, yielding an interactive optimization of MICReliefF and the classification model, i.e., the MICReliefF-SVM automatic feature selection algorithm. The performance of MICReliefF-SVM was verified on several public UCI data sets. Experimental results show that MICReliefF-SVM not only filters out more redundant features but also selects feature subsets with good stability and generalization ability. Compared with classical feature selection algorithms such as Random Forest (RF), max-relevance min-redundancy (mRMR), and Correlation-based Feature Selection (CFS), MICReliefF-SVM achieves higher classification accuracy.
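As a hypothetical illustration of the idea (not the authors' code), the sketch below runs a Relief-style weight update in which the nearest hit and miss of each sampled instance are chosen by a dependence measure rather than Euclidean distance; absolute Spearman correlation is used purely as a stand-in for MIC, and the SVM-based subset optimization loop is omitted.

```python
# Hypothetical Relief-style weighting where the nearest hit/miss of each sample
# is chosen by a dependence measure instead of Euclidean distance. Absolute
# Spearman correlation is used below purely as a stand-in for MIC.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, n_features=20, n_informative=6,
                           random_state=0)

def similarity(a, b):
    # stand-in for the Maximal Information Coefficient between two samples
    return abs(spearmanr(a, b)[0])

def mic_relief(X, y, n_iter=50, rng=np.random.default_rng(0)):
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    for _ in range(n_iter):
        i = rng.integers(n)
        sims = np.array([similarity(X[i], X[j]) if j != i else -np.inf
                         for j in range(n)])
        hit = max((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: sims[j])
        miss = max((j for j in range(n) if y[j] != y[i]), key=lambda j: sims[j])
        # same update rule as classic Relief, per-feature and range-normalised
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

weights = mic_relief(X, y)
print("top features:", np.argsort(weights)[::-1][:6])
```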

9.
This paper presents an effective mutual information-based feature selection approach for the EMG-based motion classification task. The wavelet packet transform (WPT) is exploited to decompose the four-class motion EMG signals into successive, non-overlapping sub-bands. The energy characteristic of each sub-band is adopted to construct the initial full feature set. To reduce computational complexity, mutual information (MI) theory is utilized to obtain a reduced feature set without compromising classification accuracy. Comparison experiments against extensively used feature reduction methods such as principal component analysis (PCA), sequential forward selection (SFS) and backward elimination (BE) demonstrate its superiority in terms of computation time and classification accuracy. The proposed strategy of feature extraction and reduction is a filter-based algorithm that is independent of the classifier design. Considering that classification performance varies with the classifier, we compare fuzzy least squares support vector machines (LS-SVMs) with the conventional, widely used neural network classifier. In a further study, our experiments show that the combination of MI-based feature selection and SVM techniques outperforms other commonly used combinations, for example PCA and NN. The experimental results show that the diverse motions can be identified with high accuracy by the combination of MI-based feature selection and SVM techniques.

Compared with the combination of PCA-based feature selection and a classical neural network classifier, the superior performance of the proposed classification scheme illustrates the potential of SVM techniques combined with WPT and MI in EMG motion classification.
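A minimal sketch of such a pipeline is shown below, with synthetic signals standing in for the four-class EMG recordings: wavelet packet decomposition (assuming the PyWavelets package is available), sub-band energies as features, mutual-information ranking, and an SVM evaluated by cross-validation. Wavelet, decomposition level, and the number of retained sub-bands are illustrative assumptions.

```python
# Hypothetical sketch of the feature pipeline: wavelet packet decomposition of
# each signal, sub-band energies as features, MI-based ranking, SVM classifier.
# Synthetic signals stand in for the four-class motion EMG recordings.
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_samples, level = 120, 256, 3
y = rng.integers(0, 4, n_trials)                     # four motion classes
signals = rng.normal(size=(n_trials, n_samples)) + y[:, None] * 0.2

def wpt_energy_features(sig, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")        # non-overlapping sub-bands
    return np.array([np.sum(np.square(n.data)) for n in nodes])

X = np.array([wpt_energy_features(s, level=level) for s in signals])

# MI-based filter: keep the sub-band energies most informative about the class
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[::-1][: X.shape[1] // 2]
acc = cross_val_score(SVC(kernel="rbf"), X[:, keep], y, cv=5).mean()
print("kept sub-bands:", keep, "CV accuracy: %.3f" % acc)
```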


10.
Feature subset selection is a substantial problem in data classification tasks. Its purpose is to find an efficient subset of the original features that increases both the efficiency and the accuracy of classification while reducing its cost. High-dimensional datasets with a very large number of predictive attributes but few instances require dedicated techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. The symmetrical uncertainty (SU) criterion is exploited in the filter phase to weight features by their ability to discriminate the classes. In the wrapper phase, both FICA (fuzzy imperialist competitive algorithm) and IWSSr (Incremental Wrapper Subset Selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, especially from the biosciences and medicine, where the number of features is large compared to the number of samples, inducing a severe curse-of-dimensionality problem. Comparison with other algorithms confirms that our method attains the highest accuracy while also finding an efficient, compact subset.
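Only the filter phase is illustrated below, as a hedged sketch: features are discretized and weighted by their symmetrical uncertainty with the class, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). The FICA/IWSSr wrapper phase of the paper is not reproduced; the data set and binning are illustrative assumptions.

```python
# Hypothetical sketch of the filter phase only: weight each (discretised) feature
# by its symmetrical uncertainty with the class, SU = 2*I(X;Y) / (H(X)+H(Y)).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import mutual_info_score
from sklearn.preprocessing import KBinsDiscretizer

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def symmetrical_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    return 2.0 * mutual_info_score(x, y) / (hx + hy)   # MI and entropy both in nats

X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)
Xd = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform").fit_transform(X)

su = np.array([symmetrical_uncertainty(Xd[:, j].astype(int), y)
               for j in range(Xd.shape[1])])
ranking = np.argsort(su)[::-1]
print("features ranked by SU weight:", ranking[:10])
```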

11.
林荣强  李鸥  李青  李林林 《计算机应用》2014,34(11):3206-3209
To address the sample-labeling bottleneck in feature selection for network traffic and the inability of existing semi-supervised methods to select strongly relevant features, a multi-class semi-supervised feature selection algorithm based on class-label extension (SFSEL) is proposed. Starting from a small number of labeled samples, the algorithm first extends class labels to the unlabeled samples via K-means clustering, and then performs multi-class feature selection with a doubly regularized support vector machine (MDrSVM). Comparative experiments with the semi-supervised feature selection algorithms Spectral, PCFRSC, and SEFR on the Moore data set show that SFSEL achieves clearly higher classification accuracy and recall than the other algorithms while selecting noticeably fewer features. The results indicate that SFSEL effectively improves the relevance of the selected features and yields better network traffic classification performance.
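The sketch below illustrates only the label-extension step in a hypothetical form: all samples are clustered with K-means and each unlabeled sample receives the majority label of the labeled samples in its cluster, after which an embedded feature selector is run. An L1-penalized linear SVM stands in for the paper's MDrSVM; the data and the fraction of labeled samples are assumptions.

```python
# Hypothetical sketch of the label-extension step: cluster all samples with
# K-means, give each unlabeled sample the majority label of the labeled samples
# in its cluster, then run an embedded feature selector (L1 linear SVM here,
# standing in for the paper's MDrSVM).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
labeled = rng.random(len(y)) < 0.1                     # only ~10% of samples labeled

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
y_ext = np.full(len(y), -1)
y_ext[labeled] = y[labeled]
for c in range(km.n_clusters):
    members = km.labels_ == c
    lab = members & labeled
    if lab.any():                                      # majority label of labeled members
        vals, counts = np.unique(y[lab], return_counts=True)
        y_ext[members & ~labeled] = vals[np.argmax(counts)]

mask = y_ext >= 0                                      # drop clusters with no labeled sample
svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X[mask], y_ext[mask])
selected = np.flatnonzero(np.any(svm.coef_ != 0, axis=0))
print("selected features:", selected)
```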

12.
Combined SNP Feature Selection Based on Relief and SVM-RFE   Cited in total: 1 (self-citations: 0, citations by others: 1)
Genome-wide association studies based on SNPs face two major difficulties: the high-dimensional, small-sample nature of SNP data and the complexity of the pathology of genetic diseases. Introducing feature selection into SNP genome-wide association analysis, a combined SNP feature selection method based on Relief and SVM-RFE is proposed. The method consists of two stages: a filter stage, in which the Relief algorithm removes irrelevant SNPs, and a wrapper stage, in which support-vector-machine-based recursive feature elimination (SVM-RFE) screens out the key SNPs associated with the genetic disease. Experiments show that the method clearly outperforms SVM-RFE used alone and achieves higher classification accuracy than Relief-SVM alone, providing an effective approach for SNP genome-wide association analysis.
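A minimal sketch of the two-stage scheme is given below (not the authors' implementation): a small Relief filter drops clearly irrelevant features, then SVM-RFE with a linear SVM keeps the top candidates. Synthetic continuous data stands in for SNP genotypes, and the filter and subset sizes are illustrative.

```python
# Hypothetical sketch of the two-stage scheme: a Relief filter drops clearly
# irrelevant SNPs, then SVM-RFE (linear SVM + recursive feature elimination)
# keeps the top candidates. Synthetic data stands in for genotypes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=500, n_informative=15,
                           random_state=0)

def relief(X, y, n_iter=100, rng=np.random.default_rng(0)):
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(0) - X.min(0) + 1e-12
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class sample
        miss = np.argmin(np.where(diff, dist, np.inf))   # nearest other-class sample
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

# Stage 1 (filter): keep the 100 SNPs with the largest Relief weights.
keep = np.argsort(relief(X, y))[::-1][:100]
# Stage 2 (wrapper): SVM-RFE down to 20 SNPs on the filtered set.
rfe = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=20, step=0.1)
rfe.fit(X[:, keep], y)
print("key SNP indices:", keep[rfe.support_])
```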

13.
Frequent substructure-based approaches for classifying chemical compounds   Cited in total: 5 (self-citations: 0, citations by others: 5)
Computational techniques that build models to correctly assign chemical compounds to various classes of interest have many applications in pharmaceutical research and are used extensively at various phases during the drug development process. These techniques are used to solve a number of classification problems, such as predicting whether a chemical compound has the desired biological activity or is toxic, and filtering drug-like compounds out of large compound libraries. This paper presents a substructure-based classification algorithm that decouples the substructure discovery process from the classification model construction and uses frequent subgraph discovery algorithms to find all topological and geometric substructures present in the data set. The advantage of this approach is that during classification model construction, all relevant substructures are available, allowing the classifier to intelligently select the most discriminating ones. The computational scalability is ensured by the use of highly efficient frequent subgraph discovery algorithms coupled with aggressive feature selection. Experimental evaluation on eight different classification problems shows that our approach is computationally scalable and, on average, outperforms existing schemes by 7 percent to 35 percent.
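The subgraph mining itself (the core of the paper) is beyond a short sketch; the hypothetical example below only illustrates the downstream model-construction step, assuming a precomputed binary substructure-occurrence matrix: the most discriminating substructures are selected with a chi-squared filter (standing in for the paper's feature selection) and a linear classifier is trained.

```python
# Hypothetical sketch of the model-construction step only: given a precomputed
# binary matrix marking which frequent substructures occur in which compound
# (the subgraph mining itself is not shown), select the most discriminating
# substructures and train a classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_compounds, n_substructures = 300, 1000
X = (rng.random((n_compounds, n_substructures)) < 0.05).astype(int)  # occurrence matrix
y = rng.integers(0, 2, n_compounds)                                  # active / inactive

selector = SelectKBest(chi2, k=100).fit(X, y)       # aggressive feature selection
X_sel = selector.transform(X)
acc = cross_val_score(LinearSVC(dual=False), X_sel, y, cv=5).mean()
print("kept substructures:", np.flatnonzero(selector.get_support())[:10], "...")
print("CV accuracy: %.3f" % acc)
```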

14.
姜鹤  陈丽亚 《微机发展》2010,(3):17-19,23
With the rapid development of the Internet, intelligent classification of the massive volume of information published by major online media is of far-reaching significance for information supervision and the guidance of public opinion. Addressing the feature selection problem in text classification, this paper describes a feature evaluation and selection method based on the weights of the normal vector of the SVM separating hyperplane. The method is combined with an SVM learning algorithm and evaluated against traditional feature selection methods on the standard Reuters text test collection. The experimental results show that this feature selection method yields better classification performance than traditional methods. It provides an effective way to significantly reduce the dimensionality of the feature space while essentially preserving classifier performance, thereby improving the resource efficiency of the system.
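A minimal sketch of the normal-vector-weight idea is shown below (not the paper's system): a linear SVM is trained on tf-idf features, terms are ranked by the magnitude of the separating hyperplane's normal vector w, and only the highest-weighted terms are kept. The toy corpus stands in for the Reuters collection.

```python
# Hypothetical sketch of normal-vector-weight feature selection: train a linear
# SVM on tf-idf features, rank terms by |w| (the hyperplane's normal vector),
# and keep only the highest-weighted terms. A toy corpus stands in for Reuters.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["the rocket launch was delayed by bad weather",
        "nasa plans a new mission to the moon",
        "astronauts train for the space station",
        "the car engine needs an oil change",
        "new tires improve the car handling",
        "the dealership sells used cars and trucks"]
y = np.array([0, 0, 0, 1, 1, 1])                 # 0 = space, 1 = autos

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

svm = LinearSVC(dual=False).fit(X, y)
w = np.abs(svm.coef_).ravel()                    # weights of the normal vector
top = np.argsort(w)[::-1]                        # rank terms by |w|
terms = np.array(vec.get_feature_names_out())
print("highest-weighted terms:", terms[top[:8]])
```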

15.
In computer-aided medical systems, many practical classification applications are confronted with the massive growth in data collection and storage; this is especially the case in areas such as predicting the efficiency of medical tests, classifying tumors, and detecting cancers. Data with known class labels (labeled data) can be limited, but unlabeled data (with unknown class labels) are more readily available. Semi-supervised learning deals with methods that exploit the unlabeled data in addition to the labeled data to improve performance on the classification task. In this paper, we consider the problem of using a large amount of unlabeled data to improve the efficiency of feature selection in high-dimensional datasets when only a small set of labeled examples is available. We propose a new semi-supervised feature evaluation method called Optimized co-Forest for Feature Selection (OFFS), which combines ideas from co-forest with the embedded selection principle of Random Forest based on permutation of the out-of-bag set. We provide empirical results on several medical and biological benchmark datasets, indicating an overall significant improvement of OFFS over four other semi-supervised feature selection approaches of the filter, wrapper and embedded kind. Our method proves able to select features and measure their importance effectively, improving the performance of the hypothesis learned from a small amount of labeled samples by exploiting unlabeled samples.
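Only the embedded importance idea is sketched below, in a hedged form: a Random Forest is trained on the small labeled portion and features are ranked by permutation importance (scikit-learn's estimator-agnostic variant stands in for the paper's out-of-bag permutation); the co-forest co-training loop over unlabeled data is not reproduced.

```python
# Hypothetical sketch of the embedded importance idea only: train a Random
# Forest on the small labeled portion and rank features by permutation
# importance (sklearn's generic variant stands in for the paper's out-of-bag
# permutation); the co-forest co-training loop is not reproduced.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           random_state=0)
# pretend only 20% of the samples carry labels
X_lab, _, y_lab, _ = train_test_split(X, y, train_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X_lab, y_lab)
imp = permutation_importance(rf, X_lab, y_lab, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("OOB score on labeled data: %.3f" % rf.oob_score_)
print("top features by permutation importance:", ranking[:10])
```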

16.
17.
李平  徐新  董浩  邓旭 《计算机应用》2018,38(1):132-136
The separability index (SI) can be used to select effective classification features for each land-cover class, but with multi-dimensional features and well-separated classes, feature selection using the separability index alone cannot effectively remove the redundancy among features. This paper therefore proposes a method that combines the separability index with sequential backward selection (SBS) for feature selection, followed by multi-layer support vector machine (SVM) classification. First, the class to be classified and its candidate features are chosen from the separability indices of all classes over all features; then, using the classification accuracy of that class as the evaluation criterion, features are screened with sequential backward selection. Next, the classification features of each remaining class are selected in turn from the pairwise separability indices of the remaining classes and sequential backward selection. Finally, classification is performed with a multi-layer SVM. Experimental results show that, compared with multi-layer SVM classification using features selected by the separability index alone, the proposed method improves classification accuracy by 2%, achieves above 86% accuracy for every class, and halves the running time.
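The sketch below illustrates only the sequential-backward-selection step, using an SVM's cross-validated accuracy as the screening criterion; the separability index and the multi-layer SVM of the paper are not reproduced, and the data set and target subset size are illustrative.

```python
# Hypothetical sketch of the sequential-backward-selection step only, with an
# SVM's cross-validated accuracy as the screening criterion (the separability
# index and the multi-layer SVM of the paper are not reproduced here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

sbs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=6,
                                direction="backward", cv=5)
sbs.fit(X, y)
kept = np.flatnonzero(sbs.get_support())
acc = cross_val_score(SVC(kernel="rbf"), X[:, kept], y, cv=5).mean()
print("features kept after SBS:", kept, "CV accuracy: %.3f" % acc)
```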

18.
Business failure prediction (BFP) is an effective tool to help financial institutions and related parties make the right investment decisions, especially in the current competitive environment. The topic is a classification-type task, one of whose aims is to generate more accurate hit ratios. The support vector machine (SVM) is a statistical learning technique whose advantage is its high generalization performance. The objective of this work is threefold. Firstly, SVM is used to predict business failure through a straightforward wrapper approach that helps the model produce more accurate predictions. The wrapper is implemented with a forward feature selection method composed of feature ranking and feature selection. Meanwhile, this work investigates the feasibility of using linear SVMs to select features for all SVMs in the wrapper, since non-linear SVMs tend to over-fit the data. Finally, a robust re-sampling approach is used to evaluate model performance for the task of BFP in China. In the empirical research, the performances of a linear SVM, a polynomial SVM, a Gaussian SVM, and a sigmoid SVM are compared under the best filter of stepwise MDA and under wrappers using linear and non-linear SVMs, respectively, as evaluation functions. The results indicate that the non-linear SVM with a radial basis function kernel and features selected by the linear SVM performs significantly better than all the other SVMs. Meanwhile, all SVMs with features selected by the linear SVM perform at least as well as SVMs with other optimal features.
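As a hypothetical sketch of the wrapper (not the authors' experiment), the code below ranks features by the weight vector of a linear SVM and then adds them in that order as long as an RBF-kernel SVM's cross-validated accuracy improves; synthetic data stands in for the financial ratios.

```python
# Hypothetical sketch of the wrapper: rank features with a linear SVM's weight
# vector, then add them in that order while an RBF-kernel SVM's cross-validated
# accuracy keeps improving. Synthetic data stands in for the financial ratios.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=240, n_features=30, n_informative=8,
                           random_state=0)
X = StandardScaler().fit_transform(X)

# feature ranking from the linear SVM (cheap, and less prone to over-fitting
# than ranking with the non-linear SVM itself)
ranking = np.argsort(np.abs(LinearSVC(dual=False).fit(X, y).coef_).ravel())[::-1]

selected, best = [], -np.inf
for f in ranking:                                  # forward pass in ranked order
    score = cross_val_score(SVC(kernel="rbf"), X[:, selected + [f]], y, cv=5).mean()
    if score > best:
        best, selected = score, selected + [f]
print("selected ratios:", selected, "CV accuracy: %.3f" % best)
```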

19.
Feature weighting based band selection provides a computationally undemanding approach to reduce the number of hyperspectral bands in order to decrease the computational requirements for processing large hyperspectral data sets. In a recent feature weighting based band selection method, a pair-wise separability criterion and matrix coefficients analysis are used to assign weights to original bands, after which bands identified to be redundant using cross correlation are removed, as it is noted that feature weighting itself does not consider spectral correlation. In the present work, it is proposed to use phase correlation instead of conventional cross correlation to remove redundant bands in the last step of feature weighting based hyperspectral band selection. Support Vector Machine (SVM) based classification of hyperspectral data with a reduced number of bands is used to evaluate the classification accuracy obtained with the proposed approach, and it is shown that feature weighting band selection with the proposed phase correlation based redundant band removal method provides increased classification accuracy compared to feature weighting band selection with conventional cross correlation based redundant band removal.
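A minimal sketch of the redundancy test is given below, with random vectors standing in for flattened band images: the phase-correlation peak between two bands is computed from the phase of their cross-power spectrum, and a band is dropped when its peak against an already selected band exceeds a threshold. The threshold and the greedy scan order are illustrative assumptions.

```python
# Hypothetical sketch of redundant-band removal by phase correlation: two bands
# whose phase-correlation peak is high are treated as redundant and one is
# dropped. Random vectors stand in for (flattened) hyperspectral band images.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_pixels = 8, 1024
bands = rng.normal(size=(n_bands, n_pixels))
bands[1] = np.roll(bands[0], 5) + 0.01 * rng.normal(size=n_pixels)  # near-duplicate band

def phase_correlation_peak(a, b):
    cross = np.fft.fft(a) * np.conj(np.fft.fft(b))
    cross /= np.abs(cross) + 1e-12               # keep only phase information
    return np.max(np.abs(np.fft.ifft(cross)))    # peak near 1 for shifted duplicates

selected = []
for i in range(n_bands):                         # greedy scan in band order
    if all(phase_correlation_peak(bands[i], bands[j]) < 0.5 for j in selected):
        selected.append(i)
print("bands kept after phase-correlation pruning:", selected)
```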

20.
The focus of this paper is on joint feature re-extraction and classification in cases when the training data set is small. An iterative semi-supervised support vector machine (SVM) algorithm is proposed, where each iteration consists of both feature re-extraction and classification, and the feature re-extraction is based on the classification results of the previous iteration. Feature extraction is first discussed in the framework of Rayleigh coefficient maximization. The effectiveness of the common spatial pattern (CSP) feature, which is commonly used in electroencephalogram (EEG) data analysis and EEG-based brain-computer interfaces (BCIs), can be explained by Rayleigh coefficient maximization. Two other features are also defined using the Rayleigh coefficient. These features are effective for discriminating two classes with different means or different variances. If we extract features based on Rayleigh coefficient maximization, a large labeled training data set is generally required; otherwise, the extracted features are not reliable. Thus we present an iterative semi-supervised SVM algorithm embedded with feature re-extraction. This iterative algorithm can be used to extract these three features reliably and perform classification simultaneously in cases where the training data set is small. Each iteration is composed of two main steps: (i) the training data set is updated/augmented using unlabeled test data with their predicted labels, and features are re-extracted based on the augmented training data set; (ii) the re-extracted features are classified by a standard SVM. Regarding parameter setting and model selection of our algorithm, we also propose a semi-supervised learning-based method using the Rayleigh coefficient, in which both training data and test data are used. This method is suitable when cross-validation model selection may not work for a small training data set. Finally, the results of data analysis are presented to demonstrate the validity of our approach.
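The loop structure is sketched below in a hypothetical form: features are re-extracted from the current (augmented) training set, a standard SVM is trained and labels the test data, and the training set is augmented with those predictions. A simple class-mean-difference projection stands in for the paper's Rayleigh-coefficient/CSP feature extraction, and the number of iterations is arbitrary.

```python
# Hypothetical sketch of the iterative loop: extract features from the current
# (augmented) training set, train an SVM, label the test data, augment, repeat.
# A class-mean-difference projection stands in for the paper's Rayleigh
# coefficient / CSP feature extraction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.1, random_state=0)

def extract_features(X_fit, y_fit, X_apply):
    # placeholder: project onto the class-mean difference direction
    w = X_fit[y_fit == 1].mean(0) - X_fit[y_fit == 0].mean(0)
    w /= np.linalg.norm(w) + 1e-12
    return (X_apply @ w)[:, None]

X_aug, y_aug = X_tr.copy(), y_tr.copy()
for it in range(5):
    F_tr = extract_features(X_aug, y_aug, X_aug)         # (i) re-extract features
    F_te = extract_features(X_aug, y_aug, X_te)
    clf = SVC(kernel="rbf").fit(F_tr, y_aug)              # (ii) standard SVM
    y_pred = clf.predict(F_te)
    X_aug = np.vstack([X_tr, X_te])                       # augment with predicted labels
    y_aug = np.concatenate([y_tr, y_pred])
    print("iteration %d, test accuracy %.3f" % (it + 1, np.mean(y_pred == y_te)))
```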
