Similar Documents
20 similar records found.
1.
Feature selection plays a vital role in machine learning and data mining. As an efficient filter-style feature selection algorithm, Relief handles many data types and tolerates noise well, so it is widely used. However, the classic Relief algorithm evaluates discrete features in a rather simplistic way and does not fully exploit the latent relationship between features and class labels during selection, leaving considerable room for improvement. To address this weakness, an evaluation method for discrete features based on label correlation is proposed. The algorithm takes the characteristics of different feature types into account, gives a distance metric for mixed features, and, starting from the correlation between discrete features and labels, redefines Relief's evaluation scheme for discrete features. Experimental results show that, compared with the classic Relief algorithm and several existing feature selection algorithms for mixed data, the improved Relief algorithm achieves higher classification accuracy in all cases and performs well.
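For context, classic Relief scores a feature by contrasting its distance to the nearest same-class neighbor (near-hit) and the nearest different-class neighbor (near-miss) of randomly sampled instances. A minimal NumPy sketch of that baseline (binary classes, features pre-scaled to [0, 1]; this is the classic algorithm, not the label-correlation variant proposed above):

```python
import numpy as np

def relief(X, y, n_iters=100, rng=np.random.default_rng(0)):
    """Classic Relief for binary classification; X scaled to [0, 1]."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)
        diff = np.abs(X - X[i])            # per-feature distances to all samples
        dist = diff.sum(axis=1)
        dist[i] = np.inf                   # exclude the sampled instance itself
        same, other = y == y[i], y != y[i]
        hit = np.where(same, dist, np.inf).argmin()    # nearest same-class sample
        miss = np.where(other, dist, np.inf).argmin()  # nearest other-class sample
        w += diff[miss] - diff[hit]        # reward class separation, penalize spread
    return w / n_iters
```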

2.
Combined SNP feature selection based on Relief and SVM-RFE
Genome-wide association analysis of SNPs faces two major difficulties: the high-dimensional, small-sample nature of SNP data and the complexity of the pathology of genetic diseases. To tackle them, feature selection is introduced into SNP genome-wide association analysis, and a combined SNP feature selection method based on Relief and SVM-RFE is proposed. The method has two stages: a filter stage, in which the Relief algorithm removes irrelevant SNPs, and a wrapper stage, in which support vector machine recursive feature elimination (SVM-RFE) screens out the key SNPs related to the genetic disease. Experiments show that the method clearly outperforms SVM-RFE used alone and achieves higher classification accuracy than Relief-SVM used alone, providing an effective approach for SNP genome-wide association analysis.
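A hedged sketch of such a filter-then-wrapper pipeline, reusing the `relief` function sketched for item 1 for the filter stage and scikit-learn's `RFE` with a linear SVM for the wrapper stage (the cut-off of 500 candidates and the final subset size are illustrative assumptions):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def relief_then_svmrfe(X, y, n_filter=500, n_final=20):
    """Stage 1: Relief drops irrelevant features; stage 2: SVM-RFE ranks the rest."""
    w = relief(X, y)                              # filter scores (function above)
    keep = np.argsort(w)[-n_filter:]              # top-scored candidates survive
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=n_final).fit(X[:, keep], y)
    return keep[rfe.support_]                     # indices of the selected features
```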

3.
To meet the demands of an aeroengine prognostics and health management system for condition assessment and fault diagnosis, and exploiting the ability of LVQ networks to recognize important cluster structure contained in the data when handling classification problems, a fault feature extraction method for aeroengines based on an LVQ neural network is proposed. The structure and learning algorithm of the LVQ network are analyzed, along with the measured parameters of a particular aeroengine type, the data preprocessing, and the selection of fault samples. A system simulation is carried out at the engine's design point. A comparative test against a BP-network classifier demonstrates the feasibility and effectiveness of the algorithm.
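The heart of LVQ training is a simple prototype update: the winning prototype is pulled toward a correctly classified sample and pushed away otherwise. A minimal LVQ1 sketch (the decaying learning-rate schedule is an illustrative assumption; prototypes and their labels are supplied by the caller, e.g., per-class means):

```python
import numpy as np

def lvq1(X, y, protos, proto_labels, lr=0.05, epochs=30):
    """LVQ1: move the nearest prototype toward/away from each training sample."""
    protos = protos.copy()
    for epoch in range(epochs):
        eta = lr * (1 - epoch / epochs)                      # decaying learning rate
        for x, t in zip(X, y):
            j = np.linalg.norm(protos - x, axis=1).argmin()  # winning prototype
            sign = 1.0 if proto_labels[j] == t else -1.0
            protos[j] += sign * eta * (x - protos[j])
    return protos
```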

4.
The discernibility of feature subsets (DFS) criterion ignores the effect of feature measurement scale on a subset's discriminating power. To remedy this, the coefficient of variation is introduced and a generalized criterion, GDFS (generalized discernibility of feature subsets), is proposed. Combining GDFS with four search strategies (sequential forward, sequential backward, sequential forward floating, and sequential backward floating), with an extreme learning machine as the classifier, yields four hybrid feature selection algorithms. Experiments on UCI and gene datasets, comparisons with DFS, Relief, DRJMIM, mRMR, LLE Score, AVC, SVM-RFE, VMInaive, AMID, AMID-DWSFS, CFR and FSSC-SD, and statistical significance tests show that the proposed GDFS outperforms DFS and selects feature subsets with better classification ability.
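As an illustration of the search side only (not of the GDFS criterion, whose definition is given in the paper), here is a generic sequential forward search that greedily grows a subset under any evaluation function; cross-validated accuracy of a stand-in classifier is used as the score:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

def sequential_forward(X, y, k, clf=LogisticRegression(max_iter=1000)):
    """Greedy SFS: repeatedly add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [(cross_val_score(clf, X[:, selected + [j]], y, cv=3).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```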

5.
In a DNA microarray dataset, gene expression data often have a huge number of features (referred to as genes) versus a small number of samples. With the development of DNA microarray technology, the number of dimensions increases even faster than before, which can lead to the curse of dimensionality. To get good classification performance, it is necessary to preprocess the gene expression data. Support vector machine recursive feature elimination (SVM-RFE) is a classical method for gene selection. However, SVM-RFE suffers from high computational complexity. To remedy this, this paper enhances SVM-RFE for gene selection by incorporating feature clustering, called feature clustering SVM-RFE (FCSVM-RFE). The proposed method first performs gene selection roughly and then ranks the selected genes. First, a clustering algorithm is used to cluster genes into gene groups, in each of which genes have similar expression profiles. Then, a representative gene is found for each gene group, yielding a representative gene set. Finally, SVM-RFE is applied to rank these representative genes. FCSVM-RFE can reduce both the computational complexity and the redundancy among genes. Experiments on seven public gene expression datasets show that FCSVM-RFE achieves better classification performance and lower computational complexity than state-of-the-art methods such as SVM-RFE.
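A hedged sketch of the cluster-then-rank idea: k-means groups genes (features) by expression profile, the gene nearest each centroid represents its group, and SVM-RFE ranks only the representatives. The cluster count and final subset size are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def fcsvm_rfe(X, y, n_clusters=50, n_final=10):
    """Cluster genes by expression profile, then SVM-RFE on representatives."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X.T)
    reps = []
    for c in range(n_clusters):                      # gene nearest each centroid
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[d.argmin()])
    reps = np.array(reps)
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=n_final).fit(X[:, reps], y)
    return reps[rfe.support_]
```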

6.
A feature selection algorithm based on classification margin
For two-class feature selection problems, the linear separability of the feature space is first discussed and a criterion for judging it is given. Next, drawing on the principles of support vector machines, the basic properties of a feature separability criterion are analyzed. Finally, a feature efficiency is defined according to each feature's contribution to the classification margin, and it is used for feature selection and dimensionality reduction of the feature space. Experimental results on measured data and on public UCI (University of California, Irvine) datasets show that, compared with the classic Relief feature selection algorithm, the algorithm clearly improves recognition performance and generalization ability.
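Since a linear SVM's margin is 2/||w||, each feature's share of |w| is a natural proxy for its contribution to the margin. A minimal sketch of ranking features this way (the |w|-based proxy and the standardization step are illustrative assumptions, not the paper's exact feature-efficiency definition; a binary task is assumed):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def margin_ranking(X, y):
    """Rank features by |w_i| of a linear SVM (binary task assumed):
    larger |w_i| means a larger contribution to the classification margin."""
    Xs = StandardScaler().fit_transform(X)     # put features on a common scale
    w = LinearSVC(C=1.0, max_iter=10000).fit(Xs, y).coef_.ravel()
    return np.argsort(np.abs(w))[::-1]         # feature indices, best first
```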

7.
Relief feature selection algorithms on imbalanced datasets
Relief refers to a family of feature selection methods, including the original Relief algorithm and its later extension ReliefF; the core idea is to assign larger weights to features that contribute more to classification. The algorithms are simple and efficient, and are therefore widely applied. However, applying Relief directly to noisy or imbalanced datasets gives unsatisfactory results. Building on Relief, a feature selection algorithm for noisy data, called threshold-Relief, is proposed, which effectively removes the influence of noisy data on the classification results. Combined with the K-means algorithm, two feature selection algorithms for imbalanced datasets, called K-means-ReliefF and K-means-Relief sampling, are proposed, effectively compensating for Relief's weaknesses on imbalanced datasets. Experiments demonstrate the effectiveness of the algorithms.
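One plausible reading of the K-means-Relief sampling idea is to rebalance the data before scoring: compress the majority class to as many k-means centroids as there are minority samples, then run Relief on the balanced set. A hedged sketch under that assumption (binary classes), reusing the `relief` function sketched for item 1:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_relief(X, y):
    """Compress the majority class to k-means centroids, then score with Relief."""
    labels, counts = np.unique(y, return_counts=True)
    minority, majority = labels[counts.argmin()], labels[counts.argmax()]
    n_min = counts.min()
    centers = KMeans(n_clusters=n_min, n_init=10, random_state=0) \
        .fit(X[y == majority]).cluster_centers_
    Xb = np.vstack([X[y == minority], centers])              # balanced training set
    yb = np.concatenate([np.full(n_min, minority), np.full(n_min, majority)])
    return relief(Xb, yb)                                    # function from item 1
```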

8.
To solve the problem of identifying the source printer of a printed document, a printer source identification method based on statistical texture feature selection is proposed. Considering both the spatial-domain and time-frequency-domain characteristics of printed character images, GLCM and DWT statistical texture features are combined; the ReliefF algorithm performs an initial selection over the combined features, and SVM-RFE is used for a second round of selection. Experimental results show classification accuracies of 95.20% on an English same-character set with repeated samples and 75.00% on a Chinese different-character set without repeated samples; feature combination plus feature selection helps improve the discrimination performance of printer source identification.
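A hedged sketch of the combined feature extraction using scikit-image and PyWavelets (the GLCM distances/angles, the wavelet, and the chosen statistics are illustrative assumptions):

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def texture_features(img):
    """Combine GLCM statistics (spatial domain) with DWT subband statistics
    (time-frequency domain). `img` is a 2-D uint8 grayscale character image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")  # one-level 2-D DWT
    dwt_feats = [f(b) for b in (cA, cH, cV, cD) for f in (np.mean, np.std)]
    return np.array(glcm_feats + dwt_feats)
```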

9.
Feature selection is a crucial machine learning technique aimed at reducing the dimensionality of the input space. By discarding useless or redundant variables, it not only improves model performance but also facilitates interpretability. The well-known Support Vector Machines–Recursive Feature Elimination (SVM-RFE) algorithm provides good performance with moderate computational effort, in particular for wide datasets. When using SVM-RFE on a multiclass classification problem, the usual strategy is to decompose it into a series of binary ones and to generate an importance statistic for each feature on each binary problem. These importances are then averaged over the set of binary problems to synthesize a single value for feature ranking. In some cases, however, this procedure can lead to poor selections. In this paper we discuss six new strategies, based on list combination, designed to yield improved selections starting from the importances given by the binary problems. We evaluate them on artificial and real-world datasets, using both One-Vs-One (OVO) and One-Vs-All (OVA) strategies. Our results suggest that the OVO decomposition is most effective for feature selection on multiclass problems. We also find that in most situations the new K-First strategy finds better subsets of features than the traditional weight-average approach.
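The traditional weight-average baseline that the paper improves on can be sketched directly: fit one linear SVM per class pair and average each feature's |w| over the binary problems (a baseline sketch, not the proposed K-First strategy):

```python
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

def ovo_weight_average(X, y):
    """Average per-feature |w| over all One-Vs-One binary linear SVMs."""
    ovo = OneVsOneClassifier(LinearSVC(max_iter=5000)).fit(X, y)
    W = np.abs(np.vstack([est.coef_ for est in ovo.estimators_]))
    return W.mean(axis=0)            # one importance value per feature
```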

10.

Preprocessing of data is ubiquitous, and choosing significant attributes is one of the important steps prior to processing. Feature selection is used to create a subset of relevant features for effective classification of data. In the classification of high-dimensional data, the classifier usually depends on the feature subset used for classification. The Relief algorithm is a popular heuristic approach to selecting significant feature subsets: it scores features individually and selects the top-scored features for subset generation. Many extensions of the Relief algorithm have been developed. However, an important defect in Relief-based algorithms has been ignored for years: because of the uncertainty and noise of the instances used for measuring feature scores, the outcomes fluctuate with the sampled instances, which leads to poor classification accuracy. To fix this problem, a novel feature selection algorithm based on a Chebyshev-distance outlier detection model is proposed, called noisy feature removal ReliefF (NFR-ReliefF for short). To demonstrate its performance, extensive experiments, including classification tests, are carried out on nine benchmark high-dimensional datasets by combining the proposed model with standard classifiers, including naïve Bayes, C4.5 and KNN. The results show that NFR-ReliefF outperforms the other models on most tested datasets.
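A hedged sketch of the outlier-screening step under one plausible reading: an instance counts as noisy when its Chebyshev (L-infinity) distance to its own class centroid exceeds a per-class quantile threshold, and only the surviving instances feed the ReliefF scoring (the quantile is an illustrative assumption):

```python
import numpy as np

def chebyshev_inliers(X, y, q=0.9):
    """Keep instances whose Chebyshev distance to their class centroid is below
    the class's q-quantile; the cleaned set is then scored with ReliefF."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.abs(X[idx] - X[idx].mean(axis=0)).max(axis=1)  # L-inf distance
        keep[idx] = d <= np.quantile(d, q)
    return X[keep], y[keep]
```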


11.
Learning vector quantization (LVQ) and generalized learning vector quantization (GLVQ) both use the Euclidean distance as the similarity measure, ignoring the value range of each attribute dimension and therefore failing to distinguish the different roles the dimensions play in classification. To address this, a feature-range-oriented vector similarity measure is used to improve GLVQ, yielding the GLVQ-FR algorithm. Comparative experiments on video-based vehicle type classification data against LVQ2.1, GLVQ, GRLVQ and GMLVQ show that GLVQ-FR offers high classification accuracy, fast computation, and practical usability in real production environments.
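The core change can be illustrated as a distance in which each dimension's difference is normalized by that feature's observed value range, so wide-ranged attributes no longer dominate (a minimal sketch of the idea, not the exact GLVQ-FR formulation):

```python
import numpy as np

def range_normalized_dist(a, b, ranges):
    """Squared distance; each dimension is scaled by that feature's value range
    so wide-ranged attributes cannot dominate the similarity measure."""
    return float(np.sum(((a - b) / ranges) ** 2))

X_train = np.array([[0.0, 100.0], [1.0, 900.0], [0.5, 500.0]])
ranges = np.maximum(X_train.max(0) - X_train.min(0), 1e-12)  # guard constant dims
print(range_normalized_dist(X_train[0], X_train[1], ranges))
```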

12.
杨柳, 李云. 《计算机应用》 2021, 41(12): 3521-3526
K-anonymity algorithms generalize or suppress data until the K-anonymity condition holds. Since they weigh data privacy against classification performance while hiding features, they can be viewed as a special kind of feature selection, namely K-anonymity feature selection. K-anonymity feature selection methods combine the characteristics of K-anonymity and feature selection and use multiple evaluation criteria to pick a K-anonymous feature subset. Filter-style K-anonymity feature selection can hardly search all candidate feature subsets satisfying the K-anonymity condition and thus cannot guarantee the best classification performance, while wrapper-style feature selection is computationally expensive. Therefore, combining the characteristics of filter-style feature ranking and wrapper-style feature selection, the forward search strategy of existing methods is improved and a hybrid K-anonymity feature selection algorithm is designed, which uses classification performance as the evaluation criterion to pick the K-anonymous feature subset with the best classification performance. Experiments on several public datasets show that the proposed algorithm can exceed existing algorithms in classification performance with smaller information loss.
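The K-anonymity condition itself is easy to test: every value combination over the selected quasi-identifier features must occur at least K times. A minimal pandas sketch of the check that such a search would invoke for each candidate subset (the toy data are illustrative):

```python
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, features: list, k: int) -> bool:
    """True iff every value combination over `features` occurs >= k times."""
    return df.groupby(features).size().min() >= k

df = pd.DataFrame({"age": [30, 30, 30, 40], "zip": ["10a", "10a", "10a", "20b"]})
print(is_k_anonymous(df, ["age", "zip"], k=2))  # False: the (40, "20b") row is unique
```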

13.
Soft nearest prototype classification
We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs a gradient descent on a cost function minimizing the classification error on the training set. We investigate the properties of the algorithm and assess its performance for several toy datasets and for an optical letter classification task. Results show 1) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy; 2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and 3) that annealing of the width parameter improves the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
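A hedged sketch of the soft assignment at the heart of the method: Gaussian responsibilities over prototypes, with the width parameter annealed so that the crisp nearest-prototype rule is recovered in the limit (the annealing schedule shown is an illustrative assumption):

```python
import numpy as np

def soft_responsibilities(x, protos, sigma):
    """Gaussian soft assignment of sample x to prototypes; as sigma -> 0 this
    recovers crisp nearest-prototype classification."""
    d2 = ((protos - x) ** 2).sum(axis=1)
    z = np.exp(-d2 / (2.0 * sigma ** 2))
    return z / z.sum()

# annealing: start with a wide kernel and shrink it every epoch, e.g.
# sigma = sigma0 * decay ** epoch, with 0 < decay < 1
```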

14.
Kernel function in SVM-RFE based hyperspectral data band selection
Support vector machine recursive feature elimination (SVM-RFE) is inefficient when applied to band selection for hyperspectral data, since it usually uses a non-linear kernel and retrains the SVM every time a band is deleted. Recent research shows that an SVM with a non-linear kernel does not always outperform a linear one in SVM classification; similarly, it is uncertain which kernel is better for SVM-RFE based band selection. This paper compares the classification results of SVM-RFE using the two SVMs, then designs two optimization strategies to accelerate the band selection process: a percentage-based elimination method and a fixed-number elimination method. An experiment on AVIRIS hyperspectral data found: ① the classification precision of SVM decreases slightly as redundant bands accumulate, which means SVM classification needs feature selection for the sake of accuracy; ② the band set selected by SVM-RFE with a linear SVM achieves higher classification accuracy with fewer bands than that selected with a non-linear SVM; ③ both optimization strategies improve the efficiency of feature selection, with percentage elimination outperforming fixed elimination in computational efficiency and classification accuracy.
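scikit-learn's `RFE` supports both acceleration flavors through its `step` parameter: a float in (0, 1) removes that fraction of the remaining features per iteration (percentage elimination), while an integer removes a fixed count. A short sketch with a linear SVM (the band counts are illustrative):

```python
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

svm = SVC(kernel="linear")
pct_rfe = RFE(svm, n_features_to_select=30, step=0.10)  # drop 10% of bands/round
fix_rfe = RFE(svm, n_features_to_select=30, step=5)     # drop 5 bands per round
# band_mask = pct_rfe.fit(X_bands, y).support_          # boolean mask over bands
```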

15.
DNA microarray is a very active area of research in the molecular diagnosis of cancer. Microarray data are composed of many thousands of features and from tens to hundreds of instances, which makes the analysis and diagnosis of cancer very complex. In this case, gene/feature selection becomes an elemental and essential task in data classification. In this paper, we propose a complete cancer diagnostic process through kernel-based learning and feature selection. First, support vector machine recursive feature elimination (SVM-RFE) is used to prefilter the genes. Second, SVM-RFE is enhanced by using the binary dragonfly algorithm (BDF), a recently developed metaheuristic that has never been benchmarked in the context of feature selection. The objective function is the average classification accuracy rate generated by three kernel-based learning methods. We conducted a series of experiments on six microarray datasets often used in the literature. The experimental results demonstrate that this approach is efficient and provides a higher classification accuracy rate using a reduced number of genes.

16.
"Dimensionality" is one of the major problems affecting the quality of learning in most machine learning and data mining tasks. Training a classification model on a high-dimensional dataset may lead to overfitting of the learned model to the training data; overfitting reduces the generalization of the model and therefore causes poor classification accuracy on new test instances. Another disadvantage of high dimensionality is the high CPU time required for learning and testing the model. Applying feature selection to the dataset before the learning process is essential to improve the performance of the classification task. In this study, a new hybrid method that combines artificial bee colony optimization with the differential evolution algorithm is proposed for feature selection in classification tasks. The developed hybrid method is evaluated on fifteen datasets from the UCI Repository that are commonly used in classification problems. For a complete evaluation, the proposed hybrid feature selection method is compared with artificial-bee-colony- and differential-evolution-based feature selection methods, as well as with three of the most popular feature selection techniques: information gain, chi-square, and correlation feature selection. The performance of the proposed method is also compared with studies in the literature that use the same datasets. The experimental results show that the developed hybrid method selects good features for classification tasks, improving the run-time performance and accuracy of the classifier. The proposed hybrid method may also be applied to other search and optimization problems, as its performance for feature selection is better than pure artificial bee colony optimization and differential evolution.

17.
Relief is widely acknowledged as an effective filter-style feature evaluation method, but its feature weights fluctuate with the sampled instances, which degrades recognition accuracy. A feature weight optimization algorithm based on a mean-variance model is proposed: features are evaluated by the expected average contribution to sample discrimination and by the fluctuation of the combined contribution, making the feature selection results more stable and accurate. Feature selection and classification experiments on seismic signals of ground moving targets collected in the field show that the feature subsets obtained by the algorithm discriminate targets better than those obtained by Relief.
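One hedged way to realize a mean-variance criterion is to rerun Relief on bootstrap subsamples and score each feature by its mean weight minus a penalty on the weight's standard deviation (the bootstrap scheme and penalty coefficient are illustrative assumptions); this reuses the `relief` function sketched for item 1:

```python
import numpy as np

def mean_variance_scores(X, y, runs=20, lam=1.0, seed=0):
    """Stabilized Relief: reward high mean weight, penalize weight fluctuation."""
    rng = np.random.default_rng(seed)
    W = []
    for _ in range(runs):
        idx = rng.choice(len(y), size=len(y), replace=True)  # bootstrap sample
        W.append(relief(X[idx], y[idx]))                     # function from item 1
    W = np.array(W)
    return W.mean(axis=0) - lam * W.std(axis=0)
```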

18.
Xu Ruohao, Li Mengmeng, Yang Zhongliang, Yang Lifang, Qiao Kangjia, Shang Zhigang. 《Applied Intelligence》 2021, 51(10): 7233-7244

Feature selection is a technique for improving the classification accuracy of classifiers and a convenient data visualization method. As an incremental, task-oriented, model-free learning algorithm, Q-learning is well suited to feature selection. This study proposes a dynamic feature selection algorithm that combines feature selection and Q-learning in a single framework. First, Q-learning is used to construct a discriminant function for each class of the data. Next, features are ranked by considering the discriminant-function vectors of all classes together, with the ranking updated during the process of updating the discriminant-function vectors. Finally, experiments compare the proposed algorithm with four feature selection algorithms. The results on benchmark datasets verify its effectiveness: its classification performance is better than that of the other feature selection algorithms, it also performs well at removing redundant features, and experiments on the effect of learning rates demonstrate that parameter selection for the algorithm is very simple.


19.
黄琴, 钱文彬, 王映龙, 吴兵龙. 《智能系统学报》 2019, 14(5): 929-938
In multi-label learning, feature selection is an effective means of improving classification performance. Multi-label feature selection algorithms have high computational complexity and ignore the fact that acquiring data in real applications often incurs a cost; to address this, a multi-label feature selection algorithm for cost-sensitive data is proposed. The algorithm uses information entropy to analyze the correlation between features and labels, redefines a feature importance criterion based on test cost, and, from the standard deviations of the normally distributed feature importance and feature cost, gives a reasonable threshold selection method; the threshold then removes redundant and irrelevant features, yielding a feature subset with low total cost. Experimental comparison and analysis on multi-label data show the effectiveness and feasibility of the method.
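A hedged sketch of the entropy-based relevance step: average each feature's mutual information across the label columns, discount it by the feature's test cost, and keep features above a simple threshold (the cost discount and the mean threshold are illustrative assumptions; the paper derives its threshold from the standard deviations of importance and cost):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def cost_sensitive_relevance(X, Y, cost):
    """X: (n, d) features; Y: (n, q) binary label matrix; cost: (d,) test costs.
    Returns indices of features whose cost-adjusted relevance clears a threshold."""
    mi = np.mean([mutual_info_classif(X, Y[:, j], random_state=0)
                  for j in range(Y.shape[1])], axis=0)  # avg relevance per feature
    score = mi / (1.0 + cost)                           # penalize expensive features
    return np.where(score > score.mean())[0]            # simple mean threshold
```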

20.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of the data and increasing the efficiency of learning algorithms. Specifically, feature selection realized in the absence of class labels, namely unsupervised feature selection, is challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, which is treated as a matrix factorization problem. The advantages of this work are four-fold. First, dwelling on the technique of matrix factorization, a unified framework is established for feature selection, feature extraction and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient technique for dealing with high-dimensional data. Third, an effective method for feature selection with numeric data is put forward, instead of drawing support from a discretization process. Fourth, this new criterion provides a sound foundation for embedding kernel tricks into feature selection, and an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods using six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms perform better than the others for almost all datasets experimented with here.
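As a hedged illustration of scoring features via matrix factorization (a generic sketch in the spirit of subspace learning, not the paper's exact criterion): factorize X ≈ WH with NMF and rate feature j by the norm of its column in H, i.e., how strongly it participates in the learned subspace:

```python
import numpy as np
from sklearn.decomposition import NMF

def mf_feature_scores(X, n_components=5):
    """Factorize X ~ W @ H (X must be non-negative) and score feature j by
    ||H[:, j]||: features loading heavily on the subspace rank higher."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500,
                random_state=0)
    model.fit(X)
    return np.linalg.norm(model.components_, axis=0)  # one score per feature
```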
