Similar Literature
1.
Feature subset selection (FSS) is a key problem in the data-mining classification task that helps to obtain more compact and understandable models without degrading (or even while improving) their performance. In this work we focus on FSS in high-dimensional datasets, that is, datasets with a very large number of predictive attributes. In this case, standard sophisticated wrapper algorithms cannot be applied because of their complexity, and computationally lighter filter-wrapper algorithms have recently been proposed. In this work we propose a stochastic algorithm based on the GRASP meta-heuristic, with the main goal of speeding up the feature subset selection process, basically by reducing the number of wrapper evaluations to be carried out. GRASP is a multi-start constructive method that builds a solution in its first stage and then runs an improvement stage over that solution. Several instances of the proposed GRASP method are experimentally tested and compared with state-of-the-art algorithms over 12 high-dimensional datasets. The statistical analysis of the results shows that our proposal is comparable in accuracy and cardinality of the selected subset to previous algorithms, but requires significantly fewer evaluations.
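A minimal sketch of the two-stage GRASP loop the abstract describes, assuming a generic wrapper evaluator (cross-validated k-NN from scikit-learn); the restricted-candidate-list size and the other parameter values are illustrative, not the paper's own settings:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def wrapper_eval(X, y, subset):
    """Wrapper evaluation: CV accuracy of k-NN on the candidate subset."""
    if not subset:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=5).mean()

def grasp_fss(X, y, n_starts=10, subset_size=15, rcl_size=5, rng=None):
    rng = rng or np.random.default_rng(0)
    scores = mutual_info_classif(X, y)           # cheap filter scores guide construction
    best_subset, best_acc = [], 0.0
    for _ in range(n_starts):
        # Stage 1: greedy randomized construction from a restricted candidate list.
        subset = []
        while len(subset) < subset_size:
            remaining = [f for f in np.argsort(scores)[::-1] if f not in subset]
            subset.append(int(rng.choice(remaining[:rcl_size])))
        acc = wrapper_eval(X, y, subset)
        # Stage 2: local improvement by single-feature removals,
        # which keeps the number of wrapper evaluations small.
        improved = True
        while improved:
            improved = False
            for f in list(subset):
                trial = [g for g in subset if g != f]
                trial_acc = wrapper_eval(X, y, trial)
                if trial_acc >= acc:             # drop f if accuracy does not degrade
                    subset, acc, improved = trial, trial_acc, True
        if acc > best_acc:
            best_subset, best_acc = subset, acc
    return best_subset, best_acc
```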

2.
孙林  赵婧  徐久成  王欣雅 《计算机应用》2022,42(5):1355-1366
To address the problems that the classical monarch butterfly optimization (MBO) algorithm cannot handle continuous data well and that rough set models are inadequate for large-scale, high-dimensional, complex data, a feature selection algorithm based on neighborhood rough sets (NRS) and MBO was proposed. First, local disturbance and population-division strategies were combined with the MBO algorithm, and a transfer mechanism was constructed to form a binary MBO (BMBO) algorithm. Second, a mutation operator was introduced to strengthen the algorithm's exploration ability, yielding a mutation-operator-based BMBO (BMBOM) algorithm. Then, a fitness function was constructed from the neighborhood degree of dependence in NRS, and the fitness values of the initialized feature subsets were evaluated and ranked. Finally, the BMBOM algorithm searched out the optimal feature subset through iteration, and a meta-heuristic feature selection algorithm was designed. The optimization performance of the BMBOM algorithm was evaluated on benchmark functions, and the classification ability of the proposed feature selection algorithm was evaluated on UCI datasets. Experimental results show that, on five benchmark functions, the best, worst and mean values and the standard deviation of the BMBOM algorithm are clearly better than those of MBO and particle swarm optimization (PSO); on UCI datasets, compared with optimized feature selection algorithms based on rough sets, feature selection algorithms combining rough sets with optimization, feature selection algorithms combining NRS with optimization, and a feature selection algorithm based on binary grey wolf optimization, the proposed algorithm performs well on classification accuracy, number of selected features and fitness value, and can select an optimal feature subset with few features and high classification accuracy.
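The paper's exact transfer mechanism and fitness formula are not reproduced here; the sketch below shows generic versions of the building blocks the abstract names — an S-shaped binarization of a continuous move, a bit-flip mutation operator, and an NRS neighborhood-dependency fitness (features assumed min-max normalized so one radius `delta` is meaningful):

```python
import numpy as np

def nrs_dependency(X, y, subset, delta=0.15):
    """NRS fitness: fraction of samples whose delta-neighborhood, measured
    in the selected-feature subspace, is class-pure (the positive region)."""
    Xs = X[:, subset]
    pure = 0
    for i in range(len(Xs)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        if np.all(y[d <= delta] == y[i]):
            pure += 1
    return pure / len(Xs)

def binarize(step, rng):
    """Transfer mechanism: S-shaped map from a continuous move to 0/1 bits."""
    prob = 1.0 / (1.0 + np.exp(-step))
    return (rng.random(len(step)) < prob).astype(int)

def mutate(mask, rng, rate=0.05):
    """Mutation operator: flip a few bits to strengthen exploration."""
    return np.where(rng.random(len(mask)) < rate, 1 - mask, mask)
```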

3.

For datasets containing a large number of features, feature selection has become a research hotspot: removing irrelevant and redundant features can effectively improve classification accuracy. Based on an analysis of the existing literature, a feature selection algorithm based on attribute relations (NCMIPV) is proposed to obtain an optimized feature subset, and its performance is evaluated on UCI datasets. Experimental results show that, compared with the original feature set, the algorithm effectively reduces the dimensionality of the feature space with a relatively short running time; its classification error rate is comparable to that of other algorithms and in some settings clearly better.


4.
To address the shortcomings of the traditional support vector machine (SVM) in wrapper feature selection — low classification accuracy, redundant selected subsets and poor computational efficiency — a meta-heuristic optimization algorithm is used to optimize the SVM and the feature selection simultaneously. To improve the SVM's classification performance and its ability to select feature subsets, the spotted hyena optimizer (SHO) is first improved with an adaptive differential evolution (DE) algorithm, chaotic initialization and a tournament selection strategy, strengthening its local search and improving its efficiency and solution accuracy. The improved algorithm is then applied to the simultaneous optimization of feature selection and SVM parameter tuning. Finally, feature selection experiments are conducted on UCI datasets, using classification accuracy, number of selected features, fitness value and running time to comprehensively assess the optimization performance of the proposed algorithm. Experimental results show that the improved algorithm's simultaneous optimization mechanism reduces the number of selected features while keeping classification accuracy high, that it is better suited to wrapper feature selection than traditional algorithms, and that it has good practical value.
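A hedged sketch of the simultaneous-optimization idea: each candidate solution encodes the SVM hyperparameters together with a feature mask, and a single fitness value trades classification accuracy against subset size. The solution layout and the weight `alpha` are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def decode(solution, n_features):
    """Assumed solution layout: [log2 C, log2 gamma, mask bits...]."""
    C = 2.0 ** solution[0]
    gamma = 2.0 ** solution[1]
    mask = solution[2:2 + n_features] > 0.5
    return C, gamma, mask

def fitness(solution, X, y, alpha=0.99):
    """Joint fitness: weighted trade-off between CV accuracy and subset size,
    so SVM parameters and features are optimized by the same search."""
    C, gamma, mask = decode(solution, X.shape[1])
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=5).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / X.shape[1])
```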

5.
Feature selection in high-dimensional data is one of the active areas of research in pattern recognition. Most algorithms in this area try to select a subset of features so as to maximize classification accuracy, regardless of the number of selected features, which affects classification time. In this article, a new feature selection method for high-dimensional data is proposed that can control the trade-off between accuracy and classification time. The method is based on a greedy metaheuristic algorithm, the greedy randomized adaptive search procedure (GRASP), and uses an extended version of simulated annealing (SA) for local search. In this version of SA, new parameters are embedded that allow the algorithm to control the trade-off between accuracy and classification time. Experimental results show the superiority of the proposed method over previous versions of GRASP for feature selection. They also show how the trade-off between accuracy and classification time is controlled by the parameters introduced in the proposed method.
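A minimal sketch of how an SA local search can expose an accuracy/classification-time trade-off; the paper embeds its own parameters, so the single weight `w_time` below is an assumed stand-in, with the fraction of selected features used as a proxy for classification time:

```python
import math
import random

def sa_objective(acc, n_selected, n_total, w_time=0.3):
    """Objective trading accuracy against classification time, the latter
    proxied by the fraction of selected features; w_time is the knob."""
    return (1 - w_time) * acc - w_time * (n_selected / n_total)

def sa_accept(curr_obj, new_obj, temperature):
    """Standard Metropolis acceptance used inside the GRASP local search:
    always accept improvements, accept worsenings with decaying probability."""
    if new_obj >= curr_obj:
        return True
    return random.random() < math.exp((new_obj - curr_obj) / temperature)
```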

6.

Preprocessing of data is ubiquitous, and choosing significant attributes is one of the important steps in the prior processing of data. Feature selection is used to create a subset of relevant features for effective classification of data. In the classification of high-dimensional data, the classifier usually depends on the feature subset used for classification. The Relief algorithm is a popular heuristic approach to selecting significant feature subsets: it scores features individually and selects the top-scored features for subset generation. Many extensions of the Relief algorithm have been developed. However, an important defect of Relief-based algorithms has been ignored for years: because of the uncertainty and noise of the instances used for measuring the feature scores, the results vacillate with the choice of instances, which leads to poor classification accuracy. To fix this problem, a novel feature selection algorithm based on a Chebyshev distance outlier detection model is proposed, called noisy feature removal ReliefF (NFR-ReliefF for short). To demonstrate the performance of the NFR-ReliefF algorithm, extensive experiments, including classification tests, have been carried out on nine benchmark high-dimensional datasets, combining the proposed model with standard classifiers including naïve Bayes, C4.5 and KNN. The results show that NFR-ReliefF outperforms the other models on most tested datasets.
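A hedged sketch of the NFR-ReliefF idea under stated assumptions: instances far from their class centroid in Chebyshev (L-infinity) distance are treated as noise and removed before plain Relief scoring. The quantile rule `q` is illustrative, not the paper's detection model:

```python
import numpy as np

def chebyshev_outlier_mask(X, y, q=0.95):
    """Keep instances whose Chebyshev distance to their class centroid is
    within the q-quantile of that class (assumed outlier-detection rule)."""
    keep = np.ones(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        d = np.abs(X[idx] - centroid).max(axis=1)   # L-infinity distance
        keep[idx[d > np.quantile(d, q)]] = False
    return keep

def relief_scores(X, y, n_samples=100, rng=None):
    """Plain Relief: reward features that separate a sample from its
    nearest miss, penalize those separating it from its nearest hit."""
    rng = rng or np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    for i in rng.choice(len(X), size=min(n_samples, len(X)), replace=False):
        d = np.abs(X - X[i]).sum(axis=1)
        d[i] = np.inf                               # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w

# NFR-ReliefF idea: score features only on instances surviving outlier removal.
# keep = chebyshev_outlier_mask(X, y); scores = relief_scores(X[keep], y[keep])
```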


7.
In this research, we propose a novel method to find the relevant feature subset using ant colony optimisation with minimum-redundancy–maximum-relevance (mRMR). The proposed approach considers the significance of each feature while reducing the dimensionality. The performance of the proposed algorithm has been compared with that of existing biologically inspired feature subset selection algorithms. Eight datasets from the UCI machine learning repository were selected for experimentation. The experimental results indicate that the presented algorithm outperforms the other algorithms in terms of classification accuracy and feature reduction.
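A sketch of the mRMR quantity an ant could use as its heuristic desirability when extending a partial subset; it assumes discretized features for the feature-feature mutual information and is not the paper's exact transition rule:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_heuristic(X, y, selected, candidate):
    """mRMR desirability of a candidate feature: class relevance minus mean
    redundancy with the already-selected features. In ACO this would be
    combined with pheromone in the transition rule, e.g. tau**a * eta**b.
    Feature columns are assumed discretized for mutual_info_score."""
    relevance = mutual_info_classif(X[:, [candidate]], y)[0]
    if not selected:
        return relevance
    redundancy = np.mean([mutual_info_score(X[:, candidate], X[:, f])
                          for f in selected])
    return relevance - redundancy
```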

8.
Feature selection is an important data preprocessing technique in machine learning and data mining; it aims to maximize the accuracy of the classification task while minimizing the size of the optimal feature subset. When particle swarm optimization searches for the optimal subset in high-dimensional datasets, it faces the problems of getting trapped in local optima and of high computational cost, which degrades classification accuracy. To address this, a feature selection algorithm for high-dimensional data based on a multifactorial particle swarm optimization algorithm is proposed. An evolutionary multitasking framework is introduced and a two-task generation strategy is proposed: knowledge transfer between tasks strengthens communication within the population and raises population diversity, mitigating the tendency to fall into local optima. An initialization strategy based on sparse representation is designed, producing sparse initial solutions in the early stage of the algorithm and reducing the computational cost as the population converges toward the optimal solution set (see the sketch below). Experimental results on six public high-dimensional medical datasets show that the proposed algorithm effectively completes the classification task and achieves good accuracy.
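A minimal sketch of the sparse-representation initialization mentioned above: each initial particle activates only a small fraction of features, which keeps early wrapper evaluations cheap. The density value is an assumed knob:

```python
import numpy as np

def sparse_init(n_particles, n_features, density=0.05, rng=None):
    """Sparse initialization: every initial particle switches on only a
    small fraction of the features, so early fitness evaluations run on
    tiny subsets while the swarm moves toward the optimal set."""
    rng = rng or np.random.default_rng(0)
    pop = np.zeros((n_particles, n_features), dtype=int)
    for p in pop:
        on = rng.choice(n_features,
                        size=max(1, int(density * n_features)),
                        replace=False)
        p[on] = 1
    return pop
```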

9.
To alleviate the "curse of dimensionality" that high-dimensional data causes in classification, a new attribute selection algorithm combining kernel functions with sparse learning is proposed. Specifically, each attribute is first mapped into a kernel space via a kernel function, and linear attribute selection is performed in this high-dimensional kernel space, thereby achieving nonlinear attribute selection in the original low-dimensional space. Second, the attributes mapped into the kernel space are sparsely reconstructed, yielding a sparse representation of the original dataset. Next, an attribute scoring and selection mechanism is built with the L1 norm to choose the optimal attribute subset. Finally, the data remaining after attribute selection are used in classification experiments. Experimental results on public datasets show that the algorithm performs attribute selection well, improving classification accuracy by about 3% compared with the baseline algorithms.
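The paper's exact kernel construction is not reproduced here; as an assumption, the sketch approximates each attribute's kernel map with random Fourier features (scikit-learn's RBFSampler) and uses an L1-penalized linear model as the sparse scoring mechanism, scoring each original attribute by the coefficient mass of its kernel components:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression

def kernel_sparse_scores(X, y, n_components=20, C=0.5, seed=0):
    """Map every attribute into an (approximate) kernel space, then let an
    L1 penalty pick a sparse set of kernel components; an attribute's score
    is the total absolute coefficient mass of its own components."""
    blocks = []
    for j in range(X.shape[1]):
        rbf = RBFSampler(n_components=n_components, random_state=seed + j)
        blocks.append(rbf.fit_transform(X[:, [j]]))   # per-attribute kernel map
    Z = np.hstack(blocks)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Z, y)
    w = np.abs(clf.coef_).sum(axis=0)
    return w.reshape(X.shape[1], n_components).sum(axis=1)
    # rank attributes by score and keep the top ones
```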

10.
In pattern classification, data often contain irrelevant or redundant features that harm classification accuracy. To obtain the best classification performance with the fewest features, a hybrid feature selection algorithm fusing Shapley values with particle swarm optimization is proposed. The Shapley value from game theory is introduced into the local search of particle swarm optimization: first, the contribution (Shapley value) of each feature in a particle (feature subset) to the classification result is computed; the features with the lowest Shapley values are then removed step by step to optimize the feature subset and update the particle, which also strengthens the algorithm's global search ability. Finally, the improved particle swarm optimization algorithm is applied to feature selection, with the classification performance of a support vector machine classifier and the number of selected features used as the evaluation criteria for feature subsets. Classification experiments are conducted on 17 medical datasets with different numbers of features, drawn from the UCI machine learning repository and from gene expression datasets. Experimental results show that the proposed algorithm can effectively remove more than 55% of the irrelevant or redundant features in a dataset (more than 80% for medium and large datasets), and that the selected feature subsets have good classification ability, improving classification accuracy by 2 to 23 percentage points.
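Exact Shapley values are exponential in the subset size, so a sketch of the usual Monte Carlo permutation estimate is shown instead, under the assumption that `eval_fn` returns classifier accuracy for a feature list; the drop-lowest-Shapley local-search step from the abstract is indicated in the closing comment:

```python
import numpy as np

def shapley_estimates(subset, eval_fn, n_perm=30, rng=None):
    """Monte Carlo Shapley values: average marginal contribution of each
    feature over random orderings of the particle's feature subset.
    eval_fn(features) is assumed to return accuracy for that feature list."""
    rng = rng or np.random.default_rng(0)
    phi = {int(f): 0.0 for f in subset}
    for _ in range(n_perm):
        order = rng.permutation(subset)
        acc_prev, prefix = eval_fn([]), []
        for f in order:
            prefix.append(int(f))
            acc = eval_fn(prefix)
            phi[int(f)] += (acc - acc_prev) / n_perm
            acc_prev = acc
    return phi

# Local-search step from the abstract: repeatedly drop the feature with the
# lowest estimated Shapley value while the subset's accuracy does not degrade.
```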

11.

Feature subset selection (FSS) generally plays an essential role in data mining, particularly in high-dimensional medical data analysis, by supplying early detection with essential features and high accuracy. The latest feature selection models exploit optimization algorithms to extract features with particular properties and reach the highest possible accuracy. Many optimization algorithms, such as the genetic algorithm, rely on parameters that must be tuned for good results, and tuning these values for the selection procedure is a difficult challenge; a method that is independent of controlling parameters is therefore desirable. In this paper, a wrapper-based feature selection approach called binary teaching-learning-based optimization (BTLBO) is considered. BTLBO is a meta-heuristic that involves no algorithm-specific parameters: it requires only standard process parameters, such as the population size and the number of iterations, to extract a set of selected features from the data. This paper introduces a new modified binary teaching-learning-based optimization (NMBTLBO) as a technique to select feature subsets, using the binary classification accuracy of a support vector machine (SVM) as the fitness function of the feature subset selection process. The proposed NMBTLBO contains two modifications: first, a new updating procedure; second, a new method to select the primary teacher in the teacher phase of the binary teaching-learning-based optimization algorithm. NMBTLBO was used to classify rheumatic disease datasets collected from the Baghdad Teaching Hospital Outpatient Rheumatology Clinic during 2016-2018. Compared with the original BTLBO algorithm, the improved NMBTLBO achieves a major gain in accuracy. Validation was carried out by testing the accuracy of four classification methods: K-nearest neighbors, decision trees, support vector machines and K-means. The results show that the classification accuracy of all four methods increased with the proposed feature selection method (NMBTLBO) compared to BTLBO: the SVM classifier achieved 89% accuracy with BTLBO-SVM and 95% with NMBTLBO-SVM, and decision trees reached 94% with BTLBO and 95% with NMBTLBO feature selection. The analysis indicates that the new method (NMBTLBO) enhances classification accuracy.
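For orientation, a sketch of the standard TLBO teacher phase that BTLBO binarizes (through a transfer function, as in the sketch under entry 16); the paper's modified teacher-selection rule and updating procedure are deliberately not invented here:

```python
import numpy as np

def teacher_phase(pop, fitness, rng):
    """Standard TLBO teacher phase: move every learner toward the best
    solution (the teacher), scaled by a teaching factor TF in {1, 2}.
    NMBTLBO changes how the primary teacher is chosen; that rule is not
    reproduced here. For binary FSS the returned continuous step would be
    passed through a transfer function to flip feature bits."""
    teacher = pop[np.argmax(fitness)]
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)          # teaching factor: 1 or 2
    r = rng.random(pop.shape)
    return pop + r * (teacher - tf * mean)
```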


12.
Protein function prediction is an important problem in functional genomics. Typically, protein sequences are represented by feature vectors, and the large number of features in protein datasets increases the complexity of classification models. Feature selection (FS) techniques are used to deal with this high-dimensional feature space. In this paper, we propose a novel feature selection algorithm that combines genetic algorithms (GA) and ant colony optimization (ACO) for faster and better search capability. The hybrid algorithm exploits the advantages of both ACO and GA. The proposed algorithm is easy to implement and, because it uses a simple classifier, its computational complexity is very low. Its performance is compared with that of two prominent population-based algorithms, ACO and genetic algorithms. Experimentation is carried out on two challenging biological datasets, involving the hierarchical functional classification of GPCRs and enzymes. The criteria used for comparison are maximizing predictive accuracy and finding the smallest subset of features. The results of the experiments indicate the superiority of the proposed algorithm.

13.
We are witnessing the era of big data, where computing resources are becoming the main bottleneck in dealing with large datasets. In the case of high-dimensional data, where each view is of high dimensionality, feature selection is necessary to further improve clustering and classification results. In this paper, we propose a new feature selection method, the Incremental Filtering Feature Selection (IF2S) algorithm, and a new clustering algorithm, the Temporal Interval based Fuzzy Minimal Clustering (TIFMC) algorithm, which employ fuzzy rough sets for selecting an optimal subset of features and for effective grouping of large volumes of data, respectively. An extensive experimental comparison of the proposed method and other methods is carried out using four different classifiers. The proposed algorithms yield promising results in feature selection, clustering and classification accuracy in the field of biomedical data mining.

14.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used for classification. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature-feature mutual information and feature-class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected feature subset is evaluated using multiple classifiers on multiple datasets. In terms of both classification accuracy and execution time, the performance of our method has been found significantly better on twelve real-life datasets of varied dimensionality and number of instances when compared with several competing feature selection techniques.
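A hedged sketch of a greedy loop combining the two mutual-information terms; the common relevance-minus-redundancy form is assumed here, since the abstract does not spell out the exact combination rule:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def greedy_mi_selection(X, y, k=10):
    """Greedy selection: at each step add the feature with the highest
    feature-class MI (relevance) minus mean feature-feature MI with the
    already-chosen set (redundancy)."""
    relevance = mutual_info_classif(X, y)
    selected = [int(np.argmax(relevance))]       # start from the most relevant
    while len(selected) < k:
        best_f, best_score = None, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            redundancy = np.mean([
                mutual_info_regression(X[:, [f]], X[:, s])[0] for s in selected
            ])
            score = relevance[f] - redundancy
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
    return selected
```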

15.
This paper deals with the problem of supervised wrapper-based feature subset selection in datasets with a very large number of attributes. Recently the literature has contained numerous references to the use of hybrid selection algorithms: based on a filter ranking, they perform an incremental wrapper selection over that ranking. Though working fine, these methods still have their problems: (1) depending on the complexity of the wrapper search method, the number of wrapper evaluations can still be too large; and (2) they rely on a univariate ranking that does not take into account interaction between the variables already included in the selected subset and the remaining ones. Here we propose a new approach whose main goal is to drastically reduce the number of wrapper evaluations while maintaining good performance (e.g. accuracy and size of the obtained subset). To do this we propose an algorithm that iteratively alternates between filter ranking construction and wrapper feature subset selection (FSS). Thus, the FSS only uses the first block of ranked attributes, and the ranking method uses the currently selected subset in order to build a new ranking in which this knowledge is considered. The algorithm terminates when no new attribute is selected in the last call to the FSS algorithm. The main advantage of this approach is that only a few blocks of variables are analyzed, and so the number of wrapper evaluations decreases drastically. The proposed method is tested over eleven high-dimensional datasets (2,400-46,000 variables) using different classifiers. The results show an impressive reduction in the number of wrapper evaluations without degrading the quality of the obtained subset.
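A skeleton of the alternating loop, with the ranking and wrapper steps left as function parameters since the abstract does not fix them; the termination rule is the one stated above (stop when the wrapper call adds no attribute):

```python
def alternating_fss(X, y, rank_fn, wrapper_fn, block_size=20):
    """Alternate between (re)ranking the remaining attributes conditioned
    on the current subset and running an incremental wrapper over the
    first block only.
    rank_fn(X, y, selected)          -> ranked list of remaining features
    wrapper_fn(X, y, selected, block) -> possibly enlarged selected list"""
    selected = []
    while True:
        ranking = rank_fn(X, y, selected)
        block = ranking[:block_size]             # only one block is examined
        new_selected = wrapper_fn(X, y, selected, block)
        if len(new_selected) == len(selected):   # nothing added: terminate
            return selected
        selected = new_selected
```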

16.

In machine learning, searching for the optimal feature subset of the original dataset is a very challenging and prominent task. Metaheuristic algorithms are used to find the relevant, important features that enhance classification accuracy and save resource time, and most of them have shown excellent performance in solving feature selection problems. A recently developed metaheuristic, the gaining-sharing knowledge-based optimization algorithm (GSK), is considered here for finding the optimal feature subset. GSK was proposed over a continuous search space; therefore, a total of eight S-shaped and V-shaped transfer functions are employed to carry the problem into a binary search space. Additionally, a population reduction scheme is combined with the transfer functions to enhance the performance of the proposed approaches: because the population size is updated in every iteration, the search space is explored efficiently and the worst solutions are deleted from it. The proposed approaches are tested on twenty-one benchmark datasets from the UCI repository. The results are compared with state-of-the-art metaheuristics including the binary differential evolution algorithm, binary particle swarm optimization, the binary bat algorithm, the binary grey wolf optimizer, the binary ant lion optimizer, the binary dragonfly algorithm and the binary salp swarm algorithm. Among the eight transfer functions, the V4 transfer function with population reduction on the binary GSK algorithm outperforms the other optimizers in terms of accuracy, fitness values and the minimal number of features. To investigate the results statistically, two non-parametric statistical tests are conducted, which confirm the superiority of the proposed approach.
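A sketch of the two devices the abstract highlights, written with the usual conventions as assumptions: the V4 transfer function in its common arctan form, and a linear population-size reduction schedule:

```python
import numpy as np

def v4_transfer(x):
    """V-shaped transfer function, commonly written for V4 as
    |(2/pi) * arctan((pi/2) * x)|: maps a continuous step to a
    bit-flip probability."""
    return np.abs((2.0 / np.pi) * np.arctan((np.pi / 2.0) * x))

def binarize_step(position_bits, continuous_step, rng):
    """Flip each feature bit with the probability given by the transfer
    function applied to the continuous GSK step."""
    flip = rng.random(len(position_bits)) < v4_transfer(continuous_step)
    return np.where(flip, 1 - position_bits, position_bits)

def reduced_population_size(n_init, n_min, t, t_max):
    """Linear population reduction: shrink the population each iteration
    so the worst solutions are deleted as the search proceeds."""
    return round(n_init - (n_init - n_min) * t / t_max)
```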


17.
Pattern Recognition Letters, 2001, 22(6-7): 799-811
Feature selection is used to improve the efficiency of learning algorithms by finding an optimal subset of features. However, most feature selection techniques can handle only certain types of data. Additional limitations of existing methods include intensive computational requirements and inability to identify redundant variables. In this paper, we present a novel information-theoretic algorithm for feature selection, which finds an optimal set of attributes by removing both irrelevant and redundant features. The algorithm has polynomial computational complexity and is applicable to datasets of a mixed nature. The method's performance is evaluated on several benchmark datasets using a standard classifier (C4.5).

18.
杨柳  李云 《计算机应用》2021,41(12):3521-3526
K-anonymity algorithms use generalization and suppression to make data satisfy the K-anonymity condition. Because they consider both data privacy and classification performance while hiding features, they can be viewed as a special kind of feature selection, namely K-anonymity feature selection. K-anonymity feature selection methods combine the characteristics of K-anonymity and feature selection and use multiple evaluation criteria to select a K-anonymous feature subset. Filter-style K-anonymity feature selection can hardly search all candidate feature subsets that satisfy the K-anonymity condition and cannot guarantee that the resulting subset has optimal classification performance, whereas wrapper-style feature selection is computationally very expensive. Therefore, combining the characteristics of filter-style feature ranking and wrapper-style feature selection, the forward search strategy of existing methods is improved and a hybrid K-anonymity feature selection algorithm is designed, using classification performance as the evaluation criterion to select the K-anonymous feature subset with the best classification performance. Experiments on several public datasets show that the proposed algorithm can outperform existing algorithms in classification performance while incurring less information loss.
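A minimal sketch of the constraint that makes this a special feature selection problem: a subset of (quasi-identifier) columns satisfies K-anonymity when every combination of values on those columns occurs at least k times:

```python
from collections import Counter

def is_k_anonymous(rows, subset, k):
    """Check the K-anonymity condition for a candidate feature subset:
    each value combination on the selected columns must appear >= k times,
    so no record is distinguishable within a group smaller than k."""
    counts = Counter(tuple(row[j] for j in subset) for row in rows)
    return min(counts.values()) >= k
```

A hybrid search, as described above, would rank features filter-style and then grow the subset forward, keeping only candidates that pass this check and improve wrapper classification performance.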

19.
张翠军  陈贝贝  周冲  尹心歌 《计算机应用》2018,38(11):3156-3160
In classification problems, data contain many redundant features that not only harm classification accuracy but also slow down the classification algorithm. To address this, a feature selection algorithm based on multi-objective bare-bones particle swarm optimization (BPSO) is proposed, to obtain an optimal trade-off between the number of features in the subset and classification accuracy. To improve the efficiency of the multi-objective bare-bones particle swarm optimization algorithm, an external archive is first used to guide the update direction of the particles, and a mutation operator is then used to enlarge the particles' search space. Finally, the multi-objective bare-bones particle swarm algorithm is applied to feature selection, with the classification performance of a K-nearest neighbors (KNN) classifier and the number of features in the subset used as the evaluation criteria; experiments are conducted on 12 datasets from the UCI repository and from gene expression data. Experimental results show that the feature subsets selected by the proposed algorithm have good classification performance: the minimum classification error rate drops by up to 7.4%, and the execution time of the classification algorithm is shortened by up to 12 s, effectively improving both the classification performance and the speed of the algorithm.
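A hedged sketch of the bare-bones PSO update and the external archive the abstract describes; the objectives assumed here are (classification error, number of features), both minimized, and the mutation operator is omitted for brevity:

```python
import numpy as np

def bbpso_step(pbest, leader, rng):
    """Bare-bones PSO update: no velocity term; each new position is drawn
    from a Gaussian centred between the particle's personal best and a
    leader taken from the external archive of non-dominated solutions."""
    mean = (pbest + leader) / 2.0
    std = np.abs(pbest - leader)
    return rng.normal(mean, std + 1e-12)         # epsilon keeps scale positive

def dominates(a, b):
    """Pareto dominance on (error_rate, n_features), both minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only non-dominated (solution, objectives) pairs; the archive
    supplies the leaders that guide particle updates."""
    if any(dominates(obj, candidate[1]) for _, obj in archive):
        return archive                            # candidate is dominated
    archive = [(s, o) for s, o in archive if not dominates(candidate[1], o)]
    archive.append(candidate)
    return archive
```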

20.
The feature space of data often changes dynamically over time while the number of training samples stays fixed; such ultra-high-dimensional feature spaces are usually accompanied by class imbalance in the decision space. To address this, an online streaming feature selection algorithm for high-dimensional class-imbalanced data based on maximum decision boundaries is proposed. Using a neighborhood rough set model and fully accounting for the influence of boundary samples, an adaptive neighborhood relation is defined and a rough dependence formula based on the maximum decision boundary is designed. Three online feature subset evaluation criteria are then proposed to select features with strong discriminating ability between the majority and minority classes. Experiments on 11 high-dimensional class-imbalanced datasets show that, under the same experimental settings and numbers of features, the proposed algorithm has better overall performance.
