Similar Documents
20 similar documents found (search time: 46 ms)
1.
刘兆赓  李占山  王丽  王涛  于海鸿 《软件学报》2020,31(5):1511-1524
Feature selection, as an important data-preprocessing method, not only alleviates the curse of dimensionality but also improves the generalization ability of learning algorithms. A wide variety of methods have been applied to the feature selection problem; among them, feature selection algorithms based on evolutionary computation have attracted increasing attention in recent years and achieved some success. Recent results show that the feature selection algorithm based on forest optimization (FSFOA) offers better classification performance and dimensionality-reduction ability. However, the randomness of its initialization stage and the manually set parameters of its global seeding stage limit its accuracy and its dimension reduction, and the algorithm inherently lacks the capacity to handle high-dimensional data. This paper gives an initialization strategy based on the information gain ratio, automatically generates the global-seeding parameter by borrowing the temperature-control function of simulated annealing, and defines a fitness function that incorporates the dimension reduction rate; a greedy algorithm is further applied to the resulting high-quality forest, yielding the feature selection algorithm EFSFOA (enhanced feature selection using forest optimization algorithm). In addition, an ensemble feature selection framework tailored to EFSFOA is constructed so that it can effectively handle high-dimensional feature selection problems. Comparative experiments verify that EFSFOA clearly improves on FSFOA in both classification accuracy and dimension reduction rate, extending the dimensionality it can handle to 100,000 features. Compared with other efficient evolutionary feature selection methods proposed in recent years, EFSFOA remains highly competitive.
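The initialization strategy above ranks features by information gain ratio. A minimal, self-contained sketch of that criterion (not the paper's implementation; feature values are assumed discrete):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of discrete values, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature_values, labels):
    """Information gain ratio of a discrete feature with respect to the class.

    Gain ratio = information gain / split information; the denominator
    corrects the bias of plain information gain toward many-valued features.
    """
    n = len(labels)
    # Group class labels by feature value to get H(class | feature)
    by_value = {}
    for v, y in zip(feature_values, labels):
        by_value.setdefault(v, []).append(y)
    cond = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    gain = entropy(labels) - cond
    split_info = entropy(feature_values)
    return gain / split_info if split_info > 0 else 0.0
```

A feature that perfectly predicts the class gets ratio 1.0; an uninformative one gets 0.0.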

2.
Feature subset selection is a key problem in the data-mining classification task that helps to obtain more compact and understandable models without degrading (or even improving) their performance. In this work we focus on FSS in high-dimensional datasets, that is, datasets with a very large number of predictive attributes. In this setting, standard sophisticated wrapper algorithms cannot be applied because of their complexity, so computationally lighter filter-wrapper algorithms have recently been proposed. We propose a stochastic algorithm based on the GRASP meta-heuristic, with the main goal of speeding up the feature subset selection process, basically by reducing the number of wrapper evaluations to carry out. GRASP is a multi-start constructive method which builds a solution in its first stage and then runs an improving stage over that solution. Several instances of the proposed GRASP method are experimentally tested and compared with state-of-the-art algorithms over 12 high-dimensional datasets. The statistical analysis of the results shows that our proposal is comparable in accuracy and cardinality of the selected subset to previous algorithms, but requires significantly fewer evaluations.
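The two GRASP stages described above — a greedily randomized construction over a restricted candidate list (RCL), followed by local improvement — can be sketched as below. `score` stands in for the expensive wrapper evaluation and `alpha` for the RCL greediness parameter; both names are illustrative, not taken from the paper:

```python
import random

def grasp_fss(features, score, alpha=0.3, iters=20, seed=0):
    """Minimal GRASP sketch for feature subset selection.

    score(subset) is the wrapper evaluation to maximize; alpha controls
    RCL greediness (0 = pure greedy, 1 = pure random).
    """
    rng = random.Random(seed)
    best, best_val = None, float('-inf')
    for _ in range(iters):
        # --- construction phase: add features drawn from the RCL ---
        subset, remaining = [], list(features)
        while remaining:
            gains = {f: score(subset + [f]) - score(subset) for f in remaining}
            g_max, g_min = max(gains.values()), min(gains.values())
            threshold = g_max - alpha * (g_max - g_min)
            rcl = [f for f, g in gains.items() if g >= threshold]
            f = rng.choice(rcl)
            if gains[f] <= 0:          # no candidate improves the solution
                break
            subset.append(f)
            remaining.remove(f)
        # --- improvement phase: first-improvement removal moves ---
        improved = True
        while improved:
            improved = False
            for f in list(subset):
                trial = [x for x in subset if x != f]
                if score(trial) > score(subset):
                    subset, improved = trial, True
                    break
        val = score(subset)
        if val > best_val:
            best, best_val = subset, val
    return best, best_val
```

With a toy score rewarding features 1 and 3 and penalizing the rest, the procedure recovers the subset {1, 3}.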

3.
Feature selection is an effective technique for handling high-dimensional data. To address the shortcomings of traditional methods, a minimum-redundancy maximum-separability feature selection criterion combining the F-score with mutual information is proposed; the criterion endows the selected features with better classification and prediction ability. Two search strategies, a binary cuckoo search algorithm and quadratic programming, are employed to search for the optimal feature subset, and the accuracy and computational cost of the two strategies are analyzed and compared. Finally, experiments on UCI datasets demonstrate the effectiveness of the proposed approach.
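The F-score half of the criterion measures how well one numeric feature separates two classes. A rough sketch of the standard two-class F-score (the paper's combined criterion with mutual information is not reproduced here):

```python
def f_score(values, labels, pos=1):
    """F-score of one numeric feature for a binary classification problem.

    Larger values mean better separation: the class means differ strongly
    relative to the within-class variances.
    """
    pos_v = [x for x, y in zip(values, labels) if y == pos]
    neg_v = [x for x, y in zip(values, labels) if y != pos]
    mean = sum(values) / len(values)
    mean_p = sum(pos_v) / len(pos_v)
    mean_n = sum(neg_v) / len(neg_v)
    # Sample variances within each class
    var_p = sum((x - mean_p) ** 2 for x in pos_v) / (len(pos_v) - 1)
    var_n = sum((x - mean_n) ** 2 for x in neg_v) / (len(neg_v) - 1)
    return ((mean_p - mean) ** 2 + (mean_n - mean) ** 2) / (var_p + var_n)
```

A well-separated feature scores orders of magnitude higher than a mixed one, which is what makes the criterion usable for ranking.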

4.
Feature selection is one of the key problems in pattern recognition and data mining; it removes redundant and irrelevant features from a dataset to improve learning performance. Based on the maximal-relevance minimal-redundancy criterion, a new semi-supervised feature selection method based on relevance and redundancy analysis (S2R2) is proposed; S2R2 is independent of any particular classification algorithm. The method first analyzes and extends an unsupervised relevance measure, then combines it with information gain to design a semi-supervised measure of feature relevance and redundancy that effectively identifies and removes irrelevant and redundant features. Finally, an incremental greedy search builds the feature subset, avoiding a search over an exponentially large solution space and improving efficiency. A fast filter version of S2R2, FS2R2, is also proposed to better cope with large-scale feature selection problems. Experimental results on several standard datasets demonstrate the effectiveness and superiority of the proposed methods.
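The incremental greedy search under a maximal-relevance minimal-redundancy criterion can be sketched as follows; the `relevance` and `redundancy` scores are placeholders for the paper's semi-supervised measures:

```python
def mrmr_select(relevance, redundancy, k):
    """Greedy max-relevance min-redundancy subset construction.

    relevance[i] scores feature i against the class; redundancy[i][j]
    scores the pairwise redundancy of features i and j. Each step adds
    the feature maximizing relevance minus mean redundancy to the
    already-selected features, so the exponential solution space is
    never enumerated.
    """
    selected = []
    candidates = set(range(len(relevance)))
    while candidates and len(selected) < k:
        def criterion(i):
            if not selected:
                return relevance[i]
            red = sum(redundancy[i][j] for j in selected) / len(selected)
            return relevance[i] - red
        best = max(candidates, key=criterion)
        selected.append(best)
        candidates.remove(best)
    return selected
```

Given a feature that is relevant but redundant with an already-chosen one, the criterion skips it in favor of a less redundant alternative.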

5.
High-dimensional biomedical data contain many irrelevant or weakly relevant features, which reduces the efficiency of disease diagnosis. To address this, a feature selection method for high-dimensional biomedical data based on an improved shuffled frog leaping algorithm is proposed. The method introduces a chaotic memory weight factor and a balanced grouping strategy into the basic shuffled frog leaping algorithm, strengthening population diversity while maintaining the balance between global and local search, reducing the chance of falling into local optima, and further improving the algorithm's ability to explore the feature space. Experimental results show that, compared with feature selection methods based on an improved genetic algorithm and on particle swarm optimization, the improved shuffled frog leaping method achieves better results on high-dimensional biomedical data in both feature subset identification and classification accuracy.

6.
Considering the differences in capacity and cost among vehicle types when planning school bus routes, a mixed-fleet school bus routing problem (SBRP) model is established, and a greedy randomized adaptive search procedure (GRASP) with a parameter selection mechanism is proposed to solve it. In the construction phase, a set of threshold parameters controls the size of the restricted candidate list (RCL), with the threshold chosen by roulette-wheel selection. After an initial solution is constructed, variable neighborhood search (VNS) improves it, and the chosen parameter and the objective value of the solution are recorded. During the iterations, all thresholds initially have equal selection probability; every several iterations, the performance of each threshold is evaluated and its selection probability updated, so that the algorithm obtains better average solutions. Tests on benchmark instances compare the basic GRASP with the designed GRASP and with existing algorithms for the mixed-fleet SBRP; the results show that the designed algorithm is effective.
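The adaptive parameter mechanism above boils down to two small routines: a roulette-wheel draw over the threshold set, and a periodic re-weighting from observed solution quality. A sketch under the assumption of a minimization objective (names are illustrative):

```python
import random

def roulette_pick(params, probs, rng):
    """Roulette-wheel selection of one RCL threshold parameter."""
    r, acc = rng.random(), 0.0
    for p, prob in zip(params, probs):
        acc += prob
        if r <= acc:
            return p
    return params[-1]          # guard against floating-point round-off

def reweight(params, avg_objective):
    """Re-derive selection probabilities from the average objective value
    obtained under each threshold: lower average cost -> higher probability.
    """
    q = [1.0 / avg_objective[p] for p in params]
    total = sum(q)
    return [qi / total for qi in q]
```

A threshold whose solutions average half the cost of another ends up drawn twice as often.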

7.
Retrieving the relevant information from a high-dimensional dataset enhances the classification accuracy of a predictive model. This work devises an improved marine predator algorithm based on opposition learning for stable feature selection, to overcome the problem of high dimensionality. The marine predator algorithm is a population-based meta-heuristic optimization algorithm that works on the ‘survival-of-the-fittest’ principle. The classical marine predator algorithm explores the search space in only one direction, which limits its converging capacity and makes it prone to stagnation at local minima. The proposed opposition-based learning step enhances the exploration capacity of the marine predator algorithm and productively converges the model to the global optimum. The proposed OBL-based marine predator algorithm selects stable, substantial features from six different high-dimensional microarray datasets. The performance of the proposed method is investigated using five widely used classifiers. The results show that the proposed approach outperforms other conventional feature selection techniques in terms of converging capability, classification accuracy, and stable feature selection.
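Opposition-based learning (OBL) itself is a small idea: for each candidate solution, also evaluate its mirror image in the search bounds and keep whichever half of the combined population is fitter. A generic sketch of the common OBL initialization step, not the paper's exact variant:

```python
import random

def opposite(x, lb, ub):
    """Opposition-based learning: the opposite point of x within [lb, ub]."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

def obl_init(pop_size, lb, ub, fitness, seed=0):
    """Initialize a random population plus its opposite population, then
    keep the fitter half (minimization). This widens initial coverage of
    the search space, which is the exploration gain OBL provides."""
    rng = random.Random(seed)
    pop = [[rng.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(pop_size)]
    pop += [opposite(x, lb, ub) for x in pop]
    pop.sort(key=fitness)
    return pop[:pop_size]
```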

8.
This paper focuses on the MI-based image feature selection method [1] and briefly describes the principle and design of the SVM classifier [2]. An MI-based greedy optimal algorithm is proposed that transforms high-dimensional data processing into one-dimensional processing, simplifying computation while improving classification speed and accuracy. Experimental results on 8 categories and thousands of images show better performance than traditional classification algorithms.

9.
A new feature selection method for high-dimensional data is proposed: a genetic algorithm performs the stochastic search over feature subsets, while a separability measure based on boundary points serves as the evaluation criterion and fitness function. Experimental results show that the algorithm effectively finds feature subsets with good separability, achieving dimensionality reduction and improved classification accuracy.

10.
The greedy randomized adaptive search procedure (GRASP) is an iterative two-phase multi-start metaheuristic for combinatorial optimization problems, while path relinking is an intensification procedure applied to the solutions generated by GRASP. In this paper, a hybrid ensemble selection algorithm incorporating GRASP with path relinking (PRelinkGraspEnS) is proposed for credit scoring. The base learner of the proposed method is an extreme learning machine (ELM). Bootstrap aggregation (bagging) is used to produce multiple diversified ELMs, while GRASP with path relinking performs the ensemble selection. The advantages of the ELM are inherited by the new algorithm, including fast learning speed, good generalization performance, and easy implementation. The PRelinkGraspEnS algorithm is able to escape from local optima and realizes a multi-start search. By incorporating path relinking into GRASP and using it as the ensemble selection method, the proposed algorithm becomes a procedure with a memory and a high convergence speed. Three credit datasets are used to verify the efficiency of the proposed PRelinkGraspEnS algorithm. Experimental results demonstrate that PRelinkGraspEnS achieves significantly better generalization performance than the classical directed hill climbing ensemble pruning algorithm, support vector machines, multi-layer perceptrons, and a baseline method, the best single model. The results further illustrate that, by decreasing the average time needed to find a good-quality subensemble for the credit scoring problem, GRASP with path relinking outperforms pure GRASP (i.e., without path relinking).
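Path relinking over ensemble selection operates on binary inclusion masks: it walks from an initiating solution toward a guiding solution one attribute flip at a time and returns the best intermediate solution found. A minimal sketch, with `score` standing in for ensemble validation accuracy:

```python
def path_relink(start, guide, score):
    """Path relinking between two binary solutions (e.g. masks saying
    which base learners belong to the ensemble). Moves from start toward
    guide, greedily choosing the best-scoring flip at each step, and
    returns the best solution observed anywhere on the path."""
    current = list(start)
    best, best_val = list(current), score(current)
    diff = [i for i in range(len(start)) if start[i] != guide[i]]
    while diff:
        # Among the remaining differing positions, flip the one whose
        # resulting solution scores highest.
        i = max(diff, key=lambda j: score(current[:j] + [guide[j]] + current[j + 1:]))
        current[i] = guide[i]
        diff.remove(i)
        val = score(current)
        if val > best_val:
            best, best_val = list(current), val
    return best, best_val
```

Because every intermediate solution is evaluated, the procedure can return a point strictly better than both endpoints — this is the "memory" that intensifies plain GRASP.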

11.
Feature subset selection is a substantial problem in data classification tasks. Its purpose is to find an efficient subset of the original attributes that increases both the efficiency and the accuracy of classification while reducing its cost. High-dimensional datasets, with a very large number of predictive attributes but few instances, require dedicated techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. In the filter phase, the symmetrical uncertainty (SU) criterion is exploited to weight features by how well they discriminate the classes. In the wrapper phase, FICA (fuzzy imperialist competitive algorithm) and IWSSr (incremental wrapper subset selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, notably from the biosciences and medicine, where the number of features is large compared to the number of samples, inducing a severe curse of dimensionality. Comparison with other algorithms confirms that our method attains the highest accuracy rate while also finding an efficiently compact subset.

12.
Feature selection is a challenging task that has been the subject of a large amount of research, especially in relation to classification tasks. It eliminates redundant attributes and enhances classification accuracy by keeping only the relevant attributes. In this paper, we propose a hybrid search method based on both the harmony search algorithm (HSA) and stochastic local search (SLS) for feature selection in data classification. A novel probabilistic selection strategy is used in HSA–SLS to select the appropriate solutions to undergo stochastic local refinement, keeping a good compromise between exploration and exploitation. In addition, HSA–SLS is combined with a support vector machine (SVM) classifier with optimized parameters. The proposed HSA–SLS method tries to find a subset of features that maximizes the classification accuracy rate of the SVM. Experimental results show good performance in favor of our proposed method.

13.
Feature selection chooses from the full feature set a subset whose features are strongly relevant to the class and minimally redundant with one another, which improves both the computational efficiency and the generalization ability (and hence the accuracy) of a classifier. Mutual-information-based criteria for feature relevance and redundancy suffer from the following problems in practice: (1) the probabilities of variables are hard to estimate, which makes the information entropy of features hard to compute; (2) mutual information is biased toward features with many distinct values; (3) measures of redundancy between a candidate feature and the selected subset based on cumulative summation tend to fail when the feature dimensionality is high. To address these problems, a feature evaluation criterion based on maximal normalized fuzzy mutual information is proposed: entropy, conditional entropy, and joint entropy are computed from fuzzy equivalence relations; maximal joint mutual information replaces the cumulative-sum measure; feature importance is evaluated by normalized joint mutual information; and a forward greedy search feature selection algorithm is built on this criterion. Multiple experiments on standard UCI machine learning datasets demonstrate that the algorithm effectively selects feature subsets useful for classification and clearly improves classification accuracy.
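Normalization is what counteracts the bias of raw mutual information toward many-valued features. A sketch of the crisp (count-based) analogue of the criterion — the paper instead derives the entropies from fuzzy equivalence relations, which is not reproduced here:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy of a sequence of discrete values, in bits."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def normalized_mi(x, y):
    """Normalized mutual information I(X;Y) / sqrt(H(X) * H(Y)).

    The normalization bounds the score in [0, 1], so features with many
    distinct values no longer dominate the ranking by raw MI.
    """
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))        # joint entropy H(X, Y)
    mi = hx + hy - hxy
    return mi / math.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0
```

A feature perfectly determined by the class scores 1.0; an independent one scores 0.0.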

14.
Support vector machine (SVM) is a pattern classification method that is valuable in many applications. Kernel parameter setting in the SVM training process, along with feature selection, significantly affects classification accuracy. The objective of this study is to obtain better parameter values while also finding a subset of features that does not degrade the SVM classification accuracy. This study develops a simulated annealing (SA) approach for parameter determination and feature selection in the SVM, termed SA-SVM. To evaluate the proposed SA-SVM approach, several datasets from the UCI machine learning repository are adopted to calculate the classification accuracy rate. The proposed approach was compared with grid search, a conventional method of parameter setting, and with various other methods. Experimental results indicate that the classification accuracy rates of the proposed approach exceed those of grid search and the other approaches. SA-SVM is thus useful for parameter determination and feature selection in the SVM.
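The SA search loop at the heart of such an approach is generic: in SA-SVM the objective would be cross-validated SVM accuracy over a candidate (kernel parameters, feature mask) state. A minimal sketch of the loop itself, with an arbitrary objective and neighborhood function standing in:

```python
import math
import random

def simulated_annealing(objective, init, neighbor, t0=1.0, cooling=0.95,
                        steps=200, seed=0):
    """Generic simulated annealing for maximization.

    objective(state) -> value to maximize; neighbor(state, rng) -> a
    perturbed candidate; geometric cooling schedule t <- cooling * t.
    """
    rng = random.Random(seed)
    current, cur_val = init, objective(init)
    best, best_val = current, cur_val
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        val = objective(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(delta / t), which shrinks as t cools.
        if val >= cur_val or rng.random() < math.exp((val - cur_val) / t):
            current, cur_val = cand, val
            if val > best_val:
                best, best_val = cand, val
        t *= cooling
    return best, best_val
```

On a smooth one-dimensional objective the loop homes in on the maximizer; for SA-SVM the state and neighborhood would instead perturb the kernel parameters and flip feature bits.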

15.

Feature subset selection (FSS) plays an essential role in data mining, particularly in high-dimensional medical data analysis, where it supports early detection by supplying the essential features with high accuracy. Modern feature selection models exploit optimization algorithms to extract features with particular properties and reach the best possible accuracy. Many optimization algorithms, such as the genetic algorithm, rely on parameters that must be tuned for good results, and tuning these values is a difficult challenge for the selection procedure; achieving the best possible feature set therefore calls for a method that is independent of controlling parameters. Binary teaching-learning-based optimization (BTLBO) is a wrapper-based feature selection approach built on a metaheuristic that involves no algorithm-specific parameters: it requires only common process parameters, such as population size and number of iterations, to extract a feature subset from the data. This paper introduces a new modified binary teaching-learning-based optimization (NMBTLBO) for subset selection, using the binary classification accuracy of a support vector machine (SVM) as the fitness function. NMBTLBO contains two changes: a new updating procedure, and a new method of selecting the primary teacher in the teacher phase of the binary algorithm. The proposed technique was used to classify rheumatic disease datasets collected from the Baghdad Teaching Hospital Outpatient Rheumatology Clinic during 2016-2018.
Compared with the original BTLBO algorithm, the improved NMBTLBO achieves a major gain in accuracy. Validation tested the accuracy of four classification methods: k-nearest neighbors, decision trees, support vector machines, and k-means. The classification accuracy of all four methods increased with NMBTLBO feature selection compared to BTLBO. The SVM classifier reached 89% accuracy with BTLBO feature selection and 95% with NMBTLBO; the decision tree reached 94% with BTLBO and 95% with NMBTLBO. The analysis indicates that NMBTLBO enhances classification accuracy.


16.
Relief is a filter feature selection algorithm that greedily maximizes the instance margin of a nearest-neighbor classifier. By combining it with local weighting, a class-dependent Relief algorithm (CDRELIEF) has been proposed that trains a separate feature weight vector for each class. This better reflects feature relevance, but the trained weights are only effective for measuring a feature's relevance to a single class, and the resulting classification accuracy in practice is not high enough. To apply CDRELIEF to classification, this paper changes the weight-update process and assigns each training instance an instance weight; incorporating the instance weights into the update formula excludes data points far from the decision boundary, as well as outliers, from influencing the update, thereby improving classification accuracy. The proposed instance-weighted class-dependent Relief (IWCDRELIEF) is compared with CDRELIEF on multiple UCI binary-class datasets. Experimental results show a clear improvement over CDRELIEF.
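For reference, the basic (class-independent, unweighted) Relief update that CDRELIEF and IWCDRELIEF refine works as follows: each instance pulls feature weights up by the distance to its nearest miss and down by the distance to its nearest hit. A sketch assuming numeric features scaled to [0, 1] and binary classes:

```python
def relief(X, y, passes=1):
    """Basic Relief feature weighting for binary classification.

    For every instance, the weight of each feature increases with the
    per-feature distance to the nearest miss (closest instance of the
    other class) and decreases with the distance to the nearest hit
    (closest instance of the same class).
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(passes):
        for i in range(n):
            hits = [j for j in range(n) if j != i and y[j] == y[i]]
            misses = [j for j in range(n) if y[j] != y[i]]
            h = min(hits, key=lambda j: dist(X[i], X[j]))
            m = min(misses, key=lambda j: dist(X[i], X[j]))
            for k in range(d):
                w[k] += abs(X[i][k] - X[m][k]) - abs(X[i][k] - X[h][k])
    return [wk / n for wk in w]
```

A discriminative feature accumulates a large positive weight, while a constant or noisy one stays near zero — the margin-maximizing behavior the abstract refers to.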

17.

Gene expression data play a significant role in the development of effective cancer diagnosis and prognosis techniques. However, many redundant, noisy, and irrelevant genes (features) are present in the data, which negatively affect the predictive accuracy of diagnosis and increase the computational burden. To overcome these challenges, a new hybrid filter/wrapper gene selection method, called mRMR-BAOAC-SA, is put forward in this article. The suggested method uses Minimum Redundancy Maximum Relevance (mRMR) as a first-stage filter to pick top-ranked genes. Then, Simulated Annealing (SA) and a crossover operator are introduced into Binary Arithmetic Optimization Algorithm (BAOA) to propose a novel hybrid wrapper feature selection method that aims to discover the smallest set of informative genes for classification purposes. BAOAC-SA is an enhanced version of the BAOA in which SA and crossover are used to help the algorithm in escaping local optima and enhancing its global search capabilities. The proposed method was evaluated on 10 well-known microarray datasets, and its results were compared to other current state-of-the-art gene selection methods. The experimental results show that the proposed approach has a better performance compared to the existing methods in terms of classification accuracy and the minimum number of selected genes.


18.
In this article, we focus on solving the power dominating set problem and its connected version. These problems are frequently used for finding optimal placements of phasor measurement units in power systems. We present an improved integer linear program (ILP) for both problems. In addition, a greedy constructive algorithm and a local search are developed. A greedy randomised adaptive search procedure (GRASP) algorithm is created to find near optimal solutions for large scale problem instances. The performance of the GRASP is further enhanced by extending it to the novel fixed set search (FSS) metaheuristic. Our computational results show that the proposed ILP has a significantly lower computational cost than existing ILPs for both versions of the problem. The proposed FSS algorithm manages to find all the optimal solutions that have been acquired using the ILP. In the last group of tests, it is shown that the FSS can significantly outperform the GRASP in both solution quality and computational cost.

19.
Feature selection is an important data-preprocessing technique in machine learning and data mining that aims to maximize classification accuracy while minimizing the number of features in the optimal subset. When particle swarm optimization is used to search for the optimal subset in high-dimensional data, it tends to fall into local optima and incurs high computational cost, degrading classification accuracy. To address this, a feature selection algorithm for high-dimensional data based on a multifactorial particle swarm algorithm is proposed. An evolutionary multitasking framework is introduced, together with a strategy that generates two related tasks; knowledge transfer between the tasks strengthens communication within the population and improves its diversity, mitigating the tendency to fall into local optima. A sparse-representation-based initialization strategy produces sparse initial solutions, reducing the population's computational overhead as it converges toward the optimal set. Experimental results on six public high-dimensional medical datasets show that the proposed algorithm effectively performs the classification task with good accuracy.

20.
孙林  赵婧  徐久成  王欣雅 《计算机应用》2022,42(5):1355-1366
To address the inability of the classical monarch butterfly optimization (MBO) algorithm to handle continuous data well, and the insufficient capacity of rough set models for large-scale, high-dimensional, complex data, a feature selection algorithm based on neighborhood rough sets (NRS) and MBO is proposed. First, local perturbation and population-partition strategies are combined with MBO, and a transfer mechanism is constructed, forming a binary MBO (BMBO) algorithm; second, a mutation operator is introduced to strengthen the algorithm's exploration ability, yielding the mutation-based BMBO (BMBOM) algorithm; then, a fitness function is constructed from the neighborhood degree of NRS, and the fitness values of the initialized feature subsets are evaluated and ranked; finally, BMBOM iteratively searches for the optimal feature subset, producing a metaheuristic feature selection algorithm. The optimization performance of BMBOM is evaluated on benchmark functions, and the classification ability of the proposed feature selection algorithm is evaluated on UCI datasets. Experimental results show that, on five benchmark functions, the best, worst, mean, and standard-deviation values of BMBOM are clearly better than those of MBO and particle swarm optimization (PSO); on the UCI datasets, compared with optimized feature selection algorithms based on rough sets, algorithms combining rough sets with optimization, algorithms combining NRS with optimization, and feature selection based on binary grey wolf optimization, the proposed algorithm performs well on all three indicators of classification accuracy, number of selected features, and fitness value, selecting optimal feature subsets that are small and classify accurately.
