Similar Documents
20 similar documents found
1.
Feature selection is a key preprocessing technique for machine learning and data mining tasks. Traditional greedy feature selection methods consider only the best feature of the current round, so the resulting subset is only locally optimal and an optimal or near-optimal feature set cannot be obtained. Evolutionary search explores the feature space effectively, but each evolutionary algorithm has its own limitations during the search. This paper combines the evolutionary strengths of the genetic algorithm (GA) and particle swarm optimization (PSO), uses an information-entropy measure as the evaluation criterion, and obtains the final feature subset through co-evolution; a bit-rate crossover operator and an information-exchange strategy specific to the feature selection problem are also proposed. Experimental results show that GA-PSO co-evolution outperforms either evolutionary search used alone, both in its ability to search for feature subsets and on concrete classification tasks, and that the combinatorial judgment provided by evolutionary search is superior to greedy feature selection.
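The abstract names an information-entropy criterion shared by the co-evolving GA and PSO populations but does not spell out the operators. Below is a minimal Python sketch of such an entropy-based fitness for a binary feature mask; the `alpha` trade-off and the size penalty are illustrative assumptions, and the paper's bit-rate crossover and information-exchange strategy are not reproduced.

```python
# Sketch (not the paper's exact operators): an information-entropy fitness that a
# GA population and a PSO swarm could share when co-evolving binary feature masks.
import numpy as np

def entropy(labels):
    """Shannon entropy of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def subset_fitness(mask, X_discrete, y, alpha=0.9):
    """Reward class-relevant information, penalize subset size.

    mask: binary vector over features; X_discrete: discretized feature matrix.
    alpha weights relevance against compactness (illustrative choice).
    """
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 0.0
    # Sum of per-feature information gains IG(f; y) = H(y) - H(y | f).
    h_y = entropy(y)
    gain = 0.0
    for f in selected:
        cond = 0.0
        for v in np.unique(X_discrete[:, f]):
            idx = X_discrete[:, f] == v
            cond += idx.mean() * entropy(y[idx])
        gain += h_y - cond
    return alpha * gain / selected.size - (1 - alpha) * selected.size / mask.size
```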

2.
Feature selection has always been a critical step in pattern recognition, in which evolutionary algorithms, such as the genetic algorithm (GA), are most commonly used. However, the individual encoding schemes used in various GAs either bias the solution or require a pre-specified number of features, and hence may lead to less accurate results. In this paper, a tribe competition-based genetic algorithm (TCbGA) is proposed for feature selection in pattern classification. The population of individuals is divided into multiple tribes, and the initialization and evolutionary operations are modified so that the number of selected features in each tribe follows a Gaussian distribution; each tribe thus focuses on exploring a specific part of the solution space. Meanwhile, tribe competition is introduced into the evolution process, which allows the winning tribes, those producing better individuals, to enlarge their sizes, i.e. to have more individuals searching their parts of the solution space. The algorithm therefore avoids both the bias on solutions and the requirement of a pre-specified number of features. We have evaluated our algorithm against several state-of-the-art feature selection approaches on 20 benchmark datasets. Our results suggest that the proposed TCbGA algorithm can identify the optimal feature subset more effectively and produce more accurate pattern classification.
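As a rough illustration of the tribe idea described above, the sketch below initializes several tribes whose selected-feature counts are drawn from tribe-specific Gaussians, so each tribe concentrates on a different subset size. The means, sigma, and tribe sizes are hypothetical defaults, not values from the paper.

```python
# Hypothetical sketch of TCbGA-style tribe initialization: each tribe draws its
# individuals' selected-feature counts from its own Gaussian distribution.
import numpy as np

def init_tribes(n_features, n_tribes=5, tribe_size=20, rng=None):
    rng = np.random.default_rng(rng)
    # Spread the tribe means across the range of possible subset sizes.
    means = np.linspace(n_features * 0.1, n_features * 0.9, n_tribes)
    sigma = n_features * 0.05
    tribes = []
    for mu in means:
        tribe = np.zeros((tribe_size, n_features), dtype=int)
        for i in range(tribe_size):
            k = int(np.clip(round(rng.normal(mu, sigma)), 1, n_features))
            tribe[i, rng.choice(n_features, size=k, replace=False)] = 1
        tribes.append(tribe)
    return tribes  # list of binary population matrices, one per tribe
```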

3.
4.
Feature subset selection is a substantial problem in data classification tasks. Its purpose is to find an efficient subset of the original dataset that increases both efficiency and accuracy and reduces the cost of classification. High-dimensional datasets with a very large number of predictive attributes but only a small number of instances require techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. The symmetrical uncertainty (SU) criterion is exploited in the filter phase to weight features by how well they discriminate the classes. In the wrapper phase, FICA (fuzzy imperialist competitive algorithm) and IWSSr (Incremental Wrapper Subset Selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, especially from the biosciences and medicine, where the number of features is large compared to the number of samples, inducing a severe curse of dimensionality. Comparison between the results of our method and other algorithms confirms that our method achieves the highest accuracy and is also able to find an efficient compact subset.
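The filter phase uses the standard symmetrical uncertainty measure, SU(f, y) = 2 IG(f; y) / (H(f) + H(y)). A minimal Python sketch of that filter step on discretized features follows; the FICA/IWSSr wrapper phase is not reproduced here.

```python
# Sketch of the SU-based filter phase only, on discretized features.
import numpy as np

def _entropy(v):
    _, c = np.unique(v, return_counts=True)
    p = c / c.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(f, y):
    h_f, h_y = _entropy(f), _entropy(y)
    # Conditional entropy H(y | f).
    h_y_given_f = sum((f == v).mean() * _entropy(y[f == v]) for v in np.unique(f))
    ig = h_y - h_y_given_f
    return 0.0 if h_f + h_y == 0 else 2.0 * ig / (h_f + h_y)

def su_weights(X_discrete, y):
    """Weight every feature by its SU with the class label (higher = more relevant)."""
    return np.array([symmetrical_uncertainty(X_discrete[:, j], y)
                     for j in range(X_discrete.shape[1])])
```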

5.
To improve the accuracy of automatic text classification, an improved text data mining algorithm is proposed in which a bee colony optimization selects features for a neural network. The algorithm casts text feature selection as a multi-objective optimization problem, taking the smallest feature dimensionality and the highest classification accuracy as the selection criteria, uses an ant colony algorithm to find the optimal feature subset, and finally builds an automatic text classifier with a neural network; simulation experiments test the algorithm's performance. The simulation results show that the proposed method extracts optimal text features from high-dimensional text, improves the accuracy and recognition efficiency of automatic text classification, and is an effective web text mining algorithm.

6.
To address the low accuracy, slow convergence, and tendency to fall into local optima of the basic seagull optimization algorithm (SOA) on complex optimization problems, an SOA based on a somersault foraging strategy (SFSOA) is proposed. The algorithm first updates the seagull positions with a control parameter A that decreases nonlinearly according to an inverted S-shaped function, improving individual quality and accelerating convergence; it then introduces a learning mechanism based on a somersault foraging strategy to increase the diversity of seagull positions and prevent the algorithm from being trapped in local optima late in the search. Numerical experiments on eight benchmark function optimization problems, compared with the basic SOA, the grey wolf optimizer, and an improved SOA, show that the proposed algorithm achieves higher solution accuracy, faster convergence, and stronger global search ability, and can effectively handle complex function optimization problems. Finally, SFSOA is applied to feature selection problems, with satisfactory results.
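The abstract does not give the exact formulas, so the Python sketch below only illustrates the two ingredients it names: a plausible inverted-S-shaped decay of the control parameter A, and a somersault-style position flip around the current best solution. The logistic steepness, fc = 2, and the somersault factor S = 2 are assumptions (common defaults in related algorithms), not the authors' definitions.

```python
# Illustrative sketch only: inverted-S decay of A and a somersault-style update.
import numpy as np

def control_parameter_A(t, max_iter, fc=2.0, steepness=10.0):
    """Nonlinear decrease of A from fc toward 0 following an inverted S curve."""
    x = t / max_iter                                    # progress in [0, 1]
    s = 1.0 / (1.0 + np.exp(-steepness * (x - 0.5)))    # logistic ramp 0 -> 1
    return fc * (1.0 - s)

def somersault_update(position, best, rng, S=2.0):
    """Flip an individual to the 'other side' of the best solution to add diversity."""
    r1, r2 = rng.random(position.shape), rng.random(position.shape)
    return position + S * (r1 * best - r2 * position)
```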

7.
Feature selection for high-dimensional data based on correlation analysis and a genetic algorithm (cited 4 times: 0 self-citations, 4 by others)
Feature selection is one of the important problems in pattern recognition, data mining, and related fields. For high-dimensional data, feature selection can improve classification accuracy and efficiency on the one hand and identify an information-rich feature subset on the other. To this end, a feature selection method combining the filter and wrapper models is proposed: features are first screened by analyzing the correlation between each feature and the class label, keeping only the features strongly correlated with the class label; the reduced feature subset is then searched stochastically with a genetic algorithm, using the classification error rate of a perceptron model as the evaluation criterion. Experimental results show that the algorithm can effectively find feature subsets with good linear separability, thereby reducing dimensionality and improving classification accuracy.
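A minimal Python sketch of the two stages described above follows: a correlation filter that keeps features whose absolute Pearson correlation with the label exceeds a threshold, and a GA fitness based on the cross-validated error of a perceptron on the selected columns. The threshold value and cross-validation settings are illustrative assumptions.

```python
# Sketch of the filter stage and the wrapper fitness, not the full GA loop.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

def correlation_filter(X, y, threshold=0.1):
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.flatnonzero(corr >= threshold)      # indices of retained features

def perceptron_error(chromosome, X, y):
    """GA fitness: classification error of a perceptron on the selected features."""
    cols = np.flatnonzero(chromosome)
    if cols.size == 0:
        return 1.0
    acc = cross_val_score(Perceptron(max_iter=1000), X[:, cols], y, cv=5).mean()
    return 1.0 - acc
```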

8.
Microarray technologies enable quantitative, simultaneous monitoring of expression levels for thousands of genes under various experimental conditions. This technology has provided a new way of performing biological classification on a genome-wide scale. However, predictive accuracy is affected by the presence of thousands of genes, many of which are unnecessary from the classification point of view. A key issue in microarray data classification is therefore to identify the smallest possible set of genes that can achieve good predictive accuracy. In this study, we propose a novel Markov blanket-embedded genetic algorithm (MBEGA) for the gene selection problem. In particular, the embedded Markov blanket-based memetic operators add or delete features (or genes) from a genetic algorithm (GA) solution so as to quickly improve the solution and fine-tune the search. Empirical results on synthetic and microarray benchmark datasets suggest that MBEGA is effective and efficient at eliminating irrelevant and redundant features, based on both the Markov blanket and predictive power in the classifier model. A detailed comparative study with other methods from each of the filter, wrapper, and standard GA families shows that MBEGA gives the best compromise among all four evaluation criteria, i.e., classification accuracy, number of selected genes, computational cost, and robustness.
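As a rough illustration of the "delete" side of such memetic operators, the sketch below drops a selected gene when another selected gene approximately subsumes it, using the usual approximate Markov blanket test based on symmetrical uncertainty (SU(g', g) >= SU(g, y)), as in FCBF-like methods. This is a hypothetical simplification, not the paper's exact operator.

```python
# Hypothetical sketch of a Markov-blanket-style "delete" memetic operator.
import numpy as np
from sklearn.metrics import mutual_info_score

def su(a, b):
    """Symmetrical uncertainty between two discrete vectors."""
    h_a, h_b = mutual_info_score(a, a), mutual_info_score(b, b)
    return 0.0 if h_a + h_b == 0 else 2.0 * mutual_info_score(a, b) / (h_a + h_b)

def markov_blanket_delete(chromosome, X_discrete, y):
    """Remove selected genes that are redundant given another selected gene."""
    selected = list(np.flatnonzero(chromosome))
    # Consider stronger (more class-relevant) genes as potential subsumers first.
    selected.sort(key=lambda g: su(X_discrete[:, g], y), reverse=True)
    kept = []
    for g in selected:
        relevance = su(X_discrete[:, g], y)
        if not any(su(X_discrete[:, k], X_discrete[:, g]) >= relevance for k in kept):
            kept.append(g)
    pruned = np.zeros_like(chromosome)
    pruned[kept] = 1
    return pruned
```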

9.
Financial distress prediction (FDP) has been a widely and continually studied topic in corporate finance. One of the core problems in FDP is to design effective feature selection algorithms. In contrast to existing approaches, we propose an integrated approach to feature selection for the FDP problem that embeds expert knowledge within the wrapper method. The financial features are categorized into seven classes according to their financial semantics, based on experts' domain knowledge surveyed from the literature. We then apply the wrapper method to search for “good” feature subsets consisting of top candidates from each feature class. For concept verification, we compare several scholars' models as well as leading feature selection methods with the proposed method. Our empirical experiment indicates that the prediction model based on the feature set selected by the proposed method outperforms models based on traditional feature selection methods in terms of prediction accuracy.

10.
A new local search based hybrid genetic algorithm for feature selection (cited 2 times: 0 self-citations, 2 by others)
This paper presents a new hybrid genetic algorithm (HGA) for feature selection (FS), called HGAFS. The vital aspect of this algorithm is the selection of a salient feature subset of reduced size. HGAFS incorporates a new local search operation that is devised and embedded in the HGA to fine-tune the search during the FS process. The local search technique works on the basis of the distinct and informative nature of the input features, computed from their correlation information. The aim is to guide the search process so that newly generated offspring can be adjusted toward the less correlated (distinct) features that capture both the general and the special characteristics of a given dataset. The proposed HGAFS thus reduces the redundancy of information among the selected features. In addition, HGAFS emphasizes selecting a small subset of salient features using a subset-size determination scheme. We have tested HGAFS on 11 real-world classification datasets with dimensions varying from 8 to 7129. The performance of HGAFS has been compared with the results of ten other well-known FS algorithms. HGAFS consistently performs better at selecting subsets of salient features, resulting in better classification accuracy.
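The sketch below gives one plausible reading of such a correlation-guided local search step: swap the selected feature that is most redundant with respect to the current subset for the unselected feature that is least correlated with it, keeping the subset size fixed. This is an illustrative simplification, not the authors' exact procedure.

```python
# Illustrative sketch of a correlation-guided local search step on a binary mask.
import numpy as np

def local_search_step(mask, corr_abs):
    """mask: binary feature vector; corr_abs: |correlation| matrix between features."""
    sel = np.flatnonzero(mask)
    unsel = np.flatnonzero(mask == 0)
    if sel.size < 2 or unsel.size == 0:
        return mask
    # Average redundancy of each selected feature w.r.t. the other selected ones.
    redundancy = np.array([corr_abs[f, sel[sel != f]].mean() for f in sel])
    worst = sel[np.argmax(redundancy)]
    # Distinctness of each unselected feature w.r.t. the current subset.
    distinct = np.array([corr_abs[f, sel].mean() for f in unsel])
    best_new = unsel[np.argmin(distinct)]
    new_mask = mask.copy()
    new_mask[worst], new_mask[best_new] = 0, 1
    return new_mask
```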

11.
Numerous state-of-the-art classification algorithms are designed to handle data with nominal or binary class labels. Unfortunately, less attention has been given to classification problems where the classes are organized as a structured hierarchy, such as protein function prediction (the target area in this work), test scores, gene ontology, web page categorization, and text categorization. The structured hierarchy is usually represented as a tree or a directed acyclic graph (DAG) in which IS-A relationships hold among the class labels. Class labels at the upper levels of the hierarchy are more abstract and easier to predict, whereas class labels at deeper levels are more specific and challenging to predict correctly. It is helpful to consider this class hierarchy when designing a hypothesis that can handle the trade-off between prediction accuracy and prediction specificity. In this paper, a novel ant colony optimization (ACO) based single-path hierarchical classification algorithm is proposed that incorporates the given class hierarchy during its learning phase. The algorithm produces an ordered list of IF-THEN rules and thus offers a comprehensible classification model. A detailed discussion of the architecture and design of the proposed technique is provided, followed by an empirical evaluation on six ion-channel datasets (related to protein function prediction) and two publicly available datasets. The performance of the algorithm is encouraging compared to existing methods according to the statistically significant Student's t-test (considering prediction accuracy and specificity), confirming the promise of the proposed technique for hierarchical classification tasks.

12.
Feature selection is a significant task in data mining and pattern recognition. It aims to select the optimal feature subset with minimum redundancy and maximum discriminating ability. In this paper, a feature selection approach based on a modified binary-coded ant colony optimization algorithm (MBACO) combined with a genetic algorithm (GA) is proposed. The method comprises two models: the visibility density model (VMBACO) and the pheromone density model (PMBACO). In VMBACO, the solution obtained by the GA is used as visibility information; in PMBACO, the solution obtained by the GA is used as initial pheromone information. In the method, each feature is treated as a binary bit, and each bit has two orientations: one for selecting the feature and the other for deselecting it. The proposed method is compared with GA, binary-coded ant colony optimization (BACO), advanced BACO (ABACO), binary-coded particle swarm optimization (BPSO), binary-coded differential evolution (BDE), and a hybrid GA-ACO algorithm on some well-known UCI datasets; furthermore, it is compared with other existing techniques, such as minimum Redundancy Maximum Relevance (mRMR) and the Relief algorithm, for a comprehensive comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
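To illustrate the PMBACO idea of seeding the pheromone from a GA solution, the sketch below initializes a two-column pheromone table (deselect/select per feature) biased toward the best GA chromosome and samples one ant from it. The tau values 0.9/0.1 are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch: GA-seeded initial pheromone for a binary ACO over features.
import numpy as np

def init_pheromone_from_ga(ga_best_mask, high=0.9, low=0.1):
    """Return an (n_features, 2) pheromone table: column 0 = deselect, column 1 = select."""
    tau = np.full((ga_best_mask.size, 2), low)
    tau[np.arange(ga_best_mask.size), ga_best_mask.astype(int)] = high
    return tau

def sample_ant(tau, rng):
    """Build one ant's binary solution by sampling each bit proportionally to pheromone."""
    p_select = tau[:, 1] / tau.sum(axis=1)
    return (rng.random(tau.shape[0]) < p_select).astype(int)
```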

13.
Feature selection is the basic preprocessing task of eliminating irrelevant or redundant features by investigating the complicated interactions among the features in a feature set. Owing to its critical role in classification and its computational cost, it has attracted researchers' attention for the last five decades. However, it still remains a challenge. This paper proposes a binary artificial bee colony (ABC) algorithm for feature selection problems, developed by integrating evolutionary similarity search mechanisms into an existing binary ABC variant. The performance of the proposed algorithm is analyzed by comparing it with some well-known variants of particle swarm optimization (PSO) and ABC, including standard binary PSO, new velocity based binary PSO, quantum-inspired binary PSO, discrete ABC, modification rate based ABC, angle modulated ABC, and genetic algorithms, on 10 benchmark datasets. The results show that the proposed algorithm obtains higher classification performance on both training and test sets and eliminates irrelevant and redundant features more effectively than the other approaches. Note that all the algorithms used in this paper, except standard binary PSO and GA, are employed for the first time in feature selection.

14.
As society becomes increasingly information-driven, data of all kinds is expanding rapidly. HDFS has become a distributed file system for storing massive amounts of data, and replication is a key technology in cloud storage systems. A replica selection strategy based on an ant colony optimization algorithm with initial-pheromone screening (InitPh_ACO) is proposed, which dynamically couples the genetic algorithm (GA) with ant colony optimization (ACO). The ACO algorithm with initial-pheromone screening both overcomes the slow initial search of ACO and makes full use of the fast stochastic global search ability of GA. The strategy is validated with the cloud computing simulation tool CloudSim; the results show that InitPh_ACO outperforms both the ACO-based and the GA-based replica selection strategies in three respects: job execution time, replica read response time, and replica load balancing.

15.
Genetic algorithm-based feature selection in high-resolution NMR spectra (cited 3 times: 1 self-citation, 2 by others)
High-resolution nuclear magnetic resonance (NMR) spectroscopy has provided a new means for detecting and recognizing metabolic changes in biological systems in response to pathophysiological stimuli and to the intake of toxins or nutrients. To identify meaningful patterns in NMR spectra, various statistical pattern recognition methods have been applied to reduce their complexity and uncover implicit metabolic patterns. In this paper, we present a genetic algorithm (GA)-based feature selection method to determine the major metabolite features that play a significant role in discriminating samples among different conditions in high-resolution NMR spectra. In addition, an orthogonal signal filter was employed as a preprocessor of the NMR spectra in order to remove unwanted variation in the data that is unrelated to discriminating the different conditions. The results of k-nearest neighbors and partial least squares discriminant analysis on experimental NMR spectra from human plasma showed the potential advantage of the features obtained from GA-based feature selection combined with an orthogonal signal filter.

16.
Because of the complexity of human language, most text sentiment classification algorithms suffer from an oversized vocabulary caused by redundancy. A deep belief network (DBN) addresses this problem by learning the useful information in the input corpus through its several hidden layers. However, for large applications a DBN is a time-consuming and computationally expensive algorithm. To address this problem, a semi-supervised sentiment classification algorithm, text sentiment classification based on feature selection and deep belief networks (FSDBN), is proposed. First, feature selection methods (document frequency (DF), information gain (IG), chi-square statistics (CHI), and mutual information (MI)) filter out irrelevant features to reduce the complexity of the vocabulary; then the feature selection results are fed into the DBN, making its learning phase more efficient. The algorithm was applied to Chinese and Uyghur; experimental results on a hotel review dataset show that FSDBN improves accuracy by 1.6% over the DBN and halves the training time.
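A minimal Python sketch of the filter stage alone is given below: terms of a document-term matrix are scored by document frequency, information gain (approximated with mutual information), and chi-square, and only terms passing all filters are kept before training a downstream model. The DBN itself is replaced by a stand-in classifier here, and the top-k cutoff and DF threshold are illustrative assumptions.

```python
# Sketch of the FSDBN-style filter stage on a dense, nonnegative document-term matrix.
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.linear_model import LogisticRegression

def filter_vocabulary(X_counts, y, k=2000):
    """Return indices of terms kept by the DF, IG, and CHI filters."""
    df = (X_counts > 0).sum(axis=0)                     # document frequency
    ig = mutual_info_classif(X_counts, y, discrete_features=True)
    chi, _ = chi2(X_counts, y)
    keep = (set(np.argsort(ig)[-k:]) & set(np.argsort(chi)[-k:])
            & set(np.flatnonzero(df >= 2)))
    return np.array(sorted(keep), dtype=int)

def train_reduced_classifier(X_counts, y, k=2000):
    cols = filter_vocabulary(X_counts, y, k)
    clf = LogisticRegression(max_iter=1000).fit(X_counts[:, cols], y)  # DBN stand-in
    return cols, clf
```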

17.
Feature selection is a common dimensionality-reduction technique for high-dimensional big data, but the multiple conflicting objectives involved in evaluating feature subsets are hard to balance. To account for the trade-offs among different subset evaluation criteria and optimize subset performance, a feature selection framework based on multi-objective optimization of subset evaluation is proposed, with a focus on applying multi-objective particle swarm optimization (MOPSO) to feature subset evaluation. The framework designs multi-objective functions based on subset sparsity, classification ability, and information loss, searches for feature weight vectors with a multi-objective optimization algorithm, determines the optimal vector by selecting the knee point of the Pareto set of weight vectors, and finally performs feature selection by ranking features according to the weight vector. Experiments compare feature selection based on multi-objective particle swarm optimization (FS_MOPSO) with four classical methods; results on multiple datasets show that FS_MOPSO achieves higher classification accuracy in a lower-dimensional space while guaranteeing less information loss.
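The sketch below evaluates the three objectives named above for a single feature-weight vector, all to be minimized by the multi-objective optimizer. The concrete stand-ins used here, subset density, kNN cross-validation error, and the share of total variance discarded, are illustrative assumptions rather than the paper's exact measures.

```python
# Illustrative three-objective evaluation for one feature-weight vector.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def objectives(weights, X, y, top_k=20):
    cols = np.argsort(weights)[-top_k:]                  # features with largest weights
    density = cols.size / X.shape[1]                     # minimize: prefer sparse subsets
    error = 1.0 - cross_val_score(KNeighborsClassifier(5), X[:, cols], y, cv=5).mean()
    var = X.var(axis=0)
    info_loss = 1.0 - var[cols].sum() / var.sum()        # share of variance discarded
    return np.array([density, error, info_loss])
```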

18.
Machine learning-based classification techniques provide support for the decision-making process in many areas of health care, including diagnosis, prognosis, and screening. Feature selection (FS) is expected to improve classification performance, particularly in situations characterized by the high-dimensionality problem caused by relatively few training examples compared to a large number of measured features. In this paper, a random forest classifier (RFC) approach is proposed to diagnose lymph diseases. Focusing on feature selection, the first stage of the proposed system constructs diverse feature selection algorithms, such as the genetic algorithm (GA), Principal Component Analysis (PCA), Relief-F, Fisher, Sequential Forward Floating Search (SFFS), and Sequential Backward Floating Search (SBFS), to reduce the dimension of the lymph diseases dataset. Switching from feature selection to model construction, in the second stage the obtained feature subsets are fed into the RFC for efficient classification. It was observed that GA-RFC achieved the highest classification accuracy of 92.2%. The dimension of the input feature space is reduced from eighteen to six features by using the GA.

19.
Feature selection (FS) in data mining is one of the most challenging and most important activities in pattern recognition. In this article, a new hybrid model of the whale optimization algorithm (WOA) and the flower pollination algorithm (FPA), named HWOAFPA, is presented for the FS problem based on the concept of opposition-based learning (OBL). The WOA is run first and, during the run, its population is modified by OBL; the resulting population is then used as the initial population of the FPA to increase accuracy and speed up convergence. To evaluate the performance of the proposed method, experiments were carried out in two steps, on 10 datasets from the UCI data repository and on Email spam detection datasets. The results of the first step showed that the proposed method was more successful than other basic metaheuristic algorithms in terms of the average size of the selected subset and classification accuracy. In addition, the results of the second step showed that the proposed method, run on the Email spam dataset, detected Email spam much more accurately than other similar algorithms.
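A minimal Python sketch of the opposition-based learning step mentioned above follows: for each candidate x in [lb, ub], form its opposite lb + ub - x and keep whichever of the pair scores better before handing the population from the WOA to the FPA. The fitness function and bounds are placeholders, and the WOA/FPA updates themselves are not reproduced.

```python
# Sketch of the OBL refinement applied to a real-valued population.
import numpy as np

def opposition_refine(population, fitness, lb, ub):
    """population: (n, d) array; fitness: callable on a vector, lower is better."""
    opposite = lb + ub - population
    refined = population.copy()
    for i in range(population.shape[0]):
        if fitness(opposite[i]) < fitness(refined[i]):
            refined[i] = opposite[i]
    return refined
```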

20.
To address the high dimensionality and redundancy of features in fraudulent web page detection, an improved feature selection algorithm based on information gain and a genetic algorithm (IFS-BIGGA) is proposed. First, information gain (IG) ranks the features by importance, and a dynamic threshold is set to remove redundant features; second, the chromosome encoding function and the selection operator of the genetic algorithm (GA) are improved, and the area under the receiver operating characteristic curve (AUC) of a random forest (RF) is used as the fitness function to select highly discriminative features; finally, the number of experimental iterations is increased to avoid algorithmic randomness and to produce the optimal minimal feature set (OMFS). Experiments show that, compared with the high-dimensional feature set, the OMFS generated by IFS-BIGGA reduces the AUC under RF by 2% but raises the true positive rate (TPR) by 21% and cuts the feature dimensionality by 92%; the average detection time of several common classifiers is reduced by 83%. In addition, the F1 score of IFS-BIGGA is 4.2% and 3.5% higher than that of the traditional genetic algorithm (TGA) and the imperialist competitive algorithm (ICA), respectively. The experimental results show that IFS-BIGGA achieves efficient feature dimensionality reduction and, in practical web page detection engineering, effectively reduces computational cost and improves detection efficiency.
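The sketch below illustrates the two ingredients named above: an information-gain pre-filter with a dynamic threshold (here taken as the mean gain, an assumption) and a GA fitness based on the cross-validated AUC of a random forest on the selected features. The improved encoding and selection operators of IFS-BIGGA are not reproduced.

```python
# Sketch of the IG pre-filter and the RF-AUC fitness for a binary detection task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

def ig_prefilter(X, y):
    gain = mutual_info_classif(X, y)
    return np.flatnonzero(gain >= gain.mean())           # dynamic threshold: mean IG

def rf_auc_fitness(chromosome, X, y):
    cols = np.flatnonzero(chromosome)
    if cols.size == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=5, scoring="roc_auc").mean()
```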
