Similar Documents
20 similar documents found (search time: 625 ms)
1.
To address the trade-off between dimensionality reduction and classification accuracy in feature selection, this work analyzes the strengths and weaknesses of traditional feature selection methods and proposes a feature selection method based on a good-point-set genetic algorithm, combining the idea of the good-point-set genetic algorithm with the simple yet effective classification behavior of K-nearest neighbors. The algorithm performs a stochastic search over feature subsets with the good-point-set genetic algorithm, uses the K-nearest-neighbor classification error rate as the evaluation criterion, discards poor feature subsets, and retains the better ones. Comparative experiments show that the algorithm effectively finds feature subsets with high classification accuracy, achieves good dimensionality reduction, and exhibits strong feature subset selection ability.
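For illustration, here is a minimal sketch of the generic GA-wrapper loop this abstract describes, with plain random initialization standing in for the paper's good-point-set construction; the dataset, population size, and mutation rate are arbitrary choices, not the authors':

# GA-wrapper sketch: binary chromosomes mark selected features, fitness is
# KNN cross-validated accuracy. Random initialization replaces the paper's
# good-point-set scheme; all sizes and rates are illustrative guesses.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():                          # empty subset: worst fitness
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.random((20, n_features)) < 0.5        # random initial population
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)[::-1]]         # rank by fitness, keep elites
    children = []
    while len(children) < len(pop) // 2:
        p1, p2 = pop[rng.integers(0, 10, 2)]            # parents from top half
        cut = rng.integers(1, n_features)
        child = np.concatenate([p1[:cut], p2[cut:]])    # one-point crossover
        flip = rng.random(n_features) < 0.02            # bit-flip mutation
        children.append(child ^ flip)
    pop[len(pop) // 2:] = children              # replace the weakest half

best = pop[0]
print("selected", best.sum(), "features, CV accuracy", round(fitness(best), 4))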

2.
Feature evaluation and selection is an important step in machine learning and pattern recognition. To obtain a sparse feature subset, a margin-loss evaluation strategy is combined with L1-norm regularization to derive an effective feature selection method (MLFWL-L1), which is then applied to an RBF-SVM classifier. In experiments on UCI datasets, comparisons with Simba and ReliefF verify that the proposed algorithm is an effective feature selection method.

3.
潘巍  马培军  李东 《电脑学习》2012,(1):8-10,15
Feature evaluation and selection is an important step in machine learning and pattern recognition. To obtain a sparse feature subset, a margin-loss evaluation strategy is combined with L1-norm regularization to derive an effective feature selection method (MLFWL-L1), which is then applied to an RBF-SVM classifier. In experiments on UCI datasets, comparisons with Simba and ReliefF verify that the proposed algorithm is an effective feature selection method.

4.
杨震宇  叶军  季雨瑄  敖家欣  王磊 《计算机应用研究》2022,39(4):1118-1123+1131
Existing feature selection methods optimized by ant colony algorithms mostly use attribute dependency or information-entropy-based attribute significance as the heuristic factor along search paths, but on some decision tables such searches suffer from premature convergence or return feature subsets containing redundant features, significantly degrading selection accuracy. To address this, a measure of attribute significance is proposed based on the proportion of a conditional attribute's occurrences in the discernibility matrix, and a feature subset search method combining the discernibility matrix with ant colony optimization is designed, using discernibility-matrix significance as the path heuristic. Starting from the feature core, the ant colony iteratively adds the feature with the highest selection probability to the core set, and the algorithm terminates once a minimal feature subset is found. Worked examples and experiments on UCI datasets show that, compared with feature selection methods based on attribute dependency and information-entropy significance, the algorithm usually finds a minimal feature subset at lower cost and effectively reduces the computational workload.
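As a rough illustration of the significance measure described here (not the authors' full ACO search), the sketch below builds a discernibility matrix for a made-up decision table and scores each attribute by its share of the matrix entries; singleton entries give the feature core:

# Discernibility-matrix significance sketch: for every pair of objects
# with different decisions, record which conditional attributes tell them
# apart; an attribute's significance is its share of these entries, and
# singleton entries form the feature core. The decision table is made up.
from itertools import combinations

# rows: conditional attribute values a0..a2 plus a decision label d
table = [((1, 0, 2), 0),
         ((1, 1, 2), 1),
         ((0, 1, 0), 1),
         ((1, 0, 0), 0)]
n_attrs = 3

entries = []
for (x, dx), (y, dy) in combinations(table, 2):
    if dx != dy:                                    # only discern across classes
        entries.append({a for a in range(n_attrs) if x[a] != y[a]})

core = {a for e in entries if len(e) == 1 for a in e}   # indispensable attributes
significance = [sum(a in e for e in entries) / len(entries)
                for a in range(n_attrs)]

print("core:", core)
print("significance per attribute:", significance)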

5.
Feature selection based on genetic algorithm and support vector machine   (Cited 3 times: 0 self-citations, 3 by others)
To obtain feature subsets with high classification accuracy, a feature selection method based on a genetic algorithm and support vector machines is proposed. Using prior information provided by the ReliefF algorithm, the method encodes the SVM parameters into the feature selection chromosome and then uses the genetic algorithm to search for the optimal combination of feature subset and SVM parameters. Experimental results show that the feature subsets and SVM parameter combinations selected this way achieve high classification accuracy with a relatively small feature subset.

6.
The F-score feature selection algorithm cannot reveal mutual information between features and therefore cannot reduce dimensionality effectively. To address this, a decorrelation method is applied to improve F-score. Using the German emotional speech database EMO-DB, speech emotion features are extracted and the feature subset with the best classification performance is selected according to the classification accuracy of a support vector machine (SVM). Compared with the original F-score feature selection algorithm, the improved algorithm reduces the candidate feature set substantially, selects an effective feature subset, and achieves satisfactory speech emotion recognition results.
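For reference, here is a minimal sketch of the plain (pre-improvement) F-score criterion on synthetic binary-labeled data; the decorrelation step of the abstract, which would additionally discard features highly correlated with already-chosen ones, is omitted:

# Plain F-score: between-class scatter of a feature's means over its
# within-class variance; higher means more discriminative. Synthetic data.
import numpy as np

def f_score(X, y):
    pos, neg = X[y == 1], X[y == 0]
    m, mp, mn = X.mean(0), pos.mean(0), neg.mean(0)
    between = (mp - m) ** 2 + (mn - m) ** 2
    within = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return between / within

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 5))
X[:, 0] += 2 * y                          # make feature 0 discriminative
print(np.argsort(f_score(X, y))[::-1])    # features ranked by F-score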

7.
To address the curse of dimensionality and overfitting that arise when selecting features from high-dimensional, small-sample data, a hybrid filter/wrapper feature selection method (ReFS-AGA) is proposed. The method combines the ReliefF algorithm with normalized mutual information to evaluate feature relevance and quickly screen important features. It then applies an improved adaptive genetic algorithm with an elitism strategy to balance feature diversity; taking minimizing the number of features and maximizing classification accuracy as joint objectives, a new fitness function is designed with the feature count as a regulating term, so that the optimal feature subset is obtained efficiently during iterative evolution. On gene expression data, the reduced feature subsets were classified with several classification algorithms; the results show that the method effectively eliminates irrelevant features and improves the efficiency of feature selection. Compared with ReliefF and the two-stage feature selection algorithm mRMR-GA, it achieves the smallest feature subset dimensionality while raising average classification accuracy by 11.18 and 4.04 percentage points, respectively.
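A minimal sketch of the ReliefF-style relevance weighting used in the filter stage follows (binary labels, k nearest hits and misses); the normalized-mutual-information screening and the adaptive GA wrapper are omitted, and the neighborhood size and iteration count are illustrative:

# ReliefF-style weights: near-hit differences lower a feature's weight,
# near-miss differences raise it. Synthetic data; not the paper's code.
import numpy as np

def relieff(X, y, n_iter=100, k=5, seed=0):
    rng = np.random.default_rng(seed)
    X = (X - X.min(0)) / (np.ptp(X, 0) + 1e-12)   # scale to [0, 1]
    w = np.zeros(X.shape[1])
    for i in rng.integers(0, len(X), n_iter):
        d = np.abs(X - X[i]).sum(1)               # Manhattan distances
        d[i] = np.inf                             # exclude the instance itself
        same, diff = y == y[i], y != y[i]
        hits = np.argsort(np.where(same, d, np.inf))[:k]
        misses = np.argsort(np.where(diff, d, np.inf))[:k]
        w -= np.abs(X[hits] - X[i]).mean(0)       # near hits: penalize
        w += np.abs(X[misses] - X[i]).mean(0)     # near misses: reward
    return w / n_iter

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 6))
X[:, 2] += y                                      # feature 2 is relevant
print(np.argsort(relieff(X, y))[::-1])            # relevance ranking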

8.
The curse of dimensionality is a common problem in machine learning tasks; feature selection algorithms can pick an optimal feature subset from the original dataset and reduce the feature dimensionality. A hybrid feature selection algorithm is proposed: first, a chi-square test and filter methods select an important feature subset, which is standardized and scaled; then SBS-SVM, a sequential backward selection (SBS) algorithm wrapped around a support vector machine (SVM), selects the optimal feature subset, maximizing classification performance while effectively reducing the number of features. In experiments, the wrapper-stage SBS-SVM was tested against two other algorithms on three classic datasets; the results show that SBS-SVM performs well in both classification performance and generalization ability.
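The two-stage shape of this pipeline maps directly onto standard scikit-learn building blocks; a sketch under illustrative settings (the dataset, the number of features kept at each stage, and the CV folds are arbitrary, not the authors'):

# Filter stage: chi-square scoring; wrapper stage: sequential backward
# selection around a linear-kernel SVM (SBS-SVM).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)               # chi2 needs non-negative input

X_filt = SelectKBest(chi2, k=15).fit_transform(X, y)     # filter stage
sbs = SequentialFeatureSelector(SVC(kernel="linear"),
                                n_features_to_select=8,
                                direction="backward", cv=3)
X_final = sbs.fit_transform(X_filt, y)            # wrapper stage (SBS-SVM)
print(X_final.shape)                              # (569, 8)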

9.
An approximate Markov blanket algorithm for optimal feature selection   (Cited 4 times: 0 self-citations, 4 by others)
Feature selection can effectively improve classification efficiency and accuracy, but traditional methods usually evaluate only individual features and rarely evaluate feature subsets. Building on a study of feature relevance, features are further divided into four kinds: strongly relevant, weakly relevant, irrelevant, and redundant, establishing a link between Markov blanket theory and feature relevance. Combined with the chi-square statistical test, a forward-selection-based approximate Markov blanket feature selection algorithm is proposed to obtain an approximately optimal feature subset. Experimental results show that, compared with the original feature set, the subset selected by the method achieves classification results better than or close to those of the full set with far fewer features. In text classification over high-dimensional feature spaces, comparisons with other feature selection methods such as OCFS, DF, CHI, and IG on the 20 Newsgroup text dataset show that the feature subsets obtained by the proposed method classify better than the alternatives.
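To make the approximate-Markov-blanket idea concrete, here is an FCBF-style sketch: a feature is treated as redundant when some stronger, already-kept feature predicts it better than it predicts the class. Symmetric uncertainty stands in here for the paper's chi-square statistic, and features are assumed discretized:

# Approximate Markov blanket via symmetric uncertainty (FCBF-style).
import numpy as np
from sklearn.metrics import mutual_info_score

def sym_uncertainty(a, b):
    h_a = mutual_info_score(a, a)                 # MI(A, A) = H(A)
    h_b = mutual_info_score(b, b)
    return 2 * mutual_info_score(a, b) / (h_a + h_b + 1e-12)

def approx_markov_blanket(X, y):
    rel = [sym_uncertainty(X[:, j], y) for j in range(X.shape[1])]
    order = np.argsort(rel)[::-1]                 # strongest features first
    kept = []
    for j in order:
        # drop j if an already-kept, stronger feature "blankets" it
        if all(sym_uncertainty(X[:, k], X[:, j]) < rel[j] for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
f0 = y ^ (rng.random(500) < 0.1)                  # relevant feature
X = np.column_stack([f0, f0, rng.integers(0, 2, 500)])
print(approx_markov_blanket(X, y))                # one of the duplicates is dropped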

10.
Exploiting the fact that the feature space contains latently correlated features, spectral clustering is used to explore inter-feature correlation and neighborhood mutual information to seek a maximally relevant feature subset, yielding a feature selection algorithm that combines spectral clustering with neighborhood mutual information. First, neighborhood mutual information removes features irrelevant to the labels. Spectral clustering then groups the features so that features within a cluster are strongly correlated while features in different clusters are strongly dissimilar. Next, based on neighborhood mutual information, a subset of features is selected from each cluster that is strongly relevant to the class labels yet has low redundancy with the other features in its group. Finally, all the selected subsets are combined into the final feature selection result. Experiments with two base classifiers show that the algorithm achieves high classification performance with a small number of well-chosen features.
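A compact sketch of this cluster-then-pick scheme: spectral-cluster the features on an |correlation| affinity, then keep from each cluster the feature most relevant to the label. Plain mutual information approximates the paper's neighborhood mutual information; the data and cluster count are illustrative:

# Spectral clustering over features plus per-cluster representative pick.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
base = rng.normal(size=(400, 3))
X = np.column_stack([base, base + 0.05 * rng.normal(size=(400, 3))])
X[:, 0] += y                                      # one informative group

affinity = np.abs(np.corrcoef(X.T))               # feature-feature similarity
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)

relevance = mutual_info_classif(X, y, random_state=0)
selected = [max(np.flatnonzero(labels == c), key=lambda j: relevance[j])
            for c in range(3)]
print("one representative per feature cluster:", sorted(selected))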

11.
A common strategy in existing feature selection algorithms is to use relevance criteria to select features strongly correlated with the label set. This strategy is not necessarily optimal, because features weakly correlated with the label set may be the key features that determine certain class labels. Based on this assumption, a multi-label feature selection algorithm based on local subspaces is proposed. The algorithm first uses the mutual information between features and the label set to obtain a feature sequence ranked from high to low importance, then partitions the ranked feature space into several local subspaces and sets a sampling ratio in each subspace to select features with little redundancy; finally, the feature subsets from all subspaces are merged into a reasonable final subset. Experiments on 6 datasets with 4 evaluation metrics show that the algorithm outperforms some general-purpose multi-label feature selection algorithms.
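The rank-partition-sample idea reduces to a few lines; in this sketch single-label mutual information stands in for the multi-label relevance of the paper, and the number of subspaces and sampling ratios are illustrative:

# Rank features by MI, split the ranking into local subspaces, and keep a
# fixed fraction of each, so weakly relevant features still get a chance.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 12))
X[:, :3] += y[:, None]                            # three informative features

order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
subspaces = np.array_split(order, 3)              # high / mid / low relevance
ratios = [0.75, 0.5, 0.25]                        # per-subspace sampling ratio
selected = np.concatenate([s[: max(1, int(len(s) * r))]
                           for s, r in zip(subspaces, ratios)])
print("selected features:", sorted(selected.tolist()))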

12.
Common segmentation methods for Uyghur text produce a large number of semantically abstract or even polysemous word features, making it hard for learning algorithms to discover the structure hidden in the high-dimensional data. An unsupervised segmentation method, dme-TS, and an unsupervised feature selection method, UMRMR-UFS, are proposed. dme-TS automatically acquires word bi-grams and contextual information from a large-scale raw corpus and evaluates the binding strength between adjacent words with a combined statistic (dme), a linear fusion of the t-test difference between adjacent words, their mutual information, and the adjacency-pair entropy of bi-word contexts, thereby segmenting text into a feature set of semantically concrete, independent linguistic units. UMRMR-UFS evaluates the importance of each feature with an unsupervised feature selection criterion (UMRMR) that jointly considers maximum relevance and minimum redundancy, and moves the most important features one by one into the feature subset. Experimental results show that dme-TS effectively controls the size of the original feature set and improves the quality of the feature terms themselves, and learning algorithms achieve their best performance when text is represented by the output of UMRMR-UFS.
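A toy sketch of a dme-style combined statistic for adjacent word pairs follows: pointwise mutual information, a bigram t-score, and the entropy of contexts adjacent to the pair, fused linearly. The fusion weights and the miniature corpus are made up; dme-TS itself is estimated from large-scale raw corpora:

# Combined word-pair statistic: PMI + t-score + adjacency-context entropy.
from collections import Counter
from math import log2, sqrt

tokens = ("the red car hit the red car near the red house "
          "a blue car and the red car").split()
uni, bi = Counter(tokens), Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def dme(w1, w2, weights=(0.4, 0.3, 0.3)):
    p12, p1, p2 = bi[(w1, w2)] / n, uni[w1] / n, uni[w2] / n
    mi = log2(p12 / (p1 * p2))                    # pointwise mutual information
    t = (p12 - p1 * p2) / sqrt(p12 / n)           # t-test difference
    lefts, rights = Counter(), Counter()          # adjacency-pair contexts
    for i in range(1, n - 2):
        if tokens[i] == w1 and tokens[i + 1] == w2:
            lefts[tokens[i - 1]] += 1
            rights[tokens[i + 2]] += 1
    ent = entropy(lefts) + entropy(rights) if lefts else 0.0
    return weights[0] * mi + weights[1] * t + weights[2] * ent

print(dme("red", "car") > dme("car", "hit"))      # the collocation scores higher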

13.
A new improved forward floating selection (IFFS) algorithm for selecting a subset of features is presented. Our proposed algorithm improves the state-of-the-art sequential forward floating selection algorithm. The improvement is to add an additional search step called “replacing the weak feature” to check whether removing any feature in the currently selected feature subset and adding a new one at each sequential step can improve the current feature subset. Our method provides the optimal or quasi-optimal (close to optimal) solutions for many selected subsets and requires significantly less computational load than optimal feature selection algorithms. Our experimental results for four different databases demonstrate that our algorithm consistently selects better subsets than other suboptimal feature selection algorithms do, especially when the original number of features of the database is large.
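The "replacing the weak feature" step can be sketched generically; J below is any subset-scoring callable (for instance cross-validated accuracy), and the toy criterion is purely illustrative:

# One 'replace the weak feature' pass: try swapping each selected feature
# for the best outside feature and keep the swap if the criterion improves.
def replace_weak_feature(selected, candidates, J):
    best_subset, best_score = selected, J(selected)
    for weak in selected:
        remainder = [f for f in selected if f != weak]
        for new in candidates:
            if new not in selected:
                trial = remainder + [new]
                score = J(trial)
                if score > best_score:            # strictly better swaps only
                    best_subset, best_score = trial, score
    return best_subset, best_score

# Toy criterion: reward a hidden "good" trio of feature indices.
GOOD = {1, 4, 7}
J = lambda s: len(GOOD & set(s)) - 0.01 * len(s)
print(replace_weak_feature([1, 4, 9], list(range(10)), J))   # swaps 9 -> 7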

14.
Feature selection (FS) is one of the most important fields in pattern recognition; it aims to pick a subset of relevant and informative features from an original feature set. There are two kinds of FS algorithms, depending on the presence of information about dataset class labels: supervised and unsupervised. Supervised approaches utilize the class labels of the dataset in the process of feature selection. Unsupervised algorithms, on the other hand, act in the absence of class labels, which makes their task more difficult. In this paper, we propose unsupervised probabilistic feature selection using ant colony optimization (UPFS). The algorithm looks for the optimal feature subset in an iterative process. We utilize inter-feature information that measures the similarity between features, which leads the algorithm to decreased redundancy in the final set. In each step of the ACO algorithm, to select the next potential feature, we calculate the amount of redundancy between the current feature and all those selected thus far. In addition, we utilize a matrix of ant-related pheromone that records the rate of co-presence of every pair of features in solutions. Afterwards, features are ranked based on a probability function extracted from the matrix, and the top m features are returned as the final solution. We compare the performance of UPFS with 15 well-known supervised and unsupervised feature selection methods using different classifiers (support vector machine, naive Bayes, and k-nearest neighbor) on 10 well-known datasets. The experimental results show the efficiency of the proposed method compared to the previous related methods.

15.
Classification problems are ubiquitous in modern industrial production. Filtering useful information with feature selection before a classification task can effectively improve classification efficiency and accuracy. The minimum-redundancy maximum-relevance (mRMR) algorithm, which maximizes the relevance between features and the class while minimizing the redundancy among features, can select feature subsets effectively, but it suffers from large bias in feature importance in its middle and late stages and cannot directly yield a feature subset. To address this, a feature selection algorithm combining the neighborhood rough set discernibility matrix with the mRMR principle is proposed. Following the maximum-relevance and minimum-redundancy principles, feature importance is defined with neighborhood entropy and neighborhood mutual information to better handle mixed data types. A dynamic discernibility set is defined on the discernibility matrix; its dynamic evolution effectively removes redundant attributes, narrows the search range, and refines the feature subset, and the iteration stopping condition is determined from the discernibility matrix. SVM, J48, KNN, and MLP were chosen as classifiers to evaluate the performance of the algorithm. Experimental results on public datasets show that, compared with existing algorithms, the proposed algorithm raises average classification accuracy by about 2% and effectively shortens feature selection time on datasets with many features. The algorithm inherits the advantages of the discernibility matrix and mRMR and handles feature selection problems effectively.
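For context, the greedy mRMR core that this work builds on can be sketched in a few lines. Plain mutual information on discrete features stands in for the paper's neighborhood entropy, and a fixed subset size replaces its discernibility-matrix stopping rule:

# Greedy mRMR: at each step pick the feature maximizing relevance to the
# class minus mean redundancy with the already-selected set.
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr(X, y, k):
    n = X.shape[1]
    rel = np.array([mutual_info_score(X[:, j], y) for j in range(n)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best, best_val = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            red = np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
            if rel[j] - red > best_val:
                best, best_val = j, rel[j] - red
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)
x0 = y ^ (rng.random(2000) < 0.1)                 # relevant feature
X = np.column_stack([x0, x0,                      # second column duplicates x0
                     y ^ (rng.random(2000) < 0.3),
                     rng.integers(0, 2, 2000)])
print(mrmr(X, y, 2))   # [0, 2]: the duplicate is penalized by redundancy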

16.
Feature selection aims to choose a small feature subset from the original feature space that delivers performance similar to or better than the full set in classification learning tasks. A multi-label feature selection algorithm based on information granulation is proposed; it fuses label weights with the average sample margin and applies an improved neighborhood information entropy to the feature selection process. Experiments on 6 datasets with 5 evaluation metrics demonstrate the classification effectiveness of the algorithm.

17.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method and the population-based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensembles method; then the population-based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset than other existing unsupervised feature selection algorithms.
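A sketch of the search component under simplifying assumptions: a single fixed k-means clustering stands in for the consensus of a clustering ensemble, each sampled feature mask is scored by how well k-means on that subset reproduces it (adjusted Rand index), and a PBIL probability vector drifts toward the best mask. Rates, sizes, and the synthetic data are illustrative:

# PBIL loop for unsupervised feature selection against a reference clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
X[:, 3:] = rng.normal(size=(200, 2))              # last two features: noise
reference = KMeans(2, n_init=10, random_state=0).fit_predict(X)

def score(mask):
    if not mask.any():
        return -1.0
    labels = KMeans(2, n_init=10, random_state=0).fit_predict(X[:, mask])
    return adjusted_rand_score(reference, labels)

p = np.full(X.shape[1], 0.5)                      # PBIL probability vector
for _ in range(20):
    masks = rng.random((10, X.shape[1])) < p      # sample a small population
    best = masks[np.argmax([score(m) for m in masks])]
    p = 0.9 * p + 0.1 * best                      # learn toward the best mask
print("P(select feature):", p.round(2))           # informative features drift up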

18.
Feature subset selection is basically an optimization problem of choosing the most important features from various alternatives in order to facilitate classification or mining problems. Though many algorithms have been developed so far, none is considered the best for all situations, and researchers are still trying to come up with better solutions. In this work, a flexible and user-guided feature subset selection algorithm, named FCTFS (Feature Cluster Taxonomy based Feature Selection), is proposed for selecting a suitable feature subset from a large feature set. The proposed algorithm falls under the genre of clustering-based feature selection techniques, in which features are initially clustered according to their intrinsic characteristics following the filter approach. In the second step, the most suitable feature is selected from each cluster following a wrapper approach, to form the final subset. The two-stage hybrid process lowers the computational cost of subset selection, especially for large feature data sets. One of the main novelties of the proposed approach lies in the process of determining the optimal number of feature clusters. Unlike currently available methods, which mostly employ a trial-and-error approach, the proposed method characterises and quantifies the feature clusters according to the quality of the features inside the clusters and defines a taxonomy of the feature clusters. The selection of individual features from a feature cluster can be done judiciously, considering both relevancy and redundancy according to the user's intention and requirements. The algorithm has been verified by simulation experiments with different benchmark data sets containing from 10 to more than 800 features and compared with other currently used feature selection algorithms. The simulation results prove the superiority of our proposal in terms of model performance, flexibility of use in practical problems, and extendibility to large feature sets. Though the current proposal is verified in the domain of unsupervised classification, it can easily be used in the case of supervised classification.

19.
It is a significant and challenging task to detect informative features for explainable analysis of high-dimensional data, especially data with a very small number of samples. Feature selection, especially unsupervised feature selection, is the right way to meet this challenge. Therefore, two unsupervised spectral feature selection algorithms are proposed in this paper. They group features using an advanced self-tuning spectral clustering algorithm based on local standard deviation, so as to detect the globally optimal feature clusters as far as possible. Two feature ranking techniques, cosine-similarity-based feature ranking and entropy-based feature ranking, are then proposed, so that the representative feature of each cluster can be detected to form the feature subset on which the explainable classification system will be built. The effectiveness of the proposed algorithms is tested on high-dimensional benchmark omics datasets and compared to peer methods, and statistical tests are conducted to determine whether the proposed spectral feature selection algorithms differ significantly from the peer methods. The extensive experiments demonstrate that the proposed unsupervised spectral feature selection algorithms outperform their peers, especially the one based on the cosine-similarity feature ranking technique, while the statistical test results show that the entropy-based spectral feature selection algorithm performs best. The detected features demonstrate strong discriminative capability in downstream classifiers for omics data, so that an AI system built on them would be reliable and explainable. This is especially significant for building transparent and trustworthy medical diagnostic systems from an interpretable-AI perspective.

20.
Attribute selection is an effective way to improve classifier performance. However, existing attribute selection algorithms either assume the data are noise-free or ignore interactions among attributes, so they cannot handle datasets that contain both noise and attribute interactions. An information-entropy-based attribute selection algorithm is proposed that uses conditional entropy to evaluate how well an attribute subset describes the target concept and performs attribute selection with a backward-elimination search strategy. In addition, based on inconsistent instances and the concept of lift from association rules, a definition of noisy data and a method for identifying it are given. Comparative experiments against typical attribute selection algorithms on 10 standard UCI datasets show that the proposed algorithm reduces the number of attributes while raising the average classification accuracy of C4.5 and NaiveBayes by 2.77% and 3.42%, respectively.
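The backward-elimination core is easy to sketch: H(C | S), the conditional entropy of the decision given an attribute subset S, measures how well S describes the target concept, and an attribute is dropped whenever removing it does not increase H(C | S). The noise handling via inconsistent instances and lift is omitted, and the decision table is made up:

# Conditional entropy of the decision given a subset of attributes, plus
# a backward-elimination pass over a toy decision table.
from collections import Counter, defaultdict
from math import log2

def cond_entropy(table, subset):
    """H(decision | attributes in `subset`) for a list of (attrs, d) rows."""
    groups = defaultdict(Counter)
    for attrs, d in table:
        groups[tuple(attrs[a] for a in subset)][d] += 1
    n = len(table)
    h = 0.0
    for counts in groups.values():
        g = sum(counts.values())
        h -= g / n * sum(c / g * log2(c / g) for c in counts.values())
    return h

table = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 0), 1),
         ((1, 1, 0), 1), ((0, 0, 0), 0), ((1, 1, 1), 1)]
subset = [0, 1, 2]
for a in list(subset):                            # backward elimination
    reduced = [b for b in subset if b != a]
    if cond_entropy(table, reduced) <= cond_entropy(table, subset):
        subset.remove(a)                          # removal loses nothing
print("kept attributes:", subset)                 # attribute 0 alone decides d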
