Similar Documents (20 results)
1.
In classification problems, a large number of features are typically used to describe the problem's instances. However, not all of these features are useful for classification. Feature selection is usually an important pre-processing step to overcome the "curse of dimensionality": it aims to choose a small number of features that achieve similar or better classification performance than using all features. This paper presents a particle swarm optimization (PSO)-based multi-objective feature selection approach to evolving a set of non-dominated feature subsets with high classification performance. The proposed algorithm uses local search techniques to improve the Pareto front and is compared with a pure multi-objective PSO algorithm, three well-known evolutionary multi-objective algorithms, and a state-of-the-art PSO-based multi-objective feature selection approach. Their performance is examined on 12 benchmark datasets. The experimental results show that in most cases the proposed multi-objective algorithm generates better Pareto fronts than all the other methods.
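The two competing objectives in such work are typically the classification error and the size of the feature subset, and a subset survives only if no other subset is at least as good on both. A minimal sketch of that non-dominance test, assuming those two illustrative objectives (the paper's exact objective functions may differ):

```python
from typing import List, Tuple

# Objective vector for one feature subset:
# (classification_error, fraction_of_features_selected), both minimized.
Objectives = Tuple[float, float]

def dominates(a: Objectives, b: Objectives) -> bool:
    """True if solution a Pareto-dominates solution b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions: List[Objectives]) -> List[Objectives]:
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Example: (error, feature_ratio) pairs for four candidate subsets.
candidates = [(0.10, 0.50), (0.12, 0.20), (0.10, 0.60), (0.15, 0.15)]
print(pareto_front(candidates))  # [(0.10, 0.50), (0.12, 0.20), (0.15, 0.15)]
```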

2.
Zhang Leyuan, Li Yangding, Zhang Jilian, Li Pengqing, Li Jiaye. Multimedia Tools and Applications, 2019, 78(23): 33319-33337.

High-dimensional data are often nonlinear, low-rank, and feature-redundant, which causes great trouble for further analysis. Therefore, a low-rank unsupervised feature selection algorithm based on kernel functions is proposed. First, each feature is projected into a high-dimensional kernel space by a kernel function to overcome linear inseparability in the low-dimensional space. At the same time, a self-expression form is introduced into the deviation term, and the coefficient matrix is constrained to be low-rank and sparse. Finally, a sparse regularization factor on the coefficient vector of the kernel matrix implements the feature selection. In this algorithm, the kernel matrix resolves linear inseparability, the low-rank constraint captures the global information of the data, and the self-representation form determines the importance of each feature. Experiments show that, compared with other algorithms, classification after feature selection with this algorithm achieves good results.


3.
Feature subset selection is a central problem in data classification tasks. Its purpose is to find an efficient subset of the original features that increases both efficiency and accuracy while reducing the cost of classification. High-dimensional datasets with a very large number of predictive attributes but few instances demand techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. The symmetrical uncertainty (SU) criterion is exploited to weight features in the filter phase for discriminating the classes. In the wrapper phase, both FICA (fuzzy imperialist competitive algorithm) and IWSSr (incremental wrapper subset selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, especially from the biosciences and medicine, where the number of features is large relative to the number of samples, inducing a severe curse of dimensionality. Comparison with other algorithms confirms that our method attains the highest accuracy and also finds an efficient, compact subset.
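Symmetrical uncertainty is a standard information-theoretic weight, SU(X, Y) = 2·I(X;Y) / (H(X) + H(Y)), ranging from 0 (independent) to 1 (fully dependent). A small sketch for discrete features, assuming NumPy (the helper names are illustrative, not from the paper):

```python
import numpy as np

def entropy(x: np.ndarray) -> float:
    """Shannon entropy of a discrete variable, in bits."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x: np.ndarray, y: np.ndarray) -> float:
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete variables."""
    joint = np.array([f"{a}|{b}" for a, b in zip(x, y)])
    return entropy(x) + entropy(y) - entropy(joint)

def symmetrical_uncertainty(x: np.ndarray, y: np.ndarray) -> float:
    """SU(X,Y) = 2*I(X;Y) / (H(X)+H(Y)); defined as 0 when both entropies vanish."""
    denom = entropy(x) + entropy(y)
    return 2.0 * mutual_information(x, y) / denom if denom > 0 else 0.0

# Rank features against the class label, as in the filter phase.
X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])  # two discrete features
y = np.array([0, 1, 0, 1])
weights = [symmetrical_uncertainty(X[:, j], y) for j in range(X.shape[1])]
print(weights)  # feature 0 perfectly predicts y -> SU = 1.0; feature 1 -> 0.0
```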

4.
To address the "curse of dimensionality" that arises when classifying high-dimensional data, a new attribute selection algorithm combining kernel functions with sparse learning is proposed. Specifically, each attribute is first mapped into a kernel space via a kernel function, and linear attribute selection is performed in this high-dimensional kernel space, thereby achieving nonlinear attribute selection in the original low-dimensional space. Next, the attributes mapped into the kernel space are sparsely reconstructed, yielding a sparse representation of the original dataset. An L1-norm-based attribute scoring mechanism is then built to select the optimal attribute subset. Finally, the data after attribute selection are used in classification experiments. Experimental results on public datasets show that the algorithm performs attribute selection well, improving classification accuracy by about 3% over the compared algorithms.

5.

For datasets containing a large number of features, feature selection has become a research hotspot: removing irrelevant and redundant features can effectively improve classification accuracy. Building on an analysis of the existing literature, a feature selection algorithm based on attribute relationships (NCMIPV) is proposed to obtain an optimized feature subset, and its performance is evaluated on UCI datasets. Experimental results show that, compared with the original feature set, the algorithm effectively reduces the dimensionality of the feature space with relatively short running time; its classification error rate is comparable to that of other algorithms and in some settings clearly better.


6.
In recent years, various information-theoretic measures have been proposed to remove as many redundant features as possible from high-dimensional datasets. However, most traditional information-theoretic selectors ignore features that have strong discriminatory power as a group but are weak as individuals. To cope with this problem, this paper introduces a cooperative game theory based framework to evaluate the power of each feature. This power serves as a metric of each feature's importance, reflecting the intricate, intrinsic interrelations among features. A general filter feature selection scheme is then presented based on this framework. To verify the effectiveness of our method, experimental comparisons with several other feature selection methods on fifteen UCI datasets are carried out using four typical classifiers. The results show that the proposed algorithm achieves better results than the other methods in most cases.
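A common instantiation of this game-theoretic idea is the Shapley value: each feature is a player, a coalition's payoff is (for example) the accuracy of a classifier trained on that subset, and a feature's power is its average marginal contribution. A hedged Monte-Carlo sketch of that idea (the paper's exact power index and payoff may differ; `payoff` here is a stand-in):

```python
import random
from typing import Callable, FrozenSet, List

def shapley_values(features: List[int],
                   payoff: Callable[[FrozenSet[int]], float],
                   n_permutations: int = 200) -> List[float]:
    """Monte-Carlo Shapley estimate: average marginal contribution of each
    feature over random orderings of the feature set."""
    phi = [0.0] * len(features)
    for _ in range(n_permutations):
        order = features[:]
        random.shuffle(order)
        coalition, prev = frozenset(), payoff(frozenset())
        for f in order:
            coalition = coalition | {f}
            value = payoff(coalition)
            phi[features.index(f)] += value - prev
            prev = value
    return [v / n_permutations for v in phi]

# Toy payoff: features 0 and 1 are only useful together (a group effect
# that individual scoring would miss); feature 2 contributes alone.
def payoff(s: FrozenSet[int]) -> float:
    return (1.0 if {0, 1} <= s else 0.0) + (0.5 if 2 in s else 0.0)

print(shapley_values([0, 1, 2], payoff))  # approximately [0.5, 0.5, 0.5]
```

Note how the toy payoff illustrates the abstract's point: features 0 and 1 score zero individually yet receive substantial power jointly.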

7.

Preprocessing of data is ubiquitous, and choosing significant attributes is one of its important steps. Feature selection creates a subset of relevant features for effective classification: when classifying high-dimensional data, the classifier usually depends on the feature subset used. The Relief algorithm is a popular heuristic approach that estimates each feature individually and selects the top-scored features for subset generation, and many extensions of it have been developed. However, an important defect in Relief-based algorithms has been ignored for years: because of the uncertainty and noise in the instances used to measure feature scores, the resulting scores fluctuate with the sampled instances, which leads to poor classification accuracy. To fix this problem, a novel feature selection algorithm based on a Chebyshev-distance outlier detection model, called noisy feature removal ReliefF (NFR-ReliefF), is proposed. To demonstrate its performance, extensive experiments, including classification tests, were carried out on nine benchmark high-dimensional datasets by combining the proposed model with standard classifiers, including naïve Bayes, C4.5, and KNN. The results show that NFR-ReliefF outperforms the other models on most tested datasets.
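For context, the Relief family scores a feature by contrasting its difference to each sampled instance's nearest hit (same class) and nearest miss (other class); NFR-ReliefF's contribution is to discard outlier instances first so these samples are less noisy. A bare-bones sketch of the underlying Relief scoring only (binary classes, numeric features; not the paper's full NFR model):

```python
import numpy as np

def relief_scores(X: np.ndarray, y: np.ndarray, n_samples: int = 100) -> np.ndarray:
    """Basic Relief: reward features that differ across classes near an
    instance, penalize features that vary within its own class."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # per-feature normalization
    w = np.zeros(d)
    for i in rng.choice(n, size=min(n_samples, n), replace=False):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                            # exclude the instance itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(diff, dist, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / min(n_samples, n)

X = np.array([[0.1, 5.0], [0.2, 9.0], [0.9, 5.1], [0.8, 8.9]])
y = np.array([0, 0, 1, 1])
print(relief_scores(X, y))  # feature 0 tracks the class, feature 1 does not
```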


8.
Feature selection is an important data preprocessing technique in machine learning and data mining, aiming to maximize the accuracy of the classification task while minimizing the number of features in the optimal subset. When particle swarm optimization is used to search for the optimal subset in high-dimensional datasets, it suffers from entrapment in local optima and high computational cost, degrading classification accuracy. To address this, a high-dimensional feature selection algorithm based on a multifactorial particle swarm algorithm is proposed. An evolutionary multitasking framework is introduced, with a strategy for generating a two-task model: knowledge transfer between tasks strengthens communication within the population and increases its diversity, alleviating the tendency to fall into local optima. An initialization strategy based on sparse representation is also designed, producing sparse initial solutions in the early stage to reduce the computational cost as the population converges toward the optimal solution set. Experimental results on six public high-dimensional medical datasets show that the proposed algorithm effectively accomplishes the classification task with good accuracy.

9.
In feature selection problems, strongly relevant features may be misjudged as redundant by the approximate Markov blanket. To avoid this, a new concept called the strong approximate Markov blanket is proposed, and it is theoretically proved that no strongly relevant feature will be misjudged as redundant under it. To reduce computation time, we further propose the modified strong approximate Markov blanket, which still avoids misjudging strongly relevant features better than the approximate Markov blanket does. A new filter-based feature selection method applicable to high-dimensional datasets is then developed: it first groups features to remove redundant ones, and then uses sequential forward selection to remove irrelevant ones. Numerical results on four benchmark and seven real datasets suggest that it is a competitive feature selection method with high classification accuracy, a moderate number of selected features, and above-average robustness.
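In this line of work, feature i is commonly judged redundant given feature j when j correlates with the class at least as strongly as i does, and the two features correlate with each other more strongly than i does with the class (correlations usually measured by symmetrical uncertainty). A hedged sketch of that baseline approximate-Markov-blanket test; the paper's strong variant tightens this test with extra conditions not shown here:

```python
def is_redundant(su_class: list, su_pair, i: int, j: int) -> bool:
    """Approximate-Markov-blanket test: feature j 'covers' feature i when
    SU(j, class) >= SU(i, class) and SU(i, j) >= SU(i, class)."""
    return su_class[j] >= su_class[i] and su_pair(i, j) >= su_class[i]

def filter_redundant(su_class: list, su_pair) -> list:
    """Greedy pass: keep a feature only if no stronger kept feature covers it."""
    order = sorted(range(len(su_class)), key=lambda k: -su_class[k])
    kept = []
    for i in order:
        if not any(is_redundant(su_class, su_pair, i, j) for j in kept):
            kept.append(i)
    return kept

# Toy numbers: feature 1 is a near-copy of feature 0 and gets dropped.
su_class = [0.9, 0.85, 0.4]
pairs = {(0, 1): 0.95, (0, 2): 0.1, (1, 2): 0.1}
su_pair = lambda i, j: pairs.get((min(i, j), max(i, j)), 0.0)
print(filter_redundant(su_class, su_pair))  # [0, 2]
```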

10.
张鑫, 李占山. 《软件学报》, 2020, 31(12): 3733-3752.
Feature selection is an NP-hard problem that aims to remove irrelevant and redundant features from a dataset, reducing model training time and improving model accuracy; it is thus an important data preprocessing step in machine learning, data mining, and pattern recognition. A new feature selection algorithm based on natural evolution strategies, MCC-NES, is proposed. First, the algorithm adopts a natural evolution strategy that models the search distribution with a diagonal covariance matrix and adaptively adjusts its parameters via gradient information. Second, a feature encoding scheme is introduced at the initialization stage so that the algorithm can handle feature selection effectively. The fitness function then combines classification accuracy with dimensionality reduction. To cope with high-dimensional data, cooperative coevolution is introduced: the original problem is decomposed into relatively small subproblems that are solved independently, and the solutions to all subproblems are then linked to optimize the solution of the original problem. The concept of distributed population evolution is further introduced, with multiple populations evolving competitively to increase the algorithm's exploration ability, and a population restart strategy is designed to prevent the populations from getting trapped in local optima. Finally, the proposed algorithm is compared with several traditional feature selection algorithms on UCI public datasets; the results show that it handles feature selection effectively and is competitive with classical algorithms, performing especially well on high-dimensional data.
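In a diagonal (separable) NES, the search distribution is a Gaussian with mean mu and per-dimension standard deviation sigma, both updated from the rank-weighted gradient of sampled fitness. A compact sketch of one such update step following the generic separable-NES scheme; MCC-NES adds feature encoding, cooperative coevolution, and restarts on top, and the toy fitness below is purely illustrative:

```python
import numpy as np

def snes_step(mu, sigma, fitness, rng, pop_size=20, lr_mu=1.0, lr_sigma=0.1):
    """One separable-NES update: sample around (mu, sigma), rank the
    samples, and follow the rank-weighted natural gradient."""
    s = rng.standard_normal((pop_size, mu.size))   # standardized perturbations
    f = np.array([fitness(mu + sigma * si) for si in s])
    ranks = np.argsort(np.argsort(-f))             # rank 0 = best sample
    u = np.maximum(0.0, np.log(pop_size / 2 + 1) - np.log(ranks + 1))
    w = u / u.sum() - 1.0 / pop_size               # zero-sum utilities
    mu = mu + lr_mu * sigma * (w @ s)              # natural-gradient mean step
    sigma = sigma * np.exp(0.5 * lr_sigma * (w @ (s ** 2 - 1)))
    return mu, sigma

# Toy fitness: a real vector encodes a feature mask by thresholding at 0;
# reward agreement with a known useful mask, penalize subset size.
useful = np.array([1, 1, 0, 0, 0])
def fitness(x):
    mask = (x > 0).astype(int)
    return float((mask == useful).sum()) - 0.1 * mask.sum()

rng = np.random.default_rng(0)
mu, sigma = np.zeros(5), np.ones(5)
for _ in range(150):
    mu, sigma = snes_step(mu, sigma, fitness, rng)
print((mu > 0).astype(int))  # moves toward the useful mask [1 1 0 0 0]
```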

11.
Feature selection is a common dimensionality reduction technique for high-dimensional big data, but the multiple conflicting objectives involved in evaluating feature subsets are hard to balance. To jointly consider the trade-offs among several subset evaluation criteria and optimize subset performance, a feature selection framework based on multi-objective optimization of subset evaluation is proposed, with a focus on applying multi-objective particle swarm optimization (MOPSO) to feature subset evaluation. The framework designs multi-objective optimization functions based on subset sparsity, classification ability, and information loss; optimizes the feature weight vector with a multi-objective algorithm; determines the optimal vector by selecting the knee point of the Pareto set of weight vectors; and finally performs feature selection by ranking features according to the weight vector. Experiments compare this MOPSO-based feature selection (FS_MOPSO) with four classical methods; results on multiple datasets show that FS_MOPSO attains higher classification accuracy in lower-dimensional spaces while guaranteeing less information loss.
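Knee-point selection picks, from the Pareto set, the solution farthest from the line through the two extreme points: the point where improving one objective starts to cost disproportionately in the other. A small sketch for a two-objective front (illustrative; the paper's framework uses three objectives):

```python
import numpy as np

def knee_point(front: np.ndarray) -> int:
    """Index of the Pareto-front point with the largest perpendicular
    distance to the line joining the two extreme solutions."""
    front = np.asarray(front, dtype=float)
    a = front[np.argmin(front[:, 0])]          # one extreme of the front
    b = front[np.argmax(front[:, 0])]          # the other extreme
    line = (b - a) / np.linalg.norm(b - a)
    rel = front - a
    # 2-D cross product gives the distance of each point to the line
    dist = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0])
    return int(np.argmax(dist))

# Pareto front of (error, feature_ratio) pairs; the middle point is the knee.
front = np.array([[0.05, 0.90], [0.10, 0.30], [0.40, 0.10]])
print(knee_point(front))  # 1
```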

12.

In hyperspectral image (HSI) analysis, high-dimensional data may contain noisy, irrelevant, and redundant information, and feature selection is one useful way to mitigate its negative effects. Unsupervised feature selection is a data preprocessing technique for dimensionality reduction that selects a subset of informative features without using any label information. Unlike linear models, the autoencoder selects informative features nonlinearly. The adjacency matrix of an HSI can be constructed to capture the underlying relationships between data points, and a latent representation of the original data can be obtained via matrix factorization; a new feature representation can also be learnt from the autoencoder. For the same data matrix, different feature representations should consistently share the underlying information. Motivated by this, we propose a latent representation learning based autoencoder feature selection (LRLAFS) model, where latent representation learning steers feature selection for the autoencoder. To solve the proposed model, we develop an alternating optimization algorithm. Experimental results on three HSI datasets confirm the effectiveness of the model.


13.

In this paper, we propose a new unsupervised feature selection method based on kernel Fisher discriminant analysis and regression learning. Existing feature selection methods are based on either manifold learning or discriminative techniques, each of which has shortcomings. Although some studies show the advantage of two-step methods that benefit from both, a joint formulation is more efficient. To this end, we construct a global discriminant objective term of a clustering framework based on the kernel method, and add a regression-learning term to the objective function that forces the optimization to select a low-dimensional representation of the original dataset. We use the L2,1-norm of the features to impose a sparse structure, which yields more discriminative features. We propose an algorithm to solve the resulting optimization problem and discuss its convergence, parameter sensitivity, computational complexity, and clustering and classification accuracy. To demonstrate its effectiveness, we perform experiments on several available datasets and compare against state-of-the-art algorithms. The results show that our method outperforms the existing state of the art in many cases, though the improved performance comes at the cost of increased time complexity.
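The L2,1-norm of a projection matrix W sums the Euclidean norms of its rows, so minimizing it drives whole rows to zero, and the surviving row norms rank the features. A short sketch of the norm and the induced feature ranking, assuming NumPy (the example matrix is illustrative):

```python
import numpy as np

def l21_norm(W: np.ndarray) -> float:
    """||W||_{2,1} = sum of the L2 norms of the rows of W."""
    return float(np.linalg.norm(W, axis=1).sum())

def rank_features(W: np.ndarray) -> np.ndarray:
    """Features whose rows of W have larger L2 norm are more important."""
    return np.argsort(-np.linalg.norm(W, axis=1))

# Each row of W corresponds to one original feature.
W = np.array([[0.9, 0.2],    # feature 0: large row norm -> selected
              [0.0, 0.0],    # feature 1: zeroed out by the L2,1 penalty
              [0.3, 0.1]])   # feature 2: moderate
print(round(l21_norm(W), 3))  # 1.238
print(rank_features(W))       # [0 2 1]
```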


14.

In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Given the difficulties of dimension reduction in such spaces, FSLA's multi-objective functionality is to determine, efficiently, a feature subset that trades off the learning algorithm's accuracy against its efficiency. First, using an existing weighting function, the feature list is sorted and subsets of different sizes are drawn from it. Then a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm, estimating its fitness from the algorithm's accuracy and from the subset size, which determines the algorithm's efficiency. Finally, FSLA returns the fittest subset as the best choice. We tested FSLA in the framework of text classification; the results confirm its promising performance in attaining the identified goal.

15.

Feature subset selection (FSS) plays an essential role in data mining, particularly in high-dimensional medical data analysis, supplying early detection with essential features and high accuracy. Modern feature selection models exploit optimization algorithms to extract features with particular properties and reach the highest possible accuracy. Many optimization algorithms, such as the genetic algorithm, depend on parameters that must be adjusted for good results, and tuning these values for the feature selection procedure is a difficult challenge; achieving the best possible feature set calls for a method that is independent of controlling parameters. Binary teaching-learning-based optimization (BTLBO) is such a meta-heuristic: it involves no algorithm-specific parameters and requires only standard process parameters such as the population size and the number of iterations. This paper introduces a new modified binary teaching-learning-based optimization (NMBTLBO) as a wrapper-based technique for selecting feature subsets, using the binary-classification accuracy of a support vector machine (SVM) as the fitness function. NMBTLBO contains two changes: a new updating procedure, and a new method for selecting the primary teacher in the teacher phase. The technique was used to classify rheumatic disease datasets collected from the Baghdad Teaching Hospital Outpatient Rheumatology Clinic during 2016-2018. Compared with the original BTLBO algorithm, NMBTLBO achieved a marked gain in accuracy. Validation was carried out by testing the accuracy of four classification methods: K-nearest neighbors, decision trees, support vector machines, and K-means. The classification accuracy of all four methods increased with NMBTLBO feature selection compared to BTLBO: the SVM classifier reached 89% accuracy with BTLBO and 95% with NMBTLBO, and decision trees reached 94% with BTLBO and 95% with NMBTLBO. The analysis indicates that NMBTLBO enhances classification accuracy.


16.
张磊, 李柳, 杨海鹏, 孙翔, 程凡, 孙晓燕, 苏喻. 《控制与决策》, 2023, 38(10): 2832-2840.
Mining frequent high-utility itemsets is an important data mining task, where mined itemsets are measured by two indicators: support and utility. Among the methods for this problem, evolutionary multi-objective approaches can provide a set of high-quality solutions to satisfy different users' needs, avoiding the difficulty of setting support and utility thresholds in traditional algorithms. However, existing multi-objective algorithms mostly use 0-1 encoding, so the dimensionality of the decision space is proportional to the number of items in the dataset, leading to the curse of dimensionality on high-dimensional datasets. In view of this, an itemset reduction strategy is designed that shrinks the search space by progressively reducing unimportant items during evolution. Based on this strategy, a multi-objective optimization algorithm for high-dimensional frequent high-utility itemset mining based on itemset reduction (IR-MOEA) is proposed. For individuals that may be over-reduced or insufficiently reduced, a learning-based population repair strategy is proposed to adjust the direction of evolution. An initialization strategy based on itemset fitness is also proposed so that the algorithm generates sparse solutions in the early stage that benefit later evolution. Experimental results on multiple datasets show that the proposed algorithm outperforms existing multi-objective optimization algorithms, especially on high-dimensional datasets.

17.
Cervical cancer is one of the most frequent cancers, but it can be cured effectively if diagnosed at an early stage. This is a novel effort toward effective characterization of cervix lesions from contrast-enhanced CT scan images, providing a reliable and objective discrimination between benign and malignant lesions. The performance of such classification models mostly depends on the features used to represent samples in the training dataset, and selecting the optimal feature subset is NP-hard, where randomized algorithms do better. In this paper, the Grey Wolf Optimizer (GWO), a population-based meta-heuristic inspired by the leadership hierarchy and hunting mechanism of grey wolves, is utilized for feature selection. Traditional GWO applies to continuous single-objective optimization problems, but feature selection is inherently multi-objective; this paper therefore proposes two multi-objective binary GWO algorithms: a scalarized approach (MOGWO) and a non-dominated sorting based GWO (NSGWO). These are used for wrapper-based feature selection, selecting an optimal textural feature subset for improved classification of cervix lesions. For the experiments, contrast-enhanced CT scan (CECT) images of 62 patients were used, where all lesions had been recommended for surgical biopsy by a specialist. Gray-level co-occurrence matrix based texture features are extracted from a two-level wavelet decomposition of the cervix regions extracted from the CECT images. The proposed approaches are compared with widely used meta-heuristics for multi-objective optimization, the genetic algorithm (GA) and the firefly algorithm (FA). With better diversification and intensification, GWO obtains Pareto solutions that dominate those of GA and FA on the utilized cervix lesion cases. Cervix lesions are classified as benign or malignant with up to 91% accuracy using only five features selected by NSGWO. A two-tailed t-test on the mean F-score at significance level 0.05 confirms that NSGWO performs significantly better than the other methods on the cervix lesion dataset at hand. Further experiments on high-dimensional microarray gene expression datasets collected online demonstrate that the proposed method selects relevant genes for high-dimensional, multi-category cancer diagnosis significantly better than the other methods, with an average F-score improvement of 12.82%.
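GWO moves each candidate toward a combination of the three best solutions (alpha, beta, delta); a binary variant for feature selection squashes the updated position through a transfer function to get per-feature selection probabilities. A hedged sketch of that core update as a generic single-objective binary GWO, not the paper's MOGWO/NSGWO operators, with an illustrative toy fitness:

```python
import numpy as np

def bgwo_step(wolves, fitness, a, rng):
    """One binary GWO update: move toward the three leaders, then
    re-binarize each dimension through a steepened sigmoid transfer."""
    scores = np.array([fitness(w) for w in wolves])
    leaders = wolves[np.argsort(-scores)[:3]]         # alpha, beta, delta
    n, d = wolves.shape
    cont = np.zeros((n, d))
    for leader in leaders:                            # continuous GWO update
        A = 2 * a * rng.random((n, d)) - a
        C = 2 * rng.random((n, d))
        cont += leader - A * np.abs(C * leader - wolves)
    cont /= 3.0
    prob = 1.0 / (1.0 + np.exp(-10.0 * (cont - 0.5))) # transfer function
    return (rng.random((n, d)) < prob).astype(int)

# Toy run: recover a known useful feature mask while penalizing subset size.
rng = np.random.default_rng(0)
useful = np.array([1, 0, 1, 0, 0, 0])
fitness = lambda m: float((m == useful).sum()) - 0.1 * m.sum()
wolves = rng.integers(0, 2, size=(10, 6))
for t in range(30):
    wolves = bgwo_step(wolves, fitness, a=2.0 * (1 - t / 30), rng=rng)
print(max(wolves, key=fitness))  # tends toward the useful mask [1 0 1 0 0 0]
```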

18.
In high-dimensional data, many features are irrelevant to one another or redundant, which poses a great challenge to traditional learning algorithms; feature selection emerged to address this. Meanwhile, in many practical problems the data come in multiple views and labels are hard to obtain, making multi-view learning and semi-supervised learning hot topics in machine learning. This paper studies how to select a maximally relevant, minimally redundant feature subset from partially labeled multi-view data, and proposes a multi-view semi-supervised feature selection method. To remove redundant and irrelevant features, it explores the complementary information contained in the multi-view data and the redundancy relations among features within each view, and uses the information in a small amount of labeled data together with the unlabeled data to perform feature selection. Experimental results verify that the algorithm achieves good feature selection and clustering performance.

19.
Feature selection removes irrelevant and redundant features to find a compact representation of the original features with good generalization ability. However, noise and outliers in the data inflate the rank of the learned coefficient matrix, preventing an algorithm from capturing the true low-rank structure of high-dimensional data. Therefore, the Schatten-p norm is used to approximate the rank-minimization problem, and feature self-representation is used to reconstruct the coefficient matrix of the unsupervised feature selection problem, yielding an unsupervised feature selection algorithm based on the Schatten-p norm and feature self-representation (SPSR), which is solved within an augmented Lagrange multiplier and alternating direction method of multipliers (ADMM) framework. Experimental comparisons with classical unsupervised feature selection algorithms on six public datasets show that SPSR achieves higher clustering accuracy and effectively identifies representative feature subsets.
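The Schatten-p surrogate replaces rank(A) with the sum of the p-th powers of the singular values: for p = 1 this is the convex nuclear norm, and as p decreases toward 0 it approaches the count of nonzero singular values, i.e. the rank itself. A short sketch of the surrogate only (the SPSR solver's ALM/ADMM machinery is beyond a few lines):

```python
import numpy as np

def schatten_p_quasi_norm(A: np.ndarray, p: float) -> float:
    """sum_i sigma_i^p -- the rank surrogate minimized in Schatten-p methods."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(s ** p))

A = np.diag([3.0, 2.0, 0.0])  # rank-2 matrix
for p in (1.0, 0.5, 0.1):
    print(p, round(schatten_p_quasi_norm(A, p), 3))
# 1.0 -> 5.0 (nuclear norm), 0.5 -> 3.146, 0.1 -> 2.188:
# as p -> 0 the sum approaches the number of nonzero singular values,
# i.e. rank(A) = 2, which is why small p approximates rank minimization
# more tightly than the convex nuclear norm.
```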

20.
In multi-label learning, feature selection is an effective means of handling high-dimensional data and improving classification performance. However, most existing feature selection algorithms assume a roughly balanced label distribution and rarely consider label imbalance. To address this, this paper proposes a multi-label feature selection algorithm with weakening marginal labels (WML). For each label, the frequency ratio of positive to negative instances is computed as the label's weight; marginal labels are then weakened through this weighting, folding label-space information into the feature selection process to obtain a more effective feature ranking and improve how precisely labels describe the samples. Experimental results on multiple datasets show the algorithm's advantages, and stability analysis and statistical hypothesis tests further demonstrate its effectiveness and soundness.
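The label weight described above is the frequency ratio of positive to negative instances under each label, so rarely assigned (marginal) labels receive small weights and contribute less to the feature score. A minimal sketch of that weighting step, assuming a binary label matrix (the downstream feature-scoring step of WML is not shown):

```python
import numpy as np

def label_weights(Y: np.ndarray) -> np.ndarray:
    """Weight of each label = (#positive) / (#negative) instances,
    so sparsely assigned (marginal) labels are weakened."""
    pos = Y.sum(axis=0)
    neg = Y.shape[0] - pos
    return pos / np.maximum(neg, 1)

# Rows = samples, columns = labels (1 = label applies to the sample).
Y = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 0, 1],
              [1, 1, 1]])
print(label_weights(Y))  # [3.0, 0.333, 3.0]: label 1 is marginal -> smallest weight
```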
