Similar Literature
Found 20 similar articles (search time: 31 ms)
1.
In feature selection problems, strongly relevant features may be misjudged as redundant by the approximate Markov blanket. To avoid this, a new concept called the strong approximate Markov blanket is proposed. It is theoretically proved that no strongly relevant feature will be misjudged as redundant under the proposed concept. To reduce computation time, we further propose the modified strong approximate Markov blanket, which still outperforms the approximate Markov blanket in avoiding misjudgment of strongly relevant features. A new filter-based feature selection method applicable to high-dimensional datasets is then developed: it first groups features to remove redundant ones, and then uses a sequential forward selection method to remove irrelevant ones. Numerical results on four benchmark and seven real datasets suggest that it is a competitive feature selection method with high classification accuracy, a moderate number of selected features, and above-average robustness.
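As background, here is a minimal sketch of the classic FCBF-style approximate Markov blanket test that can misjudge strongly relevant features; the paper's strong approximate Markov blanket is not reproduced, and the use of symmetric uncertainty as the correlation measure is an assumption.

```python
# FCBF-style approximate Markov blanket redundancy test (a sketch, not the
# paper's "strong" variant). Features and the class label are assumed discrete.
from sklearn.metrics import mutual_info_score

def symmetric_uncertainty(x, y):
    """SU(x, y) = 2 I(x; y) / (H(x) + H(y)) for discrete variables."""
    mi = mutual_info_score(x, y)
    hx = mutual_info_score(x, x)   # H(x) = I(x; x)
    hy = mutual_info_score(y, y)
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

def is_approx_markov_blanket(fj, fi, c):
    """Fj approximately 'blankets' Fi (so Fi is judged redundant) when Fj is
    at least as relevant to the class c as Fi, and Fi is more correlated with
    Fj than with c. Strongly relevant features can fail this test, which is
    the misjudgment the paper's stronger concept avoids."""
    su_j_c = symmetric_uncertainty(fj, c)
    su_i_c = symmetric_uncertainty(fi, c)
    return su_j_c >= su_i_c and symmetric_uncertainty(fi, fj) >= su_i_c
```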

2.
Hiroshi 《Pattern recognition》2006,39(12):2393-2404
We present a new method based on the ROC (Receiver Operating Characteristic) curve to efficiently select a feature subset for classifying a high-dimensional microarray dataset with a limited number of observations. Our method has two steps: (1) selecting the features most relevant to the target label using the ROC curve, and (2) iteratively eliminating redundant features using the ROC curves. The ROC curve is strongly related to a non-parametric hypothesis test, which makes it effective for datasets with few observations. Experiments with real datasets revealed a significant performance advantage of our method over two competing feature subset selection methods.
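A minimal sketch of step (1), ranking features by their per-feature ROC AUC, which is equivalent to the non-parametric Wilcoxon–Mann–Whitney statistic the abstract alludes to; the iterative redundancy-elimination step (2) and the top_k parameter are assumptions, not the paper's exact procedure.

```python
# Rank features of a two-class dataset by single-feature ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def rank_features_by_auc(X, y, top_k=50):
    """Score each feature by how well it alone separates the two classes."""
    aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    aucs = np.maximum(aucs, 1.0 - aucs)   # direction-invariant separability
    return np.argsort(-aucs)[:top_k]      # indices of the most relevant features
```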

3.
Niu Ben, Yi Wenjie, Tan Lijing, Geng Shuang, Wang Hong 《Natural computing》2021,20(1):63-76

Feature selection plays an important role in data preprocessing. Its aim is to recognize and remove redundant or irrelevant features; the key issue is to use as few features as possible while achieving the lowest classification error rate. This paper formulates feature selection as a multi-objective problem and uses a multi-objective bacterial foraging optimization algorithm to select feature subsets, with the k-nearest neighbor algorithm as the evaluator. A roulette-wheel mechanism is further introduced to remove duplicated features, and four information exchange mechanisms are integrated into the bacteria-inspired algorithm to prevent individuals from getting trapped in local optima, so as to achieve better results on high-dimensional feature selection problems. On six small datasets and ten high-dimensional datasets, comparative experiments with conventional wrapper methods and several evolutionary algorithms demonstrate the superiority of the proposed bacteria-inspired feature selection method.
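A minimal sketch of the wrapper evaluation such a search would use: each candidate subset (a 0/1 mask) is scored on the two objectives being traded off, k-nearest-neighbor error and subset size. The bacterial foraging search itself and its four information exchange mechanisms are not reproduced.

```python
# Evaluate one candidate feature subset for a multi-objective wrapper search.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate_subset(mask, X, y, k=5):
    """Return (classification error, number of features) for one candidate."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 1.0, 0                          # empty subsets are worst-case
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                          X[:, cols], y, cv=5).mean()
    return 1.0 - acc, cols.size                # both objectives are minimized
```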


4.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used for classification. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature–feature and feature–class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected feature subset is evaluated using multiple classifiers on multiple datasets. Compared with several competing feature selection techniques on twelve real-life datasets of varied dimensionality and number of instances, our method performs significantly well in terms of both classification accuracy and execution time.
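A minimal sketch of this style of greedy selection, with relevance and redundancy weighted equally (the paper's exact combination is not reproduced); features are assumed discretized.

```python
# Greedy relevance-minus-redundancy selection using mutual information.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def greedy_mi_select(X_disc, y, n_select):
    """X_disc: integer-coded discretized features; y: class labels."""
    relevance = mutual_info_classif(X_disc, y, discrete_features=True)
    selected, remaining = [], list(range(X_disc.shape[1]))
    while len(selected) < n_select and remaining:
        best, best_score = None, -np.inf
        for j in remaining:
            # mean feature-feature MI against already-selected features
            redundancy = (np.mean([mutual_info_score(X_disc[:, j], X_disc[:, s])
                                   for s in selected]) if selected else 0.0)
            score = relevance[j] - redundancy   # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected
```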

5.
When the F-score is used as a feature evaluation criterion, it does not account for the effect that the different measurement scales of different features have on feature importance. To address this, a new feature evaluation criterion, D-score, is proposed. It measures the discriminative power of sample features between two or more classes, and it is unaffected by the measurement scales of the features. Using D-score as the criterion of feature importance, combined with three feature search strategies (sequential forward search, sequential forward floating search, and sequential backward floating search) and with support vector machine classification accuracy to evaluate the classification performance of feature subsets, three hybrid feature selection methods are obtained. These methods combine the respective advantages of filter and wrapper approaches. Experiments on nine standard datasets from the UCI machine learning repository, together with a comparison against a hybrid feature selection method based on an improved F-score and a support vector machine, show that the D-score criterion is an effective measure of feature importance, i.e., of the discriminative power of features. The hybrid feature selection methods based on this criterion and SVM achieve effective feature selection, compressing the dimensionality while keeping the discriminative ability of the dataset unchanged.
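For reference, the standard two-class F-score that D-score modifies; the D-score definition itself is not reproduced here.

```latex
% Two-class F-score of feature i: between-class separation of the feature
% means over the within-class variances. It is sensitive to the feature's
% measurement scale, which is the limitation D-score addresses.
F(i) = \frac{\left(\bar{x}_i^{(+)} - \bar{x}_i\right)^2 + \left(\bar{x}_i^{(-)} - \bar{x}_i\right)^2}
            {\dfrac{1}{n_+ - 1}\sum_{k=1}^{n_+}\left(x_{k,i}^{(+)} - \bar{x}_i^{(+)}\right)^2
           + \dfrac{1}{n_- - 1}\sum_{k=1}^{n_-}\left(x_{k,i}^{(-)} - \bar{x}_i^{(-)}\right)^2}
```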

6.
Feature subset selection is a fundamental problem in data classification tasks. Its purpose is to find an efficient subset of the original features that increases both the efficiency and accuracy of classification while reducing its cost. High-dimensional datasets with a very large number of predictive attributes but few instances require techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. In the filter phase, the symmetrical uncertainty (SU) criterion is used to weight features by how well they discriminate the classes. In the wrapper phase, FICA (fuzzy imperialist competitive algorithm) and IWSSr (Incremental Wrapper Subset Selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, particularly from the biosciences and medicine, where the number of features is large relative to the number of samples, inducing a severe curse of dimensionality. Comparison between the results of our method and other algorithms confirms that our method achieves the highest accuracy and also finds an efficient, compact subset.

7.
Today, feature selection is an active research area in machine learning. The main idea of feature selection is to choose a subset of the available features by eliminating features with little or no predictive information, as well as redundant features that are strongly correlated. Many approaches to feature selection exist, but most can work only with crisp data; so far there have been few approaches that can directly work with both crisp and low-quality (imprecise and uncertain) data. We therefore propose a new feature selection method that can handle both. The proposed approach is based on a Fuzzy Random Forest and integrates filter and wrapper methods into a sequential search procedure that improves the classification accuracy of the selected features. It consists of the following main steps: (1) scaling and discretization of the feature set, with feature pre-selection via the discretization process (filter); (2) ranking of the pre-selected features using the Fuzzy Decision Trees of a Fuzzy Random Forest ensemble; and (3) wrapper feature selection using a Fuzzy Random Forest ensemble based on cross-validation. The efficiency and effectiveness of this approach are demonstrated through several experiments using both high-dimensional and low-quality datasets. The approach shows good performance (not only in classification accuracy but also in the number of features selected) and good behavior both on high-dimensional datasets (microarray datasets) and on low-quality datasets.

8.
Imbalanced datasets are frequently found in real-world applications, e.g., fraud detection and cancer diagnosis. For such datasets, improving the accuracy of identifying the minority class is a critically important issue, and feature selection is one way to address it. An effective feature selection method can choose a subset of features that favors accurate determination of the minority class. A decision tree is a classifier that can be built using different splitting criteria, and its advantage is the ease of detecting which feature is used as a splitting node; a decision-tree splitting criterion can therefore serve as a feature selection method. In this paper, an embedded feature selection method using a proposed weighted Gini index (WGI) is presented. Comparison with the Chi2, F-statistic and Gini index feature selection methods shows that F-statistic and Chi2 reach the best performance when only a few features are selected, while as the number of selected features increases, the proposed method has the highest probability of achieving the best performance. The area under the receiver operating characteristic curve (ROC AUC) and the F-measure are used as evaluation criteria. Experimental results with two datasets show that ROC AUC performance can be high even if only a few features are selected and used, and changes only slightly as more features are added; the F-measure, however, achieves excellent performance only if 20% or more of the features are chosen. The results help practitioners select a proper feature selection method when facing a practical problem.
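A minimal sketch of a class-weighted Gini impurity of the kind described; the paper's exact WGI definition is not reproduced, and the inverse-class-frequency weighting shown here is an assumption.

```python
# Class-weighted Gini impurity for a decision-tree node.
import numpy as np

def inverse_frequency_weights(y_train):
    """Class weights computed from the full training set; minority classes
    receive larger weights (an assumed scheme, not the paper's)."""
    classes, counts = np.unique(y_train, return_counts=True)
    return {c: len(y_train) / n for c, n in zip(classes, counts)}

def weighted_gini(y_node, class_weight):
    """Gini impurity of a node with class-weighted proportions."""
    classes, counts = np.unique(y_node, return_counts=True)
    w = np.array([class_weight[c] * n for c, n in zip(classes, counts)])
    p = w / w.sum()                 # weighted class proportions in the node
    return 1.0 - np.sum(p ** 2)     # 0 = pure node, larger = more mixed
```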

9.
Detecting informative features for explainable analysis of high-dimensional data is a significant and challenging task, especially when very few samples are available. Feature selection, especially unsupervised feature selection, is the right way to address this challenge. Therefore, two unsupervised spectral feature selection algorithms are proposed in this paper. They group features using an advanced self-tuning spectral clustering algorithm based on local standard deviation, so as to detect globally optimal feature clusters as far as possible. Two feature ranking techniques, cosine-similarity-based and entropy-based, are then proposed, so that a representative feature of each cluster can be detected to form the feature subset on which the explainable classification system is built. The effectiveness of the proposed algorithms is tested on high-dimensional benchmark omics datasets and compared with peer methods, and statistical tests are conducted to determine whether the proposed algorithms differ significantly from the peers. The extensive experiments demonstrate that the proposed unsupervised spectral feature selection algorithms outperform the peer methods, especially the one based on the cosine-similarity ranking technique; the statistical test results show that the entropy-based spectral feature selection algorithm performs best. The detected features show strong discriminative capability in downstream classifiers for omics data, so that AI systems built on them are reliable and explainable, which is especially significant for building transparent and trustworthy medical diagnostic systems from an interpretable-AI perspective.
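A minimal sketch of the cluster-then-represent idea: features are clustered (plain spectral clustering here, rather than the paper's self-tuning variant based on local standard deviation), and the feature most cosine-similar to its cluster mean is kept as the representative. The cluster count is an assumption.

```python
# Select one representative feature per spectral cluster of features.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def spectral_representatives(X, n_clusters=20):
    F = X.T                                            # rows = features
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="nearest_neighbors").fit_predict(F)
    reps = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        centroid = F[idx].mean(axis=0, keepdims=True)
        sims = cosine_similarity(F[idx], centroid).ravel()
        reps.append(idx[np.argmax(sims)])              # most central feature
    return sorted(reps)
```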

10.
We address credit scoring as a classification and feature subset selection problem. Based on the current framework of sophisticated feature selection methods, we identify features that contain the most relevant information for distinguishing good loan payers from bad loan payers. The feature selection methods are validated on several real-world datasets with different types of classifiers. We show the advantages that follow from using the subspace approach to classification, discuss many practical issues related to the applicability of feature selection methods, and point out difficulties that have been insufficiently emphasized in the standard feature selection literature. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 985–999, 2005.

11.
Feature selection ensemble methods are a recent approach that aims to add diversity to the sets of selected features, improving performance and yielding more robust and stable results. However, using an ensemble introduces the need for an aggregation step to combine the outputs of all the methods that form the ensemble. Besides, when computational efficiency matters, ranking methods that order all initial features are preferred, so an additional thresholding step is also mandatory. In this work, two ensemble designs based on ranking methods are described; the main difference between them is the order in which the combination and thresholding steps are performed. In addition, a new automatic threshold based on the combination of three data complexity measures is proposed and compared with traditional thresholding approaches that retain a fixed percentage of features. The behavior of these methods was tested, in terms of SVM classification accuracy, with satisfactory results, in three scenarios: synthetic datasets and two types of real datasets (where sample size is much larger than feature size, and where feature size is much larger than sample size).
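A minimal sketch of the two ensemble designs being contrasted: combine the rankings first and then threshold, or threshold each ranking first and then combine. Mean-rank fusion and a fixed 10% cut are assumptions here, not the paper's automatic complexity-based threshold.

```python
# Two orderings of the combination and thresholding steps in a ranking ensemble.
import numpy as np

def combine_then_threshold(rankings, keep_frac=0.1):
    """rankings: list of arrays, rankings[m][j] = rank of feature j by method m
    (lower is better). Fuse first, then cut."""
    fused = np.mean(rankings, axis=0)                  # mean-rank aggregation
    k = max(1, int(keep_frac * fused.size))
    return set(np.argsort(fused)[:k])                  # best fused ranks

def threshold_then_combine(rankings, keep_frac=0.1):
    """Cut each ranking first, then merge the retained sets."""
    k = max(1, int(keep_frac * rankings[0].size))
    kept = [set(np.argsort(r)[:k]) for r in rankings]
    return set.union(*kept)                            # union of per-method picks
```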

12.
This paper proposes a novel feature selection method that combines a self-representation loss function, a graph regularization term and an \({l_{2,1}}\)-norm regularization term. Unlike the traditional least-squares loss, which minimizes the regression error between class labels and their predictions, the self-representation loss represents each feature as a linear combination of its relevant features, aiming to select representative features and to ensure robustness to outliers. The graph regularization terms encode two kinds of inherent information: the relationship between samples (the sample–sample relation) and the relationship between features (the feature–feature relation); each reflects the similarity between two samples or two features and preserves that relation in the coefficient matrix. The \({l_{2,1}}\)-norm regularization term is used to conduct feature selection, i.e., to select features satisfying the characteristics above. Furthermore, we put forward a new optimization method to solve the objective function. Finally, the reduced data are fed into a support vector machine (SVM) for classification on real datasets. The experimental results show that the proposed method outperforms state-of-the-art methods such as k-nearest neighbor, ridge regression and SVM.
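A plausible form of the objective described, under the assumption that the sample graph acts on the projected data and the feature graph on the coefficient matrix; the paper's exact terms and weights may differ.

```latex
% X: data matrix, W: coefficient matrix, L_s / L_f: sample / feature graph
% Laplacians, alpha/beta/gamma: trade-off parameters (all assumed notation).
\min_{W}\;
  \underbrace{\|X - XW\|_F^2}_{\text{self-representation loss}}
  + \alpha\,\underbrace{\mathrm{tr}\!\left(W^{\top} X^{\top} L_s\, X W\right)}_{\text{sample--sample relation}}
  + \beta\,\underbrace{\mathrm{tr}\!\left(W^{\top} L_f\, W\right)}_{\text{feature--feature relation}}
  + \gamma\,\|W\|_{2,1}
```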

13.
In high-dimensional data, many features are mutually irrelevant or redundant, which poses a huge challenge to traditional learning algorithms; feature selection emerged to address this problem. At the same time, in many practical problems the data have multiple views and labels are hard to obtain, so multi-view learning and semi-supervised learning have become hot topics in machine learning. This paper studies how to select a maximally relevant and minimally redundant feature subset from partially labeled multi-view data, and proposes a multi-view semi-supervised feature selection method. To eliminate redundant and irrelevant features, the method explores the complementary information contained in the multi-view data and the redundancy relations among features within each view, and it uses the information carried by a small amount of labeled data, in cooperation with the unlabeled data, to perform feature selection. Experimental results verify that the algorithm achieves good feature selection and clustering performance.

14.
For the feature selection problem in supervised classification, a wrapper feature selection method based on a quantum evolutionary algorithm is proposed. First, the shortcoming of existing subset evaluation methods, namely an excessive preference for classification accuracy, is analyzed, and two subset evaluation methods based on a fixed threshold and on statistical tests are proposed. The evolutionary strategy of the quantum evolutionary algorithm is then improved by dividing the evolution process into two stages that use the individual best and the global best, respectively, as the population's evolution target. On this basis, a feature selection algorithm is designed following the general framework of wrapper feature selection. Finally, experiments on 15 UCI datasets verify the effectiveness of the subset evaluation methods and the evolutionary strategy, as well as the superiority of the new method over six other feature selection methods. The results show that the new method achieves similar or better classification accuracy on more than 80% of the datasets, and selects subsets with fewer features on 86.67% of them.

15.
Feature selection methods can pick a suitable small set of features out of thousands, making models more effective and efficient. Considering that features in real-world high-dimensional datasets are interrelated, and that a complex-network structure describes the feature space globally and reasonably, this paper proposes an unsupervised feature selection method based on node degree centrality in complex networks. According to the strength of the correlation between features, a threshold is set to retain the qualifying associations; the retained associations are then used to build an undirected, unweighted network whose nodes are the features; finally, degree centrality is used to identify the most influential set of nodes in this network, i.e., the optimal feature subset. This method adds flexibility in handling feature importance and feature redundancy. Comparative experiments on several high-dimensional datasets against common feature selection and feature extraction methods confirm the effectiveness and general applicability of the method.
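A minimal sketch of the described pipeline; the correlation threshold and the number of selected features are assumptions.

```python
# Threshold feature-feature correlations, build an unweighted graph over
# features, and keep the highest-degree (most central) nodes.
import numpy as np

def degree_centrality_select(X, corr_threshold=0.5, n_select=30):
    C = np.abs(np.corrcoef(X, rowvar=False))      # |correlation| between features
    np.fill_diagonal(C, 0.0)                      # no self-loops
    A = (C > corr_threshold).astype(int)          # undirected, unweighted adjacency
    degree = A.sum(axis=1)                        # degree centrality per feature
    return np.argsort(-degree)[:n_select]         # most influential features
```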

16.
In general, the analysis of microarray data requires two steps: feature selection and classification. Given the variety of feature selection methods and classifiers, it is difficult to find optimal ensembles composed of feature–classifier pairs. This paper proposes a novel method based on an evolutionary algorithm (EA) to form sophisticated ensembles of features and classifiers that achieve high classification performance. Despite the exponential number of possible ensembles of individual feature–classifier pairs, an EA can produce the best ensemble in a reasonable amount of time. The chromosome is encoded with real values that decide the weight of each feature–classifier pair in an ensemble. Experimental results with two well-known microarray datasets, in terms of time and classification rate, indicate that the proposed method produces ensembles superior to individual classifiers, as well as to ensembles optimized by random and greedy strategies.

17.
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality and need to address it to be effective. Examples of such data include the bag-of-words representation in text classification and gene expression data for tumor detection/classification. Among the large number of features characterizing the instances, many may be irrelevant (or even detrimental) to the learning task. There is thus a clear need for adequate techniques for feature representation, reduction, and selection that improve both classification accuracy and memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques suitable for medium- and high-dimensional datasets. Experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.

18.
Reducing the dimensionality of data has been a challenging task in data mining and machine learning applications, where the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of learning algorithms. Feature selection is one dimension reduction technique that allows a better understanding of the data and improves the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels remains challenging. This paper proposes a novel method for unsupervised feature selection that efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features, and then presents a novel algorithm for greedily minimizing this reconstruction error given the features selected so far, based on an efficient recursive formula for the reconstruction error. Experiments on real datasets demonstrate the effectiveness of the proposed algorithm in comparison with state-of-the-art methods for unsupervised feature selection.
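A minimal, naive sketch of greedy selection by reconstruction error; the paper's efficient recursive update is not reproduced, so the least-squares projection is recomputed for every candidate.

```python
# Greedily pick features so that the selected columns best reconstruct the
# whole data matrix A in the least-squares (Frobenius) sense.
import numpy as np

def greedy_reconstruction_select(A, n_select):
    selected = []
    for _ in range(n_select):
        best, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in selected:
                continue
            S = A[:, selected + [j]]
            # Frobenius reconstruction error of A from the candidate subset
            W, *_ = np.linalg.lstsq(S, A, rcond=None)
            err = np.linalg.norm(A - S @ W)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
    return selected
```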

19.
Microarray experiments have raised challenging questions, such as how to accurately identify a set of marker genes responsible for various cancers. In statistics, this task can be posed as a feature selection problem. Since a support vector machine can deal with a vast number of features, it has gained widespread use in microarray data analysis. We propose a stepwise feature selection using the generalized logistic loss, a smooth approximation of the usual hinge loss. We compare the proposed method with the support vector machine with recursive feature elimination on both real and simulated datasets, and show that the proposed method can improve the quality of feature selection through standardization while retaining predictive performance similar to that of recursive feature elimination.
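One standard smooth "generalized logistic" approximation of the hinge loss, for orientation; the paper's exact parameterization may differ.

```latex
% z = y f(x) is the margin; gamma controls the sharpness of the approximation.
g_\gamma(z) = \frac{1}{\gamma}\,\log\!\left(1 + e^{\gamma (1 - z)}\right),
\qquad
\lim_{\gamma \to \infty} g_\gamma(z) = \max(0,\, 1 - z)
```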

20.
Over the last few years, the dimensionality of datasets involved in data mining applications has increased dramatically. In this situation, feature selection becomes indispensable as it allows for dimensionality reduction and relevance detection. The research proposed in this paper broadens the scope of feature selection by taking into consideration not only the relevance of the features but also their associated costs. A new general framework is proposed, which consists of adding a new term to the evaluation function of a filter feature selection method so that the cost is taken into account. Although the proposed methodology could be applied to any feature selection filter, in this paper the approach is applied to two representative filter methods: Correlation-based Feature Selection (CFS) and Minimal-Redundancy-Maximal-Relevance (mRMR), as an example of use. The behavior of the proposed framework is tested on 17 heterogeneous classification datasets, employing a Support Vector Machine (SVM) as a classifier. The results of the experimental study show that the approach is sound and that it allows the user to reduce the cost without compromising the classification error.
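A minimal sketch of the framework's core idea, subtracting a weighted cost term from a filter's relevance score; mutual information stands in for the CFS/mRMR merit functions, and lambda_ and the cost vector are user-supplied assumptions.

```python
# Cost-penalized filter scores: relevance minus a weighted feature cost.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def cost_aware_scores(X, y, costs, lambda_=0.5):
    """Higher is better. lambda_ trades classification merit against the
    acquisition cost of each feature (both are assumed inputs)."""
    relevance = mutual_info_classif(X, y)
    return relevance - lambda_ * np.asarray(costs)   # cost-penalized merit
```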
