Similar Documents
20 similar documents found (search time: 0 ms)
1.
An efficient filter feature selection (FS) method, the SVM-FuzCoC approach, is proposed in this paper, achieving a satisfactory trade-off between classification accuracy and dimensionality reduction. Additionally, the method has reasonably low computational requirements, even in high-dimensional feature spaces. To assess the quality of features, we introduce a local fuzzy evaluation measure that embraces the fuzzy membership degree of each pattern in its own class. Accordingly, this measure reveals the adequacy of data coverage provided by each feature. The required membership grades are determined via a novel fuzzy-output kernel-based support vector machine, applied on single features. Based on a fuzzy complementary criterion (FuzCoC), the FS procedure iteratively selects the feature with the maximum additional contribution with respect to the information content provided by the previously selected features. This search strategy yields small subsets of powerful and complementary features, alleviating the feature redundancy problem. We also devise different SVM-FuzCoC variants by employing seven other methods to derive fuzzy degrees from SVM outputs, based on probabilistic or fuzzy criteria. Our method is compared with a set of existing FS methods in terms of performance, dimensionality reduction, and computational speed, via a comprehensive experimental setup including synthetic and real-world datasets.
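The abstract gives no implementation details; below is a minimal Python sketch of the complementary-coverage idea only, in which Platt-scaled probabilities from per-feature SVMs stand in for the paper's fuzzy-output SVM memberships. All names, thresholds, and the dataset are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a FuzCoC-style complementary feature selection loop.
# Assumption: Platt-scaled probabilities from per-feature SVMs stand in for
# the paper's fuzzy membership degrees; the stopping rule is simplified.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

def per_feature_membership(X, y):
    """Membership degree of each pattern in its own class, one SVM per feature."""
    n, d = X.shape
    mu = np.zeros((d, n))
    for j in range(d):
        clf = SVC(kernel="rbf", probability=True, random_state=0)
        clf.fit(X[:, [j]], y)
        proba = clf.predict_proba(X[:, [j]])
        # membership of each pattern in its true class
        mu[j] = proba[np.arange(n), np.searchsorted(clf.classes_, y)]
    return mu

def fuzcoc_select(mu, coverage_threshold=0.95, min_gain=1e-3):
    """Iteratively pick the feature with maximum additional coverage."""
    d, n = mu.shape
    covered = np.zeros(n)              # current fuzzy coverage of each pattern
    selected = []
    while covered.mean() < coverage_threshold:
        gains = np.array([np.maximum(mu[j], covered).sum() - covered.sum()
                          for j in range(d)])
        gains[selected] = -np.inf      # do not reselect a feature
        best = int(np.argmax(gains))
        if gains[best] < min_gain * n:
            break                      # no feature adds useful coverage
        selected.append(best)
        covered = np.maximum(mu[best], covered)
    return selected

X, y = load_breast_cancer(return_X_y=True)
print("selected features:", fuzcoc_select(per_feature_membership(X, y)))
```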

2.
An SVM-based feature screening method and several applications   Cited in total: 7 (self-citations: 7, citations by others: 7)
For fitting problems, traditional pattern-recognition feature screening methods accept or discard each feature according to its contribution to fitting the training data. They do not account for the difference between empirical risk minimization and structural risk minimization, and therefore cannot yield the feature subset with the strongest predictive power. We therefore propose a new feature screening algorithm that combines support vector regression with leave-one-out validation, and apply it to feature screening on two experimental datasets: nickel-metal-hydride battery materials and alumina dissolution rates.
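As an illustration of the combination described above, here is a hedged sketch of backward feature screening scored by leave-one-out support vector regression; the synthetic dataset and SVR settings are assumptions, since the original battery and alumina data are not available.

```python
# Hedged sketch: backward feature screening with SVR scored by leave-one-out error.
# Dataset and hyperparameters are illustrative, not those of the original paper.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def loo_score(X, y):
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    return cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()

X, y = make_regression(n_samples=40, n_features=8, n_informative=3,
                       noise=5.0, random_state=0)
features = list(range(X.shape[1]))
best = loo_score(X, y)
improved = True
while improved and len(features) > 1:
    improved = False
    for j in list(features):
        trial = [f for f in features if f != j]
        score = loo_score(X[:, trial], y)
        if score > best:               # removing feature j improves LOO performance
            best, features, improved = score, trial, True
            break
print("retained features:", features, "LOO neg-MSE:", round(best, 2))
```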

3.
We introduce a novel wrapper algorithm for feature selection using support vector machines with kernel functions. Our method is based on sequential backward selection, using the number of errors on a validation subset as the measure for deciding which feature to remove in each iteration. We compare our approach with other algorithms, such as a filter method and Recursive Feature Elimination with SVM (SVM-RFE), to demonstrate its effectiveness and efficiency.
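A minimal sketch of the wrapper loop described here, assuming an RBF SVM and a single held-out validation subset; the dataset and hyperparameters are illustrative.

```python
# Sketch of sequential backward selection with an SVM wrapper,
# scored by the number of errors on a held-out validation subset.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                             stratify=y, random_state=0)

def validation_errors(feats):
    clf = SVC(kernel="rbf").fit(X_tr[:, feats], y_tr)
    return np.sum(clf.predict(X_val[:, feats]) != y_val)

features = list(range(X.shape[1]))
while len(features) > 1:
    # try removing each remaining feature; keep the removal that hurts least
    errs = [(validation_errors([f for f in features if f != j]), j)
            for j in features]
    best_err, worst_feature = min(errs)
    if best_err > validation_errors(features):
        break                          # every removal increases validation error
    features.remove(worst_feature)
print("selected features:", features)
```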

4.
For multidimensional datasets, a wrapper feature selection algorithm based on feature clustering is proposed to obtain an optimal feature subset. In the initial stage, three-way decision theory is used to dynamically partition the original feature set into several feature subspaces, and a feature clustering algorithm groups the features within each subspace. A representative feature is then picked from each feature cluster; the remaining features are sorted in descending order by neighborhood mutual information and considered iteratively, with a wrapper evaluating whether each candidate feature should be selected, yielding an optimal feature subset with the lowest classification error rate. Experimental results on UCI datasets show that, compared with other feature selection algorithms, this algorithm effectively improves the classification accuracy of the libSVM, J48, Naïve Bayes, and KNN classifiers on the various datasets.

5.
Ensemble of classifiers is a learning paradigm in which many classifiers are jointly used to solve a problem. Research has shown that ensembles are very effective for classification tasks. Diversity and accuracy are two basic requirements for ensemble creation. In this paper, we propose an ensemble creation method based on GA wrapper feature selection. Preliminary experimental results on real-world data show that the proposed method is promising, especially when the amount of training data is limited.
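The abstract does not detail the GA; the following is a compact, hedged sketch of GA wrapper feature selection with cross-validated accuracy as the fitness. The ensemble step is omitted for brevity (the top subsets in the final population could be combined, e.g. by majority voting); the population size, operators, classifier, and dataset are all assumptions.

```python
# Compact sketch of GA wrapper feature selection: binary masks as chromosomes,
# cross-validated accuracy as fitness, tournament selection, uniform crossover,
# and bit-flip mutation. Parameters are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features, pop_size, n_gen = X.shape[1], 20, 15

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

pop = rng.random((pop_size, n_features)) < 0.5
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # tournament selection of parents
    parents = pop[[max(rng.choice(pop_size, 2, replace=False),
                       key=lambda i: scores[i]) for _ in range(pop_size)]]
    # uniform crossover between consecutive parent pairs
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        swap = rng.random(n_features) < 0.5
        children[i, swap], children[i + 1, swap] = parents[i + 1, swap], parents[i, swap]
    # bit-flip mutation
    children ^= rng.random((pop_size, n_features)) < 0.02
    pop = children

scores = np.array([fitness(ind) for ind in pop])
best = pop[np.argmax(scores)]
print("best subset size:", int(best.sum()), "CV accuracy:", round(scores.max(), 3))
```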

6.
This paper presents a novel wrapper feature selection algorithm for classification problems, namely the hybrid genetic algorithm (GA)- and extreme learning machine (ELM)-based feature selection algorithm (HGEFS). It uses a GA to wrap an ELM and search for optimal subsets in the huge feature space, and a set of these subsets is then combined into an ensemble to improve the final prediction accuracy. To prevent the GA from being trapped in a local optimum, we propose a novel and efficient mechanism, designed specifically for feature selection problems, to maintain the GA's diversity. To measure each subset's quality fairly and efficiently, we adopt a modified ELM, the error-minimized extreme learning machine (EM-ELM), which automatically determines an appropriate network architecture for each feature subset. Moreover, EM-ELM has good generalization ability and extreme learning speed, which allows us to perform the wrapper feature selection process in an affordable time. In other words, we simultaneously optimize the feature subset and the classifiers' parameters. After the GA search finishes, to further improve the prediction accuracy and obtain a stable result, we select a set of EM-ELMs from the final population to form the ensemble according to a specific ranking and selection strategy. To verify the performance of HGEFS, empirical comparisons with different feature selection methods are carried out on benchmark datasets. The results reveal that HGEFS is a useful method for feature selection problems and consistently outperforms the other algorithms in the comparison.

7.
8.
We focus on a hybrid approach to feature selection. We begin our analysis with a filter model, exploiting the geometrical information contained in the minimum spanning tree (MST) built on the learning set. This model uses a statistical test of relative certainty gain within a forward selection algorithm. In the second part of the paper, we show that the MST can be replaced by the 1-nearest-neighbor graph without compromising the statistical framework. This leads to a feature selection algorithm belonging to a new category of hybrid (filter-wrapper) models. Experimental results on readily available synthetic and natural domains are presented and discussed.

9.
The authors present a statistical-heuristic feature selection criterion for constructing multibranching decision trees in noisy real-world domains. Real-world problems often have multivalued features, for which multibranching decision trees provide a more efficient and more comprehensible solution than binary decision trees. The authors propose a statistical-heuristic criterion, the symmetrical τ, and discuss its consistency with a Bayesian classifier and its built-in statistical test. Combining a proportional-reduction-in-error measure with a cost-of-complexity heuristic makes the symmetrical τ a powerful criterion with many merits, including robustness to noise, fairness to multivalued features, the ability to handle Boolean combinations of logical features, and a middle-cut preference. The τ criterion also provides a natural basis for prepruning and dynamic error estimation. Illustrative examples are presented.
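For illustration, here is a hedged sketch of a symmetrical Goodman-Kruskal τ computed from a feature-value by class contingency table; this is one commonly used symmetrization and may differ in details from the paper's exact definition.

```python
# Hedged sketch: a symmetrical Goodman-Kruskal tau computed from a
# feature-value x class contingency table (a common symmetrization; the
# paper's exact formulation is not reproduced here).
import numpy as np

def symmetrical_tau(table):
    P = table / table.sum()              # joint probabilities P_ij
    prow = P.sum(axis=1)                 # feature-value marginals P_i.
    pcol = P.sum(axis=0)                 # class marginals P_.j
    num = (np.sum(P**2 / prow[:, None]) + np.sum(P**2 / pcol[None, :])
           - np.sum(prow**2) - np.sum(pcol**2))
    den = 2.0 - np.sum(prow**2) - np.sum(pcol**2)
    return num / den

# Contingency table: rows = values of a multivalued feature, columns = classes.
table = np.array([[30.0,  5.0],
                  [ 4.0, 25.0],
                  [10.0, 10.0]])
print("symmetrical tau:", round(symmetrical_tau(table), 3))
```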

10.
Feature selection based on the Fisher criterion and feature clustering   Cited in total: 2 (self-citations: 0, citations by others: 2)
王飒  郑链 《计算机应用》2007,27(11):2812-2813
Feature selection is one of the important problems in fields such as machine learning and pattern recognition. For high-dimensional data, a feature selection method based on the Fisher criterion and feature clustering is proposed. First, a subset of features with strong discriminative power is pre-selected using the Fisher criterion; hierarchical clustering is then applied to the features in this pre-selected subset, ultimately removing irrelevant and redundant features. Experimental results show that this is an effective feature selection method.
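Below is a hedged sketch of the two-stage idea: Fisher-score pre-selection followed by hierarchical clustering of the pre-selected features on a correlation distance, keeping one representative per cluster. The thresholds, distance measure, and dataset are illustrative assumptions.

```python
# Hedged sketch: Fisher-score pre-selection, then hierarchical clustering of the
# pre-selected features (1 - |correlation| distance), keeping the best-scoring
# feature of each cluster. Thresholds and dataset are illustrative.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.datasets import load_breast_cancer

def fisher_score(X, y):
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1]); den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / den

X, y = load_breast_cancer(return_X_y=True)
scores = fisher_score(X, y)
pre = np.argsort(scores)[::-1][:15]          # pre-select the 15 most discriminative
corr = np.corrcoef(X[:, pre], rowvar=False)
dist = 1.0 - np.abs(corr)
Z = linkage(dist[np.triu_indices(len(pre), k=1)], method="average")
labels = fcluster(Z, t=0.3, criterion="distance")
selected = [pre[np.where(labels == c)[0][np.argmax(scores[pre][labels == c])]]
            for c in np.unique(labels)]
print("selected features:", sorted(int(f) for f in selected))
```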

11.
Band selection is widely used to identify relevant bands for land-cover classification of hyperspectral images. Combining spectral and spatial information can improve the classification performance of hyperspectral images dramatically; similarly, fusing spectral-spatial information should also improve the performance of band selection. In this article, two semi-supervised wrapper-based spectral-spatial band selection algorithms are proposed. The local spatial smoothness of hyperspectral imagery is used to improve band selection when only limited labelled samples are available. Using superpixel segmentation, the first algorithm exploits the statistical characteristics of the classification map to predict the classification quality of all samples. Based on a Markov random field model, the second algorithm incorporates spatial information by minimizing a spectral-spatial energy function. Four widely used real hyperspectral data sets are used to demonstrate the effectiveness of the proposed methods; compared with a cross-validation-based wrapper method, accuracy is improved by 2% on the different data sets.

12.
This paper deals with the problem of supervised wrapper-based feature subset selection in datasets with a very large number of attributes. Recently the literature has contained numerous references to hybrid selection algorithms: based on a filter ranking, they perform an incremental wrapper selection over that ranking. Though they work well, these methods still have two problems: (1) depending on the complexity of the wrapper search method, the number of wrapper evaluations can still be too large; and (2) they rely on a univariate ranking that does not take into account interactions between the variables already included in the selected subset and the remaining ones. Here we propose a new approach whose main goal is to drastically reduce the number of wrapper evaluations while maintaining good performance (e.g. accuracy and size of the obtained subset). To do this we propose an algorithm that iteratively alternates between filter ranking construction and wrapper feature subset selection (FSS). Thus, the FSS only uses the first block of ranked attributes, and the ranking method uses the currently selected subset in order to build a new ranking in which this knowledge is considered. The algorithm terminates when no new attribute is selected in the last call to the FSS algorithm. The main advantage of this approach is that only a few blocks of variables are analyzed, so the number of wrapper evaluations decreases drastically. The proposed method is tested on eleven high-dimensional datasets (2400-46,000 variables) using different classifiers. The results show an impressive reduction in the number of wrapper evaluations without degrading the quality of the obtained subset.
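A simplified sketch of the alternation between filter ranking and block-wise incremental wrapper selection follows; the paper's conditional re-ranking is approximated here by an mRMR-like relevance-minus-redundancy score, and the block size, classifier, and dataset are illustrative assumptions.

```python
# Simplified sketch of alternating filter ranking and incremental wrapper FSS.
# The conditional re-ranking is approximated by relevance (mutual information)
# minus redundancy (mean correlation with the already-selected features).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
relevance = mutual_info_classif(X, y, random_state=0)
block_size, selected = 5, []

def cv_acc(feats):
    return cross_val_score(GaussianNB(), X[:, feats], y, cv=5).mean()

best_acc, added = 0.0, True
while added:
    added = False
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    if selected:  # redundancy: mean absolute correlation with selected features
        red = np.array([np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                 for s in selected]) for j in remaining])
    else:
        red = np.zeros(len(remaining))
    ranking = [remaining[i] for i in np.argsort(relevance[remaining] - red)[::-1]]
    for j in ranking[:block_size]:        # wrapper looks at the first block only
        acc = cv_acc(selected + [j])
        if acc > best_acc:
            selected.append(j); best_acc = acc; added = True
print("selected:", selected, "CV accuracy:", round(best_acc, 3))
```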

13.
Besides optimizing classifier predictive performance and addressing the curse of dimensionality, feature selection techniques help keep a classification model as simple as possible. In this paper, we present a wrapper feature selection approach based on the Bat Algorithm (BA) and the Optimum-Path Forest (OPF) classifier, in which feature selection is modelled as a binary optimization problem guided by BA, with the OPF accuracy over a validation set as the fitness function to be maximized. Moreover, we present a methodology to better estimate the quality of the reduced feature set. Experiments conducted on six public datasets demonstrate that the proposed approach provides statistically significantly more compact feature sets and, in some cases, can indeed improve classification effectiveness.

14.
Exploratory data analysis methods are essential for gaining insight into data. Identifying the most important variables and detecting quasi-homogeneous groups of data are problems of interest in this context. Solving such problems is a difficult task, mainly due to the unsupervised nature of the underlying learning process. Unsupervised feature selection and unsupervised clustering can be successfully approached as optimization problems by means of global optimization heuristics if an appropriate objective function is considered. This paper introduces an objective function capable of efficiently guiding the search for significant features and, simultaneously, for the respective optimal partitions. Experiments conducted on complex synthetic data suggest that the proposed function is unbiased with respect to both the number of clusters and the number of features.

15.
Feature selection is a process aimed at filtering out unrepresentative features from a given dataset, usually allowing the later data mining and analysis steps to produce better results. However, different feature selection algorithms use different criteria to select representative features, making it difficult to find the best algorithm for different domain datasets. The limitations of single feature selection methods can be overcome by ensemble methods that combine multiple feature selection results. In the literature, feature selection algorithms are classified as filter, wrapper, or embedded techniques; however, to the best of our knowledge, there has been no study focusing on combining these three types of techniques to produce ensemble feature selection. Therefore, the aim here is to determine which combination of different types of feature selection algorithms offers the best performance for different types of medical data, including categorical, numerical, and mixed data types. The experimental results show that combining a filter (principal component analysis) and a wrapper (genetic algorithm) by the union method is the better choice, providing relatively high classification accuracy and a reasonably good feature reduction rate.
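As an illustration of combining selector types by union, the sketch below uses scikit-learn's SelectKBest and SequentialFeatureSelector as stand-ins for the paper's PCA-based filter and GA wrapper; the value of k, the classifier, and the dataset are assumptions.

```python
# Hedged sketch of an ensemble of feature selectors combined by union.
# SelectKBest and SequentialFeatureSelector stand in for the paper's
# PCA-based filter and GA wrapper; parameters are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

filter_sel = SelectKBest(f_classif, k=8).fit(X, y)
filter_feats = set(np.where(filter_sel.get_support())[0])

wrapper_sel = SequentialFeatureSelector(KNeighborsClassifier(),
                                        n_features_to_select=8, cv=3).fit(X, y)
wrapper_feats = set(np.where(wrapper_sel.get_support())[0])

union = sorted(filter_feats | wrapper_feats)          # combine by set union
acc = cross_val_score(KNeighborsClassifier(), X[:, union], y, cv=5).mean()
print("union of selected features:", union)
print("CV accuracy on union:", round(acc, 3))
print("feature reduction: %d -> %d" % (X.shape[1], len(union)))
```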

16.
Occupancy information is essential to facilitate demand-driven operation of air-conditioning and mechanical ventilation (ACMV) systems. Environmental sensors are increasingly being explored as a cost-effective and non-intrusive means of obtaining occupancy information, which requires the extraction and selection of useful features from the sensor data. In past works, feature selection has generally been implemented using filter-based approaches. In this work, we introduce wrapper and hybrid feature selection for better occupancy estimation. To achieve a fast computation time, we introduce a ranking-based incremental search in our algorithms, which is more efficient than the exhaustive search used in past works. For wrapper feature selection, we propose WRANK-ELM, which searches an ordered list of features using the extreme learning machine (ELM) classifier. For hybrid feature selection, we propose RIG-ELM, a filter-wrapper hybrid that uses the relative information gain (RIG) criterion for feature ranking and the ELM for the incremental search. We present experimental results from an office space with a multi-sensory network to validate the proposed algorithms.
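A hedged sketch of a ranking-based incremental search with an ELM classifier follows. A tiny ELM (random hidden layer plus ridge readout) and mutual information stand in for the paper's EM-ELM and relative information gain criterion, and a synthetic dataset replaces the occupancy data; all of these are assumptions.

```python
# Hedged sketch: ranking-based incremental feature search with a minimal ELM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

class TinyELM:
    """Minimal ELM: random hidden layer, ridge-regularized least-squares readout."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        T = np.eye(y.max() + 1)[y]                     # one-hot targets
        self.beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(self.n_hidden), H.T @ T)
        return self
    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rank_score = mutual_info_classif(X_tr, y_tr, random_state=0)  # stand-in for RIG
ranking = np.argsort(rank_score)[::-1]

selected, best_acc = [], 0.0
for j in ranking:                                       # incremental search
    feats = selected + [int(j)]
    acc = np.mean(TinyELM().fit(X_tr[:, feats], y_tr)
                  .predict(X_val[:, feats]) == y_val)
    if acc > best_acc:
        selected, best_acc = feats, acc
print("selected:", selected, "validation accuracy:", round(best_acc, 3))
```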

17.
Stochastic policy gradient methods have been applied to a variety of robot control tasks, such as a robot's acquisition of motor skills, because they have an advantage when learning in high-dimensional and continuous feature spaces by combining heuristics such as motor primitives. However, when one of them is applied to a real-world task, it is difficult to represent the task well when designing the policy function and the feature space, owing to the lack of sufficient prior knowledge about the task. In this research, we propose a method to autonomously extract a preferred feature space for achieving a task, using a stochastic policy gradient method for a sample-based policy. We apply our method to the control of a linear dynamical system, and computer simulation results show that a desirable controller is obtained and that the controller's performance is improved by the feature selection.

18.
In this paper, we propose a robust region-based active contour model driven by a fuzzy c-means energy that draws upon clustered intensity information for fast image segmentation. The main idea of the fuzzy c-means energy is to quickly compute the two types of cluster-centre functions for all points in the image domain with the fuzzy c-means algorithm, applied locally after a suitable preprocessing step, before the curve starts to evolve. The time-consuming local fitting functions of traditional models are replaced by these two functions. Furthermore, a sign function and a Gaussian filtering function are used to replace the penalty term and the length term found in most models, respectively. Experiments on several synthetic and real images show that the proposed model can segment images with intensity inhomogeneity efficiently and precisely. Moreover, the proposed model is robust to the initial contour, the parameter settings, and different kinds of noise.

19.
20.
On an extended Fisher criterion for feature selection   Cited in total: 1 (self-citations: 0, citations by others: 1)
This correspondence considers the extraction of features as a task of linearly transforming an initial pattern space into a new space that is optimal with respect to discriminating the data. A solution to the feature extraction problem is given for two multivariate, normally distributed pattern classes, using an extended Fisher criterion as the distance measure. The introduced distance measure consists of two terms: the first estimates the distance between classes from the difference of the class mean vectors, and the second from the difference of the class covariance matrices. The proposed method is compared with some of the more popular alternatives: the Fukunaga-Koontz method and the Foley-Sammon method.
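Purely as an illustration of a two-term separability measure in the spirit of this abstract (a mean-difference term plus a covariance-difference term), the sketch below combines a Fisher-like Mahalanobis term with a Frobenius-norm placeholder for the covariance term; it does not reproduce the paper's extended criterion.

```python
# Illustrative sketch only: a two-term class-separability measure
# (mean-difference term + covariance-difference placeholder term).
import numpy as np

def two_term_distance(X1, X2):
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
    # term 1: Fisher-like distance between the class means
    mean_term = (m1 - m2) @ np.linalg.solve(S1 + S2, m1 - m2)
    # term 2: discrepancy between the class covariance matrices
    # (Frobenius norm used purely as a placeholder for the paper's term)
    cov_term = np.linalg.norm(S1 - S2, ord="fro")
    return mean_term + cov_term

rng = np.random.default_rng(0)
X1 = rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.0]], size=200)
X2 = rng.multivariate_normal([2, 1], [[1.5, -0.3], [-0.3, 0.5]], size=200)
print("two-term separability:", round(two_term_distance(X1, X2), 3))
```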
