Similar Literature
 20 similar documents found.
1.
Feature selection is a dimensionality reduction technique that helps to improve data visualization, simplify learning, and enhance the efficiency of learning algorithms. The existing redundancy-based approach, which relies on relevance and redundancy criteria, does not account for feature complementarity. Complementarity implies information synergy, in which additional class information becomes available due to feature interaction. We propose a novel filter-based approach to feature selection that explicitly characterizes and uses feature complementarity in the search process. Using theories from multi-objective optimization, the proposed heuristic penalizes redundancy and rewards complementarity, thus improving over the redundancy-based approach, which penalizes all feature dependencies. The heuristic employs an adaptive cost function that uses the redundancy–complementarity ratio to automatically update the trade-off among relevance, redundancy, and complementarity. We show on benchmark datasets that this adaptive approach outperforms many existing feature selection methods.
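As an illustration, a minimal sketch of a complementarity-aware greedy filter, assuming discretized features. The interaction-information decomposition and the adaptive weighting rule below are illustrative assumptions, not the paper's exact cost function:

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def interaction_info(f, s, y):
        # I(f;s;C) = I(f,s;C) - I(f;C) - I(s;C): positive values indicate
        # complementarity (synergy), negative values indicate redundancy
        joint = np.char.add(np.char.add(f.astype(str), "_"), s.astype(str))
        return (mutual_info_score(joint, y)
                - mutual_info_score(f, y) - mutual_info_score(s, y))

    def select_complementarity_aware(X, y, k):
        relevance = [mutual_info_score(X[:, j], y) for j in range(X.shape[1])]
        selected = [int(np.argmax(relevance))]
        while len(selected) < k:
            best, best_score = None, -np.inf
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                inter = [interaction_info(X[:, j], X[:, s], y) for s in selected]
                red = sum(-i for i in inter if i < 0)   # accumulated redundancy
                comp = sum(i for i in inter if i > 0)   # accumulated complementarity
                # adaptive trade-off (hypothetical rule): the larger the share
                # of redundancy, the more heavily redundancy is penalized
                w = red / (red + comp) if red + comp > 0 else 0.5
                score = relevance[j] - w * red + (1 - w) * comp
                if score > best_score:
                    best, best_score = j, score
            selected.append(best)
        return selected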

2.
Text classification is characterized by a high-dimensional feature space and a high degree of feature redundancy. To address these two characteristics, the χ² statistic is used to handle the high-dimensional feature space, and the notion of information novelty is used to handle the high feature redundancy; following the definition of maximal marginal relevance, the two are combined into a feature selection method based on maximal marginal relevance (MMR). The method removes a large number of redundant features during the selection process. Experiments on the Reuters-21578 Top10 and OHSCAL text datasets show that the MMR-based feature selection method is more effective than the χ² statistic and information gain, and improves the performance of three different classifiers: naïve Bayes, Rocchio, and kNN.
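A sketch of this greedy MMR-style scoring under stated assumptions: χ² is used for relevance as in the abstract, but absolute Pearson correlation stands in for the paper's information-novelty redundancy term, and lam is a hypothetical trade-off parameter (chi2 requires non-negative inputs, e.g., term counts):

    import numpy as np
    from sklearn.feature_selection import chi2

    def mmr_select(X, y, k, lam=0.7):
        # relevance: chi^2 statistic of each feature vs. the class, normalized
        rel, _ = chi2(X, y)
        rel = rel / rel.max()
        # redundancy proxy: |Pearson r| between features
        sim = np.abs(np.corrcoef(X.T))
        selected = [int(np.argmax(rel))]
        while len(selected) < k:
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            # maximal marginal relevance: relevance minus the strongest
            # similarity to any already-selected feature
            mmr = [lam * rel[j] - (1 - lam) * sim[j, selected].max()
                   for j in candidates]
            selected.append(candidates[int(np.argmax(mmr))])
        return selected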

3.
Feature selection is one of the key problems in pattern recognition and data mining; it removes redundant and irrelevant features from a dataset to improve learning performance. Based on the maximal-relevance minimal-redundancy criterion, a new semi-supervised feature selection method based on relevance and redundancy analysis (S2R2) is proposed; S2R2 is independent of any classification algorithm. The method first analyzes and extends an unsupervised relevance measure, then combines it with information gain to design a semi-supervised measure of feature relevance and redundancy that can effectively identify and remove irrelevant and redundant features. Finally, an incremental search technique greedily builds the feature subset, avoiding a search over an exponentially large solution space and improving the algorithm's efficiency. A fast filter version of S2R2, FS2R2, is also proposed to better handle large-scale feature selection problems. Experimental results on several standard datasets demonstrate the effectiveness and superiority of the proposed methods.

4.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used for classification. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature–feature mutual information and feature–class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected feature subset is evaluated using multiple classifiers on multiple datasets. In terms of both classification accuracy and execution time, the method performs significantly better than several competing feature selection techniques on twelve real-life datasets of varied dimensionality and number of instances.
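A minimal sketch of this kind of greedy trade-off (feature–class MI as relevance minus the mean feature–feature MI with the already-selected set), assuming discretized features; the unit-weight combination is an assumption, not necessarily the paper's exact objective:

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def greedy_mi_select(X, y, k):
        n = X.shape[1]
        # feature-class mutual information (relevance)
        rel = np.array([mutual_info_score(X[:, j], y) for j in range(n)])
        selected = [int(np.argmax(rel))]
        while len(selected) < k:
            best, best_score = None, -np.inf
            for j in range(n):
                if j in selected:
                    continue
                # feature-feature mutual information (redundancy), averaged
                red = np.mean([mutual_info_score(X[:, j], X[:, s])
                               for s in selected])
                if rel[j] - red > best_score:
                    best, best_score = j, rel[j] - red
            selected.append(best)
        return selected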

5.
Multi-label feature selection is one of the main approaches to the curse of dimensionality: it reduces feature dimensionality while improving learning efficiency and classification performance. To address the facts that current feature selection algorithms ignore the relationships among labels and that the range over which information content is measured is biased, an improved multi-label feature selection algorithm based on label relationships is proposed. Symmetric uncertainty is first introduced to normalize the information content; the normalized mutual information is then used as the relevance measure, from which label importance weights are defined and applied to the label-related terms in the dependency and redundancy measures. A feature scoring function is then proposed as the criterion of feature importance, and the highest-scoring features are selected in turn to form the optimal feature subset. Experimental results show that, compared with other algorithms, after extracting a more accurate low-dimensional feature subset the proposed algorithm not only effectively improves the performance of multi-label learning algorithms for entity-oriented information mining, but also improves the efficiency of multi-label learning algorithms based on discrete features.
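The normalization step named in the abstract is standard; a small sketch of symmetric uncertainty, which rescales mutual information into [0, 1] so that scores for different variable pairs become comparable:

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def entropy(x):
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    def symmetric_uncertainty(a, b):
        # SU(a, b) = 2 * I(a; b) / (H(a) + H(b)), bounded in [0, 1]
        h = entropy(a) + entropy(b)
        return 2.0 * mutual_info_score(a, b) / h if h > 0 else 0.0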

6.
The curse of high dimensionality in text classification is a worrisome problem that requires efficient and optimal feature selection (FS) methods to improve classification accuracy and reduce learning time. Existing filter-based FS methods evaluate features independently of other related ones, which can lead to selecting a large number of redundant features, especially in high-dimensional datasets, resulting in longer learning times and lower classification performance. Information theory-based methods, on the other hand, aim to maximize each feature's dependency with the class variable while minimizing its redundancy with all selected features, which gradually becomes impractical as the feature space grows. To overcome the time complexity of information theory-based methods while still addressing redundancy, we propose in this article a new feature selection method for text classification, termed correlation-based redundancy removal, which minimizes redundancy using subsets of features with close mutual information scores, without sequentially scanning the already-selected features. The idea is that there is little value in assessing the redundancy between a dominant feature with high classification information and an irrelevant feature with low classification information, or vice versa, since the two are implicitly only weakly correlated. Tested on seven datasets using both traditional classifiers (naive Bayes and support vector machines) and deep learning models (long short-term memory and convolutional neural networks), our method demonstrated strong performance by reducing redundancy and improving classification compared to ten competitive metrics.
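A sketch of the stated idea, with redundancy checked only among features whose mutual information scores are close; the eps window and the |Pearson r| redundancy test are illustrative assumptions:

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def cbrr_select(X, y, eps=0.05, max_corr=0.9):
        n = X.shape[1]
        mi = np.array([mutual_info_score(X[:, j], y) for j in range(n)])
        kept = []
        # visit features in decreasing order of classification information
        for j in np.argsort(mi)[::-1]:
            # test redundancy only against kept features with close MI scores;
            # features with very different MI are treated as weakly correlated
            close = [s for s in kept if abs(mi[s] - mi[j]) <= eps]
            if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) < max_corr
                   for s in close):
                kept.append(int(j))
        return kept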

7.
This work studies attribute reduction methods for big data with high-dimensional attributes. Attribute selection and subspace learning are two common approaches to attribute reduction: attribute selection is highly interpretable, while subspace learning achieves better classification performance, yet the two methods are usually applied independently of each other. A new attribute selection method is therefore proposed that combines the two approaches: it uses two subspace learning techniques, linear discriminant analysis (LDA) and locality preserving projection (LPP), to capture both the global and the local structure of the data, while a sparse regularization term performs the attribute selection. Experimental comparisons on evaluation metrics such as classification accuracy, variance, and coefficient of variation show that, compared with the competing algorithms, the proposed algorithm selects discriminative attributes more effectively and achieves good classification performance.

8.
High dimensional remote sensing data sets typically contain redundancy amongst the features. Traditional approaches to feature selection are prone to instability and selection of sub-optimal features in these circumstances. They can also be computationally expensive, especially when dealing with very large remote sensing data sets. This article presents an efficient, deterministic feature ranking method that is robust to redundancy. Affinity propagation is used to group correlated features into clusters. A relevance criterion is evaluated for each feature. Clusters are then ranked based on the median of the relevance values of their constituent features. The most relevant individual features can then be selected automatically from the best clusters. Other criteria, such as computation time or measurement cost, can optionally be considered interactively when making this selection. The proposed feature selection method is compared to competing filter approach methods on a number of remote sensing data sets containing feature redundancy. Mutual information and naive Bayes relevance criteria were evaluated in conjunction with the feature selection methods. Using the proposed method it was shown that the stability of selected features improved under different data samplings, while similar or better classification accuracies were achieved compared to competing methods.

9.
We propose a novel feature selection filter for supervised learning, which relies on the efficient estimation of the mutual information between a high-dimensional set of features and the classes. We bypass the estimation of the probability density function with the aid of the entropic-graphs approximation of Rényi entropy, and the subsequent approximation of the Shannon entropy. Thus, the complexity does not depend on the number of dimensions but on the number of patterns/samples, and the curse of dimensionality is circumvented. We show that it is then possible to outperform algorithms which individually rank features, as well as a greedy algorithm based on the maximal relevance and minimal redundancy criterion. We successfully test our method both in the contexts of image classification and microarray data classification. For most of the tested data sets, we obtain better classification results than those reported in the literature.
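A sketch of the entropic-graph idea: the Rényi α-entropy can be estimated, up to an additive constant, from the γ-weighted length of the Euclidean minimal spanning tree over the samples, with γ = d(1−α), and with no density estimation; the constant term is omitted here:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def renyi_entropy_mst(X, alpha=0.5):
        # Entropic-graph estimate of Renyi alpha-entropy, up to an additive
        # constant: H_alpha ~ log(L_gamma / n**alpha) / (1 - alpha), where
        # L_gamma is the gamma-weighted total edge length of the MST
        n, d = X.shape
        gamma = d * (1.0 - alpha)
        dist = squareform(pdist(X))          # pairwise Euclidean distances
        mst = minimum_spanning_tree(dist)    # sparse matrix of MST edges
        L = float((mst.data ** gamma).sum())
        return np.log(L / n ** alpha) / (1.0 - alpha)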

10.
Feature selection plays an important role in data mining and pattern recognition, especially for large-scale data. Over the years, various metrics have been proposed to measure the relevance between different features. Since mutual information is nonlinear and can effectively represent the dependencies of features, it is one of the most widely used measures in feature selection, and many promising feature selection algorithms based on mutual information with different parameters have been developed. In this paper, we first introduce a general criterion function for mutual information in feature selectors, which brings together most of the information measures used in previous algorithms. In traditional selectors, mutual information is estimated on the whole sampling space; this, however, cannot exactly represent the relevance among features. To cope with this problem, the second contribution of this paper is a new feature selection algorithm based on dynamic mutual information, which is estimated only on unlabeled instances. To verify the effectiveness of our method, experiments are carried out on sixteen UCI datasets using four typical classifiers. The experimental results indicate that our algorithm achieves better results than other methods in most cases.
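A sketch of the dynamic re-estimation, assuming discretized features. Treating an instance as resolved once its value pattern under the selected features is class-pure is one plausible reading of the abstract, not the paper's exact rule:

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def dynamic_mi_select(X, y, k):
        n, d = X.shape
        active = np.ones(n, dtype=bool)   # instances still contributing to MI
        selected = []
        for _ in range(k):
            if not active.any():
                break
            mi = np.full(d, -np.inf)
            for j in range(d):
                if j not in selected:
                    # MI re-estimated dynamically on remaining instances only
                    mi[j] = mutual_info_score(X[active, j], y[active])
            best = int(np.argmax(mi))
            selected.append(best)
            # retire instances whose value group under the new feature is
            # already class-pure (assumed notion of being recognized)
            for v in np.unique(X[active, best]):
                rows = active & (X[:, best] == v)
                if len(np.unique(y[rows])) == 1:
                    active[rows] = False
        return selected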

11.
Feature selection is one of the most important machine learning procedures, and it has been successfully applied as a preprocessing step before classification and clustering methods. High-dimensional features often appear in big data, and their characteristics hinder data processing, so spectral feature selection algorithms have received increasing attention from researchers. However, most feature selection methods treat the task as two separate steps: learn a similarity matrix from the original feature space (which may include redundancy among all features), and then conduct data clustering. Due to these limitations, they do not achieve good performance on classification and clustering tasks in big data processing applications. To address this problem, we propose an unsupervised feature selection method with a graph learning framework, which reduces the influence of redundant features and simultaneously places a low-rank constraint on the weight matrix. More importantly, we design a new objective function to handle this problem. We evaluate our approach on six benchmark datasets, and all empirical classification results show that our new approach outperforms state-of-the-art feature selection approaches.

12.
Feature selection plays an important role in classification algorithms. It is particularly useful in dimensionality reduction for selecting features with high discriminative power. This paper introduces a new feature-selection method called Feature Interaction Maximisation (FIM), which employs three-way interaction information as a measure of feature redundancy. It uses a forward greedy search to select features which have maximum interaction information with the features already selected, and which provide maximum relevance. The experiments conducted to verify the performance of the proposed method use three datasets from the UCI repository. The method is compared with four other well-known feature-selection methods: Information Gain (IG), Minimum Redundancy Maximum Relevance (mRMR), Double Input Symmetrical Relevance (DISR), and Interaction Gain Based Feature Selection (IGFS). The average classification accuracy of two classifiers, Naïve Bayes and K-nearest neighbour, is used to assess the performance of the new feature-selection method. The results show that FIM outperforms the other methods.

13.
An efficient filter feature selection (FS) method is proposed in this paper, the SVM-FuzCoC approach, achieving a satisfactory trade-off between classification accuracy and dimensionality reduction. Additionally, the method has reasonably low computational requirements, even in high-dimensional feature spaces. To assess the quality of features, we introduce a local fuzzy evaluation measure with respect to patterns that embraces fuzzy membership degrees of every pattern in their classes. Accordingly, the above measure reveals the adequacy of data coverage provided by each feature. The required membership grades are determined via a novel fuzzy output kernel-based support vector machine, applied on single features. Based on a fuzzy complementary criterion (FuzCoC), the FS procedure iteratively selects features with maximum additional contribution in regard to the information content provided by previously selected features. This search strategy leads to small subsets of powerful and complementary features, alleviating the feature redundancy problem. We also devise different SVM-FuzCoC variants by employing seven other methods to derive fuzzy degrees from SVM outputs, based on probabilistic or fuzzy criteria. Our method is compared with a set of existing FS methods, in terms of performance capability, dimensionality reduction, and computational speed, via a comprehensive experimental setup, including synthetic and real-world datasets.

14.
In traditional classification algorithms based on feature selection, the single criterion used to evaluate redundancy and relevance limits the range of application of such algorithms. To address this problem, this paper proposes a new maximal-relevance minimal-redundancy feature selection algorithm that introduces two different criteria for measuring the redundancy between features and four different criteria for measuring the relevance between features and classes, yielding eight different feature selection algorithms, so that...

15.
Machine learning tasks in open, dynamic environments face feature spaces that are both high-dimensional and dynamically changing. Existing online streaming feature selection algorithms generally consider only feature importance and redundancy, ignoring feature interaction. Interacting features are those that appear irrelevant or only weakly relevant to the label when measured individually, but become strongly relevant to the label when combined with other features. Based on this observation, an online streaming feature selection algorithm based on neighborhood information interaction is proposed. The algorithm has two stages, online interactive feature selection and online redundant feature elimination: it directly computes the interaction strength between a newly arrived feature and the entire selected feature subset, and uses a pairwise comparison mechanism to eliminate redundant features. Experimental results on ten datasets demonstrate the effectiveness of the proposed algorithm.

16.
Unsupervised feature selection based on mutual information
In data analysis, feature selection can be used to reduce feature redundancy, improve the interpretability of analysis results, and discover structure hidden in high-dimensional data. An unsupervised feature selection method based on mutual information (UFS-MI) is proposed. In UFS-MI, feature importance is evaluated with UmRMR (unsupervised minimal-redundancy maximal-relevance), a selection criterion that takes both relevance and redundancy into account. Relevance and redundancy are measured by mutual information, which captures the dependency between a feature and the latent class variable and the dependency between features, respectively. UFS-MI is applicable to both numerical and non-numerical features. The effectiveness of UFS-MI is proved theoretically, and experimental results show that it can achieve performance comparable to, or even better than, traditional feature selection methods.

17.
Feature selection based on information-theoretic criteria in classificatory analysis
Feature selection aims to reduce the dimensionality of patterns for classificatory analysis by selecting the most informative, rather than irrelevant and/or redundant, features. In this study, two novel information-theoretic measures for feature ranking are presented: the first is an improved formula to estimate the conditional mutual information between the candidate feature f_i and the target class C given the subset of selected features S, i.e., I(C; f_i | S), under the assumption that information of features is distributed uniformly; the second is a mutual information (MI) based constructive criterion that is able to capture both irrelevant and redundant input features under arbitrary distributions of feature information. With these two measures, two new feature selection algorithms are proposed, called the quadratic MI-based feature selection (QMIFS) approach and the MI-based constructive criterion (MICC) approach, in which no parameter such as β in Battiti's MIFS and Kwak and Choi's MIFS-U methods needs to be preset. Thus, the intractable problem of choosing an appropriate value for β to trade off relevance to the target classes against redundancy with the already-selected features is avoided completely. Experimental results demonstrate the good performance of QMIFS and MICC on both synthetic and benchmark data sets.

18.
Dimensionality reduction (DR) has been a central research topic in information theory, pattern recognition, and machine learning. The performance of many learning models relies significantly on dimensionality reduction: successful DR can largely improve various approaches to clustering and classification, while inappropriate DR may deteriorate the systems. When applied to high-dimensional data, some existing approaches first reduce the dimensionality and then input the reduced features to other available models, e.g., a Gaussian mixture model (GMM). Such independent learning can, however, significantly limit performance, since the optimal subspace given by a particular DR approach may not be appropriate for the subsequent model. In this paper, we investigate how unsupervised dimensionality reduction can be performed jointly with a GMM and whether such joint learning improves on the traditional unsupervised method. In particular, we employ a mixture of factor analyzers under the assumption that a common factor loading exists for all components. Based on that, we present an EM algorithm that converges to a locally optimal solution; this setting optimizes the dimensionality reduction together with the parameters of the GMM. We describe the framework, detail the algorithm, and conduct a series of experiments to validate the effectiveness of our proposed approach. Specifically, we compare the proposed joint learning approach with two competitive algorithms on one synthetic and six real data sets. Experimental results show that the joint learning significantly outperforms the comparison methods in terms of three criteria.

19.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
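A minimal sketch of a residual-based greedy routine of this kind: each step scores every column by how much of the remaining reconstruction error it would absorb, then deflates the residual matrix; this is a recursive update in the spirit of, though not necessarily identical to, the paper's formula:

    import numpy as np

    def greedy_unsup_select(A, k):
        # greedily minimize ||A - P_S A||_F^2, where P_S projects onto the
        # span of the selected columns; E holds the current residuals
        E = np.asarray(A, dtype=float).copy()
        selected = []
        for _ in range(k):
            norms = (E ** 2).sum(axis=0)                 # ||e_f||^2 per column
            gains = (E.T @ E) ** 2                       # <e_f, e_j>^2
            # decrease in error if column f were added: ||E^T e_f||^2 / ||e_f||^2
            scores = gains.sum(axis=1) / np.where(norms > 1e-12, norms, np.inf)
            scores[selected] = -np.inf                   # never re-pick a column
            f = int(np.argmax(scores))
            selected.append(f)
            e = E[:, [f]]
            E -= e @ (e.T @ E) / norms[f]                # deflate the residuals
        return selected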

20.
Attribute selection is one of the important problems encountered in pattern recognition, machine learning, data mining, and bioinformatics. It refers to the problem of selecting those input attributes or features that are most effective to predict the sample categories. In this regard, rough set theory has been shown to be successful for selecting relevant and nonredundant attributes from a given data set. However, classical rough sets are unable to handle real-valued noisy features. This problem can be addressed by fuzzy-rough sets, a generalization of classical rough sets. A feature selection method is presented here based on fuzzy-rough sets that maximizes both the relevance and the significance of the selected features. This paper also presents different feature evaluation criteria, such as dependency, relevance, redundancy, and significance, for the attribute selection task using fuzzy-rough sets. The performance of different rough set models is compared with that of some existing feature evaluation indices based on the predictive accuracy of the nearest neighbor rule, support vector machine, and decision tree. The effectiveness of the fuzzy-rough set based attribute selection method, along with a comparison with existing feature evaluation indices and different rough set models, is demonstrated on a set of benchmark and microarray gene expression data sets.
