Similar Documents
20 similar documents found
1.
In this paper, a novel feature selection method based on rough sets and mutual information is proposed. The dependency of each feature guides the selection, and mutual information is used to prune features that do not add significantly to the dependency, so the dependency of the subset found by our method reaches its maximum with a small number of features. Because the method evaluates both definite and uncertain relevance through a combined criterion of dependency and a class-based distance metric, the selected subset is more relevant than those produced by other rough-set-based methods and is close to the optimal solution. To verify this, eight different classification tasks are used. The method is also applied to a real Alzheimer's disease dataset, where it finds a feature subset that reaches a classification accuracy of 81.3%. These results confirm the contribution of the method.
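The rough-set dependency measure that guides selection in approaches like this one can be sketched for discrete data as follows. This is a minimal illustration under our own naming, not the authors' implementation: the dependency γ of a feature subset is the fraction of objects whose values on that subset determine the class label unambiguously.

```python
from collections import defaultdict

def dependency(rows, labels, subset):
    """Rough-set dependency γ(subset → class): the fraction of objects whose
    values on `subset` determine the class label unambiguously."""
    blocks = defaultdict(list)           # equivalence classes under `subset`
    for row, y in zip(rows, labels):
        blocks[tuple(row[i] for i in subset)].append(y)
    # positive region: blocks whose objects all share a single class label
    pos = sum(len(ys) for ys in blocks.values() if len(set(ys)) == 1)
    return pos / len(labels)
```

A greedy search would add, at each step, the feature whose inclusion raises γ most, stopping when γ stops improving; that is the sense in which "the dependency of each feature guides the selection."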

2.
This paper compares document frequency, which ignores class information, with three class-aware feature selection methods: information gain, mutual information, and the χ² statistic. Building on this comparison, it analyzes the drawbacks of directly combining the two kinds of methods and proposes a joint feature selection algorithm based on relevance and redundancy. The algorithm pairs document frequency with information gain, mutual information, and the χ² statistic in turn, aiming to remove redundant features while retaining those useful for classification, thereby improving text sentiment classification. Experimental results show that the joint method performs well and effectively reduces the feature dimensionality.
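For the binary term-presence features that these filter methods rank, the χ² statistic can be computed from a 2×2 contingency table. A minimal sketch (parameter names are ours):

```python
def chi_square(n11, n10, n01, n00):
    """χ² association between a term and a class.
    n11: docs in the class containing the term; n10: docs outside the class
    containing the term; n01/n00: the same counts for docs without the term."""
    n = n11 + n10 + n01 + n00
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    if den == 0:
        return 0.0
    return n * (n11 * n00 - n10 * n01) ** 2 / den
```

Note that the document frequency of the same term is simply n11 + n10, which is why DF alone carries no class information — the motivation for pairing it with a class-aware criterion as above.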

3.
Li Zhao, Lu Wei, Sun Zhanquan, Xing Weiwei. Neural Computing & Applications, 2016, 28(1): 513-524

Text classification is a popular research topic in data mining, and many classification methods have been proposed. Feature selection is an important technique for text classification, since it is effective at reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving the comprehensibility of results. In recent years, data have grown larger in both the number of instances and the number of features in many applications; as a result, classical feature selection methods do not work well on large-scale datasets because of their computational cost. To address this issue, this paper proposes a parallel feature selection method based on MapReduce. Specifically, mutual information based on Renyi entropy is used to measure the relationship between feature variables and class variables, and maximum mutual information theory is then employed to choose the most informative combination of feature variables. The selection process is implemented on MapReduce, which is efficient and scalable for large-scale problems. Finally, a practical example demonstrates the efficiency of the proposed method.
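The map/reduce split can be illustrated with ordinary Python functions standing in for the cluster framework: mappers emit local joint counts for one (feature, class) column pair, a reducer merges the partial tables, and MI is computed from the merged counts. This sketch uses Shannon entropy and our own naming; the paper's Renyi-entropy variant and the actual MapReduce plumbing are not reproduced here.

```python
import math
from collections import Counter
from functools import reduce

def map_counts(chunk):
    """Mapper: local joint counts over one shard of (feature, class) pairs."""
    return Counter(chunk)

def reduce_counts(a, b):
    """Reducer: merge two partial count tables."""
    a.update(b)
    return a

def mutual_information(pairs, n_shards=4):
    """MI of a feature/class pair computed from sharded counts (in nats)."""
    shards = [pairs[i::n_shards] for i in range(n_shards)]
    joint = reduce(reduce_counts, (map_counts(s) for s in shards), Counter())
    n = sum(joint.values())
    px, py = Counter(), Counter()
    for (x, y), c in joint.items():
        px[x] += c
        py[y] += c
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())
```

Because the counting step is a sum over disjoint shards, it parallelizes trivially, which is what makes the approach scalable to large instance counts.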


4.
Rough set theory is one of the effective approaches to feature selection and preserves the meaning of the features. The essence of the rough-set approach to feature selection is to find a subset of the original features. Since finding a minimal feature subset is an NP-hard problem, effective and efficient heuristic algorithms are needed. Ant colony optimization (ACO) has been applied successfully to many hard combinatorial problems such as quadratic assignment, the traveling salesman problem, and scheduling. It is particularly attractive for feature selection because no heuristic information is available that can guide the search to the optimal minimal subset every time; instead, the ants can discover good feature combinations as they traverse the graph. In this paper, we propose a new rough-set approach to feature selection based on ACO that adopts mutual-information-based feature significance as its heuristic information, and we give a novel feature selection algorithm. Jensen and Shen proposed an ACO-based feature selection approach that starts from a random feature; ours starts from the feature core, which shrinks the complete graph to a smaller one. To verify the efficiency of the algorithm, experiments are carried out on standard UCI datasets. The results demonstrate that it can find a minimal feature subset efficiently.

5.
In this paper, we present a novel scheme for linear feature extraction in classification. The method is based on maximizing the mutual information (MI) between the extracted features and the classes. The sum of the MI terms for the individual features is taken as a heuristic approximation to the MI of the whole output vector. A component-by-component gradient-ascent method is then proposed for maximizing the MI, similar to the gradient-based entropy optimization used in independent component analysis (ICA). The simulation results show that the method is not only competitive with existing supervised feature extraction methods in all cases studied, but also outperforms them markedly when the data have strongly nonlinear boundaries between classes.

6.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used. In this paper, we introduce a greedy feature selection method using mutual information. It combines feature–feature mutual information and feature–class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected subset is evaluated using multiple classifiers on multiple datasets. In terms of both classification accuracy and execution time, our method performs significantly better than several competing feature selection techniques on twelve real-life datasets of varied dimensionality and numbers of instances.
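A greedy criterion of this shape — feature–class relevance minus average feature–feature redundancy with the already-selected features (mRMR-style) — can be sketched for discrete data as follows. The function names and the exact relevance/redundancy trade-off are our simplification, not necessarily the authors' formula.

```python
import math
from collections import Counter

def mi(xs, ys):
    """Empirical mutual information between two discrete sequences (in nats)."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log(n * c / (px[x] * py[y]))
               for (x, y), c in joint.items())

def greedy_select(features, labels, k):
    """Pick k feature indices maximizing class relevance minus mean redundancy."""
    selected = []
    remaining = list(range(len(features)))
    while len(selected) < k and remaining:
        def score(j):
            rel = mi(features[j], labels)
            red = (sum(mi(features[j], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Each round costs one MI evaluation per remaining feature plus one per already-selected feature, which is what makes the greedy scheme fast relative to exhaustive subset search.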

7.
Feature selection for functional data picks out, from a large amount of functional information, the small set of features that are weakly correlated with each other and strongly representative, simplifying the downstream classifier and improving generalization. Because existing feature selection methods perform poorly when used to classify functional data, this paper proposes a fast feature selection (FFS) method for functional data that combines principal component analysis with the minimum convex hull method and quickly obtains a stable feature subset. Moreover, since the selected features may still be correlated, the FFS result is used as the initial feature subset for other methods, fusing FFS with a conditional mutual information method. Experiments on the UCR datasets demonstrate the effectiveness of FFS, and comparative experiments yield strategies for choosing among the methods under different time budgets and classification accuracy requirements.

8.
Feature transformation is a very important step in pattern recognition systems. A feature transformation matrix can be obtained using different criteria, such as discrimination between classes, feature independence, or the mutual information between features and classes, and the resulting matrix can also be used for feature reduction. In this paper, we propose a new method for finding a feature transformation based on mutual information (MI). We assume that the probability density function (PDF) of the features within each class is Gaussian and then use gradient ascent to maximize the mutual information between features and classes. Experimental results show that the proposed MI projection consistently outperforms other methods across a variety of cases: on the UCI Glass database it improves classification accuracy by up to 7.95%, and on TIMIT it improves the phoneme recognition rate by 3.55%.

9.
Sentiment-orientation classification is an important problem in Chinese sentiment analysis, and feature selection is the prerequisite and foundation of machine-learning-based orientation classification: it reduces the dimensionality of the feature set by removing irrelevant or redundant features. This paper proposes a hybrid sentiment feature selection method that combines the Lasso algorithm with filter-style feature selection. First, Lasso penalized regression screens the original feature set, yielding a sentiment-classification feature subset with low redundancy. Then filter methods such as CHI, MI, and IG weight the dependence between candidate feature words and text classes, and the weakly relevant feature words are discarded. Finally, the proposed method is compared with DF, MI, IG, and CHI at different feature-set sizes using an SVM classifier with a Gaussian kernel. Experiments on a microblog short-text corpus show that the algorithm is both effective and efficient; when the dimensionality of the feature subset is smaller than the number of samples, the hybrid method improves on the feature selection of DF, MI, IG, and CHI to some degree; and a comparison of precision and recall shows Lasso-MI to be more effective than MI and the other filter methods.

10.
黄源, 李茂, 吕建成. 《计算机科学》, 2015, 42(5): 54-56, 77
The chi-square test is currently a common feature selection method in text classification. It considers only the relation between a term and a class, not the associations among terms, so the selected feature set is highly redundant. This paper defines the concept of a term's "residual mutual information" and proposes a method for optimizing the output of chi-square selection. The method yields a feature set that is both strongly discriminative and highly independent. Experiments show that it performs well.

11.
This paper proposes a novel criterion for estimating the redundancy information of selected feature sets in multi-dimensional pattern classification. An appropriate feature selection process typically maximizes the relevance of the features to each class and minimizes the redundancy among the selected features. Unlike the relevance term, which can be measured by mutual information, the redundancy term is difficult to estimate because its dynamic range varies with the characteristics of the features and classes. Using a conceptual diagram of the relationship between candidate features, selected features, and class variables, this paper proposes a new criterion that computes the amount of redundancy accurately. Specifically, the redundancy term is estimated by the conditional mutual information between the selected and candidate features given each class variable, which avoids the cumbersome normalization process the conventional algorithm requires. The proposed criterion is implemented in a speech/music discrimination system to evaluate classification performance. Experiments varying the number of selected features verify that the proposed method achieves higher classification accuracy than conventional algorithms.
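Conditional mutual information of the kind used for the redundancy term here can be computed for discrete variables directly from joint entropies. A minimal sketch with our own naming, using the identity I(X; Y | Z) = H(X,Z) + H(Y,Z) − H(Z) − H(X,Y,Z); how the terms map onto selected features, candidate features, and the class variable is one reading of the abstract, not the paper's exact formulation.

```python
import math
from collections import Counter

def entropy(*seqs):
    """Joint Shannon entropy of one or more aligned discrete sequences (nats)."""
    n = len(seqs[0])
    return -sum((c / n) * math.log(c / n)
                for c in Counter(zip(*seqs)).values())

def conditional_mi(x, y, z):
    """I(X; Y | Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)."""
    return entropy(x, z) + entropy(y, z) - entropy(z) - entropy(x, y, z)
```

Because the quantity is already bounded by the conditional entropies involved, it can be compared across candidate features without the separate normalization step the abstract says conventional algorithms need.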

12.
Feature selection is one of the fundamental problems in pattern recognition and data mining. A popular and effective approach to feature selection is based on information theory, namely the mutual information between the features and the class variable. In this paper we compare eight different mutual-information-based feature selection methods. Based on an analysis of the comparison results, we propose a new mutual-information-based feature selection method. By taking into account both the class-dependent and class-independent correlation among features, the proposed method selects a less redundant and more informative set of features. Its advantage over the other methods is demonstrated by experiments on UCI datasets (Asuncion and Newman, 2010 [1]) and on object recognition.

13.
Taking the case-report short texts from a smart-city management application system as its subject, this paper studies effective feature generation and feature selection methods for fast and accurate automatic case classification. Given the characteristics of these short case descriptions, it proposes a mutual-neighbor feature combination algorithm that generates combined features with stronger descriptive power. To further reduce the features and optimize the feature space, it proposes a new membership function that builds a class feature domain for each category in the classification scheme; the class feature domains are then used to refine the selection of original and combined features, producing the feature set that contributes most to classification. The methods are validated on case classification from the "城管通" (Chengguantong) app of Qingxiu District, Nanning. Experiments show higher classification accuracy than document frequency, mutual information, and information gain, and that introducing combined features raises accuracy significantly.

14.
Feature selection is one of the major problems in an intrusion detection system (IDS), since the data contain redundant and irrelevant features that cause incorrect classification and a low detection rate. In this article, four feature selection algorithms are proposed to address this problem: multivariate linear correlation coefficient (MLCFS), feature grouping based on multivariate mutual information (FGMMI), feature grouping based on the linear correlation coefficient (FGLCC), and feature grouping based on pairwise MI. These algorithms can be implemented in any IDS. Both linear and nonlinear measures are used: the correlation coefficient and the multivariate correlation coefficient are linear, whereas MI and multivariate MI are nonlinear. A least-squares support vector machine (LS-SVM) intrusion classifier is used to evaluate the selected features. Experimental results on the KDDcup99 and Network Security Laboratory-Knowledge Discovery and Data Mining (NSL) datasets show that the proposed feature selection methods achieve higher detection rate and accuracy and a lower false-positive rate than the pairwise linear correlation coefficient and pairwise MI employed in several previous algorithms.

15.
Jin-Jie, Yun-Ze, Xiao-Ming. Neurocomputing, 2008, 71(7-9): 1656-1668
A parameterless feature ranking approach for feature selection in pattern classification is presented. Compared with Battiti's mutual information feature selection (MIFS) and Kwak and Choi's MIFS-U, the proposed method derives an estimate of the conditional MI between a candidate feature fi and the output class C given the subset of already-selected features S, i.e. I(C; fi | S), without any preset parameter such as the β in MIFS and MIFS-U. This completely avoids the intractable problem of choosing an appropriate value of β to trade off relevance to the output classes against redundancy with the already-selected features. Furthermore, a modified greedy feature selection algorithm, the second-order MI feature selection approach (SOMIFS), is proposed. Experimental results on both synthetic and benchmark datasets demonstrate the superiority of SOMIFS.

16.
刘海燕, 王超, 牛军钰. 《计算机工程》, 2012, 38(14): 135-137
Traditional feature selection algorithms attend only to feature-class relevance or only to feature redundancy. To address this, a feature selection algorithm based on conditional mutual information is proposed. The algorithm clusters the features in the spirit of k-means and selects from each cluster the feature most relevant to the class, thereby removing irrelevant and redundant features. Experiments on five datasets show that its classification performance is better than that of traditional feature selection algorithms.

17.
Mutual information (MI) feature selection suffers from conflating positive and negative correlation, and it ignores a term's frequency within individual classes. This paper proposes a hybrid mutual information (HMI) feature selection algorithm. It introduces an inverse document frequency coefficient and an inter-class term-frequency coefficient, so that term-frequency information both across the whole document set and between classes is used effectively, and it introduces positive- and negative-correlation coefficients that distinguish the two kinds of correlation and exploit both. Comparative experiments show that the hybrid algorithm effectively improves the quality of feature selection and, in turn, the results of text sentiment analysis.

18.
Feature selection chooses, from the full feature set, a subset whose features are strongly relevant to the class and minimally redundant with one another; this improves both the computational efficiency and the generalization ability of the classifier, and hence classification accuracy. In practice, mutual-information-based criteria for feature relevance and redundancy face three problems: (1) the probabilities of the variables are hard to estimate, which makes the feature entropies hard to compute; (2) mutual information is biased toward features with many values; and (3) the cumulative-sum measure of redundancy between a candidate feature and the selected subset tends to fail when the feature dimensionality is high. To address these problems, this paper proposes a feature evaluation criterion based on maximizing normalized fuzzy mutual information: entropy, conditional entropy, and joint entropy are computed from fuzzy equivalence relations; maximal joint mutual information replaces the cumulative-sum measure; and feature importance is evaluated by normalized joint mutual information. A feature selection algorithm using forward greedy search is built on this criterion. Multiple experiments on standard UCI machine learning datasets show that the algorithm selects feature subsets that are effective for classification and clearly improves classification accuracy.

19.
An Improved Feature Selection Method for Text Classification   Cited by: 1 (self-citations: 0, others: 1)
The high dimensionality of the feature space is one of the main obstacles in text classification, and feature selection is an effective way to reduce it. Existing feature selection functions include document frequency (DF), information gain (IG), and mutual information (MI). Starting from the basic constraints a feature should satisfy and the design steps of high-performance feature selection methods, this paper proposes an improved feature selection method, SIG. While preserving classification performance, the method increases the preference for mid- and low-frequency features. Experiments on the Reuters-21578 corpus show that SIG achieves good classification results while making much better use of the strongly discriminative mid- and low-frequency features.

20.
Measures of relevance between features play an important role in classification and regression analysis. Mutual information has proved an effective measure for decision tree construction and feature selection. However, computing the relevance between numerical features with mutual information is limited by the difficulty of estimating probability density functions in high-dimensional spaces. In this work, we generalize Shannon's information entropy to neighborhood information entropy and propose a measure of neighborhood mutual information. The new measure is shown to be a natural extension of classical mutual information: it reduces to the classical measure when the features are discrete, and so can also compute the relevance between discrete variables. In addition, it introduces a parameter δ to control the granularity of the analysis. Numerical experiments show that neighborhood mutual information produces nearly the same outputs as mutual information but, unlike mutual information, requires no discretization when computing relevance. We combine the proposed measure with four classes of evaluation strategies used for feature selection and test the resulting algorithms on several benchmark datasets. The results show that algorithms based on neighborhood mutual information outperform some classical ones.
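The construction described — replacing probability mass with δ-neighborhood frequencies — can be sketched as follows. This is our reading of the definition (Chebyshev distance, entropy as the average negative log neighborhood fraction); the paper's exact distance and normalization may differ.

```python
import math

def nbhd_entropy(X, delta):
    """Neighborhood entropy: average -log of the fraction of samples within
    Chebyshev distance delta of each sample. X is a list of feature vectors."""
    n = len(X)
    total = 0.0
    for xi in X:
        k = sum(1 for xj in X
                if max(abs(a - b) for a, b in zip(xi, xj)) <= delta)
        total -= math.log(k / n)
    return total / n

def nbhd_mi(X, Y, delta):
    """Neighborhood MI: H_delta(X) + H_delta(Y) - H_delta(X, Y)."""
    XY = [x + y for x, y in zip(X, Y)]
    return (nbhd_entropy(X, delta) + nbhd_entropy(Y, delta)
            - nbhd_entropy(XY, delta))
```

With a small δ and discrete-valued data, each neighborhood collapses to the set of exactly equal samples and the measure reduces to classical MI, which is the reduction the abstract mentions.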
