Similar Literature
20 similar documents found
1.
Computed tomographic (CT) colonography is a promising alternative to traditional invasive colonoscopic methods used in the detection and removal of cancerous growths, or polyps, in the colon. Existing computer-aided diagnosis (CAD) algorithms used in CT colonography typically employ a classifier to discriminate between true and false positives generated by a polyp candidate detection system, based on a set of features extracted from the candidates. However, these classifiers often suffer from the so-called curse of dimensionality: the performance of a classifier degrades markedly as the number of features it uses increases. A larger feature set also increases computational complexity and storage demands. This paper investigates the benefits of feature selection on a polyp candidate database, with the aim of increasing specificity while preserving sensitivity. Two new mutual information methods for feature selection are proposed in order to select a subset of features for optimum performance. Initial results show that the performance of the widely used support vector machine (SVM) classifier is indeed better with a small set of features, with area under the receiver operating characteristic curve (AUC) measures reaching 0.78-0.88.
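As a rough illustration of the pipeline this abstract describes (rank features by mutual information with the class label, then score an SVM on the reduced subset by AUC), here is a minimal Python sketch. The synthetic data and the top-k cutoff are placeholders, not the paper's polyp candidate database or its two proposed MI criteria.

```python
# Minimal sketch: rank features by mutual information with the class,
# then evaluate an SVM on the top-k subset via ROC AUC. Illustrates the
# general MI-selection pipeline only, not the paper's specific methods.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                  # placeholder candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # placeholder labels

# Score each feature against the class label and keep the top k.
mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:5]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr[:, top_k], y_tr)
scores = svm.decision_function(X_te[:, top_k])
print("AUC with top-5 MI features:", roc_auc_score(y_te, scores))
```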

2.
In this paper, we introduce a novel feature selection method based on a hybrid (filter-wrapper) model. The filter step uses the mutual information criterion to build the candidate feature set without requiring any user-defined parameter. Subsequently, to reduce computational cost and avoid the local maxima encountered by wrapper search, the wrapper searches only the space of a super-reduct selected from the candidate feature set. Finally, the wrapper selects the feature set best suited to the learning algorithm. The efficiency and effectiveness of our technique are demonstrated through extensive comparison with other representative methods. Our approach shows excellent performance, both in classification accuracy and in the number of features selected.

3.
When analyzing high-dimensional data such as images, gene expression profiles, and text, redundant features greatly increase the difficulty of the analysis, so removing them before analysis is essential. Feature selection methods based on mutual information (MI) can effectively reduce data dimensionality and improve the accuracy of the results; however, existing methods rely on a single criterion for judging whether a feature is redundant, so they cannot reliably exclude redundant features, which ultimately degrades the results. To address this, a feature selection method based on maximum conditional joint mutual information (MCJMI) is proposed. MCJMI evaluates candidate features using two factors, joint mutual information and conditional mutual information, and fusing the two strengthens the selection constraint. In average prediction accuracy, MCJMI improves on information gain (IG) and minimum-redundancy maximum-relevance (mRMR) feature selection by 6 percentage points, on joint mutual information (JMI) and joint mutual information maximisation (JMIM) by 2 percentage points, and on the LW forward search method (SFS-LW) by 1 percentage point. In stability, MCJMI reaches 0.92, outperforming JMI, JMIM, and SFS-LW. The experimental results show that MCJMI effectively improves both the accuracy and the stability of feature selection.

4.
Feature selection models for neighborhood information systems suffer from the need to set the neighborhood parameter manually. This work instead computes, for each sample, the distance to its nearest same-class sample and its nearest different-class sample, and uses these distances to define the sample's nearest neighborhood and hence the size of the information granule. The nearest-neighbor concept is then carried over to information theory, yielding a nearest-neighbor mutual information measure. On this basis, a feature selection algorithm using nearest-neighbor mutual information with a forward greedy search strategy is constructed. Experiments on eight UCI data sets with two different base classifiers show that, compared with several current popular algorithms, the model achieves higher classification performance with fewer features.
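The nearest same-class and nearest different-class distances the abstract describes can be computed directly; a minimal sketch follows, assuming Euclidean distance and at least two samples per class. The per-sample radius at the end is one plausible parameter-free choice, not necessarily the paper's granulation formula.

```python
# Hypothetical sketch: for each sample, find the distance to its nearest
# same-class neighbor (nearest hit) and nearest different-class neighbor
# (nearest miss). The paper sizes each sample's neighborhood granule from
# these distances; the exact formula below is an assumption.
import numpy as np

def nearest_hit_miss(X, y):
    n = len(X)
    # Pairwise Euclidean distances, with self-distance masked out.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    hit, miss = np.empty(n), np.empty(n)
    for i in range(n):
        same = y == y[i]
        same[i] = False                # exclude the sample itself
        hit[i] = d[i, same].min()      # nearest same-class sample
        miss[i] = d[i, ~same].min()    # nearest different-class sample
    return hit, miss

X = np.random.default_rng(1).normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
hit, miss = nearest_hit_miss(X, y)
# One plausible parameter-free neighborhood radius per sample:
radius = (hit + miss) / 2
```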

5.
Vanessa, Michel, Jérôme. Neurocomputing, 2009, 72(16-18): 3580
The classification of functional or high-dimensional data requires selecting a reduced subset of the initial features, both to fight the curse of dimensionality and to help interpret the problem and the model. The mutual information criterion may be used in that context, but it is difficult to estimate from a finite set of samples. Efficient estimators are not designed specifically for a classification context, and thus suffer from further drawbacks and difficulties. This paper presents an estimator of mutual information that is specifically designed for classification tasks, including multi-class ones. It is combined with a recently published stopping criterion in a traditional forward feature selection procedure. Experiments on both traditional benchmarks and an industrial functional classification problem show the added value of this estimator.

6.
Feature selection plays an important role in classification algorithms. It is particularly useful in dimensionality reduction for selecting features with high discriminative power. This paper introduces a new feature-selection method called Feature Interaction Maximisation (FIM), which employs three-way interaction information as a measure of feature redundancy. It uses a forward greedy search to select features that have maximum interaction information with the features already selected and that provide maximum relevance. The experiments conducted to verify the performance of the proposed method use three datasets from the UCI repository. The method is compared with four other well-known feature-selection methods: Information Gain (IG), Minimum Redundancy Maximum Relevance (mRMR), Double Input Symmetrical Relevance (DISR), and Interaction Gain Based Feature Selection (IGFS). The average classification accuracy of two classifiers, Naïve Bayes and K-nearest neighbour, is used to assess the performance of the new feature-selection method. The results show that FIM outperforms the other methods.
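For readers unfamiliar with three-way interaction information, the sketch below computes it for discrete variables under McGill's convention, I(X;Y;C) = I(X,Y;C) - I(X;C) - I(Y;C); sign conventions differ across the literature, and FIM's exact scoring rule is not reproduced here.

```python
# Hypothetical sketch: three-way interaction information for discrete
# variables. Positive values under this convention indicate synergy
# (two features jointly informative about the class beyond their
# individual contributions), as in the XOR demo below.
import numpy as np
from collections import Counter

def entropy(*cols):
    # Joint Shannon entropy of one or more discrete columns, in bits.
    joint = list(zip(*cols))
    n = len(joint)
    return -sum((c / n) * np.log2(c / n) for c in Counter(joint).values())

def mutual_info(x, c):
    return entropy(x) + entropy(c) - entropy(x, c)

def interaction_info(x, y, c):
    i_xy_c = entropy(x, y) + entropy(c) - entropy(x, y, c)  # I(X,Y;C)
    return i_xy_c - mutual_info(x, c) - mutual_info(y, c)

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 1000)
y = rng.integers(0, 2, 1000)
c = x ^ y  # XOR: each feature alone is useless, together they determine c
print(interaction_info(x, y, c))  # close to +1 bit -> strong synergy
```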

7.
Multi-label learning deals with data associated with a set of labels simultaneously. As in traditional single-label learning, the high dimensionality of the data is a stumbling block for multi-label learning. In this paper, we first introduce the margin of an instance to granulate all instances under different labels, and define three different concepts of neighborhood based on different cognitive viewpoints. Based on these, we generalize neighborhood information entropy to multi-label learning and propose three new measures of neighborhood mutual information. It is shown that these new measures are a natural extension from single-label to multi-label learning. Then, we present an optimization objective function to evaluate the quality of candidate features, which can be solved by approximating the multi-label neighborhood mutual information. Finally, extensive experiments conducted on publicly available data sets verify the effectiveness of the proposed algorithm in comparison with state-of-the-art methods.

8.
To address the difficulty of computing conditional mutual information for continuous-valued features and the bias toward many-valued features, a feature selection method based on Parzen-window estimation of conditional mutual information is proposed. The method estimates the probability density function of a continuous feature with a Parzen window, which allows the conditional mutual information to be computed conveniently and accurately. At the same time, a feature-dispersion term is introduced into the evaluation criterion as a penalty factor, overcoming the bias of the conditional mutual information computation toward many-valued features and enabling feature selection directly on continuous data. Experiments show that the method matches or outperforms existing methods and is an effective feature selection technique.
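A minimal sketch of the Parzen-window idea follows, assuming a Gaussian kernel and the resubstitution entropy estimate; the paper's conditional (three-variable) case and its dispersion penalty follow the same pattern but are not reproduced here.

```python
# Hypothetical sketch: mutual information between a continuous feature x
# and a discrete class c via Parzen (Gaussian kernel) density estimates,
# using I(X;C) = H(X) - sum_c p(c) H(X|C=c) and the resubstitution
# entropy estimate H ~ -(1/n) sum_i log p_hat(x_i).
import numpy as np
from scipy.stats import gaussian_kde

def parzen_entropy(x):
    kde = gaussian_kde(x)               # Parzen window with Gaussian kernel
    return -np.mean(np.log(kde(x)))

def parzen_mi(x, c):
    h_x = parzen_entropy(x)
    h_x_given_c = 0.0
    for label in np.unique(c):
        mask = c == label
        h_x_given_c += mask.mean() * parzen_entropy(x[mask])
    return h_x - h_x_given_c

rng = np.random.default_rng(3)
c = rng.integers(0, 2, 1000)
x = rng.normal(loc=2.0 * c, scale=1.0)  # feature shifted by class
print(parzen_mi(x, c))                  # clearly positive
```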

9.
Determining optimal subspace projections that maintain task-relevant information in the data is an important problem in machine learning and pattern recognition. In this paper, we propose a nonparametric nonlinear subspace projection technique that maximally maintains class separability under the Shannon mutual information (MI) criterion. Employing kernel density estimates for nonparametric estimation of MI makes possible an interesting marriage of kernel-density-based information-theoretic methods and kernel machines, which can determine nonparametric nonlinear solutions for difficult problems in machine learning. Significant computational savings are achieved by translating the definition of the desired projection into the kernel-induced feature space, which leads to an analytical solution.

10.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature-feature and feature-class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected feature subset is evaluated using multiple classifiers on multiple datasets. In terms of both classification accuracy and execution time, the method performs significantly well on twelve real-life datasets of varied dimensionality and number of instances when compared with several competing feature selection techniques.
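A hedged sketch of a greedy relevance-minus-redundancy selector of this general kind follows; the paper's exact combination rule is not given in the abstract, so the score used below is an assumption.

```python
# Hypothetical sketch of a greedy mRMR-style selector: at each step, pick
# the feature with the best trade-off of feature-class MI against its
# average MI with the features already selected.
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(col, bins=8):
    edges = np.histogram_bin_edges(col, bins=bins)
    return np.digitize(col, edges[1:-1])

def greedy_mi_selection(X, y, k):
    Xd = np.column_stack([discretize(X[:, j]) for j in range(X.shape[1])])
    relevance = np.array([mutual_info_score(Xd[:, j], y)
                          for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy   # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))
y = (X[:, 3] - X[:, 7] > 0).astype(int)
print(greedy_mi_selection(X, y, k=5))
```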

11.
We present a new linear discriminant analysis method based on information theory, in which the mutual information between linearly transformed input data and the class labels is maximized. First, we introduce a kernel-based estimate of mutual information with a variable kernel size. We then devise a learning algorithm that maximizes this mutual information with respect to the linear transformation. Two experiments are conducted: the first uses a toy problem to visualize and compare the transformation vectors in the original input space; the second evaluates the classification performance of the method through cross-validation on four datasets from the UCI repository, with various classifiers. Our results show that the method can significantly boost class separability over conventional methods, especially for nonlinear classification.

12.
Feature selection chooses, from the full feature set, a subset that is strongly relevant to the class label while having minimal redundancy among the features themselves; this improves both the computational efficiency and the generalization ability of the classifier, and hence the classification accuracy. Mutual-information-based criteria for feature relevance and redundancy face several practical problems: (1) the probabilities of the variables, and hence the feature entropies, are hard to compute; (2) mutual information is biased toward features with many values; (3) redundancy measures between a candidate feature and the selected subset based on cumulative summation tend to fail when the feature dimensionality is high. To solve these problems, an evaluation criterion based on maximal normalized fuzzy mutual information is proposed: the entropy, conditional entropy, and joint entropy of variables are computed from fuzzy equivalence relations; the cumulative-summation measure is replaced by maximal joint mutual information; and feature importance is evaluated by normalized joint mutual information. A feature selection algorithm with forward greedy search is built on this criterion. Multiple experiments on standard UCI machine learning datasets show that the algorithm effectively selects feature subsets that are useful for classification and clearly improves classification accuracy.
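The fuzzy-entropy machinery this abstract relies on can be sketched as follows, assuming the common definition H(R) = -(1/n) * sum_i log2(|[x_i]_R| / n) over a fuzzy equivalence relation R, with the joint relation taken as the elementwise minimum; the distance-based relation construction here is an illustration, not necessarily the paper's.

```python
# Hypothetical sketch: fuzzy information entropy and fuzzy mutual
# information from fuzzy equivalence relations. |[x_i]_R| = sum_j r_ij is
# the fuzzy cardinality of sample i's equivalence class. The similarity
# construction below is an assumption for illustration.
import numpy as np

def fuzzy_relation(col):
    # Similarity in [0, 1] that decays with normalized distance.
    d = np.abs(col[:, None] - col[None, :])
    return 1.0 - d / (d.max() + 1e-12)

def fuzzy_entropy(R):
    card = R.sum(axis=1)                    # fuzzy class cardinalities
    return -np.mean(np.log2(card / len(R)))

def fuzzy_joint_entropy(R1, R2):
    return fuzzy_entropy(np.minimum(R1, R2))

def fuzzy_mutual_info(R1, R2):
    return fuzzy_entropy(R1) + fuzzy_entropy(R2) - fuzzy_joint_entropy(R1, R2)

rng = np.random.default_rng(5)
a = rng.normal(size=200)
b = a + 0.1 * rng.normal(size=200)          # strongly related to a
c = rng.normal(size=200)                    # unrelated to a
Ra, Rb, Rc = map(fuzzy_relation, (a, b, c))
print(fuzzy_mutual_info(Ra, Rb), ">", fuzzy_mutual_info(Ra, Rc))
```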

13.
朱接文, 肖军. 《计算机应用》 (Journal of Computer Applications), 2014, 34(9): 2608-2611
To address the problem that large numbers of redundant features in large data sets can degrade classification performance, an automatic feature selection method called FCC-MI, based on mutual information (MI) and fuzzy C-means (FCM) clustering ensembles, is proposed. First, MI-based features and their relevance function are analyzed, and the features are ranked by relevance. The data are then grouped according to the feature with maximal relevance, and FCM clustering is used to determine the optimal number of features automatically. Finally, the features are selected according to their relevance. Experiments on seven datasets from the UCI machine learning repository compare FCC-MI with three methods from the related literature: feature selection combining within-class variance and relevance (WCMFS), feature selection based on approximate Markov blankets and dynamic mutual information (B-AMBDMI), and two-stage feature selection based on mutual information and a genetic algorithm (T-MI-GA). Theoretical analysis and experimental results show that FCC-MI not only improves the efficiency of data classification but also determines the optimal feature subset automatically while preserving classification accuracy, reducing the number of features in the dataset; it is suitable for feature reduction and analysis of massive data with highly correlated features.

14.
A feature selection method based on maximum mutual information and maximum correlation entropy (correntropy)
Feature selection algorithms fall mainly into two categories, filters and wrappers, and algorithm models based on various theories have been proposed; however, problems remain, such as limited processing capability and low classification accuracy of the selected subsets. Based on a fuzzy rough set information entropy model, a maximum mutual information and maximum correntropy criterion is proposed, and a new feature selection method is designed around it that can simultaneously handle mixed information comprising discrete, continuous, and fuzzy data. Experiments on UCI datasets show that, compared with other algorithms, the method achieves higher accuracy with better stability, and is effective.

15.
A good feature selection method should take into account both category information and high-frequency information to select useful features that can effectively display the information of a target. Because basic mutual information (BMI) prefers low-frequency features and ignores high-frequency ones, clustering mutual information is proposed: based on clustering, it makes effective high-frequency features distinctive, better integrating category information and useful high-frequency information. Time is an important factor in topic detection and tracking (TDT). To improve TDT performance, time difference is integrated into clustering mutual information to adjust the mutual information dynamically, yielding a second algorithm, dynamic clustering mutual information (DCMI). To obtain optimal subsets that display topic information, an objective function is proposed, based on the idea that a good feature subset should have the smallest within-class distance and the largest across-class distance. Experiments on the TDT4 corpora are performed with this objective function, and the performances of BMI, DCMI, and the only existing topic feature selection algorithm, Incremental Term Frequency-Inverted Document Frequency (ITF-IDF), are compared in four figures. The computation time of DCMI is clearly lower than that of BMI and ITF-IDF, and the optimal normalized detection cost (Cdet)norm of DCMI is decreased by 0.3044 and 0.0970 compared with those of BMI and ITF-IDF, respectively.

16.
In text classification, mutual information is a widely used feature selection method, but it considers only the document frequency of a feature and not its term frequency, so it often favors low-frequency features. To remedy this, a new document frequency is proposed and incorporated into the mutual information method, producing an optimized mutual information method that accounts for both the document frequency and the term frequency of a feature. Experimental results show that the optimized mutual information method performs well.

17.
This paper presents a fault diagnosis procedure based on discriminant analysis and mutual information. In order to obtain good classification performance, a selection of important features is performed with a newly developed algorithm based on the mutual information between variables. The application of the new fault diagnosis procedure to a benchmark problem, the Tennessee Eastman Process, shows better results than other well-known published methods.

18.
With the advent of technology in various scientific fields, high-dimensional data are becoming abundant. A general approach to tackling the resulting challenges is to reduce data dimensionality through feature selection. Traditional feature selection approaches concentrate on selecting relevant features and discarding irrelevant or redundant ones, but most neglect feature interactions. Moreover, some datasets have imbalanced classes, which may bias results toward the majority class. The main goal of this paper is to propose a novel feature selection method based on interaction information (II) to provide higher-level interaction analysis and improve the search procedure in the feature space. To this end, an evolutionary feature subset selection algorithm based on interaction information is proposed, consisting of three stages. In the first stage, candidate features and candidate feature pairs are identified using traditional feature weighting approaches such as symmetric uncertainty (SU) and bivariate interaction information. In the second stage, candidate feature subsets are formed and evaluated using multivariate interaction information. Finally, the best candidate feature subsets are selected using dominant/dominated relationships. The proposed algorithm is compared with other feature selection algorithms, including mRMR, WJMI, IWFS, IGFS, DCSF, K_OFSD, WFLNS, Information Gain, and ReliefF, in terms of the number of selected features, classification accuracy, F-measure, and algorithm stability, using three different classifiers (KNN, NB, and CART). The results confirm the improved classification accuracy and robustness of the proposed method in comparison with the other approaches.

19.
An improved feature selection algorithm based on conditional mutual information
The feature selection algorithms currently common in text classification consider only the association between features and classes, and pay insufficient attention to the associations among features themselves; as a result, the features' predictive powers weaken one another, and the most effective features cannot be selected. A new feature selection algorithm for text classification (CMIM) is proposed that selects features with strong discriminative power and weak mutual correlation. Experiments verify that CMIM performs better than traditional feature selection algorithms.
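For context, the classic conditional mutual information maximisation rule (Fleuret, 2004) that this line of work builds on can be sketched as below: pick the feature whose worst-case conditional MI with the class, given any already-selected feature, is largest. The paper's improved variant is not specified in the abstract, so only the baseline idea is shown.

```python
# Hypothetical sketch of the classic CMIM selection rule for discrete
# features; the duplicate feature in the demo gets zero conditional MI
# given its original, so the min-based criterion skips it.
import numpy as np
from sklearn.metrics import mutual_info_score

def cond_mi(x, y, z):
    # I(X;Y|Z) = sum over z of p(z) * I(X;Y | Z=z) for discrete variables.
    total = 0.0
    for v in np.unique(z):
        m = z == v
        total += m.mean() * mutual_info_score(x[m], y[m])
    return total

def cmim(Xd, y, k):
    n_feat = Xd.shape[1]
    selected = [int(np.argmax([mutual_info_score(Xd[:, j], y)
                               for j in range(n_feat)]))]
    while len(selected) < k:
        scores = []
        for j in range(n_feat):
            if j in selected:
                scores.append(-np.inf)
            else:
                scores.append(min(cond_mi(Xd[:, j], y, Xd[:, s])
                                  for s in selected))
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(6)
Xd = rng.integers(0, 3, size=(500, 10))
Xd[:, 7] = Xd[:, 2]                          # redundant copy of feature 2
y = (Xd[:, 2] + 2 * Xd[:, 5] > 3).astype(int)
print(cmim(Xd, y, k=3))                      # picks 2 and 5, not the copy
```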

20.
Genetic algorithms (GAs) have been used as conventional methods for classifiers to adaptively evolve solutions for classification problems. Feature selection plays an important role in finding relevant features for classification. In this paper, feature selection is explored with modular GA-based classification. A new feature selection technique, the relative importance factor (RIF), is proposed to find less relevant features in the input domain of each class module. By removing these features, the aim is to reduce both the classification error and the dimensionality of the problem. Benchmark classification data sets are used to evaluate the proposed approach. The experimental results show that RIF can be used to find less relevant features and helps achieve lower classification error with a reduced feature-space dimension.
