Similar documents
20 similar documents found (search time: 625 ms)
1.
In the multi-label learning framework, feature selection is an effective means of overcoming the curse of dimensionality and improving multi-label classifiers. This paper proposes a multi-label feature selection algorithm that fuses feature rankings. The algorithm first granulates the samples adaptively under each label to construct the neighborhood mutual information between features and class labels. It then sorts these neighborhood mutual information values so that a feature ranking is obtained under every class label. Finally, the multiple independent rankings are fused by clustering into a single new feature ranking. Experiments on four multi-label datasets with four evaluation metrics show that the proposed algorithm outperforms several popular multi-label dimensionality reduction methods.
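The per-label ranking and fusion pipeline described above can be sketched as follows. This is a minimal illustration that substitutes plain discrete mutual information for the paper's neighborhood mutual information and simple mean-rank averaging for its clustering-based fusion; `fused_ranking` and all other names are illustrative.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information between two discrete arrays (in nats)."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

def fused_ranking(X, Y):
    """X: (n, d) discrete feature matrix; Y: (n, q) binary label matrix.
    Rank features under each label by MI with that label, then fuse the
    per-label rankings by mean rank (a stand-in for the paper's
    clustering-based fusion step)."""
    d = X.shape[1]
    rank_table = []
    for j in range(Y.shape[1]):
        mi = np.array([mutual_information(X[:, f], Y[:, j]) for f in range(d)])
        order = np.argsort(-mi)          # most informative feature first
        r = np.empty(d)
        r[order] = np.arange(d)          # r[f] = rank of feature f under label j
        rank_table.append(r)
    return np.argsort(np.mean(rank_table, axis=0))  # fused feature ordering
```

A feature that is informative for every label receives a low mean rank and therefore appears early in the fused ordering.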

2.
Feature selection plays an important role in data mining and pattern recognition, especially for large-scale data. Over the past years, various metrics have been proposed to measure the relevance between features. Since mutual information is nonlinear and can effectively represent dependencies among features, it is one of the most widely used measures in feature selection, and many promising feature selection algorithms based on mutual information with different parameters have been developed. This paper first introduces a general criterion function for mutual information in feature selectors, which unifies most of the information measures used in previous algorithms. In traditional selectors, mutual information is estimated over the whole sampling space, which cannot exactly represent the relevance among features. To cope with this problem, the paper's second contribution is a new feature selection algorithm based on dynamic mutual information, which is estimated only on unlabeled instances. To verify the effectiveness of the method, experiments are carried out on sixteen UCI datasets using four typical classifiers. The experimental results indicate that the algorithm achieves better results than other methods in most cases.
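A minimal sketch of the dynamic-mutual-information idea: mutual information is re-estimated in each round only on the instances that the already-selected features cannot yet discriminate. The purity-based shrinking rule and all helper names below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from collections import defaultdict

def mutual_information(x, y):
    """Empirical mutual information between two discrete arrays (in nats)."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

def dynamic_mi_selection(X, y, k):
    """Greedy selection where MI is re-estimated only on the shrinking
    set of instances not yet discriminated by the chosen features."""
    n, d = X.shape
    remaining = list(range(n))           # instances still "unrecognized"
    selected = []
    while len(selected) < k and remaining:
        idx = np.array(remaining)
        cands = [f for f in range(d) if f not in selected]
        if not cands:
            break
        scores = [mutual_information(X[idx, f], y[idx]) for f in cands]
        selected.append(cands[int(np.argmax(scores))])
        # drop instances whose selected-feature value pattern is class-pure
        groups = defaultdict(set)
        for i in remaining:
            groups[tuple(X[i, selected])].add(y[i])
        remaining = [i for i in remaining
                     if len(groups[tuple(X[i, selected])]) > 1]
    return selected
```

Because the estimation set shrinks as features are added, later rounds focus the MI estimate on exactly the instances that still need discriminating.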

3.
Feature selection is a crucial preprocessing step in machine learning, and neighborhood mutual information is an effective measure that can handle continuous or discrete features directly. However, feature selection methods based on neighborhood mutual information generally adopt a heuristic greedy strategy, so the quality of the resulting feature subset is hard to guarantee. Based on the idea of three-way decisions, this paper proposes a three-way neighborhood mutual information feature selection method (NMI-TWD). By expanding three potential candidate feature subsets while keeping them diverse, a higher-quality feature subset is obtained. Ensemble learning over the three diverse subsets then builds a three-way collaborative decision model that further improves classification performance. Experiments on UCI data show that the new method yields better feature selection results and classification performance than other methods, demonstrating its effectiveness.

4.
A common strategy in existing feature selection algorithms is to use a relevance criterion to select the features most strongly correlated with the label set. This strategy is not necessarily optimal, because a feature weakly correlated with the label set as a whole may still be decisive for particular class labels. Based on this assumption, this paper proposes a multi-label feature selection algorithm based on local subspaces. The algorithm first uses the mutual information between features and the label set to obtain a feature sequence ordered from high to low importance; it then divides this feature-ranking space into several local subspaces and sets a sampling ratio in each subspace to select features with low redundancy; finally, it merges the feature subsets from all subspaces into one reasonable feature subset. Experiments on six datasets with four evaluation metrics show that the algorithm outperforms several general-purpose multi-label feature selection algorithms.

5.
This paper proposes a new feature selection algorithm for categorical data. It gives a new definition of an evaluation function that can directly assess feature selection on categorical data, and reconstructs the formulas for the information content of categorical data, conditional mutual information, and inter-feature dependency. On this basis, a mutual-information-based maximal-relevance, minimal-redundancy feature selection (MRLR) algorithm is proposed. MRLR considers not only the relevance between features and class labels but also the redundancy among the features themselves. Extensive simulation experiments show that, for categorical data, MRLR obtains feature subsets that are less redundant and more representative, with good efficiency and stability.
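The maximal-relevance, minimal-redundancy criterion that MRLR builds on can be sketched as a greedy search scoring each candidate by its relevance to the class minus its mean redundancy with the already-selected features. This sketch uses plain discrete mutual information and omits the paper's reconstructed formulas for categorical data; `mrlr_select` and the scoring details are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information between two discrete arrays (in nats)."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

def mrlr_select(X, y, k):
    """Greedy max-relevance / min-redundancy search in the spirit of MRLR:
    score(f) = I(f; class) - mean over selected s of I(f; s)."""
    d = X.shape[1]
    rel = np.array([mutual_information(X[:, f], y) for f in range(d)])
    selected = [int(np.argmax(rel))]          # start from the most relevant
    while len(selected) < min(k, d):
        best, best_score = None, -np.inf
        for f in range(d):
            if f in selected:
                continue
            red = np.mean([mutual_information(X[:, f], X[:, s]) for s in selected])
            if rel[f] - red > best_score:
                best, best_score = f, rel[f] - red
        selected.append(best)
    return selected
```

The redundancy term penalizes candidates that merely duplicate information already captured by the selected subset.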

6.
Input feature selection by mutual information based on Parzen window
Mutual information is a good indicator of relevance between variables and has been used as a measure in several feature selection algorithms. However, calculating mutual information is difficult, and the performance of a feature selection algorithm depends on the accuracy of that calculation. In this paper, we propose a new method of calculating mutual information between input and class variables based on the Parzen window, and we apply it to a feature selection algorithm for classification problems.
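The core computation can be sketched as follows: class-conditional densities are estimated with Gaussian Parzen windows, class posteriors follow from Bayes' rule, and the mutual information is I(X;C) = H(C) - H(C|X). The kernel width `h` and the one-dimensional setting are simplifying assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def parzen_density(sample, query, h=0.5):
    """Parzen-window density estimate with a Gaussian kernel of width h."""
    u = (query[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def parzen_mi(x, c, h=0.5):
    """I(X;C) = H(C) - H(C|X), with p(x|c) from class-conditional
    Parzen windows and P(c|x) from Bayes' rule."""
    classes, counts = np.unique(c, return_counts=True)
    priors = counts / len(c)
    h_c = -np.sum(priors * np.log(priors))          # H(C)
    dens = np.vstack([parzen_density(x[c == cl], x) * p
                      for cl, p in zip(classes, priors)])
    post = dens / dens.sum(axis=0)                  # P(c | x_i)
    h_c_given_x = -np.mean(np.sum(post * np.log(post + 1e-12), axis=0))
    return h_c - h_c_given_x
```

Well-separated classes yield a near-one-hot posterior, so H(C|X) is small and the MI approaches H(C); overlapping classes push the MI toward zero.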

7.
A feature selection method based on maximal mutual information and maximal correlation entropy
Feature selection algorithms fall into two broad categories, filters and wrappers, and models based on various theories have been proposed, but problems remain, such as limited processing capability and low classification accuracy of the selected subsets. Based on an information entropy model over fuzzy rough sets, this paper proposes a maximal-mutual-information, maximal-correlation-entropy criterion and designs a new feature selection method around it, one that can simultaneously handle mixed information such as discrete, continuous, and fuzzy data. Experiments on UCI datasets show that, compared with other algorithms, the method achieves higher accuracy and better stability and is effective.

8.
Classification problems are ubiquitous in modern industrial production. Using feature selection to filter out the useful information before classification can effectively improve both classification efficiency and accuracy. The minimal-redundancy maximal-relevance (mRMR) algorithm, which maximizes the relevance between features and the class while minimizing the redundancy among features, selects feature subsets effectively; however, it suffers from large deviations in feature importance in its middle and later stages, and it cannot output a feature subset directly. To address these problems, this paper proposes a feature selection algorithm combining the neighborhood rough set discernibility matrix with the mRMR principle. Following the maximal-relevance, minimal-redundancy principle, feature importance is defined via neighborhood entropy and neighborhood mutual information, so that mixed data types are handled better. A dynamic discernibility set is defined on the discernibility matrix; its dynamic evolution removes redundant attributes, narrows the search space, and optimizes the feature subset, and the discernibility matrix determines the iteration stopping condition. SVM, J48, KNN, and MLP are used as classifiers to evaluate the algorithm. Experiments on public datasets show that, compared with existing algorithms, the proposed algorithm improves average classification accuracy by about 2% and effectively shortens feature selection time on datasets with many features. The algorithm inherits the advantages of the discernibility matrix and mRMR and handles feature selection effectively.

9.
Feature selection removes irrelevant features from the original dataset and selects a good feature subset, which avoids the curse of dimensionality and improves the performance of learning algorithms. The dynamic change of selected features and class (DCSF) algorithm considers only the dynamically changing information between the already-selected features and the class, ignoring the interaction between candidate features and selected features. To solve this problem, a dynamic-relevance-based feature selection (DRFS) algorithm is proposed. It uses conditional mutual information to measure the conditional relevance between selected features and the class, and interaction information to measure the synergy between candidate and selected features, thereby selecting relevant features and removing redundant ones to obtain a good feature subset. Simulation experiments show that, compared with existing algorithms, the proposed algorithm effectively improves classification accuracy.

10.
Neighborhood rough set based heterogeneous feature subset selection
Feature subset selection is viewed as an important preprocessing step for pattern recognition, machine learning, and data mining. Most research has focused on homogeneous feature selection, namely, purely numerical or purely categorical features. In this paper, we introduce a neighborhood rough set model to deal with heterogeneous feature subset selection. As the classical rough set model can only evaluate categorical features, we generalize it with neighborhood relations to obtain a neighborhood rough set model, which degrades to the classical one if the neighborhood size is set to zero. The neighborhood model reduces numerical and categorical features by assigning different thresholds to different kinds of attributes. In this model, the sizes of the neighborhood lower and upper approximations of decisions reflect the discriminating capability of feature subsets; the size of the lower approximation is computed as the dependency between decision and condition attributes. We use neighborhood dependency to evaluate the significance of a subset of heterogeneous features and construct forward feature subset selection algorithms. The proposed algorithms are compared with some classical techniques; experimental results show that the neighborhood-model-based method is more flexible in dealing with heterogeneous data.
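The neighborhood dependency function and the forward search can be sketched for purely numeric data as follows. The paper additionally assigns different thresholds to categorical attributes; the single fixed `delta` here is a simplification, and the function names are illustrative.

```python
import numpy as np

def neighborhood_dependency(X, y, feats, delta):
    """gamma_B(D): fraction of samples whose delta-neighborhood
    (Euclidean distance over the features in `feats`) is class-pure,
    i.e. the size of the positive region divided by |U|."""
    Z = X[:, feats]
    pos = 0
    for i in range(len(Z)):
        nbr = y[np.linalg.norm(Z - Z[i], axis=1) <= delta]
        if np.all(nbr == y[i]):        # neighborhood lies in one decision class
            pos += 1
    return pos / len(Z)

def forward_selection(X, y, delta=0.2, eps=1e-3):
    """Greedy forward search: repeatedly add the feature that most
    increases the neighborhood dependency; stop when the gain vanishes."""
    d = X.shape[1]
    selected, gamma = [], 0.0
    while len(selected) < d:
        cands = [f for f in range(d) if f not in selected]
        gains = [neighborhood_dependency(X, y, selected + [f], delta)
                 for f in cands]
        best = int(np.argmax(gains))
        if gains[best] - gamma <= eps:
            break
        selected.append(cands[best])
        gamma = gains[best]
    return selected, gamma
```

A dependency of 1.0 means every sample's neighborhood is class-pure, so the selected subset fully discriminates the decision classes at granularity `delta`.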

11.
Targeting the characteristics of the label ranking problem, this paper proposes a feature selection algorithm for label ranking datasets (Label Ranking Based Feature Selection, LRFS). The algorithm first defines a new neighborhood information measure based on neighborhood rough sets that can directly quantify the relevance, redundancy, and interaction among continuous, discrete, and ranking-valued features. On this basis, a label ranking feature selection algorithm using neighborhood relevance weight factors is proposed. Experimental results show that LRFS can effectively remove irrelevant and redundant features from label ranking datasets without reducing ranking accuracy.

12.
Li, Zhao; Lu, Wei; Sun, Zhanquan; Xing, Weiwei. Neural Computing & Applications, 2016, 28(1): 513-524

Text classification is a popular research topic in data mining, and many classification methods have been proposed. Feature selection is an important technique for text classification since it effectively reduces dimensionality, removes irrelevant data, increases learning accuracy, and improves result comprehensibility. In recent years, data in many applications have grown larger in both the number of instances and the number of features. As a result, classical feature selection methods do not work well on large-scale datasets due to their expensive computational cost. To address this issue, this paper proposes a parallel feature selection method based on MapReduce. Specifically, mutual information based on Renyi entropy is used to measure the relationship between feature variables and class variables, and maximum mutual information theory is then employed to choose the most informative combination of feature variables. The selection process is implemented on MapReduce, which is efficient and scalable for large-scale problems. Finally, a practical example demonstrates the efficiency of the proposed method.


13.
With the rapid development of information techniques, the dimensionality of data in many application domains, such as text categorization and bioinformatics, keeps increasing. High-dimensional data may cause many adverse effects, such as overfitting, poor performance, and low efficiency, for traditional learning algorithms in pattern classification. Feature selection aims at reducing the dimensionality of data and providing discriminative features for pattern learning algorithms. Due to its effectiveness, feature selection is gaining increasing attention from a variety of disciplines, and many efforts have been made in this field. In this paper, we propose a new supervised feature selection method that picks important features using information criteria. Unlike other selection methods, its main characteristic is that it not only takes into account both maximal relevance to the class labels and minimal redundancy to the selected features, but also works like feature clustering in an agglomerative way. To measure the relevance and redundancy of features exactly, two different information criteria, mutual information and the coefficient of relevance, are adopted. Performance evaluations on 12 benchmark datasets show that the proposed method achieves better performance than other popular feature selection methods in most cases.
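The agglomerative flavor of such a method can be sketched by clustering features hierarchically on a correlation distance and keeping the most class-relevant feature of each cluster. This sketch uses |Pearson correlation| as the inter-feature similarity and plain discrete mutual information as relevance, which only approximates the paper's two information criteria; all names are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def mutual_information(x, y):
    """Empirical mutual information between two discrete arrays (in nats)."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

def cluster_select(X, y, n_groups):
    """Agglomeratively cluster features on a (1 - |correlation|) distance,
    then keep from each cluster the feature most relevant to the class."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    dist = squareform(np.clip(1.0 - corr, 0.0, None), checks=False)
    labels = fcluster(linkage(dist, method='average'),
                      n_groups, criterion='maxclust')
    chosen = []
    for g in np.unique(labels):
        members = np.where(labels == g)[0]
        mi = [mutual_information(X[:, f], y) for f in members]
        chosen.append(int(members[int(np.argmax(mi))]))
    return sorted(chosen)
```

Redundant features land in the same cluster and are represented by a single member, which enforces minimal redundancy without pairwise greedy checks.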

14.
Feature selection models for neighborhood information systems suffer from the problem that the neighborhood parameter must be set manually. This paper computes each sample's distances to its nearest within-class and nearest between-class samples and uses them to define the sample's nearest neighborhood, which determines the size of the information granules. Extending the nearest-neighbor concept to information theory yields nearest-neighbor mutual information. On this basis, a feature selection algorithm based on nearest-neighbor mutual information is constructed with a forward greedy search strategy. Experiments with two base classifiers on eight UCI datasets show that, compared with several currently popular algorithms, the model achieves higher classification performance with fewer features.

15.
Feature selection chooses from the feature set a subset that is strongly relevant to the class while having minimal redundancy among its features; this improves both a classifier's computational efficiency and its generalization ability, and hence its accuracy. Mutual-information-based criteria for feature relevance and redundancy face the following practical problems: (1) variable probabilities are hard to compute, which in turn makes feature entropies hard to compute; (2) mutual information is biased toward features with many values; (3) cumulative-sum measures of the redundancy between a candidate feature and the selected subset tend to fail when the feature dimensionality is high. To solve these problems, this paper proposes a feature evaluation criterion based on maximal normalized fuzzy mutual information: the entropies, conditional entropies, and joint entropies of variables are computed from fuzzy equivalence relations; maximal joint mutual information replaces the cumulative-sum measure; feature importance is evaluated with normalized joint mutual information; and a forward greedy feature selection algorithm is built on this criterion. Multiple experiments on UCI benchmark datasets demonstrate that the algorithm effectively selects feature subsets that are useful for classification and clearly improves classification accuracy.
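The fuzzy-equivalence-relation entropies can be sketched as below: each normalized feature induces a fuzzy similarity relation, entropies come from fuzzy equivalence-class cardinalities, the joint relation is the pointwise minimum, and the mutual information is normalized by the larger marginal entropy. That normalization and the particular similarity function are plausible assumptions, not necessarily the paper's exact definitions.

```python
import numpy as np

def fuzzy_relation(x):
    """Fuzzy similarity relation for one feature normalized to [0, 1]:
    R_ij = 1 - |x_i - x_j|, clipped at 0."""
    return np.clip(1.0 - np.abs(x[:, None] - x[None, :]), 0.0, None)

def fuzzy_entropy(R):
    """H(R) = -(1/n) * sum_i log(|[x_i]_R| / n), where the fuzzy
    equivalence-class cardinality |[x_i]_R| is the i-th row sum."""
    n = len(R)
    return -np.mean(np.log(R.sum(axis=1) / n))

def normalized_fuzzy_mi(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) over fuzzy relations, normalized
    by the larger marginal entropy (one plausible normalization)."""
    Rx, Ry = fuzzy_relation(x), fuzzy_relation(y)
    Rxy = np.minimum(Rx, Ry)            # joint relation = pointwise min
    hx, hy, hxy = fuzzy_entropy(Rx), fuzzy_entropy(Ry), fuzzy_entropy(Rxy)
    return (hx + hy - hxy) / max(hx, hy, 1e-12)
```

Because the relation works on raw numeric values, no discretization of continuous features is needed, which sidesteps problem (1) above.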

16.
Feature selection plays an important role in a classifier's accuracy and generalization performance. Current multi-label feature selection algorithms mainly apply the maximal-relevance, minimal-redundancy criterion over the entire feature set without considering expert features, so they have long running times and high complexity. In practice, experts can often determine the overall prediction direction directly from a few key features. Extracting and attending to this information would reduce the computation time of feature selection and might even improve classifier performance. This paper therefore proposes a conditional-mutual-information multi-label feature selection algorithm based on expert features. It first joins the expert features with the remaining features, then uses conditional mutual information to obtain a feature sequence ordered by decreasing relevance to the label set, and finally removes highly redundant features by partitioning subspaces. Comparative experiments on seven multi-label datasets show that the algorithm has advantages over other feature selection algorithms; statistical hypothesis tests and a stability analysis further verify its effectiveness and rationality.

17.
Given that potentially correlated features exist in the feature space, this paper uses spectral clustering to explore inter-feature correlation and neighborhood mutual information to seek a maximally relevant feature subset, proposing a feature selection algorithm that combines the two. It first uses neighborhood mutual information to remove features irrelevant to the labels; it then clusters the features with spectral clustering so that features within a cluster are strongly correlated while features across clusters are strongly dissimilar; next, from each feature cluster it uses neighborhood mutual information to select a subset strongly relevant to the class labels and little redundant with the other features in the group; finally, all selected subsets are combined into the final feature selection result. Experiments with two base classifiers show that the algorithm achieves high classification performance with a small, reasonable number of features.

18.
黄源, 李茂, 吕建成. 《计算机科学》, 2015, 42(5): 54-56, 77
The chi-square test is a commonly used feature selection method in text classification. It considers only the relationship between a term and a class, ignoring correlations among terms, so the selected feature set carries substantial redundancy. This paper defines the concept of a term's "residual mutual information" and proposes a method for refining the chi-square selection results. The method yields a feature set that is both highly representative and highly independent. Experiments show that the method performs well.
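A sketch of the two-stage idea: rank terms by the chi-square statistic, then drop candidates that share too much information with terms already kept. The fixed MI threshold here is a simple stand-in for the paper's more refined "residual mutual information" criterion; `chi2_then_dedup` and the threshold value are illustrative.

```python
import numpy as np

def chi2_stat(x, y):
    """Pearson chi-square statistic of a 2x2 term/class contingency table
    (x and y are binary arrays)."""
    obs = np.array([[np.sum((x == a) & (y == b)) for b in (0, 1)]
                    for a in (0, 1)], dtype=float)
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return np.sum((obs - exp) ** 2 / np.where(exp == 0, 1, exp))

def mutual_information(x, y):
    """Empirical mutual information between two discrete arrays (in nats)."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

def chi2_then_dedup(X, y, k, mi_thresh=0.3):
    """Rank terms by chi-square, then keep a candidate only if its MI
    with every already-kept term stays below mi_thresh."""
    order = np.argsort([-chi2_stat(X[:, f], y) for f in range(X.shape[1])])
    kept = []
    for f in order:
        if all(mutual_information(X[:, f], X[:, g]) < mi_thresh for g in kept):
            kept.append(int(f))
        if len(kept) == k:
            break
    return kept
```

The redundancy check is what distinguishes this from plain chi-square selection: a duplicate of an already-kept term scores highly on chi-square but is rejected.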

19.
A new algorithm for ranking input features and obtaining the best feature subset is developed and illustrated in this paper. The asymptotic formula for mutual information and the expectation-maximisation (EM) algorithm are used to develop the feature selection algorithm. We not only consider the dependence between the features and the class, but also measure the dependence among the features themselves. Even for noisy data, the algorithm still works well. An empirical study is carried out to compare the proposed algorithm with existing algorithms, and the algorithm is illustrated by application to a variety of problems.

20.
姚晟, 徐风, 吴照玉, 陈菊, 汪杰, 王维. 《控制与决策》, 2019, 34(2): 353-361
Attribute reduction is an important application of rough set theory and is now widely used in machine learning, data mining, and related fields; the neighborhood rough set is an important rough-set method for handling continuous data. To remedy the shortcomings of attribute reduction in current neighborhood rough set models, this paper constructs a neighborhood rough entropy model based on neighborhood rough sets, and from it defines neighborhood rough joint entropy, neighborhood rough conditional entropy, and neighborhood rough mutual information entropy. Neighborhood rough mutual information entropy is an important measure for evaluating the relevance of attribute sets and varies non-monotonically; accordingly, a non-monotonic attribute reduction algorithm based on neighborhood rough mutual information entropy is proposed. Experimental analysis shows that the algorithm not only produces better reduction results than existing monotonic attribute reduction algorithms but also achieves higher reduction efficiency.
