Similar Documents
20 similar records found
1.
A Comparison of Feature Selection Methods for Chinese Text Classification   Cited by 1 (self-citations: 0, other citations: 1)
In an automatic text classification system, feature selection is an effective means of dimensionality reduction. This paper tests the feature selection methods for Chinese text classification one by one in experiments, aiming to determine the best-performing method. The experiments show that, among all methods tested, the χ² (CHI) statistic delivered the best classification performance, followed by information gain (IG); cross entropy (CE) and weight of evidence for text (WE) also performed well, while mutual information (MI) performed poorly.

2.
Feature selection is a key step in the Chinese text classification process, and the quality of the selected feature terms directly affects classification accuracy. Traditional feature selection algorithms do not take into account the category-discriminating power of feature terms and therefore discard some valuable features. This paper improves on the traditional algorithms by introducing a measure of each feature term's category-discriminating power. Experimental results show that the improved method classifies better than the traditional ones, verifying its effectiveness and feasibility.

3.
《软件》2019,(9):71-74
In text classification, Chinese text must first be preprocessed so that documents are represented as information a computer can understand and process. This paper uses TF-IDF as the text representation and, targeting the multi-class classification of Chinese articles, improves the traditional support vector machine, proposing a multi-class SVM classification method based on feature selection. Comparative experiments on a Chinese article dataset show that the method outperforms other pattern-recognition methods in multi-class performance.
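Several entries in this list (this one and entry 10, for example) rely on TF-IDF to turn documents into weight vectors. As a reference point, here is a minimal stdlib sketch of the standard TF-IDF weighting; it illustrates the general technique only, not the representation details of any cited paper, and the toy documents are invented.

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenized documents.

    tf = term count / document length; idf = log(N / document frequency).
    """
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]
w = tf_idf(docs)
```

Terms that appear in every document get idf = log(1) = 0, which is how TF-IDF suppresses uninformative words.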

4.
An Evaluation Framework for Chinese Automatic Text Classification Algorithms   Cited by 1 (self-citations: 0, other citations: 1)
Automatic Chinese text classification helps people make more effective use of the ever-growing volume of Chinese-language information. Existing classification algorithms are based on different principles, vary in performance, and suit different situations. Comparative evaluation can determine the applicable environment and performance characteristics of a given algorithm, yet a systematic evaluation framework for Chinese text classification algorithms is currently lacking. This paper introduces such a framework, implements an open research platform based on it, and reports comparative results for several existing Chinese text classification algorithms.

5.
董梅  胡学钢 《微机发展》2007,17(7):117-119
Automatic text classification lets a computer assign a document to its associated categories in a given classification scheme according to the document's content. Feature selection is key to text classification, and one of its difficulties is the high dimensionality of the feature space; finding an effective feature-selection method that reduces this dimensionality is therefore an important problem. Building on an analysis of existing feature-selection methods for text classification, this paper implements a multi-feature-selection method that combines several different methods and applies it to a KNN text classifier. Experiments show that the combined method clearly outperforms any single feature-selection method.

6.
Chinese Text Classification Based on Multi-Feature Selection   Cited by 1 (self-citations: 0, other citations: 1)
Automatic text classification lets a computer assign a document to its associated categories in a given classification scheme according to the document's content. Feature selection is key to text classification, and one of its difficulties is the high dimensionality of the feature space; finding an effective feature-selection method that reduces this dimensionality is therefore an important problem. Building on an analysis of existing feature-selection methods for text classification, this paper implements a multi-feature-selection method that combines several different methods and applies it to a KNN text classifier. Experiments show that the combined method clearly outperforms any single feature-selection method.

7.
To improve the accuracy of automatic text classification, this paper proposes a text data-mining algorithm that selects features with an improved bee-colony-optimised neural network. The algorithm casts text feature selection as a multi-objective optimisation problem, taking minimum feature dimensionality and maximum classification accuracy as the selection criteria, and uses an ant colony algorithm to find the optimal feature subset; a neural network then builds the automatic text classifier, and simulation experiments test the algorithm's performance. The simulation results show that the proposed method selects optimal features from high-dimensional text, improving the accuracy and efficiency of automatic text classification, and is an effective algorithm for mining web text.

8.
Feature selection is an extremely important research topic in automatic Chinese text classification, aimed at resolving the conflict between the high dimensionality of the feature space and the sparsity of document representation vectors. To address the comparatively poor classification performance of mutual-information (MI) feature selection, this paper proposes an improved method, IMI. The method takes into account the frequency of a feature term in the current text as well as the selection of features whose mutual-information value is negative, and can therefore filter out low-frequency words more effectively. Experiments with an automatic KNN classifier show that the improved method greatly increases classification accuracy.
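The classical MI score that this entry criticises compares how often a term and a category co-occur with what independence would predict. A minimal sketch of that baseline score (the standard formulation, not the paper's improved IMI variant; the counts below are invented):

```python
import math

def mutual_information(a, b, c, n):
    """Classical pointwise MI between a term and a category.

    a: documents in the category that contain the term
    b: documents outside the category that contain the term
    c: documents in the category that do not contain the term
    n: total number of documents
    """
    if a == 0:
        return float("-inf")  # term never co-occurs with the category
    return math.log((a * n) / ((a + b) * (a + c)))

# A term spread evenly across categories scores 0 ...
balanced = mutual_information(5, 5, 5, 20)
# ... while a rare term that happens to land in the category scores high,
# which is exactly the low-frequency bias the entry above points out.
rare = mutual_information(1, 0, 9, 20)
```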

9.
A Study of a Text Feature Selection Method   Cited by 2 (self-citations: 2, other citations: 0)
In text classification, reducing the dimensionality of the high-dimensional feature set is very important: it improves both classification accuracy and efficiency, and it identifies an information-rich feature subset. Feature selection is one effective way to reduce the dimensionality of the feature vector. Commonly used feature-selection algorithms consider only the correlation between feature terms and categories while ignoring correlations among the feature terms themselves, which leads to feature redundancy and harms classification. After analysing the commonly used algorithms, this paper proposes a feature-selection method based on the mRMR (minimum-redundancy maximum-relevance) model. Experiments show that the method helps improve classification performance.
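The mRMR idea referenced here selects features greedily, trading relevance to the class against redundancy with the features already chosen. A toy sketch of that greedy loop, assuming the relevance and pairwise-redundancy scores (typically mutual-information values) have already been computed; all names and numbers below are illustrative, not from the paper:

```python
def mrmr_select(relevance, redundancy, k):
    """Greedily pick k features, maximising relevance minus mean
    redundancy with the features already selected (the mRMR criterion)."""
    selected = []
    candidates = set(relevance)
    while candidates and len(selected) < k:
        def score(f):
            if not selected:
                return relevance[f]
            red = sum(redundancy[tuple(sorted((f, s)))] for s in selected)
            return relevance[f] - red / len(selected)
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Illustrative scores only: "rain" is relevant but redundant with "wet",
# so the plainly less relevant but non-redundant "sun" is picked second.
rel = {"wet": 0.9, "rain": 0.8, "sun": 0.5}
red = {("rain", "wet"): 0.7, ("sun", "wet"): 0.1, ("rain", "sun"): 0.1}
picked = mrmr_select(rel, red, 2)
```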

10.
To address the low efficiency of classifying financial analysis reports by traditional means, this paper proposes SVM-based Chinese text classification for financial analysis reports. The approach uses the Chinese word-segmentation system provided by the Chinese Academy of Sciences, combines two feature-selection algorithms for segmentation and feature selection, and proposes an improvement to the TF-IDF weight computation. SVM is chosen as the classification algorithm, with an open-source SVM implementation used to train and test on the samples. Experimental results show that classifying financial analysis reports by industry with this Chinese text classification technique can meet the needs of financial institutions.

11.
Given a large set of potential features, it is usually necessary to find a small subset with which to classify. The task of finding an optimal feature set is inherently combinatorial and therefore suboptimal algorithms are typically used to find feature sets. If feature selection is based directly on classification error, then a feature-selection algorithm must base its decision on error estimates. This paper addresses the impact of error estimation on feature selection using two performance measures: comparison of the true error of the optimal feature set with the true error of the feature set found by a feature-selection algorithm, and the number of features among the truly optimal feature set that appear in the feature set found by the algorithm. The study considers seven error estimators applied to three standard suboptimal feature-selection algorithms and exhaustive search, and it considers three different feature-label model distributions. It draws two conclusions for the cases considered: (1) depending on the sample size and the classification rule, feature-selection algorithms can produce feature sets whose corresponding classifiers possess errors far in excess of the classifier corresponding to the optimal feature set; and (2) for small samples, differences in performances among the feature-selection algorithms are less significant than performance differences among the error estimators used to implement the algorithms. Moreover, keeping in mind that results depend on the particular classifier-distribution pair, for the error estimators considered in this study, bootstrap and bolstered resubstitution usually outperform cross-validation, and bolstered resubstitution usually performs as well as or better than bootstrap.

12.
A Comparison of Several Typical Feature Selection Methods for Chinese Web-Page Classification   Cited by 31 (self-citations: 2, other citations: 31)
This paper comparatively studies the CHI, IG, DF, and MI feature-selection methods on Chinese web pages. The main experimental findings are: (1) CHI, IG, and DF clearly outperform MI; (2) CHI, IG, and DF perform roughly equally well, and each can filter out more than 85% of the feature terms; (3) DF has the advantages of a simple algorithm and high quality, and can be used in place of CHI and IG; (4) the results of evaluating these feature-selection methods on ordinary English text and on Chinese web pages are consistent.
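The CHI score compared in this entry (and in entries 14 and 18) is the chi-square statistic of the 2×2 term/category contingency table. A minimal sketch of the standard formula, as a generic illustration rather than code from any cited study; the counts in the example are invented:

```python
def chi_square(a, b, c, d):
    """Chi-square statistic of the 2x2 term/category table.

    a: in-category documents containing the term
    b: out-of-category documents containing the term
    c: in-category documents without the term
    d: out-of-category documents without the term
    """
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

independent = chi_square(10, 10, 10, 10)   # term tells us nothing -> 0
discriminative = chi_square(10, 0, 0, 10)  # term perfectly marks the category
```

Ranking terms by this score and keeping the top few percent is what lets CHI "filter out more than 85% of the feature terms" while preserving discriminative power.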

13.
In this paper, we propose a new feature-selection algorithm for text classification, called best terms (BT). The complexity of BT is linear in respect to the number of the training-set documents and is independent from both the vocabulary size and the number of categories. We evaluate BT on two benchmark document collections, Reuters-21578 and 20-Newsgroups, using two classification algorithms, naive Bayes (NB) and support vector machines (SVM). Our experimental results, comparing BT with an extensive and representative list of feature-selection algorithms, show that (1) BT is faster than the existing feature-selection algorithms; (2) BT leads to a considerable increase in the classification accuracy of NB and SVM as measured by the F1 measure; (3) BT leads to a considerable improvement in the speed of NB and SVM; in most cases, the training time of SVM has dropped by an order of magnitude; (4) in most cases, the combination of BT with the simple, but very fast, NB algorithm leads to classification accuracy comparable with SVM while sometimes it is even more accurate.

14.
To improve the accuracy of sentiment text classification, this paper studies different preprocessing approaches for English sentiment text and proposes an improved chi-square (CHI) feature-extraction algorithm. The chi-square statistic is an effective feature-selection method, but analysis reveals two problems: negative correlation, and a bias toward selecting low-frequency feature words. To overcome these shortcomings, the improved algorithm takes term frequency, concentration, and dispersion into account, considers unbalanced text lengths and the distribution of feature words, and normalises term frequency. Experiments with classical naive Bayes and support vector machine classifiers on balanced, unbalanced, and mixed long-and-short-text corpora show that the new method improves the accuracy of sentiment text classification.

15.
Contemporary biological technologies produce extremely high-dimensional data sets from which to design classifiers, with 20,000 or more potential features being commonplace. In addition, sample sizes tend to be small. In such settings, feature selection is an inevitable part of classifier design. Heretofore, there have been a number of comparative studies for feature selection, but they have either considered settings with much smaller dimensionality than those occurring in current bioinformatics applications or constrained their study to a few real data sets. This study compares some basic feature-selection methods in settings involving thousands of features, using both model-based synthetic data and real data. It defines distribution models involving different numbers of markers (useful features) versus non-markers (useless features) and different kinds of relations among the features. Under this framework, it evaluates the performances of feature-selection algorithms for different distribution models and classifiers. Both classification error and the number of discovered markers are computed. Although the results clearly show that none of the considered feature-selection methods performs best across all scenarios, there are some general trends relative to sample size and relations among the features. For instance, the classifier-independent univariate filter methods have similar trends. Filter methods such as the t-test have better or similar performance with wrapper methods for harder problems. This improved performance is usually accompanied with significant peaking. Wrapper methods have better performance when the sample size is sufficiently large. ReliefF, the classifier-independent multivariate filter method, has worse performance than univariate filter methods in most cases; however, ReliefF-based wrapper methods show performance similar to their t-test-based counterparts.

16.
Feature selection plays an important role in classification algorithms. It is particularly useful in dimensionality reduction for selecting features with high discriminative power. This paper introduces a new feature-selection method called Feature Interaction Maximisation (FIM), which employs three-way interaction information as a measure of feature redundancy. It uses a forward greedy search to select features which have maximum interaction information with the features already selected, and which provide maximum relevance. The experiments conducted to verify the performance of the proposed method use three datasets from the UCI repository. The method is compared with four other well-known feature-selection methods: Information Gain (IG), Minimum Redundancy Maximum Relevance (mRMR), Double Input Symmetrical Relevance (DISR), and Interaction Gain Based Feature Selection (IGFS). The average classification accuracy of two classifiers, Naïve Bayes and K-nearest neighbour, is used to assess the performance of the new feature-selection method. The results show that FIM outperforms the other methods.

17.
Radio-frequency fingerprinting is a technique for the authentication and identification of wireless devices using their intrinsic physical features and an analysis of the digitized signal collected during transmission. The technique is based on the fact that the unique physical features of the devices generate discriminating features in the transmitted signal, which can then be analyzed using signal-processing and machine-learning algorithms. Deep learning and more specifically convolutional neural networks (CNNs) have been successfully applied to the problem of radio-frequency fingerprinting using a spectral domain representation of the signal. A potential problem is the large size of the data to be processed, because this size impacts the processing time during the application of the CNN. We propose an approach to addressing this problem, based on dimensionality reduction using feature-selection algorithms before the spectrum domain representation is given as an input to the CNN. The approach is applied to two public data sets of radio-frequency devices using different feature-selection algorithms for different values of the signal-to-noise ratio. The results show that the approach is able to achieve not only a shorter processing time; it also provides a superior classification performance in comparison to the direct application of CNNs.

18.
Sentiment-polarity classification is an important problem in Chinese sentiment analysis, and sentiment feature selection is the prerequisite and foundation of machine-learning-based polarity classification: it reduces the dimensionality of the feature set by discarding irrelevant or redundant features. This paper proposes a hybrid sentiment feature-selection method combining the Lasso algorithm with filter-style feature selection. First, Lasso penalised regression screens the original feature set to obtain a low-redundancy subset of sentiment-classification features; then filter methods such as CHI, MI, and IG weight the dependence between candidate feature words and text categories, discarding weakly related candidate words. Finally, an SVM classifier with a Gaussian kernel is used to compare the proposed method against DF, MI, IG, and CHI across different feature-set sizes. Experiments on a microblog short-text corpus show that the proposed algorithm is both effective and efficient; when the dimensionality of the feature subset is smaller than the number of samples, the hybrid method improves on DF, MI, IG, and CHI to some degree, and a comparison of precision and recall shows that Lasso-MI is more effective than MI and the other filter methods.

19.
This paper develops a DSP-based scanning-recognition and English-Chinese translation system (an electronic reading pen) that carries out the whole pipeline from scanned image input through offline OCR to translation display on a small handheld device, working entirely independently of a computer. It mainly discusses the hardware and software architecture and elaborates on several core software algorithms. The experimental system runs stably, achieves an average recognition rate above 95%, and provides a translation dictionary of 150,000 Chinese and English entries.

20.
Sentiment classification is a classification technique of practical value. It has been studied extensively for English and Chinese but comparatively little for Uyghur. Using n-gram models as different text-representation features; mutual information, information gain, the CHI statistic, and document frequency as different feature-selection methods; varying numbers of features; and naive Bayes, ME (maximum entropy), and SVM (support vector machine) as different classification methods, this paper conducts Uyghur sentiment-classification experiments and compares the results. The results show that with unigram features, about 5,000 features, and a suitable feature-selection function, ME and SVM achieve good performance on Uyghur sentiment classification.
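The n-gram text representations compared in this entry reduce to a simple sliding window over the token sequence. A minimal stdlib sketch (the example tokens are invented; in practice the tokens would come from word segmentation of the corpus):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["this", "film", "is", "great"]
unigrams = ngrams(tokens, 1)  # the UniGrams representation above
bigrams = ngrams(tokens, 2)
```

Each distinct n-gram then becomes one candidate feature, which is why feature selection (MI, IG, CHI, DF) is applied before classification.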


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号