Similar Documents
19 similar documents found (search time: 203 ms)
1.
Feature weight computation is fundamental to the text classification process. Traditional probability-based feature weighting algorithms typically count only term frequency, inverse document frequency, and inverse class frequency, ignoring the relationships between classes; yet for multi-class problems, these inter-class relationships carry important statistical information. To address this shortcoming, this paper proposes a feature weighting algorithm based on class variance, which measures the relationship between classes by computing the variance of per-class document frequencies. Classification experiments comparing five feature weighting algorithms were conducted on the Sogou news dataset. The results show that, compared with the other four algorithms, the proposed algorithm achieves substantial gains in both macro-averaged F1 and micro-averaged F1, improving text classification performance.
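The class-variance idea above can be sketched in a few lines. This is a minimal illustration only, not the paper's exact formula; the function name `class_variance_weight` and the equal treatment of all classes are assumptions.

```python
def class_variance_weight(df_per_class):
    """Score a term by the variance of its document frequency across classes.

    df_per_class: one document-frequency count per class, i.e. how many
    documents of each class contain the term. A term concentrated in a
    few classes has high variance and is treated as more discriminative.
    """
    k = len(df_per_class)
    mean = sum(df_per_class) / k
    # Population variance of the per-class document frequencies.
    return sum((df - mean) ** 2 for df in df_per_class) / k
```

A term spread evenly across classes scores zero, while one concentrated in a single class scores highest, matching the intuition that inter-class differences carry classification signal.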

2.
Current research on feature weighting in text classification has two shortcomings. First, in document-frequency-based weighting algorithms, the document frequency usually ignores a feature's term-frequency information. Second, the relationship between features and classes is expressed neither accurately nor fully. To address both points, this paper proposes a new term-frequency-based, class-correlated feature weighting algorithm (CDF-AICF). When measuring a feature's weight, the algorithm considers the feature's document frequency at each term-frequency level. To express the feature-class relationship accurately, two new concepts are introduced: class-dependent document frequency (CDF) and average inverse class frequency (AICF), which capture a feature's expressiveness for a class and its discriminative power between classes, respectively. Classification experiments on three datasets comparing against five other feature weighting methods show that CDF-AICF outperforms all five.

3.
Text Classification Research Based on an Improved TFIDF Algorithm (cited by: 1; self-citations: 0; citations by others: 1)
Because text classification is widely applied in information retrieval, e-mail filtering, web page classification, personalized recommendation, and other fields, it has attracted broad attention from researchers ever since the concept was introduced. Among the many methods studied, TFIDF is one of the most commonly used algorithms for computing document feature weights, but the traditional TFIDF algorithm ignores how feature terms are distributed within and between classes, so many terms with little discriminative power are assigned large weights. To remedy this, this paper modifies the IDF computation to account for a term's intra-class and inter-class distribution through the proportion of documents containing the term within the class and across other classes. In the experiments, text vectors are built with the improved weighting scheme, and the resulting classification performance verifies the effectiveness of the improved algorithm.
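One way to realize the intra/inter-class document-proportion idea is to replace plain IDF with a ratio of the term's within-class and out-of-class document proportions. This is a sketch of the idea, not the paper's exact formula; the function name and the `1e-9` smoothing constant are assumptions.

```python
import math

def improved_tfidf(tf, df_in, class_size, df_out, rest_size):
    """TF-IDF variant whose IDF part compares the term's within-class
    and out-of-class document proportions.

    tf         : term frequency in the document
    df_in      : documents of the target class containing the term
    class_size : total documents in the target class
    df_out     : documents of all other classes containing the term
    rest_size  : total documents in all other classes
    """
    p_in = df_in / class_size    # intra-class document proportion
    p_out = df_out / rest_size   # inter-class document proportion
    # Terms frequent inside the class but rare outside get boosted.
    return tf * math.log(1 + p_in / (p_out + 1e-9))
```

Under this scheme a term common throughout the corpus contributes little, while a term concentrated in the target class receives a large weight, which is exactly the discriminative behaviour the improved algorithm aims for.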

4.
An Improved Feature Weighting Algorithm for Text Classification (cited by: 6; self-citations: 2; citations by others: 4)
台德艺, 王俊. 《计算机工程》 (Computer Engineering), 2010, 36(9): 197-199
TF-IDF is a feature-term weighting algorithm widely used in text classification. It accounts for term frequency and inverse document frequency but cannot capture how a feature term is distributed between and within classes. To raise the weights of representative terms that occur frequently in one class and are evenly distributed within it, a distribution concentration coefficient is introduced to improve the IDF function, a dispersion coefficient is applied as an additional weighting factor, and the TF-IIDF-DIC weight function is proposed. Experimental results show that the macro-averaged F1 of K-NN text classification with the TF-IIDF-DIC weighting algorithm is 6.79% higher than with TF-IDF.

6.
Feature extraction has a major impact on text classification performance, and traditional extraction methods fall short in quantifying feature distribution information. This paper therefore proposes a feature extraction algorithm based on a term's average intra-class and inter-class term frequency. The algorithm computes a term's weight from the inter-class concentration of its average term frequency and the inter-class concentration of its document frequency, reflecting the term's distribution more accurately. Comparative experimental results demonstrate that the algorithm effectively improves classification performance.

7.
A Graph-Based Feature Term Weighting Algorithm and Its Application to Document Ranking (cited by: 1; self-citations: 0; citations by others: 1)
The core tasks of information retrieval include document classification and ranking, and effectively measuring the weights of feature terms in documents is a key enabling technique. This work builds a text graph for each document from word co-occurrence relations and, based on the idea that adjacent words mutually influence each other's importance, iteratively computes each word's weight in combination with the term frequencies in the document. Global properties of the text graph, such as its density, are then used to rank the retrieval results. Experiments confirm that the algorithm performs well on standard datasets.
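The iterative mutual-influence computation above can be sketched as a PageRank-style iteration over a word co-occurrence graph. This is a minimal illustration: the paper also folds in term frequency and global graph density, which are omitted here, and the function name and damping value are assumptions.

```python
def graph_term_weights(cooccur, damping=0.85, iters=50):
    """Iteratively score terms on a co-occurrence graph.

    cooccur: dict mapping each term to the list of terms it co-occurs with.
    """
    terms = list(cooccur)
    score = {t: 1.0 for t in terms}
    for _ in range(iters):
        new = {}
        for t in terms:
            # A term inherits importance from its neighbours, with each
            # neighbour's score shared evenly across its own links.
            incoming = sum(score[u] / len(cooccur[u])
                           for u in terms if t in cooccur[u])
            new[t] = (1 - damping) + damping * incoming
        score = new
    return score
```

On a small graph where one hub word co-occurs with several others, the hub accumulates the highest weight, mirroring how central terms in a document's text graph are promoted.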

8.
Traditional class-discriminating-term feature selection algorithms use inter-class dispersion and intra-class importance as their two metrics, ignoring the fact that the two metrics usually contribute unequally to the feature scoring function, which limits the quality of feature selection to some extent. Building on the class-discriminating-term algorithm, a balance factor is introduced; tuning it adjusts the two metrics' contributions to the feature evaluation function, yielding more effective feature selection and, in turn, better text classification. With naive Bayes as the classifier, the improved algorithm achieves appreciable gains in classification accuracy, precision, recall, and F1 over mainstream feature selection algorithms.

9.
To overcome the low classification F1 of the traditional TF-IDF (Term Frequency Inverse Document Frequency) algorithm, an improved TF-IDF-dist algorithm is proposed that exploits the intra-class and inter-class distribution of feature terms. Experimental results show that the improved algorithm raises F1 by 3.2% on average across different feature dimensionalities and by 2.75% on average when combined with different feature selection algorithms, and that it adapts better to skewed datasets, demonstrating its effectiveness in text classification.

10.
The SVM classification algorithm has clear advantages on high-dimensional data but does not address semantic similarity measurement, whereas the LDA topic model can solve the similarity-measurement and single-topic problems of traditional text classification. To combine the strengths of SVM and LDA and to improve classification accuracy, a new, efficient LDA-wSVM classification model is proposed. LDA topic modeling is used for modeling and feature selection, determining the number of topics and the latent-topic-document matrix; the classical weight computation is improved by taking each feature's association with the classes into account, yielding a new weighting method; and a wSVM classifier based on this weighting is applied in the feature-term space. Experiments classifying the Sogou Lab news corpus on the R platform achieved a high macro-averaged score of 0.943. The results show that the proposed LDA-wSVM model performs very well in automatic text classification.

11.
Text categorization is the task of automatically assigning unlabeled text documents to predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is often used to reduce the dimensionality. In this paper, we evaluate and compare the feature selection policies used in text categorization, employing several popular feature selection metrics. For the experiments, we use datasets that vary in size, complexity, and skewness. We use a support vector machine as the classifier and tf-idf weighting for weighting the terms. In addition to evaluating the policies, we propose new feature selection metrics that show high success rates, especially with a low number of keywords. These metrics are two-sided local metrics and are based on the difference between the distributions of a term in the documents belonging to a class and in the documents not belonging to that class. Moreover, we propose a keyword selection framework called adaptive keyword selection. It is based on selecting a different number of terms for each class, and it shows significant improvement on skewed datasets that have a limited number of training instances for some of the classes.

12.
Text representation is an indispensable step when applying classification algorithms to text, and the choice of representation method is crucial to the final classification accuracy. To address shortcomings of the classical feature weighting method TFIDF (Term Frequency and Inverted Document Frequency), an information-entropy-based feature weighting algorithm, ETFIDF (Entropy-based TFIDF), is proposed. ETFIDF considers not only a feature term's frequency in the document and its concentration in the training set, but also its dispersion across the classes. Experimental results show that computing feature weights with ETFIDF effectively improves text classification performance. The relationship between ETFIDF and feature selection is also analyzed in some detail, both theoretically and experimentally; the results show that accounting for feature-class relationships in the representation stage represents text more accurately, and that, weighing both accuracy and efficiency, using ETFIDF together with a feature selection algorithm yields better classification results.
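The class-dispersion component of the scheme above can be sketched with an entropy-based discount on TF-IDF. This is a hedged sketch in the spirit of ETFIDF, not the paper's exact formulation; the specific discount factor below is one plausible way to use the dispersion entropy.

```python
import math

def etfidf_weight(tf, idf, df_per_class):
    """Entropy-adjusted TF-IDF weight.

    df_per_class: the term's document frequency in each class. The more
    evenly the term spreads over the classes, the higher its entropy and
    the smaller its final weight.
    """
    total = sum(df_per_class)
    probs = [df / total for df in df_per_class if df > 0]
    h = -sum(p * math.log(p) for p in probs)   # class-dispersion entropy
    h_max = math.log(len(df_per_class))        # entropy of a uniform spread
    return tf * idf * (1 - h / h_max) if h_max > 0 else tf * idf
```

A term occurring in only one class keeps its full TF-IDF weight, while a term spread evenly over all classes is discounted toward zero.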

13.
Text categorization is continuing to be one of the most researched NLP problems due to the ever-increasing amounts of electronic documents and digital libraries. In this paper, we present a new text categorization method that combines the distributional clustering of words and a learning logic technique, called Lsquare, for constructing text classifiers. The high dimensionality of text in a document has not been fruitful for the task of categorization, for which reason feature clustering has proven to be an ideal alternative to feature selection for reducing the dimensionality. We therefore use the distributional clustering method (IB) to generate an efficient representation of documents and apply Lsquare for training text classifiers. The method was extensively tested and evaluated. The proposed method achieves higher or comparable classification accuracy and F1 results compared with SVM under identical experimental settings with a small number of training documents on three benchmark datasets: WebKB, 20Newsgroup, and Reuters-21578. The results prove that the method is a good choice for applications with a limited amount of labeled training data. We also demonstrate the effect of changing the training size on the classification performance of the learners.

14.
A Study of the Role of Feature Weighting Factors in Text Classification (cited by: 1; self-citations: 0; citations by others: 1)
In traditional vector-space text classification, feature weight computation is completely divorced from feature selection: the feature selection function's score reflects a feature's importance but is not incorporated into the weight representation, making the feature representation imprecise and hurting classification performance. Some improved methods modify the TFIDF model with feature selection functions and raise performance, but they do not investigate how each weighting factor affects classification. This paper takes term frequency, inverse document frequency, and the feature selection function as factors measuring a feature's document representativeness, document discrimination, and class discrimination, respectively, and experimentally tests their effects on classification performance. The conclusions are that the document-representativeness factor yields the highest peak performance but resists noisy features poorly; that the document-discrimination factor resists noise but performs unstably; and that the class-discrimination factor is both the most noise-resistant and the most stable. Finally, four principles for constructing weight representations are given, and experiments verify their optimizing effect on classification performance.

15.
Wang Tao, Cai Yi, Leung Ho-fung, Lau Raymond Y. K., Xie Haoran, Li Qing. Knowledge and Information Systems, 2021, 63(9): 2313-2346

In text categorization, Vector Space Model (VSM) has been widely used for representing documents, in which a document is represented by a vector of terms. Since different terms contribute to a document’s semantics in various degrees, a number of term weighting schemes have been proposed for VSM to improve text categorization performance. Much evidence shows that the performance of a term weighting scheme often varies across different text categorization tasks, while the mechanism underlying variability in a scheme’s performance remains unclear. Moreover, existing schemes often weight a term with respect to a category locally, without considering the global distribution of a term’s occurrences across all categories in a corpus. In this paper, we first systematically examine pros and cons of existing term weighting schemes in text categorization and explore the reasons why some schemes with sound theoretical bases, such as chi-square test and information gain, perform poorly in empirical evaluations. By measuring the concentration that a term distributes across all categories in a corpus, we then propose a series of entropy-based term weighting schemes to measure the distinguishing power of a term in text categorization. Through extensive experiments on five different datasets, the proposed term weighting schemes consistently outperform the state-of-the-art schemes. Moreover, our findings shed new light on how to choose and develop an effective term weighting scheme for a specific text categorization task.


16.
Harun Uğuz. Knowledge-Based Systems, 2011, 24(7): 1024-1032
Text categorization is widely used when organizing documents in digital form. Due to the increasing number of documents in digital form, automated text categorization has become more promising in the last ten years. A major problem of text categorization is its large number of features, most of which are irrelevant noise that can mislead the classifier. Therefore, feature selection is often used in text categorization to reduce the dimensionality of the feature space and to improve performance. In this study, two-stage feature selection and feature extraction are used to improve the performance of text categorization. In the first stage, each term within the document is ranked by its importance for classification using the information gain (IG) method. In the second stage, the genetic algorithm (GA) and principal component analysis (PCA) feature selection and feature extraction methods are applied separately to the terms ranked in decreasing order of importance, and a dimension reduction is carried out. Thereby, during text categorization, terms of less importance are ignored, and feature selection and extraction methods are applied to the terms of highest importance; thus, the computational time and complexity of categorization are reduced. To evaluate the effectiveness of dimension reduction methods on the proposed model, experiments are conducted using the k-nearest neighbour (KNN) and C4.5 decision tree algorithms on the Reuters-21578 and Classic3 dataset collections. The experimental results show that the proposed model is able to achieve high categorization effectiveness as measured by precision, recall, and F-measure.
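The first stage of the two-stage scheme described above, ranking terms by information gain over term presence, can be sketched in pure Python. This is an illustration only: the GA/PCA second stage is omitted, and the function name is an assumption.

```python
import math
from collections import Counter

def rank_by_information_gain(docs, labels):
    """Rank each term by the information gain of its presence/absence
    with respect to the class labels.

    docs: list of sets of terms, one set per document.
    labels: the class label of each document.
    """
    def entropy(ls):
        n = len(ls)
        return -sum(c / n * math.log2(c / n) for c in Counter(ls).values())

    base = entropy(labels)
    vocab = set().union(*docs)
    gains = {}
    for term in vocab:
        with_t = [l for d, l in zip(docs, labels) if term in d]
        without = [l for d, l in zip(docs, labels) if term not in d]
        # Expected conditional entropy after splitting on term presence.
        cond = (len(with_t) * entropy(with_t)
                + len(without) * entropy(without)) / len(docs)
        gains[term] = base - cond
    return sorted(gains, key=gains.get, reverse=True)
```

Terms whose presence perfectly predicts a class get the maximal gain and float to the top of the ranking; the downstream GA or PCA stage would then operate only on this high-importance prefix.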

17.
焦庆争, 蔚承建. 《计算机应用》 (Journal of Computer Applications), 2009, 29(12): 3303-3306
For the text classification problem, an efficient linear text classifier requiring no feature selection is designed, based on adjusting a feature's probability standard deviation with a feature-distribution evaluation weight. The basic idea is as follows: the standard deviation of a feature's probability quantifies the feature's dispersion within a document class and serves as its base weight; then, building on the Beta distribution of the posterior probability and using a probability-certainty density function, the feature's distribution across classes is evaluated to obtain a distribution weight, which adjusts the base weight to yield the final feature weight, realizing a linear text classifier. Comparative experiments on three corpora (20Newsgroup, the Fudan Chinese classification corpus, and Reuters-21578) show that the new algorithm significantly outperforms traditional algorithms in classification performance, and that it is stable, efficient, and practical, making it suitable for large-scale text classification tasks.

18.
Feature selection is an important step in text classification. This work analyzes mutual information and, to address its shortcomings, introduces rough sets, gives an attribute reduction algorithm based on the relation product, and on that basis proposes a new feature selection method suited to massive text datasets: mutual information performs an initial feature selection, and the relation-product-based attribute reduction algorithm then eliminates redundant terms. Experimental results show that this feature selection method achieves high micro-averaged F1 and macro-averaged F1.

19.
Text categorization presents unique challenges to traditional classification methods due to the large number of features inherent in datasets from real-world applications and the great number of training samples. In high-dimensional document data, the classes are typically characterized only by subsets of features, which differ from class to class across topics. This paper presents a simple but effective classifier for text categorization using a class-dependent-projection-based method. By projecting onto a set of individual subspaces, the samples belonging to different document classes are separated so that they are easy to classify. This is achieved by developing a new supervised feature weighting algorithm to learn the optimized subspaces for all the document classes. Experiments carried out on common benchmark corpora showed that the proposed method achieves both higher classification accuracy and lower computational costs than several well-known text classifiers, especially on datasets that include document categories with overlapping topics.
