Similar Articles
1.
刘端阳  陆洋 《计算机工程》2012,38(8):128-130
The traditional tf.idf method does not exploit the properties of labeled data and cannot reflect how a term is distributed across categories. Building on an analysis of the supervised text feature weighting method tf.rf, this paper proposes an improved supervised weighting method, tf.ridf. The improved method combines the characteristics of tf.idf and tf.rf, considering both a term's behavior in the overall document collection and its behavior in each category when computing feature weights. Experimental results show that its classification performance is clearly better than that of tf.rf.
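As a rough illustration of the weights this abstract compares (the abstract does not give the exact tf.ridf formula, so only the classical tf.idf and the commonly cited supervised tf.rf form are sketched here):

```python
import math

def tf_idf(tf, df, n_docs):
    """Classical tf.idf: term frequency scaled by inverse document frequency."""
    return tf * math.log(n_docs / df)

def tf_rf(tf, pos_docs, neg_docs):
    """Supervised tf.rf (relevance frequency) in its commonly cited form.
    pos_docs / neg_docs: number of documents in the positive / negative
    category that contain the term."""
    return tf * math.log2(2 + pos_docs / max(1, neg_docs))
```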

2.
Wang Tao, Cai Yi, Leung Ho-fung, Lau Raymond Y. K., Xie Haoran, Li Qing. Knowledge and Information Systems, 2021, 63(9): 2313-2346

In text categorization, Vector Space Model (VSM) has been widely used for representing documents, in which a document is represented by a vector of terms. Since different terms contribute to a document’s semantics in various degrees, a number of term weighting schemes have been proposed for VSM to improve text categorization performance. Much evidence shows that the performance of a term weighting scheme often varies across different text categorization tasks, while the mechanism underlying variability in a scheme’s performance remains unclear. Moreover, existing schemes often weight a term with respect to a category locally, without considering the global distribution of a term’s occurrences across all categories in a corpus. In this paper, we first systematically examine pros and cons of existing term weighting schemes in text categorization and explore the reasons why some schemes with sound theoretical bases, such as chi-square test and information gain, perform poorly in empirical evaluations. By measuring the concentration that a term distributes across all categories in a corpus, we then propose a series of entropy-based term weighting schemes to measure the distinguishing power of a term in text categorization. Through extensive experiments on five different datasets, the proposed term weighting schemes consistently outperform the state-of-the-art schemes. Moreover, our findings shed new light on how to choose and develop an effective term weighting scheme for a specific text categorization task.
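A minimal sketch of the entropy idea described above, measuring how concentrated a term's occurrences are across categories; the paper's actual weighting schemes build on this quantity and are not reproduced here:

```python
import math

def category_entropy(counts):
    """Entropy of a term's occurrence distribution over categories.
    counts[i] = occurrences (or document count) of the term in category i.
    Low entropy means the term is concentrated in few categories and is
    therefore more discriminative."""
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = (c / total for c in counts if c > 0)
    return -sum(p * math.log2(p) for p in probs)

print(category_entropy([95, 3, 2]))    # concentrated term: low entropy
print(category_entropy([33, 33, 34]))  # evenly spread term: close to log2(3)
```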


3.
The non-convex online support vector machine (LASVM-NC) offers strong noise tolerance and fast training, while the term frequency-relevance frequency product (tf.rf) is a highly adaptive text feature with very good classification performance. By combining the two, a new text classification method, LASVM-NC+tf.rf, is proposed. Experimental results show that among combinations of LASVM-NC with various features this method performs best, and that compared with SVM+tf.rf it not only yields classifiers with stronger generalization ability and sparser models, but is also more robust on noisy data and trains much faster on large-scale data.

4.
A Text Categorization Method Based on the Vector Space Model
Text categorization refers to the process of automatically determining the category of a text from its content under a given classification scheme. By analyzing the characteristics of web pages and the queries Internet users are interested in, a machine-learning-based, language-independent text categorization model is proposed. Its key algorithms extract web page features from inter-character correlation, term frequency, page markup, and shallow semantic analysis of user queries; they compute adjustable term-frequency weighting parameters and add discriminability information for feature terms. Through training on in-class and out-of-class samples, a feature vector space for the predefined categories is built and then used to classify texts. This method shows a clear advantage when classifying similar texts.

5.
A Study of the Role of Feature Weighting Factors in Text Categorization
In traditional vector-space text categorization, term weighting is completely separated from feature selection: the score of a feature selection function reflects a feature's importance but is not incorporated into the weight, which makes the feature representation imprecise and hurts classification performance. Some improved methods modify the TFIDF model with feature selection functions and obtain better results, but they do not examine how each weighting factor affects performance. This paper treats term frequency, inverse document frequency, and the feature selection function as factors measuring, respectively, a feature's document representativeness, document discrimination, and category discrimination, and tests their effect on classification performance experimentally. The conclusions are that the document representativeness factor yields the highest peak performance but resists noisy features poorly, the document discrimination factor is noise-tolerant but unstable, and the category discrimination factor is the most noise-tolerant and most stable. Finally, four construction principles for weight representations are given and their positive effect on classification performance is verified experimentally.

6.
Because the amount of stored image data is very large, image feature extraction and retrieval are extremely time-consuming. To improve retrieval efficiency, an effective method from text retrieval (an index model based on the product of keyword frequency and inverse document frequency) is combined with a triangle-tree index mechanism and applied to content-based image retrieval, yielding a new fast image retrieval method based on independent key sub-blocks and triangle trees. The method first uses independent component analysis to map histogram features of sample image sub-blocks into a color concept space, obtaining independent key sub-blocks analogous to keywords in text; a trained fuzzy support vector machine then identifies the independent key sub-blocks contained in each image. Because independent component analysis keeps the features higher-order independent of one another, the method achieves higher retrieval efficiency than principal component analysis. Finally, a triangle tree is constructed to build a hierarchical index over the image database and speed up retrieval.

7.
An Improved Method for Computing Term Weights in Documents
The formal representation of text has long been a fundamental problem in information retrieval areas such as text retrieval, automatic summarization, and search engines. The tf.idf representation in the Vector Space Model is widely used in this field and works well. Quantitative differences in how a term is distributed across a text collection are one of the important factors determining how well the term expresses a text's content, but the current tf.idf method cannot capture this factor. To address this, this paper introduces the concept of information gain from information theory and proposes tf.idf.IG, an improved tf.idf text representation. The method uses a term's information gain as one factor of the representation, measuring the quantitative differences in the term's distribution across the collection. In text classification experiments, the vector space model with the tf.idf.IG representation classifies better than tf.idf, confirming the effectiveness and feasibility of the improvement.
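For reference, a hedged sketch of the information gain factor the abstract mentions; how the paper combines it with tf.idf (here, a simple product) is an assumption:

```python
import math

def information_gain(n11, n10, n01, n00):
    """Information gain of a binary term for a binary class split.
    n11: in-class docs containing the term,  n10: in-class docs without it,
    n01: out-of-class docs containing it,    n00: out-of-class docs without it."""
    n = n11 + n10 + n01 + n00

    def h(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    h_class = h([(n11 + n10) / n, (n01 + n00) / n])
    p_term = (n11 + n01) / n
    h_term = h([n11 / (n11 + n01), n01 / (n11 + n01)]) if n11 + n01 else 0.0
    h_no_term = h([n10 / (n10 + n00), n00 / (n10 + n00)]) if n10 + n00 else 0.0
    return h_class - p_term * h_term - (1 - p_term) * h_no_term

def tf_idf_ig(tf, df, n_docs, ig):
    """Assumed tf.idf.IG-style combination: tf.idf scaled by information gain."""
    return tf * math.log(n_docs / df) * ig
```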

8.
A Keyword-Based Scheme for Text Feature Selection and Weight Computation
The formal representation of text has long been a difficult problem in text classification. In the widely used vector space model, the weight of each feature dimension is simply its TFIDF value, which makes it hard to highlight the features that play a key role in the text's content. This paper proposes a keyword-based scheme for feature selection and weight computation: it exploits the structural information of the text and uses mutual information theory to extract the terms that are key to the content, while the weight computation combines term position, term relations, and term frequency to emphasize the contribution of keywords and remedy the shortcomings of TFIDF. Experiments with a support vector machine (SVM) classifier show that the proposed Score weighting method improves average classification accuracy by about 5% over traditional TFIDF.

9.
With the rapid growth of textual content on the Internet, automatic text categorization is a comparatively effective solution for information organization and knowledge management. Feature selection, one of the basic phases of statistics-based text categorization, crucially depends on the term weighting method. To improve the performance of text categorization, this paper proposes four modified frequency-based term weighting schemes, namely mTF, mTFIDF, TFmIDF, and mTFmIDF. The proposed schemes take the amount of missing terms into account when calculating the weight of existing terms. They show the highest performance for an SVM classifier, with a micro-averaged F1 value of 97%. Moreover, benchmarking results on the Reuters-21578, 20Newsgroups, and WebKB text-classification datasets, using different classification algorithms such as SVM and KNN, show that the proposed schemes mTF, mTFIDF, and mTFmIDF outperform other weighting schemes such as TF, TFIDF, and Entropy. Additionally, statistical significance tests show a significant enhancement in classification performance with the modified schemes.

10.
Term weighting is a strategy that assigns weights to terms to improve the performance of sentiment analysis and other text mining tasks. In this paper, we propose a supervised term weighting scheme based on two basic factors: importance of a term in a document (ITD) and importance of a term for expressing sentiment (ITS). For ITD, we explore three definitions based on term frequency. Then, seven statistical functions are employed to learn the ITS of each term from training documents with category labels. Compared with previous unsupervised term weighting schemes originating from information retrieval, our scheme can make full use of the available labeling information to assign appropriate weights to terms. We have experimentally evaluated the proposed method against a state-of-the-art method. The experimental results show that our method outperforms it and produces the best accuracy on two of the three data sets.
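An illustrative sketch of the two-factor idea (ITD x ITS); the specific ITD definition and the statistical function used for ITS below are assumptions, chosen as one plausible instance (log-dampened term frequency and a chi-square-style statistic), not the paper's own definitions:

```python
import math

def itd(tf):
    """Assumed ITD: log-dampened term frequency in the document."""
    return 1 + math.log(tf) if tf > 0 else 0.0

def its_chi2(a, b, c, d):
    """Assumed ITS: chi-square-style statistic from the term/category
    contingency table (a: in-class docs with the term, b: in-class without,
    c: out-of-class with, d: out-of-class without)."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def supervised_weight(tf, a, b, c, d):
    """ITD x ITS combination, as described in the abstract."""
    return itd(tf) * its_chi2(a, b, c, d)
```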

11.
Text categorization is the task of assigning predefined categories to natural language text. With the widely used 'bag of words' representation, previous research usually assigns a word values indicating whether the word appears in the document concerned or how frequently it appears. Although these values are useful for text categorization, they do not fully express the abundant information contained in the document. This paper explores the effect of another type of values, which express the distribution of a word in the document. These novel values assigned to a word are called distributional features, and include the compactness of the appearances of the word and the position of its first appearance. The proposed distributional features are exploited by a tf.idf-style equation, and different features are combined using ensemble learning techniques. Experiments show that the distributional features are useful for text categorization. In contrast to using traditional term frequency values alone, including the distributional features requires only a little additional cost, while the categorization performance can be significantly improved. Further analysis shows that the distributional features are especially useful when documents are long and the writing style is casual.
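A minimal sketch of the two distributional features the abstract names (position of the first appearance and compactness of appearances); the exact normalizations used in the paper are not given, so these definitions are illustrative:

```python
def distributional_features(tokens, term):
    """First-appearance position and spread of a term, both normalized by
    document length; returns None if the term does not occur."""
    positions = [i for i, tok in enumerate(tokens) if tok == term]
    if not positions:
        return None
    n = len(tokens)
    first_pos = positions[0] / n                 # earlier appearance -> smaller value
    spread = (positions[-1] - positions[0]) / n  # compact appearances -> smaller spread
    return first_pos, spread

print(distributional_features("the cat sat on the mat near the cat".split(), "cat"))
```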

12.
Feature selection for text categorization is a well-studied problem whose goal is to improve the effectiveness of categorization, the efficiency of computation, or both. Traditional term-matching-based text categorization represents a document as a vector in the vector space model; however, this requires a high-dimensional space and does not take the semantic relationships between terms into account, which leads to poor categorization accuracy. The latent semantic indexing method can overcome this problem by using statistically derived conceptual indices in place of individual terms. With the aim of improving both the accuracy and the efficiency of categorization, this paper proposes a two-stage feature selection method: first, a novel feature selection method is applied to reduce the dimension of the term space; then a new semantic space between terms is constructed using latent semantic indexing. Experiments on spam database categorization show that the two-stage feature selection method performs better.
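A minimal two-stage sketch along the lines described, using scikit-learn; the chi-square first stage and the toy data below are stand-ins, since the paper's own first-stage selection method is not specified in the abstract:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import TruncatedSVD

docs = ["cheap pills online now", "meeting agenda attached",
        "win a free prize today", "quarterly report attached"]
labels = [1, 0, 1, 0]  # toy spam / non-spam labels

X = TfidfVectorizer().fit_transform(docs)
X_selected = SelectKBest(chi2, k=6).fit_transform(X, labels)          # stage 1: reduce term dimension
X_semantic = TruncatedSVD(n_components=2).fit_transform(X_selected)   # stage 2: latent semantic space
print(X_semantic.shape)  # (4, 2)
```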

13.
Feature weighting is an important step in text classification. An examination of traditional feature selection functions shows that mutual information performs particularly well in feature weighting. To improve its performance, term frequency information, document frequency information, and a category relevance factor are added, yielding an improved mutual-information-based feature weighting method. Experimental results show that this method achieves better classification performance than traditional feature weighting methods.

14.
An Improved Text Feature Weighting Algorithm Based on Information Gain
In the traditional tf.idf algorithm, the idf function only evaluates, at a coarse level, a feature's ability to distinguish different documents; it cannot reflect how differences in the feature's distribution across the documents and categories of the training set affect the computed weight, which lowers the accuracy of the text representation. To address this, an improved feature weighting method, tf.igt.igC, is proposed. Starting from an examination of feature distributions, the method introduces the concept of information gain from information theory to jointly account for these specific dimensions of the distribution and overcome the shortcomings of the traditional formula. Experimental results show that, compared with the tf.idf.ig and tf.idf.igc weighting methods, tf.igt.igC is more effective at computing feature weights.

15.
Improvement and Application of Term Weighting Methods in Text Categorization
The formal representation of text has long been a fundamental problem in information retrieval. The tf.idf representation in the Vector Space Model is widely used in this field and works well. Quantitative differences in how a term is distributed across a text collection are one of the important factors determining how well the term expresses a text's content. However, the IDF computation considers neither how a feature term is distributed across categories nor the fact that a term distributed relatively evenly within a class should receive a higher weight than one distributed unevenly. The improved TFIDF is used to select feature terms, and classifiers are trained with the KNN algorithm and a genetic algorithm to verify its effectiveness; experiments show that the improved strategy is feasible.

16.
Harun Uğuz. Knowledge, 2011, 24(7): 1024-1032
Text categorization is widely used when organizing documents in digital form. Due to the increasing number of such documents, automated text categorization has become more promising in the last ten years. A major problem of text categorization is its large number of features, most of which are irrelevant noise that can mislead the classifier. Therefore, feature selection is often used in text categorization to reduce the dimensionality of the feature space and to improve performance. In this study, two-stage feature selection and feature extraction are used to improve the performance of text categorization. In the first stage, each term in a document is ranked by its importance for classification using the information gain (IG) method. In the second stage, the genetic algorithm (GA) and principal component analysis (PCA) feature selection and feature extraction methods are applied separately to the terms ranked in decreasing order of importance, and dimension reduction is carried out. Thereby, during text categorization, terms of less importance are ignored and feature selection and extraction are applied only to the most important terms, reducing the computational time and complexity of categorization. To evaluate the effectiveness of the dimension reduction methods in the proposed model, experiments are conducted with the k-nearest neighbour (KNN) and C4.5 decision tree algorithms on the Reuters-21578 and Classic3 dataset collections. The experimental results show that the proposed model achieves high categorization effectiveness as measured by precision, recall, and F-measure.
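A hedged sketch of the two-stage pipeline described above; mutual_info_classif stands in for the information-gain ranking, only the PCA branch is shown (the GA branch is omitted), and the toy documents and labels are placeholders:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import PCA

docs = ["stocks fall on weak earnings", "team wins the championship game",
        "market rallies after rate cut", "coach praises the young players"]
labels = np.array([0, 1, 0, 1])

X = CountVectorizer().fit_transform(docs).toarray()
ig = mutual_info_classif(X, labels, discrete_features=True)     # stage 1: rank terms by information gain
top_terms = np.argsort(ig)[::-1][:8]                            # keep the highest-ranked terms
X_reduced = PCA(n_components=2).fit_transform(X[:, top_terms])  # stage 2: feature extraction with PCA
print(X_reduced.shape)  # (4, 2)
```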

17.
Text representation, a basic problem in text classification, has long received wide attention. Current text representations fall mainly into three types: the bag-of-words model, latent semantic representation, and knowledge-base-based explicit semantic representation. This paper first analyzes and compares the effect of these three representations on text classification. Experiments find that explicit semantic representation based on a knowledge base does not improve classification as expected; analysis shows that the reason is that explicit semantic representation easily introduces noise when expanding the document representation. To address this, a supervised explicit semantic representation method is proposed: it uses the dataset's label information to identify the core concepts in a document that are most relevant to classification, and expands these core concepts to form the document's explicit semantic representation. Results on three standard classification datasets confirm the effectiveness of the proposed representation.

18.
Text categorization presents unique challenges to traditional classification methods due to the large number of features inherent in real-world text categorization datasets and the great number of training samples. In high-dimensional document data, the classes are typically characterized only by subsets of features, which usually differ for classes of different topics. This paper presents a simple but effective classifier for text categorization based on class-dependent projection. By projecting onto a set of individual subspaces, samples belonging to different document classes are separated so that they are easily classified. This is achieved by a new supervised feature weighting algorithm that learns optimized subspaces for all document classes. Experiments on common benchmark corpora show that the proposed method achieves both higher classification accuracy and lower computational cost than several representative text categorization classifiers, especially for datasets containing document categories with overlapping topics.

19.
This paper proposes a new text categorization model based on the combination of a modified back propagation neural network (MBPNN) and latent semantic analysis (LSA). The traditional back propagation neural network (BPNN) trains slowly and easily gets trapped in a local minimum, which leads to poor performance and efficiency. We propose the MBPNN to accelerate the training of the BPNN and improve categorization accuracy. LSA overcomes the problems of word-based representation by using statistically derived conceptual indices instead of individual words: it constructs a conceptual vector space in which each term or document is represented as a vector, which not only greatly reduces the dimensionality but also discovers important associative relationships between terms. We test our categorization model on the 20-Newsgroups and Reuters-21578 corpora. Experimental results show that the MBPNN is much faster than the traditional BPNN and also enhances its performance, and that applying LSA in our system leads to dramatic dimensionality reduction while achieving good classification results.

20.
Research on Hierarchical Classification of Chinese Text Based on the Vector Space Model
肖雪  何中市 《计算机应用》2006,26(5):1125-1126
When the number of categories in text classification is very large, hierarchical classification is an effective approach. Considering the structural characteristics of hierarchical classification and the fact that different levels place different demands on feature selection and classification methods, a new vector-space-model-based dual feature selection method, FDS, and a hierarchical classification algorithm, HTC, are proposed. The dual feature selection method performs feature selection once at each level and changes the number of features and the weighting method level by level; HTC combines the class-centroid vector method, which is more effective for coarse classification, with SVM, which is more effective for fine classification. Experiments show that the method achieves higher accuracy than flat classification and ordinary hierarchical classification methods.

