Similar Documents
20 similar documents found (search time: 31 ms)
1.
A Bootstrapping-Based Text Classification Model
This paper proposes a bootstrapping-based text classification model that uses a maximum-entropy model as the classifier. Starting from a small seed set, the model automatically labels additional texts as new seed samples and iterates this learning process to improve the classifier's performance. A weight factor is introduced to adjust the influence of the new seed samples during classifier training. Experimental results show that, given the same manually labeled training corpus, this bootstrapping-based model clearly outperforms the traditional text classification model: using only 100 seed documents per class, it achieves an F1 of 70.56%, which is 4.70% higher than the traditional model. With a suitable weight factor, the model further improves the training of the classifier.
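As a rough illustration of the bootstrapping idea, the sketch below grows a labeled set from a few seeds, down-weighting auto-labeled samples with a weight factor. A simple weighted centroid classifier with cosine similarity stands in for the paper's maximum-entropy model; the names, the confidence threshold, and the weight value `alpha` are illustrative assumptions, not the paper's settings.

```python
# Minimal self-training (bootstrapping) sketch. A weighted centroid classifier
# stands in for the paper's maximum-entropy model; all names are illustrative.
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def centroid(vectors, weights):
    """Weighted average of term-frequency vectors."""
    c = Counter()
    total = sum(weights)
    for v, w in zip(vectors, weights):
        for t, f in v.items():
            c[t] += w * f / total
    return c

def bootstrap(seeds, unlabeled, rounds=2, threshold=0.5, alpha=0.5):
    """seeds: {label: [text, ...]}; alpha down-weights auto-labeled samples."""
    data = {lab: [(vectorize(t), 1.0) for t in texts] for lab, texts in seeds.items()}
    pool = [vectorize(t) for t in unlabeled]
    for _ in range(rounds):
        cents = {lab: centroid([v for v, _ in vw], [w for _, w in vw])
                 for lab, vw in data.items()}
        remaining = []
        for v in pool:
            best_score, best_lab = max((cosine(v, c), lab) for lab, c in cents.items())
            if best_score >= threshold:
                data[best_lab].append((v, alpha))   # new seed, reduced weight
            else:
                remaining.append(v)                 # stays unlabeled this round
        pool = remaining
    return data

labeled = bootstrap({"sport": ["football match goal"], "tech": ["computer software code"]},
                    ["goal scored in the football match", "new software release"])
```

Only the confidently scored document joins the seed set; the low-similarity one stays unlabeled, which is what keeps self-training from drifting.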

2.
Applying Fuzzy kNN to Text Categorization
Automatic text categorization assigns class labels to new documents based on a training set of documents whose labels are already known. This paper reports a series of experiments on the performance of a fuzzy kNN algorithm for text categorization. On two corpora, one Chinese and one English, four well-known feature selection methods were used, and the improved fuzzy kNN was compared with classical kNN and the widely used similarity-weighted kNN. The results show that, under all the feature selection methods, the algorithm reduces the impact of unevenly distributed training samples on classification performance, improves classification accuracy, and to some extent lowers sensitivity to the choice of k.
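A minimal sketch of how fuzzy kNN assigns graded class memberships, in the style of the classic Keller et al. formulation: each class membership is a distance-weighted average of the neighbours' memberships, with the fuzzifier `m` controlling how sharply weights decay. The data and parameters are toy values, and this is not the paper's improved variant.

```python
# Fuzzy kNN sketch: class membership is a distance-weighted average of the
# k nearest neighbours' (crisp) memberships; m controls the fuzziness.
def fuzzy_knn(x, train, k=3, m=2.0):
    """train: list of (vector, label); x: vector (tuple of floats).
    Returns {label: membership}, memberships summing to 1."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    neighbours = sorted(train, key=lambda s: dist(x, s[0]))[:k]
    memberships = {}
    for lab in {label for _, label in train}:
        num = den = 0.0
        for vec, nlab in neighbours:
            d = max(dist(x, vec), 1e-12)           # avoid division by zero
            w = d ** (-2.0 / (m - 1.0))            # Keller-style weight
            num += (1.0 if nlab == lab else 0.0) * w
            den += w
        memberships[lab] = num / den
    return memberships

train = [((0.0, 0.0), "a"), ((0.1, 0.0), "a"), ((1.0, 1.0), "b"), ((0.9, 1.0), "b")]
scores = fuzzy_knn((0.05, 0.05), train, k=3)
```

Unlike crisp kNN, the output is a membership distribution, so a far-away neighbour from a dense class contributes little even when it makes it into the k-neighbourhood.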

3.
For texts with special structure, traditional text classification algorithms are no longer adequate, so a text classification algorithm based on the multi-instance learning framework is proposed. Each text is treated as a bag of instances, with the title and the body of the text as the bag's two instances. A multi-class support vector machine built on one-class classification maps the bags into a high-dimensional feature space, and a Gaussian kernel is introduced to train the classifier, which then predicts labels for unlabeled texts. Experimental results show that the algorithm achieves higher classification accuracy than traditional machine learning classifiers and offers a new angle for text mining research on texts with special structure.
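A schematic of the multi-instance view of structured text described above: a document is a bag whose instances are its title and body, and the bag takes the best score over its instances (the standard MI assumption). A cosine-prototype scorer stands in here for the paper's one-class-based multi-class SVM with Gaussian kernel; the prototypes and data are illustrative.

```python
# Multi-instance sketch: each document is a bag {title, body}; a bag's score
# for a class is the maximum of its instances' scores.
from collections import Counter

def tf(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def classify_bag(title, body, prototypes):
    """prototypes: {label: Counter}; returns the best label for the bag."""
    instances = [tf(title), tf(body)]
    scores = {lab: max(cosine(inst, proto) for inst in instances)
              for lab, proto in prototypes.items()}
    return max(scores, key=scores.get)

protos = {"sports": tf("football match league goal"),
          "finance": tf("stock market shares price")}
label = classify_bag("League roundup", "The match ended with a late goal", protos)
```

The max over instances is what lets a strongly indicative title carry the document even when the body is ambiguous, which is the point of splitting title and body into separate instances.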

4.
Document Classification Based on Active Learning
In the field of text categorization, the number of unlabeled documents is generally much greater than that of labeled documents. Text categorization is a classification problem in a high-dimensional vector space, and more training samples generally improve the accuracy of a text classifier. How to add unlabeled documents to the training set in order to expand it is therefore a valuable problem. This paper introduces the theory of active learning and applies it to text categorization, exploring how unlabeled documents can be used to improve a text classifier's accuracy. The expectation is that such a technique will improve accuracy by adopting a relatively large number of unlabeled document samples. We put forward an active-learning-based algorithm for text categorization, and experiments on the Reuters news corpus show that, when enough training samples are available, the algorithm effectively improves the classifier's accuracy by adopting unlabeled document samples.
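The core selection step of pool-based active learning can be sketched as uncertainty sampling: from the unlabeled pool, query the documents whose predicted class probability is closest to 0.5. The toy word-ratio probability model below is an assumption for illustration, not the paper's classifier.

```python
# Uncertainty-sampling sketch for active learning over an unlabeled pool.
def uncertainty(p):
    """Distance of a binary posterior from total uncertainty (0 = most uncertain)."""
    return abs(p - 0.5)

def select_queries(pool, predict_proba, budget=2):
    """Return the `budget` pool items the classifier is least certain about."""
    ranked = sorted(pool, key=lambda doc: uncertainty(predict_proba(doc)))
    return ranked[:budget]

# Toy probability model: fraction of "positive" words in the document.
positive_words = {"goal", "match", "league"}

def predict_proba(doc):
    words = doc.lower().split()
    hits = sum(1 for w in words if w in positive_words)
    return hits / len(words) if words else 0.5

pool = ["goal goal match", "weather report today", "match report weather goal"]
queries = select_queries(pool, predict_proba, budget=1)
```

The selected document is the mixed one the model cannot place, which is exactly the sample whose human label is worth the most.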

5.
Web Mining Based on Kernel Methods
Word-space classification methods struggle with the high dimensionality of text and cannot capture semantic concepts. Using kernel principal component analysis and support vector machines, this paper proposes a new method that reduces the dimensionality of text data to extract semantic concepts and classifies texts on the basis of those concepts. Documents are first mapped into a high-dimensional linear feature space to remove nonlinear features; principal component analysis in the mapped space then removes correlations between variables, achieving dimensionality reduction and semantic-concept extraction and yielding a semantic concept space for the documents; finally, a support vector machine classifies documents in that space. With a newly defined kernel function, the mapping into the semantic concept space need not be computed explicitly, so concept-based classification can be carried out directly in the original document vector space. A kernelized GHA method adaptively and iteratively solves for the eigenvectors and eigenvalues of the kernel matrix, making the approach suitable for large-scale text classification. Experimental results show that the method is effective in improving text classification performance.

6.
To address incomplete face image data and training samples corrupted by varying viewpoints, illumination, and noise, a fast face recognition algorithm, RPCA_CRC, is proposed. First, the matrix D0 of face training samples is decomposed into an inter-class low-rank matrix D and a sparse error matrix E; next, a collaborative representation of the test sample is obtained on the basis of the low-rank matrix D; finally, classification is performed via the reconstruction error. Compared with sparse-representation-based classification (SRC), the proposed algorithm runs on average 25 times faster, and with incomplete training samples its recognition rate is on average 30% higher. Experiments confirm that the algorithm is fast and effective, with a high recognition rate.

7.
Traditional term weighting schemes in text categorization, such as TF-IDF, exploit only the statistical information of terms in documents. In this paper we instead propose a novel term weighting scheme that exploits the semantics of categories and indexing terms. Specifically, the semantics of categories are represented by the senses of terms appearing in the category labels, together with their interpretation in WordNet, and the weight of a term is correlated with its semantic similarity to a category. Experimental results on three commonly used data sets show that the proposed approach outperforms TF-IDF when the amount of training data is small or the content of documents is focused on well-defined categories. In addition, the proposed approach compares favorably with two previous studies.
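The idea of correlating a term's weight with its semantic similarity to a category can be sketched as below. The similarity table is a hand-made stand-in for WordNet-derived similarity; every value in it is hypothetical, as is the scoring rule.

```python
# Semantics-aware term weighting sketch: weight = term frequency x similarity
# of the term to the category label. SIM is a hypothetical stand-in for a
# WordNet-based similarity measure.
from collections import Counter

SIM = {  # similarity(term, category) in [0, 1] -- illustrative numbers only
    ("goal", "sports"): 0.9, ("match", "sports"): 0.8, ("price", "sports"): 0.1,
    ("goal", "finance"): 0.2, ("match", "finance"): 0.1, ("price", "finance"): 0.9,
}

def semantic_weights(doc, category):
    """Per-term weights of `doc` with respect to `category`."""
    tf = Counter(doc.lower().split())
    return {t: f * SIM.get((t, category), 0.0) for t, f in tf.items()}

def score(doc, category):
    return sum(semantic_weights(doc, category).values())

doc = "goal match goal price"
best = max(["sports", "finance"], key=lambda c: score(doc, c))
```

Because the weights depend on the category, the same document gets different term vectors per category, which is the key difference from a single global TF-IDF vector.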

8.
Research in text categorization has been confined to whole-document-level classification, probably due to the lack of full-text test collections. However, the full-length documents available today in large quantities pose renewed interest in text classification. A document is usually written in an organized structure to present its main topic(s). This structure can be expressed as a sequence of subtopic text blocks, or passages. In order to reflect the subtopic structure of a document, we propose a new passage-level, or passage-based, text categorization model, which segments a test document into several passages, assigns categories to each passage, and merges the passage categories into document categories. Compared with traditional document-level categorization, this model requires two additional steps: passage splitting and category merging. Using four subsets of the Reuters text categorization test collection and a full-text test collection whose documents vary from tens to hundreds of kilobytes, we evaluate the proposed model, in particular the effectiveness of various passage types and the importance of passage location in category merging. Our results show that simple windows are best for all the test collections used in these experiments. We also found that passages contribute to the main topic(s) to different degrees, depending on their location in the test document.
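The model's three steps (passage splitting, passage classification, category merging) can be sketched with fixed-size word windows and a majority-vote merge. The keyword-hit passage classifier and the window size are illustrative simplifications of the passage types and merging strategies the paper studies.

```python
# Passage-level categorization sketch: split into word windows, classify each
# window, and merge the passage votes into a document category.
from collections import Counter

def windows(text, size=5):
    """Split a document into fixed-size word windows ("simple windows")."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def classify_passage(passage, keywords):
    """keywords: {label: set(words)}; pick the label with the most keyword hits."""
    counts = {lab: sum(1 for w in passage.lower().split() if w in kws)
              for lab, kws in keywords.items()}
    return max(counts, key=counts.get)

def classify_document(text, keywords, size=5):
    """Merge passage categories by majority vote."""
    votes = Counter(classify_passage(p, keywords) for p in windows(text, size))
    return votes.most_common(1)[0][0]

keywords = {"sports": {"goal", "match", "team"}, "finance": {"stock", "price", "market"}}
doc = "the team scored a goal late in the match while the stock price fell"
label = classify_document(doc, keywords, size=5)
```

Note how the finance-flavoured final passage is outvoted by the two sports passages; richer merges (e.g., location-weighted votes, which the paper examines) would change only the merging step.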

9.
Research on text mining has recently attracted much attention from both industry and academia. Text mining concerns discovering unknown patterns or knowledge from a large text repository. The problem is not easy to tackle due to the semi-structured or even unstructured nature of the texts under consideration. Many approaches have been devised for mining various kinds of knowledge from texts. One important aspect of text mining is automatic text categorization, which assigns a text document to a predefined category if the document falls into the theme of that category. Traditionally, categories are arranged hierarchically to enable effective searching and indexing as well as easy comprehension by human beings. The determination of category themes and their hierarchical structure was mostly done by human experts. In this work, we developed an approach that automatically generates category themes and reveals the hierarchical structure among them, and we used the generated structure to categorize text documents. The document collection was used to train a self-organizing map, forming two feature maps, which were then analyzed to obtain the category themes and their structure. Although the test corpus contains documents written in Chinese, the proposed approach can be applied to documents written in any language, provided the documents can be transformed into a list of separated terms.

10.
A Text Classification Method Based on the Vector Space Model
Text classification, as discussed here, is the process of automatically determining a text's category from its content under a given category scheme. By analyzing the characteristics of web pages and the query information Internet users are interested in, a machine-learning-based, language-independent text classification model is proposed. Its key algorithms exploit inter-character correlation, term frequency, page markup, and shallow semantic analysis of user queries to extract web-page features, compute adjustable term-frequency weighting parameters, and add discriminative information for feature terms. Positive and negative training then builds a feature vector space for the predefined classes, which is used to classify texts. The method shows clear advantages in classifying similar texts.
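A generic vector-space-model pipeline of the kind described above, assuming plain tf-idf weighting and nearest-centroid classification by cosine similarity; the paper's adjustable weighting parameters, markup features, and query analysis are omitted from this sketch.

```python
# Vector-space-model sketch: documents become tf-idf vectors, and a new text
# is assigned to the class whose centroid is closest by cosine similarity.
import math
from collections import Counter

def tfidf(doc, df, n):
    """Smoothed tf-idf vector: tf * (log((1+n)/(1+df)) + 1)."""
    tf = Counter(doc.lower().split())
    return {t: f * (math.log((1 + n) / (1 + df[t])) + 1) for t, f in tf.items()}

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def build(train):
    """train: {label: [doc, ...]} -> (df, n, {label: centroid})."""
    docs = [d for ds in train.values() for d in ds]
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d.lower().split()))
    centroids = {}
    for lab, ds in train.items():
        cent = Counter()
        for d in ds:
            for t, w in tfidf(d, df, n).items():
                cent[t] += w / len(ds)
        centroids[lab] = cent
    return df, n, centroids

def classify(doc, df, n, centroids):
    v = tfidf(doc, df, n)
    return max(centroids, key=lambda lab: cosine(v, centroids[lab]))

train = {"sports": ["goal match team", "match team win"],
         "finance": ["stock price market", "market shares price"]}
df, n, cents = build(train)
label = classify("the team scored a goal", df, n, cents)
```

The smoothed idf keeps unseen terms finite; in the full model described above, the tf component would be replaced by the adjustable weighted parameters the abstract mentions.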

11.
The bag-of-words (BOW) model is a classic image description method in which the construction of the feature dictionary is crucial. For this dictionary construction problem, a low-rank-optimized construction method with class constraints, LRC-DT, is proposed: low-rank optimization makes the coefficient matrix representing images of the same class have relatively low rank, thereby introducing class information into dictionary learning and making the learned dictionary more discriminative for image description. Experiments on the standard Caltech-101 and Caltech-256 benchmarks show that, after replacing the feature dictionaries of SPM, sparse-coding SPM (ScSPM), locality-constrained linear coding (LLC), and linear-kernel SPM (LSPM) with dictionaries learned under the low-rank constraint (LRC), classification accuracy improves over the unconstrained methods as the number of training samples grows.

12.
Web 2.0 provides user-friendly tools that allow people to create and publish content online. User-generated content often takes the form of short texts (e.g., blog posts, news feeds, snippets). This has motivated increasing interest in the analysis of short texts and, specifically, in their categorisation. Text categorisation is the task of classifying documents into a certain number of predefined categories. Traditional text classification techniques are mainly based on word-frequency statistical analysis and have proved inadequate for the classification of short texts, where word occurrences are too few. Moreover, the classic approach to text categorisation is based on a learning process that requires a large number of labeled training texts to achieve accurate performance, yet labeled documents might not be available when unlabeled documents can easily be collected. This paper presents an approach to text categorisation that does not need a pre-classified set of training documents. The proposed method requires only the category names as user input. Each of these categories is defined by means of an ontology of terms modelled by a set of what we call proximity equations. Hence, our method is not based on category occurrence frequency but depends strongly on the definition of each category and on how well a text fits that definition. The proposed approach is therefore appropriate for short-text classification, where the frequency of occurrence of a category is very small or even zero. Another feature of our method is that the classification process relies on the ability of an extension of the standard Prolog language, named Bousi~Prolog, to perform flexible matching and knowledge representation. This declarative approach yields a text classifier that is quick and easy to build, and a classification process that is easy for the user to understand. The results of experiments showed that the proposed method achieves reasonably useful performance.

13.
Text Topic Partitioning Based on Topic-Word Frequency Features
KANG Kai, LIN Kunhui, ZHOU Changle. Journal of Computer Applications, 2006, 26(8): 1993-1995
The text/term-frequency matrix currently used in text classification has two drawbacks: the term dimension is very large and the matrix is very sparse, both of which complicate computation. To address this, and motivated by how users choose the texts they need when using a search engine, a topic-partitioning method based on topic-word frequency features is proposed. Topic words for each text class are first selected by statistical means; fuzzy C-means (FCM) clustering is then applied using topic-word classes, rather than individual words, as features. Experiments achieved good topic-partitioning results, and a comparison with a word-clustering-based text clustering method, covering several aspects of both process and results, yields conclusions that are meaningful for implementation details and application contexts.

14.
Feature Term Selection Based on the Log-Likelihood Ratio Test for Text Classification
This paper applies the log-likelihood ratio test to feature term selection in text classification. Compared with the traditional approach, in which tests on statistical indicators such as frequency, concentration, and dispersion are carried out independently, this method uses a covariance matrix to coordinate the relationships among the indicators, thereby unifying them into a single whole. Experiments show that this feature selection method outperforms the traditional approach of testing frequency, concentration, and dispersion independently.
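The log-likelihood ratio test itself can be sketched with Dunning's G² statistic over a term/class contingency table; higher scores mean the term's distribution differs more between the class and its complement. Note that this covers only the LLR scoring step, not the paper's covariance-matrix combination of frequency, concentration, and dispersion, and the counts below are invented for illustration.

```python
# Log-likelihood-ratio feature selection sketch (Dunning's G^2 statistic).
import math

def llr(k11, k12, k21, k22):
    """G^2 over a 2x2 table: k11 = docs in class with term, k12 = docs outside
    the class with term, k21/k22 = the corresponding absences."""
    n = k11 + k12 + k21 + k22
    rows = (k11 + k12, k21 + k22)
    cols = (k11 + k21, k12 + k22)
    obs = ((k11, k12), (k21, k22))
    g2 = 0.0
    for i in range(2):
        for j in range(2):
            o = obs[i][j]
            if o > 0:
                e = rows[i] * cols[j] / n   # expected count under independence
                g2 += o * math.log(o / e)
    return 2 * g2

def select_features(term_counts, class_docs, total_docs, k=2):
    """term_counts: {term: (docs_with_term_in_class, docs_with_term_elsewhere)}."""
    scored = {t: llr(inc, outc, class_docs - inc, total_docs - class_docs - outc)
              for t, (inc, outc) in term_counts.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

counts = {"goal": (40, 5), "the": (45, 50), "stock": (2, 30)}
top = select_features(counts, class_docs=50, total_docs=100, k=2)
```

A term independent of the class scores 0, while both class-loving and class-avoiding terms score highly, which is why LLR is a two-sided test.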

15.
During the last decades the Web has become the greatest repository of digital information. In order to organize all this information, several text categorization methods have been developed, achieving accurate results in most cases and in very different domains. With the recent use of the Internet as a communication medium, short texts such as news items, tweets, blog posts, and product reviews become more common every day. In this context there are two main challenges: on the one hand, these documents are short, so word frequencies are not informative enough, making text categorization even more difficult than usual; on the other hand, topics change constantly at a fast rate, causing a lack of adequate amounts of training data. To deal with these two problems we consider a text classification method built on the idea that similar documents may belong to the same category. Specifically, we propose a neighborhood consensus classification method that classifies documents by considering their own information as well as the categories assigned to other similar documents from the same target collection. The short texts used in our evaluation are news titles with an average of 8 words. Experimental results are encouraging: they indicate that leveraging information from similar documents helps improve classification accuracy, and that the proposed method is especially useful when labeled training resources are limited.
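The consensus idea (a document's own evidence combined with the categories assigned to its neighbours in the same collection) can be sketched as a weighted mixture. The Jaccard similarity, mixing weight `alpha`, and toy scores are illustrative assumptions, not the paper's formulation.

```python
# Neighbourhood-consensus sketch: combine the classifier's own scores for a
# short text with similarity-weighted votes from already-labeled neighbours.
from collections import Counter

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def consensus_label(text, own_scores, neighbours, alpha=0.5):
    """own_scores: {label: score}; neighbours: [(text, label)].
    Final score = alpha * own + (1 - alpha) * normalized neighbour votes."""
    votes = Counter()
    for ntext, nlab in neighbours:
        votes[nlab] += jaccard(text, ntext)
    total = sum(votes.values()) or 1.0
    combined = {lab: alpha * own_scores.get(lab, 0.0) + (1 - alpha) * votes[lab] / total
                for lab in set(own_scores) | set(votes)}
    return max(combined, key=combined.get)

own = {"sports": 0.4, "finance": 0.45}       # classifier alone slightly prefers finance
neigh = [("late goal wins match", "sports"), ("team match report", "sports"),
         ("shares fall", "finance")]
label = consensus_label("goal in the match", own, neigh)
```

Here the neighbour votes flip an uncertain own-score decision, which is exactly the situation (very short texts, weak frequency evidence) the method targets.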

16.
Class-Information-Based Feature Selection and Weighting for Text Classification
In automatic text classification, feature selection and weighting aim to reduce the dimensionality of the feature space, remove noise, and improve classification accuracy. Features chosen by traditional selection schemes tend to favour the large classes in collections with uneven class distributions, while the common TF·IDF weighting scheme considers only the relationship between features and documents, ignoring the relationship between features and classes. To address these problems, class-information-based feature selection and weighting methods are proposed. Comparative experiments on two corpora show that, when handling collections with uneven class distributions, the proposed methods improve classification accuracy over the traditional ones and achieve greater dimensionality reduction.

17.
An Application of Neural Networks to Text Classification
Existing text classification methods have shortcomings in knowledge acquisition. For a particular application requirement, this paper proposes a text classification method that combines artificial neural networks with text classification. Texts are described by a vector space of feature terms; exploiting the strong learning ability of artificial neural networks, the method trains on a sample set of texts to extract knowledge for text classification, and then uses the network and the acquired classification knowledge to classify texts.
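Since the abstract does not specify the network architecture, the sketch below uses the simplest possible stand-in, a single-layer perceptron over a bag-of-words feature space, to show the train-then-classify flow; the vocabulary, learning rate, and data are illustrative.

```python
# Minimal neural sketch: a single-layer perceptron over binary bag-of-words
# features, trained with the perceptron rule. A schematic stand-in only.
def featurize(text, vocab):
    """Binary bag-of-words vector over a fixed vocabulary."""
    words = text.lower().split()
    return [1.0 if v in words else 0.0 for v in vocab]

def train_perceptron(samples, vocab, epochs=10, lr=0.5):
    """samples: list of (text, label in {0, 1}); returns (weights, bias)."""
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for text, y in samples:
            x = featurize(text, vocab)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # perceptron update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(text, vocab, w, b):
    x = featurize(text, vocab)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

vocab = ["goal", "match", "stock", "price"]
samples = [("goal match", 1), ("stock price", 0), ("match goal goal", 1), ("price stock", 0)]
w, b = train_perceptron(samples, vocab)
```

The learned weights encode the "classification knowledge" the abstract refers to: after training, classifying a new text is just a dot product against them.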

18.
Text categorization continues to be one of the most researched NLP problems due to the ever-increasing amounts of electronic documents and digital libraries. In this paper, we present a new text categorization method that combines the distributional clustering of words and a learning logic technique, called Lsquare, for constructing text classifiers. The high dimensionality of text in a document has not been fruitful for the task of categorization, which is why feature clustering has proven to be an ideal alternative to feature selection for reducing dimensionality. We therefore use the distributional clustering method (IB) to generate an efficient representation of documents and apply Lsquare to train text classifiers. The method was extensively tested and evaluated. It achieves higher or comparable classification accuracy and F1 results compared with SVM under identical experimental settings, with a small number of training documents, on three benchmark data sets: WebKB, 20Newsgroups, and Reuters-21578. The results show that the method is a good choice for applications with a limited amount of labeled training data. We also demonstrate the effect of training-set size on the classification performance of the learners.

19.
A dynamic category extension algorithm based on rough set theory is proposed. According to how well a new document matches the existing training rules, it can effectively extend the category set automatically and generate new classification rules automatically, thereby shielding users from the problems of updating the training set and the classification rules.

20.
Text categorization is the task of automatically assigning unlabeled text documents to predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is often used to reduce the dimensionality. In this paper, we evaluate and compare the feature selection policies used in text categorization, employing several of the popular feature selection metrics. For the experiments, we use datasets that vary in size, complexity, and skewness, with a support vector machine as the classifier and tf-idf weighting for the terms. In addition to evaluating the policies, we propose new feature selection metrics that show high success rates, especially with a low number of keywords. These metrics are two-sided local metrics based on the difference between the distributions of a term in the documents belonging to a class and in the documents not belonging to that class. Moreover, we propose a keyword selection framework called adaptive keyword selection, which selects a different number of terms for each class and shows significant improvement on skewed datasets that have a limited number of training instances for some of the classes.
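A two-sided local metric in the spirit described, i.e., based on the difference between a term's document frequency inside and outside a class, might be sketched as follows. The exact formula and the ranking by absolute value are illustrative assumptions, not the paper's metrics, and the counts are invented.

```python
# Sketch of a two-sided local feature selection metric: score a term by the
# difference between its in-class and out-of-class document frequencies.
def local_metric(df_in, n_in, df_out, n_out):
    """P(t|c) - P(t|not c); positive favours the class, negative its complement."""
    return df_in / n_in - df_out / n_out

def keywords_for_class(stats, n_in, n_out, k=2):
    """stats: {term: (df_in, df_out)}; 'two-sided' here means ranking by the
    absolute score, so strongly negative terms are kept as well."""
    scored = {t: abs(local_metric(di, n_in, do, n_out)) for t, (di, do) in stats.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

stats = {"goal": (40, 5), "the": (45, 48), "stock": (2, 45)}
top = keywords_for_class(stats, n_in=50, n_out=50, k=2)
```

Adaptive keyword selection, as described above, would then vary `k` per class rather than fixing it globally; that part is not shown here.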
