Similar Documents
20 similar documents found (search time: 250 ms)
1.
Traditional text classification methods need a large number of labeled samples to obtain a good classifier, yet in real applications such samples are usually hard to acquire. How to achieve good classification with a few labeled samples and a large number of unlabeled samples has therefore become an active research topic. This paper proposes a new method for enlarging the labeled sample set: representative features are first extracted for each class from the labeled set, and then, guided by those features, similar samples are drawn from the unlabeled set and added to the labeled set. Experiments show that the method effectively improves classification performance.
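
A minimal sketch of the expansion idea, assuming TF-IDF centroids serve as the per-class "representative features" (the paper does not specify its feature extractor; the function name and the 0.4 threshold are illustrative):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_labeled_set(labeled_docs, labels, unlabeled_docs, threshold=0.4):
    """Pseudo-label unlabeled docs that are close enough to a class centroid."""
    vec = TfidfVectorizer()
    X_lab = vec.fit_transform(labeled_docs)
    X_unl = vec.transform(unlabeled_docs)
    classes = sorted(set(labels))
    # One "representative feature" vector per class: the TF-IDF centroid.
    centroids = np.vstack([
        np.asarray(X_lab[[i for i, y in enumerate(labels) if y == c]]
                   .mean(axis=0)).ravel()
        for c in classes
    ])
    sims = cosine_similarity(X_unl, centroids)
    best = sims.argmax(axis=1)
    new_docs, new_labels = list(labeled_docs), list(labels)
    for i, c in enumerate(best):
        if sims[i, c] >= threshold:   # only confident matches are added
            new_docs.append(unlabeled_docs[i])
            new_labels.append(classes[c])
    return new_docs, new_labels
```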

2.
Traditional approaches for text data stream classification usually require the manual labeling of a number of documents, which is an expensive and time-consuming process. To overcome this limitation, we propose to classify text streams by keywords without labeled documents, reducing the burden of manual labeling. We build our base text classifiers with the help of keywords and unlabeled documents, and use classifier ensemble algorithms to cope with concept drift in text data streams. Experimental results demonstrate that the proposed method builds good classifiers from keywords without manual labeling, and that the ensemble-based algorithm detects and adapts to concept drift in the streams well, outperforming the single-window algorithm.

3.
Text categorization (TC) is the automated assignment of text documents to predefined categories based on document contents. TC has been an application domain for many learning approaches, which have proved effective; nevertheless, it still poses many challenges to machine learning. In this paper we suggest, for text categorization, integrating external WordNet lexical information to supplement training data for a semi-supervised clustering algorithm which (i) uses a finite design set of labeled data to (ii) help agglomerative hierarchical clustering (AHC) algorithms partition a finite set of unlabeled data and then (iii) terminates without the capacity to classify other objects. This algorithm is the “semi-supervised agglomerative hierarchical clustering algorithm” (ssAHC). Our experiments use the Reuters 21578 database and consist of binary classifications for categories selected from the 89 TOPICS classes of the Reuters collection. Using the vector space model (VSM), each document is represented by its original feature vector augmented with an external feature vector generated using WordNet. We verify experimentally that the integration of WordNet helps ssAHC improve its performance, effectively addresses the classification of documents into categories with few training documents, and does not interfere with the use of training data. © 2001 John Wiley & Sons, Inc.

4.
A Text Classification Method Based on TF-IDF and Cosine Similarity
Text classification is a basic task in text processing, and the arrival of the big-data era poses new challenges for it. Researchers have proposed many classification algorithms for different settings, such as KNN, naive Bayes, support vector machines, and a series of improved algorithms, but their performance depends on a fixed dataset and they lack the ability to learn on their own. This paper proposes a new text classification method with three steps: extract category keywords with TF-IDF; classify a text by the similarity between the category keywords and the keywords of the text; and update the category keywords during classification to improve the classifier. Simulation results show a clear accuracy gain over commonly used methods: 90% on the experimental dataset, rising to 95% when the volume of text data is large. On first use the algorithm needs some training samples and training time, but its classification time drops to about one tenth of that of the other algorithms. Thanks to its self-learning module, the method automatically updates category keywords from classification experience, maintaining classifier accuracy and making it highly practical.
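
A hedged sketch of the three steps, assuming whitespace-tokenized text; `tfidf_keywords`, `top_k`, and the blending rate `lr` are illustrative names and parameters, not from the paper:

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=20):
    """Top-k TF-IDF keywords (term -> weight) for one category's documents.
    Build once per class: {c: tfidf_keywords(docs_in_c) for c in classes}."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))
    tf = Counter(w for doc in docs for w in doc.split())
    weights = {w: tf[w] * math.log(n / df[w] + 1) for w in tf}
    top = sorted(weights, key=weights.get, reverse=True)[:top_k]
    return {w: weights[w] for w in top}

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def classify_and_update(text, category_keywords, lr=0.1):
    """Assign text to the most similar category, then blend its terms into
    that category's keyword weights (the self-learning update step)."""
    doc_vec = Counter(text.split())
    best = max(category_keywords,
               key=lambda c: cosine(doc_vec, category_keywords[c]))
    for w, v in doc_vec.items():
        category_keywords[best][w] = category_keywords[best].get(w, 0.0) + lr * v
    return best
```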

5.
Document Classification Based on Active Learning
In text categorization, the number of unlabeled documents is generally much greater than that of labeled documents. Text categorization is a classification problem in a high-dimensional vector space, and more training samples generally improve the accuracy of a text classifier, so how to add unlabeled documents to the training set in order to expand it is a valuable problem. This paper introduces the theory of active learning and applies it to text categorization, exploring how unlabeled documents can be used to improve classifier accuracy. We put forward an active-learning-based algorithm for text categorization; experiments on the Reuters news corpus show that when enough training samples are available, the algorithm effectively improves classifier accuracy by adopting unlabeled document samples.

6.
Text mining, intelligent text analysis, text data mining and knowledge discovery in text are commonly used aliases for the process of extracting relevant and non-trivial information from text. Some crucial issues arise when trying to solve this problem, such as document representation and the shortage of labeled data. This paper addresses these problems by introducing information from unlabeled documents into the training set, using the support vector machine (SVM) separating margin as the differentiating factor. Besides studying the influence of several pre-processing methods and drawing conclusions on their relative significance, we also evaluate the benefits of introducing background knowledge into an SVM text classifier. We further evaluate the possibility of learning actively and propose a method for successfully combining background knowledge and active learning. Experimental results show that the proposed techniques, used alone or combined, yield a considerable improvement in classification performance, even when only small labeled training sets are available.
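
A sketch of one round of margin-based selection with scikit-learn's LinearSVC, assuming binary labels in {0, 1}; the thresholds and function name are illustrative, and the paper's actual selection criteria may differ:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def margin_based_round(labeled_docs, y, unlabeled_docs,
                       bg_threshold=1.5, n_queries=10):
    """One round: use the SVM margin to (a) absorb confidently classified
    unlabeled docs as background knowledge and (b) pick the least confident
    docs to send to a human annotator (the active learning side)."""
    vec = TfidfVectorizer()
    X_lab = vec.fit_transform(labeled_docs)
    X_unl = vec.transform(unlabeled_docs)
    clf = LinearSVC().fit(X_lab, y)
    margin = clf.decision_function(X_unl)   # signed distance to hyperplane

    confident = np.abs(margin) > bg_threshold
    X_aug = vstack([X_lab, X_unl[confident]])
    # positive margin -> class 1 (assumes labels are exactly {0, 1})
    y_aug = np.concatenate([y, (margin[confident] > 0).astype(int)])

    query_idx = np.argsort(np.abs(margin))[:n_queries]  # most ambiguous docs
    return LinearSVC().fit(X_aug, y_aug), query_idx
```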

7.
The centroid classification method is efficient, but it needs a large number of training (labeled) documents to guarantee accuracy. Such documents are limited in number because labeling them costs substantial human effort, while the Web holds many unlabeled documents. We therefore improve the centroid method and propose the ONUC and OFFUC algorithms to remedy its sharp performance drop when training documents are scarce. Since the centroid method is easily affected by isolated points, an outlier-trimming step is applied. Experiments show that, compared with the plain centroid method and other classic semi-supervised algorithms, the proposed algorithms achieve better performance when training documents are very few.
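
A hedged sketch of a centroid classifier with outlier trimming, assuming TF-IDF row matrices and scikit-learn; the quantile-based trimming rule is an illustrative stand-in for the paper's edge-removal step, whose exact criterion is not given in the abstract:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def trimmed_centroids(X, y, trim=0.1):
    """Per-class centroid after dropping the `trim` fraction of documents
    least similar to the initial centroid (the isolated points)."""
    centroids, classes = [], sorted(set(y))
    y = np.array(y)
    for c in classes:
        rows = X[y == c]
        center = np.asarray(rows.mean(axis=0))
        sims = cosine_similarity(rows, center).ravel()
        keep = sims >= np.quantile(sims, trim)   # drop the outliers
        centroids.append(np.asarray(rows[keep].mean(axis=0)).ravel())
    return np.vstack(centroids), classes

def predict(X_new, centroids, classes):
    """Nearest-centroid assignment by cosine similarity."""
    sims = cosine_similarity(X_new, centroids)
    return [classes[i] for i in sims.argmax(axis=1)]
```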

8.
Automatic Chinese Text Classification Based on Association Rule Mining
With the rapid growth of electronic publications and Web documents, automatic document classification is becoming increasingly important. This paper proposes a Chinese text classification method based on association rules: documents are treated as transactions and keywords as items, and an improved association-rule mining algorithm mines the correlations between items and class labels. The mined rules form a classifier that can be used to label documents of unknown class. Experiments show that the algorithm quickly obtains understandable rules and achieves good recall and precision.
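
A minimal illustration of the transaction/item framing, restricted to single-keyword rules `word -> class` (the paper's improved miner handles more general rules); the support and confidence thresholds are illustrative:

```python
from collections import Counter, defaultdict

def mine_rules(docs, labels, min_support=0.02, min_conf=0.7):
    """Mine rules `word -> class` from (token-list, label) pairs.
    Returns {word: (class, confidence)} for rules passing both thresholds."""
    n = len(docs)
    word_count = Counter()
    word_class = defaultdict(Counter)
    for tokens, c in zip(docs, labels):
        for w in set(tokens):          # a doc is a transaction of keywords
            word_count[w] += 1
            word_class[w][c] += 1
    rules = {}
    for w, cnt in word_count.items():
        if cnt / n < min_support:
            continue
        c, hits = word_class[w].most_common(1)[0]
        conf = hits / cnt
        if conf >= min_conf:
            rules[w] = (c, conf)
    return rules

def classify(tokens, rules):
    """Each matching rule votes for its class, weighted by its confidence."""
    votes = Counter()
    for w in set(tokens):
        if w in rules:
            c, conf = rules[w]
            votes[c] += conf
    return votes.most_common(1)[0][0] if votes else None
```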

9.
An Active Text Labeling Method Using Nearest Neighbors and Information Entropy
Because labeling text data at scale is time-consuming and laborious, semi-supervised text classification, which uses a few labeled samples together with many unlabeled ones, has developed rapidly. In semi-supervised text classification the few labeled samples mainly initialize the classification model, and how well they are chosen affects the performance of the final model. To make the labeled samples fit the distribution of the original data as closely as possible, this paper proposes selecting the next batch of candidate samples while avoiding the k nearest neighbors of already-labeled samples, so that samples in different regions get more chances to be labeled. On this basis, to obtain more class information, the candidate with the maximum information entropy is chosen as the sample to label. Experiments on real text data demonstrate the effectiveness of the proposed method.
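
A sketch of the two-stage selection, assuming feature rows for the labeled and unlabeled pools and a `proba_unl` matrix of class probabilities from the current model; `k` and the fallback rule are illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_next_to_label(X_lab, X_unl, proba_unl, k=5):
    """Pick the next sample to hand to a human annotator.
    proba_unl: class-probability estimates for the unlabeled rows,
    e.g. from a classifier trained on the current labeled set."""
    # Stage 1: rule out unlabeled points that are k-NN of labeled ones,
    # so new annotations come from so-far-uncovered regions.
    nn = NearestNeighbors(n_neighbors=k).fit(X_unl)
    _, idx = nn.kneighbors(X_lab)
    excluded = set(idx.ravel())
    candidates = [i for i in range(X_unl.shape[0]) if i not in excluded]
    if not candidates:                     # everything excluded: fall back
        candidates = list(range(X_unl.shape[0]))
    # Stage 2: among candidates, pick the max-entropy (most uncertain) one.
    p = np.clip(proba_unl[candidates], 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return candidates[int(entropy.argmax())]
```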

10.
Existing threat-awareness algorithms label samples at a high cost and use only the labeled threat samples when training the classifier. This paper proposes an active learning algorithm based on graph constraints and pre-clustering that aims to train a better classifier, and thus perceive threat scenarios effectively, by lowering the cost of labeling threat samples and making full use of the unlabeled ones. The algorithm first trains a classifier on the labeled threat samples, then selects the most valuable sample from the unlabeled set, labels it, moves it from the unlabeled set into the labeled set, and retrains the classifier on the updated labeled set, repeating until a stopping condition is met. Simulations show that the algorithm reaches the target accuracy while reducing labeling cost and perceiving threat scenarios effectively.

11.
An Active Collaborative Semi-supervised Rough Set Classification Model
Rough set theory is a supervised learning model that generally needs a certain amount of labeled data to train a classifier. Many real problems, however, contain large quantities of unlabeled data, while labeled data are scarce because labeling is expensive. Combining active learning with co-training, this paper proposes a semi-supervised rough set model that can exploit unlabeled data to improve classification performance. The model uses a semi-supervised attribute reduction algorithm to extract two reducts with large diversity and builds a base classifier on each; following the active learning idea, it then selects the unlabeled samples on which the two classifiers disagree most for manual labeling, and the updated classifiers learn from each other collaboratively. Comparative experiments on UCI datasets show that the model clearly improves classification performance, in some cases reaching the best value on the dataset.

12.
We address the problem of predicting category labels for unlabeled videos in a large video dataset by using a ground-truth set of objectively labeled videos that we have created. Large video databases like YouTube require that a user uploading a new video assign to it a category label from a prescribed set of labels. Such category labeling is likely to be corrupted by the subjective biases of the uploader. Despite their noisy nature, these subjective labels are frequently used as the gold standard in algorithms for multimedia classification and retrieval. Our goal in this paper is NOT to propose yet another algorithm that predicts labels for unseen videos based on the subjective ground-truth. Rather, our goal is to demonstrate that video classification performance can be improved if, instead of using subjective labels, we first create an objectively labeled ground-truth set of videos and then train a classifier on that ground-truth to predict objective labels for the unlabeled videos.

13.
Data stream classification is an important research task in data mining. Most existing data stream classification algorithms are trained on labeled datasets, whereas in practical applications labeled data in streams are extremely scarce. Labels can be obtained by manual annotation, but annotation is expensive and time-consuming. Since unlabeled data are abundant and carry much hidden information, this paper proposes a Tri-training-based ensemble classification algorithm for data streams that exploits this information while preserving accuracy. The algorithm splits the stream into blocks with a sliding window; on the first k blocks, which contain both labeled and unlabeled data, it trains base classifiers with Tri-training and keeps updating them through iterative weighted voting until all unlabeled data have been labeled. The ensemble of k Tri-training models then predicts block k+1; classifiers with high error rates are discarded and new ones are built on the current block to update the model. Experiments on 10 UCI datasets show that, compared with classic algorithms, the proposed algorithm significantly improves classification accuracy on streams containing 80% unlabeled data.
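
A sketch of the core Tri-training update, assuming scikit-learn classifiers, sparse feature matrices, and a numpy label array; it omits the error-rate filtering, weighted voting, and sliding-window block handling of the full stream algorithm:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.base import clone

def tri_training_round(clfs, X_lab, y_lab, X_unl):
    """One Tri-training round: each of the three classifiers is retrained
    on the labeled data plus the unlabeled examples that the OTHER two
    classifiers agree on (those agreements become pseudo-labels)."""
    preds = [c.predict(X_unl) for c in clfs]
    new_clfs = []
    for i in range(3):
        j, k = [m for m in range(3) if m != i]
        agree = preds[j] == preds[k]          # peers agree -> pseudo-label
        X_aug = vstack([X_lab, X_unl[agree]])
        y_aug = np.concatenate([y_lab, preds[j][agree]])
        new_clfs.append(clone(clfs[i]).fit(X_aug, y_aug))
    return new_clfs
```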

14.
This paper deals with verb-verb morphological disambiguation of two different verbs that have the same inflected form. Verb-verb morphological ambiguity (VVMA) is one of the critical Korean part-of-speech (POS) tagging issues. The recognition of verb base forms related to ambiguous words depends highly on the lexical information in their surrounding contexts and the domains they occur in. However, current probabilistic morpheme-based POS tagging systems cannot handle VVMA adequately, since most of them are limited in how much word-level context they can reflect and are trained on too little labeled data to represent the lexical information required for VVMA disambiguation.

In this study, we suggest a classifier based on a large pool of raw text that contains sufficient lexical information to handle VVMA. The underlying idea is that we automatically generate an annotated training set applicable to ambiguity problems such as VVMA resolution via unlabeled unambiguous instances that belong to the same class. This makes it possible to label ambiguous instances with knowledge induced from unambiguous instances. Since the unambiguous instances have only one label, their annotated corpus can be generated automatically from unlabeled data.

In our problem, since not every conjugation of an irregular verb leads to the spelling changes that cause VVMA, training data for VVMA disambiguation are generated from instances of unambiguous conjugations related to each possible verb base form of the ambiguous words. This approach requires neither an additional annotation process for an initial training data set nor a selection process for good seeds to iteratively augment the labeled set, both of which are important issues in bootstrapping methods that use unlabeled data; this is a strength over previous related work. Furthermore, plenty of confident seeds that are unambiguous and offer enough coverage for the learning process are ensured as well.

We also suggest a strategy to extend the context information incrementally with web counts, applied only to selected test examples that are difficult to predict with the current classifier or that differ greatly from the pre-trained data set. As a result, automatic data generation and knowledge acquisition from unlabeled text for VVMA resolution improved the overall token-level tagging accuracy by 0.04%. In practice, 9-10% of verb-related tagging errors are fixed by the VVMA resolution, whose accuracy was about 98% using the Naïve Bayes classifier coupled with selective web counts.

15.
Word Sense Disambiguation by Learning Decision Trees from Unlabeled Data
In this paper we describe a machine learning approach to word sense disambiguation that uses unlabeled data. Our method is based on selective sampling with committees of decision trees. The committee members are trained on a small set of labeled examples, which is then augmented by a large number of unlabeled examples. Using unlabeled examples is important because obtaining labeled data is expensive and time-consuming, while it is easy and inexpensive to collect a large number of unlabeled examples. The idea behind this approach is that the labels of unlabeled examples can be estimated by using committees. Using additional unlabeled examples therefore improves the performance of word sense disambiguation and minimizes the cost of manual labeling. The effectiveness of this approach was examined on a raw corpus of one million words; using unlabeled data, we achieved an accuracy improvement of up to 20.2%.
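
A sketch of committee-based selective sampling with decision trees, assuming numpy label arrays; bootstrap resampling stands in for however the paper trains its committee members, and the unanimity rule is a simplification of its label-estimation scheme:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def committee_votes(X_lab, y_lab, X_unl, n_members=5, seed=0):
    """Train a committee on bootstrap resamples and collect its votes."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_members):
        idx = rng.integers(0, X_lab.shape[0], X_lab.shape[0])  # bootstrap
        tree = DecisionTreeClassifier().fit(X_lab[idx], y_lab[idx])
        votes.append(tree.predict(X_unl))
    return np.vstack(votes)            # shape: (members, n_unlabeled)

def split_by_agreement(votes):
    """Unanimous columns can take the committee's label as an estimate;
    split columns are the informative ones to sample for annotation."""
    unanimous = (votes == votes[0]).all(axis=0)
    return np.where(unanimous)[0], np.where(~unanimous)[0]
```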

16.
Web 2.0 provides user-friendly tools that allow people to create and publish content online. User-generated content often takes the form of short texts (e.g., blog posts, news feeds, snippets). This has motivated increasing interest in the analysis of short texts and, specifically, in their categorisation. Text categorisation is the task of classifying documents into a certain number of predefined categories. Traditional text classification techniques are mainly based on word-frequency statistical analysis and have proved inadequate for classifying short texts, where word occurrence is too sparse. Moreover, the classic approach to text categorisation is based on a learning process that requires a large number of labeled training texts to achieve accurate performance, and labeled documents might not be available even when unlabeled documents can easily be collected. This paper presents an approach to text categorisation that does not need a pre-classified set of training documents; the proposed method only requires the category names as user input. Each of these categories is defined by means of an ontology of terms modelled by a set of what we call proximity equations. Hence, our method is not based on category occurrence frequency, but depends strongly on the definition of each category and on how well a text fits that definition; it is therefore appropriate for short-text classification, where the frequency of occurrence of a category is very small or even zero. Another feature of our method is that the classification process relies on the ability of an extension of the standard Prolog language, named Bousi~Prolog, for flexible matching and knowledge representation. This declarative approach yields a text classifier that is quick and easy to build, and a classification process that is easy for the user to understand. Experimental results showed that the proposed method achieved a reasonably useful performance.

17.
Text Classification from Labeled and Unlabeled Documents using EM
This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However, these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.
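
A sketch of the basic EM procedure with scikit-learn's MultinomialNB, using hard labels in the E-step for brevity (the paper's EM uses soft posterior weights); the `weight` parameter echoes the paper's unlabeled-data weighting factor:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def em_naive_bayes(labeled_docs, y, unlabeled_docs, n_iter=10, weight=0.5):
    """Train on labeled docs, label the unlabeled docs, retrain on
    everything, iterate. `weight` in (0, 1] down-weights the unlabeled
    documents' contribution to the model."""
    vec = CountVectorizer()
    X_lab = vec.fit_transform(labeled_docs)
    X_unl = vec.transform(unlabeled_docs)
    X_all = vstack([X_lab, X_unl])
    w = np.concatenate([np.ones(X_lab.shape[0]),
                        weight * np.ones(X_unl.shape[0])])
    clf = MultinomialNB().fit(X_lab, y)            # seed classifier
    for _ in range(n_iter):
        y_unl = clf.predict(X_unl)                 # E-step (hard labels)
        y_all = np.concatenate([y, y_unl])
        clf = MultinomialNB().fit(X_all, y_all, sample_weight=w)  # M-step
    return clf
```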

18.
Co-S3OM: A Collaborative Semi-supervised Classification Algorithm
To improve the effectiveness of semi-supervised classification, this paper proposes Co-S3OM (coordination semi-supervised SOM), a semi-supervised classification algorithm based on SOM neural networks and co-training. The limited labeled samples are divided into three equal, non-overlapping training sets, each used to train a single classifier with an improved supervised SOM (SSOM) algorithm. The three classifiers vote jointly to mine the hidden information in the unlabeled samples, enlarging the labeled sample set and, in turn, each classifier's training set, until the final classifier is produced. Experiments on UCI datasets show that Co-S3OM achieves high labeling and classification rates.

19.
Feng Jianzhou, Ma Xiangcong. Acta Automatica Sinica, 2020, 46(8): 1759-1766
Fine-grained entity type classification (FETC) aims to map entities in text to a hierarchy of fine-grained types. Deep neural networks have recently made great progress on entity classification, but training an accurate neural model requires a sufficient amount of annotated data, and annotated corpora for FETC are very scarce, so classifying entities in domains without annotated corpora is a hard problem. For this setting, this paper proposes a transfer-learning-based FETC method. It first builds a mapping model that mines the semantic relations between entity types that have annotated corpora and those that do not, constructing for each unannotated type a mapping set of annotated types. A bidirectional long short-term memory (BiLSTM) model is then built, taking combinations of sentence vectors representing the mapping sets as input to train the unannotated types. An attention mechanism based on the semantic distance between each type in the mapping set and the corresponding unannotated type enables the classifier to recognize unseen entity types. Experiments show that our method performs well, achieving the goal of recognizing unseen named-entity types without any annotated corpus.

20.
Word sense disambiguation (WSD) is the problem of determining the right sense of a polysemous word in a given context. This paper investigates the use of unlabeled data for WSD within a framework of semi-supervised learning, in which labeled data are iteratively extended from unlabeled data. Focusing on this approach, we first explicitly identify and analyze three problems that inherently occur in the general bootstrapping algorithm: the imbalance of training data, the confidence of newly labeled examples, and the final classifier generation; all of these are considered in an integrated way within a common bootstrapping framework. We then propose solutions to these problems with the help of classifier combination strategies, resulting in several new variants of the general bootstrapping algorithm. Experiments conducted on the English lexical samples of Senseval-2 and Senseval-3 show that the proposed solutions are effective compared with previous studies and significantly improve supervised WSD.
