Similar Documents
20 similar documents retrieved (search time: 189 ms)
1.
A Transductive Multi-Label Document Classification Method   Cited by 3 (self-citations: 0, others: 3)
Real-world documents often belong to several categories at once, so applying multi-label learning to document classification is an important research direction. Existing multi-label document classification methods need a large number of correctly labeled documents to perform well, yet in practice only a small number of labeled documents are usually available for training. To exploit unlabeled documents, this paper proposes a transductive multi-label document classification method based on random walks, which uses large amounts of unlabeled documents to help improve classification performance. Experimental results show that the method outperforms CNMF, an existing transductive multi-label classification method.

2.
Facing the massive growth of online public-opinion information, classifying these opinion texts is a highly meaningful task. The paper first presents the representation model for text documents and the choice of feature selection functions. It then analyzes the characteristics of the random forest algorithm among classification learning algorithms and proposes determining a document's category by constructing an ensemble of document decision trees. In the experiments, a large corpus of online media text was collected and split into training and test sets; comparative tests yielded quantitative performance figures for common algorithms (including kNN, SMO, and SVM) against the proposed RF-based algorithm, showing that the proposed algorithm achieves a better overall classification rate and better classification stability.
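
A minimal sketch of the kind of pipeline this abstract describes, using scikit-learn's TfidfVectorizer and RandomForestClassifier; the corpus, labels, and hyperparameters below are illustrative placeholders, not the paper's data or settings.

    # Illustrative sketch: TF-IDF features plus a random forest text classifier.
    # The corpus, labels, and hyperparameters are toy placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    corpus = [
        "government issues new policy statement",
        "team wins the championship final",
        "new policy draws public debate",
        "star player injured before the match",
    ]
    labels = [0, 1, 0, 1]  # 0 = politics, 1 = sports (toy categories)

    X = TfidfVectorizer().fit_transform(corpus)           # document-term TF-IDF matrix
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)  # ensemble of decision trees
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))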

3.
Multi-label learning mainly addresses the ambiguity that arises when a single sample corresponds to multiple concept labels, and semi-supervised multi-label learning is a recent research direction within multi-label learning that tries to combine a small number of labeled samples with a large number of unlabeled samples to improve learning performance. To further mine the information in unlabeled samples and apply it to multi-label document classification, this paper proposes a semi-supervised multi-label learning algorithm based on Tri-training (MKSMLT). The algorithm first uses the k-nearest-neighbor algorithm to enlarge the labeled sample set, then trains classifiers with the Tri-training algorithm, transforming the multi-label learning problem into a label ranking problem. Experiments show that the algorithm effectively improves document classification performance.

4.
Feature selection is a common preprocessing step in document classification; reducing the dimensionality of the document feature space can improve classification performance. Because most feature selection algorithms ignore co-occurrence relationships between feature words, this paper proposes a method that uses associated features to enhance document classification, and designs a fast redundant-feature removal and selection algorithm for the high-dimensional vector space produced by feature expansion, so as to meet practical requirements for both classification performance and execution efficiency. The experiments use a naive Bayes classifier and compare the method with other algorithms in terms of dimensionality reduction, classification performance, and runtime.
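
For illustration, a minimal feature-selection pipeline in the same spirit, using generic chi-square selection rather than the associated-feature and redundancy-removal algorithm the abstract describes; the data and the value of k are placeholder assumptions.

    # Illustrative sketch: chi-square feature selection before a naive Bayes classifier.
    # Corpus, labels, and k are toy placeholders; this is generic selection, not the
    # associated-feature method of the abstract.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    corpus = [
        "interest rates rise again",
        "striker scores twice in derby",
        "central bank holds rates",
        "goalkeeper saves late penalty",
    ]
    labels = [0, 1, 0, 1]                      # 0 = finance, 1 = sports (toy categories)

    clf = make_pipeline(
        CountVectorizer(),
        SelectKBest(chi2, k=5),                # keep only the 5 most class-informative terms
        MultinomialNB(),
    )
    clf.fit(corpus, labels)
    print(clf.predict(["rates rise", "late goal"]))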

5.
陈杰  陈彩  梁毅 《计算机系统应用》2017,26(11):159-164
Feature extraction and vector representation of documents are key issues in document classification. Addressing these two points, this paper proposes a word2vec-based document classification method. The method collects a feature-word bag according to document frequency (DF) so as to retain as many of the important feature words in the document collection as possible, and exploits word2vec's latent semantic properties to replace semantically related feature words with a single topic word multiplied by an appropriate coefficient, effectively condensing the feature-word bag and reducing the dimensionality of the document vectors. The method also incorporates the TF-IDF algorithm to weight the feature words, giving each feature word a more appropriate weight. Comparative experiments against two other document classification methods show that the proposed word2vec-based method improves classification performance over both.
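
A minimal sketch of one common way to combine word2vec with TF-IDF weighting into dense document vectors; it approximates the idea above rather than reproducing the paper's topic-word substitution step, assumes gensim >= 4.0 and scikit-learn, and uses placeholder data and vector sizes.

    # Illustrative sketch: IDF-weighted average of word2vec vectors as a dense document
    # representation. Assumes gensim >= 4.0 and scikit-learn; data and sizes are placeholders.
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "machine learning improves text classification",
        "deep learning models classify documents",
        "football match ends in a draw",
    ]
    tokenized = [d.split() for d in docs]

    w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=0)   # toy embedding model
    tfidf = TfidfVectorizer().fit(docs)
    idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))       # word -> IDF weight

    def doc_vector(tokens):
        # weight each word vector by its IDF, then average over the document
        vecs = [w2v.wv[t] * idf.get(t, 1.0) for t in tokens if t in w2v.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

    doc_matrix = np.vstack([doc_vector(t) for t in tokenized])
    print(doc_matrix.shape)   # (3, 50): one dense 50-dimensional vector per document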

6.
TFIDF_NB Co-Training Algorithm   Cited by 2 (self-citations: 0, others: 2)
Text classification using a small number of labeled documents together with a large number of unlabeled documents has become an important research trend. Building on an analysis of the EM and co-training families of algorithms, this paper proposes a new co-training algorithm that incrementally trains a Bayes classifier and a TFIDF classifier together on a few labeled documents and many unlabeled documents. Experimental results show that the co-training algorithm achieves high accuracy, with an average error rate lower than both EM and standard co-training, indicating good overall performance.
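
A rough co-training sketch in the spirit of this abstract, with a naive Bayes classifier on word counts and a centroid classifier on TF-IDF features standing in for the paper's Bayes and TFIDF classifiers; the data, number of rounds, and incremental scheme are placeholder assumptions, not the published algorithm.

    # Rough co-training sketch (not the published algorithm): a naive Bayes classifier on
    # word counts and a centroid classifier on TF-IDF features pseudo-label unlabeled
    # documents for each other over a few rounds. All data and settings are placeholders.
    import numpy as np
    from scipy.sparse import vstack
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.neighbors import NearestCentroid

    labeled = ["good game great match", "stocks fall on weak earnings"]
    labels = np.array([0, 1])                      # 0 = sports, 1 = finance (toy categories)
    unlabeled = ["the team scored late", "markets rally after report"]

    cv = CountVectorizer()
    tv = TfidfVectorizer()
    Xl_c, Xu_c = cv.fit_transform(labeled), cv.transform(unlabeled)
    Xl_t, Xu_t = tv.fit_transform(labeled), tv.transform(unlabeled)

    nb = MultinomialNB().fit(Xl_c, labels)                  # view 1: Bayes on counts
    rc = NearestCentroid().fit(Xl_t.toarray(), labels)      # view 2: centroids on TF-IDF

    for _ in range(3):
        y_from_rc = rc.predict(Xu_t.toarray())   # centroid model pseudo-labels the pool for NB
        y_from_nb = nb.predict(Xu_c)             # NB pseudo-labels the pool for the centroid model
        nb.fit(vstack([Xl_c, Xu_c]), np.concatenate([labels, y_from_rc]))
        rc.fit(np.vstack([Xl_t.toarray(), Xu_t.toarray()]), np.concatenate([labels, y_from_nb]))

    print(nb.predict(cv.transform(["late goal wins the game"])))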

7.
Using indexing techniques, a double-index structure is built over incoming XML documents to improve the YFilter algorithm and optimize XML document filtering performance. With the help of the index structure, the algorithm looks ahead at the structural information of element nodes in the document and prunes in advance those element nodes that cannot lead to any matching result, avoiding a large amount of unnecessary query processing. Experimental results show that the algorithm achieves better filtering performance when the input XML documents are large.

8.
A Document Classification Method Based on an Extended Corner Classification Neural Network   Cited by 10 (self-citations: 0, others: 10)
CC4 is a novel corner classification training algorithm for three-layer feed-forward networks, originally used for document classification in the meta-search engine Anvish. When documents are of similar size, CC4 classifies them well; when their sizes differ greatly, however, its classification performance degrades. To address this problem, this paper extends the original CC4 network so that it can classify such documents effectively. To this end, an MDS-NN-based data indexing method is proposed that maps each document to a point in a k-dimensional space while preserving as much of the distance information between the original documents as possible. The index information is then converted into the 0/1 sequences accepted by CC4, extending the network so that it can take index information as input. Experimental results show that for documents whose sizes differ greatly, the extended CC4 network outperforms the original, and that the classification accuracy of the extended network is closely tied to the document indexing method.

9.
Document classification in natural language processing requires models to extract high-level features from low-level word vectors. Feature extraction in deep neural networks usually uses every word in a document, which does not adapt well to long documents; moreover, training deep neural networks requires large amounts of labeled data and often performs poorly under weak supervision. To meet these challenges, this work proposes a method for weakly supervised long-document classification. On one hand, a small amount of seed information is used to generate pseudo documents that augment the training data, countering the difficulty of improving accuracy when labeled data are scarce. On the other hand, recurrent local attention learning extracts summary features from only a few document fragments, which is sufficient to support subsequent category prediction and improves both the speed and accuracy of the model. Experiments show that the proposed pseudo-document generation model does augment the training data, with especially significant accuracy gains under weak supervision, and that the local-attention-based long-document classification model is markedly more accurate than the baselines while also processing documents quickly, making it practically useful.

10.
A Web Document Classification Algorithm Based on Manifold Learning and SVM   Cited by 7 (self-citations: 4, others: 3)
王自强  钱旭 《计算机工程》2009,35(15):38-40
To address Web document classification, this paper proposes an algorithm based on manifold learning and SVM. The algorithm uses the manifold learning method LPP to nonlinearly reduce the dimensionality of the high-dimensional Web document space of the training set, uncovering the meaningful low-dimensional structure hidden in the high-dimensional observations, and then performs classification in the reduced low-dimensional feature space with an SVM optimized by multiplicative update rules. Experimental results show that the algorithm achieves higher classification accuracy with less running time.

11.
The rapid development of information technology has produced an enormous accumulation of text data, a large portion of which is short text. Text classification techniques are important for automatically acquiring knowledge from such massive collections of short texts. However, because key words occur only a few times in a short text and labeled training samples are usually scarce, conventional text mining algorithms struggle to reach acceptable accuracy, while some semantics-based classification methods achieve good accuracy but are too inefficient for massive data. This paper proposes a novel short-text classification algorithm that builds a semantic feature graph of the text and classifies with a kNN-like method. Experiments show that, when classifying massive amounts of short texts, the algorithm outperforms other algorithms in both accuracy and efficiency.

12.
Text Classification from Labeled and Unlabeled Documents using EM   Cited by 51 (self-citations: 0, others: 51)
This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.
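
A minimal sketch of the basic procedure in a simplified "hard EM" (self-training) form: train naive Bayes on the labeled documents, label the unlabeled pool, retrain on everything, and iterate. The data and iteration count are placeholders, and the paper's probabilistic labels, weighting factor, and multiple mixture components are omitted.

    # Simplified "hard EM" / self-training sketch of the procedure described above:
    # train naive Bayes on labeled documents, label the unlabeled pool, retrain on
    # everything, iterate. Data and iteration count are placeholders.
    import numpy as np
    from scipy.sparse import vstack
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    labeled = ["cheap pills buy now", "meeting moved to friday"]
    labels = np.array([1, 0])                      # 1 = spam, 0 = ham (toy categories)
    unlabeled = ["buy cheap meds now", "see you at the friday meeting", "limited offer buy today"]

    cv = CountVectorizer()
    Xl = cv.fit_transform(labeled)
    Xu = cv.transform(unlabeled)

    nb = MultinomialNB().fit(Xl, labels)           # initial classifier from labeled data only
    for _ in range(5):
        pseudo = nb.predict(Xu)                    # E-step (hard): label the unlabeled pool
        nb = MultinomialNB().fit(vstack([Xl, Xu]),  # M-step: retrain on all documents
                                 np.concatenate([labels, pseudo]))

    print(nb.predict(cv.transform(["cheap offer now"])))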

13.
沙莎  罗巍  罗三定 《计算机工程》2005,31(14):170-171,198
This paper proposes a concept feature extraction method based on the idea of classifying documents by the centroid vectors of a document collection. The method extracts vectors that express concept features from the document collection, mines the deeper differences among concept features, and expresses the abstract concepts implied in documents, providing a good foundation for building a concept network based on a domain knowledge base.

14.
Classification is a key problem in machine learning/data mining. Algorithms for classification have the ability to predict the class of a new instance after having been trained on data representing past experience in classifying instances. However, the presence of a large number of features in training data can hurt the classification capacity of a machine learning algorithm. The Feature Selection problem involves discovering a subset of features such that a classifier built only with this subset would attain predictive accuracy no worse than a classifier built from the entire set of features. Several algorithms have been proposed to solve this problem. In this paper we discuss how parallelism can be used to improve the performance of feature selection algorithms. In particular, we present, discuss and evaluate a coarse-grained parallel version of the feature selection algorithm FortalFS. This algorithm performs well compared with other solutions and it has certain characteristics that make it a good candidate for parallelization. Our parallel design is based on the master-slave design pattern. Promising results show that this approach is able to achieve near optimum speedups in the context of Amdahl's Law.
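
As a rough illustration of the master-slave pattern mentioned above (not the FortalFS algorithm itself), the sketch below has a master process distribute candidate feature subsets to worker processes that each score a classifier on their subset; the dataset, candidate subsets, and scoring are placeholder assumptions.

    # Master-slave sketch of coarse-grained parallel feature-subset evaluation (not
    # FortalFS itself): the master farms candidate subsets out to worker processes,
    # each scoring a classifier on its subset. Dataset, subsets, and scoring are placeholders.
    from multiprocessing import Pool
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)

    def score_subset(cols):
        # worker task: evaluate a classifier restricted to the given feature columns
        return cols, cross_val_score(GaussianNB(), X[:, cols], y, cv=3).mean()

    if __name__ == "__main__":
        candidate_subsets = [[0, 1], [2, 3], [0, 2], [1, 3]]       # toy candidate subsets
        with Pool(processes=2) as pool:                            # the "slave" workers
            results = pool.map(score_subset, candidate_subsets)    # master distributes the work
        best = max(results, key=lambda r: r[1])
        print("best subset and score:", best)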

15.
Because the labor needed to manually label a huge training sample set is usually not available, the problem of hyperspectral image classification often suffers from a lack of labeled training samples. At the same time, hyperspectral data represented in a large number of bands are usually highly correlated. In this paper, to overcome the small sample problem in hyperspectral image classification, correlation of spectral bands is fully utilized to generate multiple new sub-samples from each original sample. The number of labeled training samples is thus increased several times. Experimental results demonstrate that the proposed method has an obvious advantage when the number of labeled samples is small.

16.
A multi-objective GRASP for partial classification   Cited by 4 (self-citations: 1, others: 3)
Metaheuristic algorithms have been used successfully in a number of data mining contexts and specifically in the production of classification rules. Classification rules describe a class of interest or a subset of this class, and as such may also be used as an aid in prediction. The production and selection of classification rules for a particular class of the database is often referred to as partial classification. Since partial classification rules are often evaluated according to a number of conflicting objectives, the generation of such rules is a task that is well suited to a multi-objective (MO) metaheuristic approach. In this paper we discuss how to adapt well known MO algorithms for the task of partial classification. Additionally, we introduce a new MO algorithm for this task based on a greedy randomized adaptive search procedure (GRASP). GRASP has been applied to a number of problems in combinatorial optimization, but it has very seldom been used in a MO setting, and generally only through repeated optimization of single-objective problems, using either linear combinations of the objectives or additional constraints. The approach presented takes advantage of some specific characteristics of the data mining problem being solved, allowing for the very effective construction of a set of solutions that form the starting point for the local search phase of the GRASP. The resulting algorithm is guided solely by the concepts of dominance and Pareto-optimality. We present experimental results for our partial classification GRASP and other MO metaheuristics. These show that such algorithms are generally very well suited to this data mining task and furthermore, the GRASP brings additional efficiency to the search for partial classification rules.

17.
Traditional text classification methods require a large number of labeled samples to obtain a good classifier, yet in real text classification applications such samples are usually hard to obtain. How to achieve good classification with only a few labeled samples and many unlabeled samples has therefore become a popular research topic. This paper proposes a new method for enlarging the labeled sample set: it first extracts representative features for each category from the labeled samples, and then uses these representative features to find similar samples in the unlabeled set and add them to the labeled set. Experiments show that the method effectively improves classification performance.
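
A minimal sketch of the expansion idea, with class centroids of TF-IDF vectors standing in for the paper's "representative features": unlabeled documents similar enough to a centroid are moved into the labeled set with that category. The data and similarity threshold are placeholders.

    # Sketch of the expansion idea with class centroids standing in for "representative
    # features": unlabeled documents similar enough to a class centroid are moved into
    # the labeled set with that class. Data and threshold are placeholders.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    labeled = ["stock prices rise sharply", "team wins the cup final"]
    labels = np.array([0, 1])                      # 0 = finance, 1 = sports (toy categories)
    unlabeled = ["markets rise after earnings", "players celebrate the win"]

    tv = TfidfVectorizer()
    Xl = tv.fit_transform(labeled)
    Xu = tv.transform(unlabeled)

    centroids = np.vstack([np.asarray(Xl[np.where(labels == c)[0]].mean(axis=0))
                           for c in np.unique(labels)])
    sims = cosine_similarity(Xu, centroids)        # unlabeled docs vs. class centroids

    threshold = 0.2                                # placeholder confidence threshold
    for i, row in enumerate(sims):
        c = int(row.argmax())
        if row[c] >= threshold:                    # confident enough: adopt the document
            labeled.append(unlabeled[i])
            labels = np.append(labels, c)

    print(list(zip(labeled, labels)))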

18.
During the last decades the Web has become the greatest repository of digital information. In order to organize all this information, several text categorization methods have been developed, achieving accurate results in most cases and in very different domains. Due to the recent use of the Internet as a communication medium, short texts such as news, tweets, blogs, and product reviews are more common every day. In this context, there are two main challenges; on the one hand, the length of these documents is short, and therefore the word frequencies are not informative enough, making text categorization even more difficult than usual. On the other hand, topics are changing constantly at a fast rate, causing a lack of adequate amounts of training data. In order to deal with these two problems we consider a text classification method that is supported on the idea that similar documents may belong to the same category. Mainly, we propose a neighborhood consensus classification method that classifies documents by considering their own information as well as information about the category assigned to other similar documents from the same target collection. In particular, the short texts we used in our evaluation are news titles with an average of 8 words. Experimental results are encouraging; they indicate that leveraging information from similar documents helped to improve classification accuracy and that the proposed method is especially useful when labeled training resources are limited.

19.
A Survey of Classification Methods for Imbalanced Data   Cited by 1 (self-citations: 0, others: 1)
With the rapid development of information technology, data in every field are being produced, collected, and stored at an unprecedented rate, and how to process these data intelligently so as to exploit the valuable information they contain has become a hot topic in both theory and application. Data classification, as a basic data processing method, is widely used in intelligent data processing. Traditional classification methods usually assume that class distributions are balanced and misclassification costs are equal; in reality, however, data are often imbalanced: one class has fewer samples than the others, and that minority class carries a higher misclassification cost. When traditional classification algorithms are applied to imbalanced data, the numerical skew between the majority and minority classes means that maximizing overall accuracy biases the model toward the majority class and neglects the minority class, resulting in low accuracy on the minority class. Designing classification algorithms for imbalanced data that preserve accuracy on both the majority and minority classes has therefore become a research hotspot in machine learning, and a series of effective imbalanced-data classification methods has emerged. This paper gives a fairly comprehensive review of existing imbalanced-data classification methods, summarizing and comparing them at the data preprocessing, feature, and algorithm levels, discusses the remaining challenges in light of current machine learning research, and finally outlines future research directions.

20.
Most existing data stream classification algorithms use supervised learning and require large amounts of labeled data to train the classifier, but labeling data is costly, which limits their practicality. To address this, this paper proposes SEClass, an ensemble classification algorithm based on semi-supervised learning that can use a small amount of labeled data together with a large amount of unlabeled data to train and update an ensemble classifier, and that classifies test data by majority voting. Experimental results show that, with the same amount of labeled training data, SEClass is on average 5.33% more accurate than the latest supervised ensemble classification algorithm, and its running time grows linearly with the number of attribute dimensions and class labels, making it suitable for high-dimensional, high-speed data stream classification.
