Similar Documents
 20 similar documents found (search time: 786 ms)
1.
樊康新 《计算机工程》2009,35(24):191-193
To address shortcomings of the naive Bayes (NB) classifier, such as the model's sensitivity to training samples and the difficulty of improving classification accuracy, this paper proposes a combined NB text classifier built from multiple feature-selection methods. Following the Boosting paradigm, several different feature-selection methods are used to build the feature-word sets of the texts; NB classifiers trained on these sets serve as the base classifiers of the Boosting iterations, and the final combined NB text classifier is produced by weighted voting over the base classifiers. Experimental results show that the combined classifier outperforms a single NB text classifier.
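The combination the abstract describes can be illustrated with a minimal, self-contained sketch: several naive Bayes base classifiers, each trained on a different feature subset (standing in for the paper's different feature-selection methods, e.g. IG or chi-square), combined by weighted voting. All documents, feature sets, and weights below are invented toy values, not from the paper.

```python
import math
from collections import defaultdict

def train_nb(docs, labels, features):
    """Multinomial NB with Laplace smoothing, restricted to `features`."""
    counts = {c: defaultdict(int) for c in set(labels)}
    totals = defaultdict(int)
    priors = defaultdict(int)
    for words, c in zip(docs, labels):
        priors[c] += 1
        for w in words:
            if w in features:
                counts[c][w] += 1
                totals[c] += 1
    vocab = len(features)
    def predict(words):
        best, best_lp = None, -math.inf
        for c in counts:
            # Unnormalized log-prior is enough for ranking classes.
            lp = math.log(priors[c])
            for w in words:
                if w in features:
                    lp += math.log((counts[c][w] + 1) / (totals[c] + vocab))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
    return predict

docs = [["cheap", "pills", "buy"], ["meeting", "agenda"],
        ["buy", "now", "cheap"], ["project", "meeting", "notes"]]
labels = ["spam", "ham", "spam", "ham"]

# Each base classifier sees a different feature-word set, mimicking the
# use of different feature-selection methods per Boosting round.
feature_sets = [{"cheap", "meeting"}, {"buy", "agenda", "project"},
                {"pills", "notes", "now"}]
base = [train_nb(docs, labels, fs) for fs in feature_sets]
alphas = [1.0, 1.0, 1.0]  # uniform vote weights for this sketch

def vote(words):
    score = defaultdict(float)
    for clf, a in zip(base, alphas):
        score[clf(words)] += a
    return max(score, key=score.get)

print(vote(["cheap", "buy"]))
```

In a full Boosting run the `alphas` would be set from each base classifier's weighted training error rather than held uniform.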

2.
Text classification based on the vector space model suffers from low classifier efficiency because text vectors are high-dimensional. To remedy this, a new cluster-partition-based text classification method is proposed. Its core idea is to partition the training documents into several clusters according to inter-vector distances in the vector space, with documents in the same cluster sharing the same class. At test time, a document's class is determined by the cluster it falls into. The method is compared with the traditional k-NN text classifier, and experimental results show that it generalizes well in high-dimensional spaces and has excellent time performance.
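The test-time step above, assigning a document to the cluster it falls into, can be sketched as nearest-centroid classification. Here, for simplicity, each class forms a single cluster; the vectors and class names are toy data, not from the paper.

```python
import math

def centroid(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy training vectors grouped by class; one cluster per class here.
train = {"sports": [[1.0, 0.1], [0.9, 0.0]],
         "finance": [[0.1, 1.0], [0.0, 0.8]]}
centroids = {c: centroid(vs) for c, vs in train.items()}

def classify(vec):
    """Label of the nearest cluster centroid."""
    return min(centroids, key=lambda c: dist(vec, centroids[c]))

print(classify([0.8, 0.2]))
```

At test time only one distance per cluster is computed, versus one distance per training document for k-NN, which is the source of the time-performance gain the abstract reports.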

3.
The naive Bayesian classifier is an effective text classification method, but because it is highly stable, its performance is hard to improve through the Boosting mechanism. The key problem in using naive Bayesian classifiers as Boosting base learners is therefore how to destabilize them. Three methods for breaking the stability of the naive Bayesian learner are proposed: the first alters the training samples; the second uses random attribute-subset selection; the third builds a different feature-word set in each Boosting iteration using a different text feature extraction method. Experiments show that each method has its own strengths and weaknesses, but all are more accurate and efficient than the original approach.

4.
Design of a Chinese Text Classifier   Cited: 6 (self-citations: 0, other: 6)
Text categorization is the process of automatically assigning a text to a type, under a given category scheme, according to its content. This paper applies spherical k-means clustering to determine each text's class label and builds the classifier with a Boosting algorithm. The resulting classifier has the following properties: it is designed for corpora whose class labels are unknown, so it is highly practical; it can add new classes as the corpus changes, giving good extensibility; and, being based on Boosting, it achieves good classification accuracy.
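The label-assignment step of spherical k-means differs from ordinary k-means in using cosine similarity rather than Euclidean distance; a document goes to the centroid with the highest cosine similarity. A toy sketch of that step (centroids and vectors are invented, not from the paper):

```python
import math

def cos(a, b):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two toy cluster centroids on the unit sphere.
centroids = [[1.0, 0.0], [0.0, 1.0]]

def assign(doc):
    """Index of the centroid most cosine-similar to `doc`."""
    return max(range(len(centroids)), key=lambda i: cos(doc, centroids[i]))

print(assign([0.9, 0.1]))
```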

5.
方丁  王刚 《计算机系统应用》2012,21(7):177-181,248
With the rapid growth of Web 2.0, ever more users are willing to share their opinions and experiences on the Internet. Such review information is expanding so quickly that manual methods cannot keep up with collecting and processing the massive volume online, which has given rise to computer-based text sentiment classification; one of its key research goals is improving classification accuracy. Since ensemble learning is an effective way to raise accuracy and has outperformed single classifiers in many domains, this paper proposes an ensemble-learning-based text sentiment classification method. Experimental results show that three common ensemble methods, Bagging, Boosting, and Random Subspace, all improve the accuracy of the base classifiers, and that across different base classifiers Random Subspace is statistically superior to Bagging and Boosting. These results further confirm the effectiveness of ensemble learning for text sentiment classification.

6.
Conventional word embeddings encode a word's different contexts into the same parameter space, so word vectors fail to distinguish the senses of polysemous words; CNNs tend to focus on local text features while neglecting sequential semantics, and BiGRU networks learn sequential, global semantics but under-extract key local features. To address these problems, a CNN-BiGRU text classification model based on part-of-speech (POS) features is proposed. POS features are introduced to build POS vectors; crossing POS vectors with word vectors yields enhanced word vectors that improve the text representation. A CNN extracts local representations of the enhanced word vectors, a BiGRU captures their global contextual representation, the two representations are fused into deep semantic features, and these features are fed to a Softmax classifier for prediction. Experimental results show that the model improves classification accuracy and models and recognizes text semantics well.

7.
Almost all current short-text classification methods use word vectors: whether with machine learning or deep learning, the underlying computation operates on numbers, and word vectors serve to convert Chinese characters into numeric information a computer can process. ERNIE is a word-vector model proposed by Baidu and designed mainly for Chinese. This work combines ERNIE word vectors with a deep pyramid convolutional neural network (DPCNN) to classify the titles of Chinese news texts. Comparative experiments show that the short-text classification model combining ERNIE word vectors with the deep pyramid CNN achieves high classification accuracy.

8.
Boosting-Based Multi-Class Obstacle Recognition for Intelligent Vehicles   Cited: 2 (self-citations: 1, other: 1)
A Boosting-based binary-tree support vector machine (BBT-SVM) is proposed. According to the occurrence probabilities of the various obstacle classes in urban traffic environments and the inter-class differences between patterns, an SVM tree structure suited to obstacle recognition for intelligent vehicles is designed. The SVM classifier at each node is improved with Boosting ensemble learning, which reduces accumulated error and raises classification accuracy and generalization ability. Experimental results show that the method can recognize six common obstacle patterns in urban traffic scenes online and in real time.

9.
Many practical problems involve multi-class classification, which effectively narrows the comprehension gap between users and computers. In traditional multi-class Boosting, the multi-class loss function is not necessarily guess-aversive, and the combination of multi-class weak learners is restricted to a linear weighted sum. To obtain a highly accurate final classifier, the multi-class loss function should maximize the multi-class margin and be Bayes-consistent and guess-aversive. Moreover, the weaknesses of the weak learners may limit a linear classifier's performance, whereas their nonlinear combination can provide stronger discriminative power. Based on these two observations, an adaptive multi-class Boosting classifier, the SOHPBoost algorithm, is designed. At each iteration, SOHPBoost integrates the optimal multi-class weak learner using either vector addition or the Hadamard product. This adaptive process yields a sum of Hadamard products of multi-class weak learners, thereby mining the hidden structure of the data set. Experimental results show that SOHPBoost delivers better multi-class classification performance.
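The two combination operators SOHPBoost chooses between, vector addition and the element-wise (Hadamard) product of per-class score vectors, are easy to state concretely; the score vectors below are illustrative toy values only.

```python
def hadamard(a, b):
    """Element-wise (Hadamard) product of two per-class score vectors."""
    return [x * y for x, y in zip(a, b)]

def vec_add(a, b):
    """Element-wise sum of two per-class score vectors."""
    return [x + y for x, y in zip(a, b)]

s1 = [0.5, 2.0, 1.0]  # toy per-class scores from one weak learner
s2 = [2.0, 0.5, 3.0]  # toy per-class scores from another
print(hadamard(s1, s2), vec_add(s1, s2))
```

The addition keeps the ensemble linear in the weak learners; the product is the nonlinear coupling the abstract argues can add discriminative power.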

10.
To address the weak memorization ability of word-vector text classification models and their lack of global word-feature information, a text classification model based on wide features and word vectors (WideText) is proposed. The text is first cleaned, segmented, and token-encoded, and a dictionary is defined; the global term frequency-inverse document frequency (TF-IDF) of each token is computed and each text is vectorized. Words in the input text are mapped into a word-embedding matrix; after embedding and averaging, the word-vector features are concatenated with the TF-IDF-based text-vector features and passed to the output layer, which computes the probability of each class. Built on low-dimensional word vectors combined with the expressive power of text-vector features, the model has good generalization and memorization ability. Experimental results show that, with the wide features added, WideText not only clearly outperforms word-vector text classification models but is also slightly better than a feed-forward neural network classifier.
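The feature construction described above, an averaged word-embedding vector concatenated with a TF-IDF document vector, can be sketched as follows. The corpus, the two-dimensional "embeddings", and the IDF variant used are all invented for illustration, not taken from the paper.

```python
import math
from collections import Counter

corpus = [["stock", "market", "rises"], ["team", "wins", "match"]]
vocab = sorted({w for doc in corpus for w in doc})
N = len(corpus)
# Smoothed IDF variant (one of several common formulas).
idf = {w: math.log(N / sum(w in doc for doc in corpus)) + 1 for w in vocab}

# Fake 2-d word embeddings keyed by vocabulary index.
emb = {w: [float(i), float(i) % 2] for i, w in enumerate(vocab)}

def features(doc):
    """Concatenate averaged embedding ('deep') with TF-IDF vector ('wide')."""
    tf = Counter(doc)
    tfidf = [tf[w] / len(doc) * idf[w] for w in vocab]
    avg = [sum(emb[w][k] for w in doc) / len(doc) for k in range(2)]
    return avg + tfidf

vec = features(["stock", "market"])
print(len(vec))  # 2 embedding dims + |vocab| TF-IDF dims
```

The concatenated vector then feeds a softmax output layer in the full model.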

11.
Text categorization (TC) is the automated assignment of text documents to predefined categories based on document contents. TC has been an application domain for many learning approaches, which have proved effective. Nevertheless, TC poses many challenges to machine learning. In this paper, we suggest, for text categorization, the integration of external WordNet lexical information to supplement training data for a semi-supervised clustering algorithm which (i) uses a finite design set of labeled data to (ii) help agglomerative hierarchical clustering algorithms (AHC) partition a finite set of unlabeled data and then (iii) terminates without the capacity to classify other objects. This algorithm is the "semi-supervised agglomerative hierarchical clustering algorithm" (ssAHC). Our experiments use the Reuters-21578 collection and consist of binary classifications for categories selected from the 89 TOPICS classes of the Reuters collection. Using the vector space model (VSM), each document is represented by its original feature vector augmented with an external feature vector generated using WordNet. We verify experimentally that the integration of WordNet helps ssAHC improve its performance, effectively addresses the classification of documents into categories with few training documents, and does not interfere with the use of training data. © 2001 John Wiley & Sons, Inc.

12.
To address the facts that some multi-label text classification algorithms ignore text-term correlation and have limited accuracy, an ensemble multi-label text classification method combining rotation forest and AdaBoost classifiers is proposed. First, the sample set is partitioned by the rotation forest algorithm, and each sample subset is mapped into a new feature space via feature transformation, producing multiple highly diverse new sample subsets. Then, based on the AdaBoost algorithm, several AdaBoost base classifiers are built on the sample subsets through repeated iterations. Finally, the decisions of the base classifiers are fused by probability averaging to make the final label prediction. Experimental results on four benchmark data sets show that the method performs strongly on average precision, coverage, ranking loss, Hamming loss, and one-error.
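The final fusion step, probability averaging over the base classifiers followed by per-label thresholding, can be sketched as below; the per-label probabilities and the 0.5 threshold are invented toy values, not from the paper.

```python
def fuse(prob_lists, threshold=0.5):
    """Average per-label probabilities across base classifiers and
    predict every label whose mean probability reaches the threshold."""
    n = len(prob_lists)
    labels = prob_lists[0].keys()
    avg = {l: sum(p[l] for p in prob_lists) / n for l in labels}
    return sorted(l for l, p in avg.items() if p >= threshold)

# Toy outputs from three base classifiers over three candidate labels.
base_outputs = [
    {"politics": 0.9, "economy": 0.6, "sport": 0.1},
    {"politics": 0.8, "economy": 0.3, "sport": 0.2},
    {"politics": 0.7, "economy": 0.7, "sport": 0.0},
]
print(fuse(base_outputs))
```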

13.
We propose a new approach to text categorization known as generalized instance set (GIS) algorithm under the framework of generalized instance patterns. Our GIS algorithm unifies the strengths of k-NN and linear classifiers and adapts to characteristics of text categorization problems. It focuses on refining the original instances and constructs a set of generalized instances. We also propose a metamodel framework based on category feature characteristics. It has a metalearning phase which discovers a relationship between category feature characteristics and each component algorithm. Extensive experiments have been conducted on two large-scale document corpora for both GIS and the metamodel. The results demonstrate that both approaches generally achieve promising text categorization performance.

14.
Boosting is a popular machine learning algorithm. Using AdaBoost.MH from the Boosting family as the classification algorithm, a Chinese text classifier is designed, and an evaluation method and results are given. The evaluation shows that the classifier's accuracy is comparable to that of SVM and better than that of classifiers based on other classification algorithms.

15.
罗军  况夯 《计算机应用》2008,28(9):2386-2388
A novel text classification method based on Boosting with fuzzy classification is proposed. Latent semantic indexing (LSI) is first applied to select text features; a Boosting algorithm then ensembles the learning of fuzzy classifiers. In each training iteration, the algorithm adjusts the distribution over the training samples and uses a genetic algorithm to generate classification rules, reducing the weights of samples that the existing rules classify correctly so that newly generated rules concentrate on the hard-to-classify samples. Experimental results show that the text classification algorithm achieves good classification performance.

16.
Text categorization systems often use machine learning techniques to induce document classifiers from preclassified examples. The fact that each example document belongs to many classes often leads to very high computational costs that sometimes grow exponentially in the number of features. Seeking to reduce these costs, we explored the possibility of running a "baseline induction algorithm" separately for subsets of features, obtaining a set of classifiers to be combined. For the specific case of classifiers that return not only class labels but also confidences in these labels, we investigate here a few alternative fusion techniques, including our own mechanism that was inspired by the Dempster-Shafer Theory. The paper describes the algorithm and, in our specific case study, compares its performance to that of more traditional mechanisms.
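A Dempster-Shafer-inspired fusion of two classifiers' label confidences can be sketched in its simplest form, where all mass sits on singleton labels (no compound hypotheses): agreeing masses are multiplied and renormalized by the non-conflicting total. This simplification and all numbers below are illustrative, not the paper's actual mechanism.

```python
def combine(m1, m2):
    """Dempster-style combination for singleton-only mass assignments:
    multiply agreeing masses, then renormalize by the agreeing total."""
    agree = {l: m1[l] * m2[l] for l in m1}
    k = sum(agree.values())  # total non-conflicting mass
    return {l: v / k for l, v in agree.items()}

m1 = {"grain": 0.7, "trade": 0.3}  # classifier 1's label confidences
m2 = {"grain": 0.6, "trade": 0.4}  # classifier 2's label confidences
fused = combine(m1, m2)
print(max(fused, key=fused.get))
```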

17.
Research in text categorization has largely been confined to whole-document-level classification, probably due to the lack of full-text test collections. However, the full-length documents available today in large quantities pose renewed interest in text classification. A document is usually written in an organized structure to present its main topic(s). This structure can be expressed as a sequence of subtopic text blocks, or passages. To reflect the subtopic structure of a document, we propose a new passage-level, or passage-based, text categorization model, which segments a test document into several passages, assigns categories to each passage, and merges the passage categories into document categories. Compared with traditional document-level categorization, two additional steps, passage splitting and category merging, are required in this model. Using four subsets of the Reuters text categorization test collection and a full-text test collection whose documents vary from tens to hundreds of kilobytes, we evaluate the proposed model, especially the effectiveness of various passage types and the importance of passage location in category merging. Our results show that simple windows are best for all test collections tested in these experiments. We also found that passages contribute to the main topic(s) to different degrees, depending on their location in the test document.

18.
In text classification research, ensemble learning is an effective way to improve classifier performance, and Bagging is a popular ensemble learning algorithm. To address the problem that Bagging gives its weak classifiers equal weights, an improved Bagging algorithm is proposed: voting weights are obtained by computing the credibility of each weak classifier's results. Applying this within the Attribute Bagging algorithm, a Chinese text classifier is designed, using kNN as the weak-classifier base model to classify the news collection provided by the Sogou Lab. Experiments show that the algorithm achieves better classification accuracy than Attribute Bagging.
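The credibility-weighted voting the abstract outlines can be sketched as follows: each weak classifier's vote counts with a weight derived from its credibility (e.g. held-out accuracy). The predictions and weights here are invented toy values.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine weak-classifier predictions by credibility-weighted voting."""
    score = defaultdict(float)
    for p, w in zip(predictions, weights):
        score[p] += w
    return max(score, key=score.get)

preds = ["news", "sport", "news", "sport", "sport"]   # one vote per weak kNN
weights = [0.9, 0.4, 0.8, 0.3, 0.2]                   # credibility of each
print(weighted_vote(preds, weights))
```

Note that "sport" wins a plain majority vote (3 of 5), but the two credible classifiers outweigh it, which is exactly the behavior the improved algorithm targets.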

19.
Web video categorization is a fundamental task for web video search. In this paper, we explore web video categorization from a new perspective, by integrating the model-based and data-driven approaches to boost the performance. The boosting comes from two aspects: one is the performance improvement for text classifiers through query expansion from related videos and user videos. The model-based classifiers are built based on the text features extracted from title and tags. Related videos and user videos act as external resources for compensating the shortcoming of the limited and noisy text features. Query expansion is adopted to reinforce the classification performance of text features through related videos and user videos. The other improvement is derived from the integration of model-based classification and data-driven majority voting from related videos and user videos. From the data-driven viewpoint, related videos and user videos are treated as sources for majority voting from the perspective of video relevance and user interest, respectively. Semantic meaning from text, video relevance from related videos, and user interest induced from user videos, are combined to robustly determine the video category. Their combination from semantics, relevance and interest further improves the performance of web video categorization. Experiments on YouTube videos demonstrate the significant improvement of the proposed approach compared to the traditional text based classifiers.  相似文献   

20.
Boosting Algorithms for Parallel and Distributed Learning   Cited: 1 (self-citations: 0, other: 1)
The growing amount of available information and its distributed and heterogeneous nature has a major impact on the field of data mining. In this paper, we propose a framework for parallel and distributed boosting algorithms intended for efficient integrating specialized classifiers learned over very large, distributed and possibly heterogeneous databases that cannot fit into main computer memory. Boosting is a popular technique for constructing highly accurate classifier ensembles, where the classifiers are trained serially, with the weights on the training instances adaptively set according to the performance of previous classifiers. Our parallel boosting algorithm is designed for tightly coupled shared memory systems with a small number of processors, with an objective of achieving the maximal prediction accuracy in fewer iterations than boosting on a single processor. After all processors learn classifiers in parallel at each boosting round, they are combined according to the confidence of their prediction. Our distributed boosting algorithm is proposed primarily for learning from several disjoint data sites when the data cannot be merged together, although it can also be used for parallel learning where a massive data set is partitioned into several disjoint subsets for a more efficient analysis. At each boosting round, the proposed method combines classifiers from all sites and creates a classifier ensemble on each site. The final classifier is constructed as an ensemble of all classifier ensembles built on disjoint data sets. The new proposed methods applied to several data sets have shown that parallel boosting can achieve the same or even better prediction accuracy considerably faster than the standard sequential boosting. 
Results from the experiments also indicate that distributed boosting has comparable or slightly improved classification accuracy over standard boosting, while requiring much less memory and computational time since it uses smaller data sets.
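The distributed combination described above, a final classifier built as an ensemble of per-site ensembles, can be sketched with toy stand-in classifiers: each site votes internally, then the sites' verdicts are combined. The threshold "classifiers" on a one-dimensional input are purely illustrative.

```python
from collections import Counter

def site_predict(ensemble, x):
    """Majority vote within one site's classifier ensemble."""
    return Counter(clf(x) for clf in ensemble).most_common(1)[0][0]

# Three sites, each holding its own ensemble of toy threshold classifiers.
site_a = [lambda x: "pos" if x > 0.4 else "neg",
          lambda x: "pos" if x > 0.6 else "neg"]
site_b = [lambda x: "pos" if x > 0.5 else "neg"]
site_c = [lambda x: "neg",
          lambda x: "pos" if x > 0.3 else "neg",
          lambda x: "pos" if x > 0.7 else "neg"]

def final_predict(x):
    """Majority vote over the per-site ensemble verdicts."""
    votes = [site_predict(s, x) for s in (site_a, site_b, site_c)]
    return Counter(votes).most_common(1)[0][0]

print(final_predict(0.8))
```

In the actual framework the per-site combination is confidence-weighted rather than a plain majority vote; the sketch only shows the two-level ensemble-of-ensembles structure.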


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)