Similar Articles
20 similar articles found (search time: 15 ms)
1.
A Text Classification Algorithm Based on Hidden Markov Models   Cited by 2 (0 self-citations, 2 others)
杨健  汪海航 《计算机应用》2010,30(9):2348-2350
Several mature classification algorithms have emerged in automatic text classification in recent years, but they are mainly based on probabilistic-statistical models and establish no connection with the syntax and semantics of the text itself. This paper proposes an algorithm that applies hidden Markov model (HMM) sequence analysis to automatic text classification. First, a set of feature words representing each document category is constructed, and the feature-word sequence of each category serves as the observation sequence for that category's HMM classifier, while the HMM's state-transition sequence implicitly represents how the content of documents in each category forms and evolves. At classification time, the class label of the HMM classifier with the highest generation probability is taken as the classification result for the test document. The classifiers built by this algorithm reflect, to some extent, the syntactic and semantic characteristics of documents in different categories, support multi-class automatic text classification, and achieve high classification efficiency.
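The scoring step this abstract describes, evaluating a test document's word sequence under one HMM per class and picking the most likely class, can be sketched with the forward algorithm. This is a minimal stdlib illustration, not the authors' implementation; the toy class models and words are invented, and a real system would work in log space with scaling to avoid underflow on long sequences.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the forward algorithm. pi: initial state probabilities,
    A: state-transition probabilities, B: per-state emission probabilities.
    Unseen words get a tiny floor probability."""
    alpha = {s: pi[s] * B[s].get(obs[0], 1e-9) for s in pi}
    for o in obs[1:]:
        alpha = {s: B[s].get(o, 1e-9) * sum(alpha[r] * A[r][s] for r in alpha)
                 for s in pi}
    total = sum(alpha.values())
    return math.log(total) if total > 0 else float("-inf")

def classify(obs, class_hmms):
    """Pick the class whose HMM assigns the observation sequence the
    highest generation probability, as in the scheme described above."""
    return max(class_hmms, key=lambda c: forward_loglik(obs, *class_hmms[c]))
```

With hand-set single-state models whose emission tables favor different vocabularies, `classify` routes each word sequence to the class HMM that generates it with higher probability.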

2.
Sentiment classification is a classification technique of practical value. Sentiment classification for English and Chinese has been studied extensively, while research on Uyghur remains scarce. Using n-gram models as different text-representation features; mutual information, information gain, the CHI statistic, and document frequency as different feature-selection methods; varying numbers of features; and Naïve Bayes, ME (maximum entropy), and SVM (support vector machine) as different text-classification methods, sentiment-classification experiments were carried out on Uyghur and the results compared. The results show that with the UniGrams feature representation, about 5,000 features, and a suitable feature-selection function, ME and SVM achieve good results for Uyghur sentiment classification.
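Of the four feature-selection methods this abstract compares, document frequency is the simplest to sketch: rank unigram features by how many documents contain them and keep the top k. The code below is an illustrative stdlib sketch under that reading (the function names and toy data are invented, and the paper's other selectors such as CHI and information gain are not shown).

```python
from collections import Counter

def select_by_df(docs, k):
    """Rank unigram features by document frequency (DF) and keep the top k.
    Each doc is a list of tokens; counting over set(words) counts each
    feature at most once per document."""
    df = Counter()
    for words in docs:
        df.update(set(words))
    return [w for w, _ in df.most_common(k)]

def vectorize(words, features):
    """Binary unigram (UniGrams-style) feature vector for one document."""
    present = set(words)
    return [1 if f in present else 0 for f in features]
```

The resulting binary vectors are what a downstream Naïve Bayes, ME, or SVM classifier would consume.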

3.
Text Classification Based on the LDA Model   Cited by 3 (0 self-citations, 3 others)
To address the limitations of traditional dimensionality-reduction algorithms on high-dimensional, large-scale text classification, this paper proposes a text classification algorithm based on the LDA model. Within a discriminative SVM framework, the LDA probabilistic topic model is applied to build topic models of the document collection, and an SVM is trained on the collection's latent topic-document matrix to construct the text classifier. Parameter inference uses Gibbs sampling, representing each text as a probability distribution over a fixed set of latent topics, and a standard method from Bayesian statistics is applied to determine the optimal number of topics T. Classification experiments on the corpus show better results than representing text with VSM plus SVM or LSI plus SVM.

4.
There are three factors involved in text classification: the classification model, the similarity measure, and the document representation model. In this paper, we focus on document representation and demonstrate that its choice has a profound impact on the quality of the classifier. In our experiments, we have used the centroid-based text classifier, a simple and robust text classification scheme. We compare four types of document representation: N-grams, single terms, phrases, and RDR, a logic-based document representation. The N-gram representation is string-based, with no linguistic processing. The single-term approach is based on words with minimal linguistic processing. The phrase approach is based on linguistically formed phrases together with single words. RDR is based on linguistic processing and represents documents as sets of logical predicates. We have experimented with many text collections and obtained similar results; here, we base our arguments on experiments conducted on Reuters-21578. We show that RDR, the most complex representation, produces the most effective classifier on Reuters-21578, followed by the phrase approach.
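The centroid-based classifier this paper uses as its fixed classification scheme can be sketched briefly: average the term vectors of each class into one centroid, then assign a new document to the class with the most similar centroid. The stdlib sketch below uses raw term frequencies and cosine similarity; the paper's exact weighting and the toy data here are not from the source.

```python
import math
from collections import Counter, defaultdict

def centroid_classifier(train_docs, labels):
    """Build one centroid (mean term-frequency vector) per class."""
    sums, counts = defaultdict(Counter), Counter()
    for words, label in zip(train_docs, labels):
        sums[label].update(words)
        counts[label] += 1
    return {c: {w: n / counts[c] for w, n in tf.items()} for c, tf in sums.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(words, centroids):
    """Assign the document to the class with the most similar centroid."""
    tf = Counter(words)
    return max(centroids, key=lambda c: cosine(tf, centroids[c]))
```

Any of the four representations the paper compares would slot in here by changing what "words" contains (character N-grams, single terms, phrases, or predicates).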

5.
Automatic Classification of XML Documents Based on Kernel Methods   Cited by 3 (0 self-citations, 3 others)
杨建武 《计算机学报》2011,34(2):353-359
The support vector machine (SVM) constructs a classifier by mapping the input space via a kernel function and building an optimal separating hyperplane, and it has clear advantages in automatic text classification. XML documents combine textual content with structural information and, as a new data form, have become a current research focus. Building on the structured link vector model, this paper studies SVM-based automatic classification of XML documents and proposes a kernel suited to XML document classification…

6.
Text categorization is one of the most common themes in data mining and machine learning. Unlike structured data, unstructured text is harder to analyze because it carries complicated syntactic and semantic information. In this paper, we propose a two-level representation model (2RM) for text data: one level represents syntactic information and the other semantic information. At the syntactic level, each document is represented as a term vector whose components are term frequency-inverse document frequency values. Wikipedia concepts related to the terms at the syntactic level are used to represent the document at the semantic level. We also design a multi-layer classification framework (MLCLA) to exploit the semantic and syntactic information represented in the 2RM model. The MLCLA framework contains three classifiers: two are applied in parallel to the syntactic and semantic levels, and their outputs are combined and fed to the third classifier to obtain the final result. Experimental results on benchmark data sets (20Newsgroups, Reuters-21578 and Classic3) show that the proposed 2RM model plus the MLCLA framework improves text classification performance compared with existing flat text representation models (term-based VSM, term semantic kernel model, concept-based VSM, concept semantic kernel model and term + concept VSM) combined with existing classification methods.

7.
庞超  尹传环 《计算机科学》2018,45(1):144-147, 178
Automatic text summarization is an important research topic in natural language processing. By approach it divides into extractive and abstractive summarization; abstractive summarization re-expresses the central content and concepts of the original document in a different form, so the words in the generated summary need not appear in the original. This paper proposes a classification-based abstractive summarization model. The model combines a recurrent-neural-network encoder-decoder with a classification component and makes full use of supervision signals to capture more summary characteristics; with an attention mechanism in the encoder-decoder, it locates the central content of the source more precisely. Both parts of the model can be trained and optimized jointly on large data sets, and training is simple and effective. The proposed model shows excellent automatic summarization performance.

8.
Incident reports of adverse nursing events in hospitals are currently mostly unstructured text without a clear, reasonable classification; manual analysis is difficult and subjective, and under-reporting, concealment, and artificial downgrading of event severity occur. To address this, a Chinese text classification model for adverse nursing events is proposed that combines a character-level convolutional neural network (CNN) with a support vector machine (SVM). The model vectorizes text through a character-level vocabulary, uses the CNN to extract abstract features, and uses an SVM classifier to perform the Chinese text classification. Comparative experiments against several classification models, including traditional TF-IDF-based SVM and random forests, verify the model's effectiveness on Chinese adverse-nursing-event text classification.

9.
A performance-evaluation model for text classifiers is proposed that estimates the confidence of classification results and gives a formula for computing confidence. The confidence measure of each sub-classifier is incorporated into the Bagging ensemble learning algorithm, yielding an improved Bagging algorithm based on sub-classifier performance evaluation (PBagging). With support vector machines as the base sub-classifier model, a large collection of Kyodo News articles was classified. Experiments show that PBagging clearly improves classification accuracy over standard Bagging.
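The ensemble idea above, Bagging where each sub-classifier's vote is weighted by an estimate of its performance, can be sketched generically. This is one plausible reading, not the paper's method: here the weight is out-of-bag accuracy, whereas the paper defines its own confidence formula, and the base learner is a trivial nearest-mean model rather than an SVM.

```python
import random
from collections import Counter

def pbagging_train(X, y, base_fit, rounds=7, seed=0):
    """Bagging in the spirit of PBagging: each bootstrap-trained
    sub-classifier gets a weight from its estimated performance (here,
    accuracy on its out-of-bag samples). base_fit(X, y) must return a
    predict function."""
    rng = random.Random(seed)
    n = len(X)
    ensemble = []
    for _ in range(rounds):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        chosen = set(idx)
        oob = [i for i in range(n) if i not in chosen]
        model = base_fit([X[i] for i in idx], [y[i] for i in idx])
        acc = (sum(model(X[i]) == y[i] for i in oob) / len(oob)) if oob else 0.5
        ensemble.append((model, max(acc, 1e-6)))
    return ensemble

def pbagging_predict(ensemble, x):
    """Performance-weighted vote over the sub-classifiers."""
    votes = Counter()
    for model, weight in ensemble:
        votes[model(x)] += weight
    return votes.most_common(1)[0][0]
```

In plain Bagging every vote counts equally; weighting by estimated performance down-weights sub-classifiers that happened to train on unrepresentative bootstrap samples.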

10.
A logistic regression model is used for Chinese text classification. Experiments compare and analyze the classifier's performance with different Chinese text features, different numbers of features, and different document collections, and compare it against a linear SVM text classifier. The results show classification performance comparable to linear SVM, demonstrating the method's effectiveness for text classification.

11.
This paper presents the implementation of a new text document classification framework that uses the Support Vector Machine (SVM) approach in the training phase and the Euclidean distance function in the classification phase, coined Euclidean-SVM. The SVM constructs a classifier by generating a decision surface, the optimal separating hyper-plane, to partition different categories of data points in the vector space. The concept of the optimal separating hyper-plane can be generalized to non-linearly separable cases by introducing kernel functions that map the data points from the input space into a high-dimensional feature space where they can be separated by a linear hyper-plane. As a result, the choice of kernel function has a high impact on the classification accuracy of the SVM. Besides the kernel function, the value of the soft-margin parameter C is another critical component in determining the performance of the SVM classifier. Hence, a critical problem of the conventional SVM classification framework is the need to determine the appropriate kernel function and the appropriate value of C for each dataset's characteristics in order to guarantee high classifier accuracy. In this paper, we introduce a distance measurement technique that uses the Euclidean distance function to replace the optimal separating hyper-plane as the classification decision-making function in the SVM. In our approach, the support vectors for each category are identified from the training data points during the training phase using the SVM. In the classification phase, when a new data point is mapped into the original vector space, the average distances between the new data point and the support vectors from different categories are measured using the Euclidean distance function.
The classification decision is made based on the category whose support vectors have the lowest average distance to the new data point, which makes the decision independent of the hyper-plane formed by the particular kernel function and soft-margin parameter. We tested our proposed framework on several text datasets. The experimental results show that the accuracy of the Euclidean-SVM text classifier is largely insensitive to the choice of kernel function and soft-margin parameter C.
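The classification phase described above is simple enough to sketch directly: given the per-category support vectors (which in the paper come from a trained SVM; here they are passed in as plain points), pick the category with the lowest average Euclidean distance to the new point. A minimal stdlib sketch with invented data:

```python
import math

def euclid_decision(support_vectors, x):
    """Euclidean-SVM classification phase: assign x to the category whose
    support vectors lie at the lowest average Euclidean distance.
    support_vectors: dict mapping category -> list of points (tuples)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    best, best_avg = None, float("inf")
    for label, vectors in support_vectors.items():
        avg = sum(dist(v, x) for v in vectors) / len(vectors)
        if avg < best_avg:
            best, best_avg = label, avg
    return best
```

Because only distances in the original vector space are used, the decision no longer depends on which kernel or C produced the support vectors, which is the paper's point.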

12.
Automatic text classification based on the vector space model (VSM), artificial neural networks (ANN), K-nearest neighbors (KNN), Naïve Bayes (NB) and support vector machines (SVM) has been applied to English-language documents and has gained popularity among text mining and information retrieval (IR) researchers. This paper proposes the application of VSM and ANN to the classification of Tamil-language documents. Tamil is a morphologically rich classical Dravidian language. The development of the Internet has led to an exponential increase in electronic documents, not only in English but also in regional languages, yet the automatic classification of Tamil documents has not been explored in detail so far. In this paper, a corpus is used to construct and test the VSM and ANN models, and methods of document representation and of assigning weights that reflect the importance of each term are discussed. In a traditional word-matching based categorization system, the most popular document representation is the VSM, which needs a high-dimensional space to represent the documents; the ANN classifier requires a smaller number of features. The experimental results show that the ANN model achieves 93.33% accuracy on Tamil document classification, better than the VSM's 90.33%.

13.
Concept indexing (CI) is a very fast and efficient feature extraction (FE) algorithm for text classification. The key idea of the CI scheme is to express each document as a function of the various concepts (centroids) present in the collection. However, the ability of the centroids to represent the corpus for categorization is often degraded by so-called model misfit, caused by a number of factors in the FE process, from feature selection to the similarity measure. To address this issue, this work employs the "DragPushing" strategy to refine the centroids used for concept indexing. We present an extensive experimental evaluation of refined concept indexing (RCI) on two English collections and one Chinese corpus using a state-of-the-art support vector machine (SVM) classifier. The results indicate that in each case RCI-based SVM yields much better performance than normal CI-based SVM, at a lower computation cost during the training and classification phases.
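One common reading of the "DragPushing" refinement is: sweep over the training documents, and whenever a document is misclassified by the current centroids, drag its true-class centroid toward the document and push the wrongly winning centroid away. The sketch below follows that reading with invented step size, iteration count, and similarity (an unnormalized dot product suffices for the argmax here); the paper's exact update rule may differ.

```python
def refine_centroids(centroids, docs, labels, eta=0.2, iters=5):
    """DragPushing-style refinement. centroids and docs are sparse
    vectors (dict term -> weight); eta is the drag/push step size."""
    def sim(u, v):
        return sum(w * v.get(t, 0.0) for t, w in u.items())
    for _ in range(iters):
        for doc, true in zip(docs, labels):
            pred = max(centroids, key=lambda c: sim(doc, centroids[c]))
            if pred != true:
                for t, w in doc.items():
                    # drag the true centroid toward the document ...
                    centroids[true][t] = centroids[true].get(t, 0.0) + eta * w
                    # ... and push the wrong winner away from it
                    centroids[pred][t] = centroids[pred].get(t, 0.0) - eta * w
    return centroids
```

After refinement the centroids separate the classes that the initial means confused, which is the "model misfit" the abstract says the strategy corrects.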

14.
《Applied Soft Computing》2007,7(3):908-914
This paper presents a least-squares support vector machine (LS-SVM) that classifies noisy document titles into predetermined categories. The system's potential is demonstrated on a corpus of 91,229 words from the University of Denver's Penrose Library catalogue, where the classification accuracy of the proposed LS-SVM based system is found to be over 99.9%. The final classifier is an LS-SVM array with a Gaussian radial basis function (GRBF) kernel that classifies text titles using the coefficients generated by the latent semantic indexing algorithm; these coefficients also generate the confidence factors for the inference engine that presents the final decision of the entire classifier. The system is compared with K-nearest neighbor (KNN) and Naïve Bayes (NB) classifiers, and the comparison clearly shows that the proposed LS-SVM based architecture outperforms the KNN and NB based systems. A comparison between conventional linear-SVM based classifiers and neural-network based classifying agents shows that the LS-SVM with LSI-based classifying agents improves text categorization performance significantly and holds substantial potential for developing robust learning-based agents for text classification.

15.
Online information is vast and disorderly, and extracting the most useful information from it is a major goal of information processing; text classification is a powerful means of organizing and managing such data. Because the maximum entropy model can integrate all kinds of observed probabilistic knowledge, relevant or not, and tends to achieve good results on many problems, it is introduced here into Chinese text classification, with the effectiveness of feature selection improved through a feature-aggregation algorithm. Experiments show that, compared with three high-performing algorithms (Bayes, KNN and SVM), the maximum-entropy text classification algorithm achieves better classification accuracy.

16.
刘美茹 《计算机工程》2007,33(15):217-219
Text classification is the foundation and core of text data mining, and a concrete application of natural language processing and machine learning algorithms. Feature selection and the classification algorithm are its two most critical techniques. This paper proposes using latent semantic indexing (LSI) for feature extraction and dimensionality reduction, combined with a support vector machine (SVM) for multi-class classification. Experimental results show better performance than the vector space model (VSM) combined with SVM and LSI combined with K-nearest neighbors (KNN); with a small number of text categories and clear category boundaries, practical effectiveness can be achieved.

17.
A Text Classification Method Based on N-gram Language Models   Cited by 6 (0 self-citations, 6 others)
Text classification has been a research focus in natural language processing in recent years. After analyzing traditional classification models, this paper proposes the N-gram language model as a model for Chinese text classification. Instead of the traditional bag-of-words representation, the model treats a document as a random observation sequence of words. Based on this approach, a word-based bigram language-model classifier is designed and implemented. Experimental comparison of the N-gram language model with traditional classification models (the vector space model and the Naïve Bayes model) shows that the N-gram classifier has better classification performance.
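The word-based bigram classifier this abstract describes can be sketched compactly: train one bigram language model per class, then assign a test document to the class whose model gives its word sequence the highest probability. A stdlib sketch with add-one smoothing (the paper's exact smoothing and data are not specified here; the toy examples are invented):

```python
import math
from collections import defaultdict

class BigramLMClassifier:
    """One bigram language model per class; a document goes to the class
    whose model generates its word sequence with the highest probability."""

    def __init__(self):
        self.models = {}                  # class -> {prev word: {next word: count}}
        self.vocab = set()

    def fit(self, docs, labels):
        for words, label in zip(docs, labels):
            table = self.models.setdefault(label, defaultdict(lambda: defaultdict(int)))
            prev = "<s>"
            for w in words:
                table[prev][w] += 1
                self.vocab.add(w)
                prev = w

    def _loglik(self, words, table):
        v = len(self.vocab) + 1          # +1 slot for unseen words
        score, prev = 0.0, "<s>"
        for w in words:
            row = table[prev]
            total = sum(row.values())
            score += math.log((row[w] + 1) / (total + v))   # add-one smoothing
            prev = w
        return score

    def predict(self, words):
        return max(self.models, key=lambda c: self._loglik(words, self.models[c]))
```

Unlike a bag-of-words model, word order matters here: the score depends on which word follows which, which is exactly the modeling difference the abstract emphasizes.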

18.
A Clustering-based PU Active Text Classification Method   Cited by 1 (0 self-citations, 1 others)
刘露  彭涛  左万利  戴耀康 《软件学报》2013,24(11):2571-2583
Text classification is one of the key problems in information retrieval. Extracting more reliable negative examples and constructing accurate, efficient classifiers are two important problems in PU (positive and unlabeled) text classification. However, many existing extraction methods yield few reliable negatives, and the quality of the resulting classifiers leaves room for improvement. This paper offers a clustering-based semi-supervised active classification method addressing both steps. Unlike traditional negative-extraction methods, it uses clustering together with the observation that positive documents should share as few feature terms as possible with negative documents to remove as many positives as possible from the unlabeled set, thereby obtaining more reliable negatives. Combining SVM active learning with an improved Rocchio method to build the classifier, and using improved TF-IDF (term frequency-inverse document frequency) for feature extraction, significantly improves classification accuracy. Classification results were tested on three data sets (RCV1, Reuters-21578, 20 Newsgroups). The experiments show that clustering-based extraction obtains more reliable negatives while keeping the error rate low, and that introducing active learning also significantly improves classification precision.
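The core heuristic in this abstract, that a true negative should share as few feature terms as possible with the positive documents, can be sketched on its own. The stdlib sketch below keeps an unlabeled document as a reliable negative only if it shares at most a threshold number of terms with the positive feature set; the paper's additional clustering step and classifier construction are omitted, and the names and data are invented.

```python
def reliable_negatives(positives, unlabeled, max_shared=0):
    """Keep unlabeled documents (token lists) that share at most
    max_shared feature terms with the union of positive-document terms;
    such documents are taken as reliable negatives."""
    pos_features = set()
    for doc in positives:
        pos_features.update(doc)
    return [doc for doc in unlabeled
            if len(set(doc) & pos_features) <= max_shared]
```

The reliable negatives recovered this way, together with the labeled positives, give the SVM active learner a two-class training set to start from.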

19.
蔡崇超  王士同 《计算机应用》2007,27(5):1235-1237
Building on the Bernoulli mixture model and the expectation-maximization (EM) algorithm, an improved method for incomplete data is presented. First, initial estimates of the likelihood-function parameters are obtained from the labeled data via the Bernoulli mixture model and the Naïve Bayes algorithm; then a weighted EM algorithm estimates the parameters of the classifier's prior probability model, yielding the final classifier. Experimental results show that the method outperforms Naïve Bayes text classification in precision and recall.

20.
The support vector machine is a high-performance machine learning method developed on the basis of statistical learning theory, with a solid theoretical foundation and an elegant algorithmic implementation. Its excellent performance depends on correct parameter selection. This paper uses an improved immune genetic algorithm to optimize the parameters of the support vector machine. Experiments show that for low-dimensional data classification, the proposed optimization algorithm greatly reduces parameter-optimization time and improves classification accuracy compared with the traditional grid search; for high-dimensional text data, it still greatly reduces optimization time while maintaining classification accuracy.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号