Similar Documents
20 similar documents found (search time: 15 ms)
1.
Over the past decade, text categorization has become an active field of research in the machine learning community. Most approaches are based on term occurrence frequencies, and the performance of such surface-based methods can decrease when the texts are too complex, i.e., ambiguous. One alternative is to use semantic-based approaches that process textual documents according to their meaning. Furthermore, research in text categorization has mainly focused on "flat" texts, whereas many documents are now semi-structured, especially in XML format. In this paper, we propose a semantic kernel for semi-structured biomedical documents. The semantic meanings of words are extracted using the Unified Medical Language System (UMLS) framework. The kernel, with an SVM classifier, has been applied to a text categorization task on a medical corpus of free-text documents. The results show that the semantic kernel outperforms the linear kernel and the naive Bayes classifier. Moreover, this kernel was ranked in the top 10 among the 44 classification methods submitted to the 2007 Computational Medicine Center (CMC) Medical NLP International Challenge.
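As a rough illustration of the idea, the sketch below builds a bag-of-concepts representation and feeds the resulting inner-product kernel to an SVM. It is a minimal sketch only: the `CONCEPT_MAP` dictionary is a toy stand-in for a real UMLS concept mapper, and the paper's actual kernel and its handling of XML structure are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for a UMLS concept mapper: synonyms share one concept ID,
# so "tumor" and "neoplasm" land on the same dimension.
CONCEPT_MAP = {"tumor": "C0027651", "neoplasm": "C0027651", "fever": "C0015967"}

def concept_vector(doc, vocab):
    """Bag-of-concepts: count mapped concepts instead of surface words."""
    v = np.zeros(len(vocab))
    for w in doc.lower().split():
        c = CONCEPT_MAP.get(w)
        if c in vocab:
            v[vocab[c]] += 1.0
    return v

docs = ["patient with tumor", "neoplasm detected", "high fever reported"]
labels = [1, 1, 0]
vocab = {c: i for i, c in enumerate(sorted(set(CONCEPT_MAP.values())))}
X = np.array([concept_vector(d, vocab) for d in docs])

# Semantic kernel as an inner product in concept space; documents using
# different synonyms of the same concept now have nonzero similarity.
K = X @ X.T
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))  # kernel between "test" docs and training docs
```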

2.
The naive Bayes classifier is a simple and effective machine learning tool. Using the principles of the naive Bayes classifier, this paper derives a "naive Bayes combination" formula and constructs the corresponding classifier. Testing shows that the classifier has good classification performance and practicality: it overcomes the poor-accuracy weakness of the naive Bayes classifier and is faster than other classifiers without a significant loss of accuracy.
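The abstract does not reproduce the derived combination formula itself; as background for the family of methods it builds on, here is a minimal standard multinomial naive Bayes text classifier in scikit-learn (the toy documents and labels are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["cheap meds online", "meeting at noon",
        "win cash now", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

# Standard multinomial naive Bayes: P(class | doc) is proportional to
# P(class) * product over words of P(word | class).
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["cash meds now"]))  # -> ['spam']
```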

3.
Targeting the three tasks of Chinese discourse analysis (discourse unit segmentation, discourse structure generation, and discourse relation recognition), this paper introduces frame semantics into the analysis. First, a frame-based system for describing Chinese discourse coherence is built, together with a corresponding corpus. Then, features such as sentence-initial cues, dependency syntax, phrase structure, target words, and frames are extracted to train two maximum-entropy classifiers: one deciding whether a relation holds between discourse units, and one identifying the relation type. Finally, a greedy algorithm builds the discourse structure tree bottom-up. Experiments show that frame semantics can effectively segment discourse units, and that frame features effectively improve the recognition of both discourse structure and discourse relations.

4.
Discourse relation recognition is a core component of discourse analysis. In Chinese, implicit discourse relations, which lack explicit connectives, account for a large proportion of relations, making recognition especially challenging. This paper presents a joint method for recognizing Chinese discourse relations and nuclearity based on multi-layer local inference. The method represents the arguments of a discourse relation with a bidirectional LSTM and multi-head self-attention; it then obtains inference weights over local semantics between the arguments via soft alignment, producing representations of their interaction semantics. The two kinds of information are combined for local inference on the discourse relation, and stacking multiple local-inference components yields a joint framework for Chinese discourse relation and nuclearity recognition, which achieves a relation-recognition F1 of 67.0% on the CDTB corpus. The joint recognition module is further embedded into a transition-based discourse parser to jointly analyze relations and nuclearity over automatically generated discourse structures, forming a complete Chinese discourse parser.
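A minimal sketch of the encoder side of such a model is given below, assuming PyTorch: a BiLSTM followed by multi-head self-attention encodes each argument, and the pooled vectors feed a relation classifier head. The dimensions, pooling, and classifier head are illustrative guesses; the paper's soft-alignment local inference and multi-layer stacking are only indicated in comments, not implemented.

```python
import torch
import torch.nn as nn

class ArgumentEncoder(nn.Module):
    """Encode one discourse argument with a BiLSTM followed by multi-head
    self-attention, in the spirit of the paper's representation layer
    (layer sizes here are illustrative, not the paper's)."""
    def __init__(self, vocab_size=5000, emb=100, hidden=128, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))  # (B, T, 2H)
        ctx, _ = self.attn(h, h, h)              # self-attention over tokens
        return ctx.mean(dim=1)                   # pooled argument vector

enc = ArgumentEncoder()
arg1 = torch.randint(0, 5000, (2, 12))  # batch of 2 arguments, 12 tokens each
arg2 = torch.randint(0, 5000, (2, 9))
v1, v2 = enc(arg1), enc(arg2)
# Soft-alignment local inference would compare the arguments token-wise;
# here we just concatenate pooled vectors for a toy relation classifier head.
logits = nn.Linear(4 * 128, 5)(torch.cat([v1, v2], dim=-1))
print(logits.shape)  # torch.Size([2, 5])
```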

5.
In this paper we introduce and discuss the concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in how they are constructed, i.e., in which elements are considered neighbors. In the case of sn-grams, neighbors are determined by following syntactic relations in syntactic trees rather than by taking words as they appear in the text; that is, sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams bring syntactic knowledge into machine learning methods, though prior parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. As baselines we used traditional n-grams of words, part-of-speech (POS) tags, and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and the tree classifier J48. Sn-grams give the best results with the SVM classifier.
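A minimal sketch of sn-gram extraction (n = 2), assuming spaCy as the dependency parser, is shown below; the exact pairs returned depend on the parser model, and the paper itself is parser-agnostic.

```python
import spacy  # assumes: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def syntactic_bigrams(text):
    """sn-grams with n=2: follow head->child edges of the dependency tree
    instead of linear word order."""
    doc = nlp(text)
    return [(tok.head.text, tok.text) for tok in doc if tok.dep_ != "ROOT"]

print(syntactic_bigrams("The old man saw a dog"))
# e.g. [('man', 'The'), ('man', 'old'), ('saw', 'man'), ('dog', 'a'), ('saw', 'dog')]
# ('saw', 'dog') pairs non-adjacent words that a linear bigram would miss.
```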

6.
Some Effective Techniques for Naive Bayes Text Classification (cited 3 times: 0 self-citations, 3 by others)
While naive Bayes is quite effective in various data mining tasks, it shows disappointing results on the automatic text classification problem. Based on observation of naive Bayes applied to natural language text, we found a serious problem in the parameter estimation process that causes poor results in the text classification domain. In this paper, we propose two empirical heuristics: per-document text normalization and a feature weighting method. While these are somewhat ad hoc methods, our proposed naive Bayes text classifier performs very well on the standard benchmark collections, competing with state-of-the-art text classifiers based on highly complex learning methods such as SVM.
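A hedged sketch of the first heuristic, per-document length normalization, is shown below: each document's term-count vector is rescaled to a common length before standard multinomial naive Bayes training. The scale factor (100) and toy data are illustrative assumptions, not the paper's exact scheme.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import normalize

docs = ["spam spam cash", "a much longer legitimate message about a meeting",
        "spam cash now buy", "short note on the meeting"]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(docs)
# Per-document normalization: rescale every row to a common "length"
# so that long documents do not dominate the parameter estimates.
X_norm = normalize(X, norm="l1", axis=1) * 100.0
clf = MultinomialNB().fit(X_norm, labels)

X_test = normalize(vec.transform(["cash now"]), norm="l1", axis=1) * 100.0
print(clf.predict(X_test))  # -> [1]
```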

7.
Lazy Learning of Bayesian Rules (cited 19 times: 0 self-citations, 19 by others)
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree and generates a local naive Bayesian classifier at each leaf; the tests leading to a leaf can alleviate attribute inter-dependencies for the local classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning: while inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. The algorithm can be justified by a variant of Bayes' theorem that supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. The computational requirements of LBR are shown to be reasonable across a wide cross-section of natural domains. Experiments with these domains show that, on average, the new algorithm obtains lower error rates significantly more often than the reverse when compared to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms a selective naive Bayesian classifier, although that result is not statistically significant.

8.
Automatic text classification based on the vector space model (VSM), artificial neural networks (ANN), K-nearest neighbor (KNN), naive Bayes (NB), and support vector machines (SVM) has been applied to English-language documents and has gained popularity among text mining and information retrieval (IR) researchers. This paper proposes the application of VSM and ANN to the classification of Tamil-language documents. Tamil is a morphologically rich Dravidian classical language. The development of the internet has led to an exponential increase in the number of electronic documents, not only in English but also in other regional languages, yet the automatic classification of Tamil documents has not been explored in detail so far. In this paper, a corpus is used to construct and test the VSM and ANN models. Methods of document representation and of assigning weights that reflect the importance of each term are discussed. In a traditional word-matching-based categorization system, the most popular document representation is the VSM, which needs a high-dimensional space to represent the documents; the ANN classifier requires a smaller number of features. The experimental results show that the ANN model achieves 93.33% accuracy on Tamil document classification, better than the 90.33% yielded by the VSM.
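A minimal analogue of the VSM-plus-ANN pipeline, using scikit-learn's TF-IDF vectorizer and a small multilayer perceptron, is sketched below. The toy English documents stand in for the paper's Tamil corpus, whose language-specific preprocessing is not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["stock prices fell sharply", "the team won the final",
        "markets rallied on strong earnings", "the coach praised the players"]
labels = ["finance", "sports", "finance", "sports"]

# VSM representation (TF-IDF) feeding a small feed-forward neural network.
clf = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
clf.fit(docs, labels)
print(clf.predict(["earnings lifted the markets"]))  # expected: ['finance']
```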

9.
Bayesian Network Classifiers (cited 154 times: 0 self-citations, 154 by others)
Friedman, Nir; Geiger, Dan; Goldszmidt, Moises. Machine Learning, 1997, 29(2-3): 131-163.
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.

10.
11.
Automatic discourse processing is a very challenging task in natural language processing and plays an important role in other NLP tasks such as question answering, automatic summarization, and discourse generation. In recent years, the large-scale discourse corpus PDTB has provided a common platform for discourse research. Building on the PDTB corpus, this paper presents a complete platform for explicit discourse analysis based on conditional random fields, comprising three subtasks: connective identification, discourse relation classification, and relation argument extraction. Experimental results for each module on PDTB are reported, and the performance of the full platform is analyzed in detail with respect to error propagation.

12.
Text Classification Based on the LDA Model (cited 3 times: 0 self-citations, 3 by others)
To address the limitations of traditional dimensionality-reduction algorithms in high-dimensional, large-scale text classification, this paper proposes a text classification algorithm based on the LDA model. Within the discriminative SVM framework, the probabilistic generative model LDA is applied to perform topic modeling on the document collection, and an SVM is trained on the collection's latent topic-text matrix to build the text classifier. Parameter inference uses Gibbs sampling, representing each text as a probability distribution over a fixed set of latent topics, and the optimal number of topics T is determined with standard methods from Bayesian statistics. Classification experiments on the corpus show better results than representing texts with VSM plus SVM or LSI plus SVM.
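The sketch below shows the overall LDA-then-SVM pipeline with scikit-learn: documents become distributions over latent topics, and an SVM is trained on the document-topic matrix. Note that scikit-learn infers LDA with variational methods rather than the Gibbs sampling used in the paper, and the topic count here is fixed rather than selected.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

docs = ["gene expression in cancer cells", "stock market crash and recovery",
        "protein folding inside the cell", "interest rates and inflation"]
labels = ["bio", "finance", "bio", "finance"]

counts = CountVectorizer().fit_transform(docs)
# Each document becomes a distribution over T latent topics (T fixed at 2
# here; the paper selects the optimal T and infers with Gibbs sampling).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)      # document-topic matrix
clf = SVC(kernel="linear").fit(theta, labels)
print(clf.predict(lda.transform(counts)))
```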

13.
To further improve text classification accuracy, this paper introduces a new probabilistic classifier for text classification. The classifier first preprocesses texts with natural language processing techniques, then reads text information from the training set to generate positive and negative rules and to compute positive and negative weight coefficients, and finally computes positive and negative probabilities. The paper gives the algorithm for computing the weight coefficients and classifies texts according to the computed coefficients and the positive and negative probability values. Comparative experiments against an SVM classifier show that the proposed probabilistic classifier performs well on text classification.

14.
DeConverter is the core software in a Universal Networking Language (UNL) system. A UNL system has EnConverter and DeConverter as its two major components: EnConverter converts a natural language sentence into an equivalent UNL expression, and DeConverter generates a natural language sentence from an input UNL expression. This paper presents the design and development of a Punjabi DeConverter. It describes the five phases of the proposed Punjabi DeConverter: UNL parser, lexeme selection, morphology generation, function word insertion, and syntactic linearization. The paper illustrates all these phases with a special focus on the syntactic linearization issues of the Punjabi DeConverter. Syntactic linearization is the process of defining the arrangement of words in the generated output. Algorithms and pseudocode are given for the syntactic linearization of a simple UNL graph, of a UNL graph with scope nodes, and of nodes with un-traversed or multiple parents in a UNL graph. Special cases of syntactic linearization in Punjabi for the UNL relations 'and', 'or', 'fmt', 'cnt', and 'seq' are also presented, along with implementation results. The DeConverter has been tested on 1000 UNL expressions, using a Spanish UNL language server and agricultural-domain threads developed by the Indian Institute of Technology (IIT) Bombay, India, as gold standards. The proposed system generates 89.0% grammatically correct sentences and 92.0% sentences faithful to the originals, with a fluency score of 3.61 and an adequacy score of 3.70 on a 4-point scale, and achieves a bilingual evaluation understudy (BLEU) score of 0.72.

15.
This work implements an enhanced hybrid classification method that combines the naïve Bayes classifier and the support vector machine (SVM). The Bayes formula is used to vectorize (as opposed to classify) a document according to a probability distribution reflecting the categories the document may belong to. The Bayes formula yields a range of probabilities over a predetermined set of topics, such as those found in the "20 newsgroups" dataset. Using this probability distribution as the vector representing the document, the SVM can then classify documents in this multidimensional space. The inadvertent dimensionality reduction caused by classifying with only the highest naïve Bayes probability is overcome by employing all the probability values associated with every category for each document. The method can be used on any dataset; it shows a significant reduction in training time compared to the LSquare method, and a significant improvement in classification accuracy compared to pure naïve Bayes systems and TF-IDF/SVM hybrids.
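A compact sketch of the hybrid, assuming scikit-learn and toy two-class data in place of 20 newsgroups: naive Bayes produces the full class-probability distribution for each document, and that distribution (not the argmax) becomes the feature vector for the SVM.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

docs = ["god religion faith", "gpu shader rendering", "prayer and scripture",
        "opengl graphics card", "church and faith", "nvidia driver update"]
labels = [0, 1, 0, 1, 0, 1]

X = CountVectorizer().fit_transform(docs)
nb = MultinomialNB().fit(X, labels)
# "Vectorize, not classify": keep the full class-probability distribution
# as the document representation instead of taking the argmax.
P = nb.predict_proba(X)               # shape: (n_docs, n_classes)
svm = SVC(kernel="rbf").fit(P, labels)
print(svm.predict(nb.predict_proba(X)))
```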

16.
A Survey of Discourse Analysis Techniques (cited 1 time: 0 self-citations, 1 by others)
Discourse, as a granularity of text analysis above words and sentences, plays a crucial role in natural language understanding and natural language generation. From a computational-linguistics perspective, this paper surveys the state of the art in Chinese and English discourse analysis. It introduces the applications of discourse analysis in natural language processing, and elaborates on Chinese and English discourse analysis in terms of discourse theories, discourse corpora and evaluations, and the automatic construction of discourse parsers. Finally, it identifies several directions for future research on discourse analysis.

17.
A Text Classification Method Based on N-gram Language Models (cited 6 times: 0 self-citations, 6 by others)
Text classification has been a research hotspot in natural language processing in recent years. After analyzing traditional classification models, this paper proposes using an N-gram language model as the model for Chinese text classification. Instead of the traditional bag-of-words representation, the model treats a document as a random observation sequence of words. Based on this approach, a word-based bigram language-model classifier is designed and implemented. Experimental comparison of the N-gram language model with traditional classification models (the vector space model and naive Bayes) shows that the N-gram model classifier has better classification performance.
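A self-contained sketch of such a classifier is shown below: one add-one-smoothed bigram language model per class, with a document assigned to the class whose model gives it the highest log-likelihood. The smoothing and tokenization details are illustrative assumptions rather than the paper's exact setup.

```python
import math
from collections import Counter

def train_bigram_lm(docs):
    """Add-one-smoothed bigram counts for one class."""
    uni, bi = Counter(), Counter()
    for doc in docs:
        toks = ["<s>"] + doc.split()
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def log_prob(doc, uni, bi, V):
    """Log-likelihood of a document under one class's bigram model."""
    toks = ["<s>"] + doc.split()
    return sum(math.log((bi[(a, b)] + 1) / (uni[a] + V))
               for a, b in zip(toks, toks[1:]))

classes = {
    "sports": ["the team won the game", "the player scored a goal"],
    "finance": ["the market fell today", "the stock price rose"],
}
models = {c: train_bigram_lm(ds) for c, ds in classes.items()}
V = len({w for uni, _ in models.values() for w in uni})

doc = "the stock fell"
best = max(models, key=lambda c: log_prob(doc, *models[c], V))
print(best)  # the class whose bigram LM assigns the highest likelihood
```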

18.
The Maximum Scatter Difference Classifier and Its Application to Text Classification (cited 7 times: 0 self-citations, 7 by others)
The proposed maximum scatter difference classifier is built on a modified Fisher linear discriminant criterion and is closely related to the Rocchio and SVM classifiers. Simulation experiments on the standard 20 Newsgroups corpus show that the maximum scatter difference classifier has good text classification performance: its accuracy is clearly higher than that of naive Bayes and Rocchio, and comparable to that of SVM.
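A sketch of the criterion, under the common formulation of the maximum scatter difference: project onto the leading eigenvector of Sb - c*Sw (between-class minus weighted within-class scatter) and classify by the nearest class mean along that direction. The weighting constant c and the nearest-mean decision rule are assumptions; the paper's exact modification of the Fisher criterion may differ.

```python
import numpy as np

def msd_direction(X, y, c=1.0):
    """Maximum scatter difference: the w maximizing w^T (Sb - c*Sw) w,
    i.e. the leading eigenvector of Sb - c*Sw. Unlike Fisher's ratio
    criterion, no matrix inversion is needed, so it stays well-defined
    when Sw is singular, as in high-dimensional text data."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1],) * 2)
    Sw = np.zeros_like(Sb)
    for cls in np.unique(y):
        Xc = X[y == cls]
        d = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * d @ d.T
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    vals, vecs = np.linalg.eigh(Sb - c * Sw)
    return vecs[:, -1]        # eigenvector of the largest eigenvalue

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
w = msd_direction(X, y)
# Classify by the nearest class mean along the projection w.
proj = X @ w
mids = {cls: proj[y == cls].mean() for cls in (0, 1)}
pred = np.array([min(mids, key=lambda cls: abs(p - mids[cls])) for p in proj])
print((pred == y).mean())  # training accuracy on the toy data
```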

19.
Knowledge, 2007, 20(2): 120-126.
The naive Bayes classifier continues to be a popular learning algorithm for data mining applications due to its simplicity and linear run-time. Many enhancements to the basic algorithm have been proposed to help mitigate its primary weakness: the assumption that attributes are independent given the class. All of them improve the performance of naive Bayes at the expense, to a greater or lesser degree, of execution time and/or the simplicity of the final model. In this paper we present a simple filter method for setting attribute weights for use with naive Bayes. Experimental results show that naive Bayes with attribute weights rarely degrades the quality of the model compared to standard naive Bayes and, in many cases, improves it dramatically. The main advantages of this method over other approaches for improving naive Bayes are its run-time complexity and the fact that it maintains the simplicity of the final model.
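A sketch of the general filter-weighting idea, assuming mutual information as the relevance score (the paper's specific filter may differ): each attribute is scored once, independently of the classifier, and its counts are scaled by that score before standard naive Bayes training.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import MultinomialNB

docs = ["win cash prize now", "project meeting agenda", "free cash offer",
        "agenda for the meeting", "claim your free prize", "meeting moved to noon"]
labels = [1, 0, 1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(docs).toarray()
# Filter step: score each attribute by its relevance to the class label,
# then scale the counts by the scores; the classifier itself is unchanged,
# so the final model keeps naive Bayes' simplicity.
w = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
clf = MultinomialNB().fit(X * w, labels)
print(clf.predict(vec.transform(["free prize now"]).toarray() * w))  # -> [1]
```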

20.
Discourse dependency parsing performs poorly in long-distance dependency scenarios, and traditional methods typically design large numbers of feature templates to alleviate this bottleneck. This paper proposes a hierarchical discourse dependency parsing method that reduces the number of elementary discourse units the parser must process at once, thereby shortening the distance between the dependency pairs it handles; long short-term memory models process the sequential information within elementary discourse units directly, avoiding manual feature extraction. Experiments were conducted on the RST corpus...
