20 similar documents found; search time: 15 ms
1.
Sujeevan Aseervatham Author Vitae Younès Bennani Author Vitae 《Pattern recognition》2009,42(9):2067-2076
Since a decade, text categorization has become an active field of research in the machine learning community. Most of the approaches are based on the term occurrence frequency. The performance of such surface-based methods can decrease when the texts are too complex, i.e., ambiguous. One alternative is to use the semantic-based approaches to process textual documents according to their meaning. Furthermore, research in text categorization has mainly focused on “flat texts” whereas many documents are now semi-structured and especially under the XML format. In this paper, we propose a semantic kernel for semi-structured biomedical documents. The semantic meanings of words are extracted using the unified medical language system (UMLS) framework. The kernel, with a SVM classifier, has been applied to a text categorization task on a medical corpus of free text documents. The results have shown that the semantic kernel outperforms the linear kernel and the naive Bayes classifier. Moreover, this kernel was ranked in the top 10 of the best algorithms among 44 classification methods at the 2007 Computational Medicine Center (CMC) Medical NLP International Challenge. 相似文献
2.
The naive Bayes classifier is a simple and effective machine learning tool. Based on the principles of the naive Bayes classifier, this paper derives a "naive Bayes combination" formula and constructs the corresponding classifier. Testing shows that the classifier has good classification performance and practicality: it overcomes the poor-accuracy weakness of the naive Bayes classifier and is faster than other classifiers without a significant loss of accuracy.
3.
4.
Discourse relation recognition is a core component of discourse analysis. In Chinese, implicit discourse relations, which lack explicit connectives, account for a large proportion of cases, making discourse relation recognition especially challenging. This paper presents a method for jointly recognizing Chinese discourse relations and nuclearity based on multi-layer local inference. The method represents the arguments of a discourse relation using a bidirectional LSTM and multi-head self-attention, then uses soft alignment to obtain local semantic inference weights between arguments, yielding a representation of the interactive semantics between them. The two kinds of information are combined for local inference over discourse relations, and by stacking multiple local-inference components, a joint recognition framework for Chinese discourse relations and nuclearity is constructed, achieving a relation recognition F1 of 67.0% on the CDTB corpus. The joint recognition module is further embedded into a transition-based discourse parser to jointly analyze discourse relations and nuclearity over automatically generated discourse structures, producing a complete Chinese discourse parser.
5.
《Expert systems with applications》2014,41(3):853-860
In this paper we introduce and discuss the concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in how they are constructed, i.e., in which elements are considered neighbors. In the case of sn-grams, neighbors are determined by following syntactic relations in syntactic trees, not by taking words as they appear in a text; that is, sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams bring syntactic knowledge into machine learning methods, although prior parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. As baselines we used traditional n-grams of words, part-of-speech (POS) tags, and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and the tree classifier J48. Sn-grams give the best results with the SVM classifier.
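As a sketch of the construction described above (the toy dependency tree below is an assumption for illustration, not the authors' parser output), sn-grams can be extracted by following head-to-dependent paths instead of linear word order:

```python
# Sketch: extract syntactic bigrams (sn-grams of size 2) by following
# head -> dependent edges of a dependency tree rather than word order.
# The toy tree is a hand-built assumption for illustration.

def sn_grams(tree, n=2):
    """Enumerate n-word paths in a head->dependents tree given as
    {head: [dependents]}; each path is one syntactic n-gram."""
    grams = []
    def walk(node, path):
        path = path + [node]
        if len(path) == n:
            grams.append(tuple(path))
            path = path[1:]          # slide the window down the path
        for child in tree.get(node, []):
            walk(child, path)
    roots = set(tree) - {d for deps in tree.values() for d in deps}
    for root in roots:
        walk(root, [])
    return grams

# "the dog barked loudly": barked -> dog -> the, barked -> loudly
toy_tree = {"barked": ["dog", "loudly"], "dog": ["the"]}
print(sn_grams(toy_tree))
```

Note that ("dog", "the") is produced even though "the dog" is the surface order; sn-grams follow the tree, not the text.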
6.
Some Effective Techniques for Naive Bayes Text Classification (cited 3 times: 0 self, 3 other)
Sang-Bum Kim Kyoung-Soo Han Hae-Chang Rim Sung Hyon Myaeng 《Knowledge and Data Engineering, IEEE Transactions on》2006,18(11):1457-1466
While naive Bayes is quite effective in various data mining tasks, it performs disappointingly on the automatic text classification problem. Based on observations of naive Bayes applied to natural language text, we found a serious problem in the parameter estimation process that causes poor results in the text classification domain. In this paper, we propose two empirical heuristics: per-document text normalization and a feature weighting method. While these are somewhat ad hoc methods, our proposed naive Bayes text classifier performs very well on standard benchmark collections, competing with state-of-the-art text classifiers based on highly complex learning methods such as SVM.
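The per-document normalization heuristic can be sketched as follows. This is a minimal, assumed reading of the idea (rescaling each document's term counts to a common length so long documents do not dominate the likelihood), not the paper's exact formulation:

```python
import math
from collections import Counter, defaultdict

# Sketch: multinomial naive Bayes where every document's term counts are
# rescaled to a common length L before training and scoring. L and the
# toy corpus are assumptions for illustration.

L = 10.0  # assumed common document length

def normalize(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: L * c / total for w, c in counts.items()}

def train(docs):  # docs: list of (tokens, label)
    prior, cond, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        prior[label] += 1
        for w, c in normalize(tokens).items():
            cond[label][w] += c
            vocab.add(w)
    return prior, cond, vocab

def classify(tokens, prior, cond, vocab):
    n = sum(prior.values())
    best = None
    for label in prior:
        total = sum(cond[label].values())
        score = math.log(prior[label] / n)
        for w, c in normalize(tokens).items():
            p = (cond[label][w] + 1) / (total + len(vocab))  # Laplace
            score += c * math.log(p)
        if best is None or score > best[0]:
            best = (score, label)
    return best[1]

docs = [("good great fun".split(), "pos"),
        ("bad awful boring".split(), "neg")]
model = train(docs)
print(classify("great fun".split(), *model))  # -> 'pos'
```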
7.
Lazy Learning of Bayesian Rules (cited 19 times: 0 self, 19 other)
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. This algorithm can be justified by a variant of Bayes' theorem that supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of LBR are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse when compared to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms a selective naive Bayesian classifier, although that result is not statistically significant.
8.
K. Rajan V. Ramalingam M. Ganesan S. Palanivel B. Palaniappan 《Expert systems with applications》2009,36(8):10914-10918
Automatic text classification based on the vector space model (VSM), artificial neural networks (ANN), K-nearest neighbor (KNN), naive Bayes (NB), and support vector machines (SVM) has been applied to English-language documents and has gained popularity among text mining and information retrieval (IR) researchers. This paper proposes the application of VSM and ANN to the classification of Tamil-language documents. Tamil is a morphologically rich classical Dravidian language. The development of the internet led to an exponential increase in the number of electronic documents, not only in English but also in other regional languages. The automatic classification of Tamil documents has not been explored in detail so far. In this paper, a corpus is used to construct and test the VSM and ANN models. Methods of document representation and of assigning weights that reflect the importance of each term are discussed. In a traditional word-matching-based categorization system, the most popular document representation is the VSM, which requires a high-dimensional space to represent the documents. The ANN classifier requires a smaller number of features. The experimental results show that the ANN model achieves 93.33% accuracy on Tamil document classification, better than the VSM, which yields 90.33%.
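The term-weighting step mentioned above is most commonly TF-IDF; the exact variant used for the Tamil corpus is not stated, so the sketch below assumes the plain tf * log(N/df) scheme on a toy two-document corpus:

```python
import math
from collections import Counter

# Sketch: VSM document representation with TF-IDF weighting, a standard
# way to "assign weights that reflect the importance of each term".

def tf_idf_vectors(docs):
    """docs: list of token lists -> list of {term: weight} vectors."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = [["price", "market", "stock"], ["goal", "match", "stock"]]
for v in tf_idf_vectors(docs):
    print(v)
```

A term appearing in every document ("stock" here) gets weight 0, which is exactly the "importance" intuition: ubiquitous terms do not discriminate.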
9.
Bayesian Network Classifiers (cited 154 times: 0 self, 154 other)
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
10.
11.
12.
To address the limitations of traditional dimensionality reduction algorithms on high-dimensional, large-scale text classification, this paper proposes a text classification algorithm based on the LDA model. Within the discriminative SVM framework, the LDA probabilistic topic model is applied to perform topic modeling on the document collection; an SVM is trained on the latent topic-document matrix of the collection to construct the text classifier. Parameter inference uses Gibbs sampling, representing each document as a probability distribution over a fixed set of latent topics. The optimal number of topics T is determined using a standard method from Bayesian statistics. Classification experiments on the corpus show better results than text representations using VSM with SVM or LSI with SVM.
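The Gibbs-sampling inference step can be sketched as a minimal collapsed Gibbs sampler that produces the per-document topic distributions (the feature vectors later fed to the SVM). The hyperparameters, iteration count, and toy corpus below are assumptions for illustration, not the paper's settings:

```python
import random
from collections import defaultdict

# Sketch: collapsed Gibbs sampling for LDA. The output theta is the
# document-topic distribution used as the document's feature vector.

def lda_gibbs(docs, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    n_dk = [[0] * K for _ in docs]               # doc-topic counts
    n_kw = [defaultdict(int) for _ in range(K)]  # topic-word counts
    n_k = [0] * K
    z = []                                       # topic per token
    for d, doc in enumerate(docs):               # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(K)
            zd.append(k); n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                      # remove current assignment
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                weights = [(n_dk[d][t] + alpha) *
                           (n_kw[t][w] + beta) / (n_k[t] + V * beta)
                           for t in range(K)]
                k = rng.choices(range(K), weights)[0]   # resample topic
                z[d][i] = k
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    # theta_d: per-document topic distribution (sums to 1 by construction)
    return [[(n_dk[d][t] + alpha) / (len(doc) + K * alpha)
             for t in range(K)] for d, doc in enumerate(docs)]

docs = [["ball", "goal", "match"], ["stock", "price", "market"],
        ["goal", "ball", "price"]]
theta = lda_gibbs(docs, K=2)
print(theta)
```

Each row of theta is one document's feature vector; training the SVM on these rows replaces the high-dimensional term vectors of VSM or LSI.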
13.
To further improve text classification accuracy, this paper introduces a new probabilistic classifier for text classification. The classifier first preprocesses the text using natural language processing techniques, then reads text information from the training set to generate positive and negative rules and compute positive and negative weight coefficients, and finally computes positive and negative probabilities. An algorithm for computing the positive and negative weight coefficients is given, and texts are classified according to the computed weight coefficients and the positive and negative probability values. Comparative experiments between the proposed probabilistic classifier and an SVM classifier show that the designed probabilistic classifier performs well on text classification.
14.
DeConverter is core software in a Universal Networking Language (UNL) system. A UNL system has EnConverter and DeConverter as its two major components: EnConverter converts a natural language sentence into an equivalent UNL expression, and DeConverter generates a natural language sentence from an input UNL expression. This paper presents the design and development of a Punjabi DeConverter. It describes the five phases of the proposed Punjabi DeConverter: UNL parser, lexeme selection, morphology generation, function word insertion, and syntactic linearization. The paper illustrates all of these phases with a special focus on the syntactic linearization issues of the Punjabi DeConverter. Syntactic linearization is the process of defining the arrangement of words in the generated output. Algorithms and pseudocode are discussed for the syntactic linearization of a simple UNL graph, a UNL graph with scope nodes, and a node with un-traversed or multiple parents in a UNL graph. Special cases of syntactic linearization in Punjabi for the UNL relations 'and', 'or', 'fmt', 'cnt', and 'seq' are also presented. The paper also provides implementation results for the proposed Punjabi DeConverter. The DeConverter has been tested on 1000 UNL expressions, using a Spanish UNL language server and agricultural-domain threads developed by the Indian Institute of Technology (IIT) Bombay, India, as gold standards. The proposed system generates 89.0% grammatically correct sentences and 92.0% sentences faithful to the originals, with a fluency score of 3.61 and an adequacy score of 3.70 on a 4-point scale. The system also achieves a bilingual evaluation understudy (BLEU) score of 0.72.
15.
Text Document Preprocessing with the Bayes Formula for Classification Using the Support Vector Machine (cited 1 time: 0 self, 1 other)
《Knowledge and Data Engineering, IEEE Transactions on》2008,20(9):1264-1272
This work implements an enhanced hybrid classification method using the naïve Bayes classifier and the support vector machine (SVM). Here, the Bayes formula is used to vectorize (rather than classify) a document according to a probability distribution reflecting the categories to which the document may belong. The Bayes formula gives a range of probabilities over a predetermined set of topics, such as those found in the "20 newsgroups" dataset. Using this probability distribution as the vector representing the document, the SVM can then classify the documents in a multidimensional space. The inadvertent dimensionality reduction caused by keeping only the highest-probability category in naïve Bayes classification is overcome by having the SVM use all the probability values associated with every category for each document. This method can be used on any dataset, shows a significant reduction in training time compared to the LSquare method, and yields a significant improvement in classification accuracy compared to pure naïve Bayes systems and to TF-IDF/SVM hybrids.
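The "vectorize, don't classify" step can be sketched as follows: train a multinomial naïve Bayes model, but instead of taking the argmax category, keep the full posterior P(c|d) over all categories as the document's feature vector. The toy corpus and the log-sum-exp normalization are illustrative assumptions:

```python
import math
from collections import Counter, defaultdict

# Sketch: naive Bayes as a vectorizer. The posterior distribution over
# categories (one probability per category) becomes the document's
# low-dimensional feature vector for a downstream SVM.

def train_nb(docs):  # docs: list of (tokens, label)
    prior, cond, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        prior[label] += 1
        cond[label].update(tokens)
        vocab.update(tokens)
    return prior, cond, vocab

def posterior_vector(tokens, prior, cond, vocab):
    labels = sorted(prior)
    logs = []
    for label in labels:
        total = sum(cond[label].values())
        s = math.log(prior[label])
        for w in tokens:
            s += math.log((cond[label][w] + 1) / (total + len(vocab)))
        logs.append(s)
    m = max(logs)                      # log-sum-exp normalization
    exps = [math.exp(v - m) for v in logs]
    return [e / sum(exps) for e in exps]   # P(c|d); sums to 1

docs = [("ball goal match".split(), "sport"),
        ("stock price market".split(), "finance")]
model = train_nb(docs)
vec = posterior_vector("goal price".split(), *model)
print(vec)  # one probability per category; this vector goes to the SVM
```

With k categories, every document becomes a k-dimensional probability vector, which is the dimensionality reduction the abstract describes.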
16.
17.
A Text Classification Method Based on N-gram Language Models (cited 6 times: 0 self, 6 other)
Text classification has become a research hotspot in natural language processing in recent years. After analyzing traditional classification models, this paper proposes the N-gram language model as a model for Chinese text classification. Instead of representing a document with the traditional bag-of-words approach, the model treats a document as a random observation sequence of words. Following this approach, a word-based bigram language model classifier is designed and implemented. Experimental comparisons between the N-gram language model and traditional classification models (the vector space model and naive Bayes) show that the N-gram model classifier achieves better classification performance.
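A word-based bigram language model classifier of the kind described above can be sketched as one smoothed bigram model per class, scoring a document by the log-likelihood of its word sequence under each class model. Add-one smoothing and the toy corpus are assumptions for illustration:

```python
import math
from collections import Counter, defaultdict

# Sketch: one bigram language model per class; a document is assigned
# to the class whose model gives its word sequence the highest
# log-likelihood. Add-one smoothing handles unseen bigrams.

BOS = "<s>"  # sentence-start token

def train_lm(docs):  # docs: list of (tokens, label)
    bigrams = defaultdict(Counter)   # per-class bigram counts
    unigrams = defaultdict(Counter)  # per-class history counts
    vocab = set()
    for tokens, label in docs:
        prev = BOS
        for w in tokens:
            bigrams[label][(prev, w)] += 1
            unigrams[label][prev] += 1
            vocab.add(w)
            prev = w
    return bigrams, unigrams, vocab

def classify(tokens, bigrams, unigrams, vocab):
    def score(label):
        s, prev = 0.0, BOS
        for w in tokens:
            p = (bigrams[label][(prev, w)] + 1) / \
                (unigrams[label][prev] + len(vocab) + 1)
            s += math.log(p)
            prev = w
        return s
    return max(bigrams, key=score)

docs = [("the match was won".split(), "sport"),
        ("the price was high".split(), "finance")]
model = train_lm(docs)
print(classify("the match".split(), *model))  # -> 'sport'
```

Unlike a bag-of-words classifier, the score depends on word order: "the match" and "match the" receive different likelihoods.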
18.
19.
《Knowledge》2007,20(2):120-126
The naive Bayes classifier continues to be a popular learning algorithm for data mining applications due to its simplicity and linear run-time. Many enhancements to the basic algorithm have been proposed to help mitigate its primary weakness: the assumption that attributes are independent given the class. All of them improve the performance of naive Bayes at some expense in execution time and/or the simplicity of the final model. In this paper we present a simple filter method for setting attribute weights for use with naive Bayes. Experimental results show that naive Bayes with attribute weights rarely degrades the quality of the model compared to standard naive Bayes and, in many cases, improves it dramatically. The main advantages of this method over other approaches for improving naive Bayes are its run-time complexity and the fact that it maintains the simplicity of the final model.
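Attribute-weighted naive Bayes can be sketched as scaling each attribute's conditional log-probability by a per-attribute weight, i.e., P(x|c) = prod_a P(x_a|c)^(w_a). The filter that sets the weights is the paper's contribution; the sketch below replaces it with hand-picked weights on a toy dataset, both of which are assumptions:

```python
import math
from collections import Counter, defaultdict

# Sketch: naive Bayes over nominal attributes where each attribute's
# log-likelihood contribution is multiplied by a weight w_a. Weights
# here are hand-picked for illustration, not set by the paper's filter.

def train(rows, labels):
    prior = Counter(labels)
    cond = defaultdict(Counter)  # (attr_index, label) -> value counts
    for row, y in zip(rows, labels):
        for a, v in enumerate(row):
            cond[(a, y)][v] += 1
    return prior, cond

def classify(row, prior, cond, weights):
    n = sum(prior.values())
    def score(y):
        s = math.log(prior[y] / n)
        for a, v in enumerate(row):
            counts = cond[(a, y)]
            p = (counts[v] + 1) / (prior[y] + len(counts) + 1)  # Laplace
            s += weights[a] * math.log(p)   # attribute weight w_a
        return s
    return max(prior, key=score)

# Toy data: (outlook, windy) -> play; windy perfectly predicts the class.
rows = [("sunny", "no"), ("rain", "yes"), ("sunny", "yes"), ("rain", "no")]
labels = ["yes", "no", "no", "yes"]
prior, cond = train(rows, labels)
# Assumed weights: down-weight the uninformative outlook attribute.
print(classify(("sunny", "yes"), prior, cond, weights=[0.2, 1.0]))
```

Setting all weights to 1.0 recovers standard naive Bayes, which is why the weighted model keeps the original model's simplicity.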