Similar Documents
20 similar documents retrieved.
1.
Research on the Application of Fuzzy kNN to Text Classification   Cited: 1 (self-citations: 0, by others: 1)
Automatic text classification assigns class labels to new documents based on a training set of documents with pre-assigned class labels. This paper reports a series of experiments and analyses on the performance of the fuzzy kNN algorithm for text classification. On two corpora, one Chinese and one English, four well-known feature selection methods are used for feature selection, and the improved fuzzy kNN method is compared experimentally with classical kNN and the widely used similarity-weighted kNN. The results show that, under all of the feature selection methods, the algorithm mitigates the impact of unevenly distributed training samples on classification performance, improves classification accuracy, and to some extent reduces sensitivity to the value of k.
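To make the idea concrete, here is a minimal fuzzy kNN sketch in Python (not the paper's exact formulation): each of the k neighbors votes with a membership weight derived from its distance, which softens the effect of unevenly distributed training samples. The toy data, the fuzzifier m, and k are assumptions.

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=5, m=2.0, eps=1e-8):
    """Fuzzy kNN: neighbors vote with distance-based membership weights.

    m > 1 is the fuzzifier; larger m flattens the weights.
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]
    # Inverse-distance membership weights (the standard fuzzy kNN weighting).
    w = 1.0 / (dists[nn] ** (2.0 / (m - 1.0)) + eps)
    classes = np.unique(y_train)
    scores = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Toy usage with random "document vectors" and a deliberately imbalanced class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (5, 10))])
y = np.array([0] * 20 + [1] * 5)
print(fuzzy_knn_predict(X, y, rng.normal(3, 1, 10), k=5))  # expected: 1
```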

2.
A Text Classification Method Based on the Vector Space Model   Cited: 21 (self-citations: 1, by others: 21)
Text classification here refers to the process of automatically determining the category of a text according to its content, given a predefined category scheme. By analyzing the characteristics of web pages and the query information that Internet users are interested in, a machine-learning-based, language-independent text classification model is proposed. The key algorithms of the model exploit inter-character correlation, term frequency, page markup information, and shallow semantic analysis of user queries to extract web-page features, compute tunable term-frequency weighting parameters, and add discriminability information for feature terms. Feature vector spaces for the predefined classes are then built by training on in-class and out-of-class documents, and texts are classified against them. This method shows clear advantages when classifying similar texts.
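A hedged sketch of the general VSM pipeline the abstract describes (class feature vectors learned from training documents, cosine matching against them); the web-specific signals (markup, query analysis) are omitted, and plain TF-IDF stands in for the paper's tunable weighting scheme. The tiny corpus is illustrative only.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny illustrative corpus; a real system would use web pages.
docs = ["stock market shares rise", "team wins football match",
        "market trading stocks fall", "league football season goal"]
labels = ["finance", "sports", "finance", "sports"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

# One feature vector per predefined class: mean of the in-class documents.
classes = sorted(set(labels))
profiles = np.array([X[[l == c for l in labels]].mean(axis=0) for c in classes])

def classify(text):
    v = vec.transform([text]).toarray()[0]
    sims = profiles @ v / (np.linalg.norm(profiles, axis=1) * np.linalg.norm(v) + 1e-12)
    return classes[int(np.argmax(sims))]

print(classify("shares and stocks trading"))  # expected: finance
```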

3.
Using Wikipedia knowledge to improve text classification   Cited: 7 (self-citations: 7, by others: 0)
Text classification has been widely used to assist users with the discovery of useful information from the Internet. However, traditional classification methods are based on the “Bag of Words” (BOW) representation, which only accounts for term frequency in the documents, and ignores important semantic relationships between key terms. To overcome this problem, previous work attempted to enrich text representation by means of manual intervention or automatic document expansion. The achieved improvement is unfortunately very limited, due to the poor coverage capability of the dictionary, and to the ineffectiveness of term expansion. In this paper, we automatically construct a thesaurus of concepts from Wikipedia. We then introduce a unified framework to expand the BOW representation with semantic relations (synonymy, hyponymy, and associative relations), and demonstrate its efficacy in enhancing previous approaches for text classification. Experimental results on several data sets show that the proposed approach, integrated with the thesaurus built from Wikipedia, can achieve significant improvements with respect to the baseline algorithm.
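The expansion step is easy to sketch: add semantically related terms to the BOW counts at reduced weight. The tiny thesaurus dict below is a hypothetical stand-in for the concepts the paper mines automatically from Wikipedia, and the 0.5 weight for added terms is an arbitrary choice.

```python
from collections import Counter

# Hypothetical thesaurus: term -> related concepts (synonymy, hyponymy, ...).
# In the paper this is constructed automatically from Wikipedia.
THESAURUS = {
    "car": ["automobile", "vehicle"],
    "automobile": ["car", "vehicle"],
    "physician": ["doctor"],
}

def expanded_bow(tokens, expansion_weight=0.5):
    """BOW counts enriched with semantically related terms at reduced weight."""
    bow = Counter(tokens)
    enriched = dict(bow)
    for term, count in bow.items():
        for related in THESAURUS.get(term, []):
            enriched[related] = enriched.get(related, 0) + expansion_weight * count
    return enriched

print(expanded_bow("the car dealer sold a car".split()))
# 'automobile' and 'vehicle' now appear even though the text never uses them.
```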

4.
Text categorization refers to the task of assigning pre-defined classes to text documents based on their content. The k-NN algorithm is one of the top-performing classifiers on text data. However, there has been little research on the use of different voting methods over text data. Moreover, when a huge amount of training data is available online, the response speed slows down, since a test document must compute its distance to every training example. On the other hand, the min–max-modular k-NN (M3-k-NN) has been applied to large-scale text categorization; it achieves good performance and responds faster in a parallel computing environment. In this paper, we investigate five different voting methods for k-NN and M3-k-NN. The experimental results and analysis show that the Gaussian voting method achieves the best performance among all voting methods for both k-NN and M3-k-NN. In addition, M3-k-NN needs a smaller value of k to achieve better performance than k-NN, and is thus faster than k-NN in a parallel computing environment. The work of K. Wu and B. L. Lu was supported in part by the National Natural Science Foundation of China under the grants NSFC 60375022 and NSFC 60473040, and the Microsoft Laboratory for Intelligent Computing and Intelligent Systems of Shanghai Jiao Tong University.
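A sketch of the Gaussian voting rule the paper finds best: each neighbor votes with weight exp(-d²/(2σ²)) instead of counting equally, so close neighbors dominate. The bandwidth σ and k are assumed parameters, not values from the paper.

```python
import numpy as np

def knn_gaussian_vote(X_train, y_train, x, k=15, sigma=1.0):
    """kNN where neighbor i votes with weight exp(-d_i^2 / (2 sigma^2))."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] ** 2 / (2.0 * sigma ** 2))
    classes = np.unique(y_train)
    votes = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(votes)]

# Usage mirrors plain kNN; only the vote aggregation changes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(2, 1, (30, 10))])
y = np.array([0] * 30 + [1] * 30)
print(knn_gaussian_vote(X, y, rng.normal(2, 1, 10)))  # expected: 1
```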

5.
Hierarchical Classification of Chinese Text Based on Latent Semantic Analysis   Cited: 9 (self-citations: 0, by others: 9)
Motivated by the need for automatic classification of web texts, this work improves the class-centroid-vector classification method to address two problems of VSM-based classification: the assumption that terms are independent and the excessively high dimensionality of the term space. SVD decomposition, as used in LSA, is applied to obtain semantic feature vectors for web documents, and classification is then performed on this basis, which speeds up classification and its post-processing without harming classification accuracy. A prototype system was designed and implemented.
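A compact sklearn sketch of the described pipeline: TF-IDF vectors reduced via truncated SVD (the LSA step) and classified by class-centroid vectors. The toy corpus and the tiny number of components are placeholders; a real Chinese web-text system would use a far larger corpus and on the order of 100–300 components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for a large (segmented) web-text collection.
docs = ["stock market shares rise today", "football team wins the match",
        "stocks fall as market drops", "the league match ends in a goal"]
labels = ["finance", "sports", "finance", "sports"]

# TF-IDF -> truncated SVD (semantic feature vectors) -> class centroids.
model = make_pipeline(TfidfVectorizer(),
                      TruncatedSVD(n_components=2),  # ~100-300 on real data
                      NearestCentroid())
model.fit(docs, labels)
print(model.predict(["market shares drop"]))  # expected: ['finance']
```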

6.
Text representation is the essential task of transforming textual input into features that can later be used for further Text Mining and Information Retrieval tasks. The commonly used text representation models are Bag-of-Words (BOW) and the N-gram model. Nevertheless, some known issues of these models should be addressed: inaccurate semantic representation of text and the high dimensionality of word combinations. A pattern-based model named Frequent Adjacent Sequential Pattern (FASP) is introduced to represent text using sets of adjacent word sequences that occur frequently across the document collection. The purpose of this study is to discover textual-pattern similarity between documents that can later be converted into a set of rules describing the main news event. FASP is based on Pattern-Growth's divide-and-conquer strategy; the main difference between FASP and the prior technique lies in the pattern generation phase. The approach is tested against the BOW and N-gram text representation models on Malay- and English-language news datasets with different term weightings in the Vector Space Model (VSM). The findings demonstrate that the FASP model performs promisingly in finding similarities between documents, with an average vector-size reduction of 34% against BOW and 77% against the N-gram model on the Malay dataset. Results on the English dataset are consistent, indicating that the FASP approach is also language-independent.
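A simplified sketch of the core idea (frequent adjacent word sequences as features). Real FASP uses a pattern-growth strategy; this brute-force stand-in just enumerates adjacent n-grams and keeps those meeting a minimum document support. min_support and max_len are assumptions.

```python
from collections import defaultdict

def frequent_adjacent_patterns(docs, min_support=2, max_len=4):
    """Return adjacent word sequences appearing in >= min_support documents.

    Brute-force stand-in for FASP's pattern-growth mining: enumerate
    adjacent n-grams per document, keep those with enough document support.
    """
    support = defaultdict(set)
    for doc_id, text in enumerate(docs):
        words = text.split()
        for n in range(2, max_len + 1):
            for i in range(len(words) - n + 1):
                support[tuple(words[i:i + n])].add(doc_id)
    return {p: len(ids) for p, ids in support.items() if len(ids) >= min_support}

docs = ["interest rates rise again", "central bank raises interest rates",
        "interest rates rise after bank meeting"]
print(frequent_adjacent_patterns(docs))
# {('interest','rates'): 3, ('rates','rise'): 2, ('interest','rates','rise'): 2}
```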

7.
Analysis of Visual Word Ambiguity in the Bag-of-Words Model   Cited: 4 (self-citations: 0, by others: 4)
刘扬闻 霍宏 方涛 《计算机工程》2011,37(19):204-206,209
In the traditional bag-of-words (BOW) model, visual words are obtained by unsupervised clustering of the feature vectors of image patches, without considering the words' semantic information or semantic properties. To address this problem, a method for analyzing visual word ambiguity based on text classification is proposed. The traditional BOW model is used to generate an initial visual word vocabulary; three feature selection measures from text classification (document frequency, the χ2 statistic, and information gain) are used to analyze the semantic properties of the words; ambiguous words carrying little category information are removed; and a support vector machine classifier performs the image classification. Experimental results show that the method achieves high classification accuracy.
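A sketch of the filtering step using the χ2 criterion (one of the three measures named). Here X would be the image-level visual-word histogram matrix and y the image labels; both are random placeholders, and the keep-ratio is an assumption.

```python
import numpy as np
from sklearn.feature_selection import chi2

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(100, 500)).astype(float)  # 100 images x 500 visual words
y = rng.integers(0, 3, size=100)                        # 3 image classes

scores, _ = chi2(X, y)            # low chi2 => word carries little class information
keep = np.argsort(scores)[-300:]  # keep the 300 most discriminative visual words
X_pruned = X[:, keep]
print(X_pruned.shape)             # (100, 300); feed this to an SVM classifier
```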

8.
In this paper, a corpus-based thesaurus and WordNet were used to improve text categorization performance. We employed the k-NN algorithm and back-propagation neural networks (BPNN) as the classifiers. The k-NN is a simple and well-known approach to categorization, and BPNNs have been widely used in the categorization and pattern recognition fields. However, the standard BPNN has some generally acknowledged limitations, such as slow training and a tendency to become trapped in local minima. To alleviate these problems, two modified versions, the Morbidity-neurons Rectified BPNN (MRBP) and the Learning Phase Evaluation BPNN (LPEBP), were considered and applied to text categorization. We conducted experiments on both the standard Reuters-21578 dataset and the 20 Newsgroups dataset. Experimental results showed that our proposed methods achieved high categorization effectiveness as measured by precision, recall and F-measure.
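For reference, a minimal standard one-hidden-layer BPNN in numpy — the baseline the paper improves on; the MRBP and LPEBP modifications are not reproduced here. Layer size, learning rate and epoch count are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpnn(X, Y, hidden=32, lr=0.5, epochs=500, seed=0):
    """Standard backpropagation for a single hidden layer (MSE loss)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.1, (hidden, Y.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)            # forward pass
        O = sigmoid(H @ W2)
        dO = (O - Y) * O * (1 - O)     # backward pass (delta rule)
        dH = (dO @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dO / len(X)   # gradient steps
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2

# Toy run: 2 classes, one-hot targets, random "document vectors".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (30, 20)), rng.normal(1, 1, (30, 20))])
Y = np.repeat(np.eye(2), 30, axis=0)
W1, W2 = train_bpnn(X, Y)
pred = sigmoid(sigmoid(X @ W1) @ W2).argmax(1)
print((pred == Y.argmax(1)).mean())   # training accuracy, ~1.0 on this toy data
```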

9.
In this article we introduce and validate an approach to single-label multi-class document categorization based on text content features. The approach uses a statistical property of Principal Component Analysis, which minimizes the reconstruction error of the training documents used to compute a low-rank category transformation matrix. Such a matrix transforms the original set of training documents of a given category into a new low-rank space and then optimally reconstructs them in the original space with minimum reconstruction error. The proposed method, called the Minimizer of the Reconstruction Error (mRE) classifier, exploits this property and extends and applies it to new, unseen test documents. Experiments on four multi-class text categorization datasets demonstrate the stable and generally better performance of the proposed approach in comparison with other popular classification methods.
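The decision rule is simple enough to sketch directly: fit a per-class low-rank PCA basis, reconstruct a test document with each class's basis, and assign the class with the smallest reconstruction error. The rank r and the toy data are assumptions.

```python
import numpy as np

def fit_class_bases(X, y, r=5):
    """Per-class mean and top-r principal directions via SVD."""
    bases = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        bases[c] = (mu, Vt[:r])
    return bases

def predict_min_reconstruction_error(bases, x):
    """Assign the class whose low-rank basis reconstructs x best."""
    best, best_err = None, np.inf
    for c, (mu, V) in bases.items():
        z = x - mu
        err = np.linalg.norm(z - V.T @ (V @ z))  # residual after projection
        if err < best_err:
            best, best_err = c, err
    return best

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 30)), rng.normal(2, 1, (40, 30))])
y = np.array([0] * 40 + [1] * 40)
print(predict_min_reconstruction_error(fit_class_bases(X, y), X[0]))  # expected: 0
```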

10.
For more than a decade, text categorization has been an active field of research in the machine learning community. Most approaches are based on term occurrence frequency, and the performance of such surface-based methods can decrease when the texts are too complex, i.e., ambiguous. One alternative is semantic-based approaches, which process textual documents according to their meaning. Furthermore, research in text categorization has mainly focused on “flat texts”, whereas many documents are now semi-structured, especially in the XML format. In this paper, we propose a semantic kernel for semi-structured biomedical documents. The semantic meanings of words are extracted using the Unified Medical Language System (UMLS) framework. The kernel, with an SVM classifier, has been applied to a text categorization task on a medical corpus of free-text documents. The results show that the semantic kernel outperforms the linear kernel and the naive Bayes classifier. Moreover, this kernel was ranked in the top 10 of 44 classification methods at the 2007 Computational Medicine Center (CMC) Medical NLP International Challenge.
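The kernel machinery can be sketched generically: a semantic smoothing kernel k(dᵢ, dⱼ) = dᵢ S Sᵀ dⱼᵀ, where S maps terms to concepts, plugged into an SVM via a precomputed Gram matrix. Here S is a random placeholder; in the paper it is derived from UMLS, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((60, 200))                 # 60 docs x 200 terms (placeholder)
y = rng.integers(0, 2, size=60)
S = rng.random((200, 40))                 # term -> concept map; UMLS-derived in the paper

P = X @ S                                 # documents projected into concept space
K = P @ P.T                               # semantic kernel Gram matrix

clf = SVC(kernel="precomputed").fit(K, y)
# To score new documents: K_test = (X_test @ S) @ P.T
print(clf.predict(K[:5]))
```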

11.
Harun Uğuz 《Knowledge》2011,24(7):1024-1032
Text categorization is widely used when organizing documents in digital form. Due to the increasing number of such documents, automated text categorization has become more promising in the last ten years. A major problem of text categorization is its large number of features, most of which are irrelevant noise that can mislead the classifier. Therefore, feature selection is often used in text categorization to reduce the dimensionality of the feature space and to improve performance. In this study, two-stage feature selection and feature extraction are used to improve the performance of text categorization. In the first stage, each term in the documents is ranked according to its importance for classification using the information gain (IG) method. In the second stage, the genetic algorithm (GA) and principal component analysis (PCA) feature selection and feature extraction methods are applied separately to the terms ranked in decreasing order of importance, and dimension reduction is carried out. Thereby, during text categorization, terms of less importance are ignored and feature selection and extraction are applied only to the most important terms, reducing the computational time and complexity of categorization. To evaluate the effectiveness of dimension reduction methods in the proposed model, experiments are conducted with the k-nearest neighbour (KNN) and C4.5 decision tree algorithms on the Reuters-21578 and Classic3 dataset collections. The experimental results show that the proposed model achieves high categorization effectiveness as measured by precision, recall and F-measure.
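A sketch of the two-stage pipeline with PCA as the second stage (the GA alternative is omitted). Information gain is computed here as the mutual information between term and class, which yields the same ranking; the counts, sizes, and cutoffs are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 1000)).astype(float)  # doc-term counts (placeholder)
y = rng.integers(0, 4, size=200)

# Stage 1: rank terms by information gain (mutual information with the class).
ig = mutual_info_classif(X, y, discrete_features=True, random_state=0)
top = np.argsort(ig)[-200:]                # keep the 200 most informative terms

# Stage 2: PCA on the reduced term set for feature extraction.
X_reduced = PCA(n_components=50).fit_transform(X[:, top])
print(X_reduced.shape)                     # (200, 50); feed to KNN or a C4.5-style tree
```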

12.
Computing Word Semantic Similarity by Community Mining in Wikipedia   Cited: 1 (self-citations: 0, by others: 1)
Word semantic similarity computation is widely used in natural language processing tasks such as word sense disambiguation, semantic information retrieval, and automatic text classification. Unlike traditional methods, the proposed approach computes word semantic similarity via community mining in Wikipedia. Rather than using the text content of word pages, it exploits Wikipedia's vast network of category-labeled word pages, applying the topic-based community detection algorithm HITS to this page network to obtain the community of each word page. On that basis, the semantic similarity of two words is assessed from three aspects: (1) the semantic relation between the word pages; (2) the semantic relation between the word-page communities; (3) the semantic relation between the categories to which the communities belong. Experiments on the standard WordSimilarity-353 dataset show that the algorithm is feasible and slightly outperforms several classical algorithms; in the best case its Spearman correlation coefficient reaches 0.58.
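The HITS step is easy to sketch: power iteration of hub and authority scores over a directed link graph. The 4-page graph below is a hypothetical stand-in for a Wikipedia word-page subnetwork.

```python
import numpy as np

def hits(adj, iters=50):
    """Hub/authority scores by power iteration on adjacency matrix adj.

    adj[i, j] = 1 if page i links to page j.
    """
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Toy 4-page link graph (placeholder for a Wikipedia page subnet).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
hubs, auths = hits(A)
print(auths.round(3))  # page 2 attracts the most links -> highest authority
```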

13.
A Language-Independent Text Classification Method   Cited: 44 (self-citations: 4, by others: 40)
Text classification is the process of automatically determining the category of a text according to its content, given a predefined category scheme. This paper proposes a machine-learning-based, language-independent text classification model and describes its feature extraction, classifiers, and evaluation methods in detail. The model has been implemented on news corpora in two languages, Chinese and Japanese, and achieves good classification performance.

14.
Concept-Based Extraction of Text Category Features and Fuzzy Text Matching   Cited: 15 (self-citations: 1, by others: 14)
Text feature extraction and text classification are key foundations of current intelligent information service systems. This paper presents a new method for category feature extraction and text matching. Term feature weights are first computed comprehensively; then, based on the term-to-concept mapping of a concept network, the feature weights are transformed from term space to concept space and clipped. On this basis, intra-class and inter-class statistical analysis of the concepts yields two vectors, the mean and the variance of the category features, and texts are matched to categories via a fuzzy distance computation. The method overcomes shortcomings of the traditional IDF method, effectively extracts text category features at the concept level, and improves the accuracy of automatic text classification.
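The matching step (class mean and variance vectors plus a fuzzy distance) can be sketched as a variance-normalized distance. The abstract does not give the exact fuzzy distance formula, so this particular normalization is an assumption, as is the toy data.

```python
import numpy as np

def class_statistics(X, y):
    """Mean and variance vectors per class (here, in concept space)."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)  # smooth zero variances
    return stats

def fuzzy_match(stats, x):
    """Pick the class with the smallest variance-normalized distance to x."""
    dists = {c: np.sqrt((((x - mu) ** 2) / var).mean())
             for c, (mu, var) in stats.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(2, 3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
print(fuzzy_match(class_statistics(X, y), np.full(8, 2.0)))  # expected: 1
```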

15.
An Application of Neural Networks to Text Classification   Cited: 5 (self-citations: 1, by others: 5)
Existing text classification methods have shortcomings in knowledge acquisition. For a particular application requirement, this paper proposes a text classification method that combines artificial neural networks with text classification. Texts are described by a vector space of feature terms; exploiting the strong learning capability of artificial neural networks, classification knowledge is extracted by training on a set of text samples, and the trained network and the acquired knowledge are then used to classify texts.

16.
Due to the exponential growth of documents on the Internet and the emergent need to organize them, the automated categorization of documents into predefined labels has received ever-increasing attention in recent years. A wide range of supervised learning algorithms has been introduced to deal with text classification. Among these classifiers, K-Nearest Neighbors (KNN) is widely used in the text categorization community because of its simplicity and efficiency. However, KNN still suffers from inductive biases or model misfits that result from its assumptions, such as the presumption that training data are evenly distributed among all categories. In this paper, we propose a new refinement strategy for the KNN classifier, which we call DragPushing. Experiments on three benchmark evaluation collections show that DragPushing significantly improves the performance of the KNN classifier.
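A hedged sketch of the general drag-pushing idea applied to prototype refinement: for each misclassified training document, drag the true class's prototype toward it and push the wrongly predicted class's prototype away. The paper's exact update rule for KNN may differ; the learning rate and toy data are assumptions.

```python
import numpy as np

def drag_pushing(X, y, prototypes, lr=0.1, epochs=10):
    """Error-driven refinement of class prototypes (drag-pushing idea).

    prototypes: dict class -> vector (e.g., initial class centroids).
    """
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = min(prototypes, key=lambda c: np.linalg.norm(xi - prototypes[c]))
            if pred != yi:
                prototypes[yi] += lr * (xi - prototypes[yi])      # drag toward true class
                prototypes[pred] -= lr * (xi - prototypes[pred])  # push from wrong class
    return prototypes

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(1.5, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
protos = {c: X[y == c].mean(axis=0) for c in (0, 1)}  # centroid initialization
protos = drag_pushing(X, y, protos)
```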

17.
Text classification is usually based on constructing a model, through learning from training examples, to automatically classify text documents. However, as text document repositories grow rapidly, the storage requirements and computational cost of model learning become higher. Instance selection is one solution to these limitations; its aim is to reduce the data size by filtering out noisy data from a given training dataset. In this paper, we introduce a novel algorithm for this task, namely a biological-based genetic algorithm (BGA). BGA fits a “biological evolution” into the evolutionary process, where the most streamlined process also complies with reasonable rules. In other words, after long-term evolution, organisms find the most efficient way to allocate resources and evolve; consequently, we can closely simulate this natural evolution in an algorithm that is both efficient and effective. Experimental results on the TechTC-100 and Reuters-21578 datasets show that BGA outperforms five state-of-the-art algorithms. In particular, using BGA to select text documents not only yields the largest dataset reduction rate, but also requires the least computational time. Moreover, BGA allows the k-NN and SVM classifiers to provide similar or slightly better classification accuracy than GA.
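A plain GA sketch of the instance-selection task itself (chromosome = bitmask over training documents, fitness = 1-NN validation accuracy minus a small size penalty). BGA's biological refinements are not reproduced, and all GA hyperparameters are assumptions.

```python
import numpy as np

def nn1_accuracy(X_tr, y_tr, X_val, y_val):
    """Accuracy of a 1-NN classifier using the selected training subset."""
    if len(X_tr) == 0:
        return 0.0
    d = np.linalg.norm(X_val[:, None, :] - X_tr[None, :, :], axis=2)
    return float((y_tr[d.argmin(axis=1)] == y_val).mean())

def ga_instance_selection(X, y, X_val, y_val, pop=30, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    P = rng.random((pop, n)) < 0.5                      # random bitmask population

    def fitness(mask):
        acc = nn1_accuracy(X[mask], y[mask], X_val, y_val)
        return acc - 0.1 * mask.mean()                  # slight reward for small subsets

    for _ in range(gens):
        f = np.array([fitness(m) for m in P])
        parents = P[np.argsort(f)[-pop // 2:]]          # truncation selection
        cut = rng.integers(1, n, size=pop // 2)
        kids = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])   # one-point crossover
        kids ^= rng.random(kids.shape) < 0.01           # bit-flip mutation
        P = np.vstack([parents, kids])
    f = np.array([fitness(m) for m in P])
    return P[f.argmax()]                                # best training-instance mask

rng = np.random.default_rng(1)
X_tr = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y_tr = np.array([0] * 30 + [1] * 30)
X_val = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(2, 1, (10, 5))])
y_val = np.array([0] * 10 + [1] * 10)
mask = ga_instance_selection(X_tr, y_tr, X_val, y_val)
print(mask.sum(), "of", len(X_tr), "training documents kept")
```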

18.
Sharing sustainable and valuable knowledge among knowledge workers is a fundamental aspect of knowledge management. In organizations, knowledge workers usually have personal folders in which they organize and store needed codified knowledge (textual documents) in categories. In such personal folder environments, providing knowledge workers with needed knowledge from other workers’ folders is important because it increases the workers’ productivity and the possibility of reusing and sharing knowledge. Conventional recommendation methods can recommend relevant documents to workers, but they do so without considering whether the items are assigned to the appropriate category in the target user’s personal folders. In this paper, we propose novel document recommendation methods, including content-based filtering and categorization, collaborative filtering and categorization, and hybrid methods, which integrate text categorization techniques to recommend documents to a target worker’s personalized categories. Our experimental results show that the hybrid methods outperform the pure content-based and the collaborative filtering and categorization methods. The proposed methods not only proactively notify knowledge workers about relevant documents held by their peers, but also facilitate push-mode knowledge sharing.

19.
An Ontology-Based Heterogeneous Text Classification System   Cited: 3 (self-citations: 0, by others: 3)
赵国涛 何钦铭 《计算机工程》2004,30(21):123-125
An ontology-based heterogeneous text classification system is proposed. A structural ontology is used to eliminate structural differences among text documents, and a domain ontology is introduced into the classification system, making classification more accurate and efficient and the classification rules easier to understand. Extracting effective concepts with the COSA algorithm also greatly reduces the number of key terms, saving computational cost.

20.
Web Mining Research Based on Kernel Methods   Cited: 2 (self-citations: 0, by others: 2)
Classification methods based on the term space have difficulty handling the high dimensionality of text and capturing its semantic concepts. Using kernel principal component analysis and support vector machines, a new method is proposed that extracts semantic concepts by reducing the dimensionality of the text data and classifies texts based on those semantic concepts. Documents are first mapped into a high-dimensional linear feature space to eliminate nonlinear features; principal component analysis is then performed in the mapped space to remove correlations among variables, achieving dimensionality reduction and semantic concept extraction and yielding a semantic concept space for the documents; finally, a support vector machine classifies the documents in that semantic concept space. With a newly defined kernel function, the mapping to the semantic concept space need not be computed explicitly, so concept-based classification can be carried out directly in the original document vector space. A kernelized GHA method adaptively and iteratively solves for the eigenvectors and eigenvalues of the kernel matrix, making the method suitable for large-scale text classification problems. Experimental results show that the method is effective in improving text classification performance.
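The pipeline can be sketched with off-the-shelf components: kernel PCA extracts a low-dimensional "semantic concept" space and a linear SVM classifies in it. sklearn's KernelPCA stands in for the paper's kernelized GHA, which computes the same eigenvectors iteratively for large corpora; the synthetic data and all hyperparameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic stand-in for TF-IDF document vectors.
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=20, random_state=0)

# Kernel PCA -> "semantic concept space" -> linear SVM on the projections.
model = make_pipeline(KernelPCA(n_components=30, kernel="rbf", gamma=0.01),
                      LinearSVC())
print(model.fit(X[:200], y[:200]).score(X[200:], y[200:]))  # held-out accuracy
```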
