Similar Documents
20 similar documents retrieved (search time: 140 ms)
1.
Large-scale text analysis is an important means of understanding big data and discovering its value. Within it, text classification, a classic natural language processing problem, has attracted wide attention from researchers, and the excellent performance of artificial neural networks on text analysis has made them the main current research direction. Against this background, this paper reviews the development of mainstream methods applied to text classification, including convolutional neural networks, recurrent neural networks, recursive neural networks, and pre-trained models, and compares the classification performance of different models on commonly used datasets. The comparison shows that using neural network architectures to learn text features automatically avoids laborious manual feature engineering and improves classification performance. On this basis, future research directions for text classification are discussed.

2.
Text classification is an important means by which knowledge management systems organize, store, and retrieve knowledge effectively, but text classification based on the term vector space model does not consider the characteristics of knowledge management systems and thus cannot meet their need for multi-class classification. This paper proposes a new ontology-based text classification algorithm that uses the ontology set of a knowledge management system to classify at multiple concept granularities. Experiments show that the method achieves good classification performance.

3.
Text classification is a research hotspot in natural language processing, applied mainly in areas such as public-opinion monitoring and news categorization. In recent years, artificial neural network techniques have performed well on many natural language processing tasks, and applying them to text classification has produced many results. In deep-learning-based text classification, two important research directions are numerical text representation and deep-learning classification techniques. This paper systematically analyzes and summarizes the key word-embedding techniques for text representation and the principles and research status of deep learning methods applied to text classification, and, in view of current technical developments, analyzes the shortcomings and development trends of text classification methods.

4.
Sentiment Classification Method for Chinese Microblogs Based on Emotion Knowledge   (cited 1 time: 0 self-citations, 1 external)
庞磊  李寿山  周国栋 《计算机工程》2012,38(13):156-158,162
Through an analysis of sentiment information in Sina Weibo texts, this paper proposes an unsupervised sentiment classification method based on emotion knowledge. Two kinds of emotion knowledge, emotion words and emoticons, are used to filter and automatically label a large unannotated microblog corpus; the automatically labeled corpus then serves as a training set for building a microblog sentiment classifier that automatically assigns sentiment polarity to microblog texts. Experimental results show that the method achieves good performance on sentiment polarity classification of microblog texts.
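The auto-labeling step described above can be sketched as follows; the cue lists and the majority rule are hypothetical placeholders for illustration, not the paper's actual emotion lexicons:

```python
# Minimal sketch of lexicon-based auto-labeling: texts whose emotion-word and
# emoticon counts clearly favor one polarity are labeled and kept as training
# data; ambiguous texts are discarded. Cue lists here are illustrative only.
POS_CUES = {"开心", "喜欢", "[哈哈]"}   # hypothetical positive cues
NEG_CUES = {"难过", "讨厌", "[泪]"}     # hypothetical negative cues

def auto_label(text):
    """Return 'pos', 'neg', or None (filtered out) based on cue counts."""
    pos = sum(text.count(w) for w in POS_CUES)
    neg = sum(text.count(w) for w in NEG_CUES)
    if pos > neg:
        return "pos"
    if neg > pos:
        return "neg"
    return None  # ambiguous: excluded from the auto-labeled corpus

corpus = ["今天很开心[哈哈]", "真讨厌, 好难过", "天气"]
labeled = [(t, auto_label(t)) for t in corpus if auto_label(t)]
```

The labeled pairs would then feed a conventional supervised classifier.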

5.
[Objective] Text classification is a fundamental research problem in natural language processing and supports many downstream tasks, such as document retrieval, recommender systems, and question answering. Inspired by the role knowledge graphs play in textual reasoning, this paper explores applying knowledge graphs to text classification, using their reasoning ability to improve classification performance while reducing dependence on labeled training data. [Methods] This paper proposes a knowledge-graph-based graph-matching…

6.
Sharing multilingual information resources on the Internet through a cross-language classification mechanism is an important approach to knowledge mining. This paper presents a model and an implementation of bilingual cross-language classification. The main idea is to avoid machine translation and manual annotation: a text feature extraction mechanism extracts category and document feature terms, and translation-mapping rules based on concept expansion automatically generate category and document feature vectors. On this basis, latent semantic analysis unifies the bilingual texts at the semantic level, and documents are classified by the semantic similarity between categories and documents, yielding relatively high accuracy.

7.
To address the high time complexity of existing text-similarity measures based on semantic knowledge rules, this paper proposes a similarity measure based on a classification dictionary. The Chinese lexical analysis system ICTCLAS segments the texts, the TF×IDF method extracts document keywords, the classification dictionary is traversed to obtain keyword codes, and the similarity between the original texts is measured by the closeness of their keyword codes. Semantic-rule-based and statistics-based similarity measures are selected as baselines, and the measures are validated through traditional clustering and KNN classification. Numerical experiments show that the new method achieves good results in both the clustering and the classification experiments, with better time efficiency than the other semantic-analysis-based similarity measures.
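The TF×IDF keyword-extraction step can be sketched as follows, assuming the texts have already been segmented (by ICTCLAS or any tokenizer) into token lists; the scoring is the standard TF×IDF formulation, sketched for illustration:

```python
import math
from collections import Counter

def tfidf_keywords(docs, k=2):
    """Return the top-k keywords per document by TF*IDF.

    docs: list of pre-segmented documents (lists of tokens).
    """
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(w for doc in docs for w in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        scores = {w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf}
        out.append([w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]])
    return out

keywords = tfidf_keywords(
    [["文本", "分类", "方法"], ["文本", "聚类", "方法"], ["图像", "识别"]], k=1)
```

Terms appearing in every document get an IDF of zero and are effectively dropped as keywords.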

8.
马建刚  张鹏  马应龙 《计算机应用》2019,39(5):1293-1298
With the advance of intelligent construction in China's judicial organs, the massive judicial documents accumulated through informatization provide a data foundation for intelligent judicial services. Similarity analysis of judicial documents enables similar-case recommendation, giving judicial staff intelligent decision support and improving the quality and efficiency of case handling. General-purpose text classification methods do not consider the complex structure and knowledge semantics of texts in the judicial domain and therefore classify judicial texts poorly. To address this, an efficient judicial document classification method based on knowledge-block summaries and Word Mover's Distance (WMD) is proposed. First, a domain ontology model is built for judicial documents; based on this ontology, information extraction obtains a summary of the core knowledge blocks in each document. Then WMD computes judicial document similarity over these knowledge-block summaries. Finally, a K-nearest-neighbor algorithm classifies the judicial texts. Using document sets for two typical charges as experimental data and comparing against conventional WMD document similarity, the proposed method clearly improves classification accuracy (by 5.5 and 9.9 percentage points, respectively) while also reducing classification time (speedups of 52.4x and 89.1x, respectively).
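The final KNN step can be sketched as follows. Since WMD itself requires pretrained word embeddings, this sketch substitutes a simple bag-of-words cosine similarity as a stand-in; the training examples and charge labels are illustrative, not the paper's data:

```python
from collections import Counter

def cosine_sim(a, b):
    """Bag-of-words cosine similarity over two token lists (WMD stand-in)."""
    ca, cb = Counter(a), Counter(b)
    num = sum(ca[w] * cb[w] for w in ca)
    den = (sum(v * v for v in ca.values()) ** 0.5) * \
          (sum(v * v for v in cb.values()) ** 0.5)
    return num / den if den else 0.0

def knn_classify(doc, train, k=3):
    """train: list of (token_list, label); majority vote over k most similar."""
    neighbors = sorted(train, key=lambda ex: -cosine_sim(doc, ex[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Hypothetical knowledge-block summaries, already tokenized.
train = [(["盗窃", "财物"], "theft"),
         (["盗窃", "入室"], "theft"),
         (["诈骗", "合同"], "fraud")]
```

In the paper's method, `cosine_sim` would be replaced by (negated) WMD over the knowledge-block summaries.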

9.
With the rapid development of the Web, online information is increasingly rich, and discovering potentially useful knowledge in it will be an important future direction. After analyzing the drawbacks of the vector representation, this paper proposes classifying Web texts using document fingerprints and then applying the k-means algorithm to cluster the classified texts to obtain the desired results. Through a text-mining model, an operational mining method is established.

10.
Based on a broad survey of existing automatic text classification methods, artificial neural networks are found to have strong self-learning and self-organizing abilities, associative memory, and inference capabilities, giving them unique advantages for automatic text classification. This paper therefore designs a neural-network-based automatic text classification system. The system adopts a modular design, with key algorithms and functions encapsulated in modules, giving it good extensibility.

11.
Text categorization (TC) is the automated assignment of text documents to predefined categories based on document contents. TC has been an application domain for many learning approaches, which have proved effective. Nevertheless, TC still poses many challenges to machine learning. In this paper, we suggest, for text categorization, integrating external WordNet lexical information to supplement training data for a semi-supervised clustering algorithm which (i) uses a finite design set of labeled data to (ii) help agglomerative hierarchical clustering algorithms (AHC) partition a finite set of unlabeled data and then (iii) terminates without the capacity to classify other objects. This algorithm is the "semi-supervised agglomerative hierarchical clustering algorithm" (ssAHC). Our experiments use the Reuters-21578 database and consist of binary classifications for categories selected from the 89 TOPICS classes of the Reuters collection. Using the vector space model (VSM), each document is represented by its original feature vector augmented with an external feature vector generated using WordNet. We verify experimentally that the integration of WordNet helps ssAHC improve its performance, effectively addresses the classification of documents into categories with few training documents, and does not interfere with the use of training data. © 2001 John Wiley & Sons, Inc.

12.
Using a category feature database containing all features, the distance-based Rocchio and Fast TC algorithms and the probabilistic Naive Bayes (NB) algorithm are applied to quantitatively analyze how four factors, stop words, stemming, digits, and test-document length, affect text classification accuracy. Experiments show that stop-word filtering is a lossless feature-compression technique, and that stemming, although it slightly reduces classification accuracy, still keeps feature compression feasible. The semantic association between digits and other words improves the classification accuracy of the Rocchio and Fast TC algorithms, but reduces that of NB, which treats features as mutually independent. The accuracy trends of the three algorithms as the number of keywords taken from each test document varies illustrate how the useful information and the noise carried by features affect classification accuracy.
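For reference, the Rocchio classifier mentioned above reduces, in its basic form, to comparing a document against per-class centroid vectors; a minimal sketch with illustrative data (not the paper's feature database):

```python
from collections import Counter, defaultdict

def rocchio_train(train):
    """Build one centroid (average term-frequency vector) per class.

    train: list of (token_list, label).
    """
    sums, counts = defaultdict(Counter), Counter()
    for tokens, label in train:
        sums[label].update(tokens)
        counts[label] += 1
    return {lab: {w: c / counts[lab] for w, c in cnt.items()}
            for lab, cnt in sums.items()}

def rocchio_predict(tokens, centroids):
    """Assign the class whose centroid scores highest against the document."""
    tf = Counter(tokens)
    score = lambda cent: sum(tf[w] * cent.get(w, 0.0) for w in tf)
    return max(centroids, key=lambda lab: score(centroids[lab]))
```

Real implementations typically use TF-IDF-weighted, length-normalized vectors rather than raw term frequencies.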

13.
Sharing sustainable and valuable knowledge among knowledge workers is a fundamental aspect of knowledge management. In organizations, knowledge workers usually have personal folders in which they organize and store needed codified knowledge (textual documents) in categories. In such personal folder environments, providing knowledge workers with needed knowledge from other workers' folders is important because it increases the workers' productivity and the possibility of reusing and sharing knowledge. Conventional recommendation methods can be used to recommend relevant documents to workers; however, those methods recommend knowledge items without considering whether the items are assigned to the appropriate category in the target user's personal folders. In this paper, we propose novel document recommendation methods, including content-based filtering and categorization, collaborative filtering and categorization, and hybrid methods, which integrate text categorization techniques, to recommend documents to a target worker's personalized categories. Our experiment results show that the hybrid methods outperform the pure content-based and the collaborative filtering and categorization methods. The proposed methods not only proactively notify knowledge workers about relevant documents held by their peers, but also facilitate push-mode knowledge sharing.

14.
Text categorization is the task of automatically assigning unlabeled text documents to some predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is often used to reduce the dimensionality. In this paper, we evaluate and compare the feature selection policies used in text categorization by employing some of the popular feature selection metrics. For the experiments, we use datasets that vary in size, complexity, and skewness. We use a support vector machine as the classifier and tf-idf weighting for weighting the terms. In addition to the evaluation of the policies, we propose new feature selection metrics that show high success rates, especially with a low number of keywords. These metrics are two-sided local metrics and are based on the difference between the distributions of a term in the documents belonging to a class and in the documents not belonging to that class. Moreover, we propose a keyword selection framework called adaptive keyword selection. It is based on selecting a different number of terms for each class, and it shows significant improvement on skewed datasets that have a limited number of training instances for some of the classes.
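One plausible instance of such a distribution-difference metric (not the paper's exact formulas, which are not given in the abstract) is the absolute gap between a term's occurrence rate inside and outside a class:

```python
def term_class_score(docs, term, cls):
    """|P(term | cls) - P(term | not cls)|: a simple two-sided local metric.

    docs: list of (token_set, label) pairs. Illustrative only; the paper's
    actual metrics may differ in form.
    """
    in_cls = [toks for toks, lab in docs if lab == cls]
    out_cls = [toks for toks, lab in docs if lab != cls]
    p_in = sum(term in toks for toks in in_cls) / len(in_cls)
    p_out = sum(term in toks for toks in out_cls) / len(out_cls)
    return abs(p_in - p_out)
```

Terms scoring near 1 for a class are strong local indicators (present in that class, absent elsewhere, or vice versa); terms scoring near 0 carry little class information.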

15.
Text categorization continues to be one of the most researched NLP problems due to the ever-increasing amounts of electronic documents and digital libraries. In this paper, we present a new text categorization method that combines the distributional clustering of words and a learning logic technique, called Lsquare, for constructing text classifiers. The high dimensionality of text in a document has not been fruitful for the task of categorization, for which reason feature clustering has proven to be an ideal alternative to feature selection for reducing the dimensionality. We therefore use a distributional clustering method (IB) to generate an efficient representation of documents and apply Lsquare for training text classifiers. The method was extensively tested and evaluated. The proposed method achieves higher or comparable classification accuracy and F1 results compared with SVM under identical experimental settings, with a small number of training documents, on three benchmark datasets: WebKB, 20Newsgroups, and Reuters-21578. The results show that the method is a good choice for applications with a limited amount of labeled training data. We also demonstrate the effect of changing the training size on the classification performance of the learners.

16.
In the present article we introduce and validate an approach for single-label multi-class document categorization based on text content features. The introduced approach uses the statistical property of Principal Component Analysis, which minimizes the reconstruction error of the training documents used to compute a low-rank category transformation matrix. Such matrix transforms the original set of training documents from a given category to a new low-rank space and then optimally reconstructs them to the original space with a minimum reconstruction error. The proposed method, called Minimizer of the Reconstruction Error (mRE) classifier, uses this property, and extends and applies it to new unseen test documents. Several experiments on four multi-class datasets for text categorization are conducted in order to test the stable and generally better performance of the proposed approach in comparison with other popular classification methods.
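The core idea, classify a document by which class's low-rank PCA subspace reconstructs it with the smallest error, can be sketched as follows; the toy 2-D data stand in for document feature vectors and are purely illustrative:

```python
import numpy as np

def fit_class_subspaces(X_by_class, rank=1):
    """Per class: mean vector + top-`rank` principal directions via SVD."""
    models = {}
    for label, X in X_by_class.items():
        X = np.asarray(X, dtype=float)
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models[label] = (mu, Vt[:rank])
    return models

def mre_predict(x, models):
    """Assign x to the class whose subspace reconstructs it best."""
    x = np.asarray(x, dtype=float)
    errs = {}
    for label, (mu, V) in models.items():
        z = (x - mu) @ V.T          # project onto the low-rank space
        recon = mu + z @ V          # reconstruct back in the original space
        errs[label] = np.linalg.norm(x - recon)
    return min(errs, key=errs.get)

# Toy data: class "A" varies along the x-axis, class "B" along the y-axis.
models = fit_class_subspaces({"A": [[0, 0], [2, 0], [4, 0]],
                              "B": [[0, 0], [0, 2], [0, 4]]}, rank=1)
```

A test point near the x-axis is reconstructed almost perfectly by class A's subspace and poorly by class B's, so it is assigned to A.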

17.
Document Classification Based on Active Learning   (cited 3 times: 0 self-citations, 3 external)
In the field of text categorization, the number of unlabeled documents is generally much greater than that of labeled documents. Text categorization is a classification problem in a high-dimensional vector space, and more training samples will generally improve the accuracy of a text classifier. How to add unlabeled documents to the training set in order to expand it is therefore a valuable problem. This paper introduces the theory of active learning and applies it to text categorization, exploring how unlabeled documents can be used to improve classifier accuracy. The expectation is that such technology will improve a text classifier's accuracy by adopting a relatively large number of unlabeled document samples. We propose an active-learning-based algorithm for text categorization; experiments on the Reuters news corpus show that, when enough training samples are available, the algorithm effectively improves the classifier's accuracy by adopting unlabeled document samples.
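The selection step at the heart of active learning can be sketched as uncertainty sampling: query labels for the unlabeled documents the current classifier is least sure about. The probability scores below are a toy stand-in for a trained classifier, not the paper's algorithm:

```python
def select_most_uncertain(unlabeled, prob_positive, budget=2):
    """Pick the `budget` docs whose predicted P(positive) is closest to 0.5,
    i.e. the documents the current binary classifier is least sure about."""
    return sorted(unlabeled, key=lambda d: abs(prob_positive(d) - 0.5))[:budget]

# Toy stand-in for a trained classifier's probability estimates per document.
scores = {"d1": 0.95, "d2": 0.52, "d3": 0.10, "d4": 0.45}
picked = select_most_uncertain(list(scores), scores.get, budget=2)
```

The picked documents would be labeled (by an oracle, or via automatic labeling as the paper explores) and added to the training set, then the classifier retrained, repeating until the labeling budget is exhausted.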

18.
Information access methods must be improved to overcome the information overload that most professionals face nowadays. Text classification tasks, like text categorization (TC), help users access the great amount of text they find on the Internet and in their organizations. TC is the classification of documents into a predefined set of categories. Most approaches to automatic TC are based on the utilization of a training collection, which is a set of manually classified documents. Other emerging linguistic resources, like lexical databases, can also be used for classification tasks. This article describes an approach to TC based on the integration of a training collection (Reuters-21578) and a lexical database (WordNet 1.6) as knowledge sources. Lexical databases accumulate information on the lexical items of one or several languages. This information must be filtered in order to make effective use of it in our model of TC. This filtering process is a word sense disambiguation (WSD) task. WSD is the identification of the sense of words in context. This task is an intermediate process in many natural language processing tasks, like machine translation or multilingual information retrieval. We present the utilization of WSD as an aid for TC. Our approach to WSD is also based on the integration of two linguistic resources: a training collection (SemCor and Reuters-21578) and a lexical database (WordNet 1.6). We have developed a series of experiments showing that TC and WSD based on the integration of linguistic resources are very effective, and that WSD is necessary to effectively integrate linguistic resources in TC.

19.
Many existing text classification methods rarely control for confounding variables, and their classification accuracy has low robustness to the data distribution. To address this, a text classification method based on covariate adjustment is proposed. First, the confounding factors (variables) in text classification are assumed to be observable in the training phase but not in the test phase; then, conditioning on the confounders from the training phase, the sum over the confounders is computed in the prediction phase; finally, based on Pearl's covariate adjustment, the effect of text features and the class variable on classifier accuracy is observed while controlling for the confounders. The performance of the proposed method is validated on a Weibo dataset and the IMDB dataset; experimental results show that, compared with other methods, the proposed method achieves higher classification accuracy when handling confounded relationships and is robust to confounding variables.

20.
A Study of Constraints on Feature Selection in Text Classification   (cited 7 times: 0 self-citations, 7 external)
Feature selection plays an important role in text classification. Feature selection methods such as document frequency (DF), information gain (IG), and mutual information (MI) are widely used in text classification. Existing experimental results show that IG is one of the most effective feature selection algorithms, DF is slightly worse, and MI performs relatively poorly. In text classification, the performance of feature selection functions has so far been evaluated only through experiments, i.e., purely empirically. This paper therefore proposes a method for qualitatively evaluating the performance of feature selection functions and defines a set of basic constraints related to classification information. Analysis and experiments show that IG fully satisfies these constraints, DF does not fully satisfy them, and MI conflicts with them; that is, a feature selection algorithm's empirical performance is closely related to whether it satisfies these constraints.
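For concreteness, the standard information gain score for a single binary term feature (a textbook formulation, sketched here for illustration) can be computed as the drop in class entropy once the term's presence or absence is known:

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(docs):
    """IG of a term: H(C) - P(t)H(C|t) - P(not t)H(C|not t).

    docs: list of (has_term: bool, class_label) pairs.
    """
    n = len(docs)
    def class_dist(subset):
        labels = [lab for _, lab in subset]
        return [labels.count(l) / len(labels) for l in set(labels)]
    h_c = entropy(class_dist(docs))
    with_t = [d for d in docs if d[0]]
    without = [d for d in docs if not d[0]]
    h_cond = sum(len(part) / n * entropy(class_dist(part))
                 for part in (with_t, without) if part)
    return h_c - h_cond
```

A term that perfectly separates two balanced classes scores IG = 1 bit; a term independent of the class scores 0.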


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号