Similar Documents
 19 similar documents found (search time: 46 ms)
1.
An Automatic Chinese Feature-Word Extraction Method for Text Classification   Cited by: 1 (self-citations: 0, by others: 1)
Mainstream text classification models are sensitive only to term frequency and attend only to mid- and high-frequency terms. Exploiting this, the paper designs and implements a dictionary-free automatic feature-word extraction method based on multi-step filtering of Chinese character combination patterns, and compares it experimentally with traditional dictionary-based word segmentation. The results show that the method's recognition rate for mid- and high-frequency terms approaches that of dictionary-based segmentation, while its segmentation speed is far higher, meeting the need for fast automatic feature-word extraction from large-scale open-domain texts.

2.
Feature Extraction in Text Classification   Cited by: 52 (self-citations: 3, by others: 52)
Feature extraction is a key and difficult step in machine-learning-based text classification. This paper compares several of the most commonly used feature extraction methods, proposes an improved mutual-information feature extraction method, and evaluates all of these methods in a purpose-built experimental system, finding the improved method to be effective and feasible.
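The mutual-information criterion compared in the abstract above can be sketched from its standard definition over a 2×2 term/class contingency table. This is the textbook MI score, not the paper's improved variant; the function name and the count-based formulation are illustrative assumptions.

```python
import math

def mutual_information(n11, n10, n01, n00):
    """Pointwise mutual information between a term t and a class c.

    n11: docs in c containing t;   n10: docs not in c containing t;
    n01: docs in c without t;      n00: docs not in c without t.
    """
    n = n11 + n10 + n01 + n00
    p_t = (n11 + n10) / n   # P(t)
    p_c = (n11 + n01) / n   # P(c)
    p_tc = n11 / n          # P(t, c)
    if p_tc == 0:
        return 0.0
    # MI is zero when t and c are independent, positive when t favors c
    return math.log2(p_tc / (p_t * p_c))
```

A term distributed independently of the class scores 0, while a term concentrated in class c scores positive, which is why MI can rank candidate feature words.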

3.
Efficient text classification is a current research hotspot. This paper first introduces the concept of text classification and the related techniques in its pipeline, including word segmentation, feature extraction, and classification methods, together with the state of research; it then analyzes the challenges facing existing text classification techniques, and finally summarizes trends in the field.

4.
To address two problems in domain concept and term extraction, namely that feature terms come from manually collected domain text corpora and that extraction accuracy is low, this paper proposes an automatic feature-term extraction method. It first uses a third-party interface to obtain a large domain corpus from a literature repository and performs paragraph analysis; in the preprocessing stage it proposes an improved dictionary-free segmentation method for a second segmentation pass, and combines TF-IDF, the chi-square test, information gain, and term-position weighting for feature extraction. Experimental results show that the method achieves automatic feature-term extraction with high accuracy.
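Of the weighting schemes named above, TF-IDF is the most common; a minimal sketch of the standard formulation (not the paper's combined pipeline, and without smoothing) might look like:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: tf-idf weight} dict per doc."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        # normalized term frequency times inverse document frequency
        weighted.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weighted
```

A term appearing in every document gets idf = log(1) = 0, so corpus-wide terms are suppressed and document-specific terms dominate the feature vector.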

5.
A Feature-Word Selection and Weighting Method Based on Text-Set Density   Cited by: 3 (self-citations: 0, by others: 3)
Based on the characteristics of the Chinese language and building on existing feature extraction methods, this paper proposes feature-word selection based on text-set density, bounds the number and choice of feature terms, and finds the smallest feature-word set that loses no effective information from the text. The intermediate value from this process is used as part of the term-weight computation, yielding a more reasonable weighting scheme. Finally, a new criterion for judging weight quality, a meta-scoring method, is used to experimentally verify the correctness and effectiveness of the proposed approach.

6.
A Keyword Extraction Algorithm Based on Automatic Text Classification   Cited by: 6 (self-citations: 2, by others: 4)   Free full-text PDF available
Zhang Hong (张虹), Computer Engineering (《计算机工程》), 2009, 35(12): 145-147
This paper analyzes several existing Chinese word segmentation methods and proposes a keyword extraction algorithm. Centered on a term-weight formula, it uses a genetic algorithm to train and optimize the formula's parameters, obtaining a parameter set suited to Chinese text and improving the precision of sub-topic segmentation. Experimental analysis shows that the algorithm effectively segments out named entities, accurately completes keyword extraction, and has a degree of generality.

7.
Research on Feature Extraction Methods for Text Classification and Text Clustering   Cited by: 2 (self-citations: 0, by others: 2)
Text information processing has become a maturing discipline with increasingly broad applications. Text classification and clustering are important research topics in natural language processing that arose from the needs of information retrieval and querying. Facing rapidly expanding volumes of text, classification and clustering techniques allow this information to be organized efficiently, so that it can be accurately located and routed, improving the efficiency of user queries and retrieval. This paper studies text classification and clustering, analyzes the importance of feature extraction in both tasks and why texts must undergo feature extraction, and finally describes feature extraction methods for text classification and for text clustering respectively.

8.
A Comparative Study of Feature Extraction Methods in Chinese Text Classification   Cited by: 99 (self-citations: 9, by others: 99)
This paper compares the effect of feature selection methods on classification performance in Chinese text classification, examining four methods: document frequency (DF), information gain (IG), mutual information (MI), and the χ² statistic (CHI). Two classifiers, support vector machines (SVM) and KNN, are used to assess the effectiveness of the different methods. Experimental results show that feature extraction methods that perform well in English text classification (IG, MI, and CHI) are not suitable for Chinese text classification without modification. The paper analyzes the causes of this difference theoretically and discusses possible corrections, including very large training corpora and combined feature extraction methods, and finally verifies the effectiveness of the combined approach experimentally.
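The CHI (χ²) statistic examined above is computed from the same 2×2 term/class contingency table as DF and MI. A minimal sketch of the standard closed-form formula (illustrative; not this paper's corrected variant):

```python
def chi_square(n11, n10, n01, n00):
    """χ² statistic between a term and a class from a 2x2 contingency table.

    n11: docs in class with term;    n10: docs outside class with term;
    n01: docs in class without term; n00: docs outside class without term.
    """
    n = n11 + n10 + n01 + n00
    # closed form of chi-square for a 2x2 table
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0
```

As with MI, an independent term scores 0; unlike MI, χ² grows with the corpus size n, which is one reason the two rankings can disagree on rare terms.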

9.
Feature extraction is a key and difficult step in Chinese text classification. This paper compares the effect of different feature units on classification performance and combines character features with word features to better represent texts. Comparing the classification accuracy of different feature units in a purpose-built experimental system, it finds that classification with mixed features achieves better results.

10.
Research on Bigram-Based Feature-Word Extraction and Automatic Classification   Cited by: 1 (self-citations: 1, by others: 1)
Wang Xiaomin (王笑旻), Computer Engineering and Applications (《计算机工程与应用》), 2005, 41(22): 177-179, 210
Automatic text classification by computer information processing is a shared concern of natural language understanding research. This paper proposes a dictionary-free Bigram-based method for extracting feature words from Chinese text, and uses the concept of mutual information to post-process the extracted words, improving extraction accuracy. In addition, texts are classified with a support vector machine algorithm, grounded in statistical learning theory and the structural risk minimization principle, verifying the effectiveness and feasibility of the feature words produced by the proposed method.
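A dictionary-free character-bigram feature extractor in the spirit described above can be sketched as follows. This is illustrative only: the frequency threshold is an assumption, and the paper's mutual-information post-processing step is omitted.

```python
from collections import Counter

def extract_bigram_features(texts, min_count=2):
    """Collect character bigrams occurring at least min_count times.

    No dictionary is needed: every adjacent character pair is a candidate,
    and frequency alone filters out noise pairs.
    """
    counts = Counter()
    for text in texts:
        counts.update(text[i:i + 2] for i in range(len(text) - 1))
    return {bigram: c for bigram, c in counts.items() if c >= min_count}
```

On Chinese text the same sliding window applies directly to characters, which is what makes the approach segmentation-free.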

11.
Artificial Intelligence Review - Feature Selection (FS) methods alleviate key problems in classification procedures as they are used to improve classification accuracy, reduce data dimensionality,...

12.
Feature Extraction and Classification of Chinese Paintings   Cited by: 1 (self-citations: 0, by others: 1)   Free full-text PDF available
Chinese painting is a treasure of traditional Chinese culture and art, and semantics-based retrieval of painting images is necessary. The semantics of a painting are mainly reflected in its color and shape. Based on the characteristics of Chinese paintings, this paper studies color and shape feature extraction algorithms, fuses image color with object shape features to construct a new feature vector, analyzes the correlation between the multi-dimensional low-level features of painting images and their high-level semantics, and uses a support vector machine for semantic classification. Experimental results show that the extracted feature vectors are stable and achieve high classification accuracy.
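One common way to capture the color component mentioned above is a coarse quantized color histogram. The sketch below is a generic illustration, not the paper's specific color feature; the bin count and RGB quantization are assumptions.

```python
def color_histogram(pixels, bins=4):
    """Quantize RGB pixels into a normalized bins^3-dimensional histogram.

    pixels: iterable of (r, g, b) tuples with channel values in 0..255.
    """
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        # map each 8-bit channel to one of `bins` levels, then flatten
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]
```

The resulting fixed-length vector can be concatenated with shape descriptors and fed to an SVM, matching the fused-feature pattern the abstract describes.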

13.
Feature extraction is an important step in text plagiarism detection; the number and quality of text features strongly affect detection accuracy. To address the shortcomings of existing methods, this paper proposes a plagiarism detection algorithm based on dependency syntax. Building on dependency parsing, the algorithm analyzes the relations between words in a sentence and merges short words to build a syntactic frame, from which text features are extracted. Merging short words turns meaningless fragments into meaningful entities that represent text features, making the features more comprehensive. Experimental results show that the feature extraction algorithm selects feature sets accurately, alleviates the problem of overly numerous text features, and improves detection accuracy.

14.
Automatic keyword extraction is an important research direction in text mining, natural language processing and information retrieval. Keyword extraction enables us to represent text documents in a condensed way. The compact representation of documents can be helpful in several applications, such as automatic indexing, automatic summarization, automatic classification, clustering and filtering. For instance, text classification is a domain with a high-dimensional feature space challenge. Hence, extracting the most important/relevant words about the content of the document and using these keywords as the features can be extremely useful. In this regard, this study examines the predictive performance of five statistical keyword extraction methods (most frequent measure based keyword extraction, term frequency-inverse sentence frequency based keyword extraction, co-occurrence statistical information based keyword extraction, eccentricity-based keyword extraction and the TextRank algorithm) on classification algorithms and ensemble methods for scientific text document classification (categorization). In the study, a comprehensive comparison of base learning algorithms (Naïve Bayes, support vector machines, logistic regression and Random Forest) with five widely utilized ensemble methods (AdaBoost, Bagging, Dagging, Random Subspace and Majority Voting) is conducted. To the best of our knowledge, this is the first empirical analysis to evaluate the effectiveness of statistical keyword extraction methods in conjunction with ensemble learning algorithms. The classification schemes are compared in terms of classification accuracy, F-measure and area under curve values. To validate the empirical analysis, a two-way ANOVA test is employed. The experimental analysis indicates that a Bagging ensemble of Random Forest with the most-frequent-based keyword extraction method yields promising results for text classification.
For the ACM document collection, the highest average predictive performance (93.80%) is obtained with the most-frequent-based keyword extraction method combined with a Bagging ensemble of the Random Forest algorithm. In general, Bagging and Random Subspace ensembles of Random Forest yield promising results. The empirical analysis indicates that keyword-based representation of text documents in conjunction with ensemble learning can enhance the predictive performance and scalability of text classification schemes, which is of practical importance in the application fields of text classification.
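The best-performing representation in the study above, most-frequent-measure keyword extraction, reduces in its simplest form to ranking non-stopword tokens by raw count. This sketch is illustrative and not the study's exact configuration; the stopword handling is an assumption.

```python
from collections import Counter

def top_keywords(tokens, k=5, stopwords=frozenset()):
    """Most-frequent-term keyword extraction: rank non-stopword tokens by count."""
    counts = Counter(t for t in tokens if t not in stopwords)
    # most_common sorts by count (descending); ties keep first-seen order
    return [term for term, _ in counts.most_common(k)]
```

The k extracted keywords then replace the full bag-of-words vocabulary as the feature set handed to the classifier or ensemble.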

15.
We address the problem of texture classification. Random walks are simulated for plane domains A bounded by absorbing boundaries Γ, and the absorption distributions are estimated. Measurements derived from the above distributions are the features used for texture classification. Experiments using such a model have been performed and the results showed a rate of accuracy of 89.7% for a data set consisting of one hundred and twenty-eight textured images equally distributed among thirty-two classes of textures.

16.
This paper presents two algorithms for smoothing and feature extraction for fingerprint classification. Deutsch's(2) Thinning algorithm (rectangular array) is used for thinning the digitized fingerprint (binary version). A simple algorithm is also suggested for classifying the fingerprints. Experimental results obtained using such algorithms are presented.

17.
Feature sub-set selection (FSS) is an important step for effective text classification (TC) systems. This paper presents an empirical comparison of seventeen traditional FSS metrics for TC tasks. The TC is restricted to support vector machine (SVM) classifier and only for Arabic articles. Evaluation used a corpus that consists of 7842 documents independently classified into ten categories. The experimental results are presented in terms of macro-averaging precision, macro-averaging recall and macro-averaging F1 measures. Results reveal that Chi-square and Fallout FSS metrics work best for Arabic TC tasks.

18.
The curse of high dimensionality in text classification is a worrisome problem that requires efficient and optimal feature selection (FS) methods to improve classification accuracy and reduce learning time. Existing filter-based FS methods evaluate features independently of other related ones, which can then lead to selecting a large number of redundant features, especially in high-dimensional datasets, resulting in more learning time and less classification performance, whereas information theory-based methods aim to maximize feature dependency with the class variable and minimize its redundancy for all selected features, which gradually becomes impractical when increasing the feature space. To overcome the time complexity issue of information theory-based methods while taking into account the redundancy issue, in this article, we propose a new feature selection method for text classification termed correlation-based redundancy removal, which aims to minimize the redundancy using subsets of features having close mutual information scores without sequentially seeking already selected features. The idea is that it is not important to assess the redundancy of a dominant feature having high classification information with another irrelevant feature having low classification information and vice-versa since they are implicitly weakly correlated. Our method, tested on seven datasets using both traditional classifiers (Naive Bayes and support vector machines) and deep learning models (long short-term memory and convolutional neural networks), demonstrated strong performance by reducing redundancy and improving classification compared to ten competitive metrics.

19.
A novel framework for termset based feature extraction is proposed for binary text classification. The proposed approach is based on the encoding of the terms within a termset. The ternary codes ‘+1’ and ‘−1’ are used to represent the class that the term supports, whereas ‘0’ denotes no support to any of the classes. Four different encoding schemes are proposed where the term weights and the term occurrence probabilities in the positive and negative documents are used to define the ternary code of a given term. The ternary patterns are utilized to define novel features by splitting them into positive and negative codes where each code is treated as a different feature extractor. Use of the derived features individually and together with bag of words representation are both investigated. The histograms of the resultant features are also employed to study the improvements that can be achieved using a small number of additional features to augment bag of words representation. Experiments conducted on four benchmark datasets with different characteristics have shown that the proposed feature extraction framework provides significant improvements compared to the bag of words representation.
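The ternary term encoding described above can be illustrated with a toy rule over a term's occurrence probabilities in positive and negative documents. The threshold and decision rule here are assumptions for illustration; the paper proposes four specific encoding schemes, none of which is reproduced exactly.

```python
def ternary_code(p_pos, p_neg, threshold=0.1):
    """Toy ternary encoding of a term for binary text classification.

    +1: the term supports the positive class;
    -1: the term supports the negative class;
     0: no clear support for either class.
    p_pos / p_neg: term occurrence probabilities in positive / negative docs.
    """
    if p_pos - p_neg > threshold:
        return +1
    if p_neg - p_pos > threshold:
        return -1
    return 0
```

Concatenating these codes over a termset yields the ternary pattern that the framework then splits into positive and negative feature extractors.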


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号