1.
Observing that mainstream text-classification models are sensitive only to term frequency and attend only to mid- and high-frequency terms, this article designs and implements a dictionary-free automatic feature-word extraction method based on multi-step filtering of Chinese-character combination patterns. Experimental comparison with traditional dictionary-based word segmentation shows that the method's recognition rate for mid- and high-frequency terms approaches that of dictionary segmentation while its segmentation speed is far higher, meeting the need for fast automatic feature-word extraction from large-scale open-domain text.
2.
Feature extraction is the central difficulty of text classification with machine-learning methods. This paper compares several of the most widely used feature-extraction methods, proposes an improved mutual-information feature-extraction method, and compares all of them in a purpose-built experimental system, finding the improved method effective and practical.
3.
Efficient text classification is a current research hotspot. This paper first introduces the concept of text classification and the related techniques in its pipeline, including word segmentation, feature extraction, and classification methods, together with the current state of research; it then analyzes the challenges facing existing text-classification techniques, and finally summarizes development trends.
4.
To address two problems in domain-concept term extraction, namely that feature terms come from manually collected domain corpora and that extraction accuracy is low, an automatic feature-term extraction method is proposed. First, a third-party interface retrieves a large domain corpus from a literature repository, which is then analyzed by paragraph. In the preprocessing stage an improved dictionary-free segmentation method performs a second segmentation pass, and feature terms are extracted by combining TF-IDF, the chi-square test, information gain, and lexical position weighting. Experimental results show that the method automates feature-term extraction with high accuracy.
5.
Drawing on the characteristics of the Chinese language and building on existing feature-extraction methods, this paper proposes selecting feature words by text-set density, bounds the number and choice of feature terms, and finds the smallest feature-word set that loses no effective information from the text. The median value within that set is used as one component of term-weight computation, yielding a more reasonable weighting scheme. Finally, a new criterion for judging weighting quality, a meta-scoring method, is used to verify the correctness and effectiveness of the proposed method experimentally.
6.
After analyzing several existing Chinese word-segmentation methods, a keyword-extraction algorithm is proposed. Centered on a term-weighting formula, a genetic algorithm trains and optimizes the formula's parameters, yielding a parameter set suited to Chinese text and improving the precision of sub-topic segmentation. Experimental analysis shows that the algorithm segments named entities in the extraction system effectively, completes keyword extraction accurately, and generalizes reasonably well.
7.
Text information processing has matured into a discipline with ever-broader applications. Text classification and clustering arose from the needs of information retrieval and querying and are important research topics in natural language processing. Faced with rapidly expanding volumes of text, these techniques let users organize information efficiently and locate and route it accurately, improving query and retrieval efficiency. This paper studies text classification and clustering, the most important research directions in text information processing, analyzes the importance of feature extraction in both, argues why feature extraction is necessary, and finally describes the feature-extraction methods used for text classification and for text clustering respectively.
8.
This paper compares how feature-selection methods affect classification performance in Chinese text classification, examining four methods: document frequency (DF), information gain (IG), mutual information (MI), and the χ² statistic (CHI). Support vector machine (SVM) and KNN classifiers are used to assess the effectiveness of each method. The experimental results show that methods which perform well in English text classification (IG, MI, and CHI) are not, without modification, suited to Chinese text classification. The paper analyzes the causes of this difference theoretically and discusses possible corrections, including very large training corpora and combined feature-selection methods, and finally verifies the effectiveness of the combined approach experimentally.
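The CHI criterion compared above scores each term against each class from a 2×2 contingency table of document counts. A minimal sketch of that computation (the function names and the tiny toy corpus are illustrative, not from the paper):

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a term/class 2x2 contingency table.
    n11: in-class docs containing the term, n10: out-of-class docs containing it,
    n01: in-class docs without it, n00: out-of-class docs without it."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

def select_by_chi2(docs, labels, target, k):
    """Rank vocabulary terms by chi-square w.r.t. the target class; keep top k.
    docs: list of token lists; labels: parallel list of class labels."""
    vocab = {t for d in docs for t in d}
    scores = {}
    for term in vocab:
        n11 = sum(1 for d, y in zip(docs, labels) if y == target and term in d)
        n10 = sum(1 for d, y in zip(docs, labels) if y != target and term in d)
        n01 = sum(1 for d, y in zip(docs, labels) if y == target and term not in d)
        n00 = sum(1 for d, y in zip(docs, labels) if y != target and term not in d)
        scores[term] = chi_square(n11, n10, n01, n00)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Terms that appear only in one class get the highest scores, which is exactly the bias toward rare terms that makes unmodified CHI problematic for Chinese corpora with sparse segmentation.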
9.
Feature extraction is the central difficulty of Chinese text classification. This paper compares the effect of different feature units on classification performance and combines character features with word features to represent text better. Comparing the classification accuracy of different feature units in a purpose-built experimental system shows that mixed features yield better classification results.
10.
Automatic text classification by computer information processing is a shared concern of natural language understanding. This paper proposes a dictionary-free, bigram-based method for extracting feature words from Chinese text, and refines the extracted feature words using the concept of mutual information, improving extraction accuracy. In addition, a support vector machine algorithm, grounded in statistical learning theory and the structural risk minimization principle, is used to classify a set of texts, verifying the effectiveness and feasibility of the feature words produced by the proposed algorithm.
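The dictionary-free idea above can be illustrated with character bigrams scored by pointwise mutual information: adjacent characters that co-occur far more often than chance are candidate feature words. This is a simplified sketch of the general technique, not the paper's exact filtering procedure:

```python
import math
from collections import Counter

def bigram_pmi(text, min_count=1):
    """Score adjacent character bigrams by pointwise mutual information.
    High-PMI bigrams are candidate feature words, no dictionary required."""
    chars = Counter(text)
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total_c = sum(chars.values())
    total_b = sum(bigrams.values())
    scores = {}
    for bg, f in bigrams.items():
        if f < min_count:
            continue
        p_xy = f / total_b                    # joint probability of the pair
        p_x = chars[bg[0]] / total_c          # marginal of first character
        p_y = chars[bg[1]] / total_c          # marginal of second character
        scores[bg] = math.log(p_xy / (p_x * p_y), 2)
    return scores
```

Raising `min_count` discards rare pairs whose probability estimates are unreliable, which is the usual first filter before any MI-based refinement.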
11.
Artificial Intelligence Review - Feature Selection (FS) methods alleviate key problems in classification procedures as they are used to improve classification accuracy, reduce data dimensionality,...
12.
This paper presents two algorithms for smoothing and feature extraction for fingerprint classification. Deutsch's (2) Thinning algorithm (rectangular array) is used for thinning the digitized fingerprint (binary version). A simple algorithm is also suggested for classifying the fingerprints. Experimental results obtained using such algorithms are presented.
13.
We address the problem of texture classification. Random walks are simulated for plane domains bounded by absorbing boundaries Γ, and the absorption distributions are estimated. Measurements derived from the above distributions are the features used for texture classification. Experiments using such a model have been performed and the results showed a rate of accuracy of 89.7% for a data set consisting of one hundred and twenty-eight textured images equally distributed among thirty-two classes of textures.
14.
Feature sub-set selection (FSS) is an important step for effective text classification (TC) systems. This paper presents an empirical comparison of seventeen traditional FSS metrics for TC tasks. The TC is restricted to support vector machine (SVM) classifier and only for Arabic articles. Evaluation used a corpus that consists of 7842 documents independently classified into ten categories. The experimental results are presented in terms of macro-averaging precision, macro-averaging recall and macro-averaging F1 measures. Results reveal that Chi-square and Fallout FSS metrics work best for Arabic TC tasks.
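The macro-averaged measures used in that evaluation weight every class equally regardless of its size, which matters for imbalanced corpora. A minimal sketch of how they are computed (the toy labels are illustrative only):

```python
def macro_scores(y_true, y_pred):
    """Macro-averaged precision, recall and F1: compute each score per class
    from that class's one-vs-rest counts, then average with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    ps, rs, fs = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec)
        rs.append(rec)
        fs.append(f1)
    n = len(classes)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```

A micro-averaged variant would instead pool the counts across classes before dividing, so large classes dominate; the macro form used here treats a rare class's errors as seriously as a common class's.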
15.
A novel framework for termset based feature extraction is proposed for binary text classification. The proposed approach is based on the encoding of the terms within a termset. The ternary codes ‘+1’ and ‘−1’ are used to represent the class that the term supports, whereas ‘0’ denotes no support to any of the classes. Four different encoding schemes are proposed where the term weights and the term occurrence probabilities in the positive and negative documents are used to define the ternary code of a given term. The ternary patterns are utilized to define novel features by splitting them into positive and negative codes where each code is treated as a different feature extractor. Use of the derived features individually and together with bag of words representation are both investigated. The histograms of the resultant features are also employed to study the improvements that can be achieved using a small number of additional features to augment bag of words representation. Experiments conducted on four benchmark datasets with different characteristics have shown that the proposed feature extraction framework provides significant improvements compared to the bag of words representation.
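The ternary coding at the heart of that framework can be sketched by assigning each term +1, −1, or 0 from its occurrence probabilities in positive versus negative documents. This is a simplified illustration of the general idea, assuming a probability-difference rule with a tolerance `margin`; the paper itself defines four distinct encoding schemes:

```python
def ternary_codes(pos_docs, neg_docs, margin=0.0):
    """Assign each term a ternary code: +1 if its document-occurrence
    probability is higher in positive documents, -1 if higher in negative,
    0 if the difference is within `margin` (supports neither class)."""
    vocab = {t for d in pos_docs + neg_docs for t in d}
    codes = {}
    for t in vocab:
        p_pos = sum(1 for d in pos_docs if t in d) / len(pos_docs)
        p_neg = sum(1 for d in neg_docs if t in d) / len(neg_docs)
        diff = p_pos - p_neg
        codes[t] = 0 if abs(diff) <= margin else (1 if diff > 0 else -1)
    return codes
```

Splitting a termset's pattern of such codes into its positive and negative parts then gives the two separate feature extractors described above.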
16.
In manipulating data such as in supervised learning, we often extract new features from the original features for the purpose of reducing the dimensions of feature space and achieving better performance. In this paper, we show how standard algorithms for independent component analysis (ICA) can be appended with binary class labels to produce a number of features that do not carry information about the class labels-these features will be discarded-and a number of features that do. We also provide a local stability analysis of the proposed algorithm. The advantage is that general ICA algorithms become available to a task of feature extraction for classification problems by maximizing the joint mutual information between class labels and new features, although only for two-class problems. Using the new features, we can greatly reduce the dimension of feature space without degrading the performance of classifying systems.
17.
In recent years, big-data-centered artificial intelligence has developed vigorously, and natural language processing has become one of the most prominent frontier research areas of the AI era. In short-text classification, however, different feature-extraction methods integrated with different machine-learning algorithms differ markedly in effectiveness. To address the low accuracy of short-text classification, this work combines different feature-extraction methods with different machine-learning algorithms and evaluates the combinations against preset metrics to find the optimal combined model, and thus the best classification result, for ultra-short text. The experimental results show that, among the four best combinations selected, the model using term frequency-inverse document frequency (TF-IDF) for feature extraction and logistic regression as the algorithm achieves the best results on a public dataset, with 92.13% precision and 90.12% recall, making it well suited to ultra-short-text classification scenarios.
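The feature-extraction half of that winning combination, TF-IDF, weights a term by its in-document frequency damped by how many documents contain it. A minimal dependency-free sketch of the standard formulation (the paper's exact variant and smoothing choices are not specified in the abstract):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights per document.
    docs: list of token lists. Returns one {term: weight} dict per document,
    with tf = term frequency / doc length and idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency per term
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (f / len(d)) * math.log(n / df[t]) for t, f in tf.items()})
    return out
```

Terms appearing in every document get weight zero, so the vectors fed to the downstream classifier (logistic regression in the best combination above) emphasize discriminative terms.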
18.
The class imbalance problem occurs when the distribution among classes is not balanced. This can be a problem that causes classifier models to bias toward classes with many training samples. The class imbalance problem is inherent in text classification. The abstract feature extraction method is a versatile term weighting scheme. It serves not only as a feature extractor to form a structural form from unorganized text data but also as a dimension reduction technique and classifier. In this study, we tackle the problem of class imbalance in abstract feature extraction. The proposed method utilizes relative imbalance ratio as a factor to elevate the representation of minority classes. Besides, we also integrate relevant term factors to boost the general accuracy. Experiments conducted with three different data sets, one of which is collected for this study, show that the original abstract feature extraction method indeed suffers from the class imbalance problem and the proposed methods demonstrate significant improvements in terms of f1-micro, f1-macro, and Matthew’s correlation coefficient. The experimental results also suggest that the proposed method is a competitive classifier and term weighting scheme when compared to the well-known classifiers (KNN, SVM, and Nearest Centroid) and term weighting schemes (TF-IDF, TF-ICF, TF-ICSDF, TF-RF, TF-PROB, TF-IGM, and TF-MONO). 相似文献
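One common way to realize the "relative imbalance ratio" idea above is to scale each class by the size of the largest class relative to its own, so minority classes receive larger factors. The abstract does not give the exact definition, so the formula below is an illustrative assumption, not the paper's scheme:

```python
from collections import Counter

def imbalance_weights(labels):
    """One plausible relative-imbalance factor per class:
    (size of largest class) / (size of this class), so the majority
    class gets 1.0 and minority classes get proportionally larger weights."""
    counts = Counter(labels)
    biggest = max(counts.values())
    return {c: biggest / n for c, n in counts.items()}
```

Multiplying a class's term weights by such a factor elevates minority-class representation without touching the majority class.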
19.
Knowledge and Information Systems - Extracting shape-related features from a given query subsequence is a crucial preprocessing step for chart pattern matching in rule-based, template-based and...